diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Step-by-Step Guide to OBS Studio Download for Windows 7 64 Bit and Installation.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Step-by-Step Guide to OBS Studio Download for Windows 7 64 Bit and Installation.md deleted file mode 100644 index 5041a5064608acb41ee98f0b890767f339cc5564..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Step-by-Step Guide to OBS Studio Download for Windows 7 64 Bit and Installation.md +++ /dev/null @@ -1,34 +0,0 @@ -
-

How to Download OBS Studio for Windows 7 64 Bit and Use It for Streaming and Recording

-

OBS Studio is free and open-source software that allows you to stream and record your video and audio content. OBS Studio stands for Open Broadcaster Software Studio, and it is one of the most popular tools for live streaming and video recording. OBS Studio supports various platforms, such as Windows, macOS, and Linux. OBS Studio also supports various streaming services, such as Twitch, YouTube, Facebook, and more.

-

obs studio download for windows 7 64 bit


Download: https://byltly.com/2uKxUo



-

If you want to use OBS Studio for your streaming and recording needs, you need to download it from the official website and install it on your computer. In this article, we will show you how to do that step by step for Windows 7 64 bit.

- -

How to Download OBS Studio for Windows 7 64 Bit

-

Follow these steps to download and install OBS Studio for Windows 7 64 bit:

-
    -
  1. Go to the official website of OBS Studio. The URL is https://obsproject.com/.
  2. -
  3. On the website, you will see a button that says "Download Installer". Click on it to start the download process.
  4. -
  5. You will see a file named OBS-Studio-x.x.x-Full-Installer-x64.exe, where x.x.x is the version number. This is the installer file for OBS Studio. Save it to your preferred location on your computer.
  6. -
  7. Once the download is complete, double-click on the installer file to launch it. Follow the instructions on the screen to complete the installation process. You can choose the default settings or customize them according to your preferences.
  8. -
  9. After the installation is finished, you will have OBS Studio installed on your computer. You can verify this by opening the Start menu and looking for OBS Studio in the list of programs.
  10. -
-

Congratulations! You have successfully downloaded and installed OBS Studio for Windows 7 64 bit.

-

- -

How to Use OBS Studio for Streaming and Recording

-

Now that you have OBS Studio installed on your computer, you can start using it for your streaming and recording purposes. Here are some basic steps to get you started:

-
    -
  1. Launch OBS Studio by clicking on its icon in the Start menu or on your desktop.
  2. -
  3. You will see the main window of OBS Studio with four sections: Scenes, Sources, Mixer, and Controls. Scenes are collections of sources that you want to show on your stream or recording. Sources are the elements that you want to capture, such as your webcam, microphone, game window, browser window, etc. Mixer is where you can adjust the audio levels of your sources. Controls are where you can start and stop your stream or recording, as well as access other settings and options.
  4. -
  5. To add a scene, click on the "+" button in the Scenes section. Give your scene a name and click OK.
  6. -
  7. To add a source, click on the "+" button in the Sources section. Choose the type of source that you want to add from the list of options. For example, if you want to capture your webcam, choose Video Capture Device. Give your source a name and click OK.
  8. -
  9. You will see a window with various settings for your source. Adjust them according to your needs and click OK.
  10. -
  11. You can resize and reposition your source by dragging its edges or corners in the preview window. You can also right-click on your source and choose Transform to access more options for cropping, rotating, flipping, etc.
  12. -
  13. You can add more scenes and sources as needed by repeating steps 3 to 6.
  14. -
  15. To start streaming, click on the Settings button in the Controls section. Go to the Stream tab and choose the service that you want to stream to from the drop-down menu. Enter your stream key or log in with your account credentials. Click Apply and OK.
  16. -
  17. To start recording, click on the Settings button in the Controls section. Go to the Output tab and choose the mode that you want to use: Simple or Advanced. Adjust the settings for video quality, audio quality, file format, etc. Click Apply and OK.
  18. -
  19. When you are ready to go live or record, click on the Start Streaming or Start Recording button in the Controls section.
  20. -
  21. When you are done with your stream or recording, click on the Stop Streaming or Stop Recording button in the Controls section.

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Activation Code Airdroid Premium Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Activation Code Airdroid Premium Crack.md deleted file mode 100644 index ae739cbe397e5ea19210b6f17e1558a70d064aeb..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Activation Code Airdroid Premium Crack.md +++ /dev/null @@ -1,47 +0,0 @@ -
    -

    Activation Code Airdroid Premium Crack: What Is It and How to Use It?

    -

    If you are an Android user who wants to access and manage your device from your computer, you may have heard of Airdroid, a popular tool that lets you do just that. But what if you want to enjoy more features and benefits without paying for the premium subscription? You may have also heard of Airdroid Premium Crack, a modified version of Airdroid that claims to offer you all the premium features for free. But is it safe and legal to use? And are there any alternatives to it? In this article, we will answer these questions and more.

    -

    What Is Airdroid and What Are Its Features and Benefits?

    -

    Airdroid is a cross-platform tool that allows you to access and manage your Android devices wirelessly over the web. You can use it to transfer files, control mobile devices remotely, receive and reply to messages, mirror screen, and more. It works on Windows, Mac, Linux, Chrome, Firefox, Safari, Edge, Opera, and other browsers.

    -

    Activation Code Airdroid Premium Crack


    Download ––– https://byltly.com/2uKzVp



    -

    Airdroid offers multiple features to enhance productivity and convenience

    -

    Some of the main features of Airdroid are:

    - -

    Airdroid also supports multiple languages, dark mode, QR code login, SMS backup, call logs, etc.

    -

    Airdroid has some drawbacks and limitations that may affect user experience

    -

    Despite its many features and benefits, Airdroid is not perfect. Some of the drawbacks and limitations of Airdroid are:

    - -

    What Is Airdroid Premium and How to Get It?

    -

    Airdroid Premium is a paid subscription that unlocks more features and benefits for Airdroid users. With Airdroid Premium, you can enjoy:

    - -

    Airdroid Premium costs $1.99 per month or $19.99 per year

    -

    The price of Airdroid Premium is $1.99 per month or $19.99 per year. You can also get a 7-day free trial before you decide to purchase it. You can pay with PayPal, credit card, debit card, Google Play balance, etc.

    -

    -

    Airdroid Premium can be purchased from the official website or the app

    -

    To buy Airdroid Premium, you can either visit the official website or open the app on your device. Then you need to sign in with your Airdroid account or create one if you don't have one. Next, you need to choose the plan that suits you best and follow the instructions to complete the payment process. Once you have purchased Airdroid Premium, you can activate it on up to six devices using the same account.

    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Arma 3 1.14 Multiplayer Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Arma 3 1.14 Multiplayer Crack.md deleted file mode 100644 index 596bad65b70ab1867159f7c6c1289e09c6121fc5..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Arma 3 1.14 Multiplayer Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    arma 3 1.14 multiplayer crack


    Download ✔✔✔ https://imgfil.com/2uxZDl



    - -Arma 3 1.14 Crack Education Program are autonomy about 30 utilities and ... open occurrences much. bachata music free online and ST& this PowerPoint ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/DISQLite3 Pro 5.22.0 D4-XE10.2.md b/spaces/1gistliPinn/ChatGPT4/Examples/DISQLite3 Pro 5.22.0 D4-XE10.2.md deleted file mode 100644 index c0d8e1d6336321e151c55736207fa764111e26cd..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/DISQLite3 Pro 5.22.0 D4-XE10.2.md +++ /dev/null @@ -1,6 +0,0 @@ - -

DISQLite3 Pro is a powerful application for creating and managing database programs and databases. The application is not difficult to use and, more importantly, it has a graphical interface for creating and managing databases. It is possible to create all types of databases and database files with it. All databases are stored in the same directory, so the user does not have to enter the path of each database. It is also possible to create the database program with the application, and database files can be created locally or from a URL. The application is available for Windows and macOS, and users can rely on it for creating and managing both the database and the database program.

    -


    -

    DISQLite3 Pro 5.22.0 D4-XE10.2


    Download 🆓 https://imgfil.com/2uxZs4



    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download The Last Train - Bullet Train Download] [Torrent]l Everything You Need to Know About the Movie and the Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download The Last Train - Bullet Train Download] [Torrent]l Everything You Need to Know About the Movie and the Torrent.md deleted file mode 100644 index 99b18526d76d3f92139738ec5deb63d23e3ed5bc..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download The Last Train - Bullet Train Download] [Torrent]l Everything You Need to Know About the Movie and the Torrent.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    Its extensive torrent index makes it one of the best movie torrent sites out there. You can download movies of all genres from The Pirate Bay without worrying about downloading suspicious files.

    -

    There is a list of backup trackers given on the torrents page listing. Add them to get every last bit of available speed. GloTorrents also has an active forum where you can request torrents, subtitles, and more.

    -

    The Last Train - Bullet Train Download] [Torrent]l


Download File: https://imgfil.com/2uxZbq



    -


    -

    It is especially helpful in preventing hackers from stealing your data while connected to an unsecure public Wi-Fi network. A VPN for torrenting allows you the anonymity to download as much as you want.

    -

    Technically, it is safe to torrent. It is based on a P2P (peer-to-peer) network where all participants share bits of a file. As more people download a file or some portion of it, they can become an active participant.

    -

    It depends on where you are downloading the file more than anything else. Public torrents are swarming with trojans that infect your system with malware such as a cryptominer. To prevent this from happening, always be mindful of what you download. Copyrighted material such as games are usually a honeypot for hackers.

    -

    Privacy experts recommend the use of a Torrent VPN to make your torrent activities anonymous. With a VPN for torrenting, you can download torrent files securely in countries dominated by DMCAs and copyright laws.

    -

Kickasstorrents.to is probably the oldest still-functioning Kickass clone that users can access right now. You can access it using a VPN for all your torrenting needs. It offers the complete Kickass torrents database with a whole collection of movies, series, documentaries, and much more for users to download. The site also has its own Kickass community where it provides regular updates on the latest torrents available for download.

    -

Tor-cr.org is yet another great Kickass clone. It has turned out to be a very useful clone website, as it offers the complete list of Kickass Torrents. The website is easily accessible from all regions unless your ISP has imposed regional restrictions on these versions of Kickass. However, using a VPN will give you full access to Tor-cr.org and let you download torrents from a wide range of content categories.

    -

    -

Kat.li is another top Kickass clone website with a fast and powerful torrent search engine similar to the one we had with the original Kickass website. The site indexes torrent files from multiple domains and provides a huge collection of Kickass torrents so users can download their favorite content, including TV shows, movies, games, music, apps, and more.

    -

Although there is only a slight chance that the above-mentioned torrenting clone websites will get shut down in the near future, if they do, you can make do with non-English torrenting sites to find your favorite content. These non-English torrenting websites may be difficult to use for English-only downloaders, but you can still use Google Translate to translate the website into your language and make it easy to download what you need.

    -

    The popular animetorrents indexing website got shut down recently, causing concerns for all torrent fans who relied on the website to download anime content. But it is now back with a new interface and the same directory of torrents. You can download your favorite anime movie and series without any problems.

    -

    ArenaBG is a Bulgarian-torrents indexing website. It has been a target of a lot of investigations for violating copyright laws, but it is still up and running. Initially it was only available to access in Bulgaria, however, since 2011, users from around the world can access it easily. ArenaBG offers a huge selection of torrents for download and you can access it easily from anywhere. But remember, to avoid any trouble, you can use a Kickass VPN to stay anonymous and private.

    -

    ExtraTorrent is a great torrent website and thousands of users use it to download their favorite torrents every day. It offers a huge database of torrents for download and is surely one of the best Kickass alternatives you must consider.

    -

Torrents.me works like a multi-search engine that allows you to search and download your favorite torrents from popular torrenting websites like The Pirate Bay, ExtraTorrent, and LimeTorrents. You can easily add your preferred torrenting websites to the search and find your favorite torrents through their database.

    -

    Since 1985, SERTC has provided hands-on, realistic training in surface transportation hazmat response. With new facilities and expanding curriculum, the SERTC trainee community is growing to keep local, state, tribal and territorial communities even safer.

    -

As he was older and stronger than any of the other members who took up racing, and as he always rode the lightest and best wheel that money could procure, he had, without much hard work, easily maintained a lead in the racing field, and had come to consider himself as invincible. He regarded himself as such a sure winner of this last race for the Railroad Cup, that he had not taken the trouble to go into training for it. He would not even give up his cigarette smoking, a habit that he had acquired because he considered it fashionable and manly. Now he was beaten, disgracefully, and that by a boy nearly two years younger than himself. It was too much, and he determined to find some excuse for his defeat, that should at the same time remove the disgrace from him, and place it upon other shoulders.

    -

With this Rod plunged down the steep bank to the railroad track, and disappeared in the darkness. He went in the direction of the next station to Euston, about five miles away, as he did not wish to be recognized when he made the attempt to secure a ride on some train to New York. It was to be an attempt only; for he had not a cent of money in his pockets, and had no idea of how he should obtain the coveted ride. In addition to being penniless, he was hungry, and his hunger was increased tenfold by the knowledge that he had no means of satisfying it. Still he was a boy with unlimited confidence in himself. He always had fallen on his feet; and, though this was the worse fix in which he had ever found himself, he had faith that he would come out of it all right somehow. His heart was already so much lighter since he had learned from Dan that some of his friends, and especially Eltje Vanderveer, still believed in him, that his situation did not seem half so desperate as it had an hour before.

    -

Rod was already enough of a railroad man to know that, as he was going east, he must walk on the west bound track. By so doing he would be able to see trains bound west, while they were still at some distance from him, and would be in no danger from those bound east and overtaking him.

    -

When he was about half a mile from the little station, toward which he was walking, he heard the long-drawn, far-away whistle of a locomotive. Was it ahead of him or behind? On account of the bewildering echoes he could not tell. To settle the question he kneeled down, and placed his ear against one of the rails of the west bound track. It was cold and silent. Then he tried the east bound track in the same way. This rail seemed to tingle with life, and a faint, humming sound came from it. It was a perfect railroad telephone, and it informed the listener as plainly as words could have told him, that a train was approaching from the west.

    -

He stopped to note its approach. In a few minutes the rails of the east bound track began to quiver with light from the powerful reflector in front of its locomotive. Then they stretched away toward the oncoming train in gleaming bands of indefinite length, while the dazzling light seemed to cut a bright pathway between walls of solid blackness for the use of the advancing monster. As the bewildering glare passed him, Rod saw that the train was a long, heavy-laden freight, and that some of its cars contained cattle. He stood motionless as it rushed past him, shaking the solid earth with its ponderous weight, and he drew a decided breath of relief at the sight of the blinking red eyes on the rear platform of its caboose. How he wished he was in that caboose, riding comfortably toward New York, instead of plodding wearily along on foot, with nothing but uncertainties ahead of him.

    -

As Rod stood gazing at the receding train he noticed a human figure step from the lighted interior of the caboose, through the open doorway, to the platform, apparently kick at something, and almost instantly return into the car. At the same time the boy fancied he heard a sharp cry of pain; but was not sure. As he resumed his tiresome walk, gazing longingly after the vanishing train lights, he saw another light, a white one that moved toward him with a swinging motion, close to the ground. While he was wondering what it was, he almost stumbled over a small animal that stood motionless on the track, directly in front of him. It was a dog. Now Rod dearly loved dogs, and seemed instinctively to know that this one was in some sort of trouble. As he stopped to pat it, the creature uttered a little whine, as though asking his sympathy and help. At the same time it licked his hand.

    -

The latter told the boy that the young tramp, as they called him, was billed through to New York, to look after some cattle that were on the train; but that he was a worthless, ugly fellow, who had not paid the slightest attention to them, and whose only object in accepting the job was evidently to obtain a free ride in the caboose. Smiler, whom he had been delighted to find on the train when it was turned over to him, had taken a great dislike to the fellow from the first. He had growled and shown his teeth whenever the tramp moved about the car, and several times the latter had threatened to teach him better manners. When he and Brakeman Joe went to the forward end of the train, to make ready for side-tracking it, they left the dog sitting on the rear platform of the caboose, and the tramp apparently asleep, as Rod had found him, on one of the lockers. He must have taken advantage of their absence to deal the dog the cruel kick that cut his ear, and landed him, stunned and bruised, on the track where he had been discovered.

    -
    -
    \ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/processing/text.py b/spaces/1line/AutoGPT/autogpt/processing/text.py deleted file mode 100644 index 52add81401775c1b111512d8149f86a175fd9acb..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/processing/text.py +++ /dev/null @@ -1,132 +0,0 @@ -"""Text processing functions""" -from typing import Dict, Generator, Optional - -from selenium.webdriver.remote.webdriver import WebDriver - -from autogpt.config import Config -from autogpt.llm_utils import create_chat_completion -from autogpt.memory import get_memory - -CFG = Config() -MEMORY = get_memory(CFG) - - -def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: - """Split text into chunks of a maximum length - - Args: - text (str): The text to split - max_length (int, optional): The maximum length of each chunk. Defaults to 8192. - - Yields: - str: The next chunk of text - - Raises: - ValueError: If the text is longer than the maximum length - """ - paragraphs = text.split("\n") - current_length = 0 - current_chunk = [] - - for paragraph in paragraphs: - if current_length + len(paragraph) + 1 <= max_length: - current_chunk.append(paragraph) - current_length += len(paragraph) + 1 - else: - yield "\n".join(current_chunk) - current_chunk = [paragraph] - current_length = len(paragraph) + 1 - - if current_chunk: - yield "\n".join(current_chunk) - - -def summarize_text( - url: str, text: str, question: str, driver: Optional[WebDriver] = None -) -> str: - """Summarize text using the OpenAI API - - Args: - url (str): The url of the text - text (str): The text to summarize - question (str): The question to ask the model - driver (WebDriver): The webdriver to use to scroll the page - - Returns: - str: The summary of the text - """ - if not text: - return "Error: No text to summarize" - - text_length = len(text) - print(f"Text length: {text_length} characters") - - summaries = [] - chunks = list(split_text(text)) - scroll_ratio = 1 / len(chunks) - - for i, chunk in enumerate(chunks): - if driver: - scroll_to_percentage(driver, scroll_ratio * i) - print(f"Adding chunk {i + 1} / {len(chunks)} to memory") - - memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}" - - MEMORY.add(memory_to_add) - - print(f"Summarizing chunk {i + 1} / {len(chunks)}") - messages = [create_message(chunk, question)] - - summary = create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - summaries.append(summary) - print(f"Added chunk {i + 1} summary to memory") - - memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}" - - MEMORY.add(memory_to_add) - - print(f"Summarized {len(chunks)} chunks.") - - combined_summary = "\n".join(summaries) - messages = [create_message(combined_summary, question)] - - return create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - - -def scroll_to_percentage(driver: WebDriver, ratio: float) -> None: - """Scroll to a percentage of the page - - Args: - driver (WebDriver): The webdriver to use - ratio (float): The percentage to scroll to - - Raises: - ValueError: If the ratio is not between 0 and 1 - """ - if ratio < 0 or ratio > 1: - raise ValueError("Percentage should be between 0 and 1") - driver.execute_script(f"window.scrollTo(0, document.body.scrollHeight * {ratio});") - - -def create_message(chunk: str, question: str) -> Dict[str, str]: - """Create a message for the chat completion - - Args: - chunk (str): The chunk of text to summarize - 
question (str): The question to answer - - Returns: - Dict[str, str]: The message to send to the chat completion - """ - return { - "role": "user", - "content": f'"""{chunk}""" Using the above text, answer the following' - f' question: "{question}" -- if the question cannot be answered using the text,' - " summarize the text.", - } diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent !EXCLUSIVE!.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent !EXCLUSIVE!.md deleted file mode 100644 index 28ed019d26be1aadc7e0e33e06c5c13a0278634a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent !EXCLUSIVE!.md +++ /dev/null @@ -1,74 +0,0 @@ -## Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent - - - - - - ![Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent !EXCLUSIVE!](https://cdn-games.bigfishsites.com/en_buildalot-fairy-tales/screen2.jpg) - - - - - -**CLICK HERE ••• [https://www.google.com/url?q=https%3A%2F%2Fbytlly.com%2F2txjm0&sa=D&sntz=1&usg=AOvVaw1SVqXiA0JjUeIJDUtRRRY4](https://www.google.com/url?q=https%3A%2F%2Fbytlly.com%2F2txjm0&sa=D&sntz=1&usg=AOvVaw1SVqXiA0JjUeIJDUtRRRY4)** - - - - - - - - - - - - - -# How to Download and Play Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent - - - -If you are looking for a fun and relaxing game that combines city-building and fairy tale elements, then you should try Build-a-lot 7 - Fairy Tales. This is the seventh installment of the popular Build-a-lot series, and it offers you a chance to create your own magical kingdom with castles, cottages, fountains, and more. You can also explore different fairy tale worlds, meet famous characters, and complete challenging quests. - - - -But how can you get this game for free? The answer is by downloading and playing the Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent. This is a file that contains the full version of the game, already cracked and ready to play. You don't need to install anything or register any account. You just need to follow these simple steps: - - - -1. Download a torrent client, such as uTorrent or BitTorrent, and install it on your computer. - -2. Go to a torrent site, such as The Pirate Bay or Kickass Torrents, and search for "Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games". - -3. Choose the torrent file that has the most seeders and leechers, and download it to your computer. - -4. Open the torrent file with your torrent client, and select the destination folder where you want to save the game. - -5. Wait for the download to finish. It may take some time depending on your internet speed and the number of peers. - -6. Once the download is complete, open the destination folder and double-click on the game icon. The game will launch automatically. - -7. Enjoy playing Build-a-lot 7 - Fairy Tales! - - - -Note: Downloading and playing torrent files may be illegal in some countries. Please check your local laws before proceeding. Also, be careful of viruses and malware that may be hidden in some torrent files. Always scan your files with an antivirus program before opening them. - - - -Build-a-lot 7 - Fairy Tales is a game that will appeal to both casual and hardcore gamers. You can choose from four different modes: Campaign, Casual, Expert, and Sandbox. 
Each mode has its own objectives and challenges, and you can adjust the difficulty level according to your preference. You can also unlock achievements and trophies as you progress through the game. - - - -The game features stunning graphics and sound effects that will immerse you in the fairy tale atmosphere. You can customize your kingdom with different types of buildings, decorations, and landscaping. You can also interact with various fairy tale characters, such as Cinderella, Snow White, Rapunzel, and more. You can help them with their problems, or cause some mischief if you feel like it. - - - -Build-a-lot 7 - Fairy Tales is a game that will keep you entertained for hours. You can download and play it for free by using the Build-a-lot 7 - Fairy Tales - Full PreCracked - Foxy Games Torrent. Just follow the instructions above and start building your dream kingdom today! - - 1b8d091108 - - - - - diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Beach Buggy Racing 2 How to Unlock and Upgrade Over 40 Powerups.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Beach Buggy Racing 2 How to Unlock and Upgrade Over 40 Powerups.md deleted file mode 100644 index d1992a1ffe97e888087f8a6b3bcd5ee9a9109b3b..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Beach Buggy Racing 2 How to Unlock and Upgrade Over 40 Powerups.md +++ /dev/null @@ -1,116 +0,0 @@ - -

    Beach Buggy Racing 2: A Fun and Exciting Kart Racing Game

    -

    Do you love kart racing games? Do you want to experience a thrilling adventure on a mysterious island? Do you want to compete against other players from around the world? If you answered yes to any of these questions, then you should try Beach Buggy Racing 2, a fun and exciting kart racing game that you can download from Microsoft Store. In this article, we will tell you everything you need to know about this game, including what it is, how to download it, what are its features, how to play it, and why you should play it.

    -

    beach buggy racing 2 download microsoft store


    Download File ……… https://urlin.us/2uSSvc



    -

    What is Beach Buggy Racing 2?

    -

    Beach Buggy Racing 2 is a sequel to the popular Beach Buggy Racing, a game that introduced over 100 million international mobile players to console-style kart racing with a playful off-road twist. Beach Buggy Racing 2 is a fully 3D off-road kart racing game with amazing physics, detailed cars and characters, and spectacular weapons, powered by Vector Engine and NVIDIA's PhysX. It's like a console game in the palm of your hand!

    -

    Beach Buggy Racing 2 is a game that you can play solo or with friends in split screen or online modes. You can join the Beach Buggy Racing League and compete against drivers and cars from around the world. You can race through Egyptian pyramids, dragon-infested castles, pirate ship wrecks, and experimental alien bio-labs. You can collect and upgrade an arsenal of fun and wacky powerups. You can recruit new drivers, assemble a garage full of cars, and race your way to the top of the league.

    -

    How to download Beach Buggy Racing 2 from Microsoft Store?

    -

    If you want to download Beach Buggy Racing 2 on your Windows 10 device, you can follow these simple steps:

    -

    -
      -
    1. Open Microsoft Store app on your device.
    2. -
    3. Search for Beach Buggy Racing 2 in the search bar.
    4. -
    5. Select the game from the search results.
    6. -
    7. Click on Get or Install button.
    8. -
    9. Wait for the download and installation process to complete.
    10. -
    11. Launch the game and enjoy!
    12. -
    -

    The system requirements for Beach Buggy Racing 2 are:

    -
      -
    • OS: Windows 10 version 18362.0 or higher
    • -
    • Architecture: x64
    • -
    • DirectX: Version 11
    • -
    • Memory: 4 GB
    • -
    • Processor: Intel Core i5-6500 or equivalent
    • -
    • Graphics: NVIDIA GeForce GTX750 Ti or equivalent
    • -
    -

The price of Beach Buggy Racing 2 is $19.99. However, you can also buy the Hot Wheels Edition bundle for $26.98, which includes the game and two DLC packs: Hot Wheels Booster Pack and Oddball Car Pack.

    One of the benefits of downloading the game from Microsoft Store is that you can enjoy the Hot Wheels Booster Pack DLC, an exciting new content expansion that adds seven legendary Hot Wheels cars and four new tracks, complete with twisting orange track pieces, to the Beach Buggy Racing League. You can also get the Oddball Car Pack DLC, which adds four wacky and weird cars to your garage: the Rocket Car, the Shark Car, the Alien Car, and the Monster Truck. These DLC packs are sold separately or as a bundle with the game for a discounted price.

    -

    What are the features of Beach Buggy Racing 2?

    -

    Beach Buggy Racing 2 is not just a simple racing game. It has many features that make it a fun and exciting kart racing game. Here are some of them:

    -

    The different game modes and challenges

    -

    You can choose from different game modes and challenges to test your skills and have fun. You can play the Adventure mode, where you can explore the island and unlock new tracks, cars, drivers, and powerups. You can also play the Quick Race mode, where you can race on any track you want with any car you want. You can also play the Championship mode, where you can compete in a series of races and earn trophies. You can also play the Daily Challenges mode, where you can complete different tasks and earn rewards. You can also play the Special Events mode, where you can join limited-time events and win exclusive prizes.

    -

    The variety of cars, drivers, and powerups

    -

    You can collect and upgrade over 40 cars, each with their own unique stats and abilities. You can also recruit over 20 drivers, each with their own special power. You can also collect and upgrade over 40 powerups, each with their own effects and strategies. You can mix and match different cars, drivers, and powerups to create your own style and strategy.

    -

    The customization options and the achievements

    -

    You can customize your cars with different paints, decals, wheels, spoilers, and more. You can also customize your drivers with different outfits, hats, glasses, and more. You can also customize your powerup deck with different combinations of powerups. You can also unlock over 100 achievements and show off your skills and progress.

    -

    How to play Beach Buggy Racing 2?

    -

    Beach Buggy Racing 2 is easy to play but hard to master. Here are some tips and tricks to help you play better:

    -

    The controls and the tips for racing

    -

    You can choose from different control options: tilt, touch, or gamepad. You can also adjust the sensitivity and the steering assist. The basic controls are: accelerate, brake, steer, drift, use powerup, use driver ability. The tips for racing are: use drift to take sharp turns and fill up your boost meter; use boost to speed up and overtake your opponents; use powerups wisely and strategically; use driver ability at the right time and situation; avoid obstacles and traps; collect coins and gems; look for shortcuts and secrets.

    -

    The powerup deck and the special abilities

    -

    You can create your own powerup deck with up to eight powerups. You can choose from offensive, defensive, or utility powerups. You can also upgrade your powerups to make them more effective. Some examples of powerups are: firework (shoots a rocket that explodes on impact); oil slick (drops a slippery puddle that spins out other racers); shield (protects you from attacks for a short time); nitro (gives you a burst of speed); magnet (attracts coins and gems); lightning (zaps nearby racers); tornado (creates a swirling wind that blows away other racers); ice cream (freezes other racers in place). You can also use your driver ability once per race. Each driver has a unique ability that can give you an edge over your opponents. Some examples of driver abilities are: beach ball barrage (launches beach balls everywhere); fire breath (breathes fire in front of you); teleport (teleports you to a random position); coin storm (makes coins rain from the sky); banana split (splits into three copies of yourself).

    -

    The online competitions and tournaments

    -

    You can join the Beach Buggy Racing League and compete against other players from around the world in online races. You can earn trophies and rank up in different leagues. You can also join online tournaments and win exclusive rewards. You can also create or join a team and chat with other players.

    -

    Why should you play Beach Buggy Racing 2?

    -

    Beach Buggy Racing 2 is a game that you should play if you love kart racing games. Here are some reasons why you should play Beach Buggy Racing 2:

    -

    The fun and addictive gameplay

    -

    Beach Buggy Racing 2 is a game that will keep you hooked for hours. You will never get bored of racing on different tracks, using different powerups, and unlocking new cars, drivers, and upgrades. You will also enjoy the challenge of competing against other players and improving your skills and rank. You will also have fun exploring the island and discovering its secrets and surprises.

    -

    The stunning graphics and sound effects

    -

    Beach Buggy Racing 2 is a game that will impress you with its graphics and sound effects. You will admire the detailed and colorful 3D graphics that bring the island to life. You will also appreciate the realistic physics and animations that make the racing experience more immersive. You will also enjoy the catchy and upbeat music and sound effects that match the mood and theme of the game.

    -

    The replay value and the updates

    -

    Beach Buggy Racing 2 is a game that will keep you coming back for more. You will always find something new and exciting to do in the game. You will also benefit from the regular updates that add new content and features to the game. You will also be able to play the game offline or online, depending on your preference and availability.

    -

    Conclusion

    -

Beach Buggy Racing 2 is a fun and exciting kart racing game that you can download from Microsoft Store. As covered above, it brings console-style kart racing with a playful off-road twist to solo, split screen, and online play, with dozens of cars, drivers, and powerups to collect, upgrade, and combine into your own style and strategy, and controls that are easy to pick up but hard to master.
    -

    Beach Buggy Racing 2 is a game that you should play if you love kart racing games. You will enjoy the fun and addictive gameplay, the stunning graphics and sound effects, and the replay value and the updates. You will also have fun playing with your friends or other players online. You will also be able to customize your cars, drivers, and powerups to suit your preferences and style.

    -

    If you are ready to join the Beach Buggy Racing League and have a blast on the island, download Beach Buggy Racing 2 from Microsoft Store today and start your engine!

    -

    FAQs

    -

    Here are some frequently asked questions about Beach Buggy Racing 2:

    -
      -
    1. How can I get more coins and gems in the game?
    2. -

      You can get more coins and gems by racing on different tracks, completing daily challenges, participating in special events, watching ads, or buying them with real money.

      -
    3. How can I unlock more cars and drivers in the game?
    4. -

      You can unlock more cars and drivers by progressing through the adventure mode, winning championships, opening chests, or buying them with coins or gems.

      -
    5. How can I upgrade my cars and powerups in the game?
    6. -

      You can upgrade your cars and powerups by using upgrade cards that you can get from chests, daily challenges, special events, or buying them with coins or gems.

      -
    7. How can I join a team or create my own team in the game?
    8. -

      You can join a team or create your own team by tapping on the team icon on the main menu. You can search for an existing team or create a new one with a name, a logo, and a description. You can also invite other players to join your team or accept invitations from other teams. You can chat with your team members, share tips and strategies, and compete in team tournaments.

      -
    9. How can I contact the developers of the game or report a bug or a problem?
    10. -

      You can contact the developers of the game or report a bug or a problem by tapping on the settings icon on the main menu. You can then tap on the help icon and choose from different options: FAQ, support, feedback, privacy policy, terms of service, credits. You can also visit their website at https://www.vectorunit.com/ or follow them on social media at https://www.facebook.com/VectorUnit/ or https://twitter.com/VectorUnit/.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia How to Download and Install Jai Guru Jinn Livery.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia How to Download and Install Jai Guru Jinn Livery.md deleted file mode 100644 index bc4b6134b42f4723cd2f7ce998644542ff86dd05..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator Indonesia How to Download and Install Jai Guru Jinn Livery.md +++ /dev/null @@ -1,113 +0,0 @@ - -

    Bus Simulator Indonesia: How to Download and Install Jai Guru Livery

    -

    Do you love driving buses in realistic and authentic environments? Do you want to customize your bus with cool and fun designs? If yes, then you should try Bus Simulator Indonesia, a popular game that lets you experience what it likes being a bus driver in Indonesia. And if you are looking for a unique and stylish livery for your bus, then you should check out the Jai Guru livery, a beautiful and eye-catching design that will make your bus stand out from the crowd. In this article, we will tell you more about Bus Simulator Indonesia, Jai Guru livery, and how to download and install it in your game.

    -

    bus simulator indonesia jai guru livery download


    Download Zip ☆☆☆☆☆ https://urlin.us/2uSYjQ



    -

    What is Bus Simulator Indonesia?

    -

    Bus Simulator Indonesia (aka BUSSID) is a game developed by Maleo, an Indonesian game studio. It was released in 2017 and has been updated regularly with new features and improvements. The game is available for Android and iOS devices, as well as PC via emulator. The game has over 100 million downloads on Google Play Store and has received positive reviews from players and critics.

    -

    Game features

    -

    Some of the top features of Bus Simulator Indonesia are:

    -
      -
    • Design your own livery: You can create your own livery for your bus using the template provided by the developer or using your own 3D model. You can also download and use livery from other players or creators.
    • -
    • Very easy and intuitive control: You can choose between tilt, steering wheel, or buttons to control your bus. You can also adjust the sensitivity and camera angle according to your preference.
    • -
    • Authentic Indonesian cities and places: You can drive your bus in various cities and places in Indonesia, such as Jakarta, Surabaya, Bali, Sumatra, Java, etc. You can also see landmarks, buildings, traffic signs, and other details that make the game more realistic.
    • -
    • Variety of Indonesian buses with unique features: You can choose from different types of buses, such as mini bus, double decker, articulated bus, etc. Each bus has its own characteristics, such as speed, handling, capacity, etc.
    • -
    • Cool and fun honks: You can honk your horn with different sounds, such as the iconic "Om Telolet Om!" honk that became viral on social media. You can also hear other buses honking back at you.
    • -
    • High-quality and detailed 3D graphics: The game has stunning graphics that show the beauty of Indonesia. You can see the shadows, reflections, weather effects, day and night cycle, etc.
    • -
    • No obstructive ads while driving: The game does not show ads while you are driving your bus. You can enjoy the game without any interruption or distraction.
    • -
    • Leaderboard and online data saving: You can compete with other players on the leaderboard based on your score and achievements. You can also save your data online so you don't lose your progress.
    • -
    • Online multiplayer convoy: You can join or create a convoy with other players online. You can chat with them, follow them, or challenge them.
    • -
    -

    Livery customization

    -

    One of the most fun features of Bus Simulator Indonesia is the livery customization. You can design your own livery for your bus using the template provided by the developer or using your own 3D model. You can also download and use livery from other players or creators. Livery is a term that refers to the paint scheme or design of a vehicle, especially a bus or a plane. Livery can be used to express your personality, style, or preference. You can also use livery to promote your brand, business, or cause. Livery can make your bus more attractive, unique, and recognizable.

    -

    What is Jai Guru Livery?

    -

    Jai Guru livery is a livery created by Jai Guru, a popular and talented livery maker in the BUSSID community. Jai Guru has made many liveries for different types of buses, such as Srikandi SHD, Jetbus 3+, Legacy SR2 XHD Prime, etc. Jai Guru livery is known for its high-quality, colorful, and artistic design. Jai Guru livery is also inspired by Indian culture and religion, as well as other themes and motifs.

    -

    Design and style

    -

    Jai Guru livery has a distinctive design and style that makes it stand out from other liveries. Some of the features of Jai Guru livery are:

    -
      -
    • Bright and vibrant colors: Jai Guru livery uses a combination of bright and vibrant colors, such as red, yellow, green, blue, purple, etc. The colors create a contrast and harmony that make the livery more eye-catching and appealing.
    • -
    • Indian symbols and images: Jai Guru livery incorporates various symbols and images from Indian culture and religion, such as the Om sign, the lotus flower, the elephant, the peacock, etc. The symbols and images represent different meanings and values, such as peace, wisdom, prosperity, beauty, etc.
    • -
    • Floral and geometric patterns: Jai Guru livery also uses floral and geometric patterns to decorate the bus. The patterns add more detail and texture to the livery. The patterns are also influenced by Indian art and architecture.
    • -
    • Texts and slogans: Jai Guru livery also includes texts and slogans on the bus. The texts and slogans are usually in Hindi or English. They can be the name of the bus company, the destination of the bus, or a message to the passengers or other drivers.
    • -
    -

    Download link and credit

    -

    If you want to download and use Jai Guru livery in your game, you can find the download link on Jai Guru's YouTube channel or Facebook page. You can also find other liveries made by Jai Guru on these platforms. Please note that you need to have the compatible bus model in your game before you can use the livery. You can also download the bus model from Jai Guru's channel or page.

    -

    When you download and use Jai Guru livery, please give credit to Jai Guru as the original creator of the livery. Do not claim the livery as your own or modify it without permission from Jai Guru. Do not upload or share the livery on other platforms without giving proper credit to Jai Guru. Respect the work and effort of Jai Guru and support him by subscribing to his channel or liking his page.

    -

    How to Install Jai Guru Livery in Bus Simulator Indonesia?

    -

    Installing Jai Guru livery in Bus Simulator Indonesia is easy and simple. Just follow these steps:

    -


    -

    Step 1: Download the livery file

    -

    The first step is to download the livery file from Jai Guru's channel or page. The file will be in .bussid format, which is a special format for BUSSID liveries. The file size will vary depending on the type of bus and the complexity of the design.

    -

    Step 2: Move the livery file to the BUSSID folder

    -

    The next step is to move the livery file to the BUSSID folder on your device. You can use any file manager app to do this. The BUSSID folder is usually located in Internal Storage > Android > data > com.maleo.bussimulatorid > files > BUSSID.
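If you prefer to copy the file from a PC instead of a file manager app on the phone, the same move can be scripted. The sketch below is only a hypothetical example: it assumes the Android debug bridge (adb) is installed and USB debugging is enabled, that internal storage corresponds to /sdcard on your device, and that the downloaded file is named livery.bussid (a placeholder; replace it with the real file name from Step 1).

```python
# Hypothetical helper: copy a downloaded .bussid livery into the game's data folder via adb.
# Assumes adb is on PATH, USB debugging is enabled, and internal storage is mounted at /sdcard.
import subprocess
from pathlib import Path

LIVERY = Path("livery.bussid")  # placeholder name for the file downloaded in Step 1
BUSSID_DIR = "/sdcard/Android/data/com.maleo.bussimulatorid/files/BUSSID/"

def push_livery(livery: Path, dest: str = BUSSID_DIR) -> None:
    if not livery.exists():
        raise FileNotFoundError(f"Livery file not found: {livery}")
    # adb push <local> <remote> copies the file onto the connected device.
    subprocess.run(["adb", "push", str(livery), dest], check=True)

if __name__ == "__main__":
    push_livery(LIVERY)
    print("Done - the livery should now appear in the BUSSID file manager (Step 4).")
```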

    -

    Step 3: Open the game and select the garage menu

    -

    The third step is to open Bus Simulator Indonesia on your device and select the garage menu from the main menu. The garage menu is where you can choose and customize your bus.

    -

    Step 4: Select the livery file menu and click BUSSID file manager

    -

    The fourth step is to select the livery file menu from the garage menu. The livery file menu shows the list of livery files that you have downloaded or created. From there, click on the BUSSID file manager button. The BUSSID file manager lets you access the BUSSID folder and see the livery files that you have moved there.

    -

    Step 5: Choose the livery you want to use and click open

    -

    The final step is to choose the Jai Guru livery that you want to use for your bus and click on the open button. The game will load the livery and apply it to your bus. You can see the preview of your bus with the Jai Guru livery on the screen. You can also change the color, accessories, or other features of your bus if you want. When you are satisfied with your bus, click on the save button and exit the garage menu.

    -

    Conclusion

    -

    Bus Simulator Indonesia is a fun and realistic game that lets you drive buses in Indonesia. You can also customize your bus with different liveries, such as the Jai Guru livery, a beautiful and eye-catching design inspired by Indian culture and religion. To download and install Jai Guru livery in your game, you just need to follow five simple steps: download the livery file, move it to the BUSSID folder, open the game and select the garage menu, select the livery file menu and click BUSSID file manager, and choose the livery you want to use and click open. Enjoy your bus with Jai Guru livery and have a safe and happy journey!

    -

    FAQs

    -

    Here are some frequently asked questions about Bus Simulator Indonesia and Jai Guru livery:

    -
      -
    • Q: How can I get more buses in Bus Simulator Indonesia?
    • -
    • A: You can get more buses in Bus Simulator Indonesia by buying them with coins or diamonds. You can earn coins or diamonds by playing the game, completing missions, watching ads, or buying them with real money.
    • -
    • Q: How can I create my own livery in Bus Simulator Indonesia?
    • -
    • A: You can create your own livery in Bus Simulator Indonesia by using the template provided by the developer or using your own 3D model. You can find the template and instructions on how to use it on Maleo's website or YouTube channel.
    • -
    • Q: How can I share my livery with other players in Bus Simulator Indonesia?
    • -
    • A: You can share your livery with other players in Bus Simulator Indonesia by uploading it to Maleo's website or any other platform that supports .bussid files. You can also join online multiplayer convoys and show off your livery to other players.
    • -
    • Q: How can I contact Jai Guru or request a custom livery from him?
    • -
    • A: You can contact Jai Guru or request a custom livery from him by sending him a message on his YouTube channel or Facebook page. He will reply to you as soon as possible.
    • -
    • Q: How can I support Jai Guru and his work?
    • -
    • A: You can support Jai Guru and his work by subscribing to his YouTube channel, liking his Facebook page, giving him feedback, sharing his liveries with others, and donating to him if you want.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/onnx_ijbc.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/onnx_ijbc.py deleted file mode 100644 index 05b50bfad4b4cf38903b89f596263a8e29a50d3e..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/onnx_ijbc.py +++ /dev/null @@ -1,267 +0,0 @@ -import argparse -import os -import pickle -import timeit - -import cv2 -import mxnet as mx -import numpy as np -import pandas as pd -import prettytable -import skimage.transform -from sklearn.metrics import roc_curve -from sklearn.preprocessing import normalize - -from onnx_helper import ArcFaceORT - -SRC = np.array( - [ - [30.2946, 51.6963], - [65.5318, 51.5014], - [48.0252, 71.7366], - [33.5493, 92.3655], - [62.7299, 92.2041]] - , dtype=np.float32) -SRC[:, 0] += 8.0 - - -class AlignedDataSet(mx.gluon.data.Dataset): - def __init__(self, root, lines, align=True): - self.lines = lines - self.root = root - self.align = align - - def __len__(self): - return len(self.lines) - - def __getitem__(self, idx): - each_line = self.lines[idx] - name_lmk_score = each_line.strip().split(' ') - name = os.path.join(self.root, name_lmk_score[0]) - img = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2RGB) - landmark5 = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32).reshape((5, 2)) - st = skimage.transform.SimilarityTransform() - st.estimate(landmark5, SRC) - img = cv2.warpAffine(img, st.params[0:2, :], (112, 112), borderValue=0.0) - img_1 = np.expand_dims(img, 0) - img_2 = np.expand_dims(np.fliplr(img), 0) - output = np.concatenate((img_1, img_2), axis=0).astype(np.float32) - output = np.transpose(output, (0, 3, 1, 2)) - output = mx.nd.array(output) - return output - - -def extract(model_root, dataset): - model = ArcFaceORT(model_path=model_root) - model.check() - feat_mat = np.zeros(shape=(len(dataset), 2 * model.feat_dim)) - - def batchify_fn(data): - return mx.nd.concat(*data, dim=0) - - data_loader = mx.gluon.data.DataLoader( - dataset, 128, last_batch='keep', num_workers=4, - thread_pool=True, prefetch=16, batchify_fn=batchify_fn) - num_iter = 0 - for batch in data_loader: - batch = batch.asnumpy() - batch = (batch - model.input_mean) / model.input_std - feat = model.session.run(model.output_names, {model.input_name: batch})[0] - feat = np.reshape(feat, (-1, model.feat_dim * 2)) - feat_mat[128 * num_iter: 128 * num_iter + feat.shape[0], :] = feat - num_iter += 1 - if num_iter % 50 == 0: - print(num_iter) - return feat_mat - - -def read_template_media_list(path): - ijb_meta = pd.read_csv(path, sep=' ', header=None).values - templates = ijb_meta[:, 1].astype(np.int) - medias = ijb_meta[:, 2].astype(np.int) - return templates, medias - - -def read_template_pair_list(path): - pairs = pd.read_csv(path, sep=' ', header=None).values - t1 = pairs[:, 0].astype(np.int) - t2 = pairs[:, 1].astype(np.int) - label = pairs[:, 2].astype(np.int) - return t1, t2, label - - -def read_image_feature(path): - with open(path, 'rb') as fid: - img_feats = pickle.load(fid) - return img_feats - - -def image2template_feature(img_feats=None, - templates=None, - medias=None): - unique_templates = np.unique(templates) - template_feats = np.zeros((len(unique_templates), img_feats.shape[1])) - for count_template, uqt in enumerate(unique_templates): - (ind_t,) = np.where(templates == uqt) - face_norm_feats = img_feats[ind_t] - face_medias = medias[ind_t] - unique_medias, unique_media_counts = np.unique(face_medias, 
return_counts=True) - media_norm_feats = [] - for u, ct in zip(unique_medias, unique_media_counts): - (ind_m,) = np.where(face_medias == u) - if ct == 1: - media_norm_feats += [face_norm_feats[ind_m]] - else: # image features from the same video will be aggregated into one feature - media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, keepdims=True), ] - media_norm_feats = np.array(media_norm_feats) - template_feats[count_template] = np.sum(media_norm_feats, axis=0) - if count_template % 2000 == 0: - print('Finish Calculating {} template features.'.format( - count_template)) - template_norm_feats = normalize(template_feats) - return template_norm_feats, unique_templates - - -def verification(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) - total_pairs = np.array(range(len(p1))) - batchsize = 100000 - sublists = [total_pairs[i: i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -def verification2(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) # save cosine distance between pairs - total_pairs = np.array(range(len(p1))) - batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -def main(args): - use_norm_score = True # if Ture, TestMode(N1) - use_detector_score = True # if Ture, TestMode(D1) - use_flip_test = True # if Ture, TestMode(F1) - assert args.target == 'IJBC' or args.target == 'IJBB' - - start = timeit.default_timer() - templates, medias = read_template_media_list( - os.path.join('%s/meta' % args.image_path, '%s_face_tid_mid.txt' % args.target.lower())) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - - start = timeit.default_timer() - p1, p2, label = read_template_pair_list( - os.path.join('%s/meta' % args.image_path, - '%s_template_pair_label.txt' % args.target.lower())) - stop = timeit.default_timer() - print('Time: %.2f s. 
' % (stop - start)) - - start = timeit.default_timer() - img_path = '%s/loose_crop' % args.image_path - img_list_path = '%s/meta/%s_name_5pts_score.txt' % (args.image_path, args.target.lower()) - img_list = open(img_list_path) - files = img_list.readlines() - dataset = AlignedDataSet(root=img_path, lines=files, align=True) - img_feats = extract(args.model_root, dataset) - - faceness_scores = [] - for each_line in files: - name_lmk_score = each_line.split() - faceness_scores.append(name_lmk_score[-1]) - faceness_scores = np.array(faceness_scores).astype(np.float32) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1])) - start = timeit.default_timer() - - if use_flip_test: - img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:] - else: - img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] - - if use_norm_score: - img_input_feats = img_input_feats - else: - img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True)) - - if use_detector_score: - print(img_input_feats.shape, faceness_scores.shape) - img_input_feats = img_input_feats * faceness_scores[:, np.newaxis] - else: - img_input_feats = img_input_feats - - template_norm_feats, unique_templates = image2template_feature( - img_input_feats, templates, medias) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - - start = timeit.default_timer() - score = verification(template_norm_feats, unique_templates, p1, p2) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - save_path = os.path.join(args.result_dir, "{}_result".format(args.target)) - if not os.path.exists(save_path): - os.makedirs(save_path) - score_save_file = os.path.join(save_path, "{}.npy".format(args.model_root)) - np.save(score_save_file, score) - files = [score_save_file] - methods = [] - scores = [] - for file in files: - methods.append(os.path.basename(file)) - scores.append(np.load(file)) - methods = np.array(methods) - scores = dict(zip(methods, scores)) - x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1] - tpr_fpr_table = prettytable.PrettyTable(['Methods'] + [str(x) for x in x_labels]) - for method in methods: - fpr, tpr, _ = roc_curve(label, scores[method]) - fpr = np.flipud(fpr) - tpr = np.flipud(tpr) - tpr_fpr_row = [] - tpr_fpr_row.append("%s-%s" % (method, args.target)) - for fpr_iter in np.arange(len(x_labels)): - _, min_index = min( - list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr))))) - tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100)) - tpr_fpr_table.add_row(tpr_fpr_row) - print(tpr_fpr_table) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='do ijb test') - # general - parser.add_argument('--model-root', default='', help='path to load model.') - parser.add_argument('--image-path', default='', type=str, help='') - parser.add_argument('--result-dir', default='.', type=str, help='') - parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB') - main(parser.parse_args()) diff --git a/spaces/8star/DeepDanbooru_string/app.py b/spaces/8star/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/8star/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import 
functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
    <p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
    " - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

    <p><h4>PNG Info</h4></p>

    -""" - for key, text in items.items(): - info += f""" -
    <div>
    <p><b>{plaintext_to_html(str(key))}</b></p>
    <p>{plaintext_to_html(str(text))}</p>
    </div>
    -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

    <div><p>{message}</p></div>

    " - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/AI4PD/hexviz/README.md b/spaces/AI4PD/hexviz/README.md deleted file mode 100644 index f9d69dcb3ca704284729c4d451eae875156d211e..0000000000000000000000000000000000000000 --- a/spaces/AI4PD/hexviz/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Hexviz -emoji: 👁️🧬 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -python_version: 3.10.5 -app_file: ./hexviz/🧬Attention_Visualization.py -pinned: true -tags: - - protein language models - - attention analysis - - protein structure - - biology ---- -# hexviz -Visualize attention pattern on 3D protein structures - -## Install and run - -```shell -poetry install - -poetry run streamlit run hexviz/streamlit/Attention_On_Structure.py -``` - -## Export dependecies from poetry -Spaces [require](https://huggingface.co/docs/hub/spaces-dependencies#adding-your-own-dependencies) dependencies in a `requirements.txt` file. Export depencies from poetry's `pyproject.toml` file with: -```shell -poetry export -f requirements.txt --output requirements.txt --without-hashes -``` - -## Acknowledgements -This project builds on the attention visualization introduced and developed in -https://github.com/salesforce/provis#provis-attention-visualizer diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/streaming.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/streaming.py deleted file mode 100644 index fba06936294ca15d72acd2d44f9dbda39a638107..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/streaming.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Streaming module API that should be implemented by all Streaming components, -""" - -from contextlib import contextmanager -import typing as tp -from torch import nn -import torch - - -State = tp.Dict[str, torch.Tensor] - - -class StreamingModule(nn.Module): - """Common API for streaming components. - - Each streaming component has a streaming state, which is just a dict[str, Tensor]. - By convention, the first dim of each tensor must be the batch size. 
- Don't use dots in the key names, as this would clash with submodules - (like in state_dict). - - If `self._is_streaming` is True, the component should use and remember - the proper state inside `self._streaming_state`. - - To set a streaming component in streaming state, use - - with module.streaming(): - ... - - This will automatically reset the streaming state when exiting the context manager. - This also automatically propagates to all streaming children module. - - Some module might also implement the `StreamingModule.flush` method, although - this one is trickier, as all parents module must be StreamingModule and implement - it as well for it to work properly. See `StreamingSequential` after. - """ - def __init__(self) -> None: - super().__init__() - self._streaming_state: State = {} - self._is_streaming = False - - def _apply_named_streaming(self, fn: tp.Any): - for name, module in self.named_modules(): - if isinstance(module, StreamingModule): - fn(name, module) - - def _set_streaming(self, streaming: bool): - def _set_streaming(name, module): - module._is_streaming = streaming - self._apply_named_streaming(_set_streaming) - - @contextmanager - def streaming(self): - """Context manager to enter streaming mode. Reset streaming state on exit.""" - self._set_streaming(True) - try: - yield - finally: - self._set_streaming(False) - self.reset_streaming() - - def reset_streaming(self): - """Reset the streaming state.""" - def _reset(name: str, module: StreamingModule): - module._streaming_state.clear() - - self._apply_named_streaming(_reset) - - def get_streaming_state(self) -> State: - """Return the streaming state, including that of sub-modules.""" - state: State = {} - - def _add(name: str, module: StreamingModule): - if name: - name += "." - for key, value in module._streaming_state.items(): - state[name + key] = value - - self._apply_named_streaming(_add) - return state - - def set_streaming_state(self, state: State): - """Set the streaming state, including that of sub-modules.""" - state = dict(state) - - def _set(name: str, module: StreamingModule): - if name: - name += "." - module._streaming_state.clear() - for key, value in list(state.items()): - # complexity is not ideal here, but probably fine. - if key.startswith(name): - local_key = key[len(name):] - if '.' not in local_key: - module._streaming_state[local_key] = value - del state[key] - - self._apply_named_streaming(_set) - assert len(state) == 0, list(state.keys()) - - def flush(self, x: tp.Optional[torch.Tensor] = None): - """Flush any remaining outputs that were waiting for completion. - Typically, for convolutions, this will add the final padding - and process the last buffer. - - This should take an optional argument `x`, which will be provided - if a module before this one in the streaming pipeline has already - spitted out a flushed out buffer. - """ - if x is None: - return None - else: - return self(x) - - -class StreamingSequential(StreamingModule, nn.Sequential): - """A streaming compatible alternative of `nn.Sequential`. 
- """ - def flush(self, x: tp.Optional[torch.Tensor] = None): - for module in self: - if isinstance(module, StreamingModule): - x = module.flush(x) - elif x is not None: - x = module(x) - return x diff --git a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/README.md b/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/README.md deleted file mode 100644 index 581e8dbede4f0e13eaa8c5c6cc3a954ab3a1ab56..0000000000000000000000000000000000000000 --- a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Video Automatic Speech Recognition -emoji: 💻 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIatUIUC/CodeLATS/executors/py_executor.py b/spaces/AIatUIUC/CodeLATS/executors/py_executor.py deleted file mode 100644 index 8d0e61d7ab0c0dd9a5e755ef7876b2e92204d2a6..0000000000000000000000000000000000000000 --- a/spaces/AIatUIUC/CodeLATS/executors/py_executor.py +++ /dev/null @@ -1,88 +0,0 @@ -import ast -import signal -import astunparse - -from .executor_utils import function_with_timeout - -from typing import List -from .executor_types import ExecuteResult, Executor - -class PyExecutor(Executor): - def execute(self, func: str, tests: List[str], timeout: int = 5) -> ExecuteResult: - # Combine function code and assert statement - imports = 'from typing import *' - func_test_list = [f'{imports}\n{func}\n{test}' for test in tests] - - # Run the tests and collect the results - success_tests = [] - failed_tests = [] - is_passing = True - num_tests = len(func_test_list) - for i in range(num_tests): - try: - - function_with_timeout(exec, (func_test_list[i], globals()), timeout) - - success_tests += [tests[i]] - except Exception: - output = get_output(func, tests[i], timeout=timeout) - failed_tests += [f"{tests[i]} # output: {output}"] - is_passing = False - - state = [] - for test in tests: - if test in success_tests: - state += [True] - else: - state += [False] - - state = tuple(state) - - feedback = "Tested passed:" - for test in success_tests: - feedback += f"\n{test}" - feedback += "\n\nTests failed:" - for test in failed_tests: - feedback += f"\n{test}" - - return ExecuteResult(is_passing, feedback, state) - - def evaluate(self, name: str, func: str, test: str, timeout: int = 5) -> bool: - """ - Evaluates the implementation on Human-Eval Python. 
- - probably should be written in a dataset-agnostic way but not now - """ - code = f"""{func} - -{test} - -check({name}) - """ - try: - - function_with_timeout(exec, (code, globals()), timeout) - - return True - except Exception: - return False - -def get_call_str(assert_statement: str) -> str: - ast_parsed = ast.parse(assert_statement) - try: - call_str = ast_parsed.body[0].test.left # type: ignore - except: - call_str = ast_parsed.body[0].test # type: ignore - - return astunparse.unparse(call_str).strip() - -def get_output(func: str, assert_statement: str, timeout: int = 5) -> str: - try: - exec(f"from typing import *\n{func}", globals()) - func_call = get_call_str(assert_statement) - output = function_with_timeout(eval, (func_call, globals()), timeout) - return output - except TimeoutError: - return "TIMEOUT" - except Exception as e: - return str(e) diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/api.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/api.py deleted file mode 100644 index b7d6aefb6378c9f7418af0277a5357319e943393..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/api.py +++ /dev/null @@ -1,269 +0,0 @@ -from enum import Enum, unique - -import cv2 -import torch -from basicsr.utils import img2tensor -from ldm.util import resize_numpy_image -from PIL import Image -from torch import autocast - - -@unique -class ExtraCondition(Enum): - sketch = 0 - keypose = 1 - seg = 2 - depth = 3 - canny = 4 - style = 5 - color = 6 - openpose = 7 - - -def get_cond_model(opt, cond_type: ExtraCondition): - if cond_type == ExtraCondition.sketch: - from ldm.modules.extra_condition.model_edge import pidinet - model = pidinet() - ckp = torch.load('models/table5_pidinet.pth', map_location='cpu')['state_dict'] - model.load_state_dict({k.replace('module.', ''): v for k, v in ckp.items()}, strict=True) - model.to(opt.device) - return model - elif cond_type == ExtraCondition.seg: - raise NotImplementedError - elif cond_type == ExtraCondition.keypose: - import mmcv - from mmdet.apis import init_detector - from mmpose.apis import init_pose_model - det_config = 'configs/mm/faster_rcnn_r50_fpn_coco.py' - det_checkpoint = 'models/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth' - pose_config = 'configs/mm/hrnet_w48_coco_256x192.py' - pose_checkpoint = 'models/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth' - det_config_mmcv = mmcv.Config.fromfile(det_config) - det_model = init_detector(det_config_mmcv, det_checkpoint, device=opt.device) - pose_config_mmcv = mmcv.Config.fromfile(pose_config) - pose_model = init_pose_model(pose_config_mmcv, pose_checkpoint, device=opt.device) - return {'pose_model': pose_model, 'det_model': det_model} - elif cond_type == ExtraCondition.depth: - from ldm.modules.extra_condition.midas.api import MiDaSInference - model = MiDaSInference(model_type='dpt_hybrid').to(opt.device) - return model - elif cond_type == ExtraCondition.canny: - return None - elif cond_type == ExtraCondition.style: - from transformers import CLIPProcessor, CLIPVisionModel - version = 'openai/clip-vit-large-patch14' - processor = CLIPProcessor.from_pretrained(version) - clip_vision_model = CLIPVisionModel.from_pretrained(version).to(opt.device) - return {'processor': processor, 'clip_vision_model': clip_vision_model} - elif cond_type == ExtraCondition.color: - return None - elif cond_type == ExtraCondition.openpose: - from ldm.modules.extra_condition.openpose.api import OpenposeInference - model = OpenposeInference().to(opt.device) - 
return model - else: - raise NotImplementedError - - -def get_cond_sketch(opt, cond_image, cond_inp_type, cond_model=None): - if isinstance(cond_image, str): - edge = cv2.imread(cond_image) - else: - # for gradio input, pay attention, it's rgb numpy - edge = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - edge = resize_numpy_image(edge, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = edge.shape[:2] - if cond_inp_type == 'sketch': - edge = img2tensor(edge)[0].unsqueeze(0).unsqueeze(0) / 255. - edge = edge.to(opt.device) - elif cond_inp_type == 'image': - edge = img2tensor(edge).unsqueeze(0) / 255. - edge = cond_model(edge.to(opt.device))[-1] - else: - raise NotImplementedError - - # edge = 1-edge # for white background - edge = edge > 0.5 - edge = edge.float() - - return edge - - -def get_cond_seg(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - seg = cv2.imread(cond_image) - else: - seg = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - seg = resize_numpy_image(seg, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = seg.shape[:2] - if cond_inp_type == 'seg': - seg = img2tensor(seg).unsqueeze(0) / 255. - seg = seg.to(opt.device) - else: - raise NotImplementedError - - return seg - - -def get_cond_keypose(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - pose = cv2.imread(cond_image) - else: - pose = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - pose = resize_numpy_image(pose, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = pose.shape[:2] - if cond_inp_type == 'keypose': - pose = img2tensor(pose).unsqueeze(0) / 255. - pose = pose.to(opt.device) - elif cond_inp_type == 'image': - from ldm.modules.extra_condition.utils import imshow_keypoints - from mmdet.apis import inference_detector - from mmpose.apis import (inference_top_down_pose_model, process_mmdet_results) - - # mmpose seems not compatible with autocast fp16 - with autocast("cuda", dtype=torch.float32): - mmdet_results = inference_detector(cond_model['det_model'], pose) - # keep the person class bounding boxes. - person_results = process_mmdet_results(mmdet_results, 1) - - # optional - return_heatmap = False - dataset = cond_model['pose_model'].cfg.data['test']['type'] - - # e.g. use ('backbone', ) to return backbone feature - output_layer_names = None - pose_results, returned_outputs = inference_top_down_pose_model( - cond_model['pose_model'], - pose, - person_results, - bbox_thr=0.2, - format='xyxy', - dataset=dataset, - dataset_info=None, - return_heatmap=return_heatmap, - outputs=output_layer_names) - - # show the results - pose = imshow_keypoints(pose, pose_results, radius=2, thickness=2) - pose = img2tensor(pose).unsqueeze(0) / 255. - pose = pose.to(opt.device) - else: - raise NotImplementedError - - return pose - - -def get_cond_depth(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - depth = cv2.imread(cond_image) - else: - depth = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - depth = resize_numpy_image(depth, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = depth.shape[:2] - if cond_inp_type == 'depth': - depth = img2tensor(depth).unsqueeze(0) / 255. 
- depth = depth.to(opt.device) - elif cond_inp_type == 'image': - depth = img2tensor(depth).unsqueeze(0) / 127.5 - 1.0 - depth = cond_model(depth.to(opt.device)).repeat(1, 3, 1, 1) - depth -= torch.min(depth) - depth /= torch.max(depth) - else: - raise NotImplementedError - - return depth - - -def get_cond_canny(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - canny = cv2.imread(cond_image) - else: - canny = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - canny = resize_numpy_image(canny, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = canny.shape[:2] - if cond_inp_type == 'canny': - canny = img2tensor(canny)[0:1].unsqueeze(0) / 255. - canny = canny.to(opt.device) - elif cond_inp_type == 'image': - canny = cv2.Canny(canny, 100, 200)[..., None] - canny = img2tensor(canny).unsqueeze(0) / 255. - canny = canny.to(opt.device) - else: - raise NotImplementedError - - return canny - - -def get_cond_style(opt, cond_image, cond_inp_type='image', cond_model=None): - assert cond_inp_type == 'image' - if isinstance(cond_image, str): - style = Image.open(cond_image) - else: - # numpy image to PIL image - style = Image.fromarray(cond_image) - - style_for_clip = cond_model['processor'](images=style, return_tensors="pt")['pixel_values'] - style_feat = cond_model['clip_vision_model'](style_for_clip.to(opt.device))['last_hidden_state'] - - return style_feat - - -def get_cond_color(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - color = cv2.imread(cond_image) - else: - color = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - color = resize_numpy_image(color, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = color.shape[:2] - if cond_inp_type == 'image': - color = cv2.resize(color, (opt.W//64, opt.H//64), interpolation=cv2.INTER_CUBIC) - color = cv2.resize(color, (opt.W, opt.H), interpolation=cv2.INTER_NEAREST) - color = img2tensor(color).unsqueeze(0) / 255. - color = color.to(opt.device) - return color - - -def get_cond_openpose(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - openpose_keypose = cv2.imread(cond_image) - else: - openpose_keypose = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - openpose_keypose = resize_numpy_image( - openpose_keypose, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = openpose_keypose.shape[:2] - if cond_inp_type == 'openpose': - openpose_keypose = img2tensor(openpose_keypose).unsqueeze(0) / 255. - openpose_keypose = openpose_keypose.to(opt.device) - elif cond_inp_type == 'image': - with autocast('cuda', dtype=torch.float32): - openpose_keypose = cond_model(openpose_keypose) - openpose_keypose = img2tensor(openpose_keypose).unsqueeze(0) / 255. 
- openpose_keypose = openpose_keypose.to(opt.device) - - else: - raise NotImplementedError - - return openpose_keypose - - -def get_adapter_feature(inputs, adapters): - ret_feat_map = None - ret_feat_seq = None - if not isinstance(inputs, list): - inputs = [inputs] - adapters = [adapters] - - for input, adapter in zip(inputs, adapters): - cur_feature = adapter['model'](input) - if isinstance(cur_feature, list): - if ret_feat_map is None: - ret_feat_map = list(map(lambda x: x * adapter['cond_weight'], cur_feature)) - else: - ret_feat_map = list(map(lambda x, y: x + y * adapter['cond_weight'], ret_feat_map, cur_feature)) - else: - if ret_feat_seq is None: - ret_feat_seq = cur_feature * adapter['cond_weight'] - else: - ret_feat_seq = torch.cat([ret_feat_seq, cur_feature * adapter['cond_weight']], dim=1) - - return ret_feat_map, ret_feat_seq diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/Fill.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/Fill.js deleted file mode 100644 index c9cce61937470aeec8490b4c3ea2f1522687ecb9..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/Fill.js +++ /dev/null @@ -1,36 +0,0 @@ -/* -1. Fill empty grids -*/ - -var Fill = function (map) { - var upperBoard = false; - if (typeof (map) === 'boolean') { - upperBoard = map; - map = undefined; - } - - var symbol; - var board = this.board, - symbols = this.candidateSymbols; - - var height = this.board.height; - if (upperBoard) { - height /= 2; - } - for (var tileY = 0; tileY < height; tileY++) { - for (var tileX = 0, width = this.board.width; tileX < width; tileX++) { - if (board.contains(tileX, tileY, this.chessTileZ)) { // not empty - continue; - } - - if (map !== undefined) { - symbol = map[tileX][tileY]; - if (symbol !== '?') { - symbols = symbol; - } - } - this.createChess(tileX, tileY, symbols); - } - } -} -export default Fill; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Click.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Click.js deleted file mode 100644 index 093ae2ad896ba15f081f0fd5f1665938221c0439..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Click.js +++ /dev/null @@ -1,2 +0,0 @@ -import Click from '../../../plugins/button.js' -export default Click; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/DropDownList.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/DropDownList.d.ts deleted file mode 100644 index 3648d8717d74ed3f52e8197b344cde7777890d61..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/DropDownList.d.ts +++ /dev/null @@ -1,130 +0,0 @@ -import Label from '../label/Label'; - -export default DropDownList; - -declare namespace DropDownList { - type CreateButtonCallbackType = ( - this: DropDownList, - scene: Phaser.Scene, - option: any, - index: number, - options: any[] - ) => Phaser.GameObjects.GameObject; - - type CreateBackgroundCallbackType = ( - this: DropDownList, - scene: Phaser.Scene, - ) => Phaser.GameObjects.GameObject; - - type OnButtonClickCallbackType = ( - this: DropDownList, - button: Phaser.GameObjects.GameObject, - index: number, - pointer: 
Phaser.Input.Pointer, - event: Phaser.Types.Input.EventData - ) => void; - - type OnButtonOverCallbackType = ( - this: DropDownList, - button: Phaser.GameObjects.GameObject, - index: number, - pointer: Phaser.Input.Pointer, - event: Phaser.Types.Input.EventData - ) => void; - - type OnButtonOutCallbackType = ( - this: DropDownList, - button: Phaser.GameObjects.GameObject, - index: number, - pointer: Phaser.Input.Pointer, - event: Phaser.Types.Input.EventData - ) => void; - - type AlignParentType = 'text' | 'icon'; - - type ExpandDirectionType = 0 | 1 | 'down' | 'up'; - - type SetValueCallbackType = ( - dropDownList: DropDownList, - value?: any, - previousValue?: any, - ) => void; - - type ListSpaceType = { - left?: number, right?: number, top?: number, bottom?: number, item?: number - }; - - type WrapListSpaceType = { - left?: number, right?: number, top?: number, bottom?: number, item?: number, line?: number - } - - interface IConfig extends Label.IConfig { - options?: any[], - list?: { - createBackgroundCallback?: CreateBackgroundCallbackType; - createButtonCallback?: CreateButtonCallbackType; - - onButtonClick?: OnButtonClickCallbackType; - onButtonOver?: OnButtonOverCallbackType; - onButtonOut?: OnButtonOutCallbackType; - - easeIn?: number; - easeOut?: number; - - wrap?: boolean; - width?: number; - height?: number; - alignParent?: AlignParentType; - alignSide?: string; - expandDirection?: ExpandDirectionType; - bounds?: Phaser.Geom.Rectangle; - - space?: ListSpaceType | WrapListSpaceType; - - draggable?: boolean; - }, - - setValueCallback?: SetValueCallbackType; - setValueCallbackScope?: object; - value?: any; - } -} - -declare class DropDownList extends Label { - constructor( - scene: Phaser.Scene, - config?: DropDownList.IConfig - ); - - setOptions(options: any[]): this; - - openListPanel(): this; - closeListPanel(): this; - toggleListPanel(): this; - - setValue(value?: any): this; - value: any; - - setCreateButtonCallback(callback?: DropDownList.CreateBackgroundCallbackType): this; - setCreateBackgroundCallback(callback?: DropDownList.CreateBackgroundCallbackType): this; - - setButtonClickCallback(callback?: DropDownList.OnButtonClickCallbackType): this; - setButtonOverCallback(callback?: DropDownList.OnButtonOverCallbackType): this; - setButtonOutCallback(callback?: DropDownList.OnButtonOutCallbackType): this; - - setListEaseInDuration(duration?: number): this; - setListEaseOutDuration(duration?: number): this; - - setWrapEnable(enable?: boolean): this; - setListWidth(width?: number): this; - setListHeight(height?: number): this; - setListSize(width?: number, height?: number): this; - - setListAlignmentMode(mode?: DropDownList.AlignParentType): this; - setListAlignmentSide(side?: string): this; - setListBounds(bounds: Phaser.Geom.Rectangle): this; - - setListSpace(space?: DropDownList.ListSpaceType | DropDownList.WrapListSpaceType): this; - - setListDraggable(enable?: boolean): this; -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectangle/RoundRectangle.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectangle/RoundRectangle.d.ts deleted file mode 100644 index 990e814eccc548081543dda98307abc4bd5814f6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectangle/RoundRectangle.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import RoundRectangle from "../../../plugins/roundrectangle"; -export default RoundRectangle; \ No 
newline at end of file diff --git a/spaces/Ajit025/Text_to_Image_conversion/app.py b/spaces/Ajit025/Text_to_Image_conversion/app.py deleted file mode 100644 index 38284eb13a3476a3ca0d63455b7dd139e13e5c51..0000000000000000000000000000000000000000 --- a/spaces/Ajit025/Text_to_Image_conversion/app.py +++ /dev/null @@ -1,15 +0,0 @@ -from text_to_image import TextToImageTool -import gradio as gr - -tool = TextToImageTool() - -def fn(*args, **kwargs): - return tool(*args, **kwargs) - -gr.Interface( - fn=fn, - inputs=tool.inputs, - outputs=tool.outputs, - title="Text_to_Image", - article=tool.description, -).queue(concurrency_count=5).launch() diff --git a/spaces/Aki004/herta-so-vits/flask_api.py b/spaces/Aki004/herta-so-vits/flask_api.py deleted file mode 100644 index dff87134620d6ec00e6c8950ccf6313946216af8..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/flask_api.py +++ /dev/null @@ -1,62 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # pitch changing information - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW required sampling rate - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # get wav from http and convert - input_wav_path = io.BytesIO(wave_file.read()) - - # inference - if raw_infer: - # out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0, - auto_predict_f0=False, noice_scale=0.4, f0_filter=False) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0, - auto_predict_f0=False, noice_scale=0.4, f0_filter=False) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # return - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # True means splice directly. There may be explosive sounds at the splice. - # False means use cross fade. There may be slight overlapping sounds at the splice. - # Using 0.3-0.5s in VST plugin can reduce latency. - # You can adjust the maximum slicing time of VST plugin to 1 second and set it to ture here to get a stable sound quality and a relatively large delay。 - # Choose an acceptable method on your own. 
- raw_infer = True - # each model and config are corresponding - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - cluster_model_path = "logs/44k/kmeans_10000.pt" - svc_model = Svc(model_name, config_name, cluster_model_path=cluster_model_path) - svc = RealTimeVC() - # corresponding to the vst plugin here - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/env.sh b/spaces/AlexWang/lama/bin/paper_runfiles/env.sh deleted file mode 100644 index f3052f0ea1672a569e7775f8c54967d730a7b5ec..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/bin/paper_runfiles/env.sh +++ /dev/null @@ -1,8 +0,0 @@ -DIRNAME="$(dirname $0)" -DIRNAME="$(realpath ""$DIRNAME"")" - -BINDIR="$DIRNAME/.." -SRCDIR="$BINDIR/.." -CONFIGDIR="$SRCDIR/configs" - -export PYTHONPATH="$SRCDIR:$PYTHONPATH" diff --git a/spaces/Alfasign/nomic-ai-gpt4all-13b-snoozy/README.md b/spaces/Alfasign/nomic-ai-gpt4all-13b-snoozy/README.md deleted file mode 100644 index c31d94799485a38ee1a1e088ed6ca4345f3bda9a..0000000000000000000000000000000000000000 --- a/spaces/Alfasign/nomic-ai-gpt4all-13b-snoozy/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat -emoji: 📈 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/Amrrs/DragGan-Inversion/dnnlib/util.py b/spaces/Amrrs/DragGan-Inversion/dnnlib/util.py deleted file mode 100644 index 90f91e1085239fd9672b2cbe83cbd8e85b27ec0e..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/dnnlib/util.py +++ /dev/null @@ -1,504 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# ------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - 
- if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def format_time_brief(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m".format(s // (60 * 60), (s // 60) % 60) - else: - return "{0}d {1:02}h".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - return False - - -# Functionality to import modules/objects by name, and call functions by name -# ------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. - Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) - for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module( - module_name) # may raise ImportError - # may raise AttributeError - get_obj_from_module(module, local_obj_name) - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? 
- for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module( - module_name) # may raise ImportError - # may raise AttributeError - get_obj_from_module(module, local_obj_name) - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if module == '__main__': - module = os.path.splitext(os.path.basename( - sys.modules[module].__file__))[0] - return module + "." + obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. 
- Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) - for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. - Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. - if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. 
- if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." % url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split( - '"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError( - "Google Drive download quota exceeded -- please try again later") - - match = re.search( - r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. - if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join( - cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. - assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feedback.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feedback.md deleted file mode 100644 index 25808b6575a405694f64dbf1b5a0ece8e0fcd2e2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feedback.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: "💬 Feedback about API Design" -about: Give feedback about the current API design -title: '' -labels: '' -assignees: '' - ---- - -**What API design would you like to have changed or added to the library? Why?** - -**What use case would this enable or better enable? 
Can you give us a code example?** diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/generate_logits.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/generate_logits.py deleted file mode 100644 index 89dce0e78d4ef50e060ac554ac3f7e760f55983f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/generate_logits.py +++ /dev/null @@ -1,127 +0,0 @@ -import random - -import torch -from huggingface_hub import HfApi - -from diffusers import UNet2DModel - - -api = HfApi() - -results = {} -# fmt: off -results["google_ddpm_cifar10_32"] = torch.tensor([ - -0.7515, -1.6883, 0.2420, 0.0300, 0.6347, 1.3433, -1.1743, -3.7467, - 1.2342, -2.2485, 0.4636, 0.8076, -0.7991, 0.3969, 0.8498, 0.9189, - -1.8887, -3.3522, 0.7639, 0.2040, 0.6271, -2.7148, -1.6316, 3.0839, - 0.3186, 0.2721, -0.9759, -1.2461, 2.6257, 1.3557 -]) -results["google_ddpm_ema_bedroom_256"] = torch.tensor([ - -2.3639, -2.5344, 0.0054, -0.6674, 1.5990, 1.0158, 0.3124, -2.1436, - 1.8795, -2.5429, -0.1566, -0.3973, 1.2490, 2.6447, 1.2283, -0.5208, - -2.8154, -3.5119, 2.3838, 1.2033, 1.7201, -2.1256, -1.4576, 2.7948, - 2.4204, -0.9752, -1.2546, 0.8027, 3.2758, 3.1365 -]) -results["CompVis_ldm_celebahq_256"] = torch.tensor([ - -0.6531, -0.6891, -0.3172, -0.5375, -0.9140, -0.5367, -0.1175, -0.7869, - -0.3808, -0.4513, -0.2098, -0.0083, 0.3183, 0.5140, 0.2247, -0.1304, - -0.1302, -0.2802, -0.2084, -0.2025, -0.4967, -0.4873, -0.0861, 0.6925, - 0.0250, 0.1290, -0.1543, 0.6316, 1.0460, 1.4943 -]) -results["google_ncsnpp_ffhq_1024"] = torch.tensor([ - 0.0911, 0.1107, 0.0182, 0.0435, -0.0805, -0.0608, 0.0381, 0.2172, - -0.0280, 0.1327, -0.0299, -0.0255, -0.0050, -0.1170, -0.1046, 0.0309, - 0.1367, 0.1728, -0.0533, -0.0748, -0.0534, 0.1624, 0.0384, -0.1805, - -0.0707, 0.0642, 0.0220, -0.0134, -0.1333, -0.1505 -]) -results["google_ncsnpp_bedroom_256"] = torch.tensor([ - 0.1321, 0.1337, 0.0440, 0.0622, -0.0591, -0.0370, 0.0503, 0.2133, - -0.0177, 0.1415, -0.0116, -0.0112, 0.0044, -0.0980, -0.0789, 0.0395, - 0.1502, 0.1785, -0.0488, -0.0514, -0.0404, 0.1539, 0.0454, -0.1559, - -0.0665, 0.0659, 0.0383, -0.0005, -0.1266, -0.1386 -]) -results["google_ncsnpp_celebahq_256"] = torch.tensor([ - 0.1154, 0.1218, 0.0307, 0.0526, -0.0711, -0.0541, 0.0366, 0.2078, - -0.0267, 0.1317, -0.0226, -0.0193, -0.0014, -0.1055, -0.0902, 0.0330, - 0.1391, 0.1709, -0.0562, -0.0693, -0.0560, 0.1482, 0.0381, -0.1683, - -0.0681, 0.0661, 0.0331, -0.0046, -0.1268, -0.1431 -]) -results["google_ncsnpp_church_256"] = torch.tensor([ - 0.1192, 0.1240, 0.0414, 0.0606, -0.0557, -0.0412, 0.0430, 0.2042, - -0.0200, 0.1385, -0.0115, -0.0132, 0.0017, -0.0965, -0.0802, 0.0398, - 0.1433, 0.1747, -0.0458, -0.0533, -0.0407, 0.1545, 0.0419, -0.1574, - -0.0645, 0.0626, 0.0341, -0.0010, -0.1199, -0.1390 -]) -results["google_ncsnpp_ffhq_256"] = torch.tensor([ - 0.1075, 0.1074, 0.0205, 0.0431, -0.0774, -0.0607, 0.0298, 0.2042, - -0.0320, 0.1267, -0.0281, -0.0250, -0.0064, -0.1091, -0.0946, 0.0290, - 0.1328, 0.1650, -0.0580, -0.0738, -0.0586, 0.1440, 0.0337, -0.1746, - -0.0712, 0.0605, 0.0250, -0.0099, -0.1316, -0.1473 -]) -results["google_ddpm_cat_256"] = torch.tensor([ - -1.4572, -2.0481, -0.0414, -0.6005, 1.4136, 0.5848, 0.4028, -2.7330, - 1.2212, -2.1228, 0.2155, 0.4039, 0.7662, 2.0535, 0.7477, -0.3243, - -2.1758, -2.7648, 1.6947, 0.7026, 1.2338, -1.6078, -0.8682, 2.2810, - 1.8574, -0.5718, -0.5586, -0.0186, 2.3415, 2.1251]) -results["google_ddpm_celebahq_256"] = torch.tensor([ - -1.3690, -1.9720, -0.4090, 
-0.6966, 1.4660, 0.9938, -0.1385, -2.7324, - 0.7736, -1.8917, 0.2923, 0.4293, 0.1693, 1.4112, 1.1887, -0.3181, - -2.2160, -2.6381, 1.3170, 0.8163, 0.9240, -1.6544, -0.6099, 2.5259, - 1.6430, -0.9090, -0.9392, -0.0126, 2.4268, 2.3266 -]) -results["google_ddpm_ema_celebahq_256"] = torch.tensor([ - -1.3525, -1.9628, -0.3956, -0.6860, 1.4664, 1.0014, -0.1259, -2.7212, - 0.7772, -1.8811, 0.2996, 0.4388, 0.1704, 1.4029, 1.1701, -0.3027, - -2.2053, -2.6287, 1.3350, 0.8131, 0.9274, -1.6292, -0.6098, 2.5131, - 1.6505, -0.8958, -0.9298, -0.0151, 2.4257, 2.3355 -]) -results["google_ddpm_church_256"] = torch.tensor([ - -2.0585, -2.7897, -0.2850, -0.8940, 1.9052, 0.5702, 0.6345, -3.8959, - 1.5932, -3.2319, 0.1974, 0.0287, 1.7566, 2.6543, 0.8387, -0.5351, - -3.2736, -4.3375, 2.9029, 1.6390, 1.4640, -2.1701, -1.9013, 2.9341, - 3.4981, -0.6255, -1.1644, -0.1591, 3.7097, 3.2066 -]) -results["google_ddpm_bedroom_256"] = torch.tensor([ - -2.3139, -2.5594, -0.0197, -0.6785, 1.7001, 1.1606, 0.3075, -2.1740, - 1.8071, -2.5630, -0.0926, -0.3811, 1.2116, 2.6246, 1.2731, -0.5398, - -2.8153, -3.6140, 2.3893, 1.3262, 1.6258, -2.1856, -1.3267, 2.8395, - 2.3779, -1.0623, -1.2468, 0.8959, 3.3367, 3.2243 -]) -results["google_ddpm_ema_church_256"] = torch.tensor([ - -2.0628, -2.7667, -0.2089, -0.8263, 2.0539, 0.5992, 0.6495, -3.8336, - 1.6025, -3.2817, 0.1721, -0.0633, 1.7516, 2.7039, 0.8100, -0.5908, - -3.2113, -4.4343, 2.9257, 1.3632, 1.5562, -2.1489, -1.9894, 3.0560, - 3.3396, -0.7328, -1.0417, 0.0383, 3.7093, 3.2343 -]) -results["google_ddpm_ema_cat_256"] = torch.tensor([ - -1.4574, -2.0569, -0.0473, -0.6117, 1.4018, 0.5769, 0.4129, -2.7344, - 1.2241, -2.1397, 0.2000, 0.3937, 0.7616, 2.0453, 0.7324, -0.3391, - -2.1746, -2.7744, 1.6963, 0.6921, 1.2187, -1.6172, -0.8877, 2.2439, - 1.8471, -0.5839, -0.5605, -0.0464, 2.3250, 2.1219 -]) -# fmt: on - -models = api.list_models(filter="diffusers") -for mod in models: - if "google" in mod.author or mod.modelId == "CompVis/ldm-celebahq-256": - local_checkpoint = "/home/patrick/google_checkpoints/" + mod.modelId.split("/")[-1] - - print(f"Started running {mod.modelId}!!!") - - if mod.modelId.startswith("CompVis"): - model = UNet2DModel.from_pretrained(local_checkpoint, subfolder="unet") - else: - model = UNet2DModel.from_pretrained(local_checkpoint) - - torch.manual_seed(0) - random.seed(0) - - noise = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) - time_step = torch.tensor([10] * noise.shape[0]) - with torch.no_grad(): - logits = model(noise, time_step).sample - - assert torch.allclose( - logits[0, 0, 0, :30], results["_".join("_".join(mod.modelId.split("/")).split("-"))], atol=1e-3 - ) - print(f"{mod.modelId} has passed successfully!!!") diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_faster_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_faster_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index c9a035f15cfad12ddbbfa87ed0d579c1cde0c4ce..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_faster_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './ga_faster_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git 
a/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py deleted file mode 100644 index b140f75182cd4832857b6a86fe11b2961703a17c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = './htc_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch')) -data = dict(samples_per_gpu=1, workers_per_gpu=1) -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 6a4316dde57206fe369e72fa0d32a529fe1a1932..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes.py deleted file mode 100644 index b49da3581d9697e726e114b1564fc58a55ef1099..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18b-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet18', - backbone=dict(type='ResNet', depth=18), - decode_head=dict( - c1_in_channels=64, - c1_channels=12, - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_20k_voc12aug.py deleted file mode 100644 index c2dd6d1158bd31ecdd7874827fd37bffb5d26db6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x512_20k_voc12aug.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './ocrnet_hr18_512x512_20k_voc12aug.py' -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=[ - dict( - type='FCNHead', - in_channels=[48, 96, 192, 384], - channels=sum([48, 96, 192, 384]), - input_transform='resize_concat', - in_index=(0, 1, 2, 3), - kernel_size=1, - num_convs=1, - norm_cfg=norm_cfg, - concat_input=False, - dropout_ratio=-1, - num_classes=21, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - 
in_channels=[48, 96, 192, 384], - channels=512, - ocr_channels=256, - input_transform='resize_concat', - in_index=(0, 1, 2, 3), - norm_cfg=norm_cfg, - dropout_ratio=-1, - num_classes=21, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ]) diff --git a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/generators.py b/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/generators.py deleted file mode 100644 index 0be74c39d095332a9143ea35c7ae36fd83e07e9f..0000000000000000000000000000000000000000 --- a/spaces/ArchitSharma/Digital-Photo-Color-Restoration/src/deoldify/generators.py +++ /dev/null @@ -1,151 +0,0 @@ -from fastai.vision import * -from fastai.vision.learner import cnn_config -from .unet import DynamicUnetWide, DynamicUnetDeep -from .loss import FeatureLoss -from .dataset import * - -# Weights are implicitly read from ./models/ folder -def gen_inference_wide( - root_folder: Path, weights_name: str, nf_factor: int = 2, arch=models.resnet101) -> Learner: - data = get_dummy_databunch() - learn = gen_learner_wide( - data=data, gen_loss=F.l1_loss, nf_factor=nf_factor, arch=arch - ) - learn.path = root_folder - learn.load(weights_name) - learn.model.eval() - return learn - - -def gen_learner_wide( - data: ImageDataBunch, gen_loss, arch=models.resnet101, nf_factor: int = 2 -) -> Learner: - return unet_learner_wide( - data, - arch=arch, - wd=1e-3, - blur=True, - norm_type=NormType.Spectral, - self_attention=True, - y_range=(-3.0, 3.0), - loss_func=gen_loss, - nf_factor=nf_factor, - ) - - -# The code below is meant to be merged into fastaiv1 ideally -def unet_learner_wide( - data: DataBunch, - arch: Callable, - pretrained: bool = True, - blur_final: bool = True, - norm_type: Optional[NormType] = NormType, - split_on: Optional[SplitFuncOrIdxList] = None, - blur: bool = False, - self_attention: bool = False, - y_range: Optional[Tuple[float, float]] = None, - last_cross: bool = True, - bottle: bool = False, - nf_factor: int = 1, - **kwargs: Any -) -> Learner: - "Build Unet learner from `data` and `arch`." 
- meta = cnn_config(arch) - body = create_body(arch, pretrained) - model = to_device( - DynamicUnetWide( - body, - n_classes=data.c, - blur=blur, - blur_final=blur_final, - self_attention=self_attention, - y_range=y_range, - norm_type=norm_type, - last_cross=last_cross, - bottle=bottle, - nf_factor=nf_factor, - ), - data.device, - ) - learn = Learner(data, model, **kwargs) - learn.split(ifnone(split_on, meta['split'])) - if pretrained: - learn.freeze() - apply_init(model[2], nn.init.kaiming_normal_) - return learn - - -# ---------------------------------------------------------------------- - -# Weights are implicitly read from ./models/ folder -def gen_inference_deep( - root_folder: Path, weights_name: str, arch=models.resnet34, nf_factor: float = 1.5) -> Learner: - data = get_dummy_databunch() - learn = gen_learner_deep( - data=data, gen_loss=F.l1_loss, arch=arch, nf_factor=nf_factor - ) - learn.path = root_folder - learn.load(weights_name) - learn.model.eval() - return learn - - -def gen_learner_deep( - data: ImageDataBunch, gen_loss, arch=models.resnet34, nf_factor: float = 1.5 -) -> Learner: - return unet_learner_deep( - data, - arch, - wd=1e-3, - blur=True, - norm_type=NormType.Spectral, - self_attention=True, - y_range=(-3.0, 3.0), - loss_func=gen_loss, - nf_factor=nf_factor, - ) - - -# The code below is meant to be merged into fastaiv1 ideally -def unet_learner_deep( - data: DataBunch, - arch: Callable, - pretrained: bool = True, - blur_final: bool = True, - norm_type: Optional[NormType] = NormType, - split_on: Optional[SplitFuncOrIdxList] = None, - blur: bool = False, - self_attention: bool = False, - y_range: Optional[Tuple[float, float]] = None, - last_cross: bool = True, - bottle: bool = False, - nf_factor: float = 1.5, - **kwargs: Any -) -> Learner: - "Build Unet learner from `data` and `arch`." - meta = cnn_config(arch) - body = create_body(arch, pretrained) - model = to_device( - DynamicUnetDeep( - body, - n_classes=data.c, - blur=blur, - blur_final=blur_final, - self_attention=self_attention, - y_range=y_range, - norm_type=norm_type, - last_cross=last_cross, - bottle=bottle, - nf_factor=nf_factor, - ), - data.device, - ) - learn = Learner(data, model, **kwargs) - learn.split(ifnone(split_on, meta['split'])) - if pretrained: - learn.freeze() - apply_init(model[2], nn.init.kaiming_normal_) - return learn - - -# ----------------------------- diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py deleted file mode 100644 index f5ed5f6f6ec0eae90a9f48753622b2b5ee5d4a4f..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py +++ /dev/null @@ -1,111 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -from tempfile import NamedTemporaryFile -import mmap - - -class CallbackFileWrapper(object): - """ - Small wrapper around a fp object which will tee everything read into a - buffer, and when that file is closed it will execute a callback with the - contents of that buffer. - - All attributes are proxied to the underlying file object. - - This class uses members with a double underscore (__) leading prefix so as - not to accidentally shadow an attribute. - - The data is stored in a temporary file until it is all available. 
As long - as the temporary files directory is disk-based (sometimes it's a - memory-backed-``tmpfs`` on Linux), data will be unloaded to disk if memory - pressure is high. For small files the disk usually won't be used at all, - it'll all be in the filesystem memory cache, so there should be no - performance impact. - """ - - def __init__(self, fp, callback): - self.__buf = NamedTemporaryFile("rb+", delete=True) - self.__fp = fp - self.__callback = callback - - def __getattr__(self, name): - # The vaguaries of garbage collection means that self.__fp is - # not always set. By using __getattribute__ and the private - # name[0] allows looking up the attribute value and raising an - # AttributeError when it doesn't exist. This stop thigns from - # infinitely recursing calls to getattr in the case where - # self.__fp hasn't been set. - # - # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers - fp = self.__getattribute__("_CallbackFileWrapper__fp") - return getattr(fp, name) - - def __is_fp_closed(self): - try: - return self.__fp.fp is None - - except AttributeError: - pass - - try: - return self.__fp.closed - - except AttributeError: - pass - - # We just don't cache it then. - # TODO: Add some logging here... - return False - - def _close(self): - if self.__callback: - if self.__buf.tell() == 0: - # Empty file: - result = b"" - else: - # Return the data without actually loading it into memory, - # relying on Python's buffer API and mmap(). mmap() just gives - # a view directly into the filesystem's memory cache, so it - # doesn't result in duplicate memory use. - self.__buf.seek(0, 0) - result = memoryview( - mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ) - ) - self.__callback(result) - - # We assign this to None here, because otherwise we can get into - # really tricky problems where the CPython interpreter dead locks - # because the callback is holding a reference to something which - # has a __del__ method. Setting this to None breaks the cycle - # and allows the garbage collector to do it's thing normally. - self.__callback = None - - # Closing the temporary file releases memory and frees disk space. - # Important when caching big files. - self.__buf.close() - - def read(self, amt=None): - data = self.__fp.read(amt) - if data: - # We may be dealing with b'', a sign that things are over: - # it's passed e.g. after we've already closed self.__buf. - self.__buf.write(data) - if self.__is_fp_closed(): - self._close() - - return data - - def _safe_read(self, amt): - data = self.__fp._safe_read(amt) - if amt == 2 and data == b"\r\n": - # urllib executes this read to toss the CRLF at the end - # of the chunk. - return data - - self.__buf.write(data) - if self.__is_fp_closed(): - self._close() - - return data diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/resources.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/resources.py deleted file mode 100644 index fef52aa103ea369c96567b9af2a5a0ba14db5cb9..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/resources.py +++ /dev/null @@ -1,358 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013-2017 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. 
-# -from __future__ import unicode_literals - -import bisect -import io -import logging -import os -import pkgutil -import sys -import types -import zipimport - -from . import DistlibException -from .util import cached_property, get_cache_base, Cache - -logger = logging.getLogger(__name__) - - -cache = None # created when needed - - -class ResourceCache(Cache): - def __init__(self, base=None): - if base is None: - # Use native string to avoid issues on 2.x: see Python #20140. - base = os.path.join(get_cache_base(), str('resource-cache')) - super(ResourceCache, self).__init__(base) - - def is_stale(self, resource, path): - """ - Is the cache stale for the given resource? - - :param resource: The :class:`Resource` being cached. - :param path: The path of the resource in the cache. - :return: True if the cache is stale. - """ - # Cache invalidation is a hard problem :-) - return True - - def get(self, resource): - """ - Get a resource into the cache, - - :param resource: A :class:`Resource` instance. - :return: The pathname of the resource in the cache. - """ - prefix, path = resource.finder.get_cache_info(resource) - if prefix is None: - result = path - else: - result = os.path.join(self.base, self.prefix_to_dir(prefix), path) - dirname = os.path.dirname(result) - if not os.path.isdir(dirname): - os.makedirs(dirname) - if not os.path.exists(result): - stale = True - else: - stale = self.is_stale(resource, path) - if stale: - # write the bytes of the resource to the cache location - with open(result, 'wb') as f: - f.write(resource.bytes) - return result - - -class ResourceBase(object): - def __init__(self, finder, name): - self.finder = finder - self.name = name - - -class Resource(ResourceBase): - """ - A class representing an in-package resource, such as a data file. This is - not normally instantiated by user code, but rather by a - :class:`ResourceFinder` which manages the resource. - """ - is_container = False # Backwards compatibility - - def as_stream(self): - """ - Get the resource as a stream. - - This is not a property to make it obvious that it returns a new stream - each time. - """ - return self.finder.get_stream(self) - - @cached_property - def file_path(self): - global cache - if cache is None: - cache = ResourceCache() - return cache.get(self) - - @cached_property - def bytes(self): - return self.finder.get_bytes(self) - - @cached_property - def size(self): - return self.finder.get_size(self) - - -class ResourceContainer(ResourceBase): - is_container = True # Backwards compatibility - - @cached_property - def resources(self): - return self.finder.get_resources(self) - - -class ResourceFinder(object): - """ - Resource finder for file system resources. 
- """ - - if sys.platform.startswith('java'): - skipped_extensions = ('.pyc', '.pyo', '.class') - else: - skipped_extensions = ('.pyc', '.pyo') - - def __init__(self, module): - self.module = module - self.loader = getattr(module, '__loader__', None) - self.base = os.path.dirname(getattr(module, '__file__', '')) - - def _adjust_path(self, path): - return os.path.realpath(path) - - def _make_path(self, resource_name): - # Issue #50: need to preserve type of path on Python 2.x - # like os.path._get_sep - if isinstance(resource_name, bytes): # should only happen on 2.x - sep = b'/' - else: - sep = '/' - parts = resource_name.split(sep) - parts.insert(0, self.base) - result = os.path.join(*parts) - return self._adjust_path(result) - - def _find(self, path): - return os.path.exists(path) - - def get_cache_info(self, resource): - return None, resource.path - - def find(self, resource_name): - path = self._make_path(resource_name) - if not self._find(path): - result = None - else: - if self._is_directory(path): - result = ResourceContainer(self, resource_name) - else: - result = Resource(self, resource_name) - result.path = path - return result - - def get_stream(self, resource): - return open(resource.path, 'rb') - - def get_bytes(self, resource): - with open(resource.path, 'rb') as f: - return f.read() - - def get_size(self, resource): - return os.path.getsize(resource.path) - - def get_resources(self, resource): - def allowed(f): - return (f != '__pycache__' and not - f.endswith(self.skipped_extensions)) - return set([f for f in os.listdir(resource.path) if allowed(f)]) - - def is_container(self, resource): - return self._is_directory(resource.path) - - _is_directory = staticmethod(os.path.isdir) - - def iterator(self, resource_name): - resource = self.find(resource_name) - if resource is not None: - todo = [resource] - while todo: - resource = todo.pop(0) - yield resource - if resource.is_container: - rname = resource.name - for name in resource.resources: - if not rname: - new_name = name - else: - new_name = '/'.join([rname, name]) - child = self.find(new_name) - if child.is_container: - todo.append(child) - else: - yield child - - -class ZipResourceFinder(ResourceFinder): - """ - Resource finder for resources in .zip files. 
- """ - def __init__(self, module): - super(ZipResourceFinder, self).__init__(module) - archive = self.loader.archive - self.prefix_len = 1 + len(archive) - # PyPy doesn't have a _files attr on zipimporter, and you can't set one - if hasattr(self.loader, '_files'): - self._files = self.loader._files - else: - self._files = zipimport._zip_directory_cache[archive] - self.index = sorted(self._files) - - def _adjust_path(self, path): - return path - - def _find(self, path): - path = path[self.prefix_len:] - if path in self._files: - result = True - else: - if path and path[-1] != os.sep: - path = path + os.sep - i = bisect.bisect(self.index, path) - try: - result = self.index[i].startswith(path) - except IndexError: - result = False - if not result: - logger.debug('_find failed: %r %r', path, self.loader.prefix) - else: - logger.debug('_find worked: %r %r', path, self.loader.prefix) - return result - - def get_cache_info(self, resource): - prefix = self.loader.archive - path = resource.path[1 + len(prefix):] - return prefix, path - - def get_bytes(self, resource): - return self.loader.get_data(resource.path) - - def get_stream(self, resource): - return io.BytesIO(self.get_bytes(resource)) - - def get_size(self, resource): - path = resource.path[self.prefix_len:] - return self._files[path][3] - - def get_resources(self, resource): - path = resource.path[self.prefix_len:] - if path and path[-1] != os.sep: - path += os.sep - plen = len(path) - result = set() - i = bisect.bisect(self.index, path) - while i < len(self.index): - if not self.index[i].startswith(path): - break - s = self.index[i][plen:] - result.add(s.split(os.sep, 1)[0]) # only immediate children - i += 1 - return result - - def _is_directory(self, path): - path = path[self.prefix_len:] - if path and path[-1] != os.sep: - path += os.sep - i = bisect.bisect(self.index, path) - try: - result = self.index[i].startswith(path) - except IndexError: - result = False - return result - - -_finder_registry = { - type(None): ResourceFinder, - zipimport.zipimporter: ZipResourceFinder -} - -try: - # In Python 3.6, _frozen_importlib -> _frozen_importlib_external - try: - import _frozen_importlib_external as _fi - except ImportError: - import _frozen_importlib as _fi - _finder_registry[_fi.SourceFileLoader] = ResourceFinder - _finder_registry[_fi.FileFinder] = ResourceFinder - # See issue #146 - _finder_registry[_fi.SourcelessFileLoader] = ResourceFinder - del _fi -except (ImportError, AttributeError): - pass - - -def register_finder(loader, finder_maker): - _finder_registry[type(loader)] = finder_maker - - -_finder_cache = {} - - -def finder(package): - """ - Return a resource finder for a package. - :param package: The name of the package. - :return: A :class:`ResourceFinder` instance for the package. 
- """ - if package in _finder_cache: - result = _finder_cache[package] - else: - if package not in sys.modules: - __import__(package) - module = sys.modules[package] - path = getattr(module, '__path__', None) - if path is None: - raise DistlibException('You cannot get a finder for a module, ' - 'only for a package') - loader = getattr(module, '__loader__', None) - finder_maker = _finder_registry.get(type(loader)) - if finder_maker is None: - raise DistlibException('Unable to locate finder for %r' % package) - result = finder_maker(module) - _finder_cache[package] = result - return result - - -_dummy_module = types.ModuleType(str('__dummy__')) - - -def finder_for_path(path): - """ - Return a resource finder for a path, which should represent a container. - - :param path: The path. - :return: A :class:`ResourceFinder` instance for the path. - """ - result = None - # calls any path hooks, gets importer into cache - pkgutil.get_importer(path) - loader = sys.path_importer_cache.get(path) - finder = _finder_registry.get(type(loader)) - if finder: - module = _dummy_module - module.__file__ = os.path.join(path, '') - module.__loader__ = loader - result = finder(module) - return result diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/extension.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/extension.py deleted file mode 100644 index 6b8575de2949cd0519ee5f26b6eb00df417e2113..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/extension.py +++ /dev/null @@ -1,248 +0,0 @@ -"""distutils.extension - -Provides the Extension class, used to describe C/C++ extension -modules in setup scripts.""" - -import os -import warnings - -# This class is really only used by the "build_ext" command, so it might -# make sense to put it in distutils.command.build_ext. However, that -# module is already big enough, and I want to make this class a bit more -# complex to simplify some common cases ("foo" module in "foo.c") and do -# better error-checking ("foo.c" actually exists). -# -# Also, putting this in build_ext.py means every setup script would have to -# import that large-ish module (indirectly, through distutils.core) in -# order to do anything. - - -class Extension: - """Just a collection of attributes that describes an extension - module and everything needed to build it (hopefully in a portable - way, but there are hooks that let you be as unportable as you need). - - Instance attributes: - name : string - the full name of the extension, including any packages -- ie. - *not* a filename or pathname, but Python dotted name - sources : [string] - list of source filenames, relative to the distribution root - (where the setup script lives), in Unix form (slash-separated) - for portability. Source files may be C, C++, SWIG (.i), - platform-specific resource files, or whatever else is recognized - by the "build_ext" command as source for a Python extension. 
- include_dirs : [string] - list of directories to search for C/C++ header files (in Unix - form for portability) - define_macros : [(name : string, value : string|None)] - list of macros to define; each macro is defined using a 2-tuple, - where 'value' is either the string to define it to or None to - define it without a particular value (equivalent of "#define - FOO" in source or -DFOO on Unix C compiler command line) - undef_macros : [string] - list of macros to undefine explicitly - library_dirs : [string] - list of directories to search for C/C++ libraries at link time - libraries : [string] - list of library names (not filenames or paths) to link against - runtime_library_dirs : [string] - list of directories to search for C/C++ libraries at run time - (for shared extensions, this is when the extension is loaded) - extra_objects : [string] - list of extra files to link with (eg. object files not implied - by 'sources', static library that must be explicitly specified, - binary resource files, etc.) - extra_compile_args : [string] - any extra platform- and compiler-specific information to use - when compiling the source files in 'sources'. For platforms and - compilers where "command line" makes sense, this is typically a - list of command-line arguments, but for other platforms it could - be anything. - extra_link_args : [string] - any extra platform- and compiler-specific information to use - when linking object files together to create the extension (or - to create a new static Python interpreter). Similar - interpretation as for 'extra_compile_args'. - export_symbols : [string] - list of symbols to be exported from a shared extension. Not - used on all platforms, and not generally necessary for Python - extensions, which typically export exactly one symbol: "init" + - extension_name. - swig_opts : [string] - any extra options to pass to SWIG if a source file has the .i - extension. - depends : [string] - list of files that the extension depends on - language : string - extension language (i.e. "c", "c++", "objc"). Will be detected - from the source extensions if not provided. - optional : boolean - specifies that a build failure in the extension should not abort the - build process, but simply not install the failing extension. - """ - - # When adding arguments to this constructor, be sure to update - # setup_keywords in core.py. 
- def __init__( - self, - name, - sources, - include_dirs=None, - define_macros=None, - undef_macros=None, - library_dirs=None, - libraries=None, - runtime_library_dirs=None, - extra_objects=None, - extra_compile_args=None, - extra_link_args=None, - export_symbols=None, - swig_opts=None, - depends=None, - language=None, - optional=None, - **kw # To catch unknown keywords - ): - if not isinstance(name, str): - raise AssertionError("'name' must be a string") - if not (isinstance(sources, list) and all(isinstance(v, str) for v in sources)): - raise AssertionError("'sources' must be a list of strings") - - self.name = name - self.sources = sources - self.include_dirs = include_dirs or [] - self.define_macros = define_macros or [] - self.undef_macros = undef_macros or [] - self.library_dirs = library_dirs or [] - self.libraries = libraries or [] - self.runtime_library_dirs = runtime_library_dirs or [] - self.extra_objects = extra_objects or [] - self.extra_compile_args = extra_compile_args or [] - self.extra_link_args = extra_link_args or [] - self.export_symbols = export_symbols or [] - self.swig_opts = swig_opts or [] - self.depends = depends or [] - self.language = language - self.optional = optional - - # If there are unknown keyword options, warn about them - if len(kw) > 0: - options = [repr(option) for option in kw] - options = ', '.join(sorted(options)) - msg = "Unknown Extension options: %s" % options - warnings.warn(msg) - - def __repr__(self): - return '<{}.{}({!r}) at {:#x}>'.format( - self.__class__.__module__, - self.__class__.__qualname__, - self.name, - id(self), - ) - - -def read_setup_file(filename): # noqa: C901 - """Reads a Setup file and returns Extension instances.""" - from distutils.sysconfig import parse_makefile, expand_makefile_vars, _variable_rx - - from distutils.text_file import TextFile - from distutils.util import split_quoted - - # First pass over the file to gather "VAR = VALUE" assignments. - vars = parse_makefile(filename) - - # Second pass to gobble up the real content: lines of the form - # ... [ ...] [ ...] [ ...] - file = TextFile( - filename, - strip_comments=1, - skip_blanks=1, - join_lines=1, - lstrip_ws=1, - rstrip_ws=1, - ) - try: - extensions = [] - - while True: - line = file.readline() - if line is None: # eof - break - if _variable_rx.match(line): # VAR=VALUE, handled in first pass - continue - - if line[0] == line[-1] == "*": - file.warn("'%s' lines not handled yet" % line) - continue - - line = expand_makefile_vars(line, vars) - words = split_quoted(line) - - # NB. this parses a slightly different syntax than the old - # makesetup script: here, there must be exactly one extension per - # line, and it must be the first word of the line. I have no idea - # why the old syntax supported multiple extensions per line, as - # they all wind up being the same. - - module = words[0] - ext = Extension(module, []) - append_next_word = None - - for word in words[1:]: - if append_next_word is not None: - append_next_word.append(word) - append_next_word = None - continue - - suffix = os.path.splitext(word)[1] - switch = word[0:2] - value = word[2:] - - if suffix in (".c", ".cc", ".cpp", ".cxx", ".c++", ".m", ".mm"): - # hmm, should we do something about C vs. C++ sources? - # or leave it up to the CCompiler implementation to - # worry about? 
- ext.sources.append(word) - elif switch == "-I": - ext.include_dirs.append(value) - elif switch == "-D": - equals = value.find("=") - if equals == -1: # bare "-DFOO" -- no value - ext.define_macros.append((value, None)) - else: # "-DFOO=blah" - ext.define_macros.append((value[0:equals], value[equals + 2 :])) - elif switch == "-U": - ext.undef_macros.append(value) - elif switch == "-C": # only here 'cause makesetup has it! - ext.extra_compile_args.append(word) - elif switch == "-l": - ext.libraries.append(value) - elif switch == "-L": - ext.library_dirs.append(value) - elif switch == "-R": - ext.runtime_library_dirs.append(value) - elif word == "-rpath": - append_next_word = ext.runtime_library_dirs - elif word == "-Xlinker": - append_next_word = ext.extra_link_args - elif word == "-Xcompiler": - append_next_word = ext.extra_compile_args - elif switch == "-u": - ext.extra_link_args.append(word) - if not value: - append_next_word = ext.extra_link_args - elif suffix in (".a", ".so", ".sl", ".o", ".dylib"): - # NB. a really faithful emulation of makesetup would - # append a .o file to extra_objects only if it - # had a slash in it; otherwise, it would s/.o/.c/ - # and append it to sources. Hmmmm. - ext.extra_objects.append(word) - else: - file.warn("unrecognized argument '%s'" % word) - - extensions.append(ext) - finally: - file.close() - - return extensions diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docker/README.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docker/README.md deleted file mode 100644 index ea709f33b007abd2de044a0338659ec003330725..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docker/README.md +++ /dev/null @@ -1,45 +0,0 @@ - -## Use the container (with docker ≥ 19.03) - -``` -cd docker/ -# Build: -docker build --build-arg USER_ID=$UID -t detectron2:v0 . -# Launch (require GPUs): -docker run --gpus all -it \ - --shm-size=8gb --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \ - --name=detectron2 detectron2:v0 - -# Grant docker access to host X server to show images -xhost +local:`docker inspect --format='{{ .Config.Hostname }}' detectron2` -``` - -## Use the container (with docker-compose ≥ 1.28.0) - -Install docker-compose and nvidia-docker-toolkit, then run: -``` -cd docker && USER_ID=$UID docker-compose run detectron2 -``` - -## Use the deployment container (to test C++ examples) -After building the base detectron2 container as above, do: -``` -# Build: -docker build -t detectron2-deploy:v0 -f deploy.Dockerfile . -# Launch: -docker run --gpus all -it detectron2-deploy:v0 -``` - -#### Using a persistent cache directory - -You can prevent models from being re-downloaded on every run, -by storing them in a cache directory. - -To do this, add `--volume=$HOME/.torch/fvcore_cache:/tmp:rw` in the run command. - -## Install new dependencies -Add the following to `Dockerfile` to make persistent changes. -``` -RUN sudo apt-get update && sudo apt-get install -y vim -``` -Or run them in the container to make temporary changes. 
diff --git a/spaces/Axolotlily/DalleMini/app.py b/spaces/Axolotlily/DalleMini/app.py deleted file mode 100644 index 854e43653214324740a762e6c5c245b4705ff657..0000000000000000000000000000000000000000 --- a/spaces/Axolotlily/DalleMini/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/osanseviero/dalle-mini-fork").launch() \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/lib/infer_pack/onnx_inference.py b/spaces/Bart92/RVC_HF/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) 
- - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Benjov/Demo-IR/README.md b/spaces/Benjov/Demo-IR/README.md deleted file mode 100644 index 1e937824a48a1f1f1e7a1a294c23d345c38f4bbb..0000000000000000000000000000000000000000 --- a/spaces/Benjov/Demo-IR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Demo IR -emoji: 📚 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Anime Life Simulator.md b/spaces/Benson/text-generation/Examples/Anime Life Simulator.md deleted file mode 100644 index 81bb438c77f4c239ef736f7798110fb61d4c0b9a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Anime Life Simulator.md +++ /dev/null @@ -1,110 +0,0 @@ - -

What is an anime life simulator?

An anime life simulator is a genre of video game that lets you create and control a character in a virtual world inspired by anime. Anime is a term for Japanese animation, known for its distinctive style, colorful visuals, and diverse themes. Anime fans often enjoy immersing themselves in the stories and characters of their favorite shows or films. Anime life simulator games offer a way to experience a different or alternative life in an anime setting.

anime life simulator

DOWNLOAD 🆓 https://bltlly.com/2v6IyI

Anime life simulator games can vary in scope and focus, but they usually share some common features. They often have character creation tools that let you customize your appearance, personality, skills, and preferences. They also have simulation mechanics that let you interact with other characters, explore the environment, carry out tasks, and make decisions. Some games also include elements of other genres, such as role-playing, strategy, or action.

Anime life simulator games can appeal to different kinds of players for different reasons. Some enjoy the freedom and creativity of building their own character and story. Some like the challenge and variety of managing the different aspects of their virtual life. Some look for the fun and excitement of experiencing new situations and adventures. Others simply want to relax and escape from reality for a while.

How to play an anime life simulator?

There is no single answer to how to play an anime life simulator, since each game can have its own rules and goals. However, a few general steps can help you get started with any game in this genre.

1. Choose a game that suits your preferences and interests. There are many anime life simulator games available on various platforms, such as PC, mobile, or console. You can look up reviews, ratings, screenshots, videos, or demos online to find a game you like.
2. Start your simulation and explore the game world. You can usually move around using the keyboard, mouse, or touch-screen controls. You can also interact with objects or characters by clicking or tapping on them, and open menus or inventories to check your status, items, quests, and more.
3. Follow the game's story or create your own. Some games have a linear or branching plot that guides you through the main events and choices. Others have a more open-ended or sandbox style that lets you create your own story and goals. You usually advance the story by completing missions, tasks, or objectives, or by making decisions that affect the outcome.
4. Enjoy the simulation and have fun. You can usually do a wide range of activities in the game world, such as talking to other characters, making friends or enemies, dating or getting married, working or studying, shopping or crafting, and fighting or exploring. You can also experience different emotions, such as happiness, sadness, anger, or fear, and unlock new content, such as items, locations, and characters.

Types of anime life simulator games

Anime life simulator games can be grouped into different types or subgenres based on their theme, setting, or focus. Here are some of the most common and popular types:

Dating sim

A dating sim is a type of anime life simulator that focuses on romance and relationships. In this kind of game, you can usually choose from a variety of potential love interests, each with their own personality, appearance, and backstory. You can interact with them in different ways, such as talking, flirting, giving gifts, or going on dates. Your goal is usually to win their affection and reach a happy ending with them.

Some examples of dating sim games are:

• Dream Daddy: A Dad Dating Simulator: A game about a single dad who moves to a new town and meets other single dads who are also potential love interests.
• Hatoful Boyfriend: A game that parodies the genre by having the player date pigeons in a post-apocalyptic world.

School sim

A school sim is a type of anime life simulator that recreates the daily life of a student at an anime-style school. In this kind of game, you can usually create your own character and enroll in a school of your choice. You can attend classes, join clubs, make friends, study for exams, take part in events, and more. Your goal is usually to balance your academic and social life and achieve your dreams.

Some examples of school sim games are:

• Persona 5: A game that combines school sim elements with role-playing and action. The player controls a group of students who use their supernatural abilities to fight the forces of evil in an alternate dimension.
• Academia: School Simulator: A game that lets the player design and manage their own school, hiring staff, building facilities, setting policies, and dealing with problems.
• High School Story: A game that lets the player create their own character and build their own high school, customizing the school, recruiting students, throwing parties, going on dates, and more.

        Fantasy sim

        A fantasy sim is an anime life simulator that adds elements of magic, adventure, and combat. You can usually create your own character and enter a fantasy world full of wonders and dangers, learning spells, wielding weapons, fighting enemies, exploring dungeons, and collecting treasure. Your goal is usually to complete quests, save the world, or fulfill your destiny.

        Some examples of fantasy sims are:

        • Stardew Valley: A game that mixes farming-sim elements with fantasy. The player inherits a farm in a rural town and can grow crops, raise animals, mine, fish, befriend the villagers, and more.
        • Final Fantasy XIV: A massively multiplayer online role-playing game set in a fantasy world. The player can choose from several races, classes, and jobs, and team up with other players for quests, raids, dungeons, and more.
    

        Farming sim

        A farming sim is an anime life simulator built around running a farm and interacting with animals and villagers. You can usually create your own character and inherit or buy a farm, then plant crops, harvest produce, raise livestock, and sell goods. You can also socialize with the local community, make friends, date, marry, and have children. Your goal is usually to improve both your farm and your life.

        Some examples of farming sims are:

        • Harvest Moon: A series that pioneered the genre. The games feature different settings and characters, but all share the same core farming and life-simulation gameplay.
        • Story of Seasons: A series that is the spiritual successor to Harvest Moon. The games keep similar gameplay but add new features such as customization, multiplayer, and crossover characters.
        • Rune Factory: A spin-off series of Harvest Moon that combines farming-sim elements with fantasy-sim elements such as magic, combat, and dungeons.
    

        Benefits of playing an anime life simulator

        Playing an anime life simulator can offer several benefits for different players. Here are some of them:

        • Relaxation: An anime life simulator can help you relax and unwind. You can enjoy the colorful graphics and soothing music, and escape the stress and pressure of everyday life for a while.
        • Social skills: Playing can improve your social skills and confidence. You interact with a variety of characters and practice communicating, empathizing, and negotiating, and you can make friends or find love in the game world.
    

        Challenges of playing an anime life simulator

        Playing an anime life simulator can also bring some challenges or difficulties for certain players. Here are some of them:

        • Addiction: These games can be addictive and time-consuming. You may spend hours or days playing without realizing it, and end up neglecting real-life responsibilities or relationships.
        • Unrealistic expectations: Playing can create unrealistic expectations or fantasies. You may compare your real life with your virtual one and feel dissatisfied or unhappy, or romanticize and idealize the game's characters and situations.
        • Cultural differences: Playing can expose you to cultural differences or misunderstandings. You may run into terms, references, or behaviors that are unfamiliar or confusing, and you may unintentionally offend or disrespect characters or other players.
    

        Tips and tricks for playing an anime life simulator

        Playing an anime life simulator can be more enjoyable and rewarding if you follow a few tips and tricks. Here are some useful ones:

        • Save often: While playing, save your progress frequently and in different slots. This protects you from losing data or progress to crashes or bugs, and lets you return to earlier points or choices if you want to change something or try a different path.
        • Experiment: Try out different choices and outcomes. Don't be afraid to make mistakes or fail, and test different characters, activities, and routes to discover new content and possibilities.
    

        Examples of popular anime life simulator games

        There are many anime life simulator games available on various platforms and devices. Here are a few popular examples:

        Anime Play Life: Unlimited

        Anime Play Life: Unlimited is a game that lets you take on quests, find a job, buy houses, fish, have picnics, and more in an anime world. You can customize your character, clothes, pets, and vehicles, interact with other players online, and join clubs, parties, or events. The game is available on PC and mobile devices.

        XOXO Droplets

        XOXO Droplets is a comedic dating sim with multiple endings and characters. You play as a girl who joins a school for problem students and meets six boys who are all jerks in their own way. You can also explore the city, shop, work, study, and more. The game is available on PC and mobile devices.

        Long Live the Queen

        Long Live the Queen is a game that challenges you to rule a kingdom as a young princess. You have to manage your stats, skills, mood, outfits, and events while dealing with political intrigue, war, assassination attempts, and more. The game has many branching paths and endings depending on your choices. It is available on PC and mobile devices.

        Mon-cuties for All
    

        Conclusion

        An anime life simulator is a video game genre that lets you create and control a character in a virtual world inspired by anime. These games come in different types and offer distinct features, benefits, challenges, and tips, as covered above. Playing one can be a fun and rewarding experience for anime fans and gamers alike.

        If you are interested in trying an anime life simulator, check out some of the games mentioned in this article or search for others online. You can also share your thoughts and opinions on the genre in the comments section below. Thanks for reading, and have a great day!
    

        Frequently asked questions

        Here are some frequently asked questions and answers about anime life simulator games:

        1. What is the difference between an anime life simulator and an anime visual novel?

          An anime life simulator simulates a character's daily life in an anime world, while an anime visual novel tells a story through text and images in an anime style. Life simulators usually have more gameplay mechanics and interactivity than visual novels.

        2. What are some of the best anime life simulator games for beginners?

          Some of the best anime life simulator games for beginners are:

        3. How can I play an anime life simulator game on my phone?

        4. How can I make my own anime life simulator game?
    

          You can make your own anime life simulator by using a game engine or a software tool that lets you build games with little or no coding. Some popular tools are listed below, and a small code sketch of the stat-and-choice loop these games revolve around follows the list.

          • Ren'Py: A tool for creating visual novels and dating sims.
          • RPG Maker: A tool for creating role-playing games and fantasy sims.
          • Unity: An engine for creating any kind of game with 2D or 3D graphics.
    
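          To make the idea concrete, here is a minimal, hypothetical Python sketch of the "stats, choices, endings" loop that most life simulators are built around. It is not tied to Ren'Py, RPG Maker, or Unity, and the stat names, actions, and endings are invented purely for illustration.

    ```python
    # Toy sketch of a life-sim core loop: daily choices change stats,
    # and the final stats pick a branching ending. All names are made up.
    import random

    stats = {"study": 0, "social": 0, "energy": 10}

    # Each action changes a few stats, like spending one in-game day.
    ACTIONS = {
        "study": {"study": 2, "energy": -2},
        "hang out": {"social": 2, "energy": -1},
        "rest": {"energy": 3},
    }

    def take_action(name: str) -> None:
        """Apply one action's stat changes, never letting a stat drop below zero."""
        for stat, delta in ACTIONS[name].items():
            stats[stat] = max(0, stats[stat] + delta)

    def ending() -> str:
        """Choose a branching ending based on the final stats."""
        if stats["study"] >= 6 and stats["social"] >= 4:
            return "Best ending: you balanced school and friends."
        if stats["study"] >= 6:
            return "Good grades, but a lonely year."
        return "A relaxed year with little to show for it."

    # Seven in-game days; random.choice stands in for the player's menu picks.
    for day in range(1, 8):
        choice = random.choice(list(ACTIONS))
        take_action(choice)
        print(f"Day {day}: you chose to {choice} -> {stats}")

    print(ending())
    ```

          Dialogue, art, menus, and save slots in engines such as Ren'Py or RPG Maker are layered on top of a loop like this one, which mirrors the stat management and branching endings described above for games such as Long Live the Queen.
    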
        5. How can I learn more about anime life simulator games?

          You can learn more by reading online articles, blogs, magazines, and books about the genre, watching videos, streams, and podcasts about it, and joining online communities, forums, and groups where you can discuss it with other fans and players.
    

    
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/encoding.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/encoding.py deleted file mode 100644 index 008f06a79bf598b149bdccb73e572d13331a1631..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/encoding.py +++ /dev/null @@ -1,36 +0,0 @@ -import codecs -import locale -import re -import sys -from typing import List, Tuple - -BOMS: List[Tuple[bytes, str]] = [ - (codecs.BOM_UTF8, "utf-8"), - (codecs.BOM_UTF16, "utf-16"), - (codecs.BOM_UTF16_BE, "utf-16-be"), - (codecs.BOM_UTF16_LE, "utf-16-le"), - (codecs.BOM_UTF32, "utf-32"), - (codecs.BOM_UTF32_BE, "utf-32-be"), - (codecs.BOM_UTF32_LE, "utf-32-le"), -] - -ENCODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)") - - -def auto_decode(data: bytes) -> str: - """Check a bytes string for a BOM to correctly detect the encoding - - Fallback to locale.getpreferredencoding(False) like open() on Python3""" - for bom, encoding in BOMS: - if data.startswith(bom): - return data[len(bom) :].decode(encoding) - # Lets check the first two lines as in PEP263 - for line in data.split(b"\n")[:2]: - if line[0:1] == b"#" and ENCODING_RE.search(line): - result = ENCODING_RE.search(line) - assert result is not None - encoding = result.groups()[0].decode("ascii") - return data.decode(encoding) - return data.decode( - locale.getpreferredencoding(False) or sys.getdefaultencoding(), - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/euctwprober.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/euctwprober.py deleted file mode 100644 index a37ab18995822ad6b3372d56366becdccf9a4c26..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/euctwprober.py +++ /dev/null @@ -1,47 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import EUCTWDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import EUCTW_SM_MODEL - - -class EUCTWProber(MultiByteCharSetProber): - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(EUCTW_SM_MODEL) - self.distribution_analyzer = EUCTWDistributionAnalysis() - self.reset() - - @property - def charset_name(self) -> str: - return "EUC-TW" - - @property - def language(self) -> str: - return "Taiwan" diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/CODE_OF_CONDUCT.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/CODE_OF_CONDUCT.md deleted file mode 100644 index 4bd525a54e78d9b0133aeaae32a9336ed0ccb9f3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,76 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or -advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic -address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a -professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. 
Representation of -a project may be further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/CVPR/GFPGAN-example/PaperModel.md b/spaces/CVPR/GFPGAN-example/PaperModel.md deleted file mode 100644 index aec81d31de56df74c19ae840d44ad2b2a1f06d28..0000000000000000000000000000000000000000 --- a/spaces/CVPR/GFPGAN-example/PaperModel.md +++ /dev/null @@ -1,76 +0,0 @@ -# Installation - -We now provide a *clean* version of GFPGAN, which does not require customized CUDA extensions. See [here](README.md#installation) for this easier installation.
    -If you want want to use the original model in our paper, please follow the instructions below. - -1. Clone repo - - ```bash - git clone https://github.com/xinntao/GFPGAN.git - cd GFPGAN - ``` - -1. Install dependent packages - - As StyleGAN2 uses customized PyTorch C++ extensions, you need to **compile them during installation** or **load them just-in-time(JIT)**. - You can refer to [BasicSR-INSTALL.md](https://github.com/xinntao/BasicSR/blob/master/INSTALL.md) for more details. - - **Option 1: Load extensions just-in-time(JIT)** (For those just want to do simple inferences, may have less issues) - - ```bash - # Install basicsr - https://github.com/xinntao/BasicSR - # We use BasicSR for both training and inference - pip install basicsr - - # Install facexlib - https://github.com/xinntao/facexlib - # We use face detection and face restoration helper in the facexlib package - pip install facexlib - - pip install -r requirements.txt - python setup.py develop - - # remember to set BASICSR_JIT=True before your running commands - ``` - - **Option 2: Compile extensions during installation** (For those need to train/inference for many times) - - ```bash - # Install basicsr - https://github.com/xinntao/BasicSR - # We use BasicSR for both training and inference - # Set BASICSR_EXT=True to compile the cuda extensions in the BasicSR - It may take several minutes to compile, please be patient - # Add -vvv for detailed log prints - BASICSR_EXT=True pip install basicsr -vvv - - # Install facexlib - https://github.com/xinntao/facexlib - # We use face detection and face restoration helper in the facexlib package - pip install facexlib - - pip install -r requirements.txt - python setup.py develop - ``` - -## :zap: Quick Inference - -Download pre-trained models: [GFPGANv1.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth) - -```bash -wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models -``` - -- Option 1: Load extensions just-in-time(JIT) - - ```bash - BASICSR_JIT=True python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1 - - # for aligned images - BASICSR_JIT=True python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --save_root results --arch original --channel 1 --aligned - ``` - -- Option 2: Have successfully compiled extensions during installation - - ```bash - python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1 - - # for aligned images - python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --save_root results --arch original --channel 1 --aligned - ``` diff --git a/spaces/CVPR/WALT/mmcv_custom/runner/epoch_based_runner.py b/spaces/CVPR/WALT/mmcv_custom/runner/epoch_based_runner.py deleted file mode 100644 index 7cdf3fa05639f7fde652090be9dbf78b48790744..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmcv_custom/runner/epoch_based_runner.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil - -import torch -from torch.optim import Optimizer - -import mmcv -from mmcv.runner import RUNNERS, EpochBasedRunner -from .checkpoint import save_checkpoint - -try: - import apex -except: - print('apex is not installed') - - -@RUNNERS.register_module() -class EpochBasedRunnerAmp(EpochBasedRunner): - """Epoch-based Runner with AMP support. - - This runner train models epoch by epoch. - """ - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = dict(epoch=self.epoch + 1, iter=self.iter) - elif isinstance(meta, dict): - meta.update(epoch=self.epoch + 1, iter=self.iter) - else: - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - if map_location == 'default': - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint(checkpoint) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - if 'amp' in checkpoint: - apex.amp.load_state_dict(checkpoint['amp']) - self.logger.info('load amp state dict') - - self.logger.info('resumed epoch %d, iter %d', self.epoch, self.iter) diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/coder/base_bbox_coder.py b/spaces/CVPR/WALT/mmdet/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index cf0b34c7cc2fe561718b0c884990beb40a993643..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding 
box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode deltas between bboxes and ground truth boxes.""" - - @abstractmethod - def decode(self, bboxes, bboxes_pred): - """Decode the predicted bboxes according to prediction and base - boxes.""" diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/transforms.py b/spaces/CVPR/WALT/mmdet/core/bbox/transforms.py deleted file mode 100644 index df55b0a496516bf7373fe96cf746c561dd713c3b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/transforms.py +++ /dev/null @@ -1,240 +0,0 @@ -import numpy as np -import torch - - -def bbox_flip(bboxes, img_shape, direction='horizontal'): - """Flip bboxes horizontally or vertically. - - Args: - bboxes (Tensor): Shape (..., 4*k) - img_shape (tuple): Image shape. - direction (str): Flip direction, options are "horizontal", "vertical", - "diagonal". Default: "horizontal" - - Returns: - Tensor: Flipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - assert direction in ['horizontal', 'vertical', 'diagonal'] - flipped = bboxes.clone() - if direction == 'horizontal': - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - elif direction == 'vertical': - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - else: - flipped[..., 0::4] = img_shape[1] - bboxes[..., 2::4] - flipped[..., 1::4] = img_shape[0] - bboxes[..., 3::4] - flipped[..., 2::4] = img_shape[1] - bboxes[..., 0::4] - flipped[..., 3::4] = img_shape[0] - bboxes[..., 1::4] - return flipped - - -def bbox_mapping(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from the original image scale to testing scale.""" - new_bboxes = bboxes * bboxes.new_tensor(scale_factor) - if flip: - new_bboxes = bbox_flip(new_bboxes, img_shape, flip_direction) - return new_bboxes - - -def bbox_mapping_back(bboxes, - img_shape, - scale_factor, - flip, - flip_direction='horizontal'): - """Map bboxes from testing scale to original image scale.""" - new_bboxes = bbox_flip(bboxes, img_shape, - flip_direction) if flip else bboxes - new_bboxes = new_bboxes.view(-1, 4) / new_bboxes.new_tensor(scale_factor) - return new_bboxes.view(bboxes.shape) - - -def bbox2roi(bbox_list): - """Convert a list of bboxes to roi format. - - Args: - bbox_list (list[Tensor]): a list of bboxes corresponding to a batch - of images. - - Returns: - Tensor: shape (n, 5), [batch_ind, x1, y1, x2, y2] - """ - rois_list = [] - for img_id, bboxes in enumerate(bbox_list): - if bboxes.size(0) > 0: - img_inds = bboxes.new_full((bboxes.size(0), 1), img_id) - rois = torch.cat([img_inds, bboxes[:, :4]], dim=-1) - else: - rois = bboxes.new_zeros((0, 5)) - rois_list.append(rois) - rois = torch.cat(rois_list, 0) - return rois - - -def roi2bbox(rois): - """Convert rois to bounding box format. - - Args: - rois (torch.Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - list[torch.Tensor]: Converted boxes of corresponding rois. - """ - bbox_list = [] - img_ids = torch.unique(rois[:, 0].cpu(), sorted=True) - for img_id in img_ids: - inds = (rois[:, 0] == img_id.item()) - bbox = rois[inds, 1:] - bbox_list.append(bbox) - return bbox_list - - -def bbox2result(bboxes, labels, num_classes): - """Convert detection results to a list of numpy arrays. 
- - Args: - bboxes (torch.Tensor | np.ndarray): shape (n, 5) - labels (torch.Tensor | np.ndarray): shape (n, ) - num_classes (int): class number, including background class - - Returns: - list(ndarray): bbox results of each class - """ - if bboxes.shape[0] == 0: - return [np.zeros((0, 5), dtype=np.float32) for i in range(num_classes)] - else: - if isinstance(bboxes, torch.Tensor): - bboxes = bboxes.detach().cpu().numpy() - labels = labels.detach().cpu().numpy() - return [bboxes[labels == i, :] for i in range(num_classes)] - - -def distance2bbox(points, distance, max_shape=None): - """Decode distance prediction to bounding box. - - Args: - points (Tensor): Shape (B, N, 2) or (N, 2). - distance (Tensor): Distance from the given point to 4 - boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - Tensor: Boxes with shape (N, 4) or (B, N, 4) - """ - x1 = points[..., 0] - distance[..., 0] - y1 = points[..., 1] - distance[..., 1] - x2 = points[..., 0] + distance[..., 2] - y2 = points[..., 1] + distance[..., 3] - - bboxes = torch.stack([x1, y1, x2, y2], -1) - - if max_shape is not None: - if not isinstance(max_shape, torch.Tensor): - max_shape = x1.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(x1) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = x1.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes - - -def bbox2distance(points, bbox, max_dis=None, eps=0.1): - """Decode bounding box based on distances. - - Args: - points (Tensor): Shape (n, 2), [x, y]. - bbox (Tensor): Shape (n, 4), "xyxy" format - max_dis (float): Upper bound of the distance. - eps (float): a small value to ensure target < max_dis, instead <= - - Returns: - Tensor: Decoded distances. - """ - left = points[:, 0] - bbox[:, 0] - top = points[:, 1] - bbox[:, 1] - right = bbox[:, 2] - points[:, 0] - bottom = bbox[:, 3] - points[:, 1] - if max_dis is not None: - left = left.clamp(min=0, max=max_dis - eps) - top = top.clamp(min=0, max=max_dis - eps) - right = right.clamp(min=0, max=max_dis - eps) - bottom = bottom.clamp(min=0, max=max_dis - eps) - return torch.stack([left, top, right, bottom], -1) - - -def bbox_rescale(bboxes, scale_factor=1.0): - """Rescale bounding box w.r.t. scale_factor. - - Args: - bboxes (Tensor): Shape (n, 4) for bboxes or (n, 5) for rois - scale_factor (float): rescale factor - - Returns: - Tensor: Rescaled bboxes. 
- """ - if bboxes.size(1) == 5: - bboxes_ = bboxes[:, 1:] - inds_ = bboxes[:, 0] - else: - bboxes_ = bboxes - cx = (bboxes_[:, 0] + bboxes_[:, 2]) * 0.5 - cy = (bboxes_[:, 1] + bboxes_[:, 3]) * 0.5 - w = bboxes_[:, 2] - bboxes_[:, 0] - h = bboxes_[:, 3] - bboxes_[:, 1] - w = w * scale_factor - h = h * scale_factor - x1 = cx - 0.5 * w - x2 = cx + 0.5 * w - y1 = cy - 0.5 * h - y2 = cy + 0.5 * h - if bboxes.size(1) == 5: - rescaled_bboxes = torch.stack([inds_, x1, y1, x2, y2], dim=-1) - else: - rescaled_bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - return rescaled_bboxes - - -def bbox_cxcywh_to_xyxy(bbox): - """Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - cx, cy, w, h = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(cx - 0.5 * w), (cy - 0.5 * h), (cx + 0.5 * w), (cy + 0.5 * h)] - return torch.cat(bbox_new, dim=-1) - - -def bbox_xyxy_to_cxcywh(bbox): - """Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h). - - Args: - bbox (Tensor): Shape (n, 4) for bboxes. - - Returns: - Tensor: Converted bboxes. - """ - x1, y1, x2, y2 = bbox.split((1, 1, 1, 1), dim=-1) - bbox_new = [(x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1), (y2 - y1)] - return torch.cat(bbox_new, dim=-1) diff --git a/spaces/Catmeow/Face2Painting_From_Photo/paintingface.py b/spaces/Catmeow/Face2Painting_From_Photo/paintingface.py deleted file mode 100644 index 3d51a85c793586d521a0db2dcbdd60f65a9b56bb..0000000000000000000000000000000000000000 --- a/spaces/Catmeow/Face2Painting_From_Photo/paintingface.py +++ /dev/null @@ -1,110 +0,0 @@ -import os -os.system("pip install dlib") -import sys -import face_detection -from PIL import Image, ImageOps, ImageFile -import numpy as np -import cv2 as cv -import torch -import gradio as gr - -torch.set_grad_enabled(False) - -device = "cuda" if torch.cuda.is_available() else "cpu" -model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", device=device).eval() -model2 = torch.hub.load("AK391/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1", device=device).eval() -face2paint = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint", device=device) -image_format = "png" #@param ["jpeg", "png"] - -def unsharp_mask(image, kernel_size=(5, 5), sigma=1.0, amount=2.0, threshold=0): - """Return a sharpened version of the image, using an unsharp mask.""" - blurred = cv.GaussianBlur(image, kernel_size, sigma) - sharpened = float(amount + 1) * image - float(amount) * blurred - sharpened = np.maximum(sharpened, np.zeros(sharpened.shape)) - sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape)) - sharpened = sharpened.round().astype(np.uint8) - if threshold > 0: - low_contrast_mask = np.absolute(image - blurred) < threshold - np.copyto(sharpened, image, where=low_contrast_mask) - return sharpened - -def normPRED(d): - ma = np.max(d) - mi = np.min(d) - - dn = (d-mi)/(ma-mi) - - return dn - -def array_to_np(array_in): - array_in = normPRED(array_in) - array_in = np.squeeze(255.0*(array_in)) - array_in = np.transpose(array_in, (1, 2, 0)) - return array_in - -def array_to_image(array_in): - array_in = normPRED(array_in) - array_in = np.squeeze(255.0*(array_in)) - array_in = np.transpose(array_in, (1, 2, 0)) - im = Image.fromarray(array_in.astype(np.uint8)) - return im - - -def image_as_array(image_in): - image_in = np.array(image_in, np.float32) - tmpImg = np.zeros((image_in.shape[0],image_in.shape[1],3)) - image_in = 
image_in/np.max(image_in) - if image_in.shape[2]==1: - tmpImg[:,:,0] = (image_in[:,:,0]-0.485)/0.229 - tmpImg[:,:,1] = (image_in[:,:,0]-0.485)/0.229 - tmpImg[:,:,2] = (image_in[:,:,0]-0.485)/0.229 - else: - tmpImg[:,:,0] = (image_in[:,:,0]-0.485)/0.229 - tmpImg[:,:,1] = (image_in[:,:,1]-0.456)/0.224 - tmpImg[:,:,2] = (image_in[:,:,2]-0.406)/0.225 - - tmpImg = tmpImg.transpose((2, 0, 1)) - image_out = np.expand_dims(tmpImg, 0) - return image_out - -# detect a face -def find_aligned_face(image_in, size=400): - aligned_image, n_faces, quad = face_detection.align(image_in, face_index=0, output_size=size) - return aligned_image, n_faces, quad - -# clip the face, return array -def align_first_face(image_in, size=400): - aligned_image, n_faces, quad = find_aligned_face(image_in,size=size) - if n_faces == 0: - try: - image_in = ImageOps.exif_transpose(image_in) - except: - print("exif problem, not rotating") - image_in = image_in.resize((size, size)) - im_array = image_as_array(image_in) - else: - im_array = image_as_array(aligned_image) - - return im_array - -def img_concat_h(im1, im2): - dst = Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - -def paintface(img: Image.Image,size: int) -> Image.Image: - aligned_img = align_first_face(img,size) - if aligned_img is None: - output=None - else: - im_in = array_to_image(aligned_img).convert("RGB") - im_out1 = face2paint(model, im_in, side_by_side=False) - im_out2 = face2paint(model2, im_in, side_by_side=False) - - output = img_concat_h(im_out1, im_out2) - return output - -def generate(img): - out = paintface(img, 400) - return out \ No newline at end of file diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/processing/text.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/processing/text.py deleted file mode 100644 index 52add81401775c1b111512d8149f86a175fd9acb..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/processing/text.py +++ /dev/null @@ -1,132 +0,0 @@ -"""Text processing functions""" -from typing import Dict, Generator, Optional - -from selenium.webdriver.remote.webdriver import WebDriver - -from autogpt.config import Config -from autogpt.llm_utils import create_chat_completion -from autogpt.memory import get_memory - -CFG = Config() -MEMORY = get_memory(CFG) - - -def split_text(text: str, max_length: int = 8192) -> Generator[str, None, None]: - """Split text into chunks of a maximum length - - Args: - text (str): The text to split - max_length (int, optional): The maximum length of each chunk. Defaults to 8192. 
- - Yields: - str: The next chunk of text - - Raises: - ValueError: If the text is longer than the maximum length - """ - paragraphs = text.split("\n") - current_length = 0 - current_chunk = [] - - for paragraph in paragraphs: - if current_length + len(paragraph) + 1 <= max_length: - current_chunk.append(paragraph) - current_length += len(paragraph) + 1 - else: - yield "\n".join(current_chunk) - current_chunk = [paragraph] - current_length = len(paragraph) + 1 - - if current_chunk: - yield "\n".join(current_chunk) - - -def summarize_text( - url: str, text: str, question: str, driver: Optional[WebDriver] = None -) -> str: - """Summarize text using the OpenAI API - - Args: - url (str): The url of the text - text (str): The text to summarize - question (str): The question to ask the model - driver (WebDriver): The webdriver to use to scroll the page - - Returns: - str: The summary of the text - """ - if not text: - return "Error: No text to summarize" - - text_length = len(text) - print(f"Text length: {text_length} characters") - - summaries = [] - chunks = list(split_text(text)) - scroll_ratio = 1 / len(chunks) - - for i, chunk in enumerate(chunks): - if driver: - scroll_to_percentage(driver, scroll_ratio * i) - print(f"Adding chunk {i + 1} / {len(chunks)} to memory") - - memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}" - - MEMORY.add(memory_to_add) - - print(f"Summarizing chunk {i + 1} / {len(chunks)}") - messages = [create_message(chunk, question)] - - summary = create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - summaries.append(summary) - print(f"Added chunk {i + 1} summary to memory") - - memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}" - - MEMORY.add(memory_to_add) - - print(f"Summarized {len(chunks)} chunks.") - - combined_summary = "\n".join(summaries) - messages = [create_message(combined_summary, question)] - - return create_chat_completion( - model=CFG.fast_llm_model, - messages=messages, - ) - - -def scroll_to_percentage(driver: WebDriver, ratio: float) -> None: - """Scroll to a percentage of the page - - Args: - driver (WebDriver): The webdriver to use - ratio (float): The percentage to scroll to - - Raises: - ValueError: If the ratio is not between 0 and 1 - """ - if ratio < 0 or ratio > 1: - raise ValueError("Percentage should be between 0 and 1") - driver.execute_script(f"window.scrollTo(0, document.body.scrollHeight * {ratio});") - - -def create_message(chunk: str, question: str) -> Dict[str, str]: - """Create a message for the chat completion - - Args: - chunk (str): The chunk of text to summarize - question (str): The question to answer - - Returns: - Dict[str, str]: The message to send to the chat completion - """ - return { - "role": "user", - "content": f'"""{chunk}""" Using the above text, answer the following' - f' question: "{question}" -- if the question cannot be answered using the text,' - " summarize the text.", - } diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/bsrgan.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai 
Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), 
scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 
'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. 
-# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. 
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] 
# nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] 
# nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - 
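# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the deleted file): the degradation_*
# functions above each turn one clean HR image into an (LQ, HQ) training pair
# by running a shuffled chain of blur / resize / noise / JPEG steps. Assuming
# this module is importable and an image file exists, a minimal driver is:
#
#     hr = util.uint2single(util.imread_uint('example.png', 3))  # HxWx3 in [0, 1]
#     lq, hq = degradation_bsrgan(hr, sf=4, lq_patchsize=72)
#     assert hq.shape[0] == lq.shape[0] * 4 and hq.shape[1] == lq.shape[1] * 4
#
# 'example.png' is a placeholder; `util` and `degradation_bsrgan` are the
# helpers defined in this file and its imports.
# ---------------------------------------------------------------------------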
- return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/image_text_pair_builder.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/image_text_pair_builder.py deleted file mode 100644 index 8f93bf8f0dd51318c01940f07dc10e9dda2dd275..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/builders/image_text_pair_builder.py +++ /dev/null @@ -1,106 +0,0 @@ -import os -import logging -import warnings - -from video_llama.common.registry import registry -from video_llama.datasets.builders.base_dataset_builder import BaseDatasetBuilder -from video_llama.datasets.datasets.laion_dataset import LaionDataset -from video_llama.datasets.datasets.cc_sbu_dataset import CCSBUDataset, CCSBUAlignDataset - - -@registry.register_builder("cc_sbu") -class CCSBUBuilder(BaseDatasetBuilder): - train_dataset_cls = CCSBUDataset - - DATASET_CONFIG_DICT = {"default": "configs/datasets/cc_sbu/defaults.yaml"} - - def _download_ann(self): - pass - - def _download_vis(self): - pass - - def build(self): - self.build_processors() - - build_info = self.config.build_info - - datasets = dict() - split = "train" - - # create datasets - # [NOTE] return inner_datasets (wds.DataPipeline) - dataset_cls = self.train_dataset_cls - datasets[split] = dataset_cls( - vis_processor=self.vis_processors[split], - text_processor=self.text_processors[split], - location=build_info.storage, - ).inner_dataset - - return datasets - - -@registry.register_builder("laion") -class LaionBuilder(BaseDatasetBuilder): - train_dataset_cls = LaionDataset - - DATASET_CONFIG_DICT = {"default": "configs/datasets/laion/defaults.yaml"} - - def _download_ann(self): - pass - - def _download_vis(self): - pass - - def build(self): - self.build_processors() - - build_info = self.config.build_info - - datasets = dict() - split = "train" - - # create datasets - # [NOTE] return inner_datasets (wds.DataPipeline) - dataset_cls = self.train_dataset_cls - datasets[split] = dataset_cls( - vis_processor=self.vis_processors[split], - text_processor=self.text_processors[split], - location=build_info.storage, - ).inner_dataset - - return datasets - - -@registry.register_builder("cc_sbu_align") -class CCSBUAlignBuilder(BaseDatasetBuilder): - train_dataset_cls = CCSBUAlignDataset - - DATASET_CONFIG_DICT = { - "default": "configs/datasets/cc_sbu/align.yaml", - } - - def build_datasets(self): - # at this point, all the annotations and image/videos should be all downloaded to the specified locations. 
- logging.info("Building datasets...") - self.build_processors() - - build_info = self.config.build_info - storage_path = build_info.storage - - datasets = dict() - - if not os.path.exists(storage_path): - warnings.warn("storage path {} does not exist.".format(storage_path)) - - # create datasets - dataset_cls = self.train_dataset_cls - datasets['train'] = dataset_cls( - vis_processor=self.vis_processors["train"], - text_processor=self.text_processors["train"], - ann_paths=[os.path.join(storage_path, 'filter_cap.json')], - vis_root=os.path.join(storage_path, 'image'), - ) - - return datasets - diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/__init__.py deleted file mode 100644 index 301fead45c765c60e2e27f07eb174a2675d6f554..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/__init__.py +++ /dev/null @@ -1,64 +0,0 @@ -from importlib.metadata import entry_points - -from . import _version, caching -from .callbacks import Callback -from .compression import available_compressions -from .core import get_fs_token_paths, open, open_files, open_local -from .exceptions import FSTimeoutError -from .mapping import FSMap, get_mapper -from .registry import ( - available_protocols, - filesystem, - get_filesystem_class, - register_implementation, - registry, -) -from .spec import AbstractFileSystem - -__version__ = _version.get_versions()["version"] - -__all__ = [ - "AbstractFileSystem", - "FSTimeoutError", - "FSMap", - "filesystem", - "register_implementation", - "get_filesystem_class", - "get_fs_token_paths", - "get_mapper", - "open", - "open_files", - "open_local", - "registry", - "caching", - "Callback", - "available_protocols", - "available_compressions", -] - - -def process_entries(): - if entry_points is not None: - try: - eps = entry_points() - except TypeError: - pass # importlib-metadata < 0.8 - else: - if hasattr(eps, "select"): # Python 3.10+ / importlib_metadata >= 3.9.0 - specs = eps.select(group="fsspec.specs") - else: - specs = eps.get("fsspec.specs", []) - for spec in specs: - err_msg = f"Unable to load filesystem from {spec}" - register_implementation( - spec.name, - spec.value.replace(":", "."), - errtxt=err_msg, - # We take our implementations as the ones to overload with if - # for some reason we encounter some, may be the same, already - # registered - clobber=True, - ) - - -process_entries() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css deleted file mode 100644 index 858fdcc04577128b4960af9c51ca8c41e2fd69e4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-690664d1.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-1ck5uk8{display:flex;flex-direction:column;justify-content:center;min-height:var(--size-60);color:var(--block-label-text-color);line-height:var(--line-md)}.or.svelte-1ck5uk8{color:var(--body-text-color-subdued)}@media (min-width: 768px){.wrap.svelte-1ck5uk8{font-size:var(--text-lg)}} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-0a171ecc.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-0a171ecc.js deleted file mode 100644 index 
8eb943b1af0daba56054b3d31eca41213bec6f29..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-0a171ecc.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as O,e as P,s as Q,N as T,k as N,O as R,K as g,U,p as C,o as B,M as z,ap as A,Q as j,aw as G,z as q,v as E,A as D,x as S,a1 as X,B as Y,am as Z,P as y,R as x,a7 as p,E as $,ae as ee,h as F,j as K,q as ne,r as ie,t as M,F as k}from"./index-1d65707a.js";/* empty css */import{B as le}from"./Button-f155035a.js";import{B as ae}from"./BlockTitle-dee077e8.js";import"./Info-7c6961ef.js";function ue(n){let e;return{c(){e=y(n[4])},m(i,l){C(i,e,l)},p(i,l){l&16&&x(e,i[4])},d(i){i&&D(e)}}}function te(n){let e,i,l,t,s,b,d;return i=new ae({props:{show_label:n[6],info:n[5],$$slots:{default:[ue]},$$scope:{ctx:n}}}),{c(){e=T("label"),N(i.$$.fragment),l=R(),t=T("input"),g(t,"type","number"),g(t,"min",n[1]),g(t,"max",n[2]),t.disabled=n[3],g(t,"class","svelte-gigvtq"),g(e,"class","block svelte-gigvtq"),U(e,"container",n[7])},m(m,_){C(m,e,_),B(i,e,null),z(e,l),z(e,t),A(t,n[0]),s=!0,b||(d=[j(t,"input",n[11]),j(t,"keypress",n[8]),j(t,"blur",n[9])],b=!0)},p(m,[_]){const r={};_&64&&(r.show_label=m[6]),_&32&&(r.info=m[5]),_&16400&&(r.$$scope={dirty:_,ctx:m}),i.$set(r),(!s||_&2)&&g(t,"min",m[1]),(!s||_&4)&&g(t,"max",m[2]),(!s||_&8)&&(t.disabled=m[3]),_&1&&G(t.value)!==m[0]&&A(t,m[0]),(!s||_&128)&&U(e,"container",m[7])},i(m){s||(q(i.$$.fragment,m),s=!0)},o(m){E(i.$$.fragment,m),s=!1},d(m){m&&D(e),S(i),b=!1,X(d)}}}function se(n,e,i){let{value:l=0}=e,{minimum:t=void 0}=e,{maximum:s=void 0}=e,{value_is_output:b=!1}=e,{disabled:d=!1}=e,{label:m}=e,{info:_=void 0}=e,{show_label:r=!0}=e,{container:h=!0}=e;const u=Y();function o(){!isNaN(l)&&l!==null&&(u("change",l),b||u("input"))}Z(()=>{i(10,b=!1)});async function w(f){await p(),f.key==="Enter"&&(f.preventDefault(),u("submit"))}function c(f){u("blur")}function v(){l=G(this.value),i(0,l)}return n.$$set=f=>{"value"in f&&i(0,l=f.value),"minimum"in f&&i(1,t=f.minimum),"maximum"in f&&i(2,s=f.maximum),"value_is_output"in f&&i(10,b=f.value_is_output),"disabled"in f&&i(3,d=f.disabled),"label"in f&&i(4,m=f.label),"info"in f&&i(5,_=f.info),"show_label"in f&&i(6,r=f.show_label),"container"in f&&i(7,h=f.container)},n.$$.update=()=>{n.$$.dirty&1&&o()},[l,t,s,d,m,_,r,h,w,c,b,v]}class me extends O{constructor(e){super(),P(this,e,se,te,Q,{value:0,minimum:1,maximum:2,value_is_output:10,disabled:3,label:4,info:5,show_label:6,container:7})}}function fe(n){let e,i,l,t,s,b;const d=[n[13]];let m={};for(let u=0;uK(l,"value",_)),F.push(()=>K(l,"value_is_output",r)),l.$on("change",n[17]),l.$on("input",n[18]),l.$on("submit",n[19]),l.$on("blur",n[20]),{c(){N(e.$$.fragment),i=R(),N(l.$$.fragment)},m(u,o){B(e,u,o),C(u,i,o),B(l,u,o),b=!0},p(u,o){const w=o&8192?ne(d,[ie(u[13])]):{};e.$set(w);const c={};o&4&&(c.label=u[2]),o&8&&(c.info=u[3]),o&1024&&(c.show_label=u[10]),o&2048&&(c.minimum=u[11]),o&4096&&(c.maximum=u[12]),o&128&&(c.container=u[7]),o&16384&&(c.disabled=u[14]==="static"),!t&&o&1&&(t=!0,c.value=u[0],M(()=>t=!1)),!s&&o&2&&(s=!0,c.value_is_output=u[1],M(()=>s=!1)),l.$set(c)},i(u){b||(q(e.$$.fragment,u),q(l.$$.fragment,u),b=!0)},o(u){E(e.$$.fragment,u),E(l.$$.fragment,u),b=!1},d(u){u&&D(i),S(e,u),S(l,u)}}}function _e(n){let e,i;return e=new le({props:{visible:n[6],elem_id:n[4],elem_classes:n[5],padding:n[7],allow_overflow:!1,scale:n[8],min_width:n[9],$$slots:{default:[fe]},$$scope:{ctx:n}}}),{c(){N(e.$$.fragment)},m(l,t){B(e,l,t),i=!0},p(l,[t]){const 
s={};t&64&&(s.visible=l[6]),t&16&&(s.elem_id=l[4]),t&32&&(s.elem_classes=l[5]),t&128&&(s.padding=l[7]),t&256&&(s.scale=l[8]),t&512&&(s.min_width=l[9]),t&2129039&&(s.$$scope={dirty:t,ctx:l}),e.$set(s)},i(l){i||(q(e.$$.fragment,l),i=!0)},o(l){E(e.$$.fragment,l),i=!1},d(l){S(e,l)}}}function oe(n,e,i){let{label:l="Number"}=e,{info:t=void 0}=e,{elem_id:s=""}=e,{elem_classes:b=[]}=e,{visible:d=!0}=e,{container:m=!0}=e,{scale:_=null}=e,{min_width:r=void 0}=e,{value:h=0}=e,{show_label:u}=e,{minimum:o=void 0}=e,{maximum:w=void 0}=e,{loading_status:c}=e,{mode:v}=e,{value_is_output:f=!1}=e;function H(a){h=a,i(0,h)}function I(a){f=a,i(1,f)}function J(a){k.call(this,n,a)}function L(a){k.call(this,n,a)}function V(a){k.call(this,n,a)}function W(a){k.call(this,n,a)}return n.$$set=a=>{"label"in a&&i(2,l=a.label),"info"in a&&i(3,t=a.info),"elem_id"in a&&i(4,s=a.elem_id),"elem_classes"in a&&i(5,b=a.elem_classes),"visible"in a&&i(6,d=a.visible),"container"in a&&i(7,m=a.container),"scale"in a&&i(8,_=a.scale),"min_width"in a&&i(9,r=a.min_width),"value"in a&&i(0,h=a.value),"show_label"in a&&i(10,u=a.show_label),"minimum"in a&&i(11,o=a.minimum),"maximum"in a&&i(12,w=a.maximum),"loading_status"in a&&i(13,c=a.loading_status),"mode"in a&&i(14,v=a.mode),"value_is_output"in a&&i(1,f=a.value_is_output)},[h,f,l,t,s,b,d,m,_,r,u,o,w,c,v,H,I,J,L,V,W]}class be extends O{constructor(e){super(),P(this,e,oe,_e,Q,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,container:7,scale:8,min_width:9,value:0,show_label:10,minimum:11,maximum:12,loading_status:13,mode:14,value_is_output:1})}}const we=be,ve=["static","dynamic"],ke=n=>({type:{payload:"number"},description:{payload:"numeric value"},example_data:n.value??1});export{we as Component,ke as document,ve as modes}; -//# sourceMappingURL=index-0a171ecc.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Download-fdaaf5d4.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Download-fdaaf5d4.js deleted file mode 100644 index 740b134cf8bb0473cd25d964c80dc0861bd60f07..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Download-fdaaf5d4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as i,e as p,s as v,J as o,K as e,p as h,M as c,n,A as m}from"./index-3370be2a.js";function d(l){let t,s;return{c(){t=o("svg"),s=o("path"),e(s,"fill","currentColor"),e(s,"d","M26 24v4H6v-4H4v4a2 2 0 0 0 2 2h20a2 2 0 0 0 2-2v-4zm0-10l-1.41-1.41L17 20.17V2h-2v18.17l-7.59-7.58L6 14l10 10l10-10z"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 32 32")},m(a,r){h(a,t,r),c(t,s)},p:n,i:n,o:n,d(a){a&&m(t)}}}class u extends i{constructor(t){super(),p(this,t,null,d,v,{})}}export{u as D}; -//# sourceMappingURL=Download-fdaaf5d4.js.map diff --git a/spaces/DaleChen/AutoGPT/run_continuous.bat b/spaces/DaleChen/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/Datasculptor/StyleGAN-NADA/op/conv2d_gradfix.py b/spaces/Datasculptor/StyleGAN-NADA/op/conv2d_gradfix.py deleted file mode 100644 index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/op/conv2d_gradfix.py +++ 
/dev/null @@ -1,227 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): - if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]): - return True - - warnings.warn( - f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." 
- ) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/label.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/label.tsx deleted file mode 100644 index 534182176bf87f9308355514adc884d2b69750a5..0000000000000000000000000000000000000000 --- 
a/spaces/Detomo/ai-comic-generation/src/components/ui/label.tsx +++ /dev/null @@ -1,26 +0,0 @@ -"use client" - -import * as React from "react" -import * as LabelPrimitive from "@radix-ui/react-label" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const labelVariants = cva( - "text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70" -) - -const Label = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & - VariantProps ->(({ className, ...props }, ref) => ( - -)) -Label.displayName = LabelPrimitive.Root.displayName - -export { Label } diff --git a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/style.css b/spaces/EcoCy/LoRA-DreamBooth-Training-UI/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/main.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/main.py deleted file mode 100644 index 7b4f94c529618b7863fa213e339dbe49f839de79..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/main.py +++ /dev/null @@ -1,582 +0,0 @@ -import argparse, os, sys, datetime, glob, importlib -from omegaconf import OmegaConf -import numpy as np -from PIL import Image -import torch -import torchvision -from torch.utils.data import random_split, DataLoader, Dataset -import pytorch_lightning as pl -from pytorch_lightning import seed_everything -from pytorch_lightning.trainer import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor -from pytorch_lightning.utilities.distributed import rank_zero_only - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def get_parser(**parser_kwargs): - def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - parser = argparse.ArgumentParser(**parser_kwargs) - parser.add_argument( - "-n", - "--name", - type=str, - const=True, - default="", - nargs="?", - help="postfix for logdir", - ) - parser.add_argument( - "-r", - "--resume", - type=str, - const=True, - default="", - nargs="?", - help="resume from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base configs. Loaded from left-to-right. 
" - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-t", - "--train", - type=str2bool, - const=True, - default=False, - nargs="?", - help="train", - ) - parser.add_argument( - "--no-test", - type=str2bool, - const=True, - default=False, - nargs="?", - help="disable test", - ) - parser.add_argument("-p", "--project", help="name of new or path to existing project") - parser.add_argument( - "-d", - "--debug", - type=str2bool, - nargs="?", - const=True, - default=False, - help="enable post-mortem debugging", - ) - parser.add_argument( - "-s", - "--seed", - type=int, - default=23, - help="seed for seed_everything", - ) - parser.add_argument( - "-f", - "--postfix", - type=str, - default="", - help="post-postfix for default name", - ) - - return parser - - -def nondefault_trainer_args(opt): - parser = argparse.ArgumentParser() - parser = Trainer.add_argparse_args(parser) - args = parser.parse_args([]) - return sorted(k for k in vars(args) if getattr(opt, k) != getattr(args, k)) - - -def instantiate_from_config(config): - if not "target" in config: - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -class WrappedDataset(Dataset): - """Wraps an arbitrary object with __len__ and __getitem__ into a pytorch dataset""" - def __init__(self, dataset): - self.data = dataset - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -class DataModuleFromConfig(pl.LightningDataModule): - def __init__(self, batch_size, train=None, validation=None, test=None, - wrap=False, num_workers=None): - super().__init__() - self.batch_size = batch_size - self.dataset_configs = dict() - self.num_workers = num_workers if num_workers is not None else batch_size*2 - if train is not None: - self.dataset_configs["train"] = train - self.train_dataloader = self._train_dataloader - if validation is not None: - self.dataset_configs["validation"] = validation - self.val_dataloader = self._val_dataloader - if test is not None: - self.dataset_configs["test"] = test - self.test_dataloader = self._test_dataloader - self.wrap = wrap - - def prepare_data(self): - for data_cfg in self.dataset_configs.values(): - instantiate_from_config(data_cfg) - - def setup(self, stage=None): - self.datasets = dict( - (k, instantiate_from_config(self.dataset_configs[k])) - for k in self.dataset_configs) - if self.wrap: - for k in self.datasets: - self.datasets[k] = WrappedDataset(self.datasets[k]) - - def _train_dataloader(self): - return DataLoader(self.datasets["train"], batch_size=self.batch_size, - num_workers=self.num_workers, shuffle=True) - - def _val_dataloader(self): - return DataLoader(self.datasets["validation"], - batch_size=self.batch_size, - num_workers=self.num_workers) - - def _test_dataloader(self): - return DataLoader(self.datasets["test"], batch_size=self.batch_size, - num_workers=self.num_workers) - - -class SetupCallback(Callback): - def __init__(self, resume, now, logdir, ckptdir, cfgdir, config, lightning_config): - super().__init__() - self.resume = resume - self.now = now - self.logdir = logdir - self.ckptdir = ckptdir - self.cfgdir = cfgdir - self.config = config - self.lightning_config = lightning_config - - def on_pretrain_routine_start(self, trainer, pl_module): - if trainer.global_rank == 0: - # Create logdirs and save configs - os.makedirs(self.logdir, exist_ok=True) - os.makedirs(self.ckptdir, 
exist_ok=True) - os.makedirs(self.cfgdir, exist_ok=True) - - print("Project config") - print(self.config.pretty()) - OmegaConf.save(self.config, - os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) - - print("Lightning config") - print(self.lightning_config.pretty()) - OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), - os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) - - else: - # ModelCheckpoint callback created log directory --- remove it - if not self.resume and os.path.exists(self.logdir): - dst, name = os.path.split(self.logdir) - dst = os.path.join(dst, "child_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - try: - os.rename(self.logdir, dst) - except FileNotFoundError: - pass - - -class ImageLogger(Callback): - def __init__(self, batch_frequency, max_images, clamp=True, increase_log_steps=True): - super().__init__() - self.batch_freq = batch_frequency - self.max_images = max_images - self.logger_log_images = { - pl.loggers.WandbLogger: self._wandb, - pl.loggers.TestTubeLogger: self._testtube, - } - self.log_steps = [2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)] - if not increase_log_steps: - self.log_steps = [self.batch_freq] - self.clamp = clamp - - @rank_zero_only - def _wandb(self, pl_module, images, batch_idx, split): - raise ValueError("No way wandb") - grids = dict() - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grids[f"{split}/{k}"] = wandb.Image(grid) - pl_module.logger.experiment.log(grids) - - @rank_zero_only - def _testtube(self, pl_module, images, batch_idx, split): - for k in images: - grid = torchvision.utils.make_grid(images[k]) - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - - tag = f"{split}/{k}" - pl_module.logger.experiment.add_image( - tag, grid, - global_step=pl_module.global_step) - - @rank_zero_only - def log_local(self, save_dir, split, images, - global_step, current_epoch, batch_idx): - root = os.path.join(save_dir, "images", split) - for k in images: - grid = torchvision.utils.make_grid(images[k], nrow=4) - - grid = (grid+1.0)/2.0 # -1,1 -> 0,1; c,h,w - grid = grid.transpose(0,1).transpose(1,2).squeeze(-1) - grid = grid.numpy() - grid = (grid*255).astype(np.uint8) - filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format( - k, - global_step, - current_epoch, - batch_idx) - path = os.path.join(root, filename) - os.makedirs(os.path.split(path)[0], exist_ok=True) - Image.fromarray(grid).save(path) - - def log_img(self, pl_module, batch, batch_idx, split="train"): - if (self.check_frequency(batch_idx) and # batch_idx % self.batch_freq == 0 - hasattr(pl_module, "log_images") and - callable(pl_module.log_images) and - self.max_images > 0): - logger = type(pl_module.logger) - - is_train = pl_module.training - if is_train: - pl_module.eval() - - with torch.no_grad(): - images = pl_module.log_images(batch, split=split) - - for k in images: - N = min(images[k].shape[0], self.max_images) - images[k] = images[k][:N] - if isinstance(images[k], torch.Tensor): - images[k] = images[k].detach().cpu() - if self.clamp: - images[k] = torch.clamp(images[k], -1., 1.) 
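# NOTE (inferred from this callback's code, not stated elsewhere in the file):
# the contract assumed here is that pl_module.log_images(batch, split=...)
# returns a dict of name -> image tensor batch in [-1, 1]; log_local() below
# then rescales each grid to [0, 1], converts to uint8 and writes it to
#     <save_dir>/images/<split>/<name>_gs-XXXXXX_e-XXXXXX_b-XXXXXX.png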
- - self.log_local(pl_module.logger.save_dir, split, images, - pl_module.global_step, pl_module.current_epoch, batch_idx) - - logger_log_images = self.logger_log_images.get(logger, lambda *args, **kwargs: None) - logger_log_images(pl_module, images, pl_module.global_step, split) - - if is_train: - pl_module.train() - - def check_frequency(self, batch_idx): - if (batch_idx % self.batch_freq) == 0 or (batch_idx in self.log_steps): - try: - self.log_steps.pop(0) - except IndexError: - pass - return True - return False - - def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="train") - - def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx): - self.log_img(pl_module, batch, batch_idx, split="val") - - - -if __name__ == "__main__": - # custom parser to specify config files, train, test and debug mode, - # postfix, resume. - # `--key value` arguments are interpreted as arguments to the trainer. - # `nested.key=value` arguments are interpreted as config parameters. - # configs are merged from left-to-right followed by command line parameters. - - # model: - # base_learning_rate: float - # target: path to lightning module - # params: - # key: value - # data: - # target: main.DataModuleFromConfig - # params: - # batch_size: int - # wrap: bool - # train: - # target: path to train dataset - # params: - # key: value - # validation: - # target: path to validation dataset - # params: - # key: value - # test: - # target: path to test dataset - # params: - # key: value - # lightning: (optional, has sane defaults and can be specified on cmdline) - # trainer: - # additional arguments to trainer - # logger: - # logger to instantiate - # modelcheckpoint: - # modelcheckpoint to instantiate - # callbacks: - # callback1: - # target: importpath - # params: - # key: value - - now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - - # add cwd for convenience and to make classes in this file available when - # running as `python main.py` - # (in particular `main.DataModuleFromConfig`) - sys.path.append(os.getcwd()) - - parser = get_parser() - parser = Trainer.add_argparse_args(parser) - - opt, unknown = parser.parse_known_args() - if opt.name and opt.resume: - raise ValueError( - "-n/--name and -r/--resume cannot be specified both." 
- "If you want to resume training in a new log folder, " - "use -n/--name in combination with --resume_from_checkpoint" - ) - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - idx = len(paths)-paths[::-1].index("logs")+1 - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - - opt.resume_from_checkpoint = ckpt - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml"))) - opt.base = base_configs+opt.base - _tmp = logdir.split("/") - nowname = _tmp[_tmp.index("logs")+1] - else: - if opt.name: - name = "_"+opt.name - elif opt.base: - cfg_fname = os.path.split(opt.base[0])[-1] - cfg_name = os.path.splitext(cfg_fname)[0] - name = "_"+cfg_name - else: - name = "" - nowname = now+name+opt.postfix - logdir = os.path.join("logs", nowname) - - ckptdir = os.path.join(logdir, "checkpoints") - cfgdir = os.path.join(logdir, "configs") - seed_everything(opt.seed) - - try: - # init and save configs - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - lightning_config = config.pop("lightning", OmegaConf.create()) - # merge trainer cli with config - trainer_config = lightning_config.get("trainer", OmegaConf.create()) - # default to ddp - trainer_config["distributed_backend"] = "ddp" - for k in nondefault_trainer_args(opt): - trainer_config[k] = getattr(opt, k) - if not "gpus" in trainer_config: - del trainer_config["distributed_backend"] - cpu = True - else: - gpuinfo = trainer_config["gpus"] - print(f"Running on GPUs {gpuinfo}") - cpu = False - trainer_opt = argparse.Namespace(**trainer_config) - lightning_config.trainer = trainer_config - - # model - model = instantiate_from_config(config.model) - - # trainer and callbacks - trainer_kwargs = dict() - - # default logger configs - # NOTE wandb < 0.10.0 interferes with shutdown - # wandb >= 0.10.0 seems to fix it but still interferes with pudb - # debugging (wrongly sized pudb ui) - # thus prefer testtube for now - default_logger_cfgs = { - "wandb": { - "target": "pytorch_lightning.loggers.WandbLogger", - "params": { - "name": nowname, - "save_dir": logdir, - "offline": opt.debug, - "id": nowname, - } - }, - "testtube": { - "target": "pytorch_lightning.loggers.TestTubeLogger", - "params": { - "name": "testtube", - "save_dir": logdir, - } - }, - } - default_logger_cfg = default_logger_cfgs["testtube"] - logger_cfg = lightning_config.logger or OmegaConf.create() - logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg) - trainer_kwargs["logger"] = instantiate_from_config(logger_cfg) - - # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to - # specify which metric is used to determine best models - default_modelckpt_cfg = { - "target": "pytorch_lightning.callbacks.ModelCheckpoint", - "params": { - "dirpath": ckptdir, - "filename": "{epoch:06}", - "verbose": True, - "save_last": True, - } - } - if hasattr(model, "monitor"): - print(f"Monitoring {model.monitor} as checkpoint metric.") - default_modelckpt_cfg["params"]["monitor"] = model.monitor - default_modelckpt_cfg["params"]["save_top_k"] = 3 - - modelckpt_cfg = lightning_config.modelcheckpoint or OmegaConf.create() - modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg) - 
trainer_kwargs["checkpoint_callback"] = instantiate_from_config(modelckpt_cfg) - - # add callback which sets up log directory - default_callbacks_cfg = { - "setup_callback": { - "target": "main.SetupCallback", - "params": { - "resume": opt.resume, - "now": now, - "logdir": logdir, - "ckptdir": ckptdir, - "cfgdir": cfgdir, - "config": config, - "lightning_config": lightning_config, - } - }, - "image_logger": { - "target": "main.ImageLogger", - "params": { - "batch_frequency": 750, - "max_images": 4, - "clamp": True - } - }, - "learning_rate_logger": { - "target": "main.LearningRateMonitor", - "params": { - "logging_interval": "step", - #"log_momentum": True - } - }, - } - callbacks_cfg = lightning_config.callbacks or OmegaConf.create() - callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg) - trainer_kwargs["callbacks"] = [instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg] - - trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs) - - # data - data = instantiate_from_config(config.data) - # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html - # calling these ourselves should not be necessary but it is. - # lightning still takes care of proper multiprocessing though - data.prepare_data() - data.setup() - - # configure learning rate - bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate - if not cpu: - ngpu = len(lightning_config.trainer.gpus.strip(",").split(',')) - else: - ngpu = 1 - accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches or 1 - print(f"accumulate_grad_batches = {accumulate_grad_batches}") - lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches - model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr - print("Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format( - model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr)) - - # allow checkpointing via USR1 - def melk(*args, **kwargs): - # run all checkpoint hooks - if trainer.global_rank == 0: - print("Summoning checkpoint.") - ckpt_path = os.path.join(ckptdir, "last.ckpt") - trainer.save_checkpoint(ckpt_path) - - def divein(*args, **kwargs): - if trainer.global_rank == 0: - import pudb; pudb.set_trace() - - import signal - signal.signal(signal.SIGUSR1, melk) - signal.signal(signal.SIGUSR2, divein) - - # run - if opt.train: - try: - trainer.fit(model, data) - except Exception: - melk() - raise - if not opt.no_test and not trainer.interrupted: - trainer.test(model, data) - except Exception: - if opt.debug and trainer.global_rank==0: - try: - import pudb as debugger - except ImportError: - import pdb as debugger - debugger.post_mortem() - raise - finally: - # move newly created debug project to debug_runs - if opt.debug and not opt.resume and trainer.global_rank==0: - dst, name = os.path.split(logdir) - dst = os.path.join(dst, "debug_runs", name) - os.makedirs(os.path.split(dst)[0], exist_ok=True) - os.rename(logdir, dst) diff --git a/spaces/FelixLuoX/codeformer/README.md b/spaces/FelixLuoX/codeformer/README.md deleted file mode 100644 index b4b841a71df3c2e64e9305b459f4a14b37cd77f7..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Codeformer -emoji: 🌍 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GXSA/bingo/src/app/loading.css b/spaces/GXSA/bingo/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/Gertie01/MusicLM/musiclm_pytorch.py b/spaces/Gertie01/MusicLM/musiclm_pytorch.py deleted file mode 100644 index 48d1f8b1712610ca0971a4df41d8975634a4bea8..0000000000000000000000000000000000000000 --- a/spaces/Gertie01/MusicLM/musiclm_pytorch.py +++ /dev/null @@ -1,559 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn, einsum - -from torchaudio.transforms import Spectrogram, TimeStretch, FrequencyMasking, TimeMasking - -from audiolm_pytorch import AudioLM -from audiolm_pytorch.utils import AudioConditionerBase - -from x_clip.tokenizer import tokenizer -from vector_quantize_pytorch import ResidualVQ - -from einops import rearrange, repeat, reduce, pack, unpack - -from beartype.typing import List, Optional, Tuple -from beartype import beartype - -# functions - -def exists(val): - return val is not None - -def default(val, d): - return val if exists(val) else d - -def round_down_nearest_multiple(n, divisor): - return n // divisor * divisor - -# tensor functions - -def log(t, eps = 1e-20): - return torch.log(t.clamp(min = eps)) - -def l2norm(t): - return F.normalize(t, p = 2, dim = -1) - -# 2d sinusoidal positional embedding -# simple vit paper shows it is good enough compared to learned - -def posemb_sincos_2d(patches, temperature = 10000, dtype = torch.float32): - _, h, w, dim, device, dtype = *patches.shape, patches.device, patches.dtype - - y, x = torch.meshgrid(torch.arange(h, device = device), torch.arange(w, device = device), indexing = 'ij') - assert (dim % 4) == 0, 'feature dimension must be multiple of 4 for sincos emb' - - omega = torch.arange(dim // 4, device = device) / (dim // 4 - 1) - omega = 1. 
/ (temperature ** omega) - - y = y.flatten()[:, None] * omega[None, :] - x = x.flatten()[:, None] * omega[None, :] - - pe = torch.cat((x.sin(), x.cos(), y.sin(), y.cos()), dim = 1) - pe = pe.type(dtype) - - return rearrange(pe, '(h w) d -> h w d', h = h, w = w) - -# biasless layernorm - -class LayerNorm(nn.Module): - def __init__(self, dim): - super().__init__() - self.gamma = nn.Parameter(torch.ones(dim)) - self.register_buffer('beta', torch.zeros(dim)) - - def forward(self, x): - return F.layer_norm(x, x.shape[-1:], self.gamma, self.beta) - -# feedforward - -class GEGLU(nn.Module): - def forward(self, x): - x, gate = x.chunk(2, dim = -1) - return F.gelu(gate) * x - -def FeedForward(dim, mult = 4, dropout = 0.): - dim_hidden = int(dim * mult * 2 / 3) - - return nn.Sequential( - LayerNorm(dim), - nn.Linear(dim, dim_hidden * 2, bias = False), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim_hidden, dim, bias = False) - ) - -# attention - -class Attention(nn.Module): - def __init__( - self, - dim, - causal = False, - dim_head = 64, - heads = 8, - dropout = 0. - ): - super().__init__() - self.heads = heads - self.scale = dim_head ** -0.5 - self.causal = causal - inner_dim = dim_head * heads - - self.norm = LayerNorm(dim) - - self.attn_dropout = nn.Dropout(dropout) - - self.to_q = nn.Linear(dim, inner_dim, bias = False) - self.to_kv = nn.Linear(dim, inner_dim * 2, bias = False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim, bias = False), - nn.Dropout(dropout) - ) - - def forward( - self, - x, - mask = None - ): - b, n, _, device = *x.shape, x.device - - # prenorm - - x = self.norm(x) - - # project for queries, keys, values - - q, k, v = self.to_q(x), *self.to_kv(x).chunk(2, dim = -1) - - # split for multi-headed attention - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = self.heads), (q, k, v)) - - q = q * self.scale - - # similarities - - sim = einsum('b h i d, b h j d -> b h i j', q, k) - - if exists(mask): - mask = rearrange(mask, 'b j -> b 1 1 j') - sim = sim.masked_fill(~mask, -torch.finfo(sim.dtype).max) - - if self.causal: - i, j = sim.shape[-2:] - causal_mask = torch.ones((i, j), dtype = torch.bool, device = x.device).triu(j - i + 1) - sim = sim.masked_fill(causal_mask, -torch.finfo(sim.dtype).max) - - # attention - - attn = sim.softmax(dim = -1) - attn = self.attn_dropout(attn) - - # aggregate - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - - # merge heads - - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - -# transformer - -class Transformer(nn.Module): - def __init__( - self, - dim, - depth, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_mult = 4, - ff_dropout = 0. 
- ): - super().__init__() - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append(nn.ModuleList([ - Attention(dim = dim, dim_head = dim_head, heads = heads, dropout = attn_dropout), - FeedForward(dim = dim, mult = ff_mult, dropout = ff_dropout), - ])) - - def forward(self, x, mask = None): - - for attn, ff in self.layers: - x = attn(x, mask = mask) + x - x = ff(x) + x - - return x - -# Audio Spectrogram Transformer - https://arxiv.org/abs/2104.01778 - -def pair(t): - return (t, t) if not isinstance(t, tuple) else t - -class AudioSpectrogramTransformer(nn.Module): - def __init__( - self, - dim, - depth, - patch_size = 16, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_mult = 4, - ff_dropout = 0., - spec_n_fft = 128, - spec_power = 2, - spec_win_length = 24, - spec_hop_length = None, - spec_pad = 0, - spec_center = True, - spec_pad_mode = 'reflect', - spec_aug_stretch_factor = 0.8, - spec_aug_freq_mask = 80, - spec_aug_time_mask = 80 - ): - super().__init__() - self.dim = dim - - self.patch_size = pair(patch_size) - self.to_patch_tokens = nn.Conv2d(self.patch_size[0] * self.patch_size[1], dim, 1) - - self.spec = Spectrogram( - n_fft = spec_n_fft, - power = spec_power, - win_length = spec_win_length, - hop_length = spec_hop_length, - pad = spec_pad, - center = spec_center, - pad_mode = spec_pad_mode - ) - - # SpecAugment - seems to be widely used in audio field https://arxiv.org/abs/1904.08779 - - self.aug = torch.nn.Sequential( - TimeStretch(spec_aug_stretch_factor, fixed_rate=True), - FrequencyMasking(freq_mask_param = spec_aug_freq_mask), - TimeMasking(time_mask_param = spec_aug_time_mask), - ) - - self.transformer = Transformer( - dim = dim, - depth = depth, - dim_head = dim_head, - heads = heads, - attn_dropout = attn_dropout, - ff_mult = ff_mult, - ff_dropout = ff_dropout - ) - - self.norm = LayerNorm(dim) - - def forward(self, x): - x = self.spec(x) - - if self.training: - x = self.aug(x) - - # automatically crop if audio does not yield a 2d spectrogram that is divisible by patch sizes - - height, width = x.shape[-2:] - patch_height, patch_width = self.patch_size - - rounded_height, rounded_width = map(lambda args: round_down_nearest_multiple(*args), ((height, patch_height), (width, patch_width))) - - if (height, width) != (rounded_height, rounded_width): # just keep printing to be annoying until it is fixed - print(f'spectrogram yielded shape of {(height, width)}, but had to be cropped to {(rounded_height, rounded_width)} to be patchified for transformer') - - x = x[..., :rounded_height, :rounded_width] - - # to patches - - x = rearrange(x, 'b (h p1) (w p2) -> b (p1 p2) h w', p1 = patch_height, p2 = patch_width) - x = self.to_patch_tokens(x) - - # 2d sinusoidal positional embedding - - x = rearrange(x, 'b c h w -> b h w c') - x = x + posemb_sincos_2d(x) - - # attention, what else - - x = rearrange(x, 'b ... c -> b (...) 
c') - - x = self.transformer(x) - - # final global average and norm (most recent papers show this is superior to CLS token) - - x = reduce(x, 'b n d -> b d', 'mean') - - return self.norm(x) - -# text transformer - -@beartype -class TextTransformer(nn.Module): - def __init__( - self, - dim, - depth, - num_tokens = tokenizer.vocab_size, - max_seq_len = 256, - dim_head = 64, - heads = 8, - attn_dropout = 0., - ff_dropout = 0., - ff_mult = 4, - pad_id = 0 - ): - super().__init__() - self.dim = dim - - self.token_emb = nn.Embedding(num_tokens, dim) - self.pos_emb = nn.Embedding(max_seq_len, dim) - - self.cls_token = nn.Parameter(torch.randn(dim)) - - self.transformer = Transformer( - dim = dim, - depth = depth, - dim_head = dim_head, - heads = heads, - attn_dropout = attn_dropout, - ff_dropout = ff_dropout, - ff_mult = ff_mult - ) - - self.pad_id = pad_id - self.norm = LayerNorm(dim) - - def forward( - self, - x = None, - raw_texts: Optional[List[str]] = None, - mask = None - ): - assert exists(x) ^ exists(raw_texts) - - if exists(raw_texts): - x = tokenizer.tokenize(raw_texts) - - if not exists(mask): - mask = x != self.pad_id - - b, n, device = *x.shape, x.device - - # token embedding + positional embedding - - x = self.token_emb(x) - x = x + self.pos_emb(torch.arange(n, device = device)) - - # cls tokens, as in bert - - cls_tokens = repeat(self.cls_token, 'd -> b d', b = b) - x, ps = pack([cls_tokens, x], 'b * d') - - # account for attending to cls token with self attention mask - - mask = F.pad(mask, (1, 0), value = True) - - # attention - - x = self.transformer(x, mask = mask) - - # unpack the cls tokens - - cls_tokens, _ = unpack(x, ps, 'b * d') - - return self.norm(cls_tokens) - -# main classes - -@beartype -class MuLaN(nn.Module): - def __init__( - self, - audio_transformer: AudioSpectrogramTransformer, - text_transformer: TextTransformer, - dim_latent = 128, # they use 128 - decoupled_contrastive_learning = True, # think this was used, make it optional - ): - super().__init__() - self.dim_latent = dim_latent - - self.audio = audio_transformer - self.text = text_transformer - - self.temperature = nn.Parameter(torch.tensor(1.)) - - self.text_to_latents = nn.Linear(self.text.dim, dim_latent) - self.audio_to_latents = nn.Linear(self.audio.dim, dim_latent) - - self.decoupled_contrastive_learning = decoupled_contrastive_learning - - def get_audio_latents( - self, - wavs - ): - audio_embeds = self.audio(wavs) - audio_latents = self.audio_to_latents(audio_embeds) - return l2norm(audio_latents) - - def get_text_latents( - self, - texts = None, - raw_texts: Optional[List[str]] = None - ): - text_embeds = self.text(texts) - text_latents = self.text_to_latents(text_embeds) - return l2norm(text_latents) - - def forward( - self, - wavs, - texts = None, - raw_texts: Optional[List[str]] = None, - return_similarities = False - ): - batch, device = wavs.shape[0], wavs.device - - audio_latents = self.get_audio_latents(wavs) - text_latents = self.get_text_latents(texts, raw_texts = raw_texts) - - cosine_sim = einsum('i d, j d -> i j', audio_latents, text_latents) - - assert cosine_sim.shape[0] == cosine_sim.shape[1], 'batch sizes for audio and text are not equal' - - if return_similarities: - return cosine_sim - - cosine_sim = cosine_sim * self.temperature.exp() - - cosine_sim_exp = cosine_sim.exp() - - numerator = cosine_sim_exp.diag() - - if self.decoupled_contrastive_learning: - eye = torch.eye(batch, device = device) - cosine_sim_exp = cosine_sim_exp.masked_fill(eye, 0.) 
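# NOTE: this branch implements the "decoupled" InfoNCE variant — the positive
# (diagonal) similarities are zeroed so they do not contribute to the
# denominator computed below. One caveat not addressed in this file: recent
# PyTorch releases require a boolean mask for masked_fill, so `eye` would
# likely need to be built as torch.eye(batch, device=device, dtype=torch.bool)
# for the line above to run there.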
- - denominator = reduce(cosine_sim_exp, 'i j -> i', 'sum') - - contrastive_loss = -log(numerator / denominator) - return contrastive_loss.mean() - -# music lm - -@beartype -class MuLaNEmbedQuantizer(AudioConditionerBase): - def __init__( - self, - mulan: MuLaN, - conditioning_dims: Tuple[int, ...], - rq_num_quantizers = 8, - rq_ema_decay = 0.9, - codebook_size = 1024, - namespaces: Tuple[str, ...] = ('semantic', 'coarse', 'fine'), - ): - super().__init__() - self.mulan = mulan - - assert len(namespaces) > 0 - self.namespaces = namespaces - self.conditioning_dims = conditioning_dims - - assert len(conditioning_dims) == len(namespaces), 'number of conditioning dimensions must be equal to number of namespaces' - - dim = mulan.dim_latent - - self.rq = ResidualVQ( - dim = dim, - num_quantizers = rq_num_quantizers, - codebook_size = codebook_size, - decay = rq_ema_decay, - commitment_weight = 0, # only use EMA to update codebooks - kmeans_init = True, - threshold_ema_dead_code = 2, - quantize_dropout = False # no quantize dropout - ) - - self.dim = dim - self.num_codebooks = rq_num_quantizers - - self.cond_embeddings = nn.ParameterDict({}) - - for namespace, conditioning_dim in zip(namespaces, conditioning_dims): - cond_embeddings = nn.Parameter(torch.randn(rq_num_quantizers, codebook_size, conditioning_dim)) - nn.init.normal_(cond_embeddings, std = 0.02) - - self.cond_embeddings[namespace] = cond_embeddings - - self.set_default_namespace(namespaces[0]) - - def parameters(self): - return self.cond_embeddings.parameters() - - def set_default_namespace(self, namespace): - self._default_namespace = namespace - - def forward( - self, - wavs = None, - texts = None, - namespace = None - ): - assert exists(wavs) ^ exists(texts) - - namespace = default(namespace, self._default_namespace) - assert namespace in self.namespaces, f'namespace {namespace} not found' - cond_embeddings = self.cond_embeddings[namespace] - - with torch.no_grad(): - self.mulan.eval() - - # sound and language live in joint embedding space because of contrastive learning - - if exists(wavs): - latents = self.mulan.get_audio_latents(wavs) - elif exists(texts): - latents = self.mulan.get_text_latents(texts) - - _, indices, _ = self.rq(latents) - - batch, num_codebooks, dim = indices.shape[0], self.num_codebooks, cond_embeddings.shape[-1] - - cond_embeddings = repeat(cond_embeddings, 'q c d -> b q c d', b = batch) - indices = repeat(indices, 'b q -> b q 1 d', q = num_codebooks, d = dim) - - cond_embeddings = cond_embeddings.gather(2, indices) - return rearrange(cond_embeddings, 'b q 1 d -> b q d') - -@beartype -class MusicLM(nn.Module): - def __init__( - self, - audio_lm: AudioLM, - mulan_embed_quantizer: MuLaNEmbedQuantizer - ): - super().__init__() - assert not exists(audio_lm.audio_conditioner), 'mulan must not have been passed into AudioLM. 
it will be managed externally now, embedding the text into the joint embedding space for text-to-audio synthesis' - - self.mulan_embed_quantizer = mulan_embed_quantizer - self.audio_lm = audio_lm - - @torch.no_grad() - def forward( - self, - raw_texts: List[str], - **audio_lm_kwargs - ): - self.eval() - - texts = tokenizer.tokenize(raw_texts) - - text_embeds = self.mulan_embed_quantizer(texts = texts) - - return self.audio_lm(text_embeds = text_embeds, **audio_lm_kwargs) \ No newline at end of file diff --git a/spaces/Giuvyz/rvc-genshin/vc_infer_pipeline.py b/spaces/Giuvyz/rvc-genshin/vc_infer_pipeline.py deleted file mode 100644 index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000 --- a/spaces/Giuvyz/rvc-genshin/vc_infer_pipeline.py +++ /dev/null @@ -1,306 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -from config import x_pad, x_query, x_center, x_max -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, device, is_half): - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * x_query # 查询切点前后查询时间 - self.t_center = self.sr * x_center # 查询切点位置 - self.t_max = self.sr * x_max # 免查询时长阈值 - self.device = device - self.is_half = is_half - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = 
feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - _, I = index.search(npy, 1) - npy = big_npy[I.squeeze()] - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_big_npy != "" - and file_index != "" - and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - big_npy = np.load(file_big_npy) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - print("Feature retrieval library doesn't exist or ratio is 0") - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = 
pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. 
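As a rough sketch of the auto-regressive loop just described (hypothetical encode/decode_step callables standing in for the encoder stack and a single decoder forward pass; illustrative only, not code from the paper):

import torch

def greedy_decode(encode, decode_step, src_tokens, bos_id, eos_id, max_len=64):
    # z: continuous representations of the input symbols (the z_1..z_n above)
    z = encode(src_tokens)                        # (1, n, d_model)
    out = torch.tensor([[bos_id]])                # begin with a start symbol
    for _ in range(max_len):
        logits = decode_step(out, z)              # (1, t, vocab): one decoder pass over outputs so far
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        out = torch.cat([out, next_tok], dim=1)   # previously generated symbols are consumed as input
        if next_tok.item() == eos_id:
            break
    return out[:, 1:]                             # drop the start symbol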
- -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. - -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. 
We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. - -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. - -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. 
-On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. - -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. - - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. 
- -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. 
We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. 
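As a rough NumPy sketch of the fixed sinusoidal encoding defined above (assumes an even d_model; illustrative only, not code from the paper):

import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    pos = np.arange(max_len, dtype=np.float64)[:, None]      # (max_len, 1)
    i = np.arange(d_model // 2, dtype=np.float64)[None, :]   # (1, d_model // 2)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)    # even dimensions
    pe[:, 1::2] = np.cos(angles)    # odd dimensions
    return pe                       # summed with the token embeddings at the stack inputs

# e.g. pe = sinusoidal_positional_encoding(512, 512); wavelengths form a
# geometric progression from 2*pi to 10000*2*pi, as stated above.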
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py deleted file mode 100644 index 8b83722197c69a51907f43bcb05883deedc37f0c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py +++ /dev/null @@ -1,45 +0,0 @@ -_base_ = '../gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py' -# model settings -model = dict( - roi_head=dict( - bbox_roi_extractor=dict( - type='GenericRoIExtractor', - aggregation='sum', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - pre_cfg=dict( - type='ConvModule', - in_channels=256, - out_channels=256, - kernel_size=5, - padding=2, - inplace=False, - ), - post_cfg=dict( - type='GeneralizedAttention', - in_channels=256, - spatial_range=-1, - num_heads=6, - attention_type='0100', - kv_stride=2)), - mask_roi_extractor=dict( - type='GenericRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - pre_cfg=dict( - type='ConvModule', - in_channels=256, - out_channels=256, - kernel_size=5, - padding=2, - inplace=False, - ), - post_cfg=dict( - type='GeneralizedAttention', - in_channels=256, - spatial_range=-1, - num_heads=6, - attention_type='0100', - kv_stride=2)))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes.py deleted file mode 100644 index c5ef3b880eac7dd089aace8ce2a87e1bd837beed..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)), - decode_head=dict(align_corners=True, dilation=6), - auxiliary_head=dict(align_corners=True, dilation=6), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Greysuki/whisper-api-compress/README.md b/spaces/Greysuki/whisper-api-compress/README.md deleted file mode 100644 index f991dd7a046cc23ae6d74725f7b12c8b7200db5e..0000000000000000000000000000000000000000 --- a/spaces/Greysuki/whisper-api-compress/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper Api Compress -emoji: 🐈 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hallucinate/demo/AdaBins-main/dataloader.py b/spaces/Hallucinate/demo/AdaBins-main/dataloader.py deleted file mode 100644 index 4de1ac1b9016d5b23618d06b877c3bb3c24dd0f2..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/AdaBins-main/dataloader.py +++ /dev/null @@ -1,284 +0,0 @@ -# This file is mostly taken from BTS; author: Jin Han Lee, with only slight modifications - -import os -import random - -import numpy as np -import torch -import 
torch.utils.data.distributed -from PIL import Image -from torch.utils.data import Dataset, DataLoader -from torchvision import transforms - - -def _is_pil_image(img): - return isinstance(img, Image.Image) - - -def _is_numpy_image(img): - return isinstance(img, np.ndarray) and (img.ndim in {2, 3}) - - -def preprocessing_transforms(mode): - return transforms.Compose([ - ToTensor(mode=mode) - ]) - - -class DepthDataLoader(object): - def __init__(self, args, mode): - if mode == 'train': - self.training_samples = DataLoadPreprocess(args, mode, transform=preprocessing_transforms(mode)) - if args.distributed: - self.train_sampler = torch.utils.data.distributed.DistributedSampler(self.training_samples) - else: - self.train_sampler = None - - self.data = DataLoader(self.training_samples, args.batch_size, - shuffle=(self.train_sampler is None), - num_workers=args.num_threads, - pin_memory=True, - sampler=self.train_sampler) - - elif mode == 'online_eval': - self.testing_samples = DataLoadPreprocess(args, mode, transform=preprocessing_transforms(mode)) - if args.distributed: # redundant. here only for readability and to be more explicit - # Give whole test set to all processes (and perform/report evaluation only on one) regardless - self.eval_sampler = None - else: - self.eval_sampler = None - self.data = DataLoader(self.testing_samples, 1, - shuffle=False, - num_workers=1, - pin_memory=False, - sampler=self.eval_sampler) - - elif mode == 'test': - self.testing_samples = DataLoadPreprocess(args, mode, transform=preprocessing_transforms(mode)) - self.data = DataLoader(self.testing_samples, 1, shuffle=False, num_workers=1) - - else: - print('mode should be one of \'train, test, online_eval\'. Got {}'.format(mode)) - - -def remove_leading_slash(s): - if s[0] == '/' or s[0] == '\\': - return s[1:] - return s - - -class DataLoadPreprocess(Dataset): - def __init__(self, args, mode, transform=None, is_for_online_eval=False): - self.args = args - if mode == 'online_eval': - with open(args.filenames_file_eval, 'r') as f: - self.filenames = f.readlines() - else: - with open(args.filenames_file, 'r') as f: - self.filenames = f.readlines() - - self.mode = mode - self.transform = transform - self.to_tensor = ToTensor - self.is_for_online_eval = is_for_online_eval - - def __getitem__(self, idx): - sample_path = self.filenames[idx] - focal = float(sample_path.split()[2]) - - if self.mode == 'train': - if self.args.dataset == 'kitti' and self.args.use_right is True and random.random() > 0.5: - image_path = os.path.join(self.args.data_path, remove_leading_slash(sample_path.split()[3])) - depth_path = os.path.join(self.args.gt_path, remove_leading_slash(sample_path.split()[4])) - else: - image_path = os.path.join(self.args.data_path, remove_leading_slash(sample_path.split()[0])) - depth_path = os.path.join(self.args.gt_path, remove_leading_slash(sample_path.split()[1])) - - image = Image.open(image_path) - depth_gt = Image.open(depth_path) - - if self.args.do_kb_crop is True: - height = image.height - width = image.width - top_margin = int(height - 352) - left_margin = int((width - 1216) / 2) - depth_gt = depth_gt.crop((left_margin, top_margin, left_margin + 1216, top_margin + 352)) - image = image.crop((left_margin, top_margin, left_margin + 1216, top_margin + 352)) - - # To avoid blank boundaries due to pixel registration - if self.args.dataset == 'nyu': - depth_gt = depth_gt.crop((43, 45, 608, 472)) - image = image.crop((43, 45, 608, 472)) - - if self.args.do_random_rotate is True: - random_angle = (random.random() 
- 0.5) * 2 * self.args.degree - image = self.rotate_image(image, random_angle) - depth_gt = self.rotate_image(depth_gt, random_angle, flag=Image.NEAREST) - - image = np.asarray(image, dtype=np.float32) / 255.0 - depth_gt = np.asarray(depth_gt, dtype=np.float32) - depth_gt = np.expand_dims(depth_gt, axis=2) - - if self.args.dataset == 'nyu': - depth_gt = depth_gt / 1000.0 - else: - depth_gt = depth_gt / 256.0 - - image, depth_gt = self.random_crop(image, depth_gt, self.args.input_height, self.args.input_width) - image, depth_gt = self.train_preprocess(image, depth_gt) - sample = {'image': image, 'depth': depth_gt, 'focal': focal} - - else: - if self.mode == 'online_eval': - data_path = self.args.data_path_eval - else: - data_path = self.args.data_path - - image_path = os.path.join(data_path, remove_leading_slash(sample_path.split()[0])) - image = np.asarray(Image.open(image_path), dtype=np.float32) / 255.0 - - if self.mode == 'online_eval': - gt_path = self.args.gt_path_eval - depth_path = os.path.join(gt_path, remove_leading_slash(sample_path.split()[1])) - has_valid_depth = False - try: - depth_gt = Image.open(depth_path) - has_valid_depth = True - except IOError: - depth_gt = False - # print('Missing gt for {}'.format(image_path)) - - if has_valid_depth: - depth_gt = np.asarray(depth_gt, dtype=np.float32) - depth_gt = np.expand_dims(depth_gt, axis=2) - if self.args.dataset == 'nyu': - depth_gt = depth_gt / 1000.0 - else: - depth_gt = depth_gt / 256.0 - - if self.args.do_kb_crop is True: - height = image.shape[0] - width = image.shape[1] - top_margin = int(height - 352) - left_margin = int((width - 1216) / 2) - image = image[top_margin:top_margin + 352, left_margin:left_margin + 1216, :] - if self.mode == 'online_eval' and has_valid_depth: - depth_gt = depth_gt[top_margin:top_margin + 352, left_margin:left_margin + 1216, :] - - if self.mode == 'online_eval': - sample = {'image': image, 'depth': depth_gt, 'focal': focal, 'has_valid_depth': has_valid_depth, - 'image_path': sample_path.split()[0], 'depth_path': sample_path.split()[1]} - else: - sample = {'image': image, 'focal': focal} - - if self.transform: - sample = self.transform(sample) - - return sample - - def rotate_image(self, image, angle, flag=Image.BILINEAR): - result = image.rotate(angle, resample=flag) - return result - - def random_crop(self, img, depth, height, width): - assert img.shape[0] >= height - assert img.shape[1] >= width - assert img.shape[0] == depth.shape[0] - assert img.shape[1] == depth.shape[1] - x = random.randint(0, img.shape[1] - width) - y = random.randint(0, img.shape[0] - height) - img = img[y:y + height, x:x + width, :] - depth = depth[y:y + height, x:x + width, :] - return img, depth - - def train_preprocess(self, image, depth_gt): - # Random flipping - do_flip = random.random() - if do_flip > 0.5: - image = (image[:, ::-1, :]).copy() - depth_gt = (depth_gt[:, ::-1, :]).copy() - - # Random gamma, brightness, color augmentation - do_augment = random.random() - if do_augment > 0.5: - image = self.augment_image(image) - - return image, depth_gt - - def augment_image(self, image): - # gamma augmentation - gamma = random.uniform(0.9, 1.1) - image_aug = image ** gamma - - # brightness augmentation - if self.args.dataset == 'nyu': - brightness = random.uniform(0.75, 1.25) - else: - brightness = random.uniform(0.9, 1.1) - image_aug = image_aug * brightness - - # color augmentation - colors = np.random.uniform(0.9, 1.1, size=3) - white = np.ones((image.shape[0], image.shape[1])) - color_image = np.stack([white * 
colors[i] for i in range(3)], axis=2) - image_aug *= color_image - image_aug = np.clip(image_aug, 0, 1) - - return image_aug - - def __len__(self): - return len(self.filenames) - - -class ToTensor(object): - def __init__(self, mode): - self.mode = mode - self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - def __call__(self, sample): - image, focal = sample['image'], sample['focal'] - image = self.to_tensor(image) - image = self.normalize(image) - - if self.mode == 'test': - return {'image': image, 'focal': focal} - - depth = sample['depth'] - if self.mode == 'train': - depth = self.to_tensor(depth) - return {'image': image, 'depth': depth, 'focal': focal} - else: - has_valid_depth = sample['has_valid_depth'] - return {'image': image, 'depth': depth, 'focal': focal, 'has_valid_depth': has_valid_depth, - 'image_path': sample['image_path'], 'depth_path': sample['depth_path']} - - def to_tensor(self, pic): - if not (_is_pil_image(pic) or _is_numpy_image(pic)): - raise TypeError( - 'pic should be PIL Image or ndarray. Got {}'.format(type(pic))) - - if isinstance(pic, np.ndarray): - img = torch.from_numpy(pic.transpose((2, 0, 1))) - return img - - # handle PIL Image - if pic.mode == 'I': - img = torch.from_numpy(np.array(pic, np.int32, copy=False)) - elif pic.mode == 'I;16': - img = torch.from_numpy(np.array(pic, np.int16, copy=False)) - else: - img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes())) - # PIL image mode: 1, L, P, I, F, RGB, YCbCr, RGBA, CMYK - if pic.mode == 'YCbCr': - nchannel = 3 - elif pic.mode == 'I;16': - nchannel = 1 - else: - nchannel = len(pic.mode) - img = img.view(pic.size[1], pic.size[0], nchannel) - - img = img.transpose(0, 1).transpose(0, 2).contiguous() - if isinstance(img, torch.ByteTensor): - return img.float() - else: - return img diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/preprocessing.py b/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/preprocessing.py deleted file mode 100644 index c40e39a8122a5cc4ebd57b558f451c371f6066a3..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/bert_dataloader/preprocessing.py +++ /dev/null @@ -1,110 +0,0 @@ -import re -import json -import multiprocessing -from tqdm import tqdm -from pathlib import Path -from itertools import chain - -_SPLIT_DATA_PATH = '/data1/datas/wudao_180g' - - -def cut_sent(path): - """ - 中文分句,默认?、。、!、省略号分句,考虑双引号包裹的句子 - 采用分割替换的方式 - """ - path = Path(path) - # print(path) - save_path = str(Path('/data1/datas/wudao_180g_split', path.name)) - print('处理文件:', save_path) - with open(save_path, 'wt', encoding='utf-8') as w: - with open(path, 'rt', encoding='utf-8') as f: - for para in tqdm(f): - para = json.loads(para) - para_ = para['text'] + ' ' - # print('sentence piece......') - # pep8中 正则不能些 \? 要写成\\? 
- para_ = re.sub('([?。!\\?\\!…]+)([^”’]|[”’])', - r'\1#####\2', para_) - para_ = re.sub('([\\.]{3,})([^”’])', r'\1#####\2', para_) - - # 匹配 \1: 句子结束符紧挨’” \2: 非句子结束符号,被引号包裹的句子 - para_ = re.sub( - '([。!?\\?\\!…][”’])([^,。!?\\?\\!]|\\s)', r'\1#####\2', para_) - para_ = re.sub( - '([\\.]{3,}[”’])([^,。!?\\?\\!]|\\s)', r'\1#####\2', para_) - para_ = re.sub( - '([#]{5})([”’])([^,。!?\\?\\!])', r'\2#####\3', para_) - para_ = para_.strip() - # 一个512里面多个样本 - line_ = '' - for line in para_.split('#####'): - line = line.strip() - if len(line_) < 512 and len(line) > 0: - line_ += line - else: - w.writelines(json.dumps( - {'text': line_}, ensure_ascii=False)+'\n') - line_ = line - w.writelines(json.dumps( - {'text': line_}, ensure_ascii=False)+'\n') - - -def chain_iter(*filenames): - """ - 将多个文件读成一个迭代器 - """ - reader = [open(file, 'r') for file in filenames] - return chain(*reader) - - -class Config(object): - - def __init__(self, data_path=_SPLIT_DATA_PATH, num_worker=16, split_numb=600000, cut_sentence=True, output_file=None) -> None: - self.data_path = Path(data_path) - self.num_worker = num_worker - self.split_numb = split_numb - self.cut_sentence = cut_sentence - - -def processing1(): - args = Config() - p_ = [str(i) for i in args.data_path.glob('*')] - fin = chain_iter(*p_) - pool = multiprocessing.Pool(args.num_worker) - docs = pool.imap(cut_sent, fin, chunksize=args.num_worker) - - if not Path(args.data_path.parent, args.data_path.name+'_split').exists(): - Path(args.data_path.parent, args.data_path.name+'_split').mkdir() - writer = open(str(Path(args.data_path.parent, args.data_path.name + - '_split', 'sentence_level.json')), 'wt', encoding='utf-8') - for doc in tqdm(docs): - for sentence in doc: - writer.writelines(json.dumps( - {"text": sentence}, ensure_ascii=False)+'\n') - pool.close() - pool.join() - writer.close() - - -if __name__ == '__main__': - from time import process_time, perf_counter - from random import shuffle - st = process_time() - args = Config(num_worker=16) - - if not Path(args.data_path.parent, args.data_path.name+'_split').exists(): - Path(args.data_path.parent, args.data_path.name + - '_split').mkdir(parents=True) - - p_ = [str(i) for i in args.data_path.glob('*')] - # 简单shuffle - shuffle(p_) - - pool = multiprocessing.Pool(args.num_worker) - for item in p_: - pool.apply_async(func=cut_sent, args=(item,)) - pool.close() - pool.join() - cost_time = process_time() - st - print('DONE!! cost time : %.5f' % cost_time) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2.py deleted file mode 100644 index 714fd3ab50443b8d15715b1cf5abd4eb517298c4..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/wav2vec/wav2vec2.py +++ /dev/null @@ -1,1016 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass, field -from typing import List, Tuple - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GradMultiply, - GumbelVectorQuantizer, - LayerNorm, - MultiheadAttention, - SamePad, - TransposeLast, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import buffered_arange, index_put, is_xla_tensor - - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"]) - - -@dataclass -class Wav2Vec2Config(FairseqDataclass): - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. default has a single group norm with d " - "groups in the first conv block, whereas layer_norm has layer norms in " - "every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, metadata={"help": "dropout probability for the transformer"} - ) - attention_dropout: float = field( - default=0.1, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - encoder_layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a tarnsformer layer"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={"help": "dropout to apply to the features (after feat extr)"}, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many dimensions." 
- "set to encoder_embed_dim is <= 0" - }, - ) - layer_norm_first: bool = field( - default=False, metadata={"help": "apply layernorm first in the transformer"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": "string describing convolutional feature extraction layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - quantize_targets: bool = field( - default=False, metadata={"help": "use quantized targets"} - ) - quantize_input: bool = field( - default=False, metadata={"help": "use quantized inputs"} - ) - same_quantizer: bool = field( - default=False, metadata={"help": "use same quantizer for inputs and targets"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, metadata={"help": "multiply feature extractor var grads by this"} - ) - quantizer_depth: int = field( - default=1, - metadata={"help": "number of quantizer layers"}, - ) - quantizer_factor: int = field( - default=3, - metadata={ - "help": "dimensionality increase for inner quantizer layers (if depth > 1)" - }, - ) - latent_vars: int = field( - default=320, - metadata={"help": "number of latent variables V in each group of the codebook"}, - ) - latent_groups: int = field( - default=2, - metadata={"help": "number of groups G of latent variables in the codebook"}, - ) - latent_dim: int = field( - default=0, - metadata={ - "help": "if > 0, uses this dimensionality for latent variables. 
" - "otherwise uses final_dim / latent_groups" - }, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, metadata={"help": "probability of replacing a token with mask"} - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_before: bool = False - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - mask_channel_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # negative selection - num_negatives: int = field( - default=100, - metadata={"help": "number of negative examples from the same sample"}, - ) - negatives_from_everywhere: bool = field( - default=False, - metadata={"help": "sample negatives from everywhere, not just masked states"}, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "number of negative examples from the any sample"} - ) - codebook_negatives: int = field( - default=0, metadata={"help": "number of negative examples codebook"} - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={"help": "number of filters for convolutional positional embeddings"}, - ) - conv_pos_groups: int = field( - default=16, - metadata={"help": "number of groups for convolutional positional embedding"}, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling. 
" - "can be tuple of 3 values (start, end, decay)" - }, - ) - - -@register_model("wav2vec2", dataclass=Wav2Vec2Config) -class Wav2Vec2Model(BaseFairseqModel): - def __init__(self, cfg: Wav2Vec2Config): - super().__init__() - self.cfg = cfg - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_before = cfg.mask_channel_before - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - - self.quantizer = None - self.input_quantizer = None - - self.n_negatives = cfg.num_negatives - self.cross_sample_negatives = cfg.cross_sample_negatives - self.codebook_negatives = cfg.codebook_negatives - self.negatives_from_everywhere = cfg.negatives_from_everywhere - - self.logit_temp = cfg.logit_temp - - final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - - if cfg.quantize_targets: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim - self.quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_q = nn.Linear(vq_dim, final_dim) - else: - self.project_q = nn.Linear(self.embed, final_dim) - - if cfg.quantize_input: - if cfg.same_quantizer and self.quantizer is not None: - vq_dim = final_dim - self.input_quantizer = self.quantizer - else: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim - self.input_quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2Config, 
task=None): - """Build a new model instance.""" - - return cls(cfg) - - def apply_mask( - self, - x, - padding_mask, - mask_indices=None, - mask_channel_indices=None, - ): - B, T, C = x.shape - - if self.mask_channel_prob > 0 and self.mask_channel_before: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - if self.mask_prob > 0: - if mask_indices is None: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x = index_put(x, mask_indices, self.mask_emb) - else: - mask_indices = None - - if self.mask_channel_prob > 0 and not self.mask_channel_before: - if mask_channel_indices is None: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x = index_put(x, mask_channel_indices, 0) - - return x, mask_indices - - def sample_negatives(self, y, num, padding_count=None): - - if self.n_negatives == 0 and self.cross_sample_negatives == 0: - return y.new(0) - - bsz, tsz, fsz = y.shape - y = y.view(-1, fsz) # BTC => (BxT)C - - # FIXME: what happens if padding_count is specified? 
- cross_high = tsz * bsz - high = tsz - (padding_count or 0) - with torch.no_grad(): - assert high > 1, f"{bsz,tsz,fsz}" - - if self.n_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * num) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * num), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[neg_idxs.view(-1)] - negs = negs.view( - bsz, num, self.n_negatives + self.cross_sample_negatives, fsz - ).permute( - 2, 0, 1, 3 - ) # to NxBxTxC - return negs, neg_idxs - - def compute_preds(self, x, y, negatives): - - neg_is_pos = (y == negatives).all(-1) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) - - logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x) - - logits = logits / self.logit_temp - - if is_xla_tensor(logits) or neg_is_pos.any(): - fillval = -float(2 ** 30) - if not hasattr(self, "_inftensor"): - self._inftensor = ( - torch.tensor(fillval).to(x.device) - if is_xla_tensor(logits) - else float("-inf") - ) - logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor) - - return logits - - def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor): - """ - Computes the output length of the convolutional layers - """ - - def _conv_out_length(input_length, kernel_size, stride): - return torch.floor((input_length - kernel_size) / stride + 1) - - conv_cfg_list = eval(self.cfg.conv_feature_layers) - - for i in range(len(conv_cfg_list)): - input_lengths = _conv_out_length( - input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2] - ) - - return input_lengths.to(torch.long) - - def forward( - self, - source, - padding_mask=None, - mask=True, - features_only=False, - layer=None, - mask_indices=None, - mask_channel_indices=None, - padding_count=None, - ): - - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None and padding_mask.any(): - input_lengths = (1 - padding_mask.long()).sum(-1) - # apply conv formula to get real output_lengths - output_lengths = self._get_feat_extract_output_lengths(input_lengths) - - padding_mask = torch.zeros( - features.shape[:2], dtype=features.dtype, device=features.device - ) - - # these two operations makes sure that all values - # before the output lengths indices are attended to - padding_mask[ - ( - torch.arange(padding_mask.shape[0], device=padding_mask.device), - output_lengths - 1, - ) - ] = 1 - padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool() - else: - padding_mask = None - - if self.post_extract_proj is not None: - features = 
self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - num_vars = None - code_ppl = None - prob_ppl = None - curr_temp = None - - if self.input_quantizer: - q = self.input_quantizer(features, produce_targets=False) - features = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - features = self.project_inp(features) - - if mask: - x, mask_indices = self.apply_mask( - features, - padding_mask, - mask_indices=mask_indices, - mask_channel_indices=mask_channel_indices, - ) - if not is_xla_tensor(x) and mask_indices is not None: - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. - y = unmasked_features[mask_indices].view( - unmasked_features.size(0), -1, unmasked_features.size(-1) - ) - else: - y = unmasked_features - else: - x = features - y = unmasked_features - mask_indices = None - - x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer) - - if features_only: - return { - "x": x, - "padding_mask": padding_mask, - "features": unmasked_features, - "layer_results": layer_results, - } - - if self.quantizer: - q = self.quantizer(y, produce_targets=False) - y = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - - y = self.project_q(y) - - if self.negatives_from_everywhere: - neg_cands = self.quantizer(unmasked_features, produce_targets=False)[ - "x" - ] - negs, _ = self.sample_negatives( - neg_cands, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if self.codebook_negatives > 0: - cb_negs = self.quantizer.sample_from_codebook( - y.size(0) * y.size(1), self.codebook_negatives - ) - cb_negs = cb_negs.view( - self.codebook_negatives, y.size(0), y.size(1), -1 - ) # order doesnt matter - cb_negs = self.project_q(cb_negs) - negs = torch.cat([negs, cb_negs], dim=0) - else: - y = self.project_q(y) - - if self.negatives_from_everywhere: - negs, _ = self.sample_negatives( - unmasked_features, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if not is_xla_tensor(x): - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. 
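-            # Only the masked timesteps enter the contrastive loss: `x` is
-            # gathered down to (B, num_masked, C) so each prediction is scored
-            # against its (quantized) target `y` and the sampled negatives in
-            # compute_preds().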
- x = x[mask_indices].view(x.size(0), -1, x.size(-1)) - - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - - x = self.final_proj(x) - x = self.compute_preds(x, y, negs) - - result = { - "x": x, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - - if prob_ppl is not None: - result["prob_perplexity"] = prob_ppl - result["code_perplexity"] = code_ppl - result["num_vars"] = num_vars - result["temp"] = curr_temp - - return result - - def quantize(self, x): - assert self.quantizer is not None - x = self.feature_extractor(x) - x = x.transpose(1, 2) - x = self.layer_norm(x) - return self.quantizer.forward_idx(x) - - def extract_features(self, source, padding_mask, mask=False, layer=None): - res = self.forward( - source, padding_mask, mask=mask, features_only=True, layer=layer - ) - return res - - def get_logits(self, net_output): - logits = net_output["x"] - logits = logits.transpose(0, 2) - logits = logits.reshape(-1, logits.size(-1)) - return logits - - def get_targets(self, sample, net_output, expand_steps=True): - x = net_output["x"] - return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long) - - def get_extra_losses(self, net_output): - pen = [] - - if "prob_perplexity" in net_output: - pen.append( - (net_output["num_vars"] - net_output["prob_perplexity"]) - / net_output["num_vars"] - ) - - if "features_pen" in net_output: - pen.append(net_output["features_pen"]) - - return pen - - def remove_pretraining_modules(self): - self.quantizer = None - self.project_q = None - self.target_glu = None - self.final_proj = None - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers: List[Tuple[int, int, int]], - dropout: float = 0.0, - mode: str = "default", - conv_bias: bool = False, - ): - super().__init__() - - assert mode in {"default", "layer_norm"} - - def block( - n_in, - n_out, - k, - stride, - is_layer_norm=False, - is_group_norm=False, - conv_bias=False, - ): - def make_conv(): - conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias) - nn.init.kaiming_normal_(conv.weight) - return conv - - assert ( - is_layer_norm and is_group_norm - ) == False, "layer norm and group norm are exclusive" - - if is_layer_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=True), - TransposeLast(), - ), - nn.GELU(), - ) - elif is_group_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - Fp32GroupNorm(dim, dim, affine=True), - nn.GELU(), - ) - else: - return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU()) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for i, cl in enumerate(conv_layers): - assert len(cl) == 3, "invalid conv definition: " + str(cl) - (dim, k, stride) = cl - - self.conv_layers.append( - block( - in_d, - dim, - k, - stride, - is_layer_norm=mode == "layer_norm", - is_group_norm=mode == "default" and i == 0, - conv_bias=conv_bias, - ) - ) - in_d = dim - - def forward(self, x): - - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - x = conv(x) - - return x - - -class TransformerEncoder(nn.Module): - def __init__(self, args): - super().__init__() - - self.dropout = args.dropout - self.embedding_dim = args.encoder_embed_dim - - self.pos_conv = nn.Conv1d( - self.embedding_dim, - self.embedding_dim, - kernel_size=args.conv_pos, - padding=args.conv_pos // 2, - groups=args.conv_pos_groups, - ) - dropout = 0 - std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * 
self.embedding_dim)) - nn.init.normal_(self.pos_conv.weight, mean=0, std=std) - nn.init.constant_(self.pos_conv.bias, 0) - - self.pos_conv = nn.utils.weight_norm(self.pos_conv, name="weight", dim=2) - self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU()) - - self.layers = nn.ModuleList( - [ - TransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=self.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - activation_fn=args.activation_fn, - layer_norm_first=args.layer_norm_first, - ) - for _ in range(args.encoder_layers) - ] - ) - - self.layer_norm_first = args.layer_norm_first - self.layer_norm = LayerNorm(self.embedding_dim) - self.layerdrop = args.encoder_layerdrop - - self.apply(init_bert_params) - - def forward(self, x, padding_mask=None, layer=None): - x, layer_results = self.extract_features(x, padding_mask, layer) - - if self.layer_norm_first and layer is None: - x = self.layer_norm(x) - - return x, layer_results - - def extract_features(self, x, padding_mask=None, tgt_layer=None): - - if padding_mask is not None: - x = index_put(x, padding_mask, 0) - - x_conv = self.pos_conv(x.transpose(1, 2)) - x_conv = x_conv.transpose(1, 2) - x = x + x_conv - - if not self.layer_norm_first: - x = self.layer_norm(x) - - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - layer_results = [] - r = None - for i, layer in enumerate(self.layers): - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False) - if tgt_layer is not None: - layer_results.append((x, z)) - if i == tgt_layer: - r = x - break - - if r is not None: - x = r - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, layer_results - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.args.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - -class TransformerSentenceEncoderLayer(nn.Module): - """ - Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained - models. 
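-    LayerNorm placement is controlled by ``layer_norm_first``: pre-norm
-    (normalize before the attention/FFN blocks) when True, post-norm otherwise.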
- """ - - def __init__( - self, - embedding_dim: float = 768, - ffn_embedding_dim: float = 3072, - num_attention_heads: float = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - layer_norm_first: bool = False, - ) -> None: - - super().__init__() - # Initialize parameters - self.embedding_dim = embedding_dim - self.dropout = dropout - self.activation_dropout = activation_dropout - - # Initialize blocks - self.activation_fn = utils.get_activation_fn(activation_fn) - self.self_attn = MultiheadAttention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - self_attention=True, - ) - - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(self.activation_dropout) - self.dropout3 = nn.Dropout(dropout) - - self.layer_norm_first = layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = LayerNorm(self.embedding_dim) - self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim) - self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim) - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = LayerNorm(self.embedding_dim) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: torch.Tensor = None, - self_attn_padding_mask: torch.Tensor = None, - need_weights: bool = False, - att_args=None, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer imlementation. - """ - residual = x - - if self.layer_norm_first: - x = self.self_attn_layer_norm(x) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - attn_mask=self_attn_mask, - ) - x = self.dropout1(x) - x = residual + x - - residual = x - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - else: - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - ) - - x = self.dropout1(x) - x = residual + x - - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - x = self.final_layer_norm(x) - - return x, attn diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/speech_generator.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/speech_generator.py deleted file mode 100644 index 8086e34d2b56fa808d0905b1a00e87e6736fcf04..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/speech_generator.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
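-# Speech synthesis generators for fairseq text-to-speech models:
-# SpeechGenerator holds the shared vocoder and global-CMVN denormalization
-# logic, AutoRegressiveSpeechGenerator decodes frame by frame until the EOS
-# probability crosses a threshold, NonAutoregressiveSpeechGenerator predicts
-# all frames (and durations) in a single pass, and
-# TeacherForcingAutoRegressiveSpeechGenerator conditions on ground-truth
-# previous frames.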
- -import torch -import numpy as np - -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig - - -class SpeechGenerator(object): - def __init__(self, model, vocoder, data_cfg: S2TDataConfig): - self.model = model - self.vocoder = vocoder - stats_npz_path = data_cfg.global_cmvn_stats_npz - self.gcmvn_stats = None - if stats_npz_path is not None: - self.gcmvn_stats = np.load(stats_npz_path) - - def gcmvn_denormalize(self, x): - # x: B x T x C - if self.gcmvn_stats is None: - return x - mean = torch.from_numpy(self.gcmvn_stats["mean"]).to(x) - std = torch.from_numpy(self.gcmvn_stats["std"]).to(x) - assert len(x.shape) == 3 and mean.shape[0] == std.shape[0] == x.shape[2] - x = x * std.view(1, 1, -1).expand_as(x) - return x + mean.view(1, 1, -1).expand_as(x) - - def get_waveform(self, feat): - # T x C -> T - return None if self.vocoder is None else self.vocoder(feat).squeeze(0) - - -class AutoRegressiveSpeechGenerator(SpeechGenerator): - def __init__( - self, model, vocoder, data_cfg, max_iter: int = 6000, - eos_prob_threshold: float = 0.5, - ): - super().__init__(model, vocoder, data_cfg) - self.max_iter = max_iter - self.eos_prob_threshold = eos_prob_threshold - - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size() - n_frames_per_step = model.decoder.n_frames_per_step - out_dim = model.decoder.out_dim - raw_dim = out_dim // n_frames_per_step - - # initialize - encoder_out = model.forward_encoder(src_tokens, src_lengths, - speaker=sample["speaker"]) - incremental_state = {} - feat, attn, eos_prob = [], [], [] - finished = src_tokens.new_zeros((bsz,)).bool() - out_lens = src_lengths.new_zeros((bsz,)).long().fill_(self.max_iter) - - prev_feat_out = encoder_out["encoder_out"][0].new_zeros(bsz, 1, out_dim) - for step in range(self.max_iter): - cur_out_lens = out_lens.clone() - cur_out_lens.masked_fill_(cur_out_lens.eq(self.max_iter), step + 1) - _, cur_eos_out, cur_extra = model.forward_decoder( - prev_feat_out, encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=cur_out_lens, speaker=sample["speaker"], **kwargs - ) - cur_eos_prob = torch.sigmoid(cur_eos_out).squeeze(2) - feat.append(cur_extra['feature_out']) - attn.append(cur_extra['attn']) - eos_prob.append(cur_eos_prob) - - cur_finished = (cur_eos_prob.squeeze(1) > self.eos_prob_threshold) - out_lens.masked_fill_((~finished) & cur_finished, step + 1) - finished = finished | cur_finished - if finished.sum().item() == bsz: - break - prev_feat_out = cur_extra['feature_out'] - - feat = torch.cat(feat, dim=1) - feat = model.decoder.postnet(feat) + feat - eos_prob = torch.cat(eos_prob, dim=1) - attn = torch.cat(attn, dim=2) - alignment = attn.max(dim=1)[1] - - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - out_lens = out_lens * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :out_len], - 'eos_prob': eos_prob[b, :out_len], - 'attn': attn[b, :, :out_len], - 'alignment': alignment[b, :out_len], - 'waveform': self.get_waveform(feat[b, :out_len]), - } - for b, out_len in zip(range(bsz), out_lens) - ] - - if has_targ: - assert sample["target"].size(-1) == out_dim - tgt_feats = sample["target"].view(bsz, 
-1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class NonAutoregressiveSpeechGenerator(SpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - bsz, max_src_len = sample["net_input"]["src_tokens"].size() - n_frames_per_step = model.encoder.n_frames_per_step - out_dim = model.encoder.out_dim - raw_dim = out_dim // n_frames_per_step - - feat, out_lens, log_dur_out, _, _ = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=sample["target_lengths"], - speaker=sample["speaker"] - ) - - feat = feat.view(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - dur_out = torch.clamp( - torch.round(torch.exp(log_dur_out) - 1).long(), min=0 - ) - - def get_dur_plot_data(d): - r = [] - for i, dd in enumerate(d): - r += [i + 1] * dd.item() - return r - - out_lens = out_lens * n_frames_per_step - finalized = [ - { - 'feature': feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]), - 'waveform': self.get_waveform( - feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]) - ), - 'attn': feat.new_tensor(get_dur_plot_data(dur_out[b])), - } - for b, l in zip(range(bsz), out_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class TeacherForcingAutoRegressiveSpeechGenerator(AutoRegressiveSpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - prev_out_tokens = sample["net_input"]["prev_output_tokens"] - tgt_lens = sample["target_lengths"] - n_frames_per_step = model.decoder.n_frames_per_step - raw_dim = model.decoder.out_dim // n_frames_per_step - bsz = src_tokens.shape[0] - - feat, eos_prob, extra = model( - src_tokens, src_lens, prev_out_tokens, incremental_state=None, - target_lengths=tgt_lens, speaker=sample["speaker"] - ) - - attn = extra["attn"] # B x T_s x T_t - alignment = attn.max(dim=1)[1] - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - tgt_lens = sample["target_lengths"] * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :tgt_len], - 'eos_prob': eos_prob[b, :tgt_len], - 'attn': attn[b, :, :tgt_len], - 'alignment': alignment[b, :tgt_len], - 'waveform': self.get_waveform(feat[b, :tgt_len]), - } - for b, tgt_len in zip(range(bsz), tgt_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized diff 
--git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/audio_processing.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/audio_processing.py deleted file mode 100644 index 3a4467355952fefaba117b6014864139ac319c6b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/audio_processing.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import numpy as np -from scipy.signal import get_window -import librosa.util as librosa_util - - -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - - n_frames : int > 0 - The number of analysis frames - - hop_length : int > 0 - The number of samples to advance between frames - - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - - n_fft : int > 0 - The length of each analysis frame. - - dtype : np.dtype - The data type of the output - - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C diff --git a/spaces/Harveenchadha/oiTrans/legacy/run_inference.sh b/spaces/Harveenchadha/oiTrans/legacy/run_inference.sh deleted file mode 100644 index ff582a6c49d015cf36c82e8f20a755f6d1418ed8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/legacy/run_inference.sh +++ /dev/null @@ -1,80 +0,0 @@ -src_lang=${1:-hi} -tgt_lang=${2:-en} -bucket_path=${3:-gs://ai4b-anuvaad-nmt/baselines/transformer-base/baselines-${src_lang}-${tgt_lang}} - -expdir=../baselines/baselines-${src_lang}-${tgt_lang} - -if [[ -d $expdir ]] -then - echo "$expdir exists on your filesystem. 
Please delete this if you have made some changes to the bucket files and trying to redownload" -else - mkdir -p $expdir - mkdir -p $expdir/model - cd ../baselines - gsutil -m cp -r $bucket_path/vocab $expdir - gsutil -m cp -r $bucket_path/final_bin $expdir - gsutil -m cp $bucket_path/model/checkpoint_best.pt $expdir/model - cd ../indicTrans -fi - - -if [ $src_lang == 'hi' ] || [ $tgt_lang == 'hi' ]; then - #TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 sap-documentation-benchmark all) - TEST_SETS=( wat2021-devtest wat2020-devtest wat-2018 wmt-news ) -elif [ $src_lang == 'ta' ] || [ $tgt_lang == 'ta' ]; then - # TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) - TEST_SETS=( wat2021-devtest wat2020-devtest wat-2018 wmt-news ufal-ta) -elif [ $src_lang == 'bn' ] || [ $tgt_lang == 'bn' ]; then - # TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal tico19 all) - TEST_SETS=( wat2021-devtest wat2020-devtest wat-2018) -elif [ $src_lang == 'gu' ] || [ $tgt_lang == 'gu' ]; then - # TEST_SETS=( wmt-news wat2021-devtest wat2020-devtest all) - TEST_SETS=( wat2021-devtest wat2020-devtest wmt-news ) -elif [ $src_lang == 'as' ] || [ $tgt_lang == 'as' ]; then - TEST_SETS=( pmi ) -elif [ $src_lang == 'kn' ] || [ $tgt_lang == 'kn' ]; then - # TEST_SETS=( wat2021-devtest anuvaad-legal all) - TEST_SETS=( wat2021-devtest ) -elif [ $src_lang == 'ml' ] || [ $tgt_lang == 'ml' ]; then - # TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all) - TEST_SETS=( wat2021-devtest wat2020-devtest wat-2018) -elif [ $src_lang == 'mr' ] || [ $tgt_lang == 'mr' ]; then - # TEST_SETS=( wat2021-devtest wat2020-devtest all) - TEST_SETS=( wat2021-devtest wat2020-devtest ) -elif [ $src_lang == 'or' ] || [ $tgt_lang == 'or' ]; then - TEST_SETS=( wat2021-devtest ) -elif [ $src_lang == 'pa' ] || [ $tgt_lang == 'pa' ]; then - TEST_SETS=( wat2021-devtest ) -elif [ $src_lang == 'te' ] || [ $tgt_lang == 'te' ]; then - # TEST_SETS=( wat2021-devtest wat2020-devtest anuvaad-legal all ) - TEST_SETS=( wat2021-devtest wat2020-devtest wat-2018) -fi - -if [ $src_lang == 'en' ]; then - indic_lang=$tgt_lang -else - indic_lang=$src_lang -fi - - -for tset in ${TEST_SETS[@]};do - echo $tset $src_lang $tgt_lang - if [ $tset == 'wat2021-devtest' ]; then - SRC_FILE=${expdir}/benchmarks/$tset/test.$src_lang - REF_FILE=${expdir}/benchmarks/$tset/test.$tgt_lang - else - SRC_FILE=${expdir}/benchmarks/$tset/en-${indic_lang}/test.$src_lang - REF_FILE=${expdir}/benchmarks/$tset/en-${indic_lang}/test.$tgt_lang - fi - RESULTS_DIR=${expdir}/results/$tset - - mkdir -p $RESULTS_DIR - - bash translate.sh $SRC_FILE $RESULTS_DIR/${src_lang}-${tgt_lang} $src_lang $tgt_lang $expdir $REF_FILE - # for newline between different outputs - echo -done -# send the results to the bucket -gsutil -m cp -r $expdir/results $bucket_path -# clear up the space in the instance -# rm -r $expdir \ No newline at end of file diff --git a/spaces/Harveenchadha/oiTrans/scripts/remove_large_sentences.py b/spaces/Harveenchadha/oiTrans/scripts/remove_large_sentences.py deleted file mode 100644 index a045f95df1af2d327104e73ae4ed90558d115058..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/scripts/remove_large_sentences.py +++ /dev/null @@ -1,44 +0,0 @@ -from tqdm import tqdm -import sys - - -def remove_large_sentences(src_path, tgt_path): - count = 0 - new_src_lines = [] - new_tgt_lines = [] - src_num_lines = sum(1 for line in open(src_path, "r", encoding="utf-8")) - tgt_num_lines = sum(1 for line in 
open(tgt_path, "r", encoding="utf-8")) - assert src_num_lines == tgt_num_lines - with open(src_path, encoding="utf-8") as f1, open(tgt_path, encoding="utf-8") as f2: - for src_line, tgt_line in tqdm(zip(f1, f2), total=src_num_lines): - src_tokens = src_line.strip().split(" ") - tgt_tokens = tgt_line.strip().split(" ") - if len(src_tokens) > 200 or len(tgt_tokens) > 200: - count += 1 - continue - new_src_lines.append(src_line) - new_tgt_lines.append(tgt_line) - return count, new_src_lines, new_tgt_lines - - -def create_txt(outFile, lines, add_newline=False): - outfile = open("{0}".format(outFile), "w", encoding="utf-8") - for line in lines: - if add_newline: - outfile.write(line + "\n") - else: - outfile.write(line) - outfile.close() - - -if __name__ == "__main__": - - src_path = sys.argv[1] - tgt_path = sys.argv[2] - new_src_path = sys.argv[3] - new_tgt_path = sys.argv[4] - - count, new_src_lines, new_tgt_lines = remove_large_sentences(src_path, tgt_path) - print(f'{count} lines removed due to seq_len > 200') - create_txt(new_src_path, new_src_lines) - create_txt(new_tgt_path, new_tgt_lines) diff --git a/spaces/HgMenon/Transcribe_V0.2/src/__init__.py b/spaces/HgMenon/Transcribe_V0.2/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HighCWu/GPEN/retinaface/layers/__init__.py b/spaces/HighCWu/GPEN/retinaface/layers/__init__.py deleted file mode 100644 index 53a3f4b5160995d93bc7911e808b3045d74362c9..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GPEN/retinaface/layers/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .functions import * -from .modules import * diff --git a/spaces/ICML2022/OFA/data/ofa_dataset.py b/spaces/ICML2022/OFA/data/ofa_dataset.py deleted file mode 100644 index 02d856c28016b3a1c020fed483afe0aa797bf50f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/data/ofa_dataset.py +++ /dev/null @@ -1,74 +0,0 @@ -import logging -import re -import torch.utils.data -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class OFADataset(FairseqDataset): - def __init__(self, split, dataset, bpe, src_dict, tgt_dict): - self.split = split - self.dataset = dataset - self.bpe = bpe - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - self.bos = src_dict.bos() - self.eos = src_dict.eos() - self.pad = src_dict.pad() - self.bos_item = torch.LongTensor([self.bos]) - self.eos_item = torch.LongTensor([self.eos]) - - def __len__(self): - return len(self.dataset) - - def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True): - s = self.tgt_dict.encode_line( - line=self.bpe.encode(text) if use_bpe else text, - add_if_not_exist=False, - append_eos=False - ).long() - if length is not None: - s = s[:length] - if append_bos: - s = torch.cat([self.bos_item, s]) - if append_eos: - s = torch.cat([s, self.eos_item]) - return s - - def pre_question(self, question, max_ques_words): - question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ') - - question = re.sub( - r"\s{2,}", - ' ', - question, - ) - question = question.rstrip('\n') - question = question.strip(' ') - - # truncate question - question_words = question.split(' ') - if len(question_words) > max_ques_words: - question = ' '.join(question_words[:max_ques_words]) - - return question - - def pre_caption(self, caption, max_words): - caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('', 
'person') - - caption = re.sub( - r"\s{2,}", - ' ', - caption, - ) - caption = caption.rstrip('\n') - caption = caption.strip(' ') - - # truncate caption - caption_words = caption.split(' ') - if len(caption_words) > max_words: - caption = ' '.join(caption_words[:max_words]) - - return caption diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adafactor.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/adafactor.py deleted file mode 100644 index c969b9fbc0d229a25f2046ec67c53c57a433814b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/adafactor.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adafactor") -class FairseqAdafactor(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = Adafactor(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adafactor-eps', default='(1e-30, 1e-3)', metavar="E", - help='epsilons for Adafactor optimizer') - parser.add_argument('--clip-threshold', type=float, default=1.0, metavar="C", - help='threshold for clipping update root mean square') - parser.add_argument('--decay-rate', type=float, default=-0.8, metavar="D", - help='decay rate of the second moment estimator') - parser.add_argument('--beta1', type=float, default=None, metavar="B", - help='beta for first moment estimator. Optional') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--scale-parameter', action='store_true', - help='scale learning rate by root mean square of parameter') - parser.add_argument('--relative-step', action='store_true', - help='set learning rate to inverse square root of timestep,' - 'otherwise use external learning rate') - parser.add_argument('--warmup-init', action='store_true', - help='use relative step for warm-up learning rate schedule') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - Note : Convergence issues empirically observed with fp16 on. - Might require search for appropriate configuration. - """ - return { - "lr": self.args.lr[0], - "eps": eval(self.args.adafactor_eps), - "clip_threshold": self.args.clip_threshold, - "decay_rate": self.args.decay_rate, - "beta1": self.args.beta1, - "weight_decay": self.args.weight_decay, - "scale_parameter": self.args.scale_parameter, # defaults to False - "relative_step": self.args.relative_step, # defaults to False - "warmup_init": self.args.warmup_init, - } - - -class Adafactor(torch.optim.Optimizer): - """Implements Adafactor algorithm. - - This implementation is based on: - `Adafactor: Adaptive Learning Rates with Sublinear Memory Cost` - (see https://arxiv.org/abs/1804.04235) - - Note that this optimizer internally adjusts the learning rate - depending on the *scale_parameter*, *relative_step* and - *warmup_init* options. 
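-    With ``relative_step=True`` the step size is min(1e-2, 1/sqrt(step))
-    (min(1e-6 * step, 1/sqrt(step)) when ``warmup_init`` is set), multiplied
-    by max(eps[1], RMS(param)) when ``scale_parameter=True``.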
To use a manual (external) learning rate - schedule you should set `scale_parameter=False` and - `relative_step=False`. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): external learning rate (default: None) - eps (tuple[float, float]): regularization constans for square gradient - and parameter scale respectively (default: (1e-30, 1e-3)) - clip_threshold (float): threshold of root mean square of - final gradient update (default: 1.0) - decay_rate (float): coefficient used to compute running averages of square - gradient (default: -0.8) - beta1 (float): coefficient used for computing running averages of gradient - (default: None) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - scale_parameter (bool): if True, learning rate is scaled by root mean square of - parameter (default: True) - relative_step (bool): if True, time-dependent learning rate is computed - instead of external learning rate (default: True) - warmup_init (bool): time-dependent learning rate computation depends on - whether warm-up initialization is being used (default: False) - """ - - def __init__( - self, - params, - lr=None, - eps=(1e-30, 1e-3), - clip_threshold=1.0, - decay_rate=-0.8, - beta1=None, - weight_decay=0.0, - scale_parameter=True, - relative_step=True, - warmup_init=False, - ): - if lr is not None and relative_step: - raise ValueError("Cannot combine manual lr and relative_step options") - if warmup_init and not relative_step: - raise ValueError("warmup_init requires relative_step=True") - - defaults = dict( - lr=lr, - eps=eps, - clip_threshold=clip_threshold, - decay_rate=decay_rate, - beta1=beta1, - weight_decay=weight_decay, - scale_parameter=scale_parameter, - relative_step=relative_step, - warmup_init=warmup_init, - ) - super(Adafactor, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return False - - def _get_lr(self, param_group, param_state): - rel_step_sz = param_group["lr"] - if param_group["relative_step"]: - min_step = ( - 1e-6 * param_state["step"] if param_group["warmup_init"] else 1e-2 - ) - rel_step_sz = min(min_step, 1.0 / math.sqrt(param_state["step"])) - param_scale = 1.0 - if param_group["scale_parameter"]: - param_scale = max(param_group["eps"][1], param_state["RMS"]) - return param_scale * rel_step_sz - - def _get_options(self, param_group, param_shape): - factored = len(param_shape) >= 2 - use_first_moment = param_group["beta1"] is not None - return factored, use_first_moment - - def _rms(self, tensor): - return tensor.norm(2) / (tensor.numel() ** 0.5) - - def _approx_sq_grad(self, exp_avg_sq_row, exp_avg_sq_col): - r_factor = ( - (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)) - .rsqrt_() - .unsqueeze(-1) - ) - c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt() - return torch.mul(r_factor, c_factor) - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
- """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data - if grad.dtype in {torch.float16, torch.bfloat16}: - grad = grad.float() - if grad.is_sparse: - raise RuntimeError("Adafactor does not support sparse gradients.") - - state = self.state[p] - grad_shape = grad.shape - - factored, use_first_moment = self._get_options(group, grad_shape) - # State Initialization - if len(state) == 0: - state["step"] = 0 - - if use_first_moment: - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(grad) - if factored: - state["exp_avg_sq_row"] = torch.zeros(grad_shape[:-1]).to(grad) - state["exp_avg_sq_col"] = torch.zeros( - grad_shape[:-2] + grad_shape[-1:] - ).to(grad) - else: - state["exp_avg_sq"] = torch.zeros_like(grad) - - state["RMS"] = 0 - else: - if use_first_moment: - state["exp_avg"] = state["exp_avg"].to(grad) - if factored: - state["exp_avg_sq_row"] = state["exp_avg_sq_row"].to(grad) - state["exp_avg_sq_col"] = state["exp_avg_sq_col"].to(grad) - else: - state["exp_avg_sq"] = state["exp_avg_sq"].to(grad) - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state["step"] += 1 - state["RMS"] = self._rms(p_data_fp32) - group["lr"] = self._get_lr(group, state) - - beta2t = 1.0 - math.pow(state["step"], group["decay_rate"]) - update = (grad ** 2) + group["eps"][0] - if factored: - exp_avg_sq_row = state["exp_avg_sq_row"] - exp_avg_sq_col = state["exp_avg_sq_col"] - - exp_avg_sq_row.mul_(beta2t).add_( - update.mean(dim=-1), alpha=1.0 - beta2t - ) - exp_avg_sq_col.mul_(beta2t).add_( - update.mean(dim=-2), alpha=1.0 - beta2t - ) - - # Approximation of exponential moving average of square of gradient - update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) - update.mul_(grad) - else: - exp_avg_sq = state["exp_avg_sq"] - - exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t) - update = exp_avg_sq.rsqrt().mul_(grad) - - update.div_( - (self._rms(update) / group["clip_threshold"]).clamp_(min=1.0) - ) - update.mul_(group["lr"]) - - if use_first_moment: - exp_avg = state["exp_avg"] - exp_avg.mul_(group["beta1"]).add_(update, alpha=1 - group["beta1"]) - update = exp_avg - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.add_(-update) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py deleted file mode 100644 index 73c3c8ea3435d6050401c45e737e4ecf5662825c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
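-# Polynomial decay learning-rate schedule: after an optional linear warm-up of
-# `warmup_updates` steps, the LR decays from `lr` to `end_learning_rate` as
-#   lr(t) = (lr - end_lr) * (1 - (t - warmup) / (total - warmup)) ** power + end_lr
-# and is held at `end_learning_rate` once `total_num_update` is reached.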
- -from dataclasses import dataclass, field -from typing import Optional, List -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class PolynomialDecayLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_ratio: float = field( - default=0, - metadata={"help": "warmup ratio"}, - ) - force_anneal: Optional[int] = field( - default=None, - metadata={"help": "force annealing at specified epoch"}, - ) - end_learning_rate: float = field( - default=0.0, - metadata={"help": "learning rate to decay to"}, - ) - power: float = field( - default=1.0, - metadata={"help": "decay exponent"}, - ) - total_num_update: Optional[float] = field( - default=1000000, - metadata={"help": "total number of updates over which to decay learning rate"}, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("polynomial_decay", dataclass=PolynomialDecayLRScheduleConfig) -class PolynomialDecayLRSchedule(FairseqLRScheduler): - """Decay the LR on a fixed schedule.""" - - def __init__(self, cfg: PolynomialDecayLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - - assert cfg.total_num_update > 0 - # set defaults - cfg.warmup_updates = getattr(cfg, 'warmup_updates', 0) or 0 - - self.lr = cfg.lr[0] - self.warmup_updates = cfg.warmup_updates - if self.warmup_updates > 0: - self.warmup_factor = 1.0 / self.warmup_updates - else: - self.warmup_factor = 1 - self.end_learning_rate = cfg.end_learning_rate - self.total_num_update = cfg.total_num_update - self.power = cfg.power - self.optimizer.set_lr(self.warmup_factor * self.lr) - - def get_next_lr(self, epoch): - lrs = self.cfg.lr - if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal: - # use fixed LR schedule - next_lr = lrs[min(epoch, len(lrs) - 1)] - else: - # annneal based on lr_shrink - next_lr = self.optimizer.get_lr() - return next_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.warmup_factor * self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if self.warmup_updates > 0 and num_updates <= self.warmup_updates: - self.warmup_factor = num_updates / float(self.warmup_updates) - lr = self.warmup_factor * self.lr - elif num_updates >= self.total_num_update: - lr = self.end_learning_rate - else: - warmup = self.warmup_updates - lr_range = self.lr - self.end_learning_rate - pct_remaining = 1 - (num_updates - warmup) / (self.total_num_update - warmup) - lr = lr_range * pct_remaining ** (self.power) + self.end_learning_rate - self.optimizer.set_lr(lr) - return self.optimizer.get_lr() - - def reinit(self, total_num_update, num_updates): - # only enable this when set warmup_ratio - if self.cfg.warmup_ratio <= 0: - return - # re init this according to the real number of updates - self.total_num_update = total_num_update - self.warmup_updates = int(self.total_num_update * self.cfg.warmup_ratio) - if num_updates > 0: - self.warmup_factor = min(1.0, num_updates / float(self.warmup_updates)) - self.step_update(num_updates) - else: - self.warmup_factor = 1.0 / self.warmup_updates - self.optimizer.set_lr(self.warmup_factor * self.lr) - print('Total steps {}, warmup steps {}, warmup_factor 
{}'.format(self.total_num_update, self.warmup_updates, - self.warmup_factor)) \ No newline at end of file diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/demo/gradio_app.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/demo/gradio_app.py deleted file mode 100644 index 15e08323f485291df8b53eefd4691c087d7863f7..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/demo/gradio_app.py +++ /dev/null @@ -1,125 +0,0 @@ -import argparse -from functools import partial -import cv2 -import requests -import os -from io import BytesIO -from PIL import Image -import numpy as np -from pathlib import Path - - -import warnings - -import torch - -# prepare the environment -os.system("python setup.py build develop --user") -os.system("pip install packaging==21.3") -os.system("pip install gradio") - - -warnings.filterwarnings("ignore") - -import gradio as gr - -from groundingdino.models import build_model -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import clean_state_dict -from groundingdino.util.inference import annotate, load_image, predict -import groundingdino.datasets.transforms as T - -from huggingface_hub import hf_hub_download - - - -# Use this command for evaluate the GLIP-T model -config_file = "groundingdino/config/GroundingDINO_SwinT_OGC.py" -ckpt_repo_id = "ShilongLiu/GroundingDINO" -ckpt_filenmae = "groundingdino_swint_ogc.pth" - - -def load_model_hf(model_config_path, repo_id, filename, device='cpu'): - args = SLConfig.fromfile(model_config_path) - model = build_model(args) - args.device = device - - cache_file = hf_hub_download(repo_id=repo_id, filename=filename) - checkpoint = torch.load(cache_file, map_location='cpu') - log = model.load_state_dict(clean_state_dict(checkpoint['model']), strict=False) - print("Model loaded from {} \n => {}".format(cache_file, log)) - _ = model.eval() - return model - -def image_transform_grounding(init_image): - transform = T.Compose([ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - image, _ = transform(init_image, None) # 3, h, w - return init_image, image - -def image_transform_grounding_for_vis(init_image): - transform = T.Compose([ - T.RandomResize([800], max_size=1333), - ]) - image, _ = transform(init_image, None) # 3, h, w - return image - -model = load_model_hf(config_file, ckpt_repo_id, ckpt_filenmae) - -def run_grounding(input_image, grounding_caption, box_threshold, text_threshold): - init_image = input_image.convert("RGB") - original_size = init_image.size - - _, image_tensor = image_transform_grounding(init_image) - image_pil: Image = image_transform_grounding_for_vis(init_image) - - # run grounidng - boxes, logits, phrases = predict(model, image_tensor, grounding_caption, box_threshold, text_threshold, device='cpu') - annotated_frame = annotate(image_source=np.asarray(image_pil), boxes=boxes, logits=logits, phrases=phrases) - image_with_box = Image.fromarray(cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)) - - - return image_with_box - -if __name__ == "__main__": - - parser = argparse.ArgumentParser("Grounding DINO demo", add_help=True) - parser.add_argument("--debug", action="store_true", help="using debug mode") - parser.add_argument("--share", action="store_true", help="share the app") - args = parser.parse_args() - - block = gr.Blocks().queue() - with block: - gr.Markdown("# [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO)") - gr.Markdown("### Open-World 
Detection with Grounding DINO") - - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type="pil") - grounding_caption = gr.Textbox(label="Detection Prompt") - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - box_threshold = gr.Slider( - label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001 - ) - text_threshold = gr.Slider( - label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001 - ) - - with gr.Column(): - gallery = gr.outputs.Image( - type="pil", - # label="grounding results" - ).style(full_width=True, full_height=True) - # gallery = gr.Gallery(label="Generated images", show_label=False).style( - # grid=[1], height="auto", container=True, full_width=True, full_height=True) - - run_button.click(fn=run_grounding, inputs=[ - input_image, grounding_caption, box_threshold, text_threshold], outputs=[gallery]) - - - block.launch(server_name='0.0.0.0', server_port=7579, debug=args.debug, share=args.share) - diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/dfdnet_util.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/dfdnet_util.py deleted file mode 100644 index b4dc0ff738c76852e830b32fffbe65bffb5ddf50..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/dfdnet_util.py +++ /dev/null @@ -1,162 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.nn.utils.spectral_norm import spectral_norm - - -class BlurFunctionBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, kernel_flip): - ctx.save_for_backward(kernel, kernel_flip) - grad_input = F.conv2d(grad_output, kernel_flip, padding=1, groups=grad_output.shape[1]) - return grad_input - - @staticmethod - def backward(ctx, gradgrad_output): - kernel, _ = ctx.saved_tensors - grad_input = F.conv2d(gradgrad_output, kernel, padding=1, groups=gradgrad_output.shape[1]) - return grad_input, None, None - - -class BlurFunction(Function): - - @staticmethod - def forward(ctx, x, kernel, kernel_flip): - ctx.save_for_backward(kernel, kernel_flip) - output = F.conv2d(x, kernel, padding=1, groups=x.shape[1]) - return output - - @staticmethod - def backward(ctx, grad_output): - kernel, kernel_flip = ctx.saved_tensors - grad_input = BlurFunctionBackward.apply(grad_output, kernel, kernel_flip) - return grad_input, None, None - - -blur = BlurFunction.apply - - -class Blur(nn.Module): - - def __init__(self, channel): - super().__init__() - kernel = torch.tensor([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=torch.float32) - kernel = kernel.view(1, 1, 3, 3) - kernel = kernel / kernel.sum() - kernel_flip = torch.flip(kernel, [2, 3]) - - self.kernel = kernel.repeat(channel, 1, 1, 1) - self.kernel_flip = kernel_flip.repeat(channel, 1, 1, 1) - - def forward(self, x): - return blur(x, self.kernel.type_as(x), self.kernel_flip.type_as(x)) - - -def calc_mean_std(feat, eps=1e-5): - """Calculate mean and std for adaptive_instance_normalization. - - Args: - feat (Tensor): 4D tensor. - eps (float): A small value added to the variance to avoid - divide-by-zero. Default: 1e-5. - """ - size = feat.size() - assert len(size) == 4, 'The input feature should be 4D tensor.' 
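-    # Mean and std are computed per sample and per channel over the flattened
-    # spatial dimensions; `eps` keeps the variance strictly positive before
-    # the square root.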
- n, c = size[:2] - feat_var = feat.view(n, c, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(n, c, 1, 1) - feat_mean = feat.view(n, c, -1).mean(dim=2).view(n, c, 1, 1) - return feat_mean, feat_std - - -def adaptive_instance_normalization(content_feat, style_feat): - """Adaptive instance normalization. - - Adjust the reference features to have the similar color and illuminations - as those in the degradate features. - - Args: - content_feat (Tensor): The reference feature. - style_feat (Tensor): The degradate features. - """ - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - - -def AttentionBlock(in_channel): - return nn.Sequential( - spectral_norm(nn.Conv2d(in_channel, in_channel, 3, 1, 1)), nn.LeakyReLU(0.2, True), - spectral_norm(nn.Conv2d(in_channel, in_channel, 3, 1, 1))) - - -def conv_block(in_channels, out_channels, kernel_size=3, stride=1, dilation=1, bias=True): - """Conv block used in MSDilationBlock.""" - - return nn.Sequential( - spectral_norm( - nn.Conv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=((kernel_size - 1) // 2) * dilation, - bias=bias)), - nn.LeakyReLU(0.2), - spectral_norm( - nn.Conv2d( - out_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - dilation=dilation, - padding=((kernel_size - 1) // 2) * dilation, - bias=bias)), - ) - - -class MSDilationBlock(nn.Module): - """Multi-scale dilation block.""" - - def __init__(self, in_channels, kernel_size=3, dilation=(1, 1, 1, 1), bias=True): - super(MSDilationBlock, self).__init__() - - self.conv_blocks = nn.ModuleList() - for i in range(4): - self.conv_blocks.append(conv_block(in_channels, in_channels, kernel_size, dilation=dilation[i], bias=bias)) - self.conv_fusion = spectral_norm( - nn.Conv2d( - in_channels * 4, - in_channels, - kernel_size=kernel_size, - stride=1, - padding=(kernel_size - 1) // 2, - bias=bias)) - - def forward(self, x): - out = [] - for i in range(4): - out.append(self.conv_blocks[i](x)) - out = torch.cat(out, 1) - out = self.conv_fusion(out) + x - return out - - -class UpResBlock(nn.Module): - - def __init__(self, in_channel): - super(UpResBlock, self).__init__() - self.body = nn.Sequential( - nn.Conv2d(in_channel, in_channel, 3, 1, 1), - nn.LeakyReLU(0.2, True), - nn.Conv2d(in_channel, in_channel, 3, 1, 1), - ) - - def forward(self, x): - out = x + self.body(x) - return out diff --git a/spaces/InvisableClearCoat101/mistralai-Mistral-7B-v0.1/README.md b/spaces/InvisableClearCoat101/mistralai-Mistral-7B-v0.1/README.md deleted file mode 100644 index e823e04d51a48ec54ad5e6ba16be94d4b50616fe..0000000000000000000000000000000000000000 --- a/spaces/InvisableClearCoat101/mistralai-Mistral-7B-v0.1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mistralai Mistral 7B V0.1 -emoji: 👀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jamkonams/AutoGPT/autogpt/memory/local.py b/spaces/Jamkonams/AutoGPT/autogpt/memory/local.py deleted file mode 100644 index 803b6dc6ebb430285f423cda592fa3e902e9a4a6..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/memory/local.py +++ 
/dev/null @@ -1,136 +0,0 @@ -from __future__ import annotations - -import dataclasses -import os -from typing import Any, List - -import numpy as np -import orjson - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.memory.base import MemoryProviderSingleton - -EMBED_DIM = 1536 -SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS - - -def create_default_embeddings(): - return np.zeros((0, EMBED_DIM)).astype(np.float32) - - -@dataclasses.dataclass -class CacheContent: - texts: List[str] = dataclasses.field(default_factory=list) - embeddings: np.ndarray = dataclasses.field( - default_factory=create_default_embeddings - ) - - -class LocalCache(MemoryProviderSingleton): - """A class that stores the memory in a local file""" - - def __init__(self, cfg) -> None: - """Initialize a class instance - - Args: - cfg: Config object - - Returns: - None - """ - self.filename = f"{cfg.memory_index}.json" - if os.path.exists(self.filename): - try: - with open(self.filename, "w+b") as f: - file_content = f.read() - if not file_content.strip(): - file_content = b"{}" - f.write(file_content) - - loaded = orjson.loads(file_content) - self.data = CacheContent(**loaded) - except orjson.JSONDecodeError: - print(f"Error: The file '{self.filename}' is not in JSON format.") - self.data = CacheContent() - else: - print( - f"Warning: The file '{self.filename}' does not exist. " - "Local memory would not be saved to a file." - ) - self.data = CacheContent() - - def add(self, text: str): - """ - Add text to our list of texts, add embedding as row to our - embeddings-matrix - - Args: - text: str - - Returns: None - """ - if "Command Error:" in text: - return "" - self.data.texts.append(text) - - embedding = create_embedding_with_ada(text) - - vector = np.array(embedding).astype(np.float32) - vector = vector[np.newaxis, :] - self.data.embeddings = np.concatenate( - [ - self.data.embeddings, - vector, - ], - axis=0, - ) - - with open(self.filename, "wb") as f: - out = orjson.dumps(self.data, option=SAVE_OPTIONS) - f.write(out) - return text - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.data = CacheContent() - return "Obliviated" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def get_relevant(self, text: str, k: int) -> list[Any]: - """ " - matrix-vector mult to find score-for-each-row-of-matrix - get indices for top-k winning scores - return texts for those indices - Args: - text: str - k: int - - Returns: List[str] - """ - embedding = create_embedding_with_ada(text) - - scores = np.dot(self.data.embeddings, embedding) - - top_k_indices = np.argsort(scores)[-k:][::-1] - - return [self.data.texts[i] for i in top_k_indices] - - def get_stats(self) -> tuple[int, tuple[int, ...]]: - """ - Returns: The stats of the local cache. 
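-        (the number of stored texts and the shape of the embeddings matrix).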
- """ - return len(self.data.texts), self.data.embeddings.shape diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/presets.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/presets.py deleted file mode 100644 index a56d50e1c7aefae37b3252b983d445ea327471a4..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/presets.py +++ /dev/null @@ -1,248 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." -API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # 错误信息的标准前缀 -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # 连接超时 -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # 读取超时 -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # 代理错误 -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL 错误 -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key 长度不足 51 位 -NO_INPUT_MSG = i18n("请输入对话内容。") # 未输入对话内容 -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # 本地运行的模型返回的账单信息 - -TIMEOUT_STREAMING = 60 # 流式对话时的超时时间 -TIMEOUT_ALL = 200 # 非流式对话时的超时时间 -ENABLE_STREAMING_OPTION = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("川虎Chat 🚀") - -CHUANHU_DESCRIPTION = i18n("由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发
    访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本") - - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-0301", - "gpt-3.5-turbo-0613", - "gpt-4", - "gpt-4-0314", - "gpt-4-0613", - "gpt-4-32k", - "gpt-4-32k-0314", - "gpt-4-32k-0613", - "川虎助理", - "川虎助理 Pro", - "GooglePaLM", - "xmchat", - "Azure OpenAI", - "yuanai-1.0-base_10B", - "yuanai-1.0-translate", - "yuanai-1.0-dialog", - "yuanai-1.0-rhythm_poems", - "minimax-abab4-chat", - "minimax-abab5-chat", - "midjourney" -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-ge", - "chatglm2-6b", - "chatglm2-6b-int4", - "StableLM", - "MOSS", - "llama-7b-hf", - "llama-13b-hf", - "llama-30b-hf", - "llama-65b-hf", -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-16k": 16384, - "gpt-3.5-turbo-0301": 4096, - "gpt-3.5-turbo-0613": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-0613": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768, - "gpt-4-32k-0613": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "한국어", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -SUMMARIZE_PROMPT = """Write a concise summary of the following: - -{text} - -CONCISE SUMMARY IN 中文:""" - -ALREADY_CONVERTED_MARK = "" -START_OF_OUTPUT_MARK = "" -END_OF_OUTPUT_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#EBFAF2", - c100="#CFF3E1", - c200="#A8EAC8", - c300="#77DEA9", - c400="#3FD086", - c500="#02C160", - c600="#06AE56", - c700="#05974E", - c800="#057F45", - c900="#04673D", - c950="#2E5541", - name="small_and_beautiful", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f6f7f8", - # c100="#f3f4f6", - c100="#F2F2F2", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - # c900="#272727", - c900="#2B2B2B", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - # button_primary_background_fill="*primary_500", - button_primary_background_fill_dark="*primary_600", - # button_primary_background_fill_hover="*primary_400", - # button_primary_border_color="*primary_500", - button_primary_border_color_dark="*primary_600", - button_primary_text_color="white", - button_primary_text_color_dark="white", - button_secondary_background_fill="*neutral_100", - button_secondary_background_fill_hover="*neutral_50", - button_secondary_background_fill_dark="*neutral_900", - button_secondary_text_color="*neutral_800", - button_secondary_text_color_dark="white", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - # block_title_text_color="*primary_500", - block_title_background_fill_dark="*primary_900", - block_label_background_fill_dark="*primary_900", - input_background_fill="#F6F6F6", - chatbot_code_background_color="*neutral_950", - chatbot_code_background_color_dark="*neutral_950", - ) diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/minimax.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/minimax.py deleted file mode 100644 index 2e1b50280fd2fbc43a69caaf660a0d64beaa405b..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/minimax.py +++ /dev/null @@ -1,161 +0,0 @@ -import json -import os - -import colorama -import requests -import logging - -from modules.models.base_model import BaseLLMModel -from modules.presets import STANDARD_ERROR_MSG, GENERAL_ERROR_MSG, TIMEOUT_STREAMING, TIMEOUT_ALL, i18n - -group_id = os.environ.get("MINIMAX_GROUP_ID", "") - - -class MiniMax_Client(BaseLLMModel): - """ - MiniMax Client - 接口文档见 https://api.minimax.chat/document/guides/chat - """ - - def __init__(self, model_name, api_key, user_name="", system_prompt=None): - super().__init__(model_name=model_name, user=user_name) - self.url = f'https://api.minimax.chat/v1/text/chatcompletion?GroupId={group_id}' - self.history = [] - self.api_key = api_key - self.system_prompt = system_prompt - self.headers = { - "Authorization": f"Bearer {api_key}", - "Content-Type": "application/json" - } - - def get_answer_at_once(self): - # minimax temperature is (0,1] and base model temperature is [0,2], and yuan 0.9 == base 1 so need to convert - temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10 - - request_body = { - "model": self.model_name.replace('minimax-', ''), - 
"temperature": temperature, - "skip_info_mask": True, - 'messages': [{"sender_type": "USER", "text": self.history[-1]['content']}] - } - if self.n_choices: - request_body['beam_width'] = self.n_choices - if self.system_prompt: - request_body['prompt'] = self.system_prompt - if self.max_generation_token: - request_body['tokens_to_generate'] = self.max_generation_token - if self.top_p: - request_body['top_p'] = self.top_p - - response = requests.post(self.url, headers=self.headers, json=request_body) - - res = response.json() - answer = res['reply'] - total_token_count = res["usage"]["total_tokens"] - return answer, total_token_count - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def _get_response(self, stream=False): - minimax_api_key = self.api_key - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {minimax_api_key}", - } - - temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10 - - messages = [] - for msg in self.history: - if msg['role'] == 'user': - messages.append({"sender_type": "USER", "text": msg['content']}) - else: - messages.append({"sender_type": "BOT", "text": msg['content']}) - - request_body = { - "model": self.model_name.replace('minimax-', ''), - "temperature": temperature, - "skip_info_mask": True, - 'messages': messages - } - if self.n_choices: - request_body['beam_width'] = self.n_choices - if self.system_prompt: - lines = self.system_prompt.splitlines() - if lines[0].find(":") != -1 and len(lines[0]) < 20: - request_body["role_meta"] = { - "user_name": lines[0].split(":")[0], - "bot_name": lines[0].split(":")[1] - } - lines.pop() - request_body["prompt"] = "\n".join(lines) - if self.max_generation_token: - request_body['tokens_to_generate'] = self.max_generation_token - else: - request_body['tokens_to_generate'] = 512 - if self.top_p: - request_body['top_p'] = self.top_p - - if stream: - timeout = TIMEOUT_STREAMING - request_body['stream'] = True - request_body['use_standard_sse'] = True - else: - timeout = TIMEOUT_ALL - try: - response = requests.post( - self.url, - headers=headers, - json=request_body, - stream=stream, - timeout=timeout, - ) - except: - return None - - return response - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - print(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if "finish_reason" in chunk["choices"][0] and chunk["choices"][0]["finish_reason"] == "stop": - self.all_token_counts.append(chunk["usage"]["total_tokens"] - sum(self.all_token_counts)) - break - try: - yield chunk["choices"][0]["delta"] - except Exception as e: - logging.error(f"Error: {e}") - continue - if error_msg: - try: - error_msg = json.loads(error_msg) - if 'base_resp' in error_msg: - status_code = error_msg['base_resp']['status_code'] - status_msg = error_msg['base_resp']['status_msg'] - raise Exception(f"{status_code} - {status_msg}") - except json.JSONDecodeError: - pass - 
raise Exception(error_msg) diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/imageutil.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/imageutil.py deleted file mode 100644 index 897e8486c2c9cbd76f20739c4eb9575a9f2ac67c..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/imageutil.py +++ /dev/null @@ -1,464 +0,0 @@ -import os -import textwrap -from pathlib import Path -from typing import List - -import cv2 -import numpy as np -import PIL -from PIL import Image, ImageChops, ImageDraw, ImageFont - -kMinMargin = 10 - - -def stack_images_horizontally(images: List, save_path=None): - widths, heights = list(zip(*(i.size for i in images))) - total_width = sum(widths) - max_height = max(heights) - new_im = Image.new("RGBA", (total_width, max_height)) - - x_offset = 0 - for im in images: - new_im.paste(im, (x_offset, 0)) - x_offset += im.size[0] - if save_path is not None: - new_im.save(save_path) - return new_im - - -def stack_images_vertically(images: List, save_path=None): - widths, heights = list(zip(*(i.size for i in images))) - max_width = max(widths) - total_height = sum(heights) - new_im = Image.new("RGBA", (max_width, total_height)) - - y_offset = 0 - for im in images: - new_im.paste(im, (0, y_offset)) - y_offset += im.size[1] - if save_path is not None: - new_im.save(save_path) - return new_im - - -def merge_images(images: List): - if isinstance(images[0], Image.Image): - return stack_images_horizontally(images) - - images = list(map(stack_images_horizontally, images)) - return stack_images_vertically(images) - - -def draw_text( - image: PIL.Image, - text: str, - font_size=None, - font_color=(0, 0, 0), - max_seq_length=100, -): - W, H = image.size - S = max(W, H) - - font_path = os.path.join(cv2.__path__[0], "qt", "fonts", "DejaVuSans.ttf") - font_size = max(int(S / 32), 20) if font_size is None else font_size - font = ImageFont.truetype(font_path, size=font_size) - - text_wrapped = textwrap.fill(text, max_seq_length) - w, h = font.getsize(text_wrapped) - new_im = Image.new("RGBA", (W, H + h)) - new_im.paste(image, (0, h)) - draw = ImageDraw.Draw(new_im) - draw.text((max((W - w) / 2, 0), 0), text_wrapped, font=font, fill=font_color) - return new_im - - -def to_white(img): - new_img = Image.new("RGBA", img.size, "WHITE") - new_img.paste(img, (0, 0), img) - new_img.convert("RGB") - return new_img - - -def get_bbox(in_file, fuzz=17.5): - im = Image.open(in_file) - - # bbox = im.convert("RGBa").getbbox() - try: - bg = Image.new(im.mode, im.size, im.getpixel((0, 0))) - except OSError as err: - print(f"error {in_file}") - raise OSError - diff = ImageChops.difference(im, bg) - offset = int(round(float(fuzz) / 100.0 * 255.0)) - diff = ImageChops.add(diff, diff, 2.0, -offset) - bbox = diff.getbbox() - - bx_min = max(bbox[0] - kMinMargin, 0) - by_min = max(bbox[1] - kMinMargin, 0) - bx_max = min(bbox[2] + kMinMargin, im.size[0]) - by_max = min(bbox[3] + kMinMargin, im.size[1]) - bbox_margin = (bx_min, by_min, bx_max, by_max) - return bbox_margin - - -def get_largest_bbox(in_files): - largest_bbox = (float("Inf"), float("Inf"), -float("Inf"), -float("Inf")) - for in_file in in_files: - bbox = get_bbox(in_file) - largest_bbox = ( - min(bbox[0], largest_bbox[0]), - min(bbox[1], largest_bbox[1]), - max(bbox[2], largest_bbox[2]), - max(bbox[3], largest_bbox[3]), - ) - return largest_bbox - - -def trim(in_file, out_file, keep_ratio): - # im = Image.open(in_file) - # bbox = im.convert("RGBa").getbbox() - bbox = get_bbox(in_file) - 
trim_with_bbox(in_file, out_file, bbox, keep_ratio) - - -def trim_with_bbox(in_file, out_file, bbox, keep_ratio): - im = Image.open(in_file) - - if keep_ratio: - w, h = im.size - r = float(w) / h - - bx_min, by_min, bx_max, by_max = bbox[0], bbox[1], bbox[2], bbox[3] - bw, bh = bx_max - bx_min, by_max - by_min - bcx, bcy = 0.5 * (bx_min + bx_max), 0.5 * (by_min + by_max) - br = float(bw) / bh - - if br > r: - bh = int(round(bw / r)) - by_min, by_max = int(round(bcy - 0.5 * bh)), int(round(bcy + 0.5 * bh)) - if by_min < 0: - by_min = 0 - by_max = bh - elif by_max > h: - by_max = h - by_min = h - bh - assert bh >= bh - elif br < r: - bw = int(round(bh * r)) - bx_min, bx_max = int(round(bcx - 0.5 * bw)), int(round(bcx + 0.5 * bw)) - if bx_min < 0: - bx_min = 0 - bx_max = bw - elif bx_max > w: - bx_max = w - bx_min = w - bw - - bbox = (bx_min, by_min, bx_max, by_max) - - im.crop(bbox).save(out_file, "png") - - -def trim_with_largest_bbox(in_files, out_files, keep_ratio): - assert len(in_files) == len(out_files) - - bbox = get_largest_bbox(in_files) - for i in range(len(in_files)): - trim_with_bbox(in_files[i], out_files[i], bbox, keep_ratio) - - -def create_image_table_tight_centering( - in_img_files, out_img_file, max_total_width=2560, draw_col_lines=[] -): - - n_rows = len(in_img_files) - n_cols = len(in_img_files[0]) - - # Compute width and height of each image. - width = 0 - row_top = [float("Inf")] * n_rows - row_bottom = [-float("Inf")] * n_rows - - for row in range(n_rows): - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_width = img_right - img_left - width = max(width, img_width) - row_top[row] = min(row_top[row], img_top) - row_bottom[row] = max(row_bottom[row], img_bottom) - - row_height = [bottom - top for bottom, top in zip(row_bottom, row_top)] - - # Combine images. - cmd = "convert " - for row in range(n_rows): - cmd += " \( " - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_h_center = 0.5 * (img_left + img_right) - left = int(img_h_center - 0.5 * width) - cmd += " \( {} ".format(in_img_files[row][col]) - cmd += "-gravity NorthWest -crop {}x{}+{}+{} +repage \) ".format( - width, row_height[row], left, row_top[row] - ) - cmd += " -gravity center -background white +append \) " - - cmd += "-append " + out_img_file - print(cmd) - os.system(cmd) - - # Draw lines for columns. - for col in draw_col_lines: - if col <= 0 or col >= n_cols: - continue - strokewidth = max(int(round(width * 0.005)), 1) - pos = col * width - cmd = "convert " + out_img_file + " -stroke black " - cmd += "-strokewidth {} ".format(strokewidth) - cmd += '-draw "line {0},0 {0},10000000" '.format(pos) + out_img_file - os.system(cmd) - - # Resize the combined image if it is too large. - print(n_cols * width) - if (n_cols * width) > max_total_width: - cmd = "convert {0} -resize {1}x +repage {0}".format( - out_img_file, max_total_width - ) - print(cmd) - os.system(cmd) - - print("Saved '{}'.".format(out_img_file)) - - return width, row_height - - -def create_image_table_tight_centering_per_row( - in_img_files, out_img_dir, max_total_width=1280, draw_col_lines=[] -): - - n_rows = len(in_img_files) - n_cols = len(in_img_files[0]) - - # Compute width and height of each image. 
- width = 0 - row_top = [float("Inf")] * n_rows - row_bottom = [-float("Inf")] * n_rows - - for row in range(n_rows): - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_width = img_right - img_left - width = max(width, img_width) - row_top[row] = min(row_top[row], img_top) - row_bottom[row] = max(row_bottom[row], img_bottom) - - row_height = [bottom - top for bottom, top in zip(row_bottom, row_top)] - - if not os.path.exists(out_img_dir): - os.makedirs(out_img_dir) - - # Combine images. - for row in range(n_rows): - out_img_file = os.path.join(out_img_dir, "{:02d}.png".format(row)) - cmd = "convert " - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_h_center = 0.5 * (img_left + img_right) - left = int(img_h_center - 0.5 * width) - cmd += " \( {} ".format(in_img_files[row][col]) - cmd += "-gravity NorthWest -crop {}x{}+{}+{} +repage \) ".format( - width, row_height[row], left, row_top[row] - ) - cmd += " -gravity center -background white +append " + out_img_file - print(cmd) - os.system(cmd) - - # Draw lines for columns. - for col in draw_col_lines: - if col <= 0 or col >= n_cols: - continue - strokewidth = max(int(round(width * 0.005)), 1) - pos = col * width - cmd = "convert " + out_img_file + " -stroke black " - cmd += "-strokewidth {} ".format(strokewidth) - cmd += '-draw "line {0},0 {0},10000000" '.format(pos) + out_img_file - os.system(cmd) - print(cmd) - - # Resize the combined image if it is too large. - print(n_cols * width) - if (n_cols * width) > max_total_width: - cmd = "convert {0} -resize {1}x +repage {0}".format( - out_img_file, max_total_width - ) - print(cmd) - os.system(cmd) - - print("Saved '{}'.".format(out_img_file)) - - return width, row_height - - -def create_image_table_tight_centering_per_col( - in_img_files, out_img_dir, max_width=2560, draw_col_lines=[] -): - - n_rows = len(in_img_files) - n_cols = len(in_img_files[0]) - - # Compute width and height of each image. - width = 0 - row_top = [float("Inf")] * n_rows - row_bottom = [-float("Inf")] * n_rows - - for row in range(n_rows): - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_width = img_right - img_left - width = max(width, img_width) - row_top[row] = min(row_top[row], img_top) - row_bottom[row] = max(row_bottom[row], img_bottom) - - row_height = [bottom - top for bottom, top in zip(row_bottom, row_top)] - - if not os.path.exists(out_img_dir): - os.makedirs(out_img_dir) - - # Combine images. - for col in range(n_cols): - out_img_file = os.path.join(out_img_dir, "{:02d}.png".format(col)) - cmd = "convert " - for row in range(n_rows): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_h_center = 0.5 * (img_left + img_right) - left = int(img_h_center - 0.5 * width) - cmd += " \( {} ".format(in_img_files[row][col]) - cmd += "-gravity NorthWest -crop {}x{}+{}+{} +repage \) ".format( - width, row_height[row], left, row_top[row] - ) - cmd += " -gravity center -background white -append " + out_img_file - print(cmd) - os.system(cmd) - - # Resize the combined image if it is too large. 
- if width > max_width: - cmd = "convert {0} -resize {1}x +repage {0}".format(out_img_file, max_width) - print(cmd) - os.system(cmd) - - print("Saved '{}'.".format(out_img_file)) - - return width, row_height - - -def create_image_table_after_crop( - in_img_files, - out_img_file, - lbox=None, - tbox=None, - rbox=None, - dbox=None, - max_total_width=2560, - draw_col_lines=[], - transpose=False, - verbose=False, - line_multi=None, -): - out_img_file = str(out_img_file) - if not isinstance(in_img_files[0], list): - in_img_files = [in_img_files] - in_img_files = [[x for x in row if len(str(x)) != 0] for row in in_img_files] - if transpose: - x = np.array(in_img_files) - in_img_files = x.transpose().tolist() - - n_rows = len(in_img_files) - n_cols = len(in_img_files[0]) - - # Compute width and height of each image. - width = 0 - row_top = [float("Inf")] * n_rows - row_bottom = [-float("Inf")] * n_rows - - for row in range(n_rows): - for col in range(n_cols): - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - # img_left, img_top, img_right, img_bottom = lbox, tbox, rbox, dbox - img_left = img_left if lbox is None else lbox - img_top = img_top if tbox is None else tbox - img_right = img_right if rbox is None else rbox - img_bottom = img_bottom if dbox is None else dbox - img_width = img_right - img_left - width = max(width, img_width) - row_top[row] = min(row_top[row], img_top) - row_bottom[row] = max(row_bottom[row], img_bottom) - - row_height = [bottom - top for bottom, top in zip(row_bottom, row_top)] - - # Combine images. - cmd = "convert " - for row in range(n_rows): - cmd += " \( " - for col in range(n_cols): - # img_left, img_top, img_right, img_bottom = lbox, tbox, rbox, dbox - img_left, img_top, img_right, img_bottom = get_bbox(in_img_files[row][col]) - img_left = img_left if lbox is None else lbox - img_top = img_top if tbox is None else tbox - img_right = img_right if rbox is None else rbox - img_bottom = img_bottom if dbox is None else dbox - img_h_center = 0.5 * (img_left + img_right) - left = int(img_h_center - 0.5 * width) - cmd += " \( {} ".format(in_img_files[row][col]) - cmd += "-gravity NorthWest -crop {}x{}+{}+{} +repage \) ".format( - width, row_height[row], left, row_top[row] - ) - cmd += " -gravity center -background white +append \) " - - cmd += "-append " + out_img_file - if verbose: - print(cmd) - os.system(cmd) - # Draw lines for columns. - for col in draw_col_lines: - if col <= 0 or col >= n_cols: - continue - strokewidth = max(int(round(width * 0.005)), 1) - if line_multi is not None: - strokewidth *= line_multi - pos = col * width - cmd = "convert " + out_img_file + " -stroke black " - cmd += "-strokewidth {} ".format(strokewidth) - cmd += '-draw "line {0},0 {0},10000000" '.format(pos) + out_img_file - if verbose: - print(cmd) - os.system(cmd) - - # Resize the combined image if it is too large. 
- # print(n_cols * width) - # if (n_cols * width) > max_total_width: - # cmd = "convert {0} -resize {1}x +repage {0}".format( - # out_img_file, max_total_width - # ) - # print(cmd) - # os.system(cmd) - - print("Saved '{}'.".format(out_img_file)) - - return width, row_height - - -def make_2dgrid(input_list, num_rows=None, num_cols=None): - # if num_rows * num_cols != len(input_list): - # raise Warning("Number of rows and columns do not match the length of the input list.") - - if num_rows is None and num_cols is not None: - num_rows = len(input_list) // num_cols + 1 - output_list = [] - for i in range(num_rows): - row = [] - for j in range(num_cols): - if i * num_cols + j >= len(input_list): - break - row.append(input_list[i * num_cols + j]) - output_list.append(row) - - return output_list diff --git a/spaces/KaygNas/cut-it/public/mockServiceWorker.js b/spaces/KaygNas/cut-it/public/mockServiceWorker.js deleted file mode 100644 index 87e0f31b814f1a4837b4b39510bae970a3bba65a..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/public/mockServiceWorker.js +++ /dev/null @@ -1,303 +0,0 @@ -/* eslint-disable */ -/* tslint:disable */ - -/** - * Mock Service Worker (1.2.1). - * @see https://github.com/mswjs/msw - * - Please do NOT modify this file. - * - Please do NOT serve this file on production. - */ - -const INTEGRITY_CHECKSUM = '3d6b9f06410d179a7f7404d4bf4c3c70' -const activeClientIds = new Set() - -self.addEventListener('install', function () { - self.skipWaiting() -}) - -self.addEventListener('activate', function (event) { - event.waitUntil(self.clients.claim()) -}) - -self.addEventListener('message', async function (event) { - const clientId = event.source.id - - if (!clientId || !self.clients) { - return - } - - const client = await self.clients.get(clientId) - - if (!client) { - return - } - - const allClients = await self.clients.matchAll({ - type: 'window', - }) - - switch (event.data) { - case 'KEEPALIVE_REQUEST': { - sendToClient(client, { - type: 'KEEPALIVE_RESPONSE', - }) - break - } - - case 'INTEGRITY_CHECK_REQUEST': { - sendToClient(client, { - type: 'INTEGRITY_CHECK_RESPONSE', - payload: INTEGRITY_CHECKSUM, - }) - break - } - - case 'MOCK_ACTIVATE': { - activeClientIds.add(clientId) - - sendToClient(client, { - type: 'MOCKING_ENABLED', - payload: true, - }) - break - } - - case 'MOCK_DEACTIVATE': { - activeClientIds.delete(clientId) - break - } - - case 'CLIENT_CLOSED': { - activeClientIds.delete(clientId) - - const remainingClients = allClients.filter((client) => { - return client.id !== clientId - }) - - // Unregister itself when there are no more clients - if (remainingClients.length === 0) { - self.registration.unregister() - } - - break - } - } -}) - -self.addEventListener('fetch', function (event) { - const { request } = event - const accept = request.headers.get('accept') || '' - - // Bypass server-sent events. - if (accept.includes('text/event-stream')) { - return - } - - // Bypass navigation requests. - if (request.mode === 'navigate') { - return - } - - // Opening the DevTools triggers the "only-if-cached" request - // that cannot be handled by the worker. Bypass such requests. - if (request.cache === 'only-if-cached' && request.mode !== 'same-origin') { - return - } - - // Bypass all requests when there are no active clients. - // Prevents the self-unregistered worked from handling requests - // after it's been deleted (still remains active until the next reload). - if (activeClientIds.size === 0) { - return - } - - // Generate unique request ID. 
- const requestId = Math.random().toString(16).slice(2) - - event.respondWith( - handleRequest(event, requestId).catch((error) => { - if (error.name === 'NetworkError') { - console.warn( - '[MSW] Successfully emulated a network error for the "%s %s" request.', - request.method, - request.url, - ) - return - } - - // At this point, any exception indicates an issue with the original request/response. - console.error( - `\ -[MSW] Caught an exception from the "%s %s" request (%s). This is probably not a problem with Mock Service Worker. There is likely an additional logging output above.`, - request.method, - request.url, - `${error.name}: ${error.message}`, - ) - }), - ) -}) - -async function handleRequest(event, requestId) { - const client = await resolveMainClient(event) - const response = await getResponse(event, client, requestId) - - // Send back the response clone for the "response:*" life-cycle events. - // Ensure MSW is active and ready to handle the message, otherwise - // this message will pend indefinitely. - if (client && activeClientIds.has(client.id)) { - ;(async function () { - const clonedResponse = response.clone() - sendToClient(client, { - type: 'RESPONSE', - payload: { - requestId, - type: clonedResponse.type, - ok: clonedResponse.ok, - status: clonedResponse.status, - statusText: clonedResponse.statusText, - body: - clonedResponse.body === null ? null : await clonedResponse.text(), - headers: Object.fromEntries(clonedResponse.headers.entries()), - redirected: clonedResponse.redirected, - }, - }) - })() - } - - return response -} - -// Resolve the main client for the given event. -// Client that issues a request doesn't necessarily equal the client -// that registered the worker. It's with the latter the worker should -// communicate with during the response resolving phase. -async function resolveMainClient(event) { - const client = await self.clients.get(event.clientId) - - if (client?.frameType === 'top-level') { - return client - } - - const allClients = await self.clients.matchAll({ - type: 'window', - }) - - return allClients - .filter((client) => { - // Get only those clients that are currently visible. - return client.visibilityState === 'visible' - }) - .find((client) => { - // Find the client ID that's recorded in the - // set of clients that have registered the worker. - return activeClientIds.has(client.id) - }) -} - -async function getResponse(event, client, requestId) { - const { request } = event - const clonedRequest = request.clone() - - function passthrough() { - // Clone the request because it might've been already used - // (i.e. its body has been read and sent to the client). - const headers = Object.fromEntries(clonedRequest.headers.entries()) - - // Remove MSW-specific request headers so the bypassed requests - // comply with the server's CORS preflight check. - // Operate with the headers as an object because request "Headers" - // are immutable. - delete headers['x-msw-bypass'] - - return fetch(clonedRequest, { headers }) - } - - // Bypass mocking when the client is not active. - if (!client) { - return passthrough() - } - - // Bypass initial page load requests (i.e. static assets). - // The absence of the immediate/parent client in the map of the active clients - // means that MSW hasn't dispatched the "MOCK_ACTIVATE" event yet - // and is not ready to handle requests. - if (!activeClientIds.has(client.id)) { - return passthrough() - } - - // Bypass requests with the explicit bypass header. - // Such requests can be issued by "ctx.fetch()". 
- if (request.headers.get('x-msw-bypass') === 'true') { - return passthrough() - } - - // Notify the client that a request has been intercepted. - const clientMessage = await sendToClient(client, { - type: 'REQUEST', - payload: { - id: requestId, - url: request.url, - method: request.method, - headers: Object.fromEntries(request.headers.entries()), - cache: request.cache, - mode: request.mode, - credentials: request.credentials, - destination: request.destination, - integrity: request.integrity, - redirect: request.redirect, - referrer: request.referrer, - referrerPolicy: request.referrerPolicy, - body: await request.text(), - bodyUsed: request.bodyUsed, - keepalive: request.keepalive, - }, - }) - - switch (clientMessage.type) { - case 'MOCK_RESPONSE': { - return respondWithMock(clientMessage.data) - } - - case 'MOCK_NOT_FOUND': { - return passthrough() - } - - case 'NETWORK_ERROR': { - const { name, message } = clientMessage.data - const networkError = new Error(message) - networkError.name = name - - // Rejecting a "respondWith" promise emulates a network error. - throw networkError - } - } - - return passthrough() -} - -function sendToClient(client, message) { - return new Promise((resolve, reject) => { - const channel = new MessageChannel() - - channel.port1.onmessage = (event) => { - if (event.data && event.data.error) { - return reject(event.data.error) - } - - resolve(event.data) - } - - client.postMessage(message, [channel.port2]) - }) -} - -function sleep(timeMs) { - return new Promise((resolve) => { - setTimeout(resolve, timeMs) - }) -} - -async function respondWithMock(response) { - await sleep(response.delay) - return new Response(response.body, response) -} diff --git a/spaces/Kevin676/AutoGPT/autogpt/commands/twitter.py b/spaces/Kevin676/AutoGPT/autogpt/commands/twitter.py deleted file mode 100644 index 3eaed36e20e1c520690ac59f25a4da6501f3440f..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/commands/twitter.py +++ /dev/null @@ -1,26 +0,0 @@ -import os - -import tweepy -from dotenv import load_dotenv - -load_dotenv() - - -def send_tweet(tweet_text): - consumer_key = os.environ.get("TW_CONSUMER_KEY") - consumer_secret = os.environ.get("TW_CONSUMER_SECRET") - access_token = os.environ.get("TW_ACCESS_TOKEN") - access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET") - # Authenticate to Twitter - auth = tweepy.OAuthHandler(consumer_key, consumer_secret) - auth.set_access_token(access_token, access_token_secret) - - # Create API object - api = tweepy.API(auth) - - # Send tweet - try: - api.update_status(tweet_text) - print("Tweet sent successfully!") - except tweepy.TweepyException as e: - print("Error sending tweet: {}".format(e.reason)) diff --git a/spaces/Khalida1w/denoising/README.md b/spaces/Khalida1w/denoising/README.md deleted file mode 100644 index 961671110cbaae348668288f1824766bcf3fd9df..0000000000000000000000000000000000000000 --- a/spaces/Khalida1w/denoising/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Denoising -emoji: 😻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KyanChen/BuildingExtraction/Utils/Datasets.py b/spaces/KyanChen/BuildingExtraction/Utils/Datasets.py deleted file mode 100644 index a9ec3573ae62d0361ce7a9015389c1a44b4957cd..0000000000000000000000000000000000000000 --- 
a/spaces/KyanChen/BuildingExtraction/Utils/Datasets.py +++ /dev/null @@ -1,143 +0,0 @@ -import os.path - -from torch.utils.data import Dataset, DataLoader -import torch -import numpy as np -import pandas as pd -from skimage import io -from Utils.Augmentations import Augmentations, Resize - - -class Datasets(Dataset): - def __init__(self, data_file, transform=None, phase='train', *args, **kwargs): - self.transform = transform - self.data_info = pd.read_csv(data_file, index_col=0) - self.phase = phase - - def __len__(self): - return len(self.data_info) - - def __getitem__(self, index): - data = self.pull_item_seg(index) - return data - - def pull_item_seg(self, index): - """ - :param index: image index - """ - data = self.data_info.iloc[index] - img_name = data['img'] - label_name = data['label'] - - ori_img = io.imread(img_name, as_gray=False) - ori_label = io.imread(label_name, as_gray=True) - assert (ori_img is not None and ori_label is not None), f'{img_name} or {label_name} is not valid' - - if self.transform is not None: - img, label = self.transform((ori_img, ori_label)) - - one_hot_label = np.zeros([2] + list(label.shape), dtype=np.float) - one_hot_label[0] = label == 0 - one_hot_label[1] = label > 0 - return_dict = { - 'img': torch.from_numpy(img).permute(2, 0, 1), - 'label': torch.from_numpy(one_hot_label), - 'img_name': os.path.basename(img_name) - } - return return_dict - - -def get_data_loader(config, test_mode=False): - if not test_mode: - train_params = { - 'batch_size': config['BATCH_SIZE'], - 'shuffle': config['IS_SHUFFLE'], - 'drop_last': False, - 'collate_fn': collate_fn, - 'num_workers': config['NUM_WORKERS'], - 'pin_memory': False - } - # data_file, config, transform=None - train_set = Datasets( - config['DATASET'], - Augmentations( - config['IMG_SIZE'], config['PRIOR_MEAN'], config['PRIOR_STD'], 'train', config['PHASE'], config - ), - config['PHASE'], - config - ) - patterns = ['train'] - else: - patterns = [] - - if config['IS_VAL']: - val_params = { - 'batch_size': config['VAL_BATCH_SIZE'], - 'shuffle': False, - 'drop_last': False, - 'collate_fn': collate_fn, - 'num_workers': config['NUM_WORKERS'], - 'pin_memory': False - } - val_set = Datasets( - config['VAL_DATASET'], - Augmentations( - config['IMG_SIZE'], config['PRIOR_MEAN'], config['PRIOR_STD'], 'val', config['PHASE'], config - ), - config['PHASE'], - config - ) - patterns += ['val'] - - if config['IS_TEST']: - test_params = { - 'batch_size': config['VAL_BATCH_SIZE'], - 'shuffle': False, - 'drop_last': False, - 'collate_fn': collate_fn, - 'num_workers': config['NUM_WORKERS'], - 'pin_memory': False - } - test_set = Datasets( - config['TEST_DATASET'], - Augmentations( - config['IMG_SIZE'], config['PRIOR_MEAN'], config['PRIOR_STD'], 'test', config['PHASE'], config - ), - config['PHASE'], - config - ) - patterns += ['test'] - - data_loaders = {} - for x in patterns: - data_loaders[x] = DataLoader(eval(x+'_set'), **eval(x+'_params')) - return data_loaders - - -def collate_fn(batch): - def to_tensor(item): - if torch.is_tensor(item): - return item - elif isinstance(item, type(np.array(0))): - return torch.from_numpy(item).float() - elif isinstance(item, type('0')): - return item - elif isinstance(item, list): - return item - elif isinstance(item, dict): - return item - - return_data = {} - for key in batch[0].keys(): - return_data[key] = [] - - for sample in batch: - for key, value in sample.items(): - return_data[key].append(to_tensor(value)) - - keys = set(batch[0].keys()) - {'img_name'} - for key in keys: - 
return_data[key] = torch.stack(return_data[key], dim=0) - - return return_data - diff --git a/spaces/LamaAl/arabic-empathetic/app.py b/spaces/LamaAl/arabic-empathetic/app.py deleted file mode 100644 index 8922200a14ab1ce2051315a452426f8921106c67..0000000000000000000000000000000000000000 --- a/spaces/LamaAl/arabic-empathetic/app.py +++ /dev/null @@ -1,41 +0,0 @@ -#Import transformers and gradio -import transformers -import gradio as gr -import git - -#Load arabert preprocessor -import git -git.Git("arabert").clone("https://github.com/aub-mind/arabert") -from arabert.preprocess import ArabertPreprocessor -arabert_prep = ArabertPreprocessor(model_name="bert-base-arabert", keep_emojis=False) - - -#Load Model -from transformers import EncoderDecoderModel, AutoTokenizer -tokenizer = AutoTokenizer.from_pretrained("tareknaous/bert2bert-empathetic-response-msa") -model = EncoderDecoderModel.from_pretrained("tareknaous/bert2bert-empathetic-response-msa") -model.eval() - -def generate_response(text): - text_clean = arabert_prep.preprocess(text) - inputs = tokenizer.encode_plus(text_clean,return_tensors='pt') - outputs = model.generate(input_ids = inputs.input_ids, - attention_mask = inputs.attention_mask, - do_sample = True) - preds = tokenizer.batch_decode(outputs) - response = str(preds) - response = response.replace("\'", '') - response = response.replace("[[CLS]", '') - response = response.replace("[SEP]]", '') - response = str(arabert_prep.desegment(response)) - return response - -title = 'BERT2BERT Response Generation in Arabic' -description = 'This demo is for a BERT2BERT model trained for single-turn open-domain dialogue response generation in Modern Standard Arabic' -gr.Interface(fn=generate_response, - inputs=[ - gr.inputs.Textbox(), - ], - outputs="text", - title=title, - description=description).launch() \ No newline at end of file diff --git a/spaces/Lianjd/stock_dashboard/backtrader/btrun/btrun.py b/spaces/Lianjd/stock_dashboard/backtrader/btrun/btrun.py deleted file mode 100644 index a839389f34300661106789ae17ea1dee8f4c1b0c..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/btrun/btrun.py +++ /dev/null @@ -1,743 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import argparse -import datetime -import inspect -import itertools -import random -import string -import sys - -import backtrader as bt - - -DATAFORMATS = dict( - btcsv=bt.feeds.BacktraderCSVData, - vchartcsv=bt.feeds.VChartCSVData, - vcfile=bt.feeds.VChartFile, - sierracsv=bt.feeds.SierraChartCSVData, - mt4csv=bt.feeds.MT4CSVData, - yahoocsv=bt.feeds.YahooFinanceCSVData, - yahoocsv_unreversed=bt.feeds.YahooFinanceCSVData, - yahoo=bt.feeds.YahooFinanceData, -) - -try: - DATAFORMATS['vcdata'] = bt.feeds.VCData -except AttributeError: - pass # no comtypes available - -try: - DATAFORMATS['ibdata'] = bt.feeds.IBData, -except AttributeError: - pass # no ibpy available - -try: - DATAFORMATS['oandadata'] = bt.feeds.OandaData, -except AttributeError: - pass # no oandapy available - - -TIMEFRAMES = dict( - microseconds=bt.TimeFrame.MicroSeconds, - seconds=bt.TimeFrame.Seconds, - minutes=bt.TimeFrame.Minutes, - days=bt.TimeFrame.Days, - weeks=bt.TimeFrame.Weeks, - months=bt.TimeFrame.Months, - years=bt.TimeFrame.Years, -) - - -def btrun(pargs=''): - args = parse_args(pargs) - - if args.flush: - import backtrader.utils.flushfile - - stdstats = not args.nostdstats - - cer_kwargs_str = args.cerebro - cer_kwargs = eval('dict(' + cer_kwargs_str + ')') - if 'stdstats' not in cer_kwargs: - cer_kwargs.update(stdstats=stdstats) - - cerebro = bt.Cerebro(**cer_kwargs) - - if args.resample is not None or args.replay is not None: - if args.resample is not None: - tfcp = args.resample.split(':') - elif args.replay is not None: - tfcp = args.replay.split(':') - - # compression may be skipped and it will default to 1 - if len(tfcp) == 1 or tfcp[1] == '': - tf, cp = tfcp[0], 1 - else: - tf, cp = tfcp - - cp = int(cp) # convert any value to int - tf = TIMEFRAMES.get(tf, None) - - for data in getdatas(args): - if args.resample is not None: - cerebro.resampledata(data, timeframe=tf, compression=cp) - elif args.replay is not None: - cerebro.replaydata(data, timeframe=tf, compression=cp) - else: - cerebro.adddata(data) - - # get and add signals - signals = getobjects(args.signals, bt.Indicator, bt.signals, issignal=True) - for sig, kwargs, sigtype in signals: - stype = getattr(bt.signal, 'SIGNAL_' + sigtype.upper()) - cerebro.add_signal(stype, sig, **kwargs) - - # get and add strategies - strategies = getobjects(args.strategies, bt.Strategy, bt.strategies) - for strat, kwargs in strategies: - cerebro.addstrategy(strat, **kwargs) - - inds = getobjects(args.indicators, bt.Indicator, bt.indicators) - for ind, kwargs in inds: - cerebro.addindicator(ind, **kwargs) - - obs = getobjects(args.observers, bt.Observer, bt.observers) - for ob, kwargs in obs: - cerebro.addobserver(ob, **kwargs) - - ans = getobjects(args.analyzers, bt.Analyzer, bt.analyzers) - for an, kwargs in ans: - cerebro.addanalyzer(an, **kwargs) - - setbroker(args, cerebro) - - for wrkwargs_str in args.writers or []: - wrkwargs = eval('dict(' + wrkwargs_str + ')') - cerebro.addwriter(bt.WriterFile, **wrkwargs) - - ans = getfunctions(args.hooks, bt.Cerebro) - for hook, kwargs in ans: - hook(cerebro, **kwargs) - runsts = cerebro.run() - runst = runsts[0] # single strategy and no optimization - - if args.pranalyzer or args.ppranalyzer: - if runst.analyzers: - print('====================') - print('== Analyzers') - print('====================') - for name, analyzer in 
runst.analyzers.getitems(): - if args.pranalyzer: - analyzer.print() - elif args.ppranalyzer: - print('##########') - print(name) - print('##########') - analyzer.pprint() - - if args.plot: - pkwargs = dict(style='bar') - if args.plot is not True: - # evaluates to True but is not "True" - args were passed - ekwargs = eval('dict(' + args.plot + ')') - pkwargs.update(ekwargs) - - # cerebro.plot(numfigs=args.plotfigs, style=args.plotstyle) - cerebro.plot(**pkwargs) - - -def setbroker(args, cerebro): - broker = cerebro.getbroker() - - if args.cash is not None: - broker.setcash(args.cash) - - commkwargs = dict() - if args.commission is not None: - commkwargs['commission'] = args.commission - if args.margin is not None: - commkwargs['margin'] = args.margin - if args.mult is not None: - commkwargs['mult'] = args.mult - if args.interest is not None: - commkwargs['interest'] = args.interest - if args.interest_long is not None: - commkwargs['interest_long'] = args.interest_long - - if commkwargs: - broker.setcommission(**commkwargs) - - if args.slip_perc is not None: - cerebro.broker.set_slippage_perc(args.slip_perc, - slip_open=args.slip_open, - slip_match=not args.no_slip_match, - slip_out=args.slip_out) - elif args.slip_fixed is not None: - cerebro.broker.set_slippage_fixed(args.slip_fixed, - slip_open=args.slip_open, - slip_match=not args.no_slip_match, - slip_out=args.slip_out) - - -def getdatas(args): - # Get the data feed class from the global dictionary - dfcls = DATAFORMATS[args.format] - - # Prepare some args - dfkwargs = dict() - if args.format == 'yahoo_unreversed': - dfkwargs['reverse'] = True - - fmtstr = '%Y-%m-%d' - if args.fromdate: - dtsplit = args.fromdate.split('T') - if len(dtsplit) > 1: - fmtstr += 'T%H:%M:%S' - - fromdate = datetime.datetime.strptime(args.fromdate, fmtstr) - dfkwargs['fromdate'] = fromdate - - fmtstr = '%Y-%m-%d' - if args.todate: - dtsplit = args.todate.split('T') - if len(dtsplit) > 1: - fmtstr += 'T%H:%M:%S' - todate = datetime.datetime.strptime(args.todate, fmtstr) - dfkwargs['todate'] = todate - - if args.timeframe is not None: - dfkwargs['timeframe'] = TIMEFRAMES[args.timeframe] - - if args.compression is not None: - dfkwargs['compression'] = args.compression - - datas = list() - for dname in args.data: - dfkwargs['dataname'] = dname - data = dfcls(**dfkwargs) - datas.append(data) - - return datas - - -def getmodclasses(mod, clstype, clsname=None): - clsmembers = inspect.getmembers(mod, inspect.isclass) - - clslist = list() - for name, cls in clsmembers: - if not issubclass(cls, clstype): - continue - - if clsname: - if clsname == name: - clslist.append(cls) - break - else: - clslist.append(cls) - - return clslist - - -def getmodfunctions(mod, funcname=None): - members = inspect.getmembers(mod, inspect.isfunction) + \ - inspect.getmembers(mod, inspect.ismethod) - - funclist = list() - for name, member in members: - if funcname: - if name == funcname: - funclist.append(member) - break - else: - funclist.append(member) - - return funclist - - -def loadmodule(modpath, modname=''): - # generate a random name for the module - - if not modpath.endswith('.py'): - modpath += '.py' - - if not modname: - chars = string.ascii_uppercase + string.digits - modname = ''.join(random.choice(chars) for _ in range(10)) - - version = (sys.version_info[0], sys.version_info[1]) - - if version < (3, 3): - mod, e = loadmodule2(modpath, modname) - else: - mod, e = loadmodule3(modpath, modname) - - return mod, e - - -def loadmodule2(modpath, modname): - import imp - - try: - mod 
= imp.load_source(modname, modpath) - except Exception as e: - return (None, e) - - return (mod, None) - - -def loadmodule3(modpath, modname): - import importlib.machinery - - try: - loader = importlib.machinery.SourceFileLoader(modname, modpath) - mod = loader.load_module() - except Exception as e: - return (None, e) - - return (mod, None) - - -def getobjects(iterable, clsbase, modbase, issignal=False): - retobjects = list() - - for item in iterable or []: - if issignal: - sigtokens = item.split('+', 1) - if len(sigtokens) == 1: # no + seen - sigtype = 'longshort' - else: - sigtype, item = sigtokens - - tokens = item.split(':', 1) - - if len(tokens) == 1: - modpath = tokens[0] - name = '' - kwargs = dict() - else: - modpath, name = tokens - kwtokens = name.split(':', 1) - if len(kwtokens) == 1: - # no '(' found - kwargs = dict() - else: - name = kwtokens[0] - kwtext = 'dict(' + kwtokens[1] + ')' - kwargs = eval(kwtext) - - if modpath: - mod, e = loadmodule(modpath) - - if not mod: - print('') - print('Failed to load module %s:' % modpath, e) - sys.exit(1) - else: - mod = modbase - - loaded = getmodclasses(mod=mod, clstype=clsbase, clsname=name) - - if not loaded: - print('No class %s / module %s' % (str(name), modpath)) - sys.exit(1) - - if issignal: - retobjects.append((loaded[0], kwargs, sigtype)) - else: - retobjects.append((loaded[0], kwargs)) - - return retobjects - -def getfunctions(iterable, modbase): - retfunctions = list() - - for item in iterable or []: - tokens = item.split(':', 1) - - if len(tokens) == 1: - modpath = tokens[0] - name = '' - kwargs = dict() - else: - modpath, name = tokens - kwtokens = name.split(':', 1) - if len(kwtokens) == 1: - # no '(' found - kwargs = dict() - else: - name = kwtokens[0] - kwtext = 'dict(' + kwtokens[1] + ')' - kwargs = eval(kwtext) - - if modpath: - mod, e = loadmodule(modpath) - - if not mod: - print('') - print('Failed to load module %s:' % modpath, e) - sys.exit(1) - else: - mod = modbase - - loaded = getmodfunctions(mod=mod, funcname=name) - - if not loaded: - print('No function %s / module %s' % (str(name), modpath)) - sys.exit(1) - - retfunctions.append((loaded[0], kwargs)) - - return retfunctions - - -def parse_args(pargs=''): - parser = argparse.ArgumentParser( - description='Backtrader Run Script', - formatter_class=argparse.RawTextHelpFormatter, - ) - - group = parser.add_argument_group(title='Data options') - # Data options - group.add_argument('--data', '-d', action='append', required=True, - help='Data files to be added to the system') - - group = parser.add_argument_group(title='Cerebro options') - group.add_argument( - '--cerebro', '-cer', - metavar='kwargs', - required=False, const='', default='', nargs='?', - help=('The argument can be specified with the following form:\n' - '\n' - ' - kwargs\n' - '\n' - ' Example: "preload=True" which set its to True\n' - '\n' - 'The passed kwargs will be passed directly to the cerebro\n' - 'instance created for the execution\n' - '\n' - 'The available kwargs to cerebro are:\n' - ' - preload (default: True)\n' - ' - runonce (default: True)\n' - ' - maxcpus (default: None)\n' - ' - stdstats (default: True)\n' - ' - live (default: False)\n' - ' - exactbars (default: False)\n' - ' - preload (default: True)\n' - ' - writer (default False)\n' - ' - oldbuysell (default False)\n' - ' - tradehistory (default False)\n') - ) - - group.add_argument('--nostdstats', action='store_true', - help='Disable the standard statistics observers') - - datakeys = list(DATAFORMATS) - 
group.add_argument('--format', '--csvformat', '-c', required=False, - default='btcsv', choices=datakeys, - help='CSV Format') - - group.add_argument('--fromdate', '-f', required=False, default=None, - help='Starting date in YYYY-MM-DD[THH:MM:SS] format') - - group.add_argument('--todate', '-t', required=False, default=None, - help='Ending date in YYYY-MM-DD[THH:MM:SS] format') - - group.add_argument('--timeframe', '-tf', required=False, default='days', - choices=TIMEFRAMES.keys(), - help='Ending date in YYYY-MM-DD[THH:MM:SS] format') - - group.add_argument('--compression', '-cp', required=False, default=1, - type=int, - help='Ending date in YYYY-MM-DD[THH:MM:SS] format') - - group = parser.add_mutually_exclusive_group(required=False) - - group.add_argument('--resample', '-rs', required=False, default=None, - help='resample with timeframe:compression values') - - group.add_argument('--replay', '-rp', required=False, default=None, - help='replay with timeframe:compression values') - - group.add_argument( - '--hook', dest='hooks', - action='append', required=False, - metavar='module:hookfunction:kwargs', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - module:hookfunction:kwargs\n' - '\n' - ' Example: mymod:myhook:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'If module is omitted then hookfunction will be sought\n' - 'as the built-in cerebro method. Example:\n' - '\n' - ' - :addtz:tz=America/St_Johns\n' - '\n' - 'If name is omitted, then the 1st function found in the\n' - 'mod will be used. Such as in:\n' - '\n' - ' - module or module::kwargs\n' - '\n' - 'The function specified will be called, with cerebro\n' - 'instance passed as the first argument together with\n' - 'kwargs, if any were specified. This allows to customize\n' - 'cerebro, beyond options provided by this script\n\n') - ) - - # Module where to read the strategy from - group = parser.add_argument_group(title='Strategy options') - group.add_argument( - '--strategy', '-st', dest='strategies', - action='append', required=False, - metavar='module:name:kwargs', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - module:classname:kwargs\n' - '\n' - ' Example: mymod:myclass:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'If module is omitted then class name will be sought in\n' - 'the built-in strategies module. Such as in:\n' - '\n' - ' - :name:kwargs or :name\n' - '\n' - 'If name is omitted, then the 1st strategy found in the mod\n' - 'will be used. Such as in:\n' - '\n' - ' - module or module::kwargs') - ) - - # Module where to read the strategy from - group = parser.add_argument_group(title='Signals') - group.add_argument( - '--signal', '-sig', dest='signals', - action='append', required=False, - metavar='module:signaltype:name:kwargs', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - signaltype:module:signaltype:classname:kwargs\n' - '\n' - ' Example: longshort+mymod:myclass:a=1,b=2\n' - '\n' - 'signaltype may be ommited: longshort will be used\n' - '\n' - ' Example: mymod:myclass:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'signaltype will be uppercased to match the defintions\n' - 'fromt the backtrader.signal module\n' - '\n' - 'If module is omitted then class name will be sought in\n' - 'the built-in signals module. 
Such as in:\n' - '\n' - ' - LONGSHORT::name:kwargs or :name\n' - '\n' - 'If name is omitted, then the 1st signal found in the mod\n' - 'will be used. Such as in:\n' - '\n' - ' - module or module:::kwargs') - ) - - # Observers - group = parser.add_argument_group(title='Observers and statistics') - group.add_argument( - '--observer', '-ob', dest='observers', - action='append', required=False, - metavar='module:name:kwargs', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - module:classname:kwargs\n' - '\n' - ' Example: mymod:myclass:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'If module is omitted then class name will be sought in\n' - 'the built-in observers module. Such as in:\n' - '\n' - ' - :name:kwargs or :name\n' - '\n' - 'If name is omitted, then the 1st observer found in the\n' - 'will be used. Such as in:\n' - '\n' - ' - module or module::kwargs') - ) - # Analyzers - group = parser.add_argument_group(title='Analyzers') - group.add_argument( - '--analyzer', '-an', dest='analyzers', - action='append', required=False, - metavar='module:name:kwargs', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - module:classname:kwargs\n' - '\n' - ' Example: mymod:myclass:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'If module is omitted then class name will be sought in\n' - 'the built-in analyzers module. Such as in:\n' - '\n' - ' - :name:kwargs or :name\n' - '\n' - 'If name is omitted, then the 1st analyzer found in the\n' - 'will be used. Such as in:\n' - '\n' - ' - module or module::kwargs') - ) - - # Analyzer - Print - group = parser.add_mutually_exclusive_group(required=False) - group.add_argument('--pranalyzer', '-pralyzer', - required=False, action='store_true', - help=('Automatically print analyzers')) - - group.add_argument('--ppranalyzer', '-ppralyzer', - required=False, action='store_true', - help=('Automatically PRETTY print analyzers')) - - # Indicators - group = parser.add_argument_group(title='Indicators') - group.add_argument( - '--indicator', '-ind', dest='indicators', - metavar='module:name:kwargs', - action='append', required=False, - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - module:classname:kwargs\n' - '\n' - ' Example: mymod:myclass:a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'If module is omitted then class name will be sought in\n' - 'the built-in analyzers module. Such as in:\n' - '\n' - ' - :name:kwargs or :name\n' - '\n' - 'If name is omitted, then the 1st analyzer found in the\n' - 'will be used. 
Such as in:\n' - '\n' - ' - module or module::kwargs') - ) - - # Writer - group = parser.add_argument_group(title='Writers') - group.add_argument( - '--writer', '-wr', - dest='writers', metavar='kwargs', nargs='?', - action='append', required=False, const='', - help=('This option can be specified multiple times.\n' - '\n' - 'The argument can be specified with the following form:\n' - '\n' - ' - kwargs\n' - '\n' - ' Example: a=1,b=2\n' - '\n' - 'kwargs is optional\n' - '\n' - 'It creates a system wide writer which outputs run data\n' - '\n' - 'Please see the documentation for the available kwargs') - ) - - # Broker/Commissions - group = parser.add_argument_group(title='Cash and Commission Scheme Args') - group.add_argument('--cash', '-cash', required=False, type=float, - help='Cash to set to the broker') - group.add_argument('--commission', '-comm', required=False, type=float, - help='Commission value to set') - group.add_argument('--margin', '-marg', required=False, type=float, - help='Margin type to set') - group.add_argument('--mult', '-mul', required=False, type=float, - help='Multiplier to use') - - group.add_argument('--interest', required=False, type=float, - default=None, - help='Credit Interest rate to apply (0.0x)') - - group.add_argument('--interest_long', action='store_true', - required=False, default=None, - help='Apply credit interest to long positions') - - group.add_argument('--slip_perc', required=False, default=None, - type=float, - help='Enable slippage with a percentage value') - group.add_argument('--slip_fixed', required=False, default=None, - type=float, - help='Enable slippage with a fixed point value') - - group.add_argument('--slip_open', required=False, action='store_true', - help='enable slippage for when matching opening prices') - - group.add_argument('--no-slip_match', required=False, action='store_true', - help=('Disable slip_match, ie: matching capped at \n' - 'high-low if slippage goes over those limits')) - group.add_argument('--slip_out', required=False, action='store_true', - help='with slip_match enabled, match outside high-low') - - # Output flushing - group.add_argument('--flush', required=False, action='store_true', - help='flush the output - useful under win32 systems') - - # Plot options - parser.add_argument( - '--plot', '-p', nargs='?', - metavar='kwargs', - default=False, const=True, required=False, - help=('Plot the read data applying any kwargs passed\n' - '\n' - 'For example:\n' - '\n' - ' --plot style="candle" (to plot candlesticks)\n') - ) - - if pargs: - return parser.parse_args(pargs) - - return parser.parse_args() - - -if __name__ == '__main__': - btrun() diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/sar_pipeline.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/sar_pipeline.py deleted file mode 100644 index f43ded30f5b7fb54c302a442483b07ca8bf8af69..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/sar_pipeline.py +++ /dev/null @@ -1,43 +0,0 @@ -img_norm_cfg = dict(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=48, - min_width=48, - max_width=160, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio' - ]), -] -test_pipeline = [ - 
dict(type='LoadImageFromFile'), - dict( - type='MultiRotateAugOCR', - rotate_degrees=[0, 90, 270], - transforms=[ - dict( - type='ResizeOCR', - height=48, - min_width=48, - max_width=160, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'valid_ratio', - 'img_norm_cfg', 'ori_filename', 'img_shape' - ]), - ]) -] diff --git a/spaces/LucasCodeBreak/MusicGen/tests/models/test_encodec_model.py b/spaces/LucasCodeBreak/MusicGen/tests/models/test_encodec_model.py deleted file mode 100644 index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000 --- a/spaces/LucasCodeBreak/MusicGen/tests/models/test_encodec_model.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random - -import numpy as np -import torch - -from audiocraft.models import EncodecModel -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.quantization import DummyQuantizer - - -class TestEncodecModel: - - def _create_encodec_model(self, - sample_rate: int, - channels: int, - dim: int = 5, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - return model - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model = self._create_encodec_model(sample_rate, channels) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = model(x) - assert res.x.shape == x.shape - - def test_model_renorm(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False) - model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True) - - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - codes, scales = model_nonorm.encode(x) - codes, scales = model_renorm.encode(x) - assert scales is not None diff --git a/spaces/Lwight/Ghibli-Diffusion/app.py b/spaces/Lwight/Ghibli-Diffusion/app.py deleted file mode 100644 index 25e4911d6481344a01f0ab7867dabd1f3d130e7a..0000000000000000000000000000000000000000 --- a/spaces/Lwight/Ghibli-Diffusion/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr - -description = """
    Ghibli Diffusion -This is the fine-tuned Stable Diffusion model trained on images from modern anime feature films from Studio Ghibli. Use the tokens ghibli style in your prompts for the effect.
    - """ - -gr.Interface.load("models/nitrosocke/Ghibli-Diffusion", description=description, examples=[["superman ghibli style"]]).launch() diff --git a/spaces/Mandy234/Mandy234-myQAmodel/README.md b/spaces/Mandy234/Mandy234-myQAmodel/README.md deleted file mode 100644 index d96254dd40cf35278b4841de3770ffe39ff1e3ae..0000000000000000000000000000000000000000 --- a/spaces/Mandy234/Mandy234-myQAmodel/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mandy234 MyQAmodel -emoji: 🌖 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/__init__.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/__init__.py deleted file mode 100644 index 10989a5848e37aae5426560e9da7bf933040355f..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/ml/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# encoding: utf-8 -""" -Machine learning package. - -""" - -from __future__ import absolute_import, division, print_function - -# import the submodules -from . import nn, hmm, gmm, crf diff --git a/spaces/Mathux/TMR/model.py b/spaces/Mathux/TMR/model.py deleted file mode 100644 index 5e5e8f30664f314c7fa74e1363920b8b5525005e..0000000000000000000000000000000000000000 --- a/spaces/Mathux/TMR/model.py +++ /dev/null @@ -1,128 +0,0 @@ -from typing import List -import torch.nn as nn -import os - -import torch -import numpy as np -from torch import Tensor -from transformers import AutoTokenizer, AutoModel -from transformers import logging -from torch.nn.functional import normalize - - -class PositionalEncoding(nn.Module): - def __init__(self, d_model, max_len=5000): - super().__init__() - - pe = torch.zeros(max_len, d_model) - position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1) - div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-np.log(10000.0) / d_model)) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0).transpose(0, 1) - - self.register_buffer('pe', pe, persistent=False) - - def forward(self, x): - return x + self.pe[:x.shape[0], :] - - -class TMR_textencoder(nn.Module): - def __init__(self, modelpath: str, latent_dim: int, ff_size: int, - num_layers: int, num_heads: int, activation: str, **kwargs) -> None: - super().__init__() - - logging.set_verbosity_error() - - # Tokenizer - os.environ["TOKENIZERS_PARALLELISM"] = "false" - self.tokenizer = AutoTokenizer.from_pretrained(modelpath) - - # Text model - self.text_model = AutoModel.from_pretrained(modelpath) - # Then configure the model - self.text_encoded_dim = self.text_model.config.hidden_size - - # Projection of the text-outputs into the latent space - self.projection = nn.Sequential( - nn.ReLU(), - nn.Linear(self.text_encoded_dim, latent_dim) - ) - - self.mu_token = nn.Parameter(torch.randn(latent_dim)) - self.logvar_token = nn.Parameter(torch.randn(latent_dim)) - self.sequence_pos_encoding = PositionalEncoding(latent_dim) - - seq_trans_encoder_layer = nn.TransformerEncoderLayer(d_model=latent_dim, - nhead=num_heads, - dim_feedforward=ff_size, - dropout=0.0, - activation=activation) - self.seqTransEncoder = nn.TransformerEncoder( - seq_trans_encoder_layer, - num_layers=num_layers - ) - - def get_last_hidden_state(self, texts: List[str], - return_mask: bool = False): - encoded_inputs = 
self.tokenizer(texts, return_tensors="pt", padding=True) - output = self.text_model(**encoded_inputs.to(self.text_model.device)) - if not return_mask: - return output.last_hidden_state - return output.last_hidden_state, encoded_inputs.attention_mask.to(dtype=bool) - - def forward(self, texts: List[str]) -> Tensor: - text_encoded, mask = self.get_last_hidden_state(texts, return_mask=True) - - x = self.projection(text_encoded) - bs, nframes, _ = x.shape - # bs, nframes, totjoints, nfeats = x.shape - # Switch sequence and batch_size because the input of - # Pytorch Transformer is [Sequence, Batch size, ...] - x = x.permute(1, 0, 2) # now it is [nframes, bs, latent_dim] - - mu_token = torch.tile(self.mu_token, (bs,)).reshape(bs, -1) - logvar_token = torch.tile(self.logvar_token, (bs,)).reshape(bs, -1) - - # adding the distribution tokens for all sequences - xseq = torch.cat((mu_token[None], logvar_token[None], x), 0) - - # create a bigger mask, to allow attend to mu and logvar - token_mask = torch.ones((bs, 2), dtype=bool, device=x.device) - aug_mask = torch.cat((token_mask, mask), 1) - - # add positional encoding - xseq = self.sequence_pos_encoding(xseq) - final = self.seqTransEncoder(xseq, src_key_padding_mask=~aug_mask) - - # only mu for inference - mu = final[0] - return mu - - # compute score for retrieval - def compute_scores(self, texts, unit_embs=None, embs=None): - # not both empty - assert not (unit_embs is None and embs is None) - # not both filled - assert not (unit_embs is not None and embs is not None) - - output_str = False - # if one input, squeeze the output - if isinstance(texts, str): - texts = [texts] - output_str = True - - # compute unit_embs from embs if not given - if embs is not None: - unit_embs = normalize(embs) - - with torch.no_grad(): - latent_unit_texts = normalize(self(texts)) - # compute cosine similarity between 0 and 1 - scores = (unit_embs @ latent_unit_texts.T).T/2 + 0.5 - scores = scores.cpu().numpy() - - if output_str: - scores = scores[0] - - return scores diff --git a/spaces/MaximeTut/Emploi2021/emploi2021.py b/spaces/MaximeTut/Emploi2021/emploi2021.py deleted file mode 100644 index bfd9d2f336338eff80d422e567a7ddf81fe1e853..0000000000000000000000000000000000000000 --- a/spaces/MaximeTut/Emploi2021/emploi2021.py +++ /dev/null @@ -1,165 +0,0 @@ -import pandas as pd -import json -import matplotlib.pyplot as plt -import streamlit as st -import streamlit.components.v1 as stc -import plotly.express as px -import seaborn as sns -from streamlit_option_menu import option_menu - -sns.set() -logo = "https://www.ville-creteil.fr/img/Une-logo-pole-emploi.jpg" -logo2 = 
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAoHCBISEhISEhESERESEhESEBEREhESEhAOFxMYGBcTFxcbICwkGx0pIBcXJTYlKS49MzMzGiI5PjkxPSwyMzABCwsLEA4QHhISHTIpIiAyMjAyMDIyMjIyMjAyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMv/AABEIAOEA4QMBIgACEQEDEQH/xAAcAAEAAQUBAQAAAAAAAAAAAAAABwECBAUGAwj/xABIEAACAQMBBAUGCgkBCAMAAAABAgADBBESBQYhMQcTQVFhIlJxgZGhFCMyQlRikrHB0RYXQ3JzgpOy0qJEU2ODlMLh8BUzNP/EABoBAQADAQEBAAAAAAAAAAAAAAADBAUBAgb/xAAzEQACAgECAwUFBwUAAAAAAAAAAQIRAwQSITFBExRRYXEFMoGRwQYjM7HR4fAiUmKhsv/aAAwDAQACEQMRAD8AmaIiAIiIAiIgCIiAIiIAiUiAVieLV0Xm6j0sBPI39H/ep9oTtM45JdTLiYq31E8qqfaE9lqqeTA+ggzjVBST5M9IlIzB0rERAEREAREQBERAEREAREQBERAEREAREQBKRNFtbeKnRyqfGVBzAPkqfrH8BPUYOTpI8ZMkcauTo3bsAMkgAcyTgTT3m8NFMhSajDzfk/aP4TkbzadWufLYkdiLwUeqVo2jH5R0+A4mXIaVLjNmbPXSnwxL4s2txvJWb5OmmPAZPtMwHuqtTm7t6zj2Ce1Ogi9mT3njMgPiSpQj7qIWsk/fkzXi1c/M9pAl4s37h7RM4PK653ezz3eJgGyqeaD/ADCWmhUXjoYeK/8AibPrIFSN7HYRMCntKvT5O48Gyfvmytt53XhUQOO9fJP5SxiDzAPp4zGq2VNuXknw/KeXHHLmj1HtcfuyOmstsUKuAr6W81/JPq7D6pspG1xZVE4jyx3rzHqmVs/eCrRIBOtBzVzxA8D2SKek6wZZx69p1lVef7Hfys12zNq0rgZRsMPlI3Bl9Xb6ZsJTaadM0IyUladorEROHoREQBERAEREAREQBERAE8qtRUUsxCqBkknAAirUVFLMQqqCSTwAA7ZG+8m8bXLlEJWgp4DtqHzm8O4SXDheV0uXiV9RqI4Y2+fRGw27vQ1QmnQJWnyL8mf0dwmltrdn4ngvf2meVpb/ADm9S/iZsA80Uo41tgZNTzPfk+R70lVBhRjx7TPUVJia5XXPNE/BcEZeuV62YgeVDzlAyw8qHmJrldcUDL6yV6yYgeVFScoGWKkqHmJ1kqKkAy9cxrm0Sp9VvOH498ojk8sn0cZ7rSc8kc/ytF0ccdypo0VValBwwJUg+S6mdfu/vOtUinWIWpyVuSue7waaypbOVIam5B55UzndoWTUjqw2jPAkEFT4/nPUowzKnzIovJpnuhy6olzMrOL3U3l1lbeu3l8qdQ/P+o3j3HtnZzOyY5Y5VI2MOWOWO6JWIieCUREQBERAEREAShlZzG+23fglDSh+OrZVO9E+c/q5DxM9Qg5yUV1PGSahFylyRz++28PWObak3xan4xh8+oPm/uj75oLGj89v5R+M19mmtsnkOJ8TNsHmsorHHYjFW7LPtJ/AyhUldcxRUgVJ5omMvXKh5idZK9ZAMrXK9ZMUPK9ZFAyusldUxg8r1k4DJ1y5WJOBxJ5AcyZZZW9Ss4p0xqY+xR3nuE7rZGxadAA411O1z2eCjsEiyZVAlx4nP0NHY7v1amGc9WvceLEejsm9tth0E5qXPe5z7uU2mIlOWWUi7HDCPQ80oqvBVVR4ACemIjMjJRiWPSVgQyhgeYIBEvzKwDSXu7NrV4mkEbmGpEowPfw4e6XVtoLaCmlwzFG8hbggY1di1Mcjjt5HE3Ew9pWSXFJ6TjKsMeIPYw8QYnKbjSfpYxQxxnclwfOuZkUqquAykMpGQwOQR4GesiS22tc7MuHpMS6BsPTb5LL2OvmnHdJK2RtWldUxUpHI5Mp+Ujeaw7DIcWZT4cmuhd1ehnp0pc4PlJfXwNjERJikIiIAiIgFjuACScAAknuAkI7ybXN3dO4JKZ6ukO6mDj38/XJF6Q9qG3snVTh65FJccwp4ufsgj1yJtnrltXYvL0zQ0UKTyP0Rma+Tk1jXqzb0BoUD2+JnqHmLrlQ8sEKMrXKh5jBpXVAMjXKh5i65UPAMrXKipMXXK9ZB0yg89KCs7KiAszEKoHaTMLXO13E2bnVcuOWUpZ/1P+HtkeWeyNnvHHfKjoth7KW2pheBdsGo3aT3egTaRKzMbbds1EklSE8q1VUUszBVAyWJwAJWo4UEkgAAkk8gB2yNN4dutcuQpIoofIXzvrt4/dPeLG8jojy5VjR0O0N8FBK0E1/XfIX1DnNO+89037QL4KiAe8Gc9rlesl+OCC6FGWecup0NPea6X9oG8GRD9wE3FhveCQKyafrpkj1rznD9ZK64lghLoI5px6kvUKyuodGDK3EFTkGehkZ7B221s44k0mPxif8AeO4j3ySadQOoZTlWAKkciD2yjlxODL2LKsiOI6R9k66a3SDyqfkVMfOpk+S3qPuM4fd7bj2VYVFJKHAqJ2VE/Mdkmq8tlq03puMq6srA9xGJAm07ZqNWpSb5VNmQ+ozM1ENs1NH13sTJHPhlp8itL/l/o+RPlldpWppVpkMjqGRh2gzJkadF22eL2jnhxqUs9h+co9XH2yS5bhLdGzA1ulelzSxvpy810ERE9lUShlZQwCIulXaGu6SiD5NGnkj67nJ9wE5uz8lB48Z5bz3nXX1zU55rMo/dU6B90vVsDHdNnHHbjSMbI92RyMrVGuY2uNc7RwytUuDzE1yuqcoGTrldUxtUrrnaBk6pUPMbVKhooGZTBdlVeLMQoHiTgSZdm2oo0qdIckUL6TjifbIq3Nodbe0QeIQtUb0IpI9+mTBKGrlxUS9pI8HISspEqFw5TfzaXV0VpKcNWJz/AA15+04EjzVN1v5ea71lzwpIlMekjU393unOa5pYIbYLzMzPLdN+Rk6o1TH1yuuTUQmRrl2uYuqVDQDJ1zv9xdodZSeixy1Igr/Cbl7CDI41zo9xbrTeKueFRHQ+kDUP7ZFnhug/ImwSqa8yTZEXSbZ9XdhwMLXpo/8AzAdLe7TJcM4XpTtdVvSq4406pBP1XXP3qJkahXD0Pp/Y2bs9ZH/K18+X+6I52NfG3r0awPyHRj4jV5Q9mZ9A0nDAMOIYAg+BGRPnAcx6RJ43SuOssLVicnqUUnvKeSf7ZFp3zRq/aPDwx5V5r6r6m7iIlk+WE8bh9KO3mqzewEz2mDthsW1we6hVP+gwD5yR9T6j85ix9JOZnB5rbc8V9A+6ZmqbrMNs99cuDzG1RqnDlmVrgVJja5a1dR2+yDtmZqldc1rXZ7B7Zabp/D2RQ4m11yuqan4U3f7oF2/h7Io9EldGCarms/mUQo/ncf4yTwZF/RBVLteZA4LQ5eJf8pJ8y9T+IzS034aLhEtjMrlgjTeTda8qXVW
pTpiqlRy4KugIyB5JDEcRNX+iW0Pozf1KX+UmCJYjqppVwK0tLBu+JD/6JbQ+jH+pT/ylf0T2h9GP9Sl/lJfjM9d7n5HO6Q8yIf0T2h9Gb+pS/wAo/RTaH0Zvt0v8pL0pHep+CHdIeLIj/RO/+jN9ul/lNtu1uzd07qnUqUxSp021El0JbgRpAUnvkiywmJaiTVcOJ1aaEXfEGc3v9S12Ff6pR/YwnR5mk3wGbG6/hMZVn7rNDSScc8Gv7l+ZB0nHcMEbOtc+bUPqNVyJCtnavWqLTpqWd2Cqo7yefok/7JshQoUqI49VTRM95A4n25lXTq3Z9L9o8q7OGPrd/BKjOiIlo+TExNppqoVl86lUHtQiZcsdcgjvBHtgHy/SPL0CZOqWXtHq6tSmedOo6fZcj8JbmbidmJJcT11S1nxPNnxPImDiiej1CZZmWxB6ouzGZbEHaLsxmWxAokvobqfGXa99Oi3qDMPxkrAyFuiW60X7oTwq27geLIyuPcGkzAzM1K+8Zo6d/wBB6Ss8wZdmQE5xe2+ka2ta70Oqq1mpnTUZNAUP2qMnJxNf+ti2+i3H2qX5zgN+KWjaV2vfWLD0OoYffNDNCGmxuKZQlqJptEu/rYtvotx9ql+cfrYtvotx9ql+ciKJ67rj/jOd5yEu/rXtvotx9ql+cfrXtvotx9ql+ciKI7rj8/mO85CXP1rW30W4+1S/OU/WtbfRbj7VL85EkR3XH/Gc7zkJZbpWt+y0rn0vSE87XfT/AOTqrYfBuqpXGpHqdbqdU0knSNOM8JFU7Port9e0Ubsp06jn04AH3zxk0+OMG66EmLUZN8afUlnY+71taD4mnhiMF28qof5vym5ECVmalXI0Z5JZJOU3bfViIidPAlDKxAPn7pCsup2lcDGFqMKq+hxx94M5wNJO6Ztm/wD57pRw8qjUI7/lIT7GEizVNXBPdBGbmhU2XExmWZjMmsiovzGZZmMxYovzGZZmMxYovzGZZmMxYo3W6d/8HvrWqThVqqH/AHHyje5p9DAz5fzJ/wByNsi7sqTk5emOqqjtFRBjJ9IwfXKWqjykW9NKrR0WZXMsBjMplsiHpc2ead3TuAPIr0wrH/i0uHvUr7JwOZP++OwxfWj0hjrF+MonuqqOA9YyPXIAq02RmRgVZSVZTwKsDggzR007jXgUM8KlfiMxmWZjMsWQUX5jMszGYsUX5jMszKjJ4DiTwAHEk90WdouzJB3CuE2a1SterUpNXp0xQXq9TNS1Es5A5DIA4zP3F3F0aLq8TL8Go0G5Iex6g7T3L2TQb8bQ6++qsDkU8U0/dTn7y0zdbqqhtibnsT2atTn+8ukr4fImDY23ra8BNCoGK/KQjS6jvKnjNtIR6OXcbQpBScFXD+KY7fXiTaJSxzclbLPtLRx0ubZF2mrKxESQoCIiAaXevY4vLOvbn5TpqpnzaqnUh9o95nzg6lSVYFWUlWU81YHBHtn1QZCHSvu/8HuRcouKV0TqxyW4AyR6wM+oy1pZ09r6lfUQtbvA4TMZlsS9ZTouzGZbECi7MZlsQKLsxmWxAouzOr6Pt4/gV1pqNi3r4Sr3I+fIqerJB8D4TkonJJSVM7G4uz6eVs8RxB4gjkRLsyK+jvfUKFs7t8KMLbVmPAd1Nz9x9UlEGZ04ODpl6MlJWi/M4LfzccXRa5tQFucfGU+AWvgYDeD4GM9s7vMTkZOLtHZRUlTPmevSem7JUVqbocOjgqynuIM88z6J2xsG1vBi4oo5Awr401FHg44zjrzort2JNK5qIOxXVagHr4GW46iL58CrLA+hE+YzJNXooGeN5w8KXH75tLDoys0INR6tfHYSEU+nTxnp6iBxYJEU7M2dWuqgpUKbVHPMKOCjzmPJR4mS7uhuLSsytatprXI4rwylE/UB5t9Y+qdRYWFG3Tq6NNKVPzaahQT3nvPpmRmV8mdy4LgieGJR4s1u8e0hbWtWrnygulB31W4KP/e6Qc7EliTk5JJ7yeZnY9Iu2uurC3Q5p0M6scmqnmfUOHrM126O7VS+q8itBCDVfs78KfOPumVlbnOl0PtvZeKOj0rzZeG7i/TovV/U6voq2OR1l4455pUc9vnsPDPD1GSXMe0tkpU0p01CIihUUclUchMiWYR2qj5rV6l6nNLI+v5dBERPRWEREATVbw7Hp3ttUt6g8lx5LdqVBxVx4gzayhgHy7tfZ1S1r1Lequl6bEHhwYdjjwI4zDzJ66Q9zxtCkKlIAXdJT1Z5CqnM02/A9hkDVabIzI6lHUlXVhhlYcwRNHFl3rzKWSG1lMxmW5jMkIy7MZluYzALsxmW5jMAuzGZbmMwC7M7fdPpBq2oWjchq9AYCNn42kvcCflDwM4bMZnJRUlTOxk1xR9IbJ21bXia7eqlQfOUHFRD3Mh4rNhqnzHb3D02D03am68nRirD1idbsvpHv6ICuyXCj/eLh/tL+UrSwPoWFlXUm/MpmRra9K1I4620qKe003Vh7GxNlT6TtnEeV8IQ9xpavuMj7OS6HtTi+p2+ZTM449JOzfPrf9O88K3SfYL8hLiof4ap/cZzs5eA3I7jM5/e7bvwSjhONerlaS8yO+oR3D3maDZu/la+rrb2Vn5TcWqVnytKmObsF7B7zwnfUdi0RU650FSvgDrHGSMdiA8FHonjJGUVRPpp41NSmrS6eP7EZ7s7jVrlhWuQ9KiTqw3CrUzxOAeQPeZKuz7Gnb01pUlCIowFHvJ7z4zLlZDDGoci1rNdl1Urm+C5JckIiJ7KYiIgCIiAIiIBScB0g7iLeg3FsFS7UeUvJLhR2HufuPtkgSmJ2MnF2jjSapnyhcUHpu1OojJUQlXRxhlYdhE88z6H3x3Kt9pKWPxVyowldRknuVx85feJBu8O7l1YVClxTKjPkVVy1Nx3q34HjL2PMpepVnjaNVmMyzMrJLPFF2YzLMxmLFF+YzLMxmLFF+YzLMxmLFF+ZTMtzK5ixRdmMyzMZixRdmbTYGxLi+rLRt01E/Lc8EpJ2u7dg8OZm73R3Aur8rUcG3tcgtVdfLde6mh5nxPAePKTlsPYlvZUhRt0CIPlHmzt5zN2mQ5MyjwXMlhivmYm6m7VHZ1EU6Y1O2DVqkeVUf8AAdwm+lYlNtt2yzyERE4BERAEREAREQBERAEREATGvbOnWptTq00qIwwyOoZSPQZkxAIl3l6I1YtUsKujOT8GrElM9yVOa+hs+mRltjYV3aMVuaD08fPK5pn0OOE+psTyrUVdSrorqeBVgGBHoMmjmkufEjeNM+TMxmfQu1ujTZlxkii1u5z5Vs3V8e/Scr7pyV90Mt/s96COxa9Lj9pD+EmWeLI3iZE+YzO8ueibaa/INCp6KhX7xMJujLao/wBnQ+iok9dpHxOdmzkMxmdhT6MNqt+wpr+9VUTY2vRDtB//ALKtvS7/ACmfHsEdpHxHZsj7MZkxbO6GqQwbi8qv3rQRKY9GptRnZ7G3K2daYNK1QuP2lXNWpnv1NnHqnh54rkeliZBuwdytoXmDToFKZ/a1s00A7+IyfUJKu6vRjaWpWr
cH4XXGCNa6aKH6tPtPi3sE78DHLgJdIZZZSJFjSLFUDgBgDgAOQEviJEexERAEREAREQBERAEREAREQBERAEREAREQBERAKRiViAUxErEAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQD//Z" - -st.set_page_config(page_icon = logo2, - page_title ="Bonsoir !", layout = "wide") - -df = pd.read_csv("df_clean2.csv") -departement_geo = json.load(open("departements.geojson", "r")) - -liste_dep = sorted(df.NomDept.unique().tolist()) -liste_famille = df.famille.unique().tolist() -liste_metier = list(df.metier.unique()) - - -dico_map = {} -for feature in departement_geo["features"]: - feature['id']=feature['properties']['code'] - dico_map[feature['properties']['nom']] = feature['id'] - - -def heatmap(dep): - departement = df[df.NomDept == dep] - - dep_tail = departement.groupby(["metier"]).agg({"Nbr_demande":"sum"}).sort_values(by="Nbr_demande", ascending = True).head(10) - labels_tail = dep_tail.index.values.tolist() - - dep_head = departement.groupby(["metier"]).agg({"Nbr_demande":"sum"}).sort_values(by="Nbr_demande", ascending = True).tail(10) - labels_head = dep_head.index.values.tolist() - - - sns.set() - dep_head.reset_index(inplace=True) - dep_head = dep_head.sort_values("Nbr_demande", ascending = False) - dep_head.columns = ["metier", "nbr_demande"] - - dep_tail.reset_index(inplace=True) - dep_tail = dep_tail.sort_values("Nbr_demande", ascending = False) - dep_tail.columns = ["metier", "nbr_demande"] - - - fig1= plt.figure() - sns.barplot(y= "metier", x= "nbr_demande", data = dep_head, - orient="h", palette ="Reds_r") - plt.xlabel("") - plt.title("Les métier les plus demandés", fontsize= 18) - plt.ylabel("") - - st.pyplot(fig1) - - fig2= plt.figure() - sns.barplot(y= "metier", x= "nbr_demande", data = dep_tail, orient="h", palette ="Blues") - plt.xlabel("") - plt.title("Les métier les moins demandés", fontsize= 18) - plt.ylabel("") - plt.xlim(0,50) - - st.pyplot(fig2) - -def demande_metier(metier): - - df_metier = df[df.metier == metier] - choro = df_metier.groupby(by=["NomDept"]).agg({"Nbr_demande":"sum"}) - choro = choro.reset_index() - choro['id']=choro['NomDept'].apply(lambda x: dico_map[x]) - - - fig = px.choropleth_mapbox(choro, width = 900, height =100, locations="id", geojson = departement_geo, color = "Nbr_demande", hover_name = "NomDept", - mapbox_style = "open-street-map", - center = {"lat":46.80, "lon":3.02}, zoom = 5, opacity = 0.5, - title = metier) - - fig.update_geos(fitbounds = "locations", visible = False) - fig.update_layout(height=800, title_font_size = 25) - - st.plotly_chart(fig) - -def departement_page(): - - dep = st.selectbox("Choisir un département",liste_dep) - heatmap(dep) - - - -def metier_page(): - - - famille = st.selectbox("Famille de métier",liste_famille) - liste_metier = df[df.famille == famille]["metier"].unique().tolist() - metier = st.selectbox("Choisir un métier", liste_metier) - - demande_metier(metier) - - -def contact_message(): - st.header(":mailbox: Let's Get In Touch !") - - name, message = st.columns((1,2)) - with name: - contact_form = """
    """ - st.markdown(contact_form, unsafe_allow_html=True) - - with message : - contact_form2 = """
    - - - """ - st.markdown(contact_form2, unsafe_allow_html=True) - - with open("style.txt") as f: - st.markdown(f"", unsafe_allow_html=True) - - - - - -def main(): - st.title("Tendances de l'emploi en 2021") - - with st.sidebar: - st.image(logo, width = 300) - st.markdown("#") - st.markdown("####") - - choice = option_menu( - menu_title = "Analyses", - options = ["Par département", "Par métier", "Envoie Moi Un Message"], - icons=["house","hammer","envelope"], - menu_icon="search" - ) - - - - if choice == "Par département": - departement_page() - elif choice == "Par métier": - metier_page() - elif choice == "Envoie Moi Un Message": - contact_message() - - - st.sidebar.markdown("####") - st.sidebar.markdown("####") - st.sidebar.subheader(":notebook_with_decorative_cover: Par Maxime Le Tutour :relieved: ") - - st.sidebar.write(" :blue_book: [**LinkedIn**](https://share.streamlit.io/mesmith027/streamlit_webapps/main/MC_pi/streamlit_app.py)", unsafe_allow_html =True) - - - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/processing.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/processing.py deleted file mode 100644 index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/video/processing.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import subprocess -import tempfile - -from annotator.uniformer.mmcv.utils import requires_executable - - -@requires_executable('ffmpeg') -def convert_video(in_file, - out_file, - print_cmd=False, - pre_options='', - **kwargs): - """Convert a video with ffmpeg. - - This provides a general api to ffmpeg, the executed command is:: - - `ffmpeg -y -i ` - - Options(kwargs) are mapped to ffmpeg commands with the following rules: - - - key=val: "-key val" - - key=True: "-key" - - key=False: "" - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - pre_options (str): Options appears before "-i ". - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = [] - for k, v in kwargs.items(): - if isinstance(v, bool): - if v: - options.append(f'-{k}') - elif k == 'log_level': - assert v in [ - 'quiet', 'panic', 'fatal', 'error', 'warning', 'info', - 'verbose', 'debug', 'trace' - ] - options.append(f'-loglevel {v}') - else: - options.append(f'-{k} {v}') - cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \ - f'{out_file}' - if print_cmd: - print(cmd) - subprocess.call(cmd, shell=True) - - -@requires_executable('ffmpeg') -def resize_video(in_file, - out_file, - size=None, - ratio=None, - keep_ar=False, - log_level='info', - print_cmd=False): - """Resize a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). - ratio (tuple or float): Expected resize ratio, (2, 0.5) means - (w*2, h*0.5). - keep_ar (bool): Whether to keep original aspect ratio. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. 
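    Example (an illustrative sketch, not part of the original file; it assumes
    an ``ffmpeg`` binary on the PATH and a local ``in.mp4``)::

        # Fit the clip inside 1280x720 while preserving its aspect ratio.
        resize_video('in.mp4', 'out_720p.mp4', size=(1280, 720), keep_ar=True)

        # Alternatively, scale both dimensions by a single ratio.
        resize_video('in.mp4', 'out_half.mp4', ratio=0.5)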
- """ - if size is None and ratio is None: - raise ValueError('expected size or ratio must be specified') - if size is not None and ratio is not None: - raise ValueError('size and ratio cannot be specified at the same time') - options = {'log_level': log_level} - if size: - if not keep_ar: - options['vf'] = f'scale={size[0]}:{size[1]}' - else: - options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \ - 'force_original_aspect_ratio=decrease' - else: - if not isinstance(ratio, tuple): - ratio = (ratio, ratio) - options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"' - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def cut_video(in_file, - out_file, - start=None, - end=None, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Cut a clip from a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - start (None or float): Start time (in seconds). - end (None or float): End time (in seconds). - vcodec (None or str): Output video codec, None for unchanged. - acodec (None or str): Output audio codec, None for unchanged. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - if start: - options['ss'] = start - else: - start = 0 - if end: - options['t'] = end - start - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def concat_video(video_list, - out_file, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Concatenate multiple videos into a single one. - - Args: - video_list (list): A list of video filenames - out_file (str): Output video filename - vcodec (None or str): Output video codec, None for unchanged - acodec (None or str): Output audio codec, None for unchanged - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True) - with open(tmp_filename, 'w') as f: - for filename in video_list: - f.write(f'file {osp.abspath(filename)}\n') - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - convert_video( - tmp_filename, - out_file, - print_cmd, - pre_options='-f concat -safe 0', - **options) - os.close(tmp_filehandler) - os.remove(tmp_filename) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/cityscapes.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/cityscapes.py deleted file mode 100644 index 81e47a914a1aa2e5458e18669d65ffb742f46fc6..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/cityscapes.py +++ /dev/null @@ -1,217 +0,0 @@ -import os.path as osp -import tempfile - -import annotator.uniformer.mmcv as mmcv -import numpy as np -from annotator.uniformer.mmcv.utils import print_log -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class CityscapesDataset(CustomDataset): - """Cityscapes dataset. - - The ``img_suffix`` is fixed to '_leftImg8bit.png' and ``seg_map_suffix`` is - fixed to '_gtFine_labelTrainIds.png' for Cityscapes dataset. 
- """ - - CLASSES = ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole', - 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky', - 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - PALETTE = [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156], - [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0], - [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60], - [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], - [0, 80, 100], [0, 0, 230], [119, 11, 32]] - - def __init__(self, **kwargs): - super(CityscapesDataset, self).__init__( - img_suffix='_leftImg8bit.png', - seg_map_suffix='_gtFine_labelTrainIds.png', - **kwargs) - - @staticmethod - def _convert_to_label_id(result): - """Convert trainId to id for cityscapes.""" - if isinstance(result, str): - result = np.load(result) - import cityscapesscripts.helpers.labels as CSLabels - result_copy = result.copy() - for trainId, label in CSLabels.trainId2label.items(): - result_copy[result == trainId] = label.id - - return result_copy - - def results2img(self, results, imgfile_prefix, to_label_id): - """Write the segmentation results to images. - - Args: - results (list[list | tuple | ndarray]): Testing results of the - dataset. - imgfile_prefix (str): The filename prefix of the png files. - If the prefix is "somepath/xxx", - the png files will be named "somepath/xxx.png". - to_label_id (bool): whether convert output to label_id for - submission - - Returns: - list[str: str]: result txt files which contains corresponding - semantic segmentation images. - """ - mmcv.mkdir_or_exist(imgfile_prefix) - result_files = [] - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - if to_label_id: - result = self._convert_to_label_id(result) - filename = self.img_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - - png_filename = osp.join(imgfile_prefix, f'{basename}.png') - - output = Image.fromarray(result.astype(np.uint8)).convert('P') - import cityscapesscripts.helpers.labels as CSLabels - palette = np.zeros((len(CSLabels.id2label), 3), dtype=np.uint8) - for label_id, label in CSLabels.id2label.items(): - palette[label_id] = label.color - - output.putpalette(palette) - output.save(png_filename) - result_files.append(png_filename) - prog_bar.update() - - return result_files - - def format_results(self, results, imgfile_prefix=None, to_label_id=True): - """Format the results into dir (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - imgfile_prefix (str | None): The prefix of images files. It - includes the file path and the prefix of filename, e.g., - "a/b/prefix". If not specified, a temp file will be created. - Default: None. - to_label_id (bool): whether convert output to label_id for - submission. Default: False - - Returns: - tuple: (result_files, tmp_dir), result_files is a list containing - the image paths, tmp_dir is the temporal directory created - for saving json/png files when img_prefix is not specified. 
- """ - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: ' - f'{len(results)} != {len(self)}') - - if imgfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - imgfile_prefix = tmp_dir.name - else: - tmp_dir = None - result_files = self.results2img(results, imgfile_prefix, to_label_id) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='mIoU', - logger=None, - imgfile_prefix=None, - efficient_test=False): - """Evaluation in Cityscapes/default protocol. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file, - for cityscapes evaluation only. It includes the file path and - the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with cityscapes protocol, it would be - the prefix of output png files. The output files would be - png images under folder "a/b/prefix/xxx.png", where "xxx" is - the image name of cityscapes. If not specified, a temp file - will be created for evaluation. - Default: None. - - Returns: - dict[str, float]: Cityscapes/default metrics. - """ - - eval_results = dict() - metrics = metric.copy() if isinstance(metric, list) else [metric] - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, logger, imgfile_prefix)) - metrics.remove('cityscapes') - if len(metrics) > 0: - eval_results.update( - super(CityscapesDataset, - self).evaluate(results, metrics, logger, efficient_test)) - - return eval_results - - def _evaluate_cityscapes(self, results, logger, imgfile_prefix): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - imgfile_prefix (str | None): The prefix of output image file - - Returns: - dict[str: float]: Cityscapes evaluation results. 
- """ - try: - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install cityscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, imgfile_prefix) - - if tmp_dir is None: - result_dir = imgfile_prefix - else: - result_dir = tmp_dir.name - - eval_results = dict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - CSEval.args.evalInstLevelScore = True - CSEval.args.predictionPath = osp.abspath(result_dir) - CSEval.args.evalPixelAccuracy = True - CSEval.args.JSONOutput = False - - seg_map_list = [] - pred_list = [] - - # when evaluating with official cityscapesscripts, - # **_gtFine_labelIds.png is used - for seg_map in mmcv.scandir( - self.ann_dir, 'gtFine_labelIds.png', recursive=True): - seg_map_list.append(osp.join(self.ann_dir, seg_map)) - pred_list.append(CSEval.getPrediction(CSEval.args, seg_map)) - - eval_results.update( - CSEval.evaluateImgLists(pred_list, seg_map_list, CSEval.args)) - - if tmp_dir is not None: - tmp_dir.cleanup() - - return eval_results diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/tutorial_train_sd21.py b/spaces/Mellow-ai/PhotoAI_Mellow/tutorial_train_sd21.py deleted file mode 100644 index 8bbc148f9b1e90561f5a186cc0be94c911dd67cf..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/tutorial_train_sd21.py +++ /dev/null @@ -1,35 +0,0 @@ -from share import * - -import pytorch_lightning as pl -from torch.utils.data import DataLoader -from tutorial_dataset import MyDataset -from cldm.logger import ImageLogger -from cldm.model import create_model, load_state_dict - - -# Configs -resume_path = './models/control_sd21_ini.ckpt' -batch_size = 4 -logger_freq = 300 -learning_rate = 1e-5 -sd_locked = True -only_mid_control = False - - -# First use cpu to load models. Pytorch Lightning will automatically move it to GPUs. -model = create_model('./models/cldm_v21.yaml').cpu() -model.load_state_dict(load_state_dict(resume_path, location='cpu')) -model.learning_rate = learning_rate -model.sd_locked = sd_locked -model.only_mid_control = only_mid_control - - -# Misc -dataset = MyDataset() -dataloader = DataLoader(dataset, num_workers=0, batch_size=batch_size, shuffle=True) -logger = ImageLogger(batch_frequency=logger_freq) -trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger]) - - -# Train! -trainer.fit(model, dataloader) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/src/rotation_utils.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/src/rotation_utils.py deleted file mode 100644 index 8d6d4f3cbdb1f808d210dce8b22fa3ba831d45a9..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/src/rotation_utils.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Utilities for generating and applying rotation matrices. -""" -import numpy as np - -ANGLE_EPS = 0.001 - - -def normalize(v): - return v / np.linalg.norm(v) - - -def get_r_matrix(ax_, angle): - ax = normalize(ax_) - if np.abs(angle) > ANGLE_EPS: - S_hat = np.array( - [[0.0, -ax[2], ax[1]], [ax[2], 0.0, -ax[0]], [-ax[1], ax[0], 0.0]], - dtype=np.float32) - R = np.eye(3) + np.sin(angle)*S_hat + \ - (1-np.cos(angle))*(np.linalg.matrix_power(S_hat, 2)) - else: - R = np.eye(3) - return R - - -def r_between(v_from_, v_to_): - v_from = normalize(v_from_) - v_to = normalize(v_to_) - ax = normalize(np.cross(v_from, v_to)) - angle = np.arccos(np.dot(v_from, v_to)) - return get_r_matrix(ax, angle) - - -def rotate_camera_to_point_at(up_from, lookat_from, up_to, lookat_to): - inputs = [up_from, lookat_from, up_to, lookat_to] - for i in range(4): - inputs[i] = normalize(np.array(inputs[i]).reshape((-1,))) - up_from, lookat_from, up_to, lookat_to = inputs - r1 = r_between(lookat_from, lookat_to) - - new_x = np.dot(r1, np.array([1, 0, 0]).reshape((-1, 1))).reshape((-1)) - to_x = normalize(np.cross(lookat_to, up_to)) - angle = np.arccos(np.dot(new_x, to_x)) - if angle > ANGLE_EPS: - if angle < np.pi - ANGLE_EPS: - ax = normalize(np.cross(new_x, to_x)) - flip = np.dot(lookat_to, ax) - if flip > 0: - r2 = get_r_matrix(lookat_to, angle) - elif flip < 0: - r2 = get_r_matrix(lookat_to, -1. * angle) - else: - # Angle of rotation is too close to 180 degrees, direction of rotation - # does not matter. - r2 = get_r_matrix(lookat_to, angle) - else: - r2 = np.eye(3) - return np.dot(r2, r1) - diff --git a/spaces/NN520/AI/src/pages/api/image.ts b/spaces/NN520/AI/src/pages/api/image.ts deleted file mode 100644 index 4b894bea86050c0f3888cc56f60c0cb7f8b57cfc..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/pages/api/image.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, { - IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE - }) - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/Nee001/bing0/src/components/tailwind-indicator.tsx b/spaces/Nee001/bing0/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
    - <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
    -   <div className="block sm:hidden">xs</div>
    -   <div className="hidden sm:block md:hidden">sm</div>
    -   <div className="hidden md:block lg:hidden">md</div>
    -   <div className="hidden lg:block xl:hidden">lg</div>
    -   <div className="hidden xl:block 2xl:hidden">xl</div>
    -   <div className="hidden 2xl:block">2xl</div>
    - </div>
    - ) -} diff --git a/spaces/Nortrom8844/summarize-long-text/app.py b/spaces/Nortrom8844/summarize-long-text/app.py deleted file mode 100644 index 679fa69c0ed82faf109c2d782a565ee6c470fbe7..0000000000000000000000000000000000000000 --- a/spaces/Nortrom8844/summarize-long-text/app.py +++ /dev/null @@ -1,314 +0,0 @@ -import logging -import random -import re -import time -from pathlib import Path - -import gradio as gr -import nltk -from cleantext import clean - -from summarize import load_model_and_tokenizer, summarize_via_tokenbatches -from utils import load_example_filenames, truncate_word_count - -_here = Path(__file__).parent - -nltk.download("stopwords") # TODO=find where this requirement originates from - -logging.basicConfig( - level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s" -) - - -def proc_submission( - input_text: str, - model_size: str, - num_beams, - token_batch_length, - length_penalty, - repetition_penalty, - no_repeat_ngram_size, - max_input_length: int = 1024, -): - """ - proc_submission - a helper function for the gradio module to process submissions - - Args: - input_text (str): the input text to summarize - model_size (str): the size of the model to use - num_beams (int): the number of beams to use - token_batch_length (int): the length of the token batches to use - length_penalty (float): the length penalty to use - repetition_penalty (float): the repetition penalty to use - no_repeat_ngram_size (int): the no-repeat ngram size to use - max_input_length (int, optional): the maximum input length to use. Defaults to 1024. - - Returns: - str in HTML format, string of the summary, str of score - """ - - settings = { - "length_penalty": float(length_penalty), - "repetition_penalty": float(repetition_penalty), - "no_repeat_ngram_size": int(no_repeat_ngram_size), - "encoder_no_repeat_ngram_size": 4, - "num_beams": int(num_beams), - "min_length": 4, - "max_length": int(token_batch_length // 4), - "early_stopping": True, - "do_sample": False, - } - st = time.perf_counter() - history = {} - clean_text = clean(input_text, lower=False) - max_input_length = 2048 if model_size == "base" else max_input_length - processed = truncate_word_count(clean_text, max_input_length) - - if processed["was_truncated"]: - tr_in = processed["truncated_text"] - # create elaborate HTML warning - input_wc = re.split(r"\s+", input_text) - msg = f""" -
    Warning
    Input text was truncated to {max_input_length} words. That's about {100*max_input_length/len(input_wc):.2f}% of the submission.
    - """ - logging.warning(msg) - history["WARNING"] = msg - else: - tr_in = input_text - msg = None - - if len(input_text) < 50: - # this is essentially a different case from the above - msg = f""" -
    Error
    Input text is too short to summarize. Detected {len(input_text)} characters. - Please load text by selecting an example from the dropdown menu or by pasting text into the text box.
    - """ - logging.warning(msg) - logging.warning("RETURNING EMPTY STRING") - history["WARNING"] = msg - - return msg, "", [] - - _summaries = summarize_via_tokenbatches( - tr_in, - model_sm if "base" in model_size.lower() else model, - tokenizer_sm if "base" in model_size.lower() else tokenizer, - batch_length=token_batch_length, - **settings, - ) - sum_text = [f"Section {i}: " + s["summary"][0] for i, s in enumerate(_summaries)] - sum_scores = [ - f" - Section {i}: {round(s['summary_score'],4)}" - for i, s in enumerate(_summaries) - ] - - sum_text_out = "\n".join(sum_text) - history["Summary Scores"] = "

    " - scores_out = "\n".join(sum_scores) - rt = round((time.perf_counter() - st) / 60, 2) - print(f"Runtime: {rt} minutes") - html = "" - html += f"

    Runtime: {rt} minutes on CPU

    " - if msg is not None: - html += msg - - html += "" - - return html, sum_text_out, scores_out - - -def load_single_example_text( - example_path: str or Path, -): - """ - load_single_example - a helper function for the gradio module to load examples - Returns: - list of str, the examples - """ - global name_to_path - full_ex_path = name_to_path[example_path] - full_ex_path = Path(full_ex_path) - # load the examples into a list - with open(full_ex_path, "r", encoding="utf-8", errors="ignore") as f: - raw_text = f.read() - text = clean(raw_text, lower=False) - return text - - -def load_uploaded_file(file_obj): - """ - load_uploaded_file - process an uploaded file - - Args: - file_obj (POTENTIALLY list): Gradio file object inside a list - - Returns: - str, the uploaded file contents - """ - - # file_path = Path(file_obj[0].name) - - # check if mysterious file object is a list - if isinstance(file_obj, list): - file_obj = file_obj[0] - file_path = Path(file_obj.name) - try: - with open(file_path, "r", encoding="utf-8", errors="ignore") as f: - raw_text = f.read() - text = clean(raw_text, lower=False) - return text - except Exception as e: - logging.info(f"Trying to load file with path {file_path}, error: {e}") - return "Error: Could not read file. Ensure that it is a valid text file with encoding UTF-8." - - -if __name__ == "__main__": - - model, tokenizer = load_model_and_tokenizer("pszemraj/led-large-book-summary") - model_sm, tokenizer_sm = load_model_and_tokenizer("pszemraj/led-base-book-summary") - - name_to_path = load_example_filenames(_here / "examples") - logging.info(f"Loaded {len(name_to_path)} examples") - demo = gr.Blocks() - _examples = list(name_to_path.keys()) - with demo: - - gr.Markdown("# Long-Form Summarization: LED & BookSum") - gr.Markdown( - "LED models ([model card](https://huggingface.co/pszemraj/led-large-book-summary)) fine-tuned to summarize long-form text. A [space with other models can be found here](https://huggingface.co/spaces/pszemraj/document-summarization)" - ) - with gr.Column(): - - gr.Markdown("## Load Inputs & Select Parameters") - gr.Markdown( - "Enter or upload text below, and it will be summarized [using the selected parameters](https://huggingface.co/blog/how-to-generate). " - ) - with gr.Row(): - model_size = gr.Radio( - choices=["base", "large"], label="Model Variant", value="large" - ) - num_beams = gr.Radio( - choices=[2, 3, 4], - label="Beam Search: # of Beams", - value=2, - ) - gr.Markdown("Load a a .txt - example or your own (_You may find [this OCR space](https://huggingface.co/spaces/pszemraj/pdf-ocr) useful_)") - with gr.Row(): - example_name = gr.Dropdown( - _examples, - label="Examples", - value=random.choice(_examples), - ) - uploaded_file = gr.File( - label="File Upload", - file_count="single", - type="file", - ) - with gr.Row(): - input_text = gr.Textbox( - lines=4, - label="Input Text (for summarization)", - placeholder="Enter text to summarize, the text will be cleaned and truncated on Spaces. Narrative, academic (both papers and lecture transcription), and article text work well. May take a bit to generate depending on the input text :)", - ) - with gr.Column(): - load_examples_button = gr.Button( - "Load Example", - ) - load_file_button = gr.Button("Upload File") - gr.Markdown("---") - - with gr.Column(): - gr.Markdown("## Generate Summary") - gr.Markdown( - "Summary generation should take approximately 1-2 minutes for most settings." 
- ) - summarize_button = gr.Button( - "Summarize!", - variant="primary", - ) - - output_text = gr.HTML("

    Output will appear below:

    ") - gr.Markdown("### Summary Output") - summary_text = gr.Textbox( - label="Summary", placeholder="The generated summary will appear here" - ) - gr.Markdown( - "The summary scores can be thought of as representing the quality of the summary. less-negative numbers (closer to 0) are better:" - ) - summary_scores = gr.Textbox( - label="Summary Scores", placeholder="Summary scores will appear here" - ) - - gr.Markdown("---") - - with gr.Column(): - gr.Markdown("### Advanced Settings") - with gr.Row(): - length_penalty = gr.inputs.Slider( - minimum=0.5, - maximum=1.0, - label="length penalty", - default=0.7, - step=0.05, - ) - token_batch_length = gr.Radio( - choices=[512, 768, 1024, 1536], - label="token batch length", - value=1024, - ) - - with gr.Row(): - repetition_penalty = gr.inputs.Slider( - minimum=1.0, - maximum=5.0, - label="repetition penalty", - default=3.5, - step=0.1, - ) - no_repeat_ngram_size = gr.Radio( - choices=[2, 3, 4], - label="no repeat ngram size", - value=3, - ) - with gr.Column(): - gr.Markdown("### About the Model") - gr.Markdown( - "- [This model](https://huggingface.co/pszemraj/led-large-book-summary) is a fine-tuned checkpoint of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the [BookSum dataset](https://arxiv.org/abs/2105.08209).The goal was to create a model that can generalize well and is useful in summarizing lots of text in academic and daily usage." - ) - gr.Markdown( - "- The two most important parameters-empirically-are the `num_beams` and `token_batch_length`. " - ) - gr.Markdown( - "- The model can be used with tag [pszemraj/led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary). See the model card for details on usage & a Colab notebook for a tutorial." - ) - gr.Markdown("---") - - load_examples_button.click( - fn=load_single_example_text, inputs=[example_name], outputs=[input_text] - ) - - load_file_button.click( - fn=load_uploaded_file, inputs=uploaded_file, outputs=[input_text] - ) - - summarize_button.click( - fn=proc_submission, - inputs=[ - input_text, - model_size, - num_beams, - token_batch_length, - length_penalty, - repetition_penalty, - no_repeat_ngram_size, - ], - outputs=[output_text, summary_text, summary_scores], - ) - - demo.launch(enable_queue=True, share=True) diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker.py deleted file mode 100644 index 07379847a854d85623db02ce5e5409c1566eb80c..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker.py +++ /dev/null @@ -1,40 +0,0 @@ -from speaker_encoder.data_objects.random_cycler import RandomCycler -from speaker_encoder.data_objects.utterance import Utterance -from pathlib import Path - -# Contains the set of utterances of a single speaker -class Speaker: - def __init__(self, root: Path): - self.root = root - self.name = root.name - self.utterances = None - self.utterance_cycler = None - - def _load_utterances(self): - with self.root.joinpath("_sources.txt").open("r") as sources_file: - sources = [l.split(",") for l in sources_file] - sources = {frames_fname: wave_fpath for frames_fname, wave_fpath in sources} - self.utterances = [Utterance(self.root.joinpath(f), w) for f, w in sources.items()] - self.utterance_cycler = RandomCycler(self.utterances) - - def random_partial(self, count, n_frames): - """ - Samples a batch of unique partial utterances from the disk in 
a way that all - utterances come up at least once every two cycles and in a random order every time. - - :param count: The number of partial utterances to sample from the set of utterances from - that speaker. Utterances are guaranteed not to be repeated if is not larger than - the number of utterances available. - :param n_frames: The number of frames in the partial utterance. - :return: A list of tuples (utterance, frames, range) where utterance is an Utterance, - frames are the frames of the partial utterances and range is the range of the partial - utterance with regard to the complete utterance. - """ - if self.utterances is None: - self._load_utterances() - - utterances = self.utterance_cycler.sample(count) - - a = [(u,) + u.random_partial(n_frames) for u in utterances] - - return a diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/data/data_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/data/data_utils.py deleted file mode 100644 index 7f843789138c62668f9e1c4e7fd44299fb5ef768..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/data/data_utils.py +++ /dev/null @@ -1,601 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: .-.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - if values[0].dim() == 1: - res = values[0].new(len(values), size).fill_(pad_idx) - elif values[0].dim() == 2: - assert move_eos_to_beginning is False - res = values[0].new(len(values), size, values[0].size(1)).fill_(pad_idx) - else: - raise NotImplementedError - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. 
For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. - """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. 
- Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. " - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). 
- fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). - """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. 
- however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - np.int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# 
returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/camembert/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/camembert/README.md deleted file mode 100644 index 5ef4fe3f151bb468712f3be935ea5bb1b1360bf7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/camembert/README.md +++ /dev/null @@ -1,75 +0,0 @@ -# CamemBERT: a Tasty French Language Model - -## Introduction - -[CamemBERT](https://arxiv.org/abs/1911.03894) is a pretrained language model trained on 138GB of French text based on RoBERTa. - -Also available in [github.com/huggingface/transformers](https://github.com/huggingface/transformers/). - -## Pre-trained models - -| Model | #params | Download | Arch. 
| Training data | -|--------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------|-------|-----------------------------------| -| `camembert` / `camembert-base` | 110M | [camembert-base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz) | Base | OSCAR (138 GB of text) | -| `camembert-large` | 335M | [camembert-large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz) | Large | CCNet (135 GB of text) | -| `camembert-base-ccnet` | 110M | [camembert-base-ccnet.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz) | Base | CCNet (135 GB of text) | -| `camembert-base-wikipedia-4gb` | 110M | [camembert-base-wikipedia-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz) | Base | Wikipedia (4 GB of text) | -| `camembert-base-oscar-4gb` | 110M | [camembert-base-oscar-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz) | Base | Subsample of OSCAR (4 GB of text) | -| `camembert-base-ccnet-4gb` | 110M | [camembert-base-ccnet-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz) | Base | Subsample of CCNet (4 GB of text) | - -## Example usage - -### fairseq -##### Load CamemBERT from torch.hub (PyTorch >= 1.1): -```python -import torch -camembert = torch.hub.load('pytorch/fairseq', 'camembert') -camembert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load CamemBERT (for PyTorch 1.0 or custom models): -```python -# Download camembert model -wget https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz -tar -xzvf camembert.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import CamembertModel -camembert = CamembertModel.from_pretrained('/path/to/camembert') -camembert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Filling masks: -```python -masked_line = 'Le camembert est :)' -camembert.fill_mask(masked_line, topk=3) -# [('Le camembert est délicieux :)', 0.4909118115901947, ' délicieux'), -# ('Le camembert est excellent :)', 0.10556942224502563, ' excellent'), -# ('Le camembert est succulent :)', 0.03453322499990463, ' succulent')] -``` - -##### Extract features from Camembert: -```python -# Extract the last layer's features -line = "J'aime le camembert !" 
-tokens = camembert.encode(line) -last_layer_features = camembert.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 10, 768]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = camembert.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 13 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -## Citation -If you use our work, please cite: - -```bibtex -@inproceedings{martin2020camembert, - title={CamemBERT: a Tasty French Language Model}, - author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, - booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, - year={2020} -} -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/shorten_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/shorten_dataset.py deleted file mode 100644 index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/shorten_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -from fairseq.data import data_utils - -from . import BaseWrapperDataset - - -class TruncateDataset(BaseWrapperDataset): - """Truncate a sequence by returning the first truncation_length tokens""" - - def __init__(self, dataset, truncation_length): - super().__init__(dataset) - assert truncation_length is not None - self.truncation_length = truncation_length - self.dataset = dataset - - def __getitem__(self, index): - item = self.dataset[index] - item_len = item.size(0) - if item_len > self.truncation_length: - item = item[: self.truncation_length] - return item - - @property - def sizes(self): - return np.minimum(self.dataset.sizes, self.truncation_length) - - def __len__(self): - return len(self.dataset) - - -class RandomCropDataset(TruncateDataset): - """Truncate a sequence by returning a random crop of truncation_length tokens""" - - def __init__(self, dataset, truncation_length, seed=1): - super().__init__(dataset, truncation_length) - self.seed = seed - self.epoch = 0 - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True # only the crop changes, not item sizes - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - self.epoch = epoch - - def __getitem__(self, index): - with data_utils.numpy_seed(self.seed, self.epoch, index): - item = self.dataset[index] - item_len = item.size(0) - excess = item_len - self.truncation_length - if excess > 0: - start_idx = np.random.randint(0, excess) - item = item[start_idx : start_idx + self.truncation_length] - return item - - -def maybe_shorten_dataset( - dataset, - split, - shorten_data_split_list, - shorten_method, - tokens_per_sample, - seed, -): - truncate_split = ( - split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0 - ) - if shorten_method == "truncate" and truncate_split: - dataset = TruncateDataset(dataset, tokens_per_sample) - elif shorten_method == "random_crop" and truncate_split: - dataset = RandomCropDataset(dataset, tokens_per_sample, seed) - return dataset diff --git 
a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/.github/PULL_REQUEST_TEMPLATE.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index d005e2df4f717ea4844a8320981d77d96e425a52..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,16 +0,0 @@ -# Before submitting - -- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements) -- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/main/CONTRIBUTING.md)? -- [ ] Did you make sure to update the docs? -- [ ] Did you write any new necessary tests? - -## What does this PR do? -Fixes # (issue). - -## PR review -Anyone in the community is free to review the PR once the tests have passed. -If we didn't discuss your PR in Github issues there's a high chance it will not be merged. - -## Did you have fun? -Make sure you had fun coding 🙃 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md b/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md deleted file mode 100644 index 04f3f15d3ed391e26ca87f726ae88f30d1d414ab..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -name: ❓ Questions/Help -about: If you have questions, please first search existing issues and docs -labels: 'question, needs triage' ---- - -## ❓ Questions and Help - -### Before asking: -1. search the issues. -2. search the docs. - - - -#### What is your question? - -#### Code - - - -#### What have you tried? - -#### What's your environment? - - - fairseq Version (e.g., 1.0 or main): - - PyTorch Version (e.g., 1.0) - - OS (e.g., Linux): - - How you installed fairseq (`pip`, source): - - Build command you used (if compiling from source): - - Python version: - - CUDA/cuDNN version: - - GPU models and configuration: - - Any other relevant information: diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh deleted file mode 100644 index 99fbc75920836a4b4bbdbd6b523749843288e450..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
- exit
-fi
-
-# first run download_wmt20.sh; it will install a few useful tools for other scripts
-# TODO: need to print out instructions on downloading a few files which requires manual authentication from the websites
-bash ./download_wmt20.sh
-
-python ./download_wmt19_and_before.py
-bash ./download_wat19_my.sh
-python ./download_ted_and_extract.py
-bash ./download_lotus.sh
-bash ./download_iitb.sh
-bash ./download_af_xh.sh
-
-
-# IWSLT downloading URLs have changed in between; TODO: fix them:
-bash ./download_iwslt_and_extract.sh
-
-# TODO: globalvoices URLs changed; need to be fixed
-bash ./download_flores_data.sh
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mtedx_example.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mtedx_example.md
deleted file mode 100644
index 25b4556affbf5bc141b103095d15fffef6225c0e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mtedx_example.md
+++ /dev/null
@@ -1,200 +0,0 @@
-[[Back]](..)
-
-# S2T Example: Speech Translation (ST) on Multilingual TEDx
-
-[Multilingual TEDx](https://arxiv.org/abs/2102.01757) is a multilingual corpus for speech recognition and
-speech translation. The data is derived from TEDx talks in 8 source languages
-with translations to a subset of 5 target languages.
-
-## Data Preparation
-[Download](http://openslr.org/100/) and unpack Multilingual TEDx data to a path
-`${MTEDX_ROOT}/${LANG_PAIR}`, then preprocess it with
-```bash
-# additional Python packages for S2T data processing/model training
-pip install pandas torchaudio soundfile sentencepiece
-
-# Generate TSV manifests, features, vocabulary
-# and configuration for each language
-python examples/speech_to_text/prep_mtedx_data.py \
-  --data-root ${MTEDX_ROOT} --task asr \
-  --vocab-type unigram --vocab-size 1000
-python examples/speech_to_text/prep_mtedx_data.py \
-  --data-root ${MTEDX_ROOT} --task st \
-  --vocab-type unigram --vocab-size 1000
-
-# Add vocabulary and configuration for joint data
-# (based on the manifests and features generated above)
-python examples/speech_to_text/prep_mtedx_data.py \
-  --data-root ${MTEDX_ROOT} --task asr --joint \
-  --vocab-type unigram --vocab-size 8000
-python examples/speech_to_text/prep_mtedx_data.py \
-  --data-root ${MTEDX_ROOT} --task st --joint \
-  --vocab-type unigram --vocab-size 8000
-```
-The generated files (manifest, features, vocabulary and data configuration) will be added to
-`${MTEDX_ROOT}/${LANG_PAIR}` (per-language data) and `MTEDX_ROOT` (joint data).
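-
-As a quick sanity check before training, the generated TSV manifests can be inspected directly. The sketch below is illustrative only: the relative path `es-es/train_asr.tsv` and the column names (`id`, `audio`, `n_frames`, `tgt_text`) are assumptions based on the usual fairseq S2T manifest layout and may differ in your setup.
-```python
-import pandas as pd
-
-# Path is an assumption: manifests are written under ${MTEDX_ROOT}/${LANG_PAIR}/
-manifest = pd.read_csv("es-es/train_asr.tsv", sep="\t")
-
-# Basic sanity checks: number of utterances and available columns
-print(len(manifest), "utterances")
-print(list(manifest.columns))
-
-# Peek at a few target transcripts if the expected column is present
-if "tgt_text" in manifest.columns:
-    print(manifest["tgt_text"].head())
-```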
- - -## ASR -#### Training -Spanish as example: -```bash -fairseq-train ${MTEDX_ROOT}/es-es \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset valid_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 -``` -For joint model (using ASR data from all 8 languages): -```bash -fairseq-train ${MTEDX_ROOT} \ - --config-yaml config_asr.yaml \ - --train-subset train_es-es_asr,train_fr-fr_asr,train_pt-pt_asr,train_it-it_asr,train_ru-ru_asr,train_el-el_asr,train_ar-ar_asr,train_de-de_asr \ - --valid-subset valid_es-es_asr,valid_fr-fr_asr,valid_pt-pt_asr,valid_it-it_asr,valid_ru-ru_asr,valid_el-el_asr,valid_ar-ar_asr,valid_de-de_asr \ - --save-dir ${MULTILINGUAL_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 \ - --ignore-prefix-size 1 -``` -where `MULTILINGUAL_ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs -with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. 
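-
-To make `--ignore-prefix-size 1` concrete, the toy sketch below shows how a prepended language-ID token can be dropped from the loss computation. This is only an illustration of the idea, not fairseq's actual label-smoothed cross-entropy criterion; all shapes and values here are made up.
-```python
-import torch
-import torch.nn.functional as F
-
-ignore_prefix_size = 1  # number of leading target positions to exclude (the language-ID token)
-
-# Toy example: batch of 2 sequences, 5 target positions, vocabulary of 8
-logits = torch.randn(2, 5, 8)          # one distribution per target position
-targets = torch.randint(0, 8, (2, 5))  # column 0 plays the role of the language-ID token
-
-# Slice off the prefix before computing the loss, mirroring --ignore-prefix-size
-logits = logits[:, ignore_prefix_size:, :]
-targets = targets[:, ignore_prefix_size:]
-
-loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
-print(loss.item())
-```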
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -fairseq-generate ${MTEDX_ROOT}/es-es \ - --config-yaml config_asr.yaml --gen-subset test --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe - -# For models trained on joint data -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -for LANG in es fr pt it ru el ar de; do - fairseq-generate ${MTEDX_ROOT} \ - --config-yaml config_asr.yaml --gen-subset test_${LANG}-${LANG}_asr --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 40000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe -done -``` -#### Results -| Data | --arch | Params | Es | Fr | Pt | It | Ru | El | Ar | De | -|--------------|--------------------|--------|------|------|------|------|------|-------|-------|-------| -| Monolingual | s2t_transformer_xs | 10M | 46.4 | 45.6 | 54.8 | 48.0 | 74.7 | 109.5 | 104.4 | 111.1 | - - -## ST -#### Training -Es-En as example: -```bash -fairseq-train ${MTEDX_ROOT}/es-en \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset valid_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 -``` -For multilingual model (all 12 directions): -```bash -fairseq-train ${MTEDX_ROOT} \ - --config-yaml config_st.yaml \ - --train-subset train_el-en_st,train_es-en_st,train_es-fr_st,train_es-it_st,train_es-pt_st,train_fr-en_st,train_fr-es_st,train_fr-pt_st,train_it-en_st,train_it-es_st,train_pt-en_st,train_pt-es_st,train_ru-en_st \ - --valid-subset valid_el-en_st,valid_es-en_st,valid_es-fr_st,valid_es-it_st,valid_es-pt_st,valid_fr-en_st,valid_fr-es_st,valid_fr-pt_st,valid_it-en_st,valid_it-es_st,valid_pt-en_st,valid_pt-es_st,valid_ru-en_st \ - --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 \ - --ignore-prefix-size 1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} -``` -where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the checkpoint root path. The ST encoder is pre-trained by ASR -for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. 
We set -`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the `test` split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -fairseq-generate ${MTEDX_ROOT}/es-en \ - --config-yaml config_st.yaml --gen-subset test --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu --remove-bpe - -# For multilingual models -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -for LANGPAIR in es-en es-fr es-pt fr-en fr-es fr-pt pt-en pt-es it-en it-es ru-en el-en; do - fairseq-generate ${MTEDX_ROOT} \ - --config-yaml config_st.yaml --gen-subset test_${LANGPAIR}_st --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 40000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring sacrebleu --remove-bpe -done -``` -For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`. - -#### Results -| Data | --arch | Params | Es-En | Es-Pt | Es-Fr | Fr-En | Fr-Es | Fr-Pt | Pt-En | Pt-Es | It-En | It-Es | Ru-En | El-En | -|--------------|--------------------|-----|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| -| Bilingual | s2t_transformer_xs | 10M | 7.0 | 12.2 | 1.7 | 8.9 | 10.6 | 7.9 | 8.1 | 8.7 | 6.4 | 1.0 | 0.7 | 0.6 | -| Multilingual | s2t_transformer_s | 31M | 12.3 | 17.4 | 6.1 | 12.0 | 13.6 | 13.2 | 12.0 | 13.7 | 10.7 | 13.1 | 0.6 | 0.8 | - - -## Citation -Please cite as: -``` -@misc{salesky2021mtedx, - title={Multilingual TEDx Corpus for Speech Recognition and Translation}, - author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post}, - year={2021}, -} - -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` - -[[Back]](..) 
diff --git a/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/README.md b/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/README.md deleted file mode 100644 index 0d3edd57c12eddab746fded351ae2e0a4774ad95..0000000000000000000000000000000000000000 --- a/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtube Thumbnail CTR Analyzer -emoji: 🐠 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/spatial_transform.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/spatial_transform.py deleted file mode 100644 index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/modules/spatial_transform.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from kornia.geometry.transform import rotate - - -class LearnableSpatialTransformWrapper(nn.Module): - def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True): - super().__init__() - self.impl = impl - self.angle = torch.rand(1) * angle_init_range - if train_angle: - self.angle = nn.Parameter(self.angle, requires_grad=True) - self.pad_coef = pad_coef - - def forward(self, x): - if torch.is_tensor(x): - return self.inverse_transform(self.impl(self.transform(x)), x) - elif isinstance(x, tuple): - x_trans = tuple(self.transform(elem) for elem in x) - y_trans = self.impl(x_trans) - return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x)) - else: - raise ValueError(f'Unexpected input type {type(x)}') - - def transform(self, x): - height, width = x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect') - x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded)) - return x_padded_rotated - - def inverse_transform(self, y_padded_rotated, orig_x): - height, width = orig_x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - - y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated)) - y_height, y_width = y_padded.shape[2:] - y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w] - return y - - -if __name__ == '__main__': - layer = LearnableSpatialTransformWrapper(nn.Identity()) - x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float() - y = layer(x) - assert x.shape == y.shape - assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1]) - print('all ok') diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/eval.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/eval.go deleted file mode 100644 index e436b4f0603ea02fa220b4dea04f888e79329bf6..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/eval.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/bindings.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/bindings.go deleted file mode 100644 index 
fbe8101c9fac9707251087e7f21d2661fd488d37..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/bindings.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-8.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-8.go deleted file mode 100644 index 52ea17c924568633638cb4e62b8e38f9a9d7aadb..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-8.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/uri.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/uri.go deleted file mode 100644 index ae68b3de9e082961de57dcf1a9a5f6e61230723b..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/web/uri.go and /dev/null differ diff --git a/spaces/Paulraj916/paulraj916/newScrapJs.py b/spaces/Paulraj916/paulraj916/newScrapJs.py deleted file mode 100644 index 758804eb4c04c545e5e29791064fe17062e43bfb..0000000000000000000000000000000000000000 --- a/spaces/Paulraj916/paulraj916/newScrapJs.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import requests -from urllib.parse import urlparse, urljoin -from bs4 import BeautifulSoup - -class NewScrapJs: - def __init__(self, link): - # Replace 'https://example.com' with the website you want to scrape - self.url = link - - def scrap_js(self): - # Send a GET request to the website and retrieve the content - response = requests.get(self.url) - if response.status_code == 200: - content = response.text - else: - print(f"Failed to fetch content from {self.url}") - exit() - - # Extract JavaScript file URLs from the webpage - js_urls = [script['src'] for script in BeautifulSoup(content, 'html.parser').find_all('script', src=True)] - - # Create a folder to store the downloaded JavaScript files - output_folder = 'output' # Choose a folder name you have write permissions for - if not os.path.exists(output_folder): - os.makedirs(output_folder) - - # Download and save each JavaScript file - for js_url in js_urls: - # Convert relative URLs to absolute URLs - js_url = urljoin(self.url, js_url) - - try: - # Check if the URL ends with ".js" - if not js_url.endswith(".js"): - # Append ".js" to the URL if it doesn't end with it - js_url += ".js" - - js_content = requests.get(js_url).text - - # Get the path to the JavaScript file - parsed_url = urlparse(js_url) - path = parsed_url.path - filename = os.path.join(output_folder, path.strip('/')) - - # Create subdirectories if needed - os.makedirs(os.path.dirname(filename), exist_ok=True) - - # Save the JavaScript content to the file - with open(filename, 'w', encoding='utf-8') as file: - file.write(js_content) - - print(f"Downloaded: {js_url}") - except requests.exceptions.MissingSchema: - print(f"Skipping download of {js_url} (Invalid URL)") - except requests.exceptions.RequestException as e: - print(f"Failed to download {js_url}: {e}") - except OSError as e: - print(f"Failed to save {js_url}: {e}") - - print("JavaScript files downloaded and saved successfully.") diff --git a/spaces/Pearx/ChatGPT-Assistant/set_context.py b/spaces/Pearx/ChatGPT-Assistant/set_context.py deleted file mode 100644 index 2cb65d0c122054a33f9d00e298a978c85efd6c7d..0000000000000000000000000000000000000000 --- a/spaces/Pearx/ChatGPT-Assistant/set_context.py 
+++ /dev/null @@ -1,64 +0,0 @@ -set_context = { - "英语学术润色": - "Below is a paragraph from an academic paper. Polish the writing to meet the academic style, improve the " - "spelling, grammar, clarity, concision and overall readability." - "When necessary, rewrite the whole sentence. Furthermore, list all modification and explain the reasons to do " - "so in markdown table.", - - '中文学术润色': - "在这次会话中,你将作为一名中文学术论文写作改进助理。你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性。" - "同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。", - - '查找语法错误': - r"Can you help me ensure that the grammar and the spelling is correct? " + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? | How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. " - r"You need to report all grammar and spelling mistakes as the example before.", - - '学术中英互译': - "I want you to act as a scientific English-Chinese translator, I will provide you with some paragraphs in one " - "language and your task is to accurately and academically translate the paragraphs only into the other " - "language." - "Do not repeat the original provided paragraphs after translation. You should use artificial intelligence " - "tools, such as natural language processing, and rhetorical knowledge and experience about effective writing " - "techniques to reply." - "I'll give you my paragraphs as follows, tell me what language it is written in, and then translate.", - - '英语交流老师': - "I want you to act as a spoken English teacher and improver. I will speak to you in English and you will " - "reply to me in English to practice my spoken English. I want you to keep your reply neat, limiting the reply " - "to 100 words. I want you to strictly correct my grammar mistakes, typos, and factual errors. I want you to " - "ask me a question in your reply.Remember, I want you to strictly correct my grammar mistakes, typos, " - "and factual errors. Now let's start practicing.", - - '英文翻译与改进': - "在这次会话中,我想让你充当英语翻译员、拼写纠正员和改进员。我会用任何语言与你交谈,你会检测语言,并在更正和改进我的句子后用英语回答。" - "我希望你用更优美优雅的高级英语单词和句子来替换我使用的简单单词和句子。保持相同的意思,但使它们更文艺。我要你只回复更正、改进,不要写任何解释。", - - '寻找网络图片': - '我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,' - '然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。' - '现在,请按以下描述给我发送图片:', - - '数据检索助理': - "在此次聊天中,你将担任数据检索助理。接下来我会发送数据名称,你告诉我在哪里可以获取到相关数据,并说明如何获取,数据来源要尽量丰富。", - - '充当Python解释器': - 'I want you to act like a Python interpreter. I will give you Python code, and you will execute it. Do not ' - 'provide any explanations. Do not respond with anything except the output of the code.', - - '正则表达式生成器': - "I want you to act as a regex generator. Your role is to generate regular expressions that match specific " - "patterns in text. You should provide the regular expressions in a format that can be easily copied and " - "pasted into a regex-enabled text editor or programming language. 
Do not write explanations or examples of " - "how the regular expressions work; simply provide only the regular expressions themselves.", -} diff --git a/spaces/PepijnvB/KappaNeuro-salomon-van-ruysdael-style/app.py b/spaces/PepijnvB/KappaNeuro-salomon-van-ruysdael-style/app.py deleted file mode 100644 index 12720f5c2689f773b9e9033d1ff0eeeefb3078b3..0000000000000000000000000000000000000000 --- a/spaces/PepijnvB/KappaNeuro-salomon-van-ruysdael-style/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/KappaNeuro/salomon-van-ruysdael-style").launch() \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/refexp.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/refexp.py deleted file mode 100644 index 7e45ef30a495d1be17691bd78373470409a6df0f..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/refexp.py +++ /dev/null @@ -1,88 +0,0 @@ -import copy -from collections import defaultdict -from pathlib import Path - -import torch -import torch.utils.data - -import maskrcnn_benchmark.utils.dist as dist -from maskrcnn_benchmark.layers.set_loss import generalized_box_iou - -from .modulated_coco import ModulatedDataset - - -class RefExpDataset(ModulatedDataset): - pass - - -class RefExpEvaluator(object): - def __init__(self, refexp_gt, iou_types, k=(1, 5, 10), thresh_iou=0.5): - assert isinstance(k, (list, tuple)) - refexp_gt = copy.deepcopy(refexp_gt) - self.refexp_gt = refexp_gt - self.iou_types = iou_types - self.img_ids = self.refexp_gt.imgs.keys() - self.predictions = {} - self.k = k - self.thresh_iou = thresh_iou - - def accumulate(self): - pass - - def update(self, predictions): - self.predictions.update(predictions) - - def synchronize_between_processes(self): - all_predictions = dist.all_gather(self.predictions) - merged_predictions = {} - for p in all_predictions: - merged_predictions.update(p) - self.predictions = merged_predictions - - def summarize(self): - if dist.is_main_process(): - dataset2score = { - "refcoco": {k: 0.0 for k in self.k}, - "refcoco+": {k: 0.0 for k in self.k}, - "refcocog": {k: 0.0 for k in self.k}, - } - dataset2count = {"refcoco": 0.0, "refcoco+": 0.0, "refcocog": 0.0} - for image_id in self.img_ids: - ann_ids = self.refexp_gt.getAnnIds(imgIds=image_id) - assert len(ann_ids) == 1 - img_info = self.refexp_gt.loadImgs(image_id)[0] - - target = self.refexp_gt.loadAnns(ann_ids[0]) - prediction = self.predictions[image_id] - assert prediction is not None - sorted_scores_boxes = sorted( - zip(prediction["scores"].tolist(), prediction["boxes"].tolist()), reverse=True - ) - sorted_scores, sorted_boxes = zip(*sorted_scores_boxes) - sorted_boxes = torch.cat([torch.as_tensor(x).view(1, 4) for x in sorted_boxes]) - target_bbox = target[0]["bbox"] - converted_bbox = [ - target_bbox[0], - target_bbox[1], - target_bbox[2] + target_bbox[0], - target_bbox[3] + target_bbox[1], - ] - giou = generalized_box_iou(sorted_boxes, torch.as_tensor(converted_bbox).view(-1, 4)) - for k in self.k: - if max(giou[:k]) >= self.thresh_iou: - dataset2score[img_info["dataset_name"]][k] += 1.0 - dataset2count[img_info["dataset_name"]] += 1.0 - - for key, value in dataset2score.items(): - for k in self.k: - try: - value[k] /= dataset2count[key] - except: - pass - results = {} - for key, value in dataset2score.items(): - results[key] = sorted([v for k, v in value.items()]) - print(f" Dataset: {key} - 
Precision @ 1, 5, 10: {results[key]} \n") - - return results - return None diff --git a/spaces/RMeli/gnina-torch/README.md b/spaces/RMeli/gnina-torch/README.md deleted file mode 100644 index 744ace560f10fa19f02ea9819d0cb7629b6468c7..0000000000000000000000000000000000000000 --- a/spaces/RMeli/gnina-torch/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Gnina Torch -emoji: 📊 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit ---- - -# GninaTorch @ Hugginface - -Scoring protein-ligand complexes using [Gnina](https://github.com/gnina/gnina)'s scoring function via [gnina-torch](https://github.com/RMeli/gnina-torch) on [Hugging Face Spaces](https://huggingface.co/spaces/RMeli/gnina-torch). - -[https://huggingface.co/spaces/RMeli/gnina-torch](https://huggingface.co/spaces/RMeli/gnina-torch) - -## Notes - -[Hugging Face Spaces](https://huggingface.co/docs/hub/spaces) work as `git` repositories. To keep everything on GitHub but publish on Hugging Face, add the Hugging Face Space repository as a remote repository: - -```bash -git remote add hf https://huggingface.co/spaces/RMeli/gnina-torch -``` -## Acknowledgements - -* @duerrsimon for [Visualize proteins on Hugging Face Spaces](https://huggingface.co/blog/spaces_3dmoljs) \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/wheel.py deleted file mode 100644 index b0d2fc9eadb9349c0b8e69b58351648f3e54dfb5..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/wheel.py +++ /dev/null @@ -1,37 +0,0 @@ -import logging -import os -from typing import Optional - -from pip._vendor.pep517.wrappers import Pep517HookCaller - -from pip._internal.utils.subprocess import runner_with_spinner_message - -logger = logging.getLogger(__name__) - - -def build_wheel_pep517( - name: str, - backend: Pep517HookCaller, - metadata_directory: str, - tempd: str, -) -> Optional[str]: - """Build one InstallRequirement using the PEP 517 build process. - - Returns path to wheel if successfully built. Otherwise, returns None. - """ - assert metadata_directory is not None - try: - logger.debug("Destination directory: %s", tempd) - - runner = runner_with_spinner_message( - f"Building wheel for {name} (pyproject.toml)" - ) - with backend.subprocess_runner(runner): - wheel_name = backend.build_wheel( - tempd, - metadata_directory=metadata_directory, - ) - except Exception: - logger.error("Failed building wheel for %s", name) - return None - return os.path.join(tempd, wheel_name) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py deleted file mode 100644 index c326e80dd117458ff6e71741ca57359629b05ae4..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/socks.py +++ /dev/null @@ -1,216 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module contains provisional support for SOCKS proxies from within -urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and -SOCKS5. To enable its functionality, either install PySocks or install this -module with the ``socks`` extra. 
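-
-A minimal usage sketch (shown against the standalone ``urllib3`` package; this
-vendored copy exists for pip's internal use, and the proxy address below is
-only an example):
-
-.. code-block:: python
-
-    from urllib3.contrib.socks import SOCKSProxyManager
-
-    # Route requests from this manager through a SOCKS5 proxy,
-    # resolving DNS on the proxy side ("socks5h").
-    proxy = SOCKSProxyManager("socks5h://localhost:1080/")
-    response = proxy.request("GET", "http://example.com/")
-    print(response.status)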
- -The SOCKS implementation supports the full range of urllib3 features. It also -supports the following SOCKS features: - -- SOCKS4A (``proxy_url='socks4a://...``) -- SOCKS4 (``proxy_url='socks4://...``) -- SOCKS5 with remote DNS (``proxy_url='socks5h://...``) -- SOCKS5 with local DNS (``proxy_url='socks5://...``) -- Usernames and passwords for the SOCKS proxy - -.. note:: - It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in - your ``proxy_url`` to ensure that DNS resolution is done from the remote - server instead of client-side when connecting to a domain name. - -SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5 -supports IPv4, IPv6, and domain names. - -When connecting to a SOCKS4 proxy the ``username`` portion of the ``proxy_url`` -will be sent as the ``userid`` section of the SOCKS request: - -.. code-block:: python - - proxy_url="socks4a://@proxy-host" - -When connecting to a SOCKS5 proxy the ``username`` and ``password`` portion -of the ``proxy_url`` will be sent as the username/password to authenticate -with the proxy: - -.. code-block:: python - - proxy_url="socks5h://:@proxy-host" - -""" -from __future__ import absolute_import - -try: - import socks -except ImportError: - import warnings - - from ..exceptions import DependencyWarning - - warnings.warn( - ( - "SOCKS support in urllib3 requires the installation of optional " - "dependencies: specifically, PySocks. For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/contrib.html#socks-proxies" - ), - DependencyWarning, - ) - raise - -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from ..connection import HTTPConnection, HTTPSConnection -from ..connectionpool import HTTPConnectionPool, HTTPSConnectionPool -from ..exceptions import ConnectTimeoutError, NewConnectionError -from ..poolmanager import PoolManager -from ..util.url import parse_url - -try: - import ssl -except ImportError: - ssl = None - - -class SOCKSConnection(HTTPConnection): - """ - A plain-text HTTP connection that connects via a SOCKS proxy. - """ - - def __init__(self, *args, **kwargs): - self._socks_options = kwargs.pop("_socks_options") - super(SOCKSConnection, self).__init__(*args, **kwargs) - - def _new_conn(self): - """ - Establish a new connection via the SOCKS proxy. - """ - extra_kw = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = socks.create_connection( - (self.host, self.port), - proxy_type=self._socks_options["socks_version"], - proxy_addr=self._socks_options["proxy_host"], - proxy_port=self._socks_options["proxy_port"], - proxy_username=self._socks_options["username"], - proxy_password=self._socks_options["password"], - proxy_rdns=self._socks_options["rdns"], - timeout=self.timeout, - **extra_kw - ) - - except SocketTimeout: - raise ConnectTimeoutError( - self, - "Connection to %s timed out. (connect timeout=%s)" - % (self.host, self.timeout), - ) - - except socks.ProxyError as e: - # This is fragile as hell, but it seems to be the only way to raise - # useful errors here. - if e.socket_err: - error = e.socket_err - if isinstance(error, SocketTimeout): - raise ConnectTimeoutError( - self, - "Connection to %s timed out. 
(connect timeout=%s)" - % (self.host, self.timeout), - ) - else: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % error - ) - else: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - except SocketError as e: # Defensive: PySocks should catch all these. - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - return conn - - -# We don't need to duplicate the Verified/Unverified distinction from -# urllib3/connection.py here because the HTTPSConnection will already have been -# correctly set to either the Verified or Unverified form by that module. This -# means the SOCKSHTTPSConnection will automatically be the correct type. -class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection): - pass - - -class SOCKSHTTPConnectionPool(HTTPConnectionPool): - ConnectionCls = SOCKSConnection - - -class SOCKSHTTPSConnectionPool(HTTPSConnectionPool): - ConnectionCls = SOCKSHTTPSConnection - - -class SOCKSProxyManager(PoolManager): - """ - A version of the urllib3 ProxyManager that routes connections via the - defined SOCKS proxy. - """ - - pool_classes_by_scheme = { - "http": SOCKSHTTPConnectionPool, - "https": SOCKSHTTPSConnectionPool, - } - - def __init__( - self, - proxy_url, - username=None, - password=None, - num_pools=10, - headers=None, - **connection_pool_kw - ): - parsed = parse_url(proxy_url) - - if username is None and password is None and parsed.auth is not None: - split = parsed.auth.split(":") - if len(split) == 2: - username, password = split - if parsed.scheme == "socks5": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = False - elif parsed.scheme == "socks5h": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = True - elif parsed.scheme == "socks4": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = False - elif parsed.scheme == "socks4a": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = True - else: - raise ValueError("Unable to determine SOCKS version from %s" % proxy_url) - - self.proxy_url = proxy_url - - socks_options = { - "socks_version": socks_version, - "proxy_host": parsed.host, - "proxy_port": parsed.port, - "username": username, - "password": password, - "rdns": rdns, - } - connection_pool_kw["_socks_options"] = socks_options - - super(SOCKSProxyManager, self).__init__( - num_pools, headers, **connection_pool_kw - ) - - self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/utils/__init__.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_matcher.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_matcher.py deleted file mode 100644 index 458a5e3141c0ad27c0ba665dbd72d5ce0c1c9a86..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/line_matcher.py +++ /dev/null @@ -1,316 +0,0 @@ -""" -Implements the full pipeline from raw images to line matches. 
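The pipeline chains a CNN backbone (junction, heatmap and descriptor heads), a
heatmap-based line segment detector, and a Needleman-Wunsch style descriptor
matcher, as wired together in the `LineMatcher` class below.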
-""" -import time -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -from torch.nn.functional import softmax - -from .model_util import get_model -from .loss import get_loss_and_weights -from .metrics import super_nms -from .line_detection import LineSegmentDetectionModule -from .line_matching import WunschLineMatcher -from ..train import convert_junc_predictions -from ..misc.train_utils import adapt_checkpoint -from .line_detector import line_map_to_segments - - -class LineMatcher(object): - """Full line matcher including line detection and matching - with the Needleman-Wunsch algorithm.""" - - def __init__( - self, - model_cfg, - ckpt_path, - device, - line_detector_cfg, - line_matcher_cfg, - multiscale=False, - scales=[1.0, 2.0], - ): - # Get loss weights if dynamic weighting - _, loss_weights = get_loss_and_weights(model_cfg, device) - self.device = device - - # Initialize the cnn backbone - self.model = get_model(model_cfg, loss_weights) - checkpoint = torch.load(ckpt_path, map_location=self.device) - checkpoint = adapt_checkpoint(checkpoint["model_state_dict"]) - self.model.load_state_dict(checkpoint) - self.model = self.model.to(self.device) - self.model = self.model.eval() - - self.grid_size = model_cfg["grid_size"] - self.junc_detect_thresh = model_cfg["detection_thresh"] - self.max_num_junctions = model_cfg.get("max_num_junctions", 300) - - # Initialize the line detector - self.line_detector = LineSegmentDetectionModule(**line_detector_cfg) - self.multiscale = multiscale - self.scales = scales - - # Initialize the line matcher - self.line_matcher = WunschLineMatcher(**line_matcher_cfg) - - # Print some debug messages - for key, val in line_detector_cfg.items(): - print(f"[Debug] {key}: {val}") - # print("[Debug] detect_thresh: %f" % (line_detector_cfg["detect_thresh"])) - # print("[Debug] num_samples: %d" % (line_detector_cfg["num_samples"])) - - # Perform line detection and descriptor inference on a single image - def line_detection( - self, input_image, valid_mask=None, desc_only=False, profile=False - ): - # Restrict input_image to 4D torch tensor - if (not len(input_image.shape) == 4) or ( - not isinstance(input_image, torch.Tensor) - ): - raise ValueError("[Error] the input image should be a 4D torch tensor") - - # Move the input to corresponding device - input_image = input_image.to(self.device) - - # Forward of the CNN backbone - start_time = time.time() - with torch.no_grad(): - net_outputs = self.model(input_image) - - outputs = {"descriptor": net_outputs["descriptors"]} - - if not desc_only: - junc_np = convert_junc_predictions( - net_outputs["junctions"], - self.grid_size, - self.junc_detect_thresh, - self.max_num_junctions, - ) - if valid_mask is None: - junctions = np.where(junc_np["junc_pred_nms"].squeeze()) - else: - junctions = np.where(junc_np["junc_pred_nms"].squeeze() * valid_mask) - junctions = np.concatenate( - [junctions[0][..., None], junctions[1][..., None]], axis=-1 - ) - - if net_outputs["heatmap"].shape[1] == 2: - # Convert to single channel directly from here - heatmap = ( - softmax(net_outputs["heatmap"], dim=1)[:, 1:, :, :] - .cpu() - .numpy() - .transpose(0, 2, 3, 1) - ) - else: - heatmap = ( - torch.sigmoid(net_outputs["heatmap"]) - .cpu() - .numpy() - .transpose(0, 2, 3, 1) - ) - heatmap = heatmap[0, :, :, 0] - - # Run the line detector. 
- line_map, junctions, heatmap = self.line_detector.detect( - junctions, heatmap, device=self.device - ) - if isinstance(line_map, torch.Tensor): - line_map = line_map.cpu().numpy() - if isinstance(junctions, torch.Tensor): - junctions = junctions.cpu().numpy() - outputs["heatmap"] = heatmap.cpu().numpy() - outputs["junctions"] = junctions - - # If it's a line map with multiple detect_thresh and inlier_thresh - if len(line_map.shape) > 2: - num_detect_thresh = line_map.shape[0] - num_inlier_thresh = line_map.shape[1] - line_segments = [] - for detect_idx in range(num_detect_thresh): - line_segments_inlier = [] - for inlier_idx in range(num_inlier_thresh): - line_map_tmp = line_map[detect_idx, inlier_idx, :, :] - line_segments_tmp = line_map_to_segments( - junctions, line_map_tmp - ) - line_segments_inlier.append(line_segments_tmp) - line_segments.append(line_segments_inlier) - else: - line_segments = line_map_to_segments(junctions, line_map) - - outputs["line_segments"] = line_segments - - end_time = time.time() - - if profile: - outputs["time"] = end_time - start_time - - return outputs - - # Perform line detection and descriptor inference at multiple scales - def multiscale_line_detection( - self, - input_image, - valid_mask=None, - desc_only=False, - profile=False, - scales=[1.0, 2.0], - aggregation="mean", - ): - # Restrict input_image to 4D torch tensor - if (not len(input_image.shape) == 4) or ( - not isinstance(input_image, torch.Tensor) - ): - raise ValueError("[Error] the input image should be a 4D torch tensor") - - # Move the input to corresponding device - input_image = input_image.to(self.device) - img_size = input_image.shape[2:4] - desc_size = tuple(np.array(img_size) // 4) - - # Run the inference at multiple image scales - start_time = time.time() - junctions, heatmaps, descriptors = [], [], [] - for s in scales: - # Resize the image - resized_img = F.interpolate(input_image, scale_factor=s, mode="bilinear") - - # Forward of the CNN backbone - with torch.no_grad(): - net_outputs = self.model(resized_img) - - descriptors.append( - F.interpolate( - net_outputs["descriptors"], size=desc_size, mode="bilinear" - ) - ) - - if not desc_only: - junc_prob = convert_junc_predictions( - net_outputs["junctions"], self.grid_size - )["junc_pred"] - junctions.append( - cv2.resize( - junc_prob.squeeze(), - (img_size[1], img_size[0]), - interpolation=cv2.INTER_LINEAR, - ) - ) - - if net_outputs["heatmap"].shape[1] == 2: - # Convert to single channel directly from here - heatmap = softmax(net_outputs["heatmap"], dim=1)[:, 1:, :, :] - else: - heatmap = torch.sigmoid(net_outputs["heatmap"]) - heatmaps.append(F.interpolate(heatmap, size=img_size, mode="bilinear")) - - # Aggregate the results - if aggregation == "mean": - # Aggregation through the mean activation - descriptors = torch.stack(descriptors, dim=0).mean(0) - else: - # Aggregation through the max activation - descriptors = torch.stack(descriptors, dim=0).max(0)[0] - outputs = {"descriptor": descriptors} - - if not desc_only: - if aggregation == "mean": - junctions = np.stack(junctions, axis=0).mean(0)[None] - heatmap = torch.stack(heatmaps, dim=0).mean(0)[0, 0, :, :] - heatmap = heatmap.cpu().numpy() - else: - junctions = np.stack(junctions, axis=0).max(0)[None] - heatmap = torch.stack(heatmaps, dim=0).max(0)[0][0, 0, :, :] - heatmap = heatmap.cpu().numpy() - - # Extract junctions - junc_pred_nms = super_nms( - junctions[..., None], - self.grid_size, - self.junc_detect_thresh, - self.max_num_junctions, - ) - if valid_mask is None: - 
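                # No validity mask was given: keep every junction that survived NMS.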
junctions = np.where(junc_pred_nms.squeeze()) - else: - junctions = np.where(junc_pred_nms.squeeze() * valid_mask) - junctions = np.concatenate( - [junctions[0][..., None], junctions[1][..., None]], axis=-1 - ) - - # Run the line detector. - line_map, junctions, heatmap = self.line_detector.detect( - junctions, heatmap, device=self.device - ) - if isinstance(line_map, torch.Tensor): - line_map = line_map.cpu().numpy() - if isinstance(junctions, torch.Tensor): - junctions = junctions.cpu().numpy() - outputs["heatmap"] = heatmap.cpu().numpy() - outputs["junctions"] = junctions - - # If it's a line map with multiple detect_thresh and inlier_thresh - if len(line_map.shape) > 2: - num_detect_thresh = line_map.shape[0] - num_inlier_thresh = line_map.shape[1] - line_segments = [] - for detect_idx in range(num_detect_thresh): - line_segments_inlier = [] - for inlier_idx in range(num_inlier_thresh): - line_map_tmp = line_map[detect_idx, inlier_idx, :, :] - line_segments_tmp = line_map_to_segments( - junctions, line_map_tmp - ) - line_segments_inlier.append(line_segments_tmp) - line_segments.append(line_segments_inlier) - else: - line_segments = line_map_to_segments(junctions, line_map) - - outputs["line_segments"] = line_segments - - end_time = time.time() - - if profile: - outputs["time"] = end_time - start_time - - return outputs - - def __call__(self, images, valid_masks=[None, None], profile=False): - # Line detection and descriptor inference on both images - if self.multiscale: - forward_outputs = [ - self.multiscale_line_detection( - images[0], valid_masks[0], profile=profile, scales=self.scales - ), - self.multiscale_line_detection( - images[1], valid_masks[1], profile=profile, scales=self.scales - ), - ] - else: - forward_outputs = [ - self.line_detection(images[0], valid_masks[0], profile=profile), - self.line_detection(images[1], valid_masks[1], profile=profile), - ] - line_seg1 = forward_outputs[0]["line_segments"] - line_seg2 = forward_outputs[1]["line_segments"] - desc1 = forward_outputs[0]["descriptor"] - desc2 = forward_outputs[1]["descriptor"] - - # Match the lines in both images - start_time = time.time() - matches = self.line_matcher.forward(line_seg1, line_seg2, desc1, desc2) - end_time = time.time() - - outputs = {"line_segments": [line_seg1, line_seg2], "matches": matches} - - if profile: - outputs["line_detection_time"] = ( - forward_outputs[0]["time"] + forward_outputs[1]["time"] - ) - outputs["line_matching_time"] = end_time - start_time - - return outputs diff --git a/spaces/RitaParadaRamos/SmallCapDemo/src/vision_encoder_decoder.py b/spaces/RitaParadaRamos/SmallCapDemo/src/vision_encoder_decoder.py deleted file mode 100644 index 3931256154479662216e09141e4fcbbb407487a2..0000000000000000000000000000000000000000 --- a/spaces/RitaParadaRamos/SmallCapDemo/src/vision_encoder_decoder.py +++ /dev/null @@ -1,560 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" Classes to support Vision-Encoder-Text-Decoder architectures""" -import timeit - -from typing import Optional - -import torch -from torch import nn -from torch.nn import CrossEntropyLoss -from transformers.configuration_utils import PretrainedConfig -from transformers.modeling_outputs import BaseModelOutput, Seq2SeqLMOutput -from transformers.modeling_utils import PreTrainedModel -#from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from transformers.utils import logging -from transformers.models.auto.configuration_auto import AutoConfig -from transformers.models.auto.modeling_auto import AutoModel, AutoModelForCausalLM -from transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder import VisionEncoderDecoderConfig -import inspect - -from .gpt2 import ThisGPT2LMHeadModel -from .gpt2 import ThisGPT2Config -from .xglm import ThisXGLMForCausalLM -from .xglm import ThisXGLMConfig -from .opt import ThisOPTForCausalLM -from .opt import ThisOPTConfig - -# Copied from transformers.models.encoder_decoder.modeling_encoder_decoder.shift_tokens_right -def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): - """ - Shift input ids one token to the right. - """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - if decoder_start_token_id is None: - raise ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.") - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("Make sure to set the pad_token_id attribute of the model's configuration.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - -logger = logging.get_logger(__name__) - -_CONFIG_FOR_DOC = "SmallCapConfig" - -VISION_ENCODER_DECODER_START_DOCSTRING = r""" - This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model - as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via - [`~AutoModel.from_pretrained`] function and the decoder is loaded via [`~AutoModelForCausalLM.from_pretrained`] - function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream - generative task, like image captioning. - - The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation - tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation - Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi - Zhou, Wei Li, Peter J. Liu. - - Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained - Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical - character recognition (OCR) yields a significant performance improvement. - - After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any - other models (see the examples for more information). - - This model inherits from [`PreTrainedModel`]. 
Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`VisionEncoderDecoderConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -VISION_ENCODER_DECODER_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using a feature extractor (e.g. if you use ViT as the encoder, - you should use [`ViTFeatureExtractor`]). See [`ViTFeatureExtractor.__call__`] for details. - decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Indices of decoder input sequence tokens in the vocabulary. - - Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - - If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see - `past_key_values`). - - For training, `decoder_input_ids` are automatically created by the model by shifting the `labels` to the - right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id`. - decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also - be used by default. - encoder_outputs (`tuple(torch.FloatTensor)`, *optional*): - This tuple must consist of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) - `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor - of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the - decoder. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded - representation. This is useful if you want more control over how to convert `decoder_input_ids` indices - into associated vectors than the model's internal embedding lookup matrix. 
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0, - ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - If set to `True`, the model will return a [`~utils.Seq2SeqLMOutput`] instead of a plain tuple. - kwargs: (*optional*) Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: - - - Without a prefix which will be input as `**encoder_kwargs` for the encoder forward function. - - With a *decoder_* prefix which will be input as `**decoder_kwargs` for the decoder forward function. -""" - -class SmallCapConfig(VisionEncoderDecoderConfig): - model_type = "smallcap" - - def __init__( - self, - **kwargs, - ): - super().__init__(**kwargs) - - -class SmallCap(PreTrainedModel): - r""" - [`VisionEncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with - one of the base vision model classes of the library as encoder and another one as decoder when created with the - :meth*~transformers.AutoModel.from_pretrained* class method for the encoder and - :meth*~transformers.AutoModelForCausalLM.from_pretrained* class method for the decoder. - """ - config_class = SmallCapConfig - base_model_prefix = "smallcap" - main_input_name = "pixel_values" - - def __init__( - self, - config: Optional[PretrainedConfig] = None, - encoder: Optional[PreTrainedModel] = None, - decoder: Optional[PreTrainedModel] = None, - ): - if config is None and (encoder is None or decoder is None): - raise ValueError("Either a configuration or an encoder and a decoder has to be provided.") - if config is None: - config = SmallCapConfig.from_encoder_decoder_configs(encoder.config, decoder.config) - else: - if not isinstance(config, self.config_class): - raise ValueError(f"Config: {config} has to be of type {self.config_class}") - - if config.decoder.cross_attention_hidden_size is not None: - if config.decoder.cross_attention_hidden_size != config.encoder.hidden_size: - raise ValueError( - "If `cross_attention_hidden_size` is specified in the decoder's configuration, it has to be equal#" - f" to the encoder's `hidden_size`. Got {config.decoder.cross_attention_hidden_size} for" - f" `config.decoder.cross_attention_hidden_size` and {config.encoder.hidden_size} for" - " `config.encoder.hidden_size`." 
- ) - - # initialize with config - # make sure input & output embeddings is not tied - config.tie_word_embeddings = False - super().__init__(config) - - if encoder is None: - encoder = AutoModel.from_config(config.encoder) - - if decoder is None: - decoder = AutoModelForCausalLM.from_config(config.decoder) - - self.encoder = encoder.vision_model - self.encoder.main_input_name = 'pixel_values' - self.decoder = decoder - # make sure that the individual model's config refers to the shared config - # so that the updates to the config will be synced - self.encoder.config = self.config.encoder - self.decoder.config = self.config.decoder - - def get_encoder(self): - return self.encoder - - def get_decoder(self): - return self.decoder - - def get_output_embeddings(self): - return self.decoder.get_output_embeddings() - - def set_output_embeddings(self, new_embeddings): - return self.decoder.set_output_embeddings(new_embeddings) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - # At the moment fast initialization is not supported for composite models - if kwargs.get("_fast_init", False): - logger.warning( - "Fast initialization is currently not supported for VisionEncoderDecoderModel. " - "Falling back to slow initialization..." - ) - kwargs["_fast_init"] = False - return super().from_pretrained(*args, **kwargs) - - @classmethod - def from_encoder_decoder_pretrained( - cls, - encoder_pretrained_model_name_or_path: str = None, - decoder_pretrained_model_name_or_path: str = None, - cross_attention_reduce_factor: int = None, - *model_args, - **kwargs - ) -> PreTrainedModel: - r""" - Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model - checkpoints. - - - The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train - the model, you need to first set it back in training mode with `model.train()`. - - Params: - encoder_pretrained_model_name_or_path (`str`, *optional*): - Information necessary to initiate the image encoder. Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An - example is `google/vit-base-patch16-224-in21k`. - - A path to a *directory* containing model weights saved using - [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - - A path or url to a *tensorflow index checkpoint file* (e.g, `./tf_model/model.ckpt.index`). In - this case, `from_tf` should be set to `True` and a configuration object should be provided as - `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a - PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. - - decoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`): - Information necessary to initiate the text decoder. Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing model weights saved using - [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - - A path or url to a *tensorflow index checkpoint file* (e.g, `./tf_model/model.ckpt.index`). In - this case, `from_tf` should be set to `True` and a configuration object should be provided as - `config` argument. 
This loading path is slower than converting the TensorFlow checkpoint in a - PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. - - model_args (remaining positional arguments, *optional*): - All remaning positional arguments will be passed to the underlying model's `__init__` method. - - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - `output_attentions=True`). - - - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter. - - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter. - - To update the parent model configuration, do not use a prefix for each configuration parameter. - - Behaves differently depending on whether a `config` is provided or automatically loaded. - - Example: - - ```python - >>> from transformers import VisionEncoderDecoderModel - - >>> # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized - >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( - ... "google/vit-base-patch16-224-in21k", "bert-base-uncased" - ... ) - >>> # saving model after fine-tuning - >>> model.save_pretrained("./vit-bert") - >>> # load fine-tuned model - >>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert") - ```""" - - kwargs_encoder = { - argument[len("encoder_") :]: value for argument, value in kwargs.items() if argument.startswith("encoder_") - } - - kwargs_decoder = { - argument[len("decoder_") :]: value for argument, value in kwargs.items() if argument.startswith("decoder_") - } - - # remove encoder, decoder kwargs from kwargs - for key in kwargs_encoder.keys(): - del kwargs["encoder_" + key] - for key in kwargs_decoder.keys(): - del kwargs["decoder_" + key] - - # Load and initialize the encoder and decoder - # The distinction between encoder and decoder at the model level is made - # by the value of the flag `is_decoder` that we need to set correctly. - encoder = kwargs_encoder.pop("model", None) - if encoder is None: - if encoder_pretrained_model_name_or_path is None: - raise ValueError( - "If `encoder_model` is not defined as an argument, a `encoder_pretrained_model_name_or_path` has " - "to be defined." - ) - - if "config" not in kwargs_encoder: - encoder_config, kwargs_encoder = AutoConfig.from_pretrained( - encoder_pretrained_model_name_or_path, **kwargs_encoder, return_unused_kwargs=True - ) - - if encoder_config.is_decoder is True or encoder_config.add_cross_attention is True: - logger.info( - f"Initializing {encoder_pretrained_model_name_or_path} as a encoder model " - "from a decoder model. Cross-attention and casual mask are disabled." - ) - encoder_config.is_decoder = False - encoder_config.add_cross_attention = False - - kwargs_encoder["config"] = encoder_config - - encoder = AutoModel.from_pretrained(encoder_pretrained_model_name_or_path, *model_args, **kwargs_encoder) - - decoder = kwargs_decoder.pop("model", None) - if decoder is None: - if decoder_pretrained_model_name_or_path is None: - raise ValueError( - "If `decoder_model` is not defined as an argument, a `decoder_pretrained_model_name_or_path` has " - "to be defined." 
- ) - - if "config" not in kwargs_decoder: - if "xglm" in decoder_pretrained_model_name_or_path: - decoder_config, kwargs_decoder = ThisXGLMConfig.from_pretrained( - decoder_pretrained_model_name_or_path, **kwargs_decoder, return_unused_kwargs=True - ) - - elif "opt" in decoder_pretrained_model_name_or_path: - decoder_config, kwargs_decoder = ThisOPTConfig.from_pretrained( - decoder_pretrained_model_name_or_path, **kwargs_decoder, return_unused_kwargs=True - ) - - else: - decoder_config, kwargs_decoder = ThisGPT2Config.from_pretrained( - decoder_pretrained_model_name_or_path, **kwargs_decoder, return_unused_kwargs=True - ) - - if decoder_config.is_decoder is False or decoder_config.add_cross_attention is False: - logger.info( - f"Initializing {decoder_pretrained_model_name_or_path} as a decoder model. Cross attention" - f" layers are added to {decoder_pretrained_model_name_or_path} and randomly initialized if" - f" {decoder_pretrained_model_name_or_path}'s architecture allows for cross attention layers." - ) - decoder_config.is_decoder = True - decoder_config.add_cross_attention = True - decoder_config.encoder_hidden_size = encoder.config.vision_config.hidden_size - decoder_config.cross_attention_reduce_factor = cross_attention_reduce_factor - kwargs_decoder["config"] = decoder_config - - if kwargs_decoder["config"].is_decoder is False or kwargs_decoder["config"].add_cross_attention is False: - logger.warning( - f"Decoder model {decoder_pretrained_model_name_or_path} is not initialized as a decoder. " - f"In order to initialize {decoder_pretrained_model_name_or_path} as a decoder, " - "make sure that the attributes `is_decoder` and `add_cross_attention` of `decoder_config` " - "passed to `.from_encoder_decoder_pretrained(...)` are set to `True` or do not pass a " - "`decoder_config` to `.from_encoder_decoder_pretrained(...)`" - ) - - #decoder = AutoModelForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder) - if "xglm" in decoder_pretrained_model_name_or_path: - decoder = ThisXGLMForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder) - - elif "opt" in decoder_pretrained_model_name_or_path: - decoder = ThisOPTForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder) - else: - decoder = ThisGPT2LMHeadModel.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder) - - # instantiate config with corresponding kwargs - config = SmallCapConfig.from_encoder_decoder_configs(encoder.config, decoder.config, **kwargs) - - # make sure input & output embeddings is not tied - config.tie_word_embeddings = False - return cls(encoder=encoder, decoder=decoder, config=config) - - def forward( - self, - pixel_values=None, - decoder_input_ids=None, - decoder_attention_mask=None, - encoder_outputs=None, - past_key_values=None, - decoder_inputs_embeds=None, - labels=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - **kwargs, - ): - r""" - Returns: - - Examples: - - ```python - >>> from transformers import TrOCRProcessor, VisionEncoderDecoderModel - >>> import requests - >>> from PIL import Image - >>> import torch - - >>> processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") - >>> model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") - - >>> # load image from the IAM dataset - >>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" - >>> image = Image.open(requests.get(url, 
stream=True).raw).convert("RGB") - - >>> # training - >>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id - >>> model.config.pad_token_id = processor.tokenizer.pad_token_id - >>> model.config.vocab_size = model.config.decoder.vocab_size - - >>> pixel_values = processor(image, return_tensors="pt").pixel_values - >>> text = "hello world" - >>> labels = processor.tokenizer(text, return_tensors="pt").input_ids - >>> outputs = model(pixel_values=pixel_values, labels=labels) - >>> loss = outputs.loss - - >>> # inference (generation) - >>> generated_ids = model.generate(pixel_values) - >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - ```""" - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - kwargs_encoder = {argument: value for argument, value in kwargs.items() if not argument.startswith("decoder_")} - - kwargs_decoder = { - argument[len("decoder_") :]: value for argument, value in kwargs.items() if argument.startswith("decoder_") - } - if encoder_outputs is None: - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - encoder_outputs = self.encoder( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - **kwargs_encoder, - ) - elif isinstance(encoder_outputs, tuple): - encoder_outputs = BaseModelOutput(*encoder_outputs) - else: - encoder_outputs = BaseModelOutput(encoder_outputs, None) - - encoder_hidden_states = encoder_outputs[0] - - # else: - encoder_attention_mask = None - if (labels is not None) and (decoder_input_ids is None and decoder_inputs_embeds is None): - decoder_input_ids = shift_tokens_right( - labels, self.config.pad_token_id, self.config.decoder_start_token_id - ) - - # Decode - decoder_outputs = self.decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - inputs_embeds=decoder_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - use_cache=use_cache, - past_key_values=past_key_values, - return_dict=return_dict, - **kwargs_decoder, - ) - - # Compute loss independent from decoder (as some shift the logits inside them) - loss = None - if labels is not None: - logits = decoder_outputs.logits if return_dict else decoder_outputs[0] - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1)) - - if not return_dict: - if loss is not None: - return (loss,) + decoder_outputs + encoder_outputs - else: - return decoder_outputs + encoder_outputs - - return Seq2SeqLMOutput( - loss=loss, - logits=decoder_outputs.logits, - past_key_values=decoder_outputs.past_key_values, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor): - return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id) - - def prepare_inputs_for_generation( - self, input_ids, past=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs - ): - decoder_inputs = 
self.decoder.prepare_inputs_for_generation(input_ids, past=past) - decoder_attention_mask = decoder_inputs["attention_mask"] if "attention_mask" in decoder_inputs else None - input_dict = { - "attention_mask": attention_mask, - "decoder_attention_mask": decoder_attention_mask, - "decoder_input_ids": decoder_inputs["input_ids"], - "encoder_outputs": encoder_outputs, - "past_key_values": decoder_inputs["past_key_values"], - "use_cache": use_cache, - } - return input_dict - - def resize_token_embeddings(self, *args, **kwargs): - raise NotImplementedError( - "Resizing the embedding layers via the VisionEncoderDecoderModel directly is not supported.Please use the" - " respective methods of the wrapped decoder object (model.decoder.resize_token_embeddings(...))" - ) - - def _reorder_cache(self, past, beam_idx): - # apply decoder cache reordering here - return self.decoder._reorder_cache(past, beam_idx) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/conv.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/conv.py deleted file mode 100644 index cf54491997a48ac3e7fadc4183ab7bf3e831024c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/conv.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch import nn - -from .registry import CONV_LAYERS - -CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d) -CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d) -CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d) -CONV_LAYERS.register_module('Conv', module=nn.Conv2d) - - -def build_conv_layer(cfg, *args, **kwargs): - """Build convolution layer. - - Args: - cfg (None or dict): The conv layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an conv layer. - args (argument list): Arguments passed to the `__init__` - method of the corresponding conv layer. - kwargs (keyword arguments): Keyword arguments passed to the `__init__` - method of the corresponding conv layer. - - Returns: - nn.Module: Created conv layer. 
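    Example:
        A minimal sketch; the channel sizes and kernel size below are
        illustrative, not defaults:

        >>> layer = build_conv_layer(dict(type='Conv2d'), 16, 32, kernel_size=3, padding=1)
        >>> assert isinstance(layer, nn.Conv2d)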
- """ - if cfg is None: - cfg_ = dict(type='Conv2d') - else: - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in CONV_LAYERS: - raise KeyError(f'Unrecognized norm type {layer_type}') - else: - conv_layer = CONV_LAYERS.get(layer_type) - - layer = conv_layer(*args, **kwargs, **cfg_) - - return layer diff --git a/spaces/Sortoite/Simple-OpenAI-Chatbot/README.md b/spaces/Sortoite/Simple-OpenAI-Chatbot/README.md deleted file mode 100644 index 53d34d6a1bb08e097e4eb6ca71f7a027478155de..0000000000000000000000000000000000000000 --- a/spaces/Sortoite/Simple-OpenAI-Chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Simple OpenAI Chatbot -emoji: 📈 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md b/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md deleted file mode 100644 index 5e1cb2cabeb606e413bad7c42c07763de1f53d0f..0000000000000000000000000000000000000000 --- a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SJ 06 SL AI Image Music Video UI UX URL -emoji: 📊 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/version.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/version.py deleted file mode 100644 index b74c2643d1e22c31d054e9a0eaa746bc02bbd6dd..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/version.py +++ /dev/null @@ -1,6 +0,0 @@ -""" -Expose version -""" - -__version__ = "3.1.0" -VERSION = __version__.split(".") diff --git a/spaces/Suniilkumaar/MusicGen-updated/tests/modules/test_lstm.py b/spaces/Suniilkumaar/MusicGen-updated/tests/modules/test_lstm.py deleted file mode 100644 index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/tests/modules/test_lstm.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import random -import torch - -from audiocraft.modules.lstm import StreamableLSTM - - -class TestStreamableLSTM: - - def test_lstm(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=False) - x = torch.randn(B, C, T) - y = lstm(x) - - print(y.shape) - assert y.shape == torch.Size([B, C, T]) - - def test_lstm_skip(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=True) - x = torch.randn(B, C, T) - y = lstm(x) - - assert y.shape == torch.Size([B, C, T]) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py deleted file mode 100644 index ba1d42d0c5781f56dc177d860d856bb34adce555..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py +++ /dev/null @@ -1,57 +0,0 @@ -# dataset settings -dataset_type = 'PascalVOCDataset' -data_root = 'data/VOCdevkit/VOC2012' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Surendra/chatbot/README.md b/spaces/Surendra/chatbot/README.md deleted file mode 100644 index f5f846945eb850077fd56b144c86ce10b0aa19c6..0000000000000000000000000000000000000000 --- a/spaces/Surendra/chatbot/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Cloud chat Bot -emoji: 🦀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: true ---- - -# Cloud Chat Bot -- Use this bot to learn about the cloud. 
-- Ask iac code such as terraform, pulumi, etc - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/TD-jayadeera/Password_Strength_Prediction/README.md b/spaces/TD-jayadeera/Password_Strength_Prediction/README.md deleted file mode 100644 index 159c8b31cbcc5aa6eeac6343f4d2f0b7c7eb9d92..0000000000000000000000000000000000000000 --- a/spaces/TD-jayadeera/Password_Strength_Prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Password Strength Prediction -emoji: 🚀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TEnngal/bingo/src/components/ui/dropdown-menu.tsx b/spaces/TEnngal/bingo/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/datasets/utils.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/datasets/utils.py deleted file mode 100644 index d193dbe021c4daa3808fa0f8823a6decfe3f634e..0000000000000000000000000000000000000000 --- 
a/spaces/TabPFN/TabPFNPrediction/TabPFN/datasets/utils.py +++ /dev/null @@ -1,8 +0,0 @@ -def normalize_data(eval_xs): - mean = eval_xs.mean(0) - std = eval_xs.std(0) + .000001 - eval_xs = (eval_xs - mean) / std - - return eval_xs - - diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filesystem.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filesystem.py deleted file mode 100644 index 83c2df75b963e5866b63aaf0f4446a8ca61aebce..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/filesystem.py +++ /dev/null @@ -1,153 +0,0 @@ -import fnmatch -import os -import os.path -import random -import sys -from contextlib import contextmanager -from tempfile import NamedTemporaryFile -from typing import Any, BinaryIO, Generator, List, Union, cast - -from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed - -from pip._internal.utils.compat import get_path_uid -from pip._internal.utils.misc import format_size - - -def check_path_owner(path: str) -> bool: - # If we don't have a way to check the effective uid of this process, then - # we'll just assume that we own the directory. - if sys.platform == "win32" or not hasattr(os, "geteuid"): - return True - - assert os.path.isabs(path) - - previous = None - while path != previous: - if os.path.lexists(path): - # Check if path is writable by current user. - if os.geteuid() == 0: - # Special handling for root user in order to handle properly - # cases where users use sudo without -H flag. - try: - path_uid = get_path_uid(path) - except OSError: - return False - return path_uid == 0 - else: - return os.access(path, os.W_OK) - else: - previous, path = path, os.path.dirname(path) - return False # assume we don't own the path - - -@contextmanager -def adjacent_tmp_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]: - """Return a file-like object pointing to a tmp file next to path. - - The file is created securely and is ensured to be written to disk - after the context reaches its end. - - kwargs will be passed to tempfile.NamedTemporaryFile to control - the way the temporary file will be opened. - """ - with NamedTemporaryFile( - delete=False, - dir=os.path.dirname(path), - prefix=os.path.basename(path), - suffix=".tmp", - **kwargs, - ) as f: - result = cast(BinaryIO, f) - try: - yield result - finally: - result.flush() - os.fsync(result.fileno()) - - -# Tenacity raises RetryError by default, explicitly raise the original exception -_replace_retry = retry(reraise=True, stop=stop_after_delay(1), wait=wait_fixed(0.25)) - -replace = _replace_retry(os.replace) - - -# test_writable_dir and _test_writable_dir_win are copied from Flit, -# with the author's agreement to also place them under pip's license. -def test_writable_dir(path: str) -> bool: - """Check if a directory is writable. - - Uses os.access() on POSIX, tries creating files on Windows. - """ - # If the directory doesn't exist, find the closest parent that does. 
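    # pip may be about to create `path` itself, so what matters is whether the
    # nearest existing ancestor is writable.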
- while not os.path.isdir(path): - parent = os.path.dirname(path) - if parent == path: - break # Should never get here, but infinite loops are bad - path = parent - - if os.name == "posix": - return os.access(path, os.W_OK) - - return _test_writable_dir_win(path) - - -def _test_writable_dir_win(path: str) -> bool: - # os.access doesn't work on Windows: http://bugs.python.org/issue2528 - # and we can't use tempfile: http://bugs.python.org/issue22107 - basename = "accesstest_deleteme_fishfingers_custard_" - alphabet = "abcdefghijklmnopqrstuvwxyz0123456789" - for _ in range(10): - name = basename + "".join(random.choice(alphabet) for _ in range(6)) - file = os.path.join(path, name) - try: - fd = os.open(file, os.O_RDWR | os.O_CREAT | os.O_EXCL) - except FileExistsError: - pass - except PermissionError: - # This could be because there's a directory with the same name. - # But it's highly unlikely there's a directory called that, - # so we'll assume it's because the parent dir is not writable. - # This could as well be because the parent dir is not readable, - # due to non-privileged user access. - return False - else: - os.close(fd) - os.unlink(file) - return True - - # This should never be reached - raise OSError("Unexpected condition testing for writable directory") - - -def find_files(path: str, pattern: str) -> List[str]: - """Returns a list of absolute paths of files beneath path, recursively, - with filenames which match the UNIX-style shell glob pattern.""" - result: List[str] = [] - for root, _, files in os.walk(path): - matches = fnmatch.filter(files, pattern) - result.extend(os.path.join(root, f) for f in matches) - return result - - -def file_size(path: str) -> Union[int, float]: - # If it's a symlink, return 0. - if os.path.islink(path): - return 0 - return os.path.getsize(path) - - -def format_file_size(path: str) -> str: - return format_size(file_size(path)) - - -def directory_size(path: str) -> Union[int, float]: - size = 0.0 - for root, _dirs, files in os.walk(path): - for filename in files: - file_path = os.path.join(root, filename) - size += file_size(file_path) - return size - - -def format_directory_size(path: str) -> str: - return format_size(directory_size(path)) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/box.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/box.py deleted file mode 100644 index 97d2a94445770e195b9fc73e904b920d5ff04104..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/box.py +++ /dev/null @@ -1,517 +0,0 @@ -import sys -from typing import TYPE_CHECKING, Iterable, List - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from pip._vendor.typing_extensions import Literal # pragma: no cover - - -from ._loop import loop_last - -if TYPE_CHECKING: - from pip._vendor.rich.console import ConsoleOptions - - -class Box: - """Defines characters to render boxes. - - ┌─┬┐ top - │ ││ head - ├─┼┤ head_row - │ ││ mid - ├─┼┤ row - ├─┼┤ foot_row - │ ││ foot - └─┴┘ bottom - - Args: - box (str): Characters making up box. - ascii (bool, optional): True if this box uses ascii characters only. Default is False. 
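    Example:
        A small sketch using one of the prebuilt instances defined later in
        this module (``SQUARE``); the column widths are arbitrary::

            from rich import box

            print(box.SQUARE.get_top([3, 4]))     # ┌───┬────┐
            print(box.SQUARE.get_bottom([3, 4]))  # └───┴────┘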
- """ - - def __init__(self, box: str, *, ascii: bool = False) -> None: - self._box = box - self.ascii = ascii - line1, line2, line3, line4, line5, line6, line7, line8 = box.splitlines() - # top - self.top_left, self.top, self.top_divider, self.top_right = iter(line1) - # head - self.head_left, _, self.head_vertical, self.head_right = iter(line2) - # head_row - ( - self.head_row_left, - self.head_row_horizontal, - self.head_row_cross, - self.head_row_right, - ) = iter(line3) - - # mid - self.mid_left, _, self.mid_vertical, self.mid_right = iter(line4) - # row - self.row_left, self.row_horizontal, self.row_cross, self.row_right = iter(line5) - # foot_row - ( - self.foot_row_left, - self.foot_row_horizontal, - self.foot_row_cross, - self.foot_row_right, - ) = iter(line6) - # foot - self.foot_left, _, self.foot_vertical, self.foot_right = iter(line7) - # bottom - self.bottom_left, self.bottom, self.bottom_divider, self.bottom_right = iter( - line8 - ) - - def __repr__(self) -> str: - return "Box(...)" - - def __str__(self) -> str: - return self._box - - def substitute(self, options: "ConsoleOptions", safe: bool = True) -> "Box": - """Substitute this box for another if it won't render due to platform issues. - - Args: - options (ConsoleOptions): Console options used in rendering. - safe (bool, optional): Substitute this for another Box if there are known problems - displaying on the platform (currently only relevant on Windows). Default is True. - - Returns: - Box: A different Box or the same Box. - """ - box = self - if options.legacy_windows and safe: - box = LEGACY_WINDOWS_SUBSTITUTIONS.get(box, box) - if options.ascii_only and not box.ascii: - box = ASCII - return box - - def get_plain_headed_box(self) -> "Box": - """If this box uses special characters for the borders of the header, then - return the equivalent box that does not. - - Returns: - Box: The most similar Box that doesn't use header-specific box characters. - If the current Box already satisfies this criterion, then it's returned. - """ - return PLAIN_HEADED_SUBSTITUTIONS.get(self, self) - - def get_top(self, widths: Iterable[int]) -> str: - """Get the top of a simple box. - - Args: - widths (List[int]): Widths of columns. - - Returns: - str: A string of box characters. - """ - - parts: List[str] = [] - append = parts.append - append(self.top_left) - for last, width in loop_last(widths): - append(self.top * width) - if not last: - append(self.top_divider) - append(self.top_right) - return "".join(parts) - - def get_row( - self, - widths: Iterable[int], - level: Literal["head", "row", "foot", "mid"] = "row", - edge: bool = True, - ) -> str: - """Get the top of a simple box. - - Args: - width (List[int]): Widths of columns. - - Returns: - str: A string of box characters. 
- """ - if level == "head": - left = self.head_row_left - horizontal = self.head_row_horizontal - cross = self.head_row_cross - right = self.head_row_right - elif level == "row": - left = self.row_left - horizontal = self.row_horizontal - cross = self.row_cross - right = self.row_right - elif level == "mid": - left = self.mid_left - horizontal = " " - cross = self.mid_vertical - right = self.mid_right - elif level == "foot": - left = self.foot_row_left - horizontal = self.foot_row_horizontal - cross = self.foot_row_cross - right = self.foot_row_right - else: - raise ValueError("level must be 'head', 'row' or 'foot'") - - parts: List[str] = [] - append = parts.append - if edge: - append(left) - for last, width in loop_last(widths): - append(horizontal * width) - if not last: - append(cross) - if edge: - append(right) - return "".join(parts) - - def get_bottom(self, widths: Iterable[int]) -> str: - """Get the bottom of a simple box. - - Args: - widths (List[int]): Widths of columns. - - Returns: - str: A string of box characters. - """ - - parts: List[str] = [] - append = parts.append - append(self.bottom_left) - for last, width in loop_last(widths): - append(self.bottom * width) - if not last: - append(self.bottom_divider) - append(self.bottom_right) - return "".join(parts) - - -ASCII: Box = Box( - """\ -+--+ -| || -|-+| -| || -|-+| -|-+| -| || -+--+ -""", - ascii=True, -) - -ASCII2: Box = Box( - """\ -+-++ -| || -+-++ -| || -+-++ -+-++ -| || -+-++ -""", - ascii=True, -) - -ASCII_DOUBLE_HEAD: Box = Box( - """\ -+-++ -| || -+=++ -| || -+-++ -+-++ -| || -+-++ -""", - ascii=True, -) - -SQUARE: Box = Box( - """\ -┌─┬┐ -│ ││ -├─┼┤ -│ ││ -├─┼┤ -├─┼┤ -│ ││ -└─┴┘ -""" -) - -SQUARE_DOUBLE_HEAD: Box = Box( - """\ -┌─┬┐ -│ ││ -╞═╪╡ -│ ││ -├─┼┤ -├─┼┤ -│ ││ -└─┴┘ -""" -) - -MINIMAL: Box = Box( - """\ - ╷ - │ -╶─┼╴ - │ -╶─┼╴ -╶─┼╴ - │ - ╵ -""" -) - - -MINIMAL_HEAVY_HEAD: Box = Box( - """\ - ╷ - │ -╺━┿╸ - │ -╶─┼╴ -╶─┼╴ - │ - ╵ -""" -) - -MINIMAL_DOUBLE_HEAD: Box = Box( - """\ - ╷ - │ - ═╪ - │ - ─┼ - ─┼ - │ - ╵ -""" -) - - -SIMPLE: Box = Box( - """\ - - - ── - - - ── - - -""" -) - -SIMPLE_HEAD: Box = Box( - """\ - - - ── - - - - - -""" -) - - -SIMPLE_HEAVY: Box = Box( - """\ - - - ━━ - - - ━━ - - -""" -) - - -HORIZONTALS: Box = Box( - """\ - ── - - ── - - ── - ── - - ── -""" -) - -ROUNDED: Box = Box( - """\ -╭─┬╮ -│ ││ -├─┼┤ -│ ││ -├─┼┤ -├─┼┤ -│ ││ -╰─┴╯ -""" -) - -HEAVY: Box = Box( - """\ -┏━┳┓ -┃ ┃┃ -┣━╋┫ -┃ ┃┃ -┣━╋┫ -┣━╋┫ -┃ ┃┃ -┗━┻┛ -""" -) - -HEAVY_EDGE: Box = Box( - """\ -┏━┯┓ -┃ │┃ -┠─┼┨ -┃ │┃ -┠─┼┨ -┠─┼┨ -┃ │┃ -┗━┷┛ -""" -) - -HEAVY_HEAD: Box = Box( - """\ -┏━┳┓ -┃ ┃┃ -┡━╇┩ -│ ││ -├─┼┤ -├─┼┤ -│ ││ -└─┴┘ -""" -) - -DOUBLE: Box = Box( - """\ -╔═╦╗ -║ ║║ -╠═╬╣ -║ ║║ -╠═╬╣ -╠═╬╣ -║ ║║ -╚═╩╝ -""" -) - -DOUBLE_EDGE: Box = Box( - """\ -╔═╤╗ -║ │║ -╟─┼╢ -║ │║ -╟─┼╢ -╟─┼╢ -║ │║ -╚═╧╝ -""" -) - -MARKDOWN: Box = Box( - """\ - -| || -|-|| -| || -|-|| -|-|| -| || - -""", - ascii=True, -) - -# Map Boxes that don't render with raster fonts on to equivalent that do -LEGACY_WINDOWS_SUBSTITUTIONS = { - ROUNDED: SQUARE, - MINIMAL_HEAVY_HEAD: MINIMAL, - SIMPLE_HEAVY: SIMPLE, - HEAVY: SQUARE, - HEAVY_EDGE: SQUARE, - HEAVY_HEAD: SQUARE, -} - -# Map headed boxes to their headerless equivalents -PLAIN_HEADED_SUBSTITUTIONS = { - HEAVY_HEAD: SQUARE, - SQUARE_DOUBLE_HEAD: SQUARE, - MINIMAL_DOUBLE_HEAD: MINIMAL, - MINIMAL_HEAVY_HEAD: MINIMAL, - ASCII_DOUBLE_HEAD: ASCII2, -} - - -if __name__ == "__main__": # pragma: no cover - - from pip._vendor.rich.columns import Columns - from pip._vendor.rich.panel import Panel - - from . 
import box as box - from .console import Console - from .table import Table - from .text import Text - - console = Console(record=True) - - BOXES = [ - "ASCII", - "ASCII2", - "ASCII_DOUBLE_HEAD", - "SQUARE", - "SQUARE_DOUBLE_HEAD", - "MINIMAL", - "MINIMAL_HEAVY_HEAD", - "MINIMAL_DOUBLE_HEAD", - "SIMPLE", - "SIMPLE_HEAD", - "SIMPLE_HEAVY", - "HORIZONTALS", - "ROUNDED", - "HEAVY", - "HEAVY_EDGE", - "HEAVY_HEAD", - "DOUBLE", - "DOUBLE_EDGE", - "MARKDOWN", - ] - - console.print(Panel("[bold green]Box Constants", style="green"), justify="center") - console.print() - - columns = Columns(expand=True, padding=2) - for box_name in sorted(BOXES): - table = Table( - show_footer=True, style="dim", border_style="not dim", expand=True - ) - table.add_column("Header 1", "Footer 1") - table.add_column("Header 2", "Footer 2") - table.add_row("Cell", "Cell") - table.add_row("Cell", "Cell") - table.box = getattr(box, box_name) - table.title = Text(f"box.{box_name}", style="magenta") - columns.add_renderable(table) - console.print(columns) - - # console.save_svg("box.svg") diff --git a/spaces/TechShark20/handwespeak/spoter/spoter_model.py b/spaces/TechShark20/handwespeak/spoter/spoter_model.py deleted file mode 100644 index 9462ce1b084ad275a6025b5faa765593dff9c3d0..0000000000000000000000000000000000000000 --- a/spaces/TechShark20/handwespeak/spoter/spoter_model.py +++ /dev/null @@ -1,70 +0,0 @@ - -import copy -import torch - -import torch.nn as nn -from typing import Optional - - -def _get_clones(mod, n): - return nn.ModuleList([copy.deepcopy(mod) for _ in range(n)]) - - -class SPOTERTransformerDecoderLayer(nn.TransformerDecoderLayer): - """ - Edited TransformerDecoderLayer implementation omitting the redundant self-attention operation as opposed to the - standard implementation. - """ - - def __init__(self, d_model, nhead, dim_feedforward, dropout, activation): - super(SPOTERTransformerDecoderLayer, self).__init__(d_model, nhead, dim_feedforward, dropout, activation) - - del self.self_attn - - def forward(self, tgt: torch.Tensor, memory: torch.Tensor, tgt_mask: Optional[torch.Tensor] = None, - memory_mask: Optional[torch.Tensor] = None, tgt_key_padding_mask: Optional[torch.Tensor] = None, - memory_key_padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - - tgt = tgt + self.dropout1(tgt) - tgt = self.norm1(tgt) - tgt2 = self.multihead_attn(tgt, memory, memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - - return tgt - - -class SPOTER(nn.Module): - """ - Implementation of the SPOTER (Sign POse-based TransformER) architecture for sign language recognition from sequence - of skeletal data. 
- """ - - def __init__(self, num_classes, hidden_dim=55): - super().__init__() - - self.row_embed = nn.Parameter(torch.rand(50, hidden_dim)) - self.pos = nn.Parameter(torch.cat([self.row_embed[0].unsqueeze(0).repeat(1, 1, 1)], dim=-1).flatten(0, 1).unsqueeze(0)) - self.class_query = nn.Parameter(torch.rand(1, hidden_dim)) - self.transformer = nn.Transformer(hidden_dim, 9, 6, 6) - self.linear_class = nn.Linear(hidden_dim, num_classes) - - # Deactivate the initial attention decoder mechanism - custom_decoder_layer = SPOTERTransformerDecoderLayer(self.transformer.d_model, self.transformer.nhead, 2048, - 0.1, "relu") - self.transformer.decoder.layers = _get_clones(custom_decoder_layer, self.transformer.decoder.num_layers) - - def forward(self, inputs): - h = torch.unsqueeze(inputs.flatten(start_dim=1), 1).float() - h = self.transformer(self.pos + h, self.class_query.unsqueeze(0)).transpose(0, 1) - res = self.linear_class(h) - - return res - - -if __name__ == "__main__": - pass diff --git a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/torch_impl/torch_cross_attention.py b/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/torch_impl/torch_cross_attention.py deleted file mode 100644 index 6e63229a4b45f2bd846ff237723152ee1f1e6623..0000000000000000000000000000000000000000 --- a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/torch_impl/torch_cross_attention.py +++ /dev/null @@ -1,171 +0,0 @@ -from typing import Optional -import torch -import torch.nn as nn - -class CrossAttention(nn.Module): - r""" - A cross attention layer. - - Parameters: - query_dim (`int`): The number of channels in the query. - cross_attention_dim (`int`, *optional*): - The number of channels in the context. If not given, defaults to `query_dim`. - heads (`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention. - dim_head (`int`, *optional*, defaults to 64): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - bias (`bool`, *optional*, defaults to False): - Set to `True` for the query, key, and value linear layers to contain a bias parameter. - """ - - def __init__(self, - query_dim: int, - cross_attention_dim: Optional[int] = None, - heads: int = 8, - dim_head: int = 64, - dropout: float = 0.0, - bias: bool = False - ): - super().__init__() - inner_dim = dim_head * heads - cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim - - self.scale = dim_head**-0.5 - self.heads = heads - self.n_heads = heads - self.d_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias = bias) - self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias = bias) - self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias = bias) - - self.to_out = nn.ModuleList([]) - self.to_out.append(nn.Linear(inner_dim, query_dim)) - self.to_out.append(nn.Dropout(dropout)) - try: - # You can install flash attention by cloning their Github repo, - # [https://github.com/HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention) - # and then running `python setup.py install` - from flash_attn.flash_attention import FlashAttention - self.flash = FlashAttention() - # Set the scale for scaled dot-product attention. 
- self.flash.softmax_scale = self.scale - # Set to `None` if it's not installed - except ImportError: - self.flash = None - - def reshape_heads_to_batch_dim(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * head_size, seq_len, dim // head_size) - return tensor - - def reshape_batch_dim_to_heads(self, tensor): - batch_size, seq_len, dim = tensor.shape - head_size = self.heads - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size) - return tensor - - def forward(self, - hidden_states: torch.Tensor, - encoder_hidden_states: Optional[torch.Tensor] = None, - mask: Optional[torch.Tensor] = None - ) -> torch.Tensor: - batch_size, sequence_length, _ = hidden_states.shape - is_self = encoder_hidden_states is None - # attention, what we cannot get enough of - query = self.to_q(hidden_states) - has_cond = encoder_hidden_states is not None - - encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states - key = self.to_k(encoder_hidden_states) - value = self.to_v(encoder_hidden_states) - - dim = query.shape[-1] - - if self.flash is not None and not has_cond and self.d_head <= 64: - hidden_states = self.flash_attention(query, key, value) - else: - hidden_states = self.normal_attention(query, key, value, is_self) - - # linear proj - hidden_states = self.to_out[0](hidden_states) - # dropout - hidden_states = self.to_out[1](hidden_states) - return hidden_states - - def flash_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - """ - #### Flash Attention - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - - # Get batch size and number of elements along sequence axis (`width * height`) - batch_size, seq_len, _ = q.shape - - # Stack `q`, `k`, `v` vectors for flash attention, to get a single tensor of - # shape `[batch_size, seq_len, 3, n_heads * d_head]` - qkv = torch.stack((q, k, v), dim = 2) - # Split the heads - qkv = qkv.view(batch_size, seq_len, 3, self.n_heads, self.d_head) - - # Flash attention works for head sizes `32`, `64` and `128`, so we have to pad the heads to - # fit this size. 
- if self.d_head <= 32: - pad = 32 - self.d_head - elif self.d_head <= 64: - pad = 64 - self.d_head - elif self.d_head <= 128: - pad = 128 - self.d_head - else: - raise ValueError(f'Head size ${self.d_head} too large for Flash Attention') - - # Pad the heads - if pad: - qkv = torch.cat((qkv, qkv.new_zeros(batch_size, seq_len, 3, self.n_heads, pad)), dim = -1) - - # Compute attention - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - # This gives a tensor of shape `[batch_size, seq_len, n_heads, d_padded]` - out, _ = self.flash(qkv) - # Truncate the extra head size - out = out[:, :, :, :self.d_head] - # Reshape to `[batch_size, seq_len, n_heads * d_head]` - out = out.reshape(batch_size, seq_len, self.n_heads * self.d_head) - - # Map to `[batch_size, height * width, d_model]` with a linear layer - return out - - def normal_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, is_self: bool): - """ - #### Normal Attention - - :param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - :param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]` - """ - # Split them to heads of shape `[batch_size, seq_len, n_heads, d_head]` - q = q.view(*q.shape[:2], self.n_heads, -1) - k = k.view(*k.shape[:2], self.n_heads, -1) - v = v.view(*v.shape[:2], self.n_heads, -1) - - # Calculate attention $\frac{Q K^\top}{\sqrt{d_{key}}}$ - attn = torch.einsum('bihd,bjhd->bhij', q, k) * self.scale - # Compute softmax - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)$$ - half = attn.shape[0] // 2 - attn[half:] = attn[half:].softmax(dim = -1) - attn[:half] = attn[:half].softmax(dim = -1) - - # Compute attention output - # $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$ - out = torch.einsum('bhij,bjhd->bihd', attn, v) - - # Reshape to `[batch_size, height * width, n_heads * d_head]` - out = out.reshape(*out.shape[:2], -1) - - # Map to `[batch_size, height * width, d_model]` with a linear layer - return out \ No newline at end of file diff --git a/spaces/Trangluna2002/AI_Cover_Gen/src/mdx.py b/spaces/Trangluna2002/AI_Cover_Gen/src/mdx.py deleted file mode 100644 index 448e65d45cb1272c06f3ffa015cef8abd1257d9a..0000000000000000000000000000000000000000 --- a/spaces/Trangluna2002/AI_Cover_Gen/src/mdx.py +++ /dev/null @@ -1,292 +0,0 @@ -import gc -import hashlib -import os -import queue -import threading -import warnings - -import librosa -import numpy as np -import onnxruntime as ort -import soundfile as sf -import torch -from tqdm import tqdm - -warnings.filterwarnings("ignore") -stem_naming = {'Vocals': 'Instrumental', 'Other': 'Instruments', 'Instrumental': 'Vocals', 'Drums': 'Drumless', 'Bass': 'Bassless'} - - -class MDXModel: - def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000): - self.dim_f = dim_f - self.dim_t = dim_t - self.dim_c = 4 - self.n_fft = n_fft - self.hop = hop - self.stem_name = stem_name - self.compensation = compensation - - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - - out_c = self.dim_c - - self.freq_pad = torch.zeros([1, out_c, self.n_bins - self.dim_f, self.dim_t]).to(device) - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, 
window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 4, self.n_bins, self.dim_t]) - return x[:, :, :self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0], 1, 1, 1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - # c = 4*2 if self.target_name=='*' else 2 - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 2, self.n_bins, self.dim_t]) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1, 2, self.chunk_size]) - - -class MDX: - DEFAULT_SR = 44100 - # Unit: seconds - DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR - DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR - - DEFAULT_PROCESSOR = 0 - - def __init__(self, model_path: str, params: MDXModel, processor=DEFAULT_PROCESSOR): - - # Set the device and the provider (CPU or CUDA) - #self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu') - self.device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - #self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider'] - self.provider = ['CPUExecutionProvider'] - - self.model = params - - # Load the ONNX model using ONNX Runtime - self.ort = ort.InferenceSession(model_path, providers=self.provider) - # Preload the model for faster performance - self.ort.run(None, {'input': torch.rand(1, 4, params.dim_f, params.dim_t).numpy()}) - self.process = lambda spec: self.ort.run(None, {'input': spec.cpu().numpy()})[0] - - self.prog = None - - @staticmethod - def get_hash(model_path): - try: - with open(model_path, 'rb') as f: - f.seek(- 10000 * 1024, 2) - model_hash = hashlib.md5(f.read()).hexdigest() - except: - model_hash = hashlib.md5(open(model_path, 'rb').read()).hexdigest() - - return model_hash - - @staticmethod - def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE): - """ - Segment or join segmented wave array - - Args: - wave: (np.array) Wave array to be segmented or joined - combine: (bool) If True, combines segmented wave array. If False, segments wave array. 
- chunk_size: (int) Size of each segment (in samples) - margin_size: (int) Size of margin between segments (in samples) - - Returns: - numpy array: Segmented or joined wave array - """ - - if combine: - processed_wave = None # Initializing as None instead of [] for later numpy array concatenation - for segment_count, segment in enumerate(wave): - start = 0 if segment_count == 0 else margin_size - end = None if segment_count == len(wave) - 1 else -margin_size - if margin_size == 0: - end = None - if processed_wave is None: # Create array for first segment - processed_wave = segment[:, start:end] - else: # Concatenate to existing array for subsequent segments - processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1) - - else: - processed_wave = [] - sample_count = wave.shape[-1] - - if chunk_size <= 0 or chunk_size > sample_count: - chunk_size = sample_count - - if margin_size > chunk_size: - margin_size = chunk_size - - for segment_count, skip in enumerate(range(0, sample_count, chunk_size)): - - margin = 0 if segment_count == 0 else margin_size - end = min(skip + chunk_size + margin_size, sample_count) - start = skip - margin - - cut = wave[:, start:end].copy() - processed_wave.append(cut) - - if end == sample_count: - break - - return processed_wave - - def pad_wave(self, wave): - """ - Pad the wave array to match the required chunk size - - Args: - wave: (np.array) Wave array to be padded - - Returns: - tuple: (padded_wave, pad, trim) - - padded_wave: Padded wave array - - pad: Number of samples that were padded - - trim: Number of samples that were trimmed - """ - n_sample = wave.shape[1] - trim = self.model.n_fft // 2 - gen_size = self.model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - - # Padded wave - wave_p = np.concatenate((np.zeros((2, trim)), wave, np.zeros((2, pad)), np.zeros((2, trim))), 1) - - mix_waves = [] - for i in range(0, n_sample + pad, gen_size): - waves = np.array(wave_p[:, i:i + self.model.chunk_size]) - mix_waves.append(waves) - - print(self.device) - - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device) - - return mix_waves, pad, trim - - def _process_wave(self, mix_waves, trim, pad, q: queue.Queue, _id: int): - """ - Process each wave segment in a multi-threaded environment - - Args: - mix_waves: (torch.Tensor) Wave segments to be processed - trim: (int) Number of samples trimmed during padding - pad: (int) Number of samples padded during padding - q: (queue.Queue) Queue to hold the processed wave segments - _id: (int) Identifier of the processed wave segment - - Returns: - numpy array: Processed wave segment - """ - mix_waves = mix_waves.split(1) - with torch.no_grad(): - pw = [] - for mix_wave in mix_waves: - self.prog.update() - spec = self.model.stft(mix_wave) - processed_spec = torch.tensor(self.process(spec)) - processed_wav = self.model.istft(processed_spec.to(self.device)) - processed_wav = processed_wav[:, :, trim:-trim].transpose(0, 1).reshape(2, -1).cpu().numpy() - pw.append(processed_wav) - processed_signal = np.concatenate(pw, axis=-1)[:, :-pad] - q.put({_id: processed_signal}) - return processed_signal - - def process_wave(self, wave: np.array, mt_threads=1): - """ - Process the wave array in a multi-threaded environment - - Args: - wave: (np.array) Wave array to be processed - mt_threads: (int) Number of threads to be used for processing - - Returns: - numpy array: Processed wave array - """ - self.prog = tqdm(total=0) - chunk = wave.shape[-1] // mt_threads - waves = self.segment(wave, 
False, chunk) - - # Create a queue to hold the processed wave segments - q = queue.Queue() - threads = [] - for c, batch in enumerate(waves): - mix_waves, pad, trim = self.pad_wave(batch) - self.prog.total = len(mix_waves) * mt_threads - thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c)) - thread.start() - threads.append(thread) - for thread in threads: - thread.join() - self.prog.close() - - processed_batches = [] - while not q.empty(): - processed_batches.append(q.get()) - processed_batches = [list(wave.values())[0] for wave in - sorted(processed_batches, key=lambda d: list(d.keys())[0])] - assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!' - return self.segment(processed_batches, True, chunk) - - -def run_mdx(model_params, output_dir, model_path, filename, exclude_main=False, exclude_inversion=False, suffix=None, invert_suffix=None, denoise=False, keep_orig=True, m_threads=2): - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - - #device_properties = torch.cuda.get_device_properties(device) - print("Device", device) - vram_gb = 12 #device_properties.total_memory / 1024**3 - m_threads = 1 if vram_gb < 8 else 2 - - model_hash = MDX.get_hash(model_path) - mp = model_params.get(model_hash) - model = MDXModel( - device, - dim_f=mp["mdx_dim_f_set"], - dim_t=2 ** mp["mdx_dim_t_set"], - n_fft=mp["mdx_n_fft_scale_set"], - stem_name=mp["primary_stem"], - compensation=mp["compensate"] - ) - - mdx_sess = MDX(model_path, model) - wave, sr = librosa.load(filename, mono=False, sr=44100) - # normalizing input wave gives better output - peak = max(np.max(wave), abs(np.min(wave))) - wave /= peak - if denoise: - wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads)) - wave_processed *= 0.5 - else: - wave_processed = mdx_sess.process_wave(wave, m_threads) - # return to previous peak - wave_processed *= peak - stem_name = model.stem_name if suffix is None else suffix - - main_filepath = None - if not exclude_main: - main_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav") - sf.write(main_filepath, wave_processed.T, sr) - - invert_filepath = None - if not exclude_inversion: - diff_stem_name = stem_naming.get(stem_name) if invert_suffix is None else invert_suffix - stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name - invert_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav") - sf.write(invert_filepath, (-wave_processed.T * model.compensation) + wave.T, sr) - - if not keep_orig: - os.remove(filename) - - del mdx_sess, wave_processed, wave - gc.collect() - return main_filepath, invert_filepath diff --git a/spaces/VickyKira/NASAGPT/app.py b/spaces/VickyKira/NASAGPT/app.py deleted file mode 100644 index 65f603f88f4a30ce02fb4f5554d2c5fc6259575d..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import secrets - -from server.bp import bp -from server.website import Website -from server.backend import Backend_Api -from server.babel import create_babel -from json import load -from flask import Flask - -if __name__ == '__main__': - - # Load configuration from config.json - config = load(open('config.json', 'r')) - site_config = config['site_config'] - url_prefix = config.pop('url_prefix') - - # Create the app - app = Flask(__name__) - app.secret_key = 
secrets.token_hex(16) - - # Set up Babel - create_babel(app) - - # Set up the website routes - site = Website(bp, url_prefix) - for route in site.routes: - bp.add_url_rule( - route, - view_func=site.routes[route]['function'], - methods=site.routes[route]['methods'], - ) - - # Set up the backend API routes - backend_api = Backend_Api(bp, config) - for route in backend_api.routes: - bp.add_url_rule( - route, - view_func=backend_api.routes[route]['function'], - methods=backend_api.routes[route]['methods'], - ) - - # Register the blueprint - app.register_blueprint(bp, url_prefix=url_prefix) - - # Run the Flask server - print(f"Running on {site_config['port']}{url_prefix}") - app.run(**site_config) - print(f"Closing port {site_config['port']}") \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Aichat.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Aichat.py deleted file mode 100644 index d78375ce7e62b634c82e163c693a5557b8e2f860..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Aichat.py +++ /dev/null @@ -1,35 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/chat-stream', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Xenova/next-server-example-app/Dockerfile b/spaces/Xenova/next-server-example-app/Dockerfile deleted file mode 100644 index a99d2b5846c127ed08f34dabc9d8524b6c934056..0000000000000000000000000000000000000000 --- a/spaces/Xenova/next-server-example-app/Dockerfile +++ /dev/null @@ -1,69 +0,0 @@ -# syntax=docker/dockerfile:1.4 - -# Adapted from https://github.com/vercel/next.js/blob/e60a1e747c3f521fc24dfd9ee2989e13afeb0a9b/examples/with-docker/Dockerfile -# For more information, see https://nextjs.org/docs/pages/building-your-application/deploying#docker-image - -FROM node:18 AS base - -# Install dependencies only when needed -FROM base AS deps -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY --link package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps --link /app/node_modules ./node_modules -COPY --link . . 
- -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN npm run build - -# If using yarn comment out above and use below instead -# RUN yarn build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN \ - addgroup --system --gid 1001 nodejs; \ - adduser --system --uid 1001 nextjs - -COPY --from=builder --link /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --link --chown=1001:1001 /app/.next/standalone ./ -COPY --from=builder --link --chown=1001:1001 /app/.next/static ./.next/static - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 -ENV HOSTNAME localhost - -# Allow the running process to write model files to the cache folder. -# NOTE: In practice, you would probably want to pre-download the model files to avoid having to download them on-the-fly. -RUN mkdir -p /app/node_modules/@xenova/.cache/ -RUN chmod 777 -R /app/node_modules/@xenova/ - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/commons.py b/spaces/XzJosh/Bekki-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YlcldKlns/bing/src/lib/bots/bing/tts.ts b/spaces/YlcldKlns/bing/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/abdvl/datahub_qa_bot/docs/advanced/field-path-spec-v2.md b/spaces/abdvl/datahub_qa_bot/docs/advanced/field-path-spec-v2.md deleted file mode 100644 index 0ecf9cf52cdc1f5f89baee68a3f383eef55db4f1..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/advanced/field-path-spec-v2.md +++ /dev/null @@ -1,352 +0,0 @@ -# SchemaFieldPath Specification (Version 2) - -This document outlines the formal specification for the fieldPath member of -the [SchemaField](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/schema/SchemaField.pdl) -model. This specification (version 2) takes into account the unique requirements of supporting a wide variety of nested -types, unions and optional fields and is a substantial improvement over the current implementation (version 1). 
-
-## Requirements
-
-The `fieldPath` field is currently used by datahub for not just rendering the schema fields in the UI, but also as a
-primary identifier of a field in other places such
-as [EditableSchemaFieldInfo](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/schema/EditableSchemaFieldInfo.pdl#L12),
-usage stats and data profiles. Therefore, it must satisfy the following requirements.
-
-* must be unique across all fields within a schema.
-* make schema navigation in the UI more intuitive.
-* allow for identifying the type of schema the field is part of, such as a `key-schema` or a `value-schema`.
-* allow for future-evolution
-
-## Existing Convention(v1)
-
-The existing convention is to simply use the field's name as the `fieldPath` for simple fields, and use the `dot`
-delimited names for nested fields. This scheme does not satisfy the [requirements](#requirements) stated above. The
-following example illustrates where the `uniqueness` requirement is not satisfied.
-
-### Example: Ambiguous field path
-
-Consider the following `Avro` schema which is a `union` of two record types `A` and `B`, each having a simple field with
-the same name `f` that is of type `string`. The v1 naming scheme cannot differentiate if a `fieldPath=f` is referring to
-the record type `A` or `B`.
-
-```
-[
-  {
-    "type": "record",
-    "name": "A",
-    "fields": [{ "name": "f", "type": "string" } ]
-  }, {
-    "type": "record",
-    "name": "B",
-    "fields": [{ "name": "f", "type": "string" } ]
-  }
-]
-```
-
-## The FieldPath encoding scheme(v2)
-
-The syntax for V2 encoding of the `fieldPath` is captured in the following grammar. The `FieldPathSpec` is essentially
-the type annotated path of the member, with each token along the path representing one level of nested member,
-starting from the most-enclosing type, leading up to the member. In the case of `unions` that have `one-of` semantics,
-the corresponding field will be emitted once for each `member` of the union as its `type`, along with one path
-corresponding to the `union` itself.
-
-### Formal Spec:
-
-```
-<SchemaFieldPath> := <VersionToken>.<PartOfKeySchemaToken>.<FieldPathSpec>  // when part of a key-schema
-                   | <VersionToken>.<FieldPathSpec>                         // when part of a value schema
-<VersionToken> := [version=<VersionId>]  // [version=2.0] for v2
-<PartOfKeySchemaToken> := [key=True]  // when part of a key schema
-<FieldPathSpec> := <FieldToken>+  // this is the type prefixed path field (nested if repeats).
-<FieldToken> := <TypePrefixToken>.<name_of_the_field>  // type prefixed path of a field.
-<TypePrefixToken> := <NestedTypePrefixToken>.<SimpleTypeToken> | <SimpleTypeToken>
-<NestedTypePrefixToken> := [type=<NestedType>]
-<SimpleTypeToken> := [type=<SimpleType>]
-<NestedType> := <name of a struct/record> | union | array | map
-<SimpleType> := int | float | double | string | fixed | enum
-```
-
-For the [example above](#example-ambiguous-field-path), this encoding would produce the following 2 unique paths
-corresponding to the `A.f` and `B.f` fields.
-
-```python
-unique_v2_field_paths = [
-    "[version=2.0].[type=union].[type=A].[type=string].f",
-    "[version=2.0].[type=union].[type=B].[type=string].f"
-]
-```
-
-NOTE:
-
-- this encoding always ensures uniqueness within a schema since the full type annotation leading to a field is encoded
-  in the fieldPath itself.
-- processing a fieldPath, such as from UI, gets simplified simply by walking each token along the path from
-  left-to-right (see the sketch after this list).
-- adding PartOfKeySchemaToken allows for identifying if the field is part of key-schema.
-- adding VersionToken allows for future evolvability.
-- to represent `optional` fields, which sometimes are modeled as `unions` in formats like `Avro`, instead of treating it
-  as a `union` member, set the `nullable` member of `SchemaField` to `True`.
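The left-to-right walk described above can be sketched in a few lines of Python. This is only an illustration of the token layout defined by the formal spec, not a DataHub utility; the `walk_v2_field_path` helper and its return shape are our own choices for the example.

```python
import re
from typing import List, Tuple

# Matches either one bracketed annotation token (which may itself contain dots,
# e.g. "[version=2.0]") or one dot-free name segment.
_TOKEN = re.compile(r"\[[^\]]*\]|[^.]+")


def walk_v2_field_path(field_path: str) -> Tuple[bool, List[str], str]:
    """Walk a v2 fieldPath left-to-right.

    Returns (is_part_of_key_schema, type_chain, simple_dotted_name).
    """
    is_key_schema = False
    type_chain: List[str] = []
    name_parts: List[str] = []
    for token in _TOKEN.findall(field_path):
        if token.startswith("["):
            if token == "[key=True]":
                is_key_schema = True
            elif token.startswith("[type="):
                type_chain.append(token[len("[type="):-1])
            # "[version=...]" only identifies the encoding version, so it is skipped here.
        else:
            name_parts.append(token)
    return is_key_schema, type_chain, ".".join(name_parts)


print(walk_v2_field_path(
    "[version=2.0].[key=True].[type=SimpleNested].[type=InnerRcd].nestedRcd.[type=string].aStringField"
))
# (True, ['SimpleNested', 'InnerRcd', 'string'], 'nestedRcd.aStringField')
```

The `[key=True]` branch corresponds to the `PartOfKeySchemaToken` of the grammar, and joining the non-bracketed segments recovers the plain dotted field name.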
- -## Examples - -### Primitive types - -```python -avro_schema = """ -{ - "type": "string" -} -""" -unique_v2_field_paths = [ - "[version=2.0].[type=string]" -] -``` -### Records -**Simple Record** -```python -avro_schema = """ -{ - "type": "record", - "name": "some.event.E", - "namespace": "some.event.N", - "doc": "this is the event record E" - "fields": [ - { - "name": "a", - "type": "string", - "doc": "this is string field a of E" - }, - { - "name": "b", - "type": "string", - "doc": "this is string field b of E" - } - ] -} -""" - -unique_v2_field_paths = [ - "[version=2.0].[type=E].[type=string].a", - "[version=2.0].[type=E].[type=string].b", -] -``` -**Nested Record** -```python -avro_schema = """ -{ - "type": "record", - "name": "SimpleNested", - "namespace": "com.linkedin", - "fields": [{ - "name": "nestedRcd", - "type": { - "type": "record", - "name": "InnerRcd", - "fields": [{ - "name": "aStringField", - "type": "string" - } ] - } - }] -} -""" - -unique_v2_field_paths = [ - "[version=2.0].[key=True].[type=SimpleNested].[type=InnerRcd].nestedRcd", - "[version=2.0].[key=True].[type=SimpleNested].[type=InnerRcd].nestedRcd.[type=string].aStringField", -] -``` - -**Recursive Record** -```python -avro_schema = """ -{ - "type": "record", - "name": "Recursive", - "namespace": "com.linkedin", - "fields": [{ - "name": "r", - "type": { - "type": "record", - "name": "R", - "fields": [ - { "name" : "anIntegerField", "type" : "int" }, - { "name": "aRecursiveField", "type": "com.linkedin.R"} - ] - } - }] -} -""" - -unique_v2_field_paths = [ - "[version=2.0].[type=Recursive].[type=R].r", - "[version=2.0].[type=Recursive].[type=R].r.[type=int].anIntegerField", - "[version=2.0].[type=Recursive].[type=R].r.[type=R].aRecursiveField" -] -``` - -```python -avro_schema =""" -{ - "type": "record", - "name": "TreeNode", - "fields": [ - { - "name": "value", - "type": "long" - }, - { - "name": "children", - "type": { "type": "array", "items": "TreeNode" } - } - ] -} -""" -unique_v2_field_paths = [ - "[version=2.0].[type=TreeNode].[type=long].value", - "[version=2.0].[type=TreeNode].[type=array].[type=TreeNode].children", -] -``` -### Unions -```python -avro_schema = """ -{ - "type": "record", - "name": "ABUnion", - "namespace": "com.linkedin", - "fields": [{ - "name": "a", - "type": [{ - "type": "record", - "name": "A", - "fields": [{ "name": "f", "type": "string" } ] - }, { - "type": "record", - "name": "B", - "fields": [{ "name": "f", "type": "string" } ] - } - ] - }] -} -""" -unique_v2_field_paths: List[str] = [ - "[version=2.0].[key=True].[type=ABUnion].[type=union].a", - "[version=2.0].[key=True].[type=ABUnion].[type=union].[type=A].a", - "[version=2.0].[key=True].[type=ABUnion].[type=union].[type=A].a.[type=string].f", - "[version=2.0].[key=True].[type=ABUnion].[type=union].[type=B].a", - "[version=2.0].[key=True].[type=ABUnion].[type=union].[type=B].a.[type=string].f", -] -``` -### Arrays -```python -avro_schema = """ -{ - "type": "record", - "name": "NestedArray", - "namespace": "com.linkedin", - "fields": [{ - "name": "ar", - "type": { - "type": "array", - "items": { - "type": "array", - "items": [ - "null", - { - "type": "record", - "name": "Foo", - "fields": [ { - "name": "a", - "type": "long" - } ] - } - ] - } - } - }] -} -""" -unique_v2_field_paths: List[str] = [ - "[version=2.0].[type=NestedArray].[type=array].[type=array].[type=Foo].ar", - "[version=2.0].[type=NestedArray].[type=array].[type=array].[type=Foo].ar.[type=long].a", -] -``` -### Maps -```python -avro_schema = """ -{ - "type": 
"record", - "name": "R", - "namespace": "some.namespace", - "fields": [ - { - "name": "a_map_of_longs_field", - "type": { - "type": "map", - "values": "long" - } - } - ] -} -""" -unique_v2_field_paths = [ - "[version=2.0].[type=R].[type=map].[type=long].a_map_of_longs_field", -] - - -``` -### Mixed Complex Type Examples -```python -# Combines arrays, unions and records. -avro_schema = """ -{ - "type": "record", - "name": "ABFooUnion", - "namespace": "com.linkedin", - "fields": [{ - "name": "a", - "type": [ { - "type": "record", - "name": "A", - "fields": [{ "name": "f", "type": "string" } ] - }, { - "type": "record", - "name": "B", - "fields": [{ "name": "f", "type": "string" } ] - }, { - "type": "array", - "items": { - "type": "array", - "items": [ - "null", - { - "type": "record", - "name": "Foo", - "fields": [{ "name": "f", "type": "long" }] - } - ] - } - }] - }] -} -""" - -unique_v2_field_paths: List[str] = [ - "[version=2.0].[type=ABFooUnion].[type=union].a", - "[version=2.0].[type=ABFooUnion].[type=union].[type=A].a", - "[version=2.0].[type=ABFooUnion].[type=union].[type=A].a.[type=string].f", - "[version=2.0].[type=ABFooUnion].[type=union].[type=B].a", - "[version=2.0].[type=ABFooUnion].[type=union].[type=B].a.[type=string].f", - "[version=2.0].[type=ABFooUnion].[type=union].[type=array].[type=array].[type=Foo].a", - "[version=2.0].[type=ABFooUnion].[type=union].[type=array].[type=array].[type=Foo].a.[type=long].f", -] -``` - -For more examples, see -the [unit-tests for AvroToMceSchemaConverter](https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/tests/unit/test_schema_util.py). - -### Backward-compatibility - -While this format is not directly compatible with the v1 format, the v1 equivalent can easily be constructed from the v2 -encoding by stripping away all the v2 tokens enclosed in the square-brackets `[]`. 
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/point_generator.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/point_generator.py deleted file mode 100644 index e6fbd988c317992c092c68c827dc4c53223b4a4a..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/anchor/point_generator.py +++ /dev/null @@ -1,37 +0,0 @@ -import torch - -from .builder import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class PointGenerator(object): - - def _meshgrid(self, x, y, row_major=True): - xx = x.repeat(len(y)) - yy = y.view(-1, 1).repeat(1, len(x)).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_points(self, featmap_size, stride=16, device='cuda'): - feat_h, feat_w = featmap_size - shift_x = torch.arange(0., feat_w, device=device) * stride - shift_y = torch.arange(0., feat_h, device=device) * stride - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - stride = shift_x.new_full((shift_xx.shape[0], ), stride) - shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_size, valid_size, device='cuda'): - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/paa_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/paa_head.py deleted file mode 100644 index e067b0121cf8b8230c0c9c6b8cfd41f56be4e298..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/paa_head.py +++ /dev/null @@ -1,671 +0,0 @@ -import numpy as np -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply, multiclass_nms -from mmdet.core.bbox.iou_calculators import bbox_overlaps -from mmdet.models import HEADS -from mmdet.models.dense_heads import ATSSHead - -EPS = 1e-12 -try: - import sklearn.mixture as skm -except ImportError: - skm = None - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] - Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. 
Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class PAAHead(ATSSHead): - """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU - Prediction for Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - topk (int): Select topk samples with smallest loss in - each level. - score_voting (bool): Whether to use score voting in post-process. - covariance_type : String describing the type of covariance parameters - to be used in :class:`sklearn.mixture.GaussianMixture`. - It must be one of: - - - 'full': each component has its own general covariance matrix - - 'tied': all components share the same general covariance matrix - - 'diag': each component has its own diagonal covariance matrix - - 'spherical': each component has its own single variance - Default: 'diag'. From 'full' to 'spherical', the gmm fitting - process is faster yet the performance could be influenced. For most - cases, 'diag' should be a good choice. - """ - - def __init__(self, - *args, - topk=9, - score_voting=True, - covariance_type='diag', - **kwargs): - # topk used in paa reassign process - self.topk = topk - self.with_score_voting = score_voting - self.covariance_type = covariance_type - super(PAAHead, self).__init__(*args, **kwargs) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds')) - def loss(self, - cls_scores, - bbox_preds, - iou_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - iou_preds (list[Tensor]): iou_preds for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): Specify which bounding - boxes can be ignored when are computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss gmm_assignment. 
- """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - ) - (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds, - pos_gt_index) = cls_reg_targets - cls_scores = levels_to_images(cls_scores) - cls_scores = [ - item.reshape(-1, self.cls_out_channels) for item in cls_scores - ] - bbox_preds = levels_to_images(bbox_preds) - bbox_preds = [item.reshape(-1, 4) for item in bbox_preds] - iou_preds = levels_to_images(iou_preds) - iou_preds = [item.reshape(-1, 1) for item in iou_preds] - pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list, - cls_scores, bbox_preds, labels, - labels_weight, bboxes_target, - bboxes_weight, pos_inds) - - with torch.no_grad(): - reassign_labels, reassign_label_weight, \ - reassign_bbox_weights, num_pos = multi_apply( - self.paa_reassign, - pos_losses_list, - labels, - labels_weight, - bboxes_weight, - pos_inds, - pos_gt_index, - anchor_list) - num_pos = sum(num_pos) - # convert all tensor list to a flatten tensor - cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1)) - bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1)) - iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1)) - labels = torch.cat(reassign_labels, 0).view(-1) - flatten_anchors = torch.cat( - [torch.cat(item, 0) for item in anchor_list]) - labels_weight = torch.cat(reassign_label_weight, 0).view(-1) - bboxes_target = torch.cat(bboxes_target, - 0).view(-1, bboxes_target[0].size(-1)) - - pos_inds_flatten = ((labels >= 0) - & - (labels < self.num_classes)).nonzero().reshape(-1) - - losses_cls = self.loss_cls( - cls_scores, - labels, - labels_weight, - avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0 - if num_pos: - pos_bbox_pred = self.bbox_coder.decode( - flatten_anchors[pos_inds_flatten], - bbox_preds[pos_inds_flatten]) - pos_bbox_target = bboxes_target[pos_inds_flatten] - iou_target = bbox_overlaps( - pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True) - losses_iou = self.loss_centerness( - iou_preds[pos_inds_flatten], - iou_target.unsqueeze(-1), - avg_factor=num_pos) - losses_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - iou_target.clamp(min=EPS), - avg_factor=iou_target.sum()) - else: - losses_iou = iou_preds.sum() * 0 - losses_bbox = bbox_preds.sum() * 0 - - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou) - - def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight, - bbox_target, bbox_weight, pos_inds): - """Calculate loss of all potential positive samples obtained from first - match process. - - Args: - anchors (list[Tensor]): Anchors of each scale. - cls_score (Tensor): Box scores of single image with shape - (num_anchors, num_classes) - bbox_pred (Tensor): Box energies / deltas of single image - with shape (num_anchors, 4) - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). 
- bbox_target (dict): Regression target of each anchor with - shape (num_anchors, 4). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - - Returns: - Tensor: Losses of all positive samples in single image. - """ - if not len(pos_inds): - return cls_score.new([]), - anchors_all_level = torch.cat(anchors, 0) - pos_scores = cls_score[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_label = label[pos_inds] - pos_label_weight = label_weight[pos_inds] - pos_bbox_target = bbox_target[pos_inds] - pos_bbox_weight = bbox_weight[pos_inds] - pos_anchors = anchors_all_level[pos_inds] - pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - - # to keep loss dimension - loss_cls = self.loss_cls( - pos_scores, - pos_label, - pos_label_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_bbox = self.loss_bbox( - pos_bbox_pred, - pos_bbox_target, - pos_bbox_weight, - avg_factor=self.loss_cls.loss_weight, - reduction_override='none') - - loss_cls = loss_cls.sum(-1) - pos_loss = loss_bbox + loss_cls - return pos_loss, - - def paa_reassign(self, pos_losses, label, label_weight, bbox_weight, - pos_inds, pos_gt_inds, anchors): - """Fit loss to GMM distribution and separate positive, ignore, negative - samples again with GMM model. - - Args: - pos_losses (Tensor): Losses of all positive samples in - single image. - label (Tensor): classification target of each anchor with - shape (num_anchors,) - label_weight (Tensor): Classification loss weight of each - anchor with shape (num_anchors). - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - pos_inds (Tensor): Index of all positive samples got from - first assign process. - pos_gt_inds (Tensor): Gt_index of all positive samples got - from first assign process. - anchors (list[Tensor]): Anchors of each scale. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - label (Tensor): classification target of each anchor after - paa assign, with shape (num_anchors,) - - label_weight (Tensor): Classification loss weight of each - anchor after paa assign, with shape (num_anchors). - - bbox_weight (Tensor): Bbox weight of each anchor with shape - (num_anchors, 4). - - num_pos (int): The number of positive samples after paa - assign. 
- """ - if not len(pos_inds): - return label, label_weight, bbox_weight, 0 - label = label.clone() - label_weight = label_weight.clone() - bbox_weight = bbox_weight.clone() - num_gt = pos_gt_inds.max() + 1 - num_level = len(anchors) - num_anchors_each_level = [item.size(0) for item in anchors] - num_anchors_each_level.insert(0, 0) - inds_level_interval = np.cumsum(num_anchors_each_level) - pos_level_mask = [] - for i in range(num_level): - mask = (pos_inds >= inds_level_interval[i]) & ( - pos_inds < inds_level_interval[i + 1]) - pos_level_mask.append(mask) - pos_inds_after_paa = [label.new_tensor([])] - ignore_inds_after_paa = [label.new_tensor([])] - for gt_ind in range(num_gt): - pos_inds_gmm = [] - pos_loss_gmm = [] - gt_mask = pos_gt_inds == gt_ind - for level in range(num_level): - level_mask = pos_level_mask[level] - level_gt_mask = level_mask & gt_mask - value, topk_inds = pos_losses[level_gt_mask].topk( - min(level_gt_mask.sum(), self.topk), largest=False) - pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds]) - pos_loss_gmm.append(value) - pos_inds_gmm = torch.cat(pos_inds_gmm) - pos_loss_gmm = torch.cat(pos_loss_gmm) - # fix gmm need at least two sample - if len(pos_inds_gmm) < 2: - continue - device = pos_inds_gmm.device - pos_loss_gmm, sort_inds = pos_loss_gmm.sort() - pos_inds_gmm = pos_inds_gmm[sort_inds] - pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy() - min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max() - means_init = np.array([min_loss, max_loss]).reshape(2, 1) - weights_init = np.array([0.5, 0.5]) - precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full - if self.covariance_type == 'spherical': - precisions_init = precisions_init.reshape(2) - elif self.covariance_type == 'diag': - precisions_init = precisions_init.reshape(2, 1) - elif self.covariance_type == 'tied': - precisions_init = np.array([[1.0]]) - if skm is None: - raise ImportError('Please run "pip install sklearn" ' - 'to install sklearn first.') - gmm = skm.GaussianMixture( - 2, - weights_init=weights_init, - means_init=means_init, - precisions_init=precisions_init, - covariance_type=self.covariance_type) - gmm.fit(pos_loss_gmm) - gmm_assignment = gmm.predict(pos_loss_gmm) - scores = gmm.score_samples(pos_loss_gmm) - gmm_assignment = torch.from_numpy(gmm_assignment).to(device) - scores = torch.from_numpy(scores).to(device) - - pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme( - gmm_assignment, scores, pos_inds_gmm) - pos_inds_after_paa.append(pos_inds_temp) - ignore_inds_after_paa.append(ignore_inds_temp) - - pos_inds_after_paa = torch.cat(pos_inds_after_paa) - ignore_inds_after_paa = torch.cat(ignore_inds_after_paa) - reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1) - reassign_ids = pos_inds[reassign_mask] - label[reassign_ids] = self.num_classes - label_weight[ignore_inds_after_paa] = 0 - bbox_weight[reassign_ids] = 0 - num_pos = len(pos_inds_after_paa) - return label, label_weight, bbox_weight, num_pos - - def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm): - """A general separation scheme for gmm model. - - It separates a GMM distribution of candidate samples into three - parts, 0 1 and uncertain areas, and you can implement other - separation schemes by rewriting this function. - - Args: - gmm_assignment (Tensor): The prediction of GMM which is of shape - (num_samples,). The 0/1 value indicates the distribution - that each sample comes from. - scores (Tensor): The probability of sample coming from the - fit GMM distribution. 
The tensor is of shape (num_samples,). - pos_inds_gmm (Tensor): All the indexes of samples which are used - to fit GMM model. The tensor is of shape (num_samples,) - - Returns: - tuple[Tensor]: The indices of positive and ignored samples. - - - pos_inds_temp (Tensor): Indices of positive samples. - - ignore_inds_temp (Tensor): Indices of ignore samples. - """ - # The implementation is (c) in Fig.3 in origin paper instead of (b). - # You can refer to issues such as - # https://github.com/kkhoot/PAA/issues/8 and - # https://github.com/kkhoot/PAA/issues/9. - fgs = gmm_assignment == 0 - pos_inds_temp = fgs.new_tensor([], dtype=torch.long) - ignore_inds_temp = fgs.new_tensor([], dtype=torch.long) - if fgs.nonzero().numel(): - _, pos_thr_ind = scores[fgs].topk(1) - pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1] - ignore_inds_temp = pos_inds_gmm.new_tensor([]) - return pos_inds_temp, ignore_inds_temp - - def get_targets( - self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - ): - """Get targets for PAA head. - - This method is almost the same as `AnchorHead.get_targets()`. We direct - return the results from _get_targets_single instead map it to levels - by images_to_levels function. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels (list[Tensor]): Labels of all anchors, each with - shape (num_anchors,). - - label_weights (list[Tensor]): Label weights of all anchor. - each with shape (num_anchors,). - - bbox_targets (list[Tensor]): BBox targets of all anchors. - each with shape (num_anchors, 4). - - bbox_weights (list[Tensor]): BBox weights of all anchors. - each with shape (num_anchors, 4). - - pos_inds (list[Tensor]): Contains all index of positive - sample in all anchor. - - gt_inds (list[Tensor]): Contains all gt_index of positive - sample in all anchor. 
- """ - - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - - (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds, - valid_neg_inds, sampling_result) = results - - # Due to valid flag of anchors, we have to calculate the real pos_inds - # in origin anchor set. - pos_inds = [] - for i, single_labels in enumerate(labels): - pos_mask = (0 <= single_labels) & ( - single_labels < self.num_classes) - pos_inds.append(pos_mask.nonzero().view(-1)) - - gt_inds = [item.pos_assigned_gt_inds for item in sampling_result] - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - gt_inds) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - This method is same as `AnchorHead._get_targets_single()`. - """ - assert unmap_outputs, 'We must map outputs back to the original' \ - 'set of anchors in PAAhead' - return super(ATSSHead, self)._get_targets_single( - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True) - - def _get_bboxes(self, - cls_scores, - bbox_preds, - iou_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - This method is almost same as `ATSSHead._get_bboxes()`. - We use sqrt(iou_preds * cls_scores) in NMS process instead of just - cls_scores. Besides, score voting is used when `` score_voting`` - is set to True. 
- """ - assert with_nms, 'PAA only supports "with_nms=True" now' - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - batch_size = cls_scores[0].shape[0] - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_iou_preds = [] - for cls_score, bbox_pred, iou_preds, anchors in zip( - cls_scores, bbox_preds, iou_preds, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size, - -1).sigmoid() - - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[1] > nms_pre: - max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1) - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - anchors = anchors[topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - iou_preds = iou_preds[batch_inds, topk_inds] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_iou_preds.append(iou_preds) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1) - batch_mlvl_nms_scores = (batch_mlvl_scores * - batch_mlvl_iou_preds[..., None]).sqrt() - - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_nms_scores): - det_bbox, det_label = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=None) - if self.with_score_voting and len(det_bbox) > 0: - det_bbox, det_label = self.score_voting( - det_bbox, det_label, mlvl_bboxes, mlvl_scores, - cfg.score_thr) - det_results.append(tuple([det_bbox, det_label])) - - return det_results - - def score_voting(self, det_bboxes, det_labels, mlvl_bboxes, - mlvl_nms_scores, score_thr): - """Implementation of score voting method works on each remaining boxes - after NMS procedure. - - Args: - det_bboxes (Tensor): Remaining boxes after NMS procedure, - with shape (k, 5), each dimension means - (x1, y1, x2, y2, score). - det_labels (Tensor): The label of remaining boxes, with shape - (k, 1),Labels are 0-based. - mlvl_bboxes (Tensor): All boxes before the NMS procedure, - with shape (num_anchors,4). - mlvl_nms_scores (Tensor): The scores of all boxes which is used - in the NMS procedure, with shape (num_anchors, num_class) - mlvl_iou_preds (Tensor): The predictions of IOU of all boxes - before the NMS procedure, with shape (num_anchors, 1) - score_thr (float): The score threshold of bboxes. - - Returns: - tuple: Usually returns a tuple containing voting results. - - - det_bboxes_voted (Tensor): Remaining boxes after - score voting procedure, with shape (k, 5), each - dimension means (x1, y1, x2, y2, score). 
- - det_labels_voted (Tensor): Label of remaining bboxes - after voting, with shape (num_anchors,). - """ - candidate_mask = mlvl_nms_scores > score_thr - candidate_mask_nonzeros = candidate_mask.nonzero() - candidate_inds = candidate_mask_nonzeros[:, 0] - candidate_labels = candidate_mask_nonzeros[:, 1] - candidate_bboxes = mlvl_bboxes[candidate_inds] - candidate_scores = mlvl_nms_scores[candidate_mask] - det_bboxes_voted = [] - det_labels_voted = [] - for cls in range(self.cls_out_channels): - candidate_cls_mask = candidate_labels == cls - if not candidate_cls_mask.any(): - continue - candidate_cls_scores = candidate_scores[candidate_cls_mask] - candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask] - det_cls_mask = det_labels == cls - det_cls_bboxes = det_bboxes[det_cls_mask].view( - -1, det_bboxes.size(-1)) - det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4], - candidate_cls_bboxes) - for det_ind in range(len(det_cls_bboxes)): - single_det_ious = det_candidate_ious[det_ind] - pos_ious_mask = single_det_ious > 0.01 - pos_ious = single_det_ious[pos_ious_mask] - pos_bboxes = candidate_cls_bboxes[pos_ious_mask] - pos_scores = candidate_cls_scores[pos_ious_mask] - pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) * - pos_scores)[:, None] - voted_box = torch.sum( - pis * pos_bboxes, dim=0) / torch.sum( - pis, dim=0) - voted_score = det_cls_bboxes[det_ind][-1:][None, :] - det_bboxes_voted.append( - torch.cat((voted_box[None, :], voted_score), dim=1)) - det_labels_voted.append(cls) - - det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0) - det_labels_voted = det_labels.new_tensor(det_labels_voted) - return det_bboxes_voted, det_labels_voted diff --git a/spaces/abidlabs/min-dalle-later/app.py b/spaces/abidlabs/min-dalle-later/app.py deleted file mode 100644 index 61d77344d0716fe4cb3702fffa3a0fb137e381ad..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/min-dalle-later/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import gradio as gr - -with gr.Blocks() as demo: - gr.Gallery(["examples/dali-walle.jpg"]) - -demo.launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/platforms/base.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/platforms/base.py deleted file mode 100644 index c9ecda906145e239737901809aa59db8d3e231c6..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/platforms/base.py +++ /dev/null @@ -1,76 +0,0 @@ -import abc - -import six - - -@six.add_metaclass(abc.ABCMeta) -class Platform(object): - """Base class for all OpenGL platforms. - - Parameters - ---------- - viewport_width : int - The width of the main viewport, in pixels. - viewport_height : int - The height of the main viewport, in pixels - """ - - def __init__(self, viewport_width, viewport_height): - self.viewport_width = viewport_width - self.viewport_height = viewport_height - - @property - def viewport_width(self): - """int : The width of the main viewport, in pixels. - """ - return self._viewport_width - - @viewport_width.setter - def viewport_width(self, value): - self._viewport_width = value - - @property - def viewport_height(self): - """int : The height of the main viewport, in pixels. - """ - return self._viewport_height - - @viewport_height.setter - def viewport_height(self, value): - self._viewport_height = value - - @abc.abstractmethod - def init_context(self): - """Create an OpenGL context. 
- """ - pass - - @abc.abstractmethod - def make_current(self): - """Make the OpenGL context current. - """ - pass - - @abc.abstractmethod - def make_uncurrent(self): - """Make the OpenGL context uncurrent. - """ - pass - - @abc.abstractmethod - def delete_context(self): - """Delete the OpenGL context. - """ - pass - - @abc.abstractmethod - def supports_framebuffers(self): - """Returns True if the method supports framebuffer rendering. - """ - pass - - def __del__(self): - try: - self.delete_context() - except Exception: - pass diff --git a/spaces/adorp/ControlNet-v1-1-duplicate/app_depth.py b/spaces/adorp/ControlNet-v1-1-duplicate/app_depth.py deleted file mode 100644 index 3bbca14b25f161a39d578d2dd5e9004f40698275..0000000000000000000000000000000000000000 --- a/spaces/adorp/ControlNet-v1-1-duplicate/app_depth.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['Midas', 'DPT', 'None'], - type='value', - value='DPT') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=384, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='depth', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='depth') - demo = create_demo(model.process_depth) - demo.queue().launch() diff --git a/spaces/akashdhiman79830/MYGenAIVoice/app.py b/spaces/akashdhiman79830/MYGenAIVoice/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/akashdhiman79830/MYGenAIVoice/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json 
-import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"]= 'SUCCESS' - generated_response["response"] = response.text - except requests.exceptions.RequestException as e: - generated_response["type"]= 'ERROR' - try: - response_text = json.loads(response.text) - if response_text['error_message']: - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception as e: - generated_response["response"] = response.text - except Exception as e: - generated_response["type"]= 'ERROR' - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - #From get_generated_audio, you will get events in a string format, from that we need to extract the url - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. 
Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/akashjeez/akashjeez/README.md b/spaces/akashjeez/akashjeez/README.md deleted file mode 100644 index 4d97634b7f443d808a58d95876c635f67868e3d0..0000000000000000000000000000000000000000 --- a/spaces/akashjeez/akashjeez/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Testapp -emoji: 🐢 -colorFrom: pink -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_coco_stuff_10k.py b/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_coco_stuff_10k.py deleted file mode 100644 index a1ec0375858ada8e4270b534fcd58106254c7fa9..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_coco_stuff_10k.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg - -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - {"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, - {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": "cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": 
[208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": "clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": "vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"id": 92, "name": "banner", "supercategory": "textile"}, - {"id": 93, "name": "blanket", "supercategory": "textile"}, - {"id": 94, "name": "branch", "supercategory": "plant"}, - {"id": 95, "name": "bridge", "supercategory": "building"}, - {"id": 96, "name": "building-other", "supercategory": "building"}, - {"id": 97, "name": "bush", "supercategory": "plant"}, - {"id": 98, "name": "cabinet", "supercategory": "furniture-stuff"}, - {"id": 99, "name": "cage", "supercategory": "structural"}, - {"id": 100, "name": "cardboard", "supercategory": "raw-material"}, - {"id": 101, "name": "carpet", "supercategory": "floor"}, - {"id": 102, "name": "ceiling-other", "supercategory": "ceiling"}, - {"id": 103, "name": "ceiling-tile", "supercategory": "ceiling"}, - {"id": 104, "name": "cloth", "supercategory": "textile"}, - {"id": 105, "name": "clothes", "supercategory": "textile"}, - {"id": 106, "name": "clouds", "supercategory": "sky"}, - {"id": 107, "name": "counter", "supercategory": "furniture-stuff"}, - {"id": 108, "name": "cupboard", "supercategory": "furniture-stuff"}, - {"id": 109, "name": "curtain", "supercategory": "textile"}, - {"id": 110, "name": "desk-stuff", "supercategory": "furniture-stuff"}, - {"id": 111, "name": "dirt", "supercategory": "ground"}, - {"id": 112, 
"name": "door-stuff", "supercategory": "furniture-stuff"}, - {"id": 113, "name": "fence", "supercategory": "structural"}, - {"id": 114, "name": "floor-marble", "supercategory": "floor"}, - {"id": 115, "name": "floor-other", "supercategory": "floor"}, - {"id": 116, "name": "floor-stone", "supercategory": "floor"}, - {"id": 117, "name": "floor-tile", "supercategory": "floor"}, - {"id": 118, "name": "floor-wood", "supercategory": "floor"}, - {"id": 119, "name": "flower", "supercategory": "plant"}, - {"id": 120, "name": "fog", "supercategory": "water"}, - {"id": 121, "name": "food-other", "supercategory": "food-stuff"}, - {"id": 122, "name": "fruit", "supercategory": "food-stuff"}, - {"id": 123, "name": "furniture-other", "supercategory": "furniture-stuff"}, - {"id": 124, "name": "grass", "supercategory": "plant"}, - {"id": 125, "name": "gravel", "supercategory": "ground"}, - {"id": 126, "name": "ground-other", "supercategory": "ground"}, - {"id": 127, "name": "hill", "supercategory": "solid"}, - {"id": 128, "name": "house", "supercategory": "building"}, - {"id": 129, "name": "leaves", "supercategory": "plant"}, - {"id": 130, "name": "light", "supercategory": "furniture-stuff"}, - {"id": 131, "name": "mat", "supercategory": "textile"}, - {"id": 132, "name": "metal", "supercategory": "raw-material"}, - {"id": 133, "name": "mirror-stuff", "supercategory": "furniture-stuff"}, - {"id": 134, "name": "moss", "supercategory": "plant"}, - {"id": 135, "name": "mountain", "supercategory": "solid"}, - {"id": 136, "name": "mud", "supercategory": "ground"}, - {"id": 137, "name": "napkin", "supercategory": "textile"}, - {"id": 138, "name": "net", "supercategory": "structural"}, - {"id": 139, "name": "paper", "supercategory": "raw-material"}, - {"id": 140, "name": "pavement", "supercategory": "ground"}, - {"id": 141, "name": "pillow", "supercategory": "textile"}, - {"id": 142, "name": "plant-other", "supercategory": "plant"}, - {"id": 143, "name": "plastic", "supercategory": "raw-material"}, - {"id": 144, "name": "platform", "supercategory": "ground"}, - {"id": 145, "name": "playingfield", "supercategory": "ground"}, - {"id": 146, "name": "railing", "supercategory": "structural"}, - {"id": 147, "name": "railroad", "supercategory": "ground"}, - {"id": 148, "name": "river", "supercategory": "water"}, - {"id": 149, "name": "road", "supercategory": "ground"}, - {"id": 150, "name": "rock", "supercategory": "solid"}, - {"id": 151, "name": "roof", "supercategory": "building"}, - {"id": 152, "name": "rug", "supercategory": "textile"}, - {"id": 153, "name": "salad", "supercategory": "food-stuff"}, - {"id": 154, "name": "sand", "supercategory": "ground"}, - {"id": 155, "name": "sea", "supercategory": "water"}, - {"id": 156, "name": "shelf", "supercategory": "furniture-stuff"}, - {"id": 157, "name": "sky-other", "supercategory": "sky"}, - {"id": 158, "name": "skyscraper", "supercategory": "building"}, - {"id": 159, "name": "snow", "supercategory": "ground"}, - {"id": 160, "name": "solid-other", "supercategory": "solid"}, - {"id": 161, "name": "stairs", "supercategory": "furniture-stuff"}, - {"id": 162, "name": "stone", "supercategory": "solid"}, - {"id": 163, "name": "straw", "supercategory": "plant"}, - {"id": 164, "name": "structural-other", "supercategory": "structural"}, - {"id": 165, "name": "table", "supercategory": "furniture-stuff"}, - {"id": 166, "name": "tent", "supercategory": "building"}, - {"id": 167, "name": "textile-other", "supercategory": "textile"}, - {"id": 168, "name": "towel", "supercategory": 
"textile"}, - {"id": 169, "name": "tree", "supercategory": "plant"}, - {"id": 170, "name": "vegetable", "supercategory": "food-stuff"}, - {"id": 171, "name": "wall-brick", "supercategory": "wall"}, - {"id": 172, "name": "wall-concrete", "supercategory": "wall"}, - {"id": 173, "name": "wall-other", "supercategory": "wall"}, - {"id": 174, "name": "wall-panel", "supercategory": "wall"}, - {"id": 175, "name": "wall-stone", "supercategory": "wall"}, - {"id": 176, "name": "wall-tile", "supercategory": "wall"}, - {"id": 177, "name": "wall-wood", "supercategory": "wall"}, - {"id": 178, "name": "water-other", "supercategory": "water"}, - {"id": 179, "name": "waterdrops", "supercategory": "water"}, - {"id": 180, "name": "window-blind", "supercategory": "window"}, - {"id": 181, "name": "window-other", "supercategory": "window"}, - {"id": 182, "name": "wood", "supercategory": "solid"}, -] - - -def _get_coco_stuff_meta(): - # Id 0 is reserved for ignore_label, we change ignore_label for 0 - # to 255 in our pre-processing. - stuff_ids = [k["id"] for k in COCO_CATEGORIES] - assert len(stuff_ids) == 171, len(stuff_ids) - - # For semantic segmentation, this mapping maps from contiguous stuff id - # (in [0, 91], used in models) to ids in the dataset (used for processing results) - stuff_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(stuff_ids)} - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - - ret = { - "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id, - "stuff_classes": stuff_classes, - } - return ret - - -def register_all_coco_stuff_10k(root): - root = os.path.join(root, "coco", "coco_stuff_10k") - meta = _get_coco_stuff_meta() - for name, image_dirname, sem_seg_dirname in [ - ("train", "images_detectron2/train", "annotations_detectron2/train"), - ("test", "images_detectron2/test", "annotations_detectron2/test"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = os.path.join(root, sem_seg_dirname) - name = f"coco_2017_{name}_stuff_10k_sem_seg" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - **meta, - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_coco_stuff_10k(_root) diff --git a/spaces/akhaliq/MobileStyleGAN/app.py b/spaces/akhaliq/MobileStyleGAN/app.py deleted file mode 100644 index 2d27c54cac689c5a88485d2e8d4878bb94e40d23..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/MobileStyleGAN/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import random_face -import gradio as gr - -def mobileface(truncate, alpha): - engine = random_face.get_engine() - face = engine.get_random_face(truncate=truncate, alpha=alpha) - return face[:,:,::-1] - -inputs = [ - gr.inputs.Checkbox(label="Truncate"), - gr.inputs.Slider(minimum=0, maximum=1, step=None, default=0.5, label="Alpha") - -] - -outputs = gr.outputs.Image(type='numpy', label="Output Image") - -title = "MobileStyleGAN" -description = "Gradio demo for MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis. To use it, simply click submit and optionally adjust alpha and truncation values. Read more at the links below." -article = "
    MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis | Github Repo
    " - - - -gr.Interface(mobileface, inputs, outputs, title=title, description=description, article=article).launch() \ No newline at end of file diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_scp.pl b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_scp.pl deleted file mode 100644 index dc798282f79dcaeed60de4eba5c587f91ee071a8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_scp.pl +++ /dev/null @@ -1,246 +0,0 @@ -#!/usr/bin/env perl - -# Copyright 2010-2011 Microsoft Corporation - -# See ../../COPYING for clarification regarding multiple authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED -# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE, -# MERCHANTABLITY OR NON-INFRINGEMENT. -# See the Apache 2 License for the specific language governing permissions and -# limitations under the License. - - -# This program splits up any kind of .scp or archive-type file. -# If there is no utt2spk option it will work on any text file and -# will split it up with an approximately equal number of lines in -# each but. -# With the --utt2spk option it will work on anything that has the -# utterance-id as the first entry on each line; the utt2spk file is -# of the form "utterance speaker" (on each line). -# It splits it into equal size chunks as far as it can. If you use the utt2spk -# option it will make sure these chunks coincide with speaker boundaries. In -# this case, if there are more chunks than speakers (and in some other -# circumstances), some of the resulting chunks will be empty and it will print -# an error message and exit with nonzero status. -# You will normally call this like: -# split_scp.pl scp scp.1 scp.2 scp.3 ... -# or -# split_scp.pl --utt2spk=utt2spk scp scp.1 scp.2 scp.3 ... -# Note that you can use this script to split the utt2spk file itself, -# e.g. split_scp.pl --utt2spk=utt2spk utt2spk utt2spk.1 utt2spk.2 ... - -# You can also call the scripts like: -# split_scp.pl -j 3 0 scp scp.0 -# [note: with this option, it assumes zero-based indexing of the split parts, -# i.e. the second number must be 0 <= n < num-jobs.] - -use warnings; - -$num_jobs = 0; -$job_id = 0; -$utt2spk_file = ""; -$one_based = 0; - -for ($x = 1; $x <= 3 && @ARGV > 0; $x++) { - if ($ARGV[0] eq "-j") { - shift @ARGV; - $num_jobs = shift @ARGV; - $job_id = shift @ARGV; - } - if ($ARGV[0] =~ /--utt2spk=(.+)/) { - $utt2spk_file=$1; - shift; - } - if ($ARGV[0] eq '--one-based') { - $one_based = 1; - shift @ARGV; - } -} - -if ($num_jobs != 0 && ($num_jobs < 0 || $job_id - $one_based < 0 || - $job_id - $one_based >= $num_jobs)) { - die "$0: Invalid job number/index values for '-j $num_jobs $job_id" . - ($one_based ? " --one-based" : "") . "'\n" -} - -$one_based - and $job_id--; - -if(($num_jobs == 0 && @ARGV < 2) || ($num_jobs > 0 && (@ARGV < 1 || @ARGV > 2))) { - die -"Usage: split_scp.pl [--utt2spk=] in.scp out1.scp out2.scp ... - or: split_scp.pl -j num-jobs job-id [--one-based] [--utt2spk=] in.scp [out.scp] - ... 
where 0 <= job-id < num-jobs, or 1 <= job-id <- num-jobs if --one-based.\n"; -} - -$error = 0; -$inscp = shift @ARGV; -if ($num_jobs == 0) { # without -j option - @OUTPUTS = @ARGV; -} else { - for ($j = 0; $j < $num_jobs; $j++) { - if ($j == $job_id) { - if (@ARGV > 0) { push @OUTPUTS, $ARGV[0]; } - else { push @OUTPUTS, "-"; } - } else { - push @OUTPUTS, "/dev/null"; - } - } -} - -if ($utt2spk_file ne "") { # We have the --utt2spk option... - open($u_fh, '<', $utt2spk_file) || die "$0: Error opening utt2spk file $utt2spk_file: $!\n"; - while(<$u_fh>) { - @A = split; - @A == 2 || die "$0: Bad line $_ in utt2spk file $utt2spk_file\n"; - ($u,$s) = @A; - $utt2spk{$u} = $s; - } - close $u_fh; - open($i_fh, '<', $inscp) || die "$0: Error opening input scp file $inscp: $!\n"; - @spkrs = (); - while(<$i_fh>) { - @A = split; - if(@A == 0) { die "$0: Empty or space-only line in scp file $inscp\n"; } - $u = $A[0]; - $s = $utt2spk{$u}; - defined $s || die "$0: No utterance $u in utt2spk file $utt2spk_file\n"; - if(!defined $spk_count{$s}) { - push @spkrs, $s; - $spk_count{$s} = 0; - $spk_data{$s} = []; # ref to new empty array. - } - $spk_count{$s}++; - push @{$spk_data{$s}}, $_; - } - # Now split as equally as possible .. - # First allocate spks to files by allocating an approximately - # equal number of speakers. - $numspks = @spkrs; # number of speakers. - $numscps = @OUTPUTS; # number of output files. - if ($numspks < $numscps) { - die "$0: Refusing to split data because number of speakers $numspks " . - "is less than the number of output .scp files $numscps\n"; - } - for($scpidx = 0; $scpidx < $numscps; $scpidx++) { - $scparray[$scpidx] = []; # [] is array reference. - } - for ($spkidx = 0; $spkidx < $numspks; $spkidx++) { - $scpidx = int(($spkidx*$numscps) / $numspks); - $spk = $spkrs[$spkidx]; - push @{$scparray[$scpidx]}, $spk; - $scpcount[$scpidx] += $spk_count{$spk}; - } - - # Now will try to reassign beginning + ending speakers - # to different scp's and see if it gets more balanced. - # Suppose objf we're minimizing is sum_i (num utts in scp[i] - average)^2. - # We can show that if considering changing just 2 scp's, we minimize - # this by minimizing the squared difference in sizes. This is - # equivalent to minimizing the absolute difference in sizes. This - # shows this method is bound to converge. - - $changed = 1; - while($changed) { - $changed = 0; - for($scpidx = 0; $scpidx < $numscps; $scpidx++) { - # First try to reassign ending spk of this scp. - if($scpidx < $numscps-1) { - $sz = @{$scparray[$scpidx]}; - if($sz > 0) { - $spk = $scparray[$scpidx]->[$sz-1]; - $count = $spk_count{$spk}; - $nutt1 = $scpcount[$scpidx]; - $nutt2 = $scpcount[$scpidx+1]; - if( abs( ($nutt2+$count) - ($nutt1-$count)) - < abs($nutt2 - $nutt1)) { # Would decrease - # size-diff by reassigning spk... - $scpcount[$scpidx+1] += $count; - $scpcount[$scpidx] -= $count; - pop @{$scparray[$scpidx]}; - unshift @{$scparray[$scpidx+1]}, $spk; - $changed = 1; - } - } - } - if($scpidx > 0 && @{$scparray[$scpidx]} > 0) { - $spk = $scparray[$scpidx]->[0]; - $count = $spk_count{$spk}; - $nutt1 = $scpcount[$scpidx-1]; - $nutt2 = $scpcount[$scpidx]; - if( abs( ($nutt2-$count) - ($nutt1+$count)) - < abs($nutt2 - $nutt1)) { # Would decrease - # size-diff by reassigning spk... - $scpcount[$scpidx-1] += $count; - $scpcount[$scpidx] -= $count; - shift @{$scparray[$scpidx]}; - push @{$scparray[$scpidx-1]}, $spk; - $changed = 1; - } - } - } - } - # Now print out the files... 
- for($scpidx = 0; $scpidx < $numscps; $scpidx++) { - $scpfile = $OUTPUTS[$scpidx]; - ($scpfile ne '-' ? open($f_fh, '>', $scpfile) - : open($f_fh, '>&', \*STDOUT)) || - die "$0: Could not open scp file $scpfile for writing: $!\n"; - $count = 0; - if(@{$scparray[$scpidx]} == 0) { - print STDERR "$0: error: split_scp.pl producing empty .scp file " . - "$scpfile (too many splits and too few speakers?)\n"; - $error = 1; - } else { - foreach $spk ( @{$scparray[$scpidx]} ) { - print $f_fh @{$spk_data{$spk}}; - $count += $spk_count{$spk}; - } - $count == $scpcount[$scpidx] || die "Count mismatch [code error]"; - } - close($f_fh); - } -} else { - # This block is the "normal" case where there is no --utt2spk - # option and we just break into equal size chunks. - - open($i_fh, '<', $inscp) || die "$0: Error opening input scp file $inscp: $!\n"; - - $numscps = @OUTPUTS; # size of array. - @F = (); - while(<$i_fh>) { - push @F, $_; - } - $numlines = @F; - if($numlines == 0) { - print STDERR "$0: error: empty input scp file $inscp\n"; - $error = 1; - } - $linesperscp = int( $numlines / $numscps); # the "whole part".. - $linesperscp >= 1 || die "$0: You are splitting into too many pieces! [reduce \$nj]\n"; - $remainder = $numlines - ($linesperscp * $numscps); - ($remainder >= 0 && $remainder < $numlines) || die "bad remainder $remainder"; - # [just doing int() rounds down]. - $n = 0; - for($scpidx = 0; $scpidx < @OUTPUTS; $scpidx++) { - $scpfile = $OUTPUTS[$scpidx]; - ($scpfile ne '-' ? open($o_fh, '>', $scpfile) - : open($o_fh, '>&', \*STDOUT)) || - die "$0: Could not open scp file $scpfile for writing: $!\n"; - for($k = 0; $k < $linesperscp + ($scpidx < $remainder ? 1 : 0); $k++) { - print $o_fh $F[$n++]; - } - close($o_fh) || die "$0: Error closing scp file $scpfile: $!\n"; - } - $n == $numlines || die "$n != $numlines [code error]"; -} - -exit ($error); diff --git a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/bias_act.cpp b/spaces/akhaliq/stylegan3_clip/torch_utils/ops/bias_act.cpp deleted file mode 100644 index 218bc8fdf74f8e7ff74f6676f49b231c6a57bfb4..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/torch_utils/ops/bias_act.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <torch/extension.h> -#include <ATen/cuda/CUDAContext.h> -#include <c10/cuda/CUDAGuard.h> -#include "bias_act.h" - -//------------------------------------------------------------------------ - -static bool has_same_layout(torch::Tensor x, torch::Tensor y) -{ - if (x.dim() != y.dim()) - return false; - for (int64_t i = 0; i < x.dim(); i++) - { - if (x.size(i) != y.size(i)) - return false; - if (x.size(i) >= 2 && x.stride(i) != y.stride(i)) - return false; - } - return true; -} - -//------------------------------------------------------------------------ - -static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp) -{ - // Validate arguments. 
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x"); - TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x"); - TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x"); - TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(b.dim() == 1, "b must have rank 1"); - TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds"); - TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements"); - TORCH_CHECK(grad >= 0, "grad must be non-negative"); - - // Validate layout. - TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense"); - TORCH_CHECK(b.is_contiguous(), "b must be contiguous"); - TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x"); - TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x"); - TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - torch::Tensor y = torch::empty_like(x); - TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x"); - - // Initialize CUDA kernel parameters. - bias_act_kernel_params p; - p.x = x.data_ptr(); - p.b = (b.numel()) ? b.data_ptr() : NULL; - p.xref = (xref.numel()) ? xref.data_ptr() : NULL; - p.yref = (yref.numel()) ? yref.data_ptr() : NULL; - p.dy = (dy.numel()) ? dy.data_ptr() : NULL; - p.y = y.data_ptr(); - p.grad = grad; - p.act = act; - p.alpha = alpha; - p.gain = gain; - p.clamp = clamp; - p.sizeX = (int)x.numel(); - p.sizeB = (int)b.numel(); - p.stepB = (b.numel()) ? (int)x.stride(dim) : 1; - - // Choose CUDA kernel. - void* kernel; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - kernel = choose_bias_act_kernel<scalar_t>(p); - }); - TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func"); - - // Launch CUDA kernel. 
- p.loopX = 4; - int blockSize = 4 * 32; - int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1; - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("bias_act", &bias_act); -} - -//------------------------------------------------------------------------ diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/json.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/json.py deleted file mode 100644 index 23583871e8f2a466abec0bce1397fb495b9c212d..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/json.py +++ /dev/null @@ -1,140 +0,0 @@ -from json import loads, dumps -from typing import Any, Callable, Optional, Union - -from .text import Text -from .highlighter import JSONHighlighter, NullHighlighter - - -class JSON: - """A renderable which pretty prints JSON. - - Args: - json (str): JSON encoded data. - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - """ - - def __init__( - self, - json: str, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = True, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> None: - data = loads(json) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - self.text = highlighter(json) - self.text.no_wrap = True - self.text.overflow = None - - @classmethod - def from_data( - cls, - data: Any, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = True, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> "JSON": - """Encodes a JSON object from arbitrary data. - - Args: - data (Any): An object that may be encoded in to JSON - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - default (Callable, optional): Optional callable which will be called for objects that cannot be serialized. Defaults to None. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. 
Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - - Returns: - JSON: New JSON object from the given data. - """ - json_instance: "JSON" = cls.__new__(cls) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - json_instance.text = highlighter(json) - json_instance.text.no_wrap = True - json_instance.text.overflow = None - return json_instance - - def __rich__(self) -> Text: - return self.text - - -if __name__ == "__main__": - - import argparse - import sys - - parser = argparse.ArgumentParser(description="Pretty print json") - parser.add_argument( - "path", - metavar="PATH", - help="path to file, or - for stdin", - ) - parser.add_argument( - "-i", - "--indent", - metavar="SPACES", - type=int, - help="Number of spaces in an indent", - default=2, - ) - args = parser.parse_args() - - from pip._vendor.rich.console import Console - - console = Console() - error_console = Console(stderr=True) - - try: - if args.path == "-": - json_data = sys.stdin.read() - else: - with open(args.path, "rt") as json_file: - json_data = json_file.read() - except Exception as error: - error_console.print(f"Unable to read {args.path!r}; {error}") - sys.exit(-1) - - console.print(JSON(json_data, indent=args.indent), soft_wrap=True) diff --git a/spaces/allknowingroger/Image-Models-Test103/app.py b/spaces/allknowingroger/Image-Models-Test103/app.py deleted file mode 100644 index 9ac304d1c5769b6e3b47fa33b8be578be2595259..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test103/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "LinoyTsaban/lora-xl-graffiti-0.0001-5e-05-1000-1-None", - "Muhammadreza/mann-e-artistic-1-revised", - "digiplay/asyncsMIX_v2", - "digiplay/PerfectWorld_v4", - "digiplay/fantexi_v0.9", - "LinoyTsaban/lora-xl-sneaker-0.0001-5e-06-500-1-None", - "bongo2112/sdxl-db-richtilebati", - "sunyijia97/lora-trained-xl-colab-face-v1", - "livingbox/incremental-test-03", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): 
- print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test2/README.md b/spaces/allknowingroger/Image-Models-Test2/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/text-generation-webui-space-1/api-example.py b/spaces/allknowingroger/text-generation-webui-space-1/api-example.py deleted file mode 100644 index 0306b7ab8a3fa3d6f57d8474ad74d67f13557b6d..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/text-generation-webui-space-1/api-example.py +++ /dev/null @@ -1,59 +0,0 @@ -''' - -This is an example on how to use the API for oobabooga/text-generation-webui. - -Make sure to start the web UI with the following flags: - -python server.py --model MODEL --listen --no-stream - -Optionally, you can also add the --share flag to generate a public gradio URL, -allowing you to use the API remotely. 
- -''' -import requests - -# Server address -server = "127.0.0.1" - -# Generation parameters -# Reference: https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig -params = { - 'max_new_tokens': 200, - 'do_sample': True, - 'temperature': 0.5, - 'top_p': 0.9, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, -} - -# Input prompt -prompt = "What I would like to say is the following: " - -response = requests.post(f"http://{server}:7860/run/textgen", json={ - "data": [ - prompt, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - ] -}).json() - -reply = response["data"][0] -print(reply) diff --git a/spaces/amankishore/sjc/sd1/ldm/models/diffusion/plms.py b/spaces/amankishore/sjc/sd1/ldm/models/diffusion/plms.py deleted file mode 100644 index 78eeb1003aa45d27bdbfc6b4a1d7ccbff57cd2e3..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/sd1/ldm/models/diffusion/plms.py +++ /dev/null @@ -1,236 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like - - -class PLMSSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - if ddim_eta != 0: - raise ValueError('ddim_eta must be 0 for PLMS') - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for PLMS sampling is {size}') - - samples, intermediates = self.plms_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def plms_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = list(reversed(range(0,timesteps))) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running PLMS Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='PLMS Sampler', 
total=total_steps) - old_eps = [] - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - old_eps=old_eps, t_next=ts_next) - img, pred_x0, e_t = outs - old_eps.append(e_t) - if len(old_eps) >= 4: - old_eps.pop(0) - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None): - b, *_, device = *x.shape, x.device - - def get_model_output(x, t): - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - return e_t - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - - def get_x_prev_and_pred_x0(e_t, index): - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - e_t = get_model_output(x, t) - if len(old_eps) == 0: - # Pseudo Improved Euler (2nd order) - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index) - e_t_next = get_model_output(x_prev, t_next) - e_t_prime = (e_t + e_t_next) / 2 - elif len(old_eps) == 1: - # 2nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (3 * e_t - old_eps[-1]) / 2 - elif len(old_eps) == 2: - # 3nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12 - elif len(old_eps) >= 3: - # 4nd order Pseudo Linear Multistep (Adams-Bashforth) - e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24 - - x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index) - - return x_prev, pred_x0, e_t diff --git a/spaces/amasad/sahil2801-replit-code-instruct-glaive/README.md b/spaces/amasad/sahil2801-replit-code-instruct-glaive/README.md deleted file mode 100644 index a3bc76f4051ee330bf464259d4554482d852f275..0000000000000000000000000000000000000000 --- a/spaces/amasad/sahil2801-replit-code-instruct-glaive/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Replit V1 CodeInstruct 3B -emoji: 🏢 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -duplicated_from: teknium/sahil2801-replit-code-instruct-glaive ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/amsterdamNLP/CLIP-attention-rollout/CLIP_explainability/utils.py b/spaces/amsterdamNLP/CLIP-attention-rollout/CLIP_explainability/utils.py deleted file mode 100644 index a703c3ed2b88eefdd9950e13b7237c20f6ee235a..0000000000000000000000000000000000000000 --- a/spaces/amsterdamNLP/CLIP-attention-rollout/CLIP_explainability/utils.py +++ /dev/null @@ -1,136 +0,0 @@ -import torch -import CLIP.clip as clip -from PIL import Image -import numpy as np -import cv2 -import matplotlib.pyplot as plt -from captum.attr import visualization -import os - - -from CLIP.clip.simple_tokenizer import SimpleTokenizer as _Tokenizer -_tokenizer = _Tokenizer() - -#@title Control context expansion (number of attention layers to consider) -#@title Number of layers for image Transformer -#start_layer = 11#@param {type:"number"} - -#@title Number of layers for text Transformer -start_layer_text = 11#@param {type:"number"} - - -def interpret(image, texts, model, device, start_layer): - batch_size = texts.shape[0] - images = image.repeat(batch_size, 1, 1, 1) - logits_per_image, logits_per_text = model(images, texts) - probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy() - index = [i for i in range(batch_size)] - one_hot = np.zeros((logits_per_image.shape[0], logits_per_image.shape[1]), dtype=np.float32) - one_hot[torch.arange(logits_per_image.shape[0]), index] = 1 - one_hot = torch.from_numpy(one_hot).requires_grad_(True) - one_hot = torch.sum(one_hot.to(device) * logits_per_image) - model.zero_grad() - - image_attn_blocks = list(dict(model.visual.transformer.resblocks.named_children()).values()) - num_tokens = image_attn_blocks[0].attn_probs.shape[-1] - R = torch.eye(num_tokens, num_tokens, dtype=image_attn_blocks[0].attn_probs.dtype).to(device) - R = R.unsqueeze(0).expand(batch_size, num_tokens, num_tokens) - 
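# Gradient-weighted attention rollout: for each block from start_layer onward, weight the attention map by the gradient of the selected image-text score, keep only positive contributions, average over heads, and accumulate into R. -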
for i, blk in enumerate(image_attn_blocks): - if i < start_layer: - continue - grad = torch.autograd.grad(one_hot, [blk.attn_probs], retain_graph=True)[0].detach() - cam = blk.attn_probs.detach() - cam = cam.reshape(-1, cam.shape[-1], cam.shape[-1]) - grad = grad.reshape(-1, grad.shape[-1], grad.shape[-1]) - cam = grad * cam - cam = cam.reshape(batch_size, -1, cam.shape[-1], cam.shape[-1]) - cam = cam.clamp(min=0).mean(dim=1) - R = R + torch.bmm(cam, R) - image_relevance = R[:, 0, 1:] - - - text_attn_blocks = list(dict(model.transformer.resblocks.named_children()).values()) - num_tokens = text_attn_blocks[0].attn_probs.shape[-1] - R_text = torch.eye(num_tokens, num_tokens, dtype=text_attn_blocks[0].attn_probs.dtype).to(device) - R_text = R_text.unsqueeze(0).expand(batch_size, num_tokens, num_tokens) - for i, blk in enumerate(text_attn_blocks): - if i < start_layer_text: - continue - grad = torch.autograd.grad(one_hot, [blk.attn_probs], retain_graph=True)[0].detach() - cam = blk.attn_probs.detach() - cam = cam.reshape(-1, cam.shape[-1], cam.shape[-1]) - grad = grad.reshape(-1, grad.shape[-1], grad.shape[-1]) - cam = grad * cam - cam = cam.reshape(batch_size, -1, cam.shape[-1], cam.shape[-1]) - cam = cam.clamp(min=0).mean(dim=1) - R_text = R_text + torch.bmm(cam, R_text) - text_relevance = R_text - - return text_relevance, image_relevance - - -def show_image_relevance(image_relevance, image, orig_image, device): - # create heatmap from mask on image - def show_cam_on_image(img, mask): - heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET) - heatmap = np.float32(heatmap) / 255 - cam = heatmap + np.float32(img) - cam = cam / np.max(cam) - return cam - - rel_shp = np.sqrt(image_relevance.shape[0]).astype(int) - img_size = image.shape[-1] - image_relevance = image_relevance.reshape(1, 1, rel_shp, rel_shp) - image_relevance = torch.nn.functional.interpolate(image_relevance, size=img_size, mode='bilinear') - image_relevance = image_relevance.reshape(img_size, img_size).data.cpu().numpy() - image_relevance = (image_relevance - image_relevance.min()) / (image_relevance.max() - image_relevance.min()) - image = image[0].permute(1, 2, 0).data.cpu().numpy() - image = (image - image.min()) / (image.max() - image.min()) - vis = show_cam_on_image(image, image_relevance) - vis = np.uint8(255 * vis) - vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR) - - return image_relevance - - -def show_heatmap_on_text(text, text_encoding, R_text): - CLS_idx = text_encoding.argmax(dim=-1) - R_text = R_text[CLS_idx, 1:CLS_idx] - text_scores = R_text / R_text.sum() - text_scores = text_scores.flatten() - # print(text_scores) - text_tokens=_tokenizer.encode(text) - text_tokens_decoded=[_tokenizer.decode([a]) for a in text_tokens] - vis_data_records = [visualization.VisualizationDataRecord(text_scores,0,0,0,0,0,text_tokens_decoded,1)] - - return text_scores, text_tokens_decoded - - -def show_img_heatmap(image_relevance, image, orig_image, device): - return show_image_relevance(image_relevance, image, orig_image, device) - - -def show_txt_heatmap(text, text_encoding, R_text): - return show_heatmap_on_text(text, text_encoding, R_text) - - -def load_dataset(): - dataset_path = os.path.join('..', '..', 'dummy-data', '71226_segments' + '.pt') - device = "cuda" if torch.cuda.is_available() else "cpu" - - data = torch.load(dataset_path, map_location=device) - - return data - - -class color: - PURPLE = '\033[95m' - CYAN = '\033[96m' - DARKCYAN = '\033[36m' - BLUE = '\033[94m' - GREEN = '\033[92m' - YELLOW = 
'\033[93m' - RED = '\033[91m' - BOLD = '\033[1m' - UNDERLINE = '\033[4m' - END = '\033[0m' diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/helpers/phind.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/helpers/phind.py deleted file mode 100644 index 70525d51d849c43bd1cf29c7f9b18f22bff1e982..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/helpers/phind.py +++ /dev/null @@ -1,69 +0,0 @@ -import sys -import json -import datetime -import urllib.parse - -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate' - -json_data = json.dumps({ - 'question': prompt, - 'options': { - 'skill': skill, - 'date': datetime.datetime.now().strftime('%d/%m/%Y'), - 'language': 'en', - 'detailed': True, - 'creative': True, - 'customLinks': []}}, separators=(',', ':')) - -headers = { - 'Content-Type': 'application/json', - 'Pragma': 'no-cache', - 'Accept': '*/*', - 'Sec-Fetch-Site': 'same-origin', - 'Accept-Language': 'en-GB,en;q=0.9', - 'Cache-Control': 'no-cache', - 'Sec-Fetch-Mode': 'cors', - 'Content-Length': str(len(json_data)), - 'Origin': 'https://www.phind.com', - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15', - 'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox', - 'Connection': 'keep-alive', - 'Host': 'www.phind.com', - 'Sec-Fetch-Dest': 'empty' -} - - -def output(chunk): - try: - if b'PHIND_METADATA' in chunk: - return - - if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n': - chunk = b'data: \n\r\n\r\n' - - chunk = chunk.decode() - - chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n') - chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n') - chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '') - - print(chunk, flush=True, end = '') - - except json.decoder.JSONDecodeError: - pass - -while True: - try: - response = requests.post('https://www.phind.com/api/infer/answer', - headers=headers, data=json_data, content_callback=output, timeout=999999, impersonate='safari15_5') - - exit(0) - - except Exception as e: - print('an error occured, retrying... 
|', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/aphenx/bingo/src/components/providers.tsx b/spaces/aphenx/bingo/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/util/decorators.py b/spaces/arbml/Ashaar/poetry_diacritizer/util/decorators.py deleted file mode 100644 index 4a1a46c8ae63dfb6d9cb99c0ef7321c26985f275..0000000000000000000000000000000000000000 --- a/spaces/arbml/Ashaar/poetry_diacritizer/util/decorators.py +++ /dev/null @@ -1,27 +0,0 @@ -import traceback -from time import time - - -def ignore_exception(f): - def apply_func(*args, **kwargs): - try: - result = f(*args, **kwargs) - return result - except Exception: - if False: - print(f"Catched exception in {f}:") - traceback.print_exc() - return None - - return apply_func - - -def time_it(f): - def apply_func(*args, **kwargs): - t_start = time() - result = f(*args, **kwargs) - t_end = time() - dur = round(t_end - t_start, ndigits=2) - return result, dur - - return apply_func diff --git a/spaces/arnepeine/monaspeech/app.py b/spaces/arnepeine/monaspeech/app.py deleted file mode 100644 index 8b6575bb112e1725826bd1e6c6befb2ba9ae12de..0000000000000000000000000000000000000000 --- a/spaces/arnepeine/monaspeech/app.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch - -import gradio as gr -import pytube as pt -from transformers import pipeline -from huggingface_hub import model_info - -MODEL_NAME = "arnepeine/mona_speech" -CHUNK_LENGTH_S = 30 - -print(f"Is CUDA available: {torch.cuda.is_available()}") -# True -print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") -# Tesla T4 - - -device = 0 if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=CHUNK_LENGTH_S, - device=device, -) - -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="de", task="transcribe") - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
    ' - "
    " - ) - return HTML_str - - -def yt_transcribe(yt_url): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - text = pipe("audio.mp3")["text"] - - return html_embed_str, text - - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", - layout="vertical", - theme="huggingface", - css="footer {visibility: hidden}", - title="🏥 Mona Speech: An fine-tuned ASR Model for hospital medical speech.", - description=( - "Transcribe long-form microphone or audio inputs containing medical terminology (in particular in the acute care, hospital domain) using your microphone or by dropping recorded files. Fine tuned for German language. " - ), - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL")], - outputs=["html", "text"], - css="footer {visibility: hidden}", - layout="horizontal", - theme="huggingface", - title="Transcribe Telemedicine", - description=( - "Transcribe long-form Telemedicine Sessions! " - ), - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe Telemedicine"]) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/models.py b/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/models.py deleted file mode 100644 index ee2dde32bdf72c25a4600e48efa73ffc0d4a3893..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/models.py +++ /dev/null @@ -1,261 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - - -def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, - stride=strd, padding=padding, bias=bias) - - -class ConvBlock(nn.Module): - def __init__(self, in_planes, out_planes): - super(ConvBlock, self).__init__() - self.bn1 = nn.BatchNorm2d(in_planes) - self.conv1 = conv3x3(in_planes, int(out_planes / 2)) - self.bn2 = nn.BatchNorm2d(int(out_planes / 2)) - self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4)) - self.bn3 = nn.BatchNorm2d(int(out_planes / 4)) - self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4)) - - if in_planes != out_planes: - self.downsample = nn.Sequential( - nn.BatchNorm2d(in_planes), - nn.ReLU(True), - nn.Conv2d(in_planes, out_planes, - kernel_size=1, stride=1, bias=False), - ) - else: - self.downsample = None - - def forward(self, x): - residual = x - - out1 = self.bn1(x) - out1 = F.relu(out1, True) - out1 = self.conv1(out1) - - out2 = self.bn2(out1) - out2 = F.relu(out2, True) - out2 = self.conv2(out2) - - out3 = self.bn3(out2) - out3 = F.relu(out3, True) - out3 = self.conv3(out3) - - out3 = torch.cat((out1, out2, out3), 1) - - if self.downsample is not None: - residual = self.downsample(residual) - - out3 += residual - - return out3 - - -class Bottleneck(nn.Module): - - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - 
self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class HourGlass(nn.Module): - def __init__(self, num_modules, depth, num_features): - super(HourGlass, self).__init__() - self.num_modules = num_modules - self.depth = depth - self.features = num_features - - self._generate_network(self.depth) - - def _generate_network(self, level): - self.add_module('b1_' + str(level), ConvBlock(self.features, self.features)) - - self.add_module('b2_' + str(level), ConvBlock(self.features, self.features)) - - if level > 1: - self._generate_network(level - 1) - else: - self.add_module('b2_plus_' + str(level), ConvBlock(self.features, self.features)) - - self.add_module('b3_' + str(level), ConvBlock(self.features, self.features)) - - def _forward(self, level, inp): - # Upper branch - up1 = inp - up1 = self._modules['b1_' + str(level)](up1) - - # Lower branch - low1 = F.avg_pool2d(inp, 2, stride=2) - low1 = self._modules['b2_' + str(level)](low1) - - if level > 1: - low2 = self._forward(level - 1, low1) - else: - low2 = low1 - low2 = self._modules['b2_plus_' + str(level)](low2) - - low3 = low2 - low3 = self._modules['b3_' + str(level)](low3) - - up2 = F.interpolate(low3, scale_factor=2, mode='nearest') - - return up1 + up2 - - def forward(self, x): - return self._forward(self.depth, x) - - -class FAN(nn.Module): - - def __init__(self, num_modules=1): - super(FAN, self).__init__() - self.num_modules = num_modules - - # Base part - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - self.bn1 = nn.BatchNorm2d(64) - self.conv2 = ConvBlock(64, 128) - self.conv3 = ConvBlock(128, 128) - self.conv4 = ConvBlock(128, 256) - - # Stacking part - for hg_module in range(self.num_modules): - self.add_module('m' + str(hg_module), HourGlass(1, 4, 256)) - self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256)) - self.add_module('conv_last' + str(hg_module), - nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256)) - self.add_module('l' + str(hg_module), nn.Conv2d(256, - 68, kernel_size=1, stride=1, padding=0)) - - if hg_module < self.num_modules - 1: - self.add_module( - 'bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('al' + str(hg_module), nn.Conv2d(68, - 256, kernel_size=1, stride=1, padding=0)) - - def forward(self, x): - x = F.relu(self.bn1(self.conv1(x)), True) - x = F.avg_pool2d(self.conv2(x), 2, stride=2) - x = self.conv3(x) - x = self.conv4(x) - - previous = x - - outputs = [] - for i in range(self.num_modules): - hg = self._modules['m' + str(i)](previous) - - ll = hg - ll = self._modules['top_m_' + str(i)](ll) - - ll = F.relu(self._modules['bn_end' + str(i)] - (self._modules['conv_last' + str(i)](ll)), True) - - # Predict heatmaps - tmp_out = self._modules['l' + str(i)](ll) - outputs.append(tmp_out) - - if i < 
self.num_modules - 1: - ll = self._modules['bl' + str(i)](ll) - tmp_out_ = self._modules['al' + str(i)](tmp_out) - previous = previous + ll + tmp_out_ - - return outputs - - -class ResNetDepth(nn.Module): - - def __init__(self, block=Bottleneck, layers=[3, 8, 36, 3], num_classes=68): - self.inplanes = 64 - super(ResNetDepth, self).__init__() - self.conv1 = nn.Conv2d(3 + 68, 64, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - self.avgpool = nn.AvgPool2d(7) - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - - return x diff --git a/spaces/asgaardlab/CLIPxGamePhysics/SimSearch.py b/spaces/asgaardlab/CLIPxGamePhysics/SimSearch.py deleted file mode 100644 index 4621d2b76091a79d04ac516f16743b771f392842..0000000000000000000000000000000000000000 --- a/spaces/asgaardlab/CLIPxGamePhysics/SimSearch.py +++ /dev/null @@ -1,46 +0,0 @@ -import faiss -import numpy as np - -class FaissNeighbors: - def __init__(self): - self.index = None - self.y = None - - def fit(self, X, y): - self.index = faiss.IndexFlatL2(X.shape[1]) - self.index.add(X.astype(np.float32)) - self.y = y - - def get_distances_and_indices(self, X, top_K=1000): - distances, indices = self.index.search(X.astype(np.float32), k=top_K) - return np.copy(distances), np.copy(indices), np.copy(self.y[indices]) - - def get_nearest_labels(self, X, top_K=1000): - distances, indices = self.index.search(X.astype(np.float32), k=top_K) - return np.copy(self.y[indices]) - - -class FaissCosineNeighbors: - def __init__(self): - self.cindex = None - self.y = None - - def fit(self, X, y): - self.cindex = faiss.index_factory(X.shape[1], "Flat", faiss.METRIC_INNER_PRODUCT) - X = np.copy(X) - X = X.astype(np.float32) - faiss.normalize_L2(X) - self.cindex.add(X) - self.y = y - - def get_distances_and_indices(self, Q, topK): - Q = np.copy(Q) - faiss.normalize_L2(Q) - distances, indices = self.cindex.search(Q.astype(np.float32), k=topK) - return np.copy(distances), np.copy(indices), np.copy(self.y[indices]) - - def get_nearest_labels(self, Q, topK=1000): - Q = np.copy(Q) - faiss.normalize_L2(Q) - distances, indices = 
self.cindex.search(Q.astype(np.float32), k=topK) - return np.copy(self.y[indices]) \ No newline at end of file diff --git a/spaces/asyafiqe/pdfGPT-chat/api.py b/spaces/asyafiqe/pdfGPT-chat/api.py deleted file mode 100644 index 58977a55878a2fb77333576ff312e3f13b43c2b1..0000000000000000000000000000000000000000 --- a/spaces/asyafiqe/pdfGPT-chat/api.py +++ /dev/null @@ -1,336 +0,0 @@ -import gc -import os -import re -import shutil -import urllib.request -from pathlib import Path -from tempfile import NamedTemporaryFile - -import fitz -import numpy as np -import openai -import torch -import torch.nn.functional as F -from fastapi import UploadFile -from lcserve import serving -from optimum.bettertransformer import BetterTransformer -from sklearn import svm -from sklearn.cluster import KMeans -from sklearn.metrics import pairwise_distances_argmin_min -from torch import Tensor -from transformers import AutoModel, AutoTokenizer - -recommender = None - - -def download_pdf(url, output_path): - urllib.request.urlretrieve(url, output_path) - - -def preprocess(text): - text = text.replace("-\n", "") - text = text.replace("\n", " ") - text = re.sub("\s+", " ", text) - return text - - -def get_margin(pdf): - page = pdf[0] - page_size = page.mediabox - margin_hor = page.mediabox.width * 0.05 - margin_ver = page.mediabox.height * 0.05 - margin_size = page_size + (margin_hor, margin_ver, -margin_hor, -margin_ver) - return margin_size - - -def pdf_to_text(path, start_page=1, end_page=None): - doc = fitz.open(path) - total_pages = doc.page_count - - if end_page is None: - end_page = total_pages - - text_list = [] - margin_size = get_margin(doc) - for i in range(start_page - 1, end_page): - page = doc[i] - page.set_cropbox(margin_size) - text = page.get_text("text") - text = preprocess(text) - text_list.append(text) - - doc.close() - return text_list - - -def text_to_chunks(texts, word_length=150, start_page=1): - text_toks = [t.split(" ") for t in texts] - page_nums = [] - chunks = [] - - for idx, words in enumerate(text_toks): - for i in range(0, len(words), word_length): - chunk = words[i : i + word_length] - if ( - (i + word_length) > len(words) - and (len(chunk) < word_length) - and (len(text_toks) != (idx + 1)) - ): - text_toks[idx + 1] = chunk + text_toks[idx + 1] - continue - chunk = " ".join(chunk).strip() - chunk = f"[Page no. 
{idx+start_page}]" + " " + '"' + chunk + '"' - chunks.append(chunk) - return chunks - - -class SemanticSearch: - def __init__(self, embedding_model): - self.tokenizer = AutoTokenizer.from_pretrained(f"intfloat/{embedding_model}") - self.model = AutoModel.from_pretrained( - f"intfloat/{embedding_model}", - # cache_dir =, - ) - self.model = BetterTransformer.transform(self.model, keep_original_model=True) - - # set device - self.device = "cuda" if torch.cuda.is_available() else "cpu" - self.model = self.model.to(self.device) - self.fitted = False - - def fit(self, data, batch_size=32, n_neighbors=5): - self.data = data - self.embeddings = self.get_text_embedding(self.data, batch_size=batch_size) - self.fitted = True - - def __call__(self, text, return_data=True): - self.inp_emb = self.get_text_embedding([text], prefix="query") - self.matches = self.run_svm(self.inp_emb, self.embeddings) - - if return_data: - # return 5 first match, first index is query, so it has to be skipped - return [self.data[i - 1] for i in self.matches[1:6]] - - else: - return self.matches - - def average_pool( - self, last_hidden_states: Tensor, attention_mask: Tensor - ) -> Tensor: - self.last_hidden = last_hidden_states.masked_fill( - ~attention_mask[..., None].bool(), 0.0 - ) - return self.last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] - - def get_text_embedding(self, texts, prefix="passage", batch_size=32): - # Tokenize the input texts - texts = [f"{prefix}: {text}" for text in texts] - batch_dict = self.tokenizer( - texts, max_length=512, padding=True, truncation=True, return_tensors="pt" - ).to(self.device) - - with torch.no_grad(): - outputs = self.model(**batch_dict) - - embeddings = self.average_pool( - outputs.last_hidden_state, batch_dict["attention_mask"] - ) - - # Normalize embeddings - embeddings = F.normalize(embeddings, p=2, dim=1) - - # Convert pytorch tensor to numpy array (no grad) - if self.device == "cuda": - embeddings = embeddings.detach().cpu().clone().numpy() - else: - embeddings = embeddings.detach().numpy() - return embeddings - - def run_svm(self, query_emb, passage_emb): - joined_emb = np.concatenate((query_emb, passage_emb)) - - # create var for SVM label - y = np.zeros(joined_emb.shape[0]) - # mark query as a positive example - y[0] = 1 - - # declare SVM - clf = svm.LinearSVC( - class_weight="balanced", verbose=False, max_iter=10000, tol=1e-6, C=0.1 - ) - # train (Exemplar) SVM - clf.fit(joined_emb, y) - - # infer on original data - similarities = clf.decision_function(joined_emb) - sorted_ix = np.argsort(-similarities) - return sorted_ix - - def summarize(self): - n_clusters = int(np.ceil(len(self.embeddings)**0.5)) - # max cluster 5 (reserve token) - n_clusters = n_clusters if n_clusters <= 5 else 5 - kmeans = KMeans(n_clusters=n_clusters, random_state=23) - kmeans = kmeans.fit(self.embeddings) - - avg = [] - closest = [] - for j in range(n_clusters): - # find first chunk index of every cluster - idx = np.where(kmeans.labels_ == j)[0] - avg.append(np.mean(idx)) - # find chunk that is closest to the centroid - closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, - self.embeddings) - ordering = sorted(range(n_clusters), key=lambda k: avg[k]) - # concat representative chunks - summary = [self.data[i] for i in [closest[idx] for idx in ordering]] - return summary - - -def clear_cache(): - global recommender - if "recommender" in globals(): - del recommender - gc.collect() - if torch.cuda.is_available(): - return torch.cuda.empty_cache() - - -def 
load_recommender(path, embedding_model, rebuild_embedding, start_page=1): - global recommender - if rebuild_embedding: - clear_cache() - recommender = None - if recommender is None: - recommender = SemanticSearch(embedding_model) - if recommender.fitted: - return "Corpus Loaded." - else: - texts = pdf_to_text(path, start_page=start_page) - chunks = text_to_chunks(texts, start_page=start_page) - recommender.fit(chunks) - return "Corpus Loaded." - - -def generate_text(openai_key, prompt, model="gpt-3.5-turbo"): - openai.api_key = openai_key - completions = openai.ChatCompletion.create( - model=model, - messages=[{"role": "user", "content": prompt}], - max_tokens=512, - n=1, - stop=None, - temperature=0.7, - ) - message = f"{prompt}###{completions.choices[0].message.content}###{completions.usage.total_tokens}###{completions.model}" - return message - -def generate_answer(question, gpt_model, openai_key): - topn_chunks = recommender(question) - prompt = "" - prompt += "search results:\n\n" - for c in topn_chunks: - prompt += c + "\n\n" - - prompt += ( - "Instructions: Compose a comprehensive reply to the query using the search results given. " - "Cite each reference using [ Page Number] notation (every result has this number at the beginning). " - "Citation should be done at the end of each sentence. If the search results mention multiple subjects " - "with the same name, create separate answers for each. Only include information found in the results and " - "don't add any additional information. Make sure the answer is correct and don't output false content. " - "If the text does not relate to the query, simply state 'Text Not Found in PDF'. Ignore outlier " - "search results which has nothing to do with the question. Only answer what is asked. The " - "answer should be short and concise. Answer step-by-step.\n\n" - ) - - prompt += f"Query: {question}" - answer = generate_text(openai_key, prompt, gpt_model) - return answer - -def generate_summary(gpt_model, openai_key): - topn_chunks = recommender.summarize() - prompt = "" - prompt += ( - "Summarize the highlights of the search results and output a summary in bulletpoints. " - "Do not write anything before the bulletpoints. " - "Cite each reference using [Page no.] notation (every result has this number at the beginning). " - "Citation should be done at the end of each sentence. " - "Give conclusion in the end. " - "Write your response in the language of the search results. " - "Search results:\n\n" - ) - for c in topn_chunks: - prompt += c + "\n\n" - summary = generate_text(openai_key, prompt, gpt_model) - return summary - - -def load_openai_key() -> str: - key = os.environ.get("OPENAI_API_KEY") - if key is None: - raise ValueError( - "[ERROR]: Please pass your OPENAI_API_KEY. 
Get your key here : https://platform.openai.com/account/api-keys" - ) - return key - - -# %% -@serving -def ask_url( - url: str, - question: str, - rebuild_embedding: bool, - embedding_model: str, - gpt_model: str, -) -> str: - if rebuild_embedding: - load_url(url, embedding_model, rebuild_embedding) - openai_key = load_openai_key() - return generate_answer(question, gpt_model, openai_key) - - -@serving -async def ask_file( - file: UploadFile, - question: str, - rebuild_embedding: bool, - embedding_model: str, - gpt_model: str, -) -> str: - if rebuild_embedding: - load_file(file, embedding_model, rebuild_embedding) - openai_key = load_openai_key() - return generate_answer(question, gpt_model, openai_key) - - -@serving -def load_url(url: str, - embedding_model: str, - rebuild_embedding: bool, - gpt_model: str - ) -> str: - download_pdf(url, "corpus.pdf") - notification = load_recommender("corpus.pdf", embedding_model, rebuild_embedding) - openai_key = load_openai_key() - summary = generate_summary(gpt_model, openai_key) - response = f"{notification}###{summary}" - return response - - -@serving -async def load_file( - file: UploadFile, - embedding_model: str, - rebuild_embedding: bool, - gpt_model: str -) -> str: - suffix = Path(file.filename).suffix - with NamedTemporaryFile(delete=False, suffix=suffix) as tmp: - shutil.copyfileobj(file.file, tmp) - tmp_path = Path(tmp.name) - notification = load_recommender(str(tmp_path), embedding_model, rebuild_embedding) - openai_key = load_openai_key() - summary = generate_summary(gpt_model, openai_key) - response = f"{notification}###{summary}" - return response diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/OBJLoader2.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/OBJLoader2.js deleted file mode 100644 index 7d3d21a5679bc495b24d5e90dcb35bf0b5858d29..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/OBJLoader2.js +++ /dev/null @@ -1,1449 +0,0 @@ -/** - * @author Kai Salmen / https://kaisalmen.de - * Development repository: https://github.com/kaisalmen/WWOBJLoader - */ - -'use strict'; - -if ( THREE.OBJLoader2 === undefined ) { THREE.OBJLoader2 = {} } - -if ( THREE.LoaderSupport === undefined ) console.error( '"THREE.LoaderSupport" is not available. "THREE.OBJLoader2" requires it. Please include "LoaderSupport.js" in your HTML.' ); - -/** - * Use this class to load OBJ data from files or to parse OBJ data from an arraybuffer - * @class - * - * @param {THREE.DefaultLoadingManager} [manager] The loadingManager for the loader to use. 
Default is {@link THREE.DefaultLoadingManager} - */ - -THREE.OBJLoader2 = function ( manager ) { - console.info( 'Using THREE.OBJLoader2 version: ' + THREE.OBJLoader2.OBJLOADER2_VERSION ); - - this.manager = THREE.LoaderSupport.Validator.verifyInput( manager, THREE.DefaultLoadingManager ); - this.logging = { - enabled: true, - debug: false - }; - - this.modelName = ''; - this.instanceNo = 0; - this.path; - this.resourcePath; - this.useIndices = false; - this.disregardNormals = false; - this.materialPerSmoothingGroup = false; - this.useOAsMesh = false; - this.loaderRootNode = new THREE.Group(); - - this.meshBuilder = new THREE.LoaderSupport.MeshBuilder(); - this.callbacks = new THREE.LoaderSupport.Callbacks(); - this.workerSupport = new THREE.LoaderSupport.WorkerSupport(); - this.terminateWorkerOnLoad = true; -}; - -THREE.OBJLoader2.OBJLOADER2_VERSION = '2.5.0'; - -THREE.OBJLoader2.prototype = { - - constructor: THREE.OBJLoader2, - - /** - * Enable or disable logging in general (except warn and error), plus enable or disable debug logging. - * - * @param {boolean} enabled True or false. - * @param {boolean} debug True or false. - */ - setLogging: function ( enabled, debug ) { - this.logging.enabled = enabled === true; - this.logging.debug = debug === true; - this.meshBuilder.setLogging( this.logging.enabled, this.logging.debug ); - }, - - /** - * Set the name of the model. - * - * @param {string} modelName - */ - setModelName: function ( modelName ) { - this.modelName = THREE.LoaderSupport.Validator.verifyInput( modelName, this.modelName ); - }, - - /** - * The URL of the base path. - * - * @param {string} path URL - */ - setPath: function ( path ) { - this.path = THREE.LoaderSupport.Validator.verifyInput( path, this.path ); - }, - - /** - * Allows to specify resourcePath for dependencies of specified resource. - * @param {string} resourcePath - */ - setResourcePath: function ( resourcePath ) { - this.resourcePath = THREE.LoaderSupport.Validator.verifyInput( resourcePath, this.resourcePath ); - }, - - /** - * Set the node where the loaded objects will be attached directly. - * - * @param {THREE.Object3D} streamMeshesTo Object already attached to scenegraph where new meshes will be attached to - */ - setStreamMeshesTo: function ( streamMeshesTo ) { - this.loaderRootNode = THREE.LoaderSupport.Validator.verifyInput( streamMeshesTo, this.loaderRootNode ); - }, - - /** - * Set materials loaded by MTLLoader or any other supplier of an Array of {@link THREE.Material}. - * - * @param {THREE.Material[]} materials Array of {@link THREE.Material} - */ - setMaterials: function ( materials ) { - this.meshBuilder.setMaterials( materials ); - }, - - /** - * Instructs loaders to create indexed {@link THREE.BufferGeometry}. - * - * @param {boolean} useIndices=false - */ - setUseIndices: function ( useIndices ) { - this.useIndices = useIndices === true; - }, - - /** - * Tells whether normals should be completely disregarded and regenerated. - * - * @param {boolean} disregardNormals=false - */ - setDisregardNormals: function ( disregardNormals ) { - this.disregardNormals = disregardNormals === true; - }, - - /** - * Tells whether a material shall be created per smoothing group. 
- * - * @param {boolean} materialPerSmoothingGroup=false - */ - setMaterialPerSmoothingGroup: function ( materialPerSmoothingGroup ) { - this.materialPerSmoothingGroup = materialPerSmoothingGroup === true; - }, - - /** - * Usually 'o' is meta-information and does not result in creation of new meshes, but mesh creation on occurrence of "o" can be enforced. - * - * @param {boolean} useOAsMesh=false - */ - setUseOAsMesh: function ( useOAsMesh ) { - this.useOAsMesh = useOAsMesh === true; - }, - - _setCallbacks: function ( callbacks ) { - if ( THREE.LoaderSupport.Validator.isValid( callbacks.onProgress ) ) this.callbacks.setCallbackOnProgress( callbacks.onProgress ); - if ( THREE.LoaderSupport.Validator.isValid( callbacks.onReportError ) ) this.callbacks.setCallbackOnReportError( callbacks.onReportError ); - if ( THREE.LoaderSupport.Validator.isValid( callbacks.onMeshAlter ) ) this.callbacks.setCallbackOnMeshAlter( callbacks.onMeshAlter ); - if ( THREE.LoaderSupport.Validator.isValid( callbacks.onLoad ) ) this.callbacks.setCallbackOnLoad( callbacks.onLoad ); - if ( THREE.LoaderSupport.Validator.isValid( callbacks.onLoadMaterials ) ) this.callbacks.setCallbackOnLoadMaterials( callbacks.onLoadMaterials ); - - this.meshBuilder._setCallbacks( this.callbacks ); - }, - - /** - * Announce feedback which is give to the registered callbacks. - * @private - * - * @param {string} type The type of event - * @param {string} text Textual description of the event - * @param {number} numericalValue Numerical value describing the progress - */ - onProgress: function ( type, text, numericalValue ) { - var content = THREE.LoaderSupport.Validator.isValid( text ) ? text: ''; - var event = { - detail: { - type: type, - modelName: this.modelName, - instanceNo: this.instanceNo, - text: content, - numericalValue: numericalValue - } - }; - - if ( THREE.LoaderSupport.Validator.isValid( this.callbacks.onProgress ) ) this.callbacks.onProgress( event ); - - if ( this.logging.enabled && this.logging.debug ) console.debug( content ); - }, - - _onError: function ( event ) { - var output = 'Error occurred while downloading!'; - - if ( event.currentTarget && event.currentTarget.statusText !== null ) { - - output += '\nurl: ' + event.currentTarget.responseURL + '\nstatus: ' + event.currentTarget.statusText; - - } - this.onProgress( 'error', output, -1 ); - this._throwError( output ); - }, - - _throwError: function ( errorMessage ) { - if ( THREE.LoaderSupport.Validator.isValid( this.callbacks.onReportError ) ) { - - this.callbacks.onReportError( errorMessage ); - - } else { - - throw errorMessage; - - } - }, - - /** - * Use this convenient method to load a file at the given URL. By default the fileLoader uses an ArrayBuffer. - * - * @param {string} url A string containing the path/URL of the file to be loaded. - * @param {callback} onLoad A function to be called after loading is successfully completed. The function receives loaded Object3D as an argument. - * @param {callback} [onProgress] A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains total and Integer bytes. - * @param {callback} [onError] A function to be called if an error occurs during loading. The function receives the error as an argument. - * @param {callback} [onMeshAlter] A function to be called after a new mesh raw data becomes available for alteration. - * @param {boolean} [useAsync] If true, uses async loading with worker, if false loads data synchronously. 
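-	 * - * Minimal usage sketch (the model path and the target scene variable are hypothetical): - * var objLoader = new THREE.OBJLoader2(); - * objLoader.load( 'models/example.obj', function ( event ) { scene.add( event.detail.loaderRootNode ); } );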
- */ - load: function ( url, onLoad, onProgress, onError, onMeshAlter, useAsync ) { - var resource = new THREE.LoaderSupport.ResourceDescriptor( url, 'OBJ' ); - this._loadObj( resource, onLoad, onProgress, onError, onMeshAlter, useAsync ); - }, - - _loadObj: function ( resource, onLoad, onProgress, onError, onMeshAlter, useAsync ) { - var scope = this; - if ( ! THREE.LoaderSupport.Validator.isValid( onError ) ) { - onError = function ( event ) { - scope._onError( event ); - } - } - - // fast-fail - if ( ! THREE.LoaderSupport.Validator.isValid( resource ) ) onError( 'An invalid ResourceDescriptor was provided. Unable to continue!' ); - var fileLoaderOnLoad = function ( content ) { - - resource.content = content; - if ( useAsync ) { - - scope.parseAsync( content, onLoad ); - - } else { - - var callbacks = new THREE.LoaderSupport.Callbacks(); - callbacks.setCallbackOnMeshAlter( onMeshAlter ); - scope._setCallbacks( callbacks ); - onLoad( - { - detail: { - loaderRootNode: scope.parse( content ), - modelName: scope.modelName, - instanceNo: scope.instanceNo - } - } - ); - - } - }; - this.setPath( resource.path ); - this.setResourcePath( resource.resourcePath ); - - // fast-fail - if ( ! THREE.LoaderSupport.Validator.isValid( resource.url ) || THREE.LoaderSupport.Validator.isValid( resource.content ) ) { - - fileLoaderOnLoad( THREE.LoaderSupport.Validator.isValid( resource.content ) ? resource.content : null ); - - } else { - - if ( ! THREE.LoaderSupport.Validator.isValid( onProgress ) ) { - var numericalValueRef = 0; - var numericalValue = 0; - onProgress = function ( event ) { - if ( ! event.lengthComputable ) return; - - numericalValue = event.loaded / event.total; - if ( numericalValue > numericalValueRef ) { - - numericalValueRef = numericalValue; - var output = 'Download of "' + resource.url + '": ' + ( numericalValue * 100 ).toFixed( 2 ) + '%'; - scope.onProgress( 'progressLoad', output, numericalValue ); - - } - }; - } - - - var fileLoader = new THREE.FileLoader( this.manager ); - fileLoader.setPath( this.path || this.resourcePath ); - fileLoader.setResponseType( 'arraybuffer' ); - fileLoader.load( resource.name, fileLoaderOnLoad, onProgress, onError ); - - } - }, - - /** - * Run the loader according the provided instructions. 
- * - * @param {THREE.LoaderSupport.PrepData} prepData All parameters and resources required for execution - * @param {THREE.LoaderSupport.WorkerSupport} [workerSupportExternal] Use pre-existing WorkerSupport - */ - run: function ( prepData, workerSupportExternal ) { - this._applyPrepData( prepData ); - var available = prepData.checkResourceDescriptorFiles( prepData.resources, - [ - { ext: "obj", type: "ArrayBuffer", ignore: false }, - { ext: "mtl", type: "String", ignore: false }, - { ext: "zip", type: "String", ignore: true } - ] - ); - if ( THREE.LoaderSupport.Validator.isValid( workerSupportExternal ) ) { - - this.terminateWorkerOnLoad = false; - this.workerSupport = workerSupportExternal; - this.logging.enabled = this.workerSupport.logging.enabled; - this.logging.debug = this.workerSupport.logging.debug; - - } - var scope = this; - var onMaterialsLoaded = function ( materials ) { - if ( materials !== null ) scope.meshBuilder.setMaterials( materials ); - scope._loadObj( available.obj, scope.callbacks.onLoad, null, null, scope.callbacks.onMeshAlter, prepData.useAsync ); - - }; - this._loadMtl( available.mtl, onMaterialsLoaded, null, null, prepData.crossOrigin, prepData.materialOptions ); - }, - - _applyPrepData: function ( prepData ) { - if ( THREE.LoaderSupport.Validator.isValid( prepData ) ) { - - this.setLogging( prepData.logging.enabled, prepData.logging.debug ); - this.setModelName( prepData.modelName ); - this.setStreamMeshesTo( prepData.streamMeshesTo ); - this.meshBuilder.setMaterials( prepData.materials ); - this.setUseIndices( prepData.useIndices ); - this.setDisregardNormals( prepData.disregardNormals ); - this.setMaterialPerSmoothingGroup( prepData.materialPerSmoothingGroup ); - this.setUseOAsMesh( prepData.useOAsMesh ); - - this._setCallbacks( prepData.getCallbacks() ); - - } - }, - - /** - * Parses OBJ data synchronously from arraybuffer or string. - * - * @param {arraybuffer|string} content OBJ data as Uint8Array or String - */ - parse: function ( content ) { - // fast-fail in case of illegal data - if ( ! THREE.LoaderSupport.Validator.isValid( content ) ) { - - console.warn( 'Provided content is not a valid ArrayBuffer or String.' ); - return this.loaderRootNode; - - } - if ( this.logging.enabled ) console.time( 'OBJLoader2 parse: ' + this.modelName ); - this.meshBuilder.init(); - - var parser = new THREE.OBJLoader2.Parser(); - parser.setLogging( this.logging.enabled, this.logging.debug ); - parser.setMaterialPerSmoothingGroup( this.materialPerSmoothingGroup ); - parser.setUseOAsMesh( this.useOAsMesh ); - parser.setUseIndices( this.useIndices ); - parser.setDisregardNormals( this.disregardNormals ); - // sync code works directly on the material references - parser.setMaterials( this.meshBuilder.getMaterials() ); - - var scope = this; - var onMeshLoaded = function ( payload ) { - var meshes = scope.meshBuilder.processPayload( payload ); - var mesh; - for ( var i in meshes ) { - mesh = meshes[ i ]; - scope.loaderRootNode.add( mesh ); - } - }; - parser.setCallbackMeshBuilder( onMeshLoaded ); - var onProgressScoped = function ( text, numericalValue ) { - scope.onProgress( 'progressParse', text, numericalValue ); - }; - parser.setCallbackProgress( onProgressScoped ); - - if ( content instanceof ArrayBuffer || content instanceof Uint8Array ) { - - if ( this.logging.enabled ) console.info( 'Parsing arrayBuffer...' 
); - parser.parse( content ); - - } else if ( typeof( content ) === 'string' || content instanceof String ) { - - if ( this.logging.enabled ) console.info( 'Parsing text...' ); - parser.parseText( content ); - - } else { - - this._throwError( 'Provided content was neither of type String nor Uint8Array! Aborting...' ); - - } - if ( this.logging.enabled ) console.timeEnd( 'OBJLoader2 parse: ' + this.modelName ); - - return this.loaderRootNode; - }, - - /** - * Parses OBJ content asynchronously from arraybuffer. - * - * @param {arraybuffer} content OBJ data as Uint8Array - * @param {callback} onLoad Called after worker successfully completed loading - */ - parseAsync: function ( content, onLoad ) { - var scope = this; - var measureTime = false; - var scopedOnLoad = function () { - onLoad( - { - detail: { - loaderRootNode: scope.loaderRootNode, - modelName: scope.modelName, - instanceNo: scope.instanceNo - } - } - ); - if ( measureTime && scope.logging.enabled ) console.timeEnd( 'OBJLoader2 parseAsync: ' + scope.modelName ); - }; - // fast-fail in case of illegal data - if ( ! THREE.LoaderSupport.Validator.isValid( content ) ) { - - console.warn( 'Provided content is not a valid ArrayBuffer.' ); - scopedOnLoad() - - } else { - - measureTime = true; - - } - if ( measureTime && this.logging.enabled ) console.time( 'OBJLoader2 parseAsync: ' + this.modelName ); - this.meshBuilder.init(); - - var scopedOnMeshLoaded = function ( payload ) { - var meshes = scope.meshBuilder.processPayload( payload ); - var mesh; - for ( var i in meshes ) { - mesh = meshes[ i ]; - scope.loaderRootNode.add( mesh ); - } - }; - var buildCode = function ( codeSerializer ) { - var workerCode = ''; - workerCode += '/**\n'; - workerCode += ' * This code was constructed by OBJLoader2 buildCode.\n'; - workerCode += ' */\n\n'; - workerCode += 'THREE = { LoaderSupport: {}, OBJLoader2: {} };\n\n'; - workerCode += codeSerializer.serializeObject( 'THREE.LoaderSupport.Validator', THREE.LoaderSupport.Validator ); - workerCode += codeSerializer.serializeClass( 'THREE.OBJLoader2.Parser', THREE.OBJLoader2.Parser ); - - return workerCode; - }; - this.workerSupport.validate( buildCode, 'THREE.OBJLoader2.Parser' ); - this.workerSupport.setCallbacks( scopedOnMeshLoaded, scopedOnLoad ); - if ( scope.terminateWorkerOnLoad ) this.workerSupport.setTerminateRequested( true ); - - var materialNames = {}; - var materials = this.meshBuilder.getMaterials(); - for ( var materialName in materials ) { - - materialNames[ materialName ] = materialName; - - } - this.workerSupport.run( - { - params: { - useAsync: true, - materialPerSmoothingGroup: this.materialPerSmoothingGroup, - useOAsMesh: this.useOAsMesh, - useIndices: this.useIndices, - disregardNormals: this.disregardNormals - }, - logging: { - enabled: this.logging.enabled, - debug: this.logging.debug - }, - materials: { - // in async case only material names are supplied to parser - materials: materialNames - }, - data: { - input: content, - options: null - } - } - ); - }, - - /** - * Utility method for loading an mtl file according resource description. Provide url or content. - * - * @param {string} url URL to the file - * @param {Object} content The file content as arraybuffer or text - * @param {function} onLoad Callback to be called after successful load - * @param {callback} [onProgress] A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains total and Integer bytes. 
- * @param {callback} [onError] A function to be called if an error occurs during loading. The function receives the error as an argument. - * @param {string} [crossOrigin] CORS value - * @param {Object} [materialOptions] Set material loading options for MTLLoader - */ - loadMtl: function ( url, content, onLoad, onProgress, onError, crossOrigin, materialOptions ) { - var resource = new THREE.LoaderSupport.ResourceDescriptor( url, 'MTL' ); - resource.setContent( content ); - this._loadMtl( resource, onLoad, onProgress, onError, crossOrigin, materialOptions ); - }, - - _loadMtl: function ( resource, onLoad, onProgress, onError, crossOrigin, materialOptions ) { - if ( THREE.MTLLoader === undefined ) console.error( '"THREE.MTLLoader" is not available. "THREE.OBJLoader2" requires it for loading MTL files.' ); - if ( THREE.LoaderSupport.Validator.isValid( resource ) && this.logging.enabled ) console.time( 'Loading MTL: ' + resource.name ); - - var materials = []; - var scope = this; - var processMaterials = function ( materialCreator ) { - var materialCreatorMaterials = []; - if ( THREE.LoaderSupport.Validator.isValid( materialCreator ) ) { - - materialCreator.preload(); - materialCreatorMaterials = materialCreator.materials; - for ( var materialName in materialCreatorMaterials ) { - - if ( materialCreatorMaterials.hasOwnProperty( materialName ) ) { - - materials[ materialName ] = materialCreatorMaterials[ materialName ]; - - } - } - } - - if ( THREE.LoaderSupport.Validator.isValid( resource ) && scope.logging.enabled ) console.timeEnd( 'Loading MTL: ' + resource.name ); - onLoad( materials, materialCreator ); - }; - - // fast-fail - if ( ! THREE.LoaderSupport.Validator.isValid( resource ) || ( ! THREE.LoaderSupport.Validator.isValid( resource.content ) && ! THREE.LoaderSupport.Validator.isValid( resource.url ) ) ) { - - processMaterials(); - - } else { - - var mtlLoader = new THREE.MTLLoader( this.manager ); - crossOrigin = THREE.LoaderSupport.Validator.verifyInput( crossOrigin, 'anonymous' ); - mtlLoader.setCrossOrigin( crossOrigin ); - mtlLoader.setResourcePath( resource.resourcePath || resource.path ); - if ( THREE.LoaderSupport.Validator.isValid( materialOptions ) ) mtlLoader.setMaterialOptions( materialOptions ); - - var parseTextWithMtlLoader = function ( content ) { - var contentAsText = content; - if ( typeof( content ) !== 'string' && ! ( content instanceof String ) ) { - - if ( content.length > 0 || content.byteLength > 0 ) { - - contentAsText = THREE.LoaderUtils.decodeText( content ); - - } else { - - this._throwError( 'Unable to parse mtl as it it seems to be neither a String, an Array or an ArrayBuffer!' ); - } - - } - processMaterials( mtlLoader.parse( contentAsText ) ); - }; - - if ( THREE.LoaderSupport.Validator.isValid( resource.content ) ) { - - parseTextWithMtlLoader( resource.content ); - - } else if ( THREE.LoaderSupport.Validator.isValid( resource.url ) ) { - - var fileLoader = new THREE.FileLoader( this.manager ); - if ( ! THREE.LoaderSupport.Validator.isValid( onError ) ) { - onError = function ( event ) { - scope._onError( event ); - } - } - if ( ! THREE.LoaderSupport.Validator.isValid( onProgress ) ) { - var numericalValueRef = 0; - var numericalValue = 0; - onProgress = function ( event ) { - if ( ! 
event.lengthComputable ) return; - - numericalValue = event.loaded / event.total; - if ( numericalValue > numericalValueRef ) { - - numericalValueRef = numericalValue; - var output = 'Download of "' + resource.url + '": ' + ( numericalValue * 100 ).toFixed( 2 ) + '%'; - scope.onProgress( 'progressLoad', output, numericalValue ); - - } - }; - } - - fileLoader.load( resource.url, parseTextWithMtlLoader, onProgress, onError ); - - } - } - } -}; - - -/** - * Parse OBJ data either from ArrayBuffer or string - * @class - */ -THREE.OBJLoader2.Parser = function () { - this.callbackProgress = null; - this.callbackMeshBuilder = null; - this.contentRef = null; - this.legacyMode = false; - - this.materials = {}; - this.useAsync = false; - this.materialPerSmoothingGroup = false; - this.useOAsMesh = false; - this.useIndices = false; - this.disregardNormals = false; - - this.vertices = []; - this.colors = []; - this.normals = []; - this.uvs = []; - - this.rawMesh = { - objectName: '', - groupName: '', - activeMtlName: '', - mtllibName: '', - - // reset with new mesh - faceType: -1, - subGroups: [], - subGroupInUse: null, - smoothingGroup: { - splitMaterials: false, - normalized: -1, - real: -1 - }, - counts: { - doubleIndicesCount: 0, - faceCount: 0, - mtlCount: 0, - smoothingGroupCount: 0 - } - }; - - this.inputObjectCount = 1; - this.outputObjectCount = 1; - this.globalCounts = { - vertices: 0, - faces: 0, - doubleIndicesCount: 0, - lineByte: 0, - currentByte: 0, - totalBytes: 0 - }; - - this.logging = { - enabled: true, - debug: false - }; -}; - - -THREE.OBJLoader2.Parser.prototype = { - - constructor: THREE.OBJLoader2.Parser, - - resetRawMesh: function () { - // faces are stored according combined index of group, material and smoothingGroup (0 or not) - this.rawMesh.subGroups = []; - this.rawMesh.subGroupInUse = null; - this.rawMesh.smoothingGroup.normalized = -1; - this.rawMesh.smoothingGroup.real = -1; - - // this default index is required as it is possible to define faces without 'g' or 'usemtl' - this.pushSmoothingGroup( 1 ); - - this.rawMesh.counts.doubleIndicesCount = 0; - this.rawMesh.counts.faceCount = 0; - this.rawMesh.counts.mtlCount = 0; - this.rawMesh.counts.smoothingGroupCount = 0; - }, - - setUseAsync: function ( useAsync ) { - this.useAsync = useAsync; - }, - - setMaterialPerSmoothingGroup: function ( materialPerSmoothingGroup ) { - this.materialPerSmoothingGroup = materialPerSmoothingGroup; - }, - - setUseOAsMesh: function ( useOAsMesh ) { - this.useOAsMesh = useOAsMesh; - }, - - setUseIndices: function ( useIndices ) { - this.useIndices = useIndices; - }, - - setDisregardNormals: function ( disregardNormals ) { - this.disregardNormals = disregardNormals; - }, - - setMaterials: function ( materials ) { - this.materials = THREE.LoaderSupport.Validator.verifyInput( materials, this.materials ); - this.materials = THREE.LoaderSupport.Validator.verifyInput( this.materials, {} ); - }, - - setCallbackMeshBuilder: function ( callbackMeshBuilder ) { - if ( ! THREE.LoaderSupport.Validator.isValid( callbackMeshBuilder ) ) { - - this._throwError( 'Unable to run as no "MeshBuilder" callback is set.' 
); - - } - this.callbackMeshBuilder = callbackMeshBuilder; - }, - - setCallbackProgress: function ( callbackProgress ) { - this.callbackProgress = callbackProgress; - }, - - setLogging: function ( enabled, debug ) { - this.logging.enabled = enabled === true; - this.logging.debug = debug === true; - }, - - configure: function () { - this.pushSmoothingGroup( 1 ); - - if ( this.logging.enabled ) { - - var matKeys = Object.keys( this.materials ); - var matNames = ( matKeys.length > 0 ) ? '\n\tmaterialNames:\n\t\t- ' + matKeys.join( '\n\t\t- ' ) : '\n\tmaterialNames: None'; - var printedConfig = 'OBJLoader2.Parser configuration:' - + matNames - + '\n\tuseAsync: ' + this.useAsync - + '\n\tmaterialPerSmoothingGroup: ' + this.materialPerSmoothingGroup - + '\n\tuseOAsMesh: ' + this.useOAsMesh - + '\n\tuseIndices: ' + this.useIndices - + '\n\tdisregardNormals: ' + this.disregardNormals - + '\n\tcallbackMeshBuilderName: ' + this.callbackMeshBuilder.name - + '\n\tcallbackProgressName: ' + this.callbackProgress.name; - console.info( printedConfig ); - } - }, - - /** - * Parse the provided arraybuffer - * - * @param {Uint8Array} arrayBuffer OBJ data as Uint8Array - */ - parse: function ( arrayBuffer ) { - if ( this.logging.enabled ) console.time( 'OBJLoader2.Parser.parse' ); - this.configure(); - - var arrayBufferView = new Uint8Array( arrayBuffer ); - this.contentRef = arrayBufferView; - var length = arrayBufferView.byteLength; - this.globalCounts.totalBytes = length; - var buffer = new Array( 128 ); - - for ( var code, word = '', bufferPointer = 0, slashesCount = 0, i = 0; i < length; i++ ) { - - code = arrayBufferView[ i ]; - switch ( code ) { - // space - case 32: - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - word = ''; - break; - // slash - case 47: - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - slashesCount++; - word = ''; - break; - - // LF - case 10: - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - word = ''; - this.globalCounts.lineByte = this.globalCounts.currentByte; - this.globalCounts.currentByte = i; - this.processLine( buffer, bufferPointer, slashesCount ); - bufferPointer = 0; - slashesCount = 0; - break; - - // CR - case 13: - break; - - default: - word += String.fromCharCode( code ); - break; - } - } - this.finalizeParsing(); - if ( this.logging.enabled ) console.timeEnd( 'OBJLoader2.Parser.parse' ); - }, - - /** - * Parse the provided text - * - * @param {string} text OBJ data as string - */ - parseText: function ( text ) { - if ( this.logging.enabled ) console.time( 'OBJLoader2.Parser.parseText' ); - this.configure(); - this.legacyMode = true; - this.contentRef = text; - var length = text.length; - this.globalCounts.totalBytes = length; - var buffer = new Array( 128 ); - - for ( var char, word = '', bufferPointer = 0, slashesCount = 0, i = 0; i < length; i++ ) { - - char = text[ i ]; - switch ( char ) { - case ' ': - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - word = ''; - break; - - case '/': - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - slashesCount++; - word = ''; - break; - - case '\n': - if ( word.length > 0 ) buffer[ bufferPointer++ ] = word; - word = ''; - this.globalCounts.lineByte = this.globalCounts.currentByte; - this.globalCounts.currentByte = i; - this.processLine( buffer, bufferPointer, slashesCount ); - bufferPointer = 0; - slashesCount = 0; - break; - - case '\r': - break; - - default: - word += char; - } - } - this.finalizeParsing(); - if ( this.logging.enabled ) console.timeEnd( 
'OBJLoader2.Parser.parseText' ); - }, - - processLine: function ( buffer, bufferPointer, slashesCount ) { - if ( bufferPointer < 1 ) return; - - var reconstructString = function ( content, legacyMode, start, stop ) { - var line = ''; - if ( stop > start ) { - - var i; - if ( legacyMode ) { - - for ( i = start; i < stop; i++ ) line += content[ i ]; - - } else { - - - for ( i = start; i < stop; i++ ) line += String.fromCharCode( content[ i ] ); - - } - line = line.trim(); - - } - return line; - }; - - var bufferLength, length, i, lineDesignation; - lineDesignation = buffer [ 0 ]; - switch ( lineDesignation ) { - case 'v': - this.vertices.push( parseFloat( buffer[ 1 ] ) ); - this.vertices.push( parseFloat( buffer[ 2 ] ) ); - this.vertices.push( parseFloat( buffer[ 3 ] ) ); - if ( bufferPointer > 4 ) { - - this.colors.push( parseFloat( buffer[ 4 ] ) ); - this.colors.push( parseFloat( buffer[ 5 ] ) ); - this.colors.push( parseFloat( buffer[ 6 ] ) ); - - } - break; - - case 'vt': - this.uvs.push( parseFloat( buffer[ 1 ] ) ); - this.uvs.push( parseFloat( buffer[ 2 ] ) ); - break; - - case 'vn': - this.normals.push( parseFloat( buffer[ 1 ] ) ); - this.normals.push( parseFloat( buffer[ 2 ] ) ); - this.normals.push( parseFloat( buffer[ 3 ] ) ); - break; - - case 'f': - bufferLength = bufferPointer - 1; - - // "f vertex ..." - if ( slashesCount === 0 ) { - - this.checkFaceType( 0 ); - for ( i = 2, length = bufferLength; i < length; i ++ ) { - - this.buildFace( buffer[ 1 ] ); - this.buildFace( buffer[ i ] ); - this.buildFace( buffer[ i + 1 ] ); - - } - - // "f vertex/uv ..." - } else if ( bufferLength === slashesCount * 2 ) { - - this.checkFaceType( 1 ); - for ( i = 3, length = bufferLength - 2; i < length; i += 2 ) { - - this.buildFace( buffer[ 1 ], buffer[ 2 ] ); - this.buildFace( buffer[ i ], buffer[ i + 1 ] ); - this.buildFace( buffer[ i + 2 ], buffer[ i + 3 ] ); - - } - - // "f vertex/uv/normal ..." - } else if ( bufferLength * 2 === slashesCount * 3 ) { - - this.checkFaceType( 2 ); - for ( i = 4, length = bufferLength - 3; i < length; i += 3 ) { - - this.buildFace( buffer[ 1 ], buffer[ 2 ], buffer[ 3 ] ); - this.buildFace( buffer[ i ], buffer[ i + 1 ], buffer[ i + 2 ] ); - this.buildFace( buffer[ i + 3 ], buffer[ i + 4 ], buffer[ i + 5 ] ); - - } - - // "f vertex//normal ..." - } else { - - this.checkFaceType( 3 ); - for ( i = 3, length = bufferLength - 2; i < length; i += 2 ) { - - this.buildFace( buffer[ 1 ], undefined, buffer[ 2 ] ); - this.buildFace( buffer[ i ], undefined, buffer[ i + 1 ] ); - this.buildFace( buffer[ i + 2 ], undefined, buffer[ i + 3 ] ); - - } - - } - break; - - case 'l': - case 'p': - bufferLength = bufferPointer - 1; - if ( bufferLength === slashesCount * 2 ) { - - this.checkFaceType( 4 ); - for ( i = 1, length = bufferLength + 1; i < length; i += 2 ) this.buildFace( buffer[ i ], buffer[ i + 1 ] ); - - } else { - - this.checkFaceType( ( lineDesignation === 'l' ) ? 
5 : 6 ); - for ( i = 1, length = bufferLength + 1; i < length; i ++ ) this.buildFace( buffer[ i ] ); - - } - break; - - case 's': - this.pushSmoothingGroup( buffer[ 1 ] ); - break; - - case 'g': - // 'g' leads to creation of mesh if valid data (faces declaration was done before), otherwise only groupName gets set - this.processCompletedMesh(); - this.rawMesh.groupName = reconstructString( this.contentRef, this.legacyMode, this.globalCounts.lineByte + 2, this.globalCounts.currentByte ); - break; - - case 'o': - // 'o' is meta-information and usually does not result in creation of new meshes, but can be enforced with "useOAsMesh" - if ( this.useOAsMesh ) this.processCompletedMesh(); - this.rawMesh.objectName = reconstructString( this.contentRef, this.legacyMode, this.globalCounts.lineByte + 2, this.globalCounts.currentByte ); - break; - - case 'mtllib': - this.rawMesh.mtllibName = reconstructString( this.contentRef, this.legacyMode, this.globalCounts.lineByte + 7, this.globalCounts.currentByte ); - break; - - case 'usemtl': - var mtlName = reconstructString( this.contentRef, this.legacyMode, this.globalCounts.lineByte + 7, this.globalCounts.currentByte ); - if ( mtlName !== '' && this.rawMesh.activeMtlName !== mtlName ) { - - this.rawMesh.activeMtlName = mtlName; - this.rawMesh.counts.mtlCount++; - this.checkSubGroup(); - - } - break; - - default: - break; - } - }, - - pushSmoothingGroup: function ( smoothingGroup ) { - var smoothingGroupInt = parseInt( smoothingGroup ); - if ( isNaN( smoothingGroupInt ) ) { - smoothingGroupInt = smoothingGroup === "off" ? 0 : 1; - } - - var smoothCheck = this.rawMesh.smoothingGroup.normalized; - this.rawMesh.smoothingGroup.normalized = this.rawMesh.smoothingGroup.splitMaterials ? smoothingGroupInt : ( smoothingGroupInt === 0 ) ? 0 : 1; - this.rawMesh.smoothingGroup.real = smoothingGroupInt; - - if ( smoothCheck !== smoothingGroupInt ) { - - this.rawMesh.counts.smoothingGroupCount++; - this.checkSubGroup(); - - } - }, - - /** - * Expanded faceTypes include all four face types, both line types and the point type - * faceType = 0: "f vertex ..." - * faceType = 1: "f vertex/uv ..." - * faceType = 2: "f vertex/uv/normal ..." - * faceType = 3: "f vertex//normal ..." - * faceType = 4: "l vertex/uv ..." or "l vertex ..." - * faceType = 5: "l vertex ..." - * faceType = 6: "p vertex ..." - */ - checkFaceType: function ( faceType ) { - if ( this.rawMesh.faceType !== faceType ) { - - this.processCompletedMesh(); - this.rawMesh.faceType = faceType; - this.checkSubGroup(); - - } - }, - - checkSubGroup: function () { - var index = this.rawMesh.activeMtlName + '|' + this.rawMesh.smoothingGroup.normalized; - this.rawMesh.subGroupInUse = this.rawMesh.subGroups[ index ]; - - if ( ! THREE.LoaderSupport.Validator.isValid( this.rawMesh.subGroupInUse ) ) { - - this.rawMesh.subGroupInUse = { - index: index, - objectName: this.rawMesh.objectName, - groupName: this.rawMesh.groupName, - materialName: this.rawMesh.activeMtlName, - smoothingGroup: this.rawMesh.smoothingGroup.normalized, - vertices: [], - indexMappingsCount: 0, - indexMappings: [], - indices: [], - colors: [], - uvs: [], - normals: [] - }; - this.rawMesh.subGroups[ index ] = this.rawMesh.subGroupInUse; - - } - }, - - buildFace: function ( faceIndexV, faceIndexU, faceIndexN ) { - if ( this.disregardNormals ) faceIndexN = undefined; - var scope = this; - var updateSubGroupInUse = function () { - - var faceIndexVi = parseInt( faceIndexV ); - var indexPointerV = 3 * ( faceIndexVi > 0 ? 
faceIndexVi - 1 : faceIndexVi + scope.vertices.length / 3 ); - var indexPointerC = scope.colors.length > 0 ? indexPointerV : null; - - var vertices = scope.rawMesh.subGroupInUse.vertices; - vertices.push( scope.vertices[ indexPointerV++ ] ); - vertices.push( scope.vertices[ indexPointerV++ ] ); - vertices.push( scope.vertices[ indexPointerV ] ); - - if ( indexPointerC !== null ) { - - var colors = scope.rawMesh.subGroupInUse.colors; - colors.push( scope.colors[ indexPointerC++ ] ); - colors.push( scope.colors[ indexPointerC++ ] ); - colors.push( scope.colors[ indexPointerC ] ); - - } - if ( faceIndexU ) { - - var faceIndexUi = parseInt( faceIndexU ); - var indexPointerU = 2 * ( faceIndexUi > 0 ? faceIndexUi - 1 : faceIndexUi + scope.uvs.length / 2 ); - var uvs = scope.rawMesh.subGroupInUse.uvs; - uvs.push( scope.uvs[ indexPointerU++ ] ); - uvs.push( scope.uvs[ indexPointerU ] ); - - } - if ( faceIndexN ) { - - var faceIndexNi = parseInt( faceIndexN ); - var indexPointerN = 3 * ( faceIndexNi > 0 ? faceIndexNi - 1 : faceIndexNi + scope.normals.length / 3 ); - var normals = scope.rawMesh.subGroupInUse.normals; - normals.push( scope.normals[ indexPointerN++ ] ); - normals.push( scope.normals[ indexPointerN++ ] ); - normals.push( scope.normals[ indexPointerN ] ); - - } - }; - - if ( this.useIndices ) { - - var mappingName = faceIndexV + ( faceIndexU ? '_' + faceIndexU : '_n' ) + ( faceIndexN ? '_' + faceIndexN : '_n' ); - var indicesPointer = this.rawMesh.subGroupInUse.indexMappings[ mappingName ]; - if ( THREE.LoaderSupport.Validator.isValid( indicesPointer ) ) { - - this.rawMesh.counts.doubleIndicesCount++; - - } else { - - indicesPointer = this.rawMesh.subGroupInUse.vertices.length / 3; - updateSubGroupInUse(); - this.rawMesh.subGroupInUse.indexMappings[ mappingName ] = indicesPointer; - this.rawMesh.subGroupInUse.indexMappingsCount++; - - } - this.rawMesh.subGroupInUse.indices.push( indicesPointer ); - - } else { - - updateSubGroupInUse(); - - } - this.rawMesh.counts.faceCount++; - }, - - createRawMeshReport: function ( inputObjectCount ) { - return 'Input Object number: ' + inputObjectCount + - '\n\tObject name: ' + this.rawMesh.objectName + - '\n\tGroup name: ' + this.rawMesh.groupName + - '\n\tMtllib name: ' + this.rawMesh.mtllibName + - '\n\tVertex count: ' + this.vertices.length / 3 + - '\n\tNormal count: ' + this.normals.length / 3 + - '\n\tUV count: ' + this.uvs.length / 2 + - '\n\tSmoothingGroup count: ' + this.rawMesh.counts.smoothingGroupCount + - '\n\tMaterial count: ' + this.rawMesh.counts.mtlCount + - '\n\tReal MeshOutputGroup count: ' + this.rawMesh.subGroups.length; - }, - - /** - * Clear any empty subGroup and calculate absolute vertex, normal and uv counts - */ - finalizeRawMesh: function () { - var meshOutputGroupTemp = []; - var meshOutputGroup; - var absoluteVertexCount = 0; - var absoluteIndexMappingsCount = 0; - var absoluteIndexCount = 0; - var absoluteColorCount = 0; - var absoluteNormalCount = 0; - var absoluteUvCount = 0; - var indices; - for ( var name in this.rawMesh.subGroups ) { - - meshOutputGroup = this.rawMesh.subGroups[ name ]; - if ( meshOutputGroup.vertices.length > 0 ) { - - indices = meshOutputGroup.indices; - if ( indices.length > 0 && absoluteIndexMappingsCount > 0 ) { - - for ( var i in indices ) indices[ i ] = indices[ i ] + absoluteIndexMappingsCount; - - } - meshOutputGroupTemp.push( meshOutputGroup ); - absoluteVertexCount += meshOutputGroup.vertices.length; - absoluteIndexMappingsCount += meshOutputGroup.indexMappingsCount; - absoluteIndexCount 
+= meshOutputGroup.indices.length; - absoluteColorCount += meshOutputGroup.colors.length; - absoluteUvCount += meshOutputGroup.uvs.length; - absoluteNormalCount += meshOutputGroup.normals.length; - - } - } - - // do not continue if no result - var result = null; - if ( meshOutputGroupTemp.length > 0 ) { - - result = { - name: this.rawMesh.groupName !== '' ? this.rawMesh.groupName : this.rawMesh.objectName, - subGroups: meshOutputGroupTemp, - absoluteVertexCount: absoluteVertexCount, - absoluteIndexCount: absoluteIndexCount, - absoluteColorCount: absoluteColorCount, - absoluteNormalCount: absoluteNormalCount, - absoluteUvCount: absoluteUvCount, - faceCount: this.rawMesh.counts.faceCount, - doubleIndicesCount: this.rawMesh.counts.doubleIndicesCount - }; - - } - return result; - }, - - processCompletedMesh: function () { - var result = this.finalizeRawMesh(); - if ( THREE.LoaderSupport.Validator.isValid( result ) ) { - - if ( this.colors.length > 0 && this.colors.length !== this.vertices.length ) { - - this._throwError( 'Vertex Colors were detected, but vertex count and color count do not match!' ); - - } - if ( this.logging.enabled && this.logging.debug ) console.debug( this.createRawMeshReport( this.inputObjectCount ) ); - this.inputObjectCount++; - - this.buildMesh( result ); - var progressBytesPercent = this.globalCounts.currentByte / this.globalCounts.totalBytes; - this.callbackProgress( 'Completed [o: ' + this.rawMesh.objectName + ' g:' + this.rawMesh.groupName + '] Total progress: ' + ( progressBytesPercent * 100 ).toFixed( 2 ) + '%', progressBytesPercent ); - this.resetRawMesh(); - return true; - - } else { - - return false; - } - }, - - /** - * SubGroups are transformed to too intermediate format that is forwarded to the MeshBuilder. - * It is ensured that SubGroups only contain objects with vertices (no need to check). - * - * @param result - */ - buildMesh: function ( result ) { - var meshOutputGroups = result.subGroups; - - var vertexFA = new Float32Array( result.absoluteVertexCount ); - this.globalCounts.vertices += result.absoluteVertexCount / 3; - this.globalCounts.faces += result.faceCount; - this.globalCounts.doubleIndicesCount += result.doubleIndicesCount; - var indexUA = ( result.absoluteIndexCount > 0 ) ? new Uint32Array( result.absoluteIndexCount ) : null; - var colorFA = ( result.absoluteColorCount > 0 ) ? new Float32Array( result.absoluteColorCount ) : null; - var normalFA = ( result.absoluteNormalCount > 0 ) ? new Float32Array( result.absoluteNormalCount ) : null; - var uvFA = ( result.absoluteUvCount > 0 ) ? new Float32Array( result.absoluteUvCount ) : null; - var haveVertexColors = THREE.LoaderSupport.Validator.isValid( colorFA ); - - var meshOutputGroup; - var materialNames = []; - - var createMultiMaterial = ( meshOutputGroups.length > 1 ); - var materialIndex = 0; - var materialIndexMapping = []; - var selectedMaterialIndex; - var materialGroup; - var materialGroups = []; - - var vertexFAOffset = 0; - var indexUAOffset = 0; - var colorFAOffset = 0; - var normalFAOffset = 0; - var uvFAOffset = 0; - var materialGroupOffset = 0; - var materialGroupLength = 0; - - var materialOrg, material, materialName, materialNameOrg; - // only one specific face type - for ( var oodIndex in meshOutputGroups ) { - - if ( ! meshOutputGroups.hasOwnProperty( oodIndex ) ) continue; - meshOutputGroup = meshOutputGroups[ oodIndex ]; - - materialNameOrg = meshOutputGroup.materialName; - if ( this.rawMesh.faceType < 4 ) { - - materialName = materialNameOrg + ( haveVertexColors ? 
'_vertexColor' : '' ) + ( meshOutputGroup.smoothingGroup === 0 ? '_flat' : '' ); - - - } else { - - materialName = this.rawMesh.faceType === 6 ? 'defaultPointMaterial' : 'defaultLineMaterial'; - - } - materialOrg = this.materials[ materialNameOrg ]; - material = this.materials[ materialName ]; - - // both original and derived names do not lead to an existing material => need to use a default material - if ( ! THREE.LoaderSupport.Validator.isValid( materialOrg ) && ! THREE.LoaderSupport.Validator.isValid( material ) ) { - - var defaultMaterialName = haveVertexColors ? 'defaultVertexColorMaterial' : 'defaultMaterial'; - materialOrg = this.materials[ defaultMaterialName ]; - if ( this.logging.enabled ) console.warn( 'object_group "' + meshOutputGroup.objectName + '_' + - meshOutputGroup.groupName + '" was defined with unresolvable material "' + - materialNameOrg + '"! Assigning "' + defaultMaterialName + '".' ); - materialNameOrg = defaultMaterialName; - - // if names are identical then there is no need for later manipulation - if ( materialNameOrg === materialName ) { - - material = materialOrg; - materialName = defaultMaterialName; - - } - - } - if ( ! THREE.LoaderSupport.Validator.isValid( material ) ) { - - var materialCloneInstructions = { - materialNameOrg: materialNameOrg, - materialName: materialName, - materialProperties: { - vertexColors: haveVertexColors ? 2 : 0, - flatShading: meshOutputGroup.smoothingGroup === 0 - } - }; - var payload = { - cmd: 'materialData', - materials: { - materialCloneInstructions: materialCloneInstructions - } - }; - this.callbackMeshBuilder( payload ); - - // fake entry for async; sync Parser always works on material references (Builder update directly visible here) - if ( this.useAsync ) this.materials[ materialName ] = materialCloneInstructions; - - } - - if ( createMultiMaterial ) { - - // re-use material if already used before. Reduces materials array size and eliminates duplicates - selectedMaterialIndex = materialIndexMapping[ materialName ]; - if ( ! selectedMaterialIndex ) { - - selectedMaterialIndex = materialIndex; - materialIndexMapping[ materialName ] = materialIndex; - materialNames.push( materialName ); - materialIndex++; - - } - materialGroupLength = this.useIndices ? meshOutputGroup.indices.length : meshOutputGroup.vertices.length / 3; - materialGroup = { - start: materialGroupOffset, - count: materialGroupLength, - index: selectedMaterialIndex - }; - materialGroups.push( materialGroup ); - materialGroupOffset += materialGroupLength; - - } else { - - materialNames.push( materialName ); - - } - - vertexFA.set( meshOutputGroup.vertices, vertexFAOffset ); - vertexFAOffset += meshOutputGroup.vertices.length; - - if ( indexUA ) { - - indexUA.set( meshOutputGroup.indices, indexUAOffset ); - indexUAOffset += meshOutputGroup.indices.length; - - } - - if ( colorFA ) { - - colorFA.set( meshOutputGroup.colors, colorFAOffset ); - colorFAOffset += meshOutputGroup.colors.length; - - } - - if ( normalFA ) { - - normalFA.set( meshOutputGroup.normals, normalFAOffset ); - normalFAOffset += meshOutputGroup.normals.length; - - } - if ( uvFA ) { - - uvFA.set( meshOutputGroup.uvs, uvFAOffset ); - uvFAOffset += meshOutputGroup.uvs.length; - - } - - if ( this.logging.enabled && this.logging.debug ) { - var materialIndexLine = THREE.LoaderSupport.Validator.isValid( selectedMaterialIndex ) ? 
'\n\t\tmaterialIndex: ' + selectedMaterialIndex : ''; - var createdReport = '\tOutput Object no.: ' + this.outputObjectCount + - '\n\t\tgroupName: ' + meshOutputGroup.groupName + - '\n\t\tIndex: ' + meshOutputGroup.index + - '\n\t\tfaceType: ' + this.rawMesh.faceType + - '\n\t\tmaterialName: ' + meshOutputGroup.materialName + - '\n\t\tsmoothingGroup: ' + meshOutputGroup.smoothingGroup + - materialIndexLine + - '\n\t\tobjectName: ' + meshOutputGroup.objectName + - '\n\t\t#vertices: ' + meshOutputGroup.vertices.length / 3 + - '\n\t\t#indices: ' + meshOutputGroup.indices.length + - '\n\t\t#colors: ' + meshOutputGroup.colors.length / 3 + - '\n\t\t#uvs: ' + meshOutputGroup.uvs.length / 2 + - '\n\t\t#normals: ' + meshOutputGroup.normals.length / 3; - console.debug( createdReport ); - } - - } - - this.outputObjectCount++; - this.callbackMeshBuilder( - { - cmd: 'meshData', - progress: { - numericalValue: this.globalCounts.currentByte / this.globalCounts.totalBytes - }, - params: { - meshName: result.name - }, - materials: { - multiMaterial: createMultiMaterial, - materialNames: materialNames, - materialGroups: materialGroups - }, - buffers: { - vertices: vertexFA, - indices: indexUA, - colors: colorFA, - normals: normalFA, - uvs: uvFA - }, - // 0: mesh, 1: line, 2: point - geometryType: this.rawMesh.faceType < 4 ? 0 : ( this.rawMesh.faceType === 6 ) ? 2 : 1 - }, - [ vertexFA.buffer ], - THREE.LoaderSupport.Validator.isValid( indexUA ) ? [ indexUA.buffer ] : null, - THREE.LoaderSupport.Validator.isValid( colorFA ) ? [ colorFA.buffer ] : null, - THREE.LoaderSupport.Validator.isValid( normalFA ) ? [ normalFA.buffer ] : null, - THREE.LoaderSupport.Validator.isValid( uvFA ) ? [ uvFA.buffer ] : null - ); - }, - - finalizeParsing: function () { - if ( this.logging.enabled ) console.info( 'Global output object count: ' + this.outputObjectCount ); - if ( this.processCompletedMesh() && this.logging.enabled ) { - - var parserFinalReport = 'Overall counts: ' + - '\n\tVertices: ' + this.globalCounts.vertices + - '\n\tFaces: ' + this.globalCounts.faces + - '\n\tMultiple definitions: ' + this.globalCounts.doubleIndicesCount; - console.info( parserFinalReport ); - - } - } -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/inputs/Vector2Node.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/inputs/Vector2Node.js deleted file mode 100644 index ebc4add3ae74b1af607c7a514f0c84828e5800c8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/inputs/Vector2Node.js +++ /dev/null @@ -1,55 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { InputNode } from '../core/InputNode.js'; -import { NodeUtils } from '../core/NodeUtils.js'; - -function Vector2Node( x, y ) { - - InputNode.call( this, 'v2' ); - - this.value = x instanceof THREE.Vector2 ? 
x : new THREE.Vector2( x, y ); - -} - -Vector2Node.prototype = Object.create( InputNode.prototype ); -Vector2Node.prototype.constructor = Vector2Node; -Vector2Node.prototype.nodeType = "Vector2"; - -NodeUtils.addShortcuts( Vector2Node.prototype, 'value', [ 'x', 'y' ] ); - -Vector2Node.prototype.generateReadonly = function ( builder, output, uuid, type, ns, needsUpdate ) { - - return builder.format( "vec2( " + this.x + ", " + this.y + " )", type, output ); - -}; - -Vector2Node.prototype.copy = function ( source ) { - - InputNode.prototype.copy.call( this, source ); - - this.value.copy( source ); - -}; - -Vector2Node.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! data ) { - - data = this.createJSONNode( meta ); - - data.x = this.x; - data.y = this.y; - - if ( this.readonly === true ) data.readonly = true; - - } - - return data; - -}; - -export { Vector2Node }; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/OBJLoader.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/OBJLoader.d.ts deleted file mode 100644 index db14792baa6c8edcfbe289f682c878e8ddf61799..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/loaders/OBJLoader.d.ts +++ /dev/null @@ -1,19 +0,0 @@ -import { - Material, - LoadingManager, - Group -} from '../../../src/Three'; - -export class OBJLoader { - constructor(manager?: LoadingManager); - manager: LoadingManager; - regexp: any; - materials: Material[]; - path: string; - - load(url: string, onLoad: (group: Group) => void, onProgress?: (event: ProgressEvent) => void, onError?: (event: ErrorEvent) => void): void; - parse(data: string) : Group; - setPath(value: string) : void; - setMaterials(materials: Material[]) : void; - _createParserState() : any; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve.js deleted file mode 100644 index 719bc2f43b245b379364d165fcf6fe51a736f856..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/CubicBezierCurve.js +++ /dev/null @@ -1,79 +0,0 @@ -import { Curve } from '../core/Curve.js'; -import { CubicBezier } from '../core/Interpolations.js'; -import { Vector2 } from '../../math/Vector2.js'; - - -function CubicBezierCurve( v0, v1, v2, v3 ) { - - Curve.call( this ); - - this.type = 'CubicBezierCurve'; - - this.v0 = v0 || new Vector2(); - this.v1 = v1 || new Vector2(); - this.v2 = v2 || new Vector2(); - this.v3 = v3 || new Vector2(); - -} - -CubicBezierCurve.prototype = Object.create( Curve.prototype ); -CubicBezierCurve.prototype.constructor = CubicBezierCurve; - -CubicBezierCurve.prototype.isCubicBezierCurve = true; - -CubicBezierCurve.prototype.getPoint = function ( t, optionalTarget ) { - - var point = optionalTarget || new Vector2(); - - var v0 = this.v0, v1 = this.v1, v2 = this.v2, v3 = this.v3; - - point.set( - CubicBezier( t, v0.x, v1.x, v2.x, v3.x ), - CubicBezier( t, v0.y, v1.y, v2.y, v3.y ) - ); - - return point; - -}; - -CubicBezierCurve.prototype.copy = function ( source ) { - - Curve.prototype.copy.call( this, source ); - - this.v0.copy( source.v0 ); - this.v1.copy( source.v1 ); - this.v2.copy( source.v2 ); - this.v3.copy( source.v3 ); - - return this; - -}; - -CubicBezierCurve.prototype.toJSON = function () { - - var data = Curve.prototype.toJSON.call( this ); - - data.v0 = 
this.v0.toArray(); - data.v1 = this.v1.toArray(); - data.v2 = this.v2.toArray(); - data.v3 = this.v3.toArray(); - - return data; - -}; - -CubicBezierCurve.prototype.fromJSON = function ( json ) { - - Curve.prototype.fromJSON.call( this, json ); - - this.v0.fromArray( json.v0 ); - this.v1.fromArray( json.v1 ); - this.v2.fromArray( json.v2 ); - this.v3.fromArray( json.v3 ); - - return this; - -}; - - -export { CubicBezierCurve }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/aomap_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/aomap_fragment.glsl.js deleted file mode 100644 index 3c15d332218ee76bf92a845152993ef18ad5eb99..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/aomap_fragment.glsl.js +++ /dev/null @@ -1,18 +0,0 @@ -export default /* glsl */` -#ifdef USE_AOMAP - - // reads channel R, compatible with a combined OcclusionRoughnessMetallic (RGB) texture - float ambientOcclusion = ( texture2D( aoMap, vUv2 ).r - 1.0 ) * aoMapIntensity + 1.0; - - reflectedLight.indirectDiffuse *= ambientOcclusion; - - #if defined( USE_ENVMAP ) && defined( PHYSICAL ) - - float dotNV = saturate( dot( geometry.normal, geometry.viewDir ) ); - - reflectedLight.indirectSpecular *= computeSpecularOcclusion( dotNV, ambientOcclusion, material.specularRoughness ); - - #endif - -#endif -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/roughnessmap_pars_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/roughnessmap_pars_fragment.glsl.js deleted file mode 100644 index cea3ecd812cd48f92377002418359ca55e7a0f84..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/roughnessmap_pars_fragment.glsl.js +++ /dev/null @@ -1,7 +0,0 @@ -export default /* glsl */` -#ifdef USE_ROUGHNESSMAP - - uniform sampler2D roughnessMap; - -#endif -`; diff --git a/spaces/barani/ControlNet/model.py b/spaces/barani/ControlNet/model.py deleted file mode 100644 index a9239489a9ee2d1a082f701847dccd209f0477ac..0000000000000000000000000000000000000000 --- a/spaces/barani/ControlNet/model.py +++ /dev/null @@ -1,591 +0,0 @@ -from __future__ import annotations - -import gc - -import numpy as np -import PIL.Image -import torch -from controlnet_aux.util import HWC3 -from diffusers import (ControlNetModel, DiffusionPipeline, - StableDiffusionControlNetPipeline, - UniPCMultistepScheduler) - -from cv_utils import resize_image -from preprocessor import Preprocessor - -CONTROLNET_MODEL_IDS = { - 'Openpose': 'lllyasviel/control_v11p_sd15_openpose', - 'Canny': 'lllyasviel/control_v11p_sd15_canny', - 'MLSD': 'lllyasviel/control_v11p_sd15_mlsd', - 'scribble': 'lllyasviel/control_v11p_sd15_scribble', - 'softedge': 'lllyasviel/control_v11p_sd15_softedge', - 'segmentation': 'lllyasviel/control_v11p_sd15_seg', - 'depth': 'lllyasviel/control_v11f1p_sd15_depth', - 'NormalBae': 'lllyasviel/control_v11p_sd15_normalbae', - 'lineart': 'lllyasviel/control_v11p_sd15_lineart', - 'lineart_anime': 'lllyasviel/control_v11p_sd15s2_lineart_anime', - 'shuffle': 'lllyasviel/control_v11e_sd15_shuffle', - 'ip2p': 'lllyasviel/control_v11e_sd15_ip2p', - 'inpaint': 'lllyasviel/control_v11e_sd15_inpaint', -} - - -def download_all_controlnet_weights() -> None: - for model_id in CONTROLNET_MODEL_IDS.values(): - 
ControlNetModel.from_pretrained(model_id) - - -class Model: - def __init__(self, - base_model_id: str = 'runwayml/stable-diffusion-v1-5', - task_name: str = 'Canny'): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.base_model_id = '' - self.task_name = '' - self.pipe = self.load_pipe(base_model_id, task_name) - self.preprocessor = Preprocessor() - - def load_pipe(self, base_model_id: str, task_name) -> DiffusionPipeline: - if base_model_id == self.base_model_id and task_name == self.task_name and hasattr( - self, 'pipe') and self.pipe is not None: - return self.pipe - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, - torch_dtype=torch.float16) - pipe = StableDiffusionControlNetPipeline.from_pretrained( - base_model_id, - safety_checker=None, - controlnet=controlnet, - torch_dtype=torch.float16) - pipe.scheduler = UniPCMultistepScheduler.from_config( - pipe.scheduler.config) - if self.device.type == 'cuda': - pipe.enable_xformers_memory_efficient_attention() - pipe.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.base_model_id = base_model_id - self.task_name = task_name - return pipe - - def set_base_model(self, base_model_id: str) -> str: - if not base_model_id or base_model_id == self.base_model_id: - return self.base_model_id - del self.pipe - torch.cuda.empty_cache() - gc.collect() - try: - self.pipe = self.load_pipe(base_model_id, self.task_name) - except Exception: - self.pipe = self.load_pipe(self.base_model_id, self.task_name) - return self.base_model_id - - def load_controlnet_weight(self, task_name: str) -> None: - if task_name == self.task_name: - return - if self.pipe is not None and hasattr(self.pipe, 'controlnet'): - del self.pipe.controlnet - torch.cuda.empty_cache() - gc.collect() - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, - torch_dtype=torch.float16) - controlnet.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.pipe.controlnet = controlnet - self.task_name = task_name - - def get_prompt(self, prompt: str, additional_prompt: str) -> str: - if not prompt: - prompt = additional_prompt - else: - prompt = f'{prompt}, {additional_prompt}' - return prompt - - @torch.autocast('cuda') - def run_pipe( - self, - prompt: str, - negative_prompt: str, - control_image: PIL.Image.Image, - num_images: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - if seed == -1: - seed = np.random.randint(0, np.iinfo(np.int64).max) - generator = torch.Generator().manual_seed(seed) - return self.pipe(prompt=prompt, - negative_prompt=negative_prompt, - guidance_scale=guidance_scale, - num_images_per_prompt=num_images, - num_inference_steps=num_steps, - generator=generator, - image=control_image).images - - @torch.inference_mode() - def process_canny( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - low_threshold: int, - high_threshold: int, - ) -> list[PIL.Image.Image]: - self.preprocessor.load('Canny') - control_image = self.preprocessor(image=image, - low_threshold=low_threshold, - high_threshold=high_threshold, - detect_resolution=image_resolution) - - self.load_controlnet_weight('Canny') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - 
num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_mlsd( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - value_threshold: float, - distance_threshold: float, - ) -> list[PIL.Image.Image]: - self.preprocessor.load('MLSD') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - thr_v=value_threshold, - thr_d=distance_threshold, - ) - self.load_controlnet_weight('MLSD') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_scribble( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name == 'HED': - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=False, - ) - elif preprocessor_name == 'PidiNet': - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - safe=False, - ) - self.load_controlnet_weight('scribble') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_scribble_interactive( - self, - image_and_mask: dict[str, np.ndarray], - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - image = image_and_mask['mask'] - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - - self.load_controlnet_weight('scribble') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_softedge( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - 
control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ['HED', 'HED safe']: - safe = 'safe' in preprocessor_name - self.preprocessor.load('HED') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=safe, - ) - elif preprocessor_name in ['PidiNet', 'PidiNet safe']: - safe = 'safe' in preprocessor_name - self.preprocessor.load('PidiNet') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - safe=safe, - ) - else: - raise ValueError - self.load_controlnet_weight('softedge') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_openpose( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load('Openpose') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - hand_and_face=True, - ) - self.load_controlnet_weight('Openpose') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_segmentation( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight('segmentation') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_depth( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - 
image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight('depth') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_normal( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load('NormalBae') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - self.load_controlnet_weight('NormalBae') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_lineart( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - preprocess_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name in ['None', 'None (anime)']: - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ['Lineart', 'Lineart coarse']: - coarse = 'coarse' in preprocessor_name - self.preprocessor.load('Lineart') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - coarse=coarse, - ) - elif preprocessor_name == 'Lineart (anime)': - self.preprocessor.load('LineartAnime') - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - if 'anime' in preprocessor_name: - self.load_controlnet_weight('lineart_anime') - else: - self.load_controlnet_weight('lineart') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_shuffle( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - preprocessor_name: str, - ) -> list[PIL.Image.Image]: - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.preprocessor.load(preprocessor_name) - control_image = self.preprocessor( - image=image, - image_resolution=image_resolution, - ) - self.load_controlnet_weight('shuffle') - results = self.run_pipe( - prompt=self.get_prompt(prompt, 
additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results - - @torch.inference_mode() - def process_ip2p( - self, - image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - self.load_controlnet_weight('ip2p') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return [control_image] + results diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/dino_embedder.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/dino_embedder.py deleted file mode 100644 index d270a3dff5da7c8ed85124b1a63e12624326eb02..0000000000000000000000000000000000000000 --- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/dino_embedder.py +++ /dev/null @@ -1,20 +0,0 @@ -import numpy as np -from PIL import Image -from transformers import ViTFeatureExtractor, ViTModel - -from .abstract_embedder import AbstractImageEmbedder - - -class DinoEmbedder(AbstractImageEmbedder): - def __init__(self, device: str = "cpu", model_name: str = "facebook/dino-vitb8"): - super().__init__(device) - self.feature_extractor = ViTFeatureExtractor.from_pretrained(model_name) - self.model = ViTModel.from_pretrained(model_name).to(self.device) - - def embed(self, image: Image) -> np.ndarray: - inputs = self.feature_extractor(images=image, return_tensors="pt") - for key in inputs: - inputs[key] = inputs[key].to(self.device) - outputs = self.model(**inputs) - last_hidden_states = outputs.last_hidden_state.to("cpu").numpy() - return last_hidden_states diff --git a/spaces/better57/CHATGPT/assets/Kelpy-Codos.js b/spaces/better57/CHATGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/better57/CHATGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/bigcode/santacoder-demo/app.py b/spaces/bigcode/santacoder-demo/app.py deleted file mode 100644 index 0f15b69ada201f8d77b89519a1b23b6324202543..0000000000000000000000000000000000000000 --- a/spaces/bigcode/santacoder-demo/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed -from transformers import pipeline -import os -import torch - -description = """#

    🎅 SantaCoder: Code Generation

    -This is a demo to generate code with SantaCoder, -a 1.1B parameter model for code generation in Python, Java & JavaScript. The model can also do infilling, just specify where you would like the model to complete code -with the <FILL-HERE> token.""" - -token = os.environ["HUB_TOKEN"] -device="cuda:0" - - -FIM_PREFIX = "" -FIM_MIDDLE = "" -FIM_SUFFIX = "" -FIM_PAD = "" -EOD = "<|endoftext|>" - -GENERATION_TITLE= "

    Generated code:

    " - -tokenizer_fim = AutoTokenizer.from_pretrained("bigcode/santacoder", use_auth_token=token, padding_side="left") - -tokenizer_fim.add_special_tokens({ - "additional_special_tokens": [EOD, FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_PAD], - "pad_token": EOD, -}) - -tokenizer = AutoTokenizer.from_pretrained("bigcode/christmas-models", use_auth_token=token) -model = AutoModelForCausalLM.from_pretrained("bigcode/christmas-models", trust_remote_code=True, use_auth_token=token).to(device) -pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=device) - -def post_processing(prompt, completion): - completion = "" + completion + "" - prompt = "" + prompt + "" - code_html = f"


    {prompt}{completion}


    " - return GENERATION_TITLE + code_html - -def post_processing_fim(prefix, middle, suffix): - prefix = "" + prefix + "" - middle = "" + middle + "" - suffix = "" + suffix + "" - code_html = f"


    {prefix}{middle}{suffix}


    " - return GENERATION_TITLE + code_html - -def fim_generation(prompt, max_new_tokens, temperature): - prefix = prompt.split("")[0] - suffix = prompt.split("")[1] - [middle] = infill((prefix, suffix), max_new_tokens, temperature) - return post_processing_fim(prefix, middle, suffix) - -def extract_fim_part(s: str): - # Find the index of - start = s.find(FIM_MIDDLE) + len(FIM_MIDDLE) - stop = s.find(EOD, start) or len(s) - return s[start:stop] - -def infill(prefix_suffix_tuples, max_new_tokens, temperature): - if type(prefix_suffix_tuples) == tuple: - prefix_suffix_tuples = [prefix_suffix_tuples] - - prompts = [f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}" for prefix, suffix in prefix_suffix_tuples] - # `return_token_type_ids=False` is essential, or we get nonsense output. - inputs = tokenizer_fim(prompts, return_tensors="pt", padding=True, return_token_type_ids=False).to(device) - with torch.no_grad(): - outputs = model.generate( - **inputs, - do_sample=True, - temperature=temperature, - max_new_tokens=max_new_tokens, - pad_token_id=tokenizer.pad_token_id - ) - # WARNING: cannot use skip_special_tokens, because it blows away the FIM special tokens. - return [ - extract_fim_part(tokenizer_fim.decode(tensor, skip_special_tokens=False)) for tensor in outputs - ] - - -def code_generation(prompt, max_new_tokens, temperature=0.2, seed=42): - #set_seed(seed) - - if "" in prompt: - return fim_generation(prompt, max_new_tokens, temperature=0.2) - else: - completion = pipe(prompt, do_sample=True, top_p=0.95, temperature=temperature, max_new_tokens=max_new_tokens)[0]['generated_text'] - completion = completion[len(prompt):] - return post_processing(prompt, completion) - - -demo = gr.Blocks( - css=".gradio-container {background-color: #20233fff; color:white}" -) -with demo: - with gr.Row(): - _, colum_2, _ = gr.Column(scale=1), gr.Column(scale=6), gr.Column(scale=1) - with colum_2: - gr.Markdown(value=description) - code = gr.Code(lines=5, language="python", label="Input code", value="def all_odd_elements(sequence):\n \"\"\"Returns every odd element of the sequence.\"\"\"") - - with gr.Accordion("Advanced settings", open=False): - max_new_tokens= gr.Slider( - minimum=8, - maximum=1024, - step=1, - value=48, - label="Number of tokens to generate", - ) - temperature = gr.Slider( - minimum=0.1, - maximum=2.5, - step=0.1, - value=0.2, - label="Temperature", - ) - seed = gr.Slider( - minimum=0, - maximum=1000, - step=1, - label="Random seed to use for the generation" - ) - run = gr.Button() - output = gr.HTML(label="Generated code") - - event = run.click(code_generation, [code, max_new_tokens, temperature, seed], output, api_name="predict") - gr.HTML(label="Contact", value="contact") - -demo.launch() \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/depth.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/depth.py deleted file mode 100644 index 61a50459a4a3ed046ed1c4cdcbd914437026fc0d..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/depth.py +++ /dev/null @@ -1,166 +0,0 @@ -import math, os, subprocess -import cv2 -import hashlib -import numpy as np -import torch -import gc -import torchvision.transforms as T -from einops import rearrange, repeat -from PIL import Image -from infer import InferenceHelper -from midas.dpt_depth import DPTDepthModel -from midas.transforms import Resize, NormalizeImage, 
PrepareForNet -import torchvision.transforms.functional as TF -from .general_utils import checksum - -class DepthModel(): - def __init__(self, device): - self.adabins_helper = None - self.depth_min = 1000 - self.depth_max = -1000 - self.device = device - self.midas_model = None - self.midas_transform = None - - def load_adabins(self, models_path): - if not os.path.exists(os.path.join(models_path,'AdaBins_nyu.pt')): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(r"https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", models_path) - if checksum(os.path.join(models_path,'AdaBins_nyu.pt')) != "643db9785c663aca72f66739427642726b03acc6c4c1d3755a4587aa2239962746410d63722d87b49fc73581dbc98ed8e3f7e996ff7b9c0d56d0fbc98e23e41a": - raise Exception(r"Error while downloading AdaBins_nyu.pt. Please download from here: https://drive.google.com/file/d/1lvyZZbC9NLcS8a__YPcUP7rDiIpbRpoF and place in: " + models_path) - self.adabins_helper = InferenceHelper(models_path=models_path, dataset='nyu', device=self.device) - - def load_midas(self, models_path, half_precision=True): - if not os.path.exists(os.path.join(models_path, 'dpt_large-midas-2f21e586.pt')): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(r"https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", models_path) - if checksum(os.path.join(models_path,'dpt_large-midas-2f21e586.pt')) != "fcc4829e65d00eeed0a38e9001770676535d2e95c8a16965223aba094936e1316d569563552a852d471f310f83f597e8a238987a26a950d667815e08adaebc06": - raise Exception(r"Error while downloading dpt_large-midas-2f21e586.pt. Please download from here: https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt and place in: " + models_path) - - self.midas_model = DPTDepthModel( - path=f"{models_path}/dpt_large-midas-2f21e586.pt", - backbone="vitl16_384", - non_negative=True, - ) - normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - - self.midas_transform = T.Compose([ - Resize( - 384, 384, - resize_target=None, - keep_aspect_ratio=True, - ensure_multiple_of=32, - resize_method="minimal", - image_interpolation_method=cv2.INTER_CUBIC, - ), - normalization, - PrepareForNet() - ]) - - self.midas_model.eval() - if self.device == torch.device("cuda"): - self.midas_model = self.midas_model.to(memory_format=torch.channels_last) - if half_precision: - self.midas_model = self.midas_model.half() - self.midas_model.to(self.device) - - def predict(self, prev_img_cv2, anim_args, half_precision) -> torch.Tensor: - w, h = prev_img_cv2.shape[1], prev_img_cv2.shape[0] - - # predict depth with AdaBins - use_adabins = anim_args.midas_weight < 1.0 and self.adabins_helper is not None - if use_adabins: - MAX_ADABINS_AREA = 500000 - MIN_ADABINS_AREA = 448*448 - - # resize image if too large or too small - img_pil = Image.fromarray(cv2.cvtColor(prev_img_cv2.astype(np.uint8), cv2.COLOR_RGB2BGR)) - image_pil_area = w*h - resized = True - if image_pil_area > MAX_ADABINS_AREA: - scale = math.sqrt(MAX_ADABINS_AREA) / math.sqrt(image_pil_area) - depth_input = img_pil.resize((int(w*scale), int(h*scale)), Image.LANCZOS) # LANCZOS is good for downsampling - print(f" resized to {depth_input.width}x{depth_input.height}") - elif image_pil_area < MIN_ADABINS_AREA: - scale = math.sqrt(MIN_ADABINS_AREA) / math.sqrt(image_pil_area) - depth_input = img_pil.resize((int(w*scale), int(h*scale)), Image.BICUBIC) - print(f" resized to 
{depth_input.width}x{depth_input.height}") - else: - depth_input = img_pil - resized = False - - # predict depth and resize back to original dimensions - try: - with torch.no_grad(): - _, adabins_depth = self.adabins_helper.predict_pil(depth_input) - if resized: - adabins_depth = TF.resize( - torch.from_numpy(adabins_depth), - torch.Size([h, w]), - interpolation=TF.InterpolationMode.BICUBIC - ) - adabins_depth = adabins_depth.cpu().numpy() - adabins_depth = adabins_depth.squeeze() - except: - print(f" exception encountered, falling back to pure MiDaS") - use_adabins = False - torch.cuda.empty_cache() - - if self.midas_model is not None: - # convert image from 0->255 uint8 to 0->1 float for feeding to MiDaS - img_midas = prev_img_cv2.astype(np.float32) / 255.0 - img_midas_input = self.midas_transform({"image": img_midas})["image"] - - # MiDaS depth estimation implementation - sample = torch.from_numpy(img_midas_input).float().to(self.device).unsqueeze(0) - if self.device == torch.device("cuda"): - sample = sample.to(memory_format=torch.channels_last) - if half_precision: - sample = sample.half() - with torch.no_grad(): - midas_depth = self.midas_model.forward(sample) - midas_depth = torch.nn.functional.interpolate( - midas_depth.unsqueeze(1), - size=img_midas.shape[:2], - mode="bicubic", - align_corners=False, - ).squeeze() - midas_depth = midas_depth.cpu().numpy() - torch.cuda.empty_cache() - - # MiDaS makes the near values greater, and the far values lesser. Let's reverse that and try to align with AdaBins a bit better. - midas_depth = np.subtract(50.0, midas_depth) - midas_depth = midas_depth / 19.0 - - # blend between MiDaS and AdaBins predictions - if use_adabins: - depth_map = midas_depth*anim_args.midas_weight + adabins_depth*(1.0-anim_args.midas_weight) - else: - depth_map = midas_depth - - depth_map = np.expand_dims(depth_map, axis=0) - depth_tensor = torch.from_numpy(depth_map).squeeze().to(self.device) - else: - depth_tensor = torch.ones((h, w), device=self.device) - - return depth_tensor - - def save(self, filename: str, depth: torch.Tensor): - depth = depth.cpu().numpy() - if len(depth.shape) == 2: - depth = np.expand_dims(depth, axis=0) - self.depth_min = min(self.depth_min, depth.min()) - self.depth_max = max(self.depth_max, depth.max()) - print(f" depth min:{depth.min()} max:{depth.max()}") - denom = max(1e-8, self.depth_max - self.depth_min) - temp = rearrange((depth - self.depth_min) / denom * 255, 'c h w -> h w c') - temp = repeat(temp, 'h w 1 -> h w c', c=3) - Image.fromarray(temp.astype(np.uint8)).save(filename) - - def to(self, device): - self.device = device - self.midas_model.to(device) - if self.adabins_helper is not None: - self.adabins_helper.to(device) - gc.collect() - torch.cuda.empty_cache() diff --git a/spaces/bioriAsaeru/text-to-voice/BigFishGamesLoaderv20exe What You Need to Know About the New and Improved Game Manager.md b/spaces/bioriAsaeru/text-to-voice/BigFishGamesLoaderv20exe What You Need to Know About the New and Improved Game Manager.md deleted file mode 100644 index be97ce31597b5e34c0012e6335c675a3f99ed8df..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/BigFishGamesLoaderv20exe What You Need to Know About the New and Improved Game Manager.md +++ /dev/null @@ -1,6 +0,0 @@ -

    BigFishGamesLoaderv20exe





    -
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Customize Your Xcom 2 Squad with the Soldier Editor Tool.md b/spaces/bioriAsaeru/text-to-voice/Customize Your Xcom 2 Squad with the Soldier Editor Tool.md deleted file mode 100644 index 2290d45f24812cc06c85a4ba1b80963fb26ffada..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Customize Your Xcom 2 Squad with the Soldier Editor Tool.md +++ /dev/null @@ -1,35 +0,0 @@ - -

DefaultClassData.ini: covers the four classes and their growth as they gain ranks. You can adjust how much aim and health a soldier gains per promotion, allow soldiers to carry weapons from other classes, adjust who gets which perks, and more.

    -

XCOM 2 Soldier Editor





    -

DefaultNameList.ini: a very important file that lets you decide which random names the game uses when it creates new recruits. You can also adjust the chance of new soldiers having hats, props, and beards here.

    -

    Finally, open up DefaultGameData_WeaponData.ini again and find AdvTrooperM1_idealRange. This entire section governs how close or far each enemy wants to be from your soldiers. For example, increasing the first value to AdvTrooperM1_idealRange=12 encourages all ADVENT Troopers to stay around 12 tiles away from your squad. Setting AdvMEC_M1_idealRange=1 encourages ADVENT MECs to charge your position, ending up 1 tile away from you in close combat.

    -

    A soldier is an elite XCOM operative who has military training and executes combat missions in XCOM: Enemy Unknown. Soldiers are managed and recruited through the Barracks. Between missions soldiers can be seen participating in a variety of off-duty activities on the various levels of the Barracks facility while in the "ant farm" view.

    -

    -

    As Earth's first and last line of defense against the Alien invaders, XCOM's soldiers are deployed by the Commander (player character) to engage in ground combat. They fulfill a variety of roles based on their class and abilities to complete the objectives at hand.

    -

    XCOM starts out with a group of twelve Rookie soldiers. Additional soldiers can be recruited via the Barracks and arrive at headquarters three days later. Each recruit costs §10 on Easy and Normal difficulties, and §15 on Classic and Impossible. Additional soldiers can also be received as a mission reward from the Council.

    -

    When recruited, a soldier's gender and nationality are randomly selected. When acquired as a mission reward, a soldier originates from the country where the mission took place. Gender and nationality restrict the possible outcomes for a soldier's randomly-generated name and physical appearance. Upon attaining the rank of Sergeant, soldiers also receive a nickname, randomly selected from a list pertaining to their class and gender.

    -

    Other than for an International Service Cross medal option in XCOM: Enemy Within, a soldier's nationality is completely inconsequential to gameplay. Gender is also trivial, except for when obtaining the "Flight of the Valkyries" achievement. While a soldier's most inconsequential attributes are random, factors related to combat such as initial stats and abilities, are fixed.

    -

    The player can customize a soldier's name, nickname, voice, and appearance, but their nationality and gender cannot be changed. Additional appearance options (such as armor decoration and tinting or new hair/helmet choices) are available through purchasable downloadable content, such as the Elite Soldier Pack, or by editing the game's XComGame.int file.

    -

    Soldiers start out as Rookies with basic abilities. As they earn experience (XP) by killing enemies and completing missions, soldiers increase in rank, rewarding them with increased stats and additional abilities (based on the soldier's class).

    -

    Soldiers have several specializations available, known as classes. Upon receiving a promotion to the rank of Squaddie, soldiers are randomly assigned one of four classes (weighted slightly towards the class XCOM has fewest of) that determines the weapons and abilities they can use:

    -

    Soldiers have access to a variety of general and class-related abilities in combat. General abilities allow soldiers to perform basic actions such as initiating Overwatch, tossing a Frag Grenade, or using a Medikit. Class-related abilities allow soldiers to perform specialized actions such as a Heavy pinning down an enemy with Suppression while protected by a Support's Smoke Grenade, or a Sniper using Double Tap to take out an alien that an Assault has Flushed out into the open.

    -

    Initially, soldiers are fielded in squads of one to four units. The Squad Size I and Squad Size II upgrades available at the Officer Training School increase the squad size to a maximum of five and six soldiers, respectively.

    -

    The squad leader is determined when selecting soldiers to board the Skyranger before embarking on a mission. If one soldier is higher in rank than any other in the squad, the role of squad leader is assigned to them. If two or more soldiers share the highest rank, other criteria such as position in the squad selection screen, number of missions completed, or number of kills are considered. The squad leader can be identified by a yellow star over their rank icon in battle. The Lead By Example EW upgrade allows them to substitute their Will for that of all nearby lower-Will squadmates.

    -

    Soldiers involved in combat are susceptible to injury and death. Whenever a soldier incurs damage exceeding that of the HP bonus granted by their armor, the soldier is flagged as wounded. Upon return to base, they become unavailable for subsequent missions while recovering in the Infirmary. Procuring the Rapid Recovery training from the Officer Training School greatly reduces the amount of time soldiers spend out of action due to injuries.

    -

    Injured
    If a soldier is injured during combat, for the remainder of the mission they are subject to a Will penalty, visible as "Battle Fatigue" when viewing the soldier info interface. If injured by less than 50% of their total health, the penalty is -5 Will. If injured by more than 50% of their total health, the penalty is -10 Will.

    -

    Gravely Wounded
    When a soldier is heavily (but not critically) wounded, they are considered to be gravely wounded. Other than an increased recuperation time in the Infirmary, this status is the same as wounded. It does not require stabilization or entail critically wounded's Will reduction penalty.

    -

    Critically Wounded
    When a soldier loses all of their HP during a mission, they either die or become critically wounded. Soldiers of a higher rank are more likely to be critically wounded instead of dying. Critically wounded soldiers must be stabilized or revived with a Medikit or they will bleed out and die in three turns (including the turn they were injured), if the mission is not completed by then. Critically wounded soldiers, even if saved, incur a permanent -10 (-15 on Classic or Impossible) reduction to Will. In XCOM: Enemy Within, this penalty can be avoided with the Secondary Heart Gene Mod.

    -

    Critically wounded soldiers are not targeted by enemies, but can be killed before bleeding out in the event of an explosion. Take care while fighting enemies that are equipped with explosives or explode upon death (such as Cyberdiscs), as well as near combustible objects in the environment (such as vehicles).

    -

    Death
    When a soldier dies during a mission, that soldier is permanently removed from the unit roster and their information is recorded on the Memorial Wall in the Barracks. This information includes the soldier's rank and name (and nickname if applicable), total kills, total missions performed, the name and in-game date of the mission they died on. In XCOM: Enemy Within, the information also includes how the soldier was killed and lists any medals they may have received. Their medals are returned to XCOM and can to be reissued to another soldier in three days.

    -

    Fellow squadmates are subject to a "Fallen Comrade" penalty of -5 Will if they witness a soldier being killed in action during a mission. In the event of multiple deaths, the penalty is -5 Will per soldier killed.

    -

This console command will set the stat of the specified soldier to the specified value. See commands.gg/xcom2/setsoldierstat for stat IDs. If you are using the WOTC DLC, you will also need to specify the 0/1 argument at the end of the command.

    -

    The ability to rename your soldiers in XCOM 2 and its predecessor ensures you're commanding a squad with whom you have a very close attachment. However, in addition to using the names of friends and family members to add a personal touch, there are certain names that can be bestowed upon soldiers to unlock legendary XCOM heroes.

    -

    These Hero Characters typically come in the form of fully-upgraded soldiers of a pre-determined class that benefit from incredibly high stats and are outfitted with the very best of the game's advanced equipment. They can be recruited for free, but are so powerful that you'll be warned that adding them to your roster will immediately disable achievements for the current playthrough.

    -

    Hero Characters can be unlocked at any time during the game by heading to the Soldier roster, selecting any soldier to customise and changing their Character Info. Select the First Name field and enter the name listed below and then do the same for the Last Name field.

    -

    Be aware that choosing to call forth legendary XCOM heroes in this way will overwrite all previous soldier stats, characteristics, abilities and customisations, and their name and basic physical attributes cannot be altered once summoned. They can still be dismissed from your roster or killed in combat though.

    -

Strategy game legend Sid "Godfather" Meier is a Psi Operative Magnus who has access to every Psionic ability in the game, in addition to an incredibly high Psi stat and some super-advanced weaponry and armour. It is also possible to have a more vanilla version of Sid Meier appear in your game as a standard soldier for hire.

    -

The first part includes a series of XCOM: Enemy Unknown cheats that can be activated during a mission, while the second part contains an editor that modifies the stats of a soldier on the player's team.

    -

Using this trainer, players get unlimited movement points and unlimited health for their soldiers, or they can modify a soldier's stats such as HP (health points), Will, and even the number of kills the soldier has recorded.

    -

The game is divided into missions in which the player controls a team of human soldiers fighting against the alien invasion. Missions take place across various continents and environments; between missions, players return to their main base, where they can research new weapons, hire new soldiers, and improve the base by adding new structures.

    -
    -
    \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MUSICGEN.md b/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MUSICGEN.md deleted file mode 100644 index 606ce85808a428432f4e77564fb97dcade3851a3..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/MUSICGEN.md +++ /dev/null @@ -1,362 +0,0 @@ -# MusicGen: Simple and Controllable Music Generation - -AudioCraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. -MusicGen is a single stage auto-regressive Transformer model trained over a 32kHz -EnCodec tokenizer with 4 codebooks sampled at 50 Hz. -Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require -a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing -a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive -steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - - - Open In Colab - - - Open in HugginFace - -
    - -We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset -of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data. - - -## Model Card - -See [the model card](../model_cards/MUSICGEN_MODEL_CARD.md). - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - -AudioCraft requires a GPU with at least 16 GB of memory for running inference with the medium-sized models (~1.5B parameters). - -## Usage - -We offer a number of way to interact with MusicGen: -1. A demo is also available on the [`facebook/MusicGen` Hugging Face Space](https://huggingface.co/spaces/facebook/MusicGen) -(huge thanks to all the HF team for their support). -2. You can run the extended demo on a Colab: -[colab notebook](https://colab.research.google.com/drive/1JlTOjB-G0A2Hz3h8PK63vLZk4xdCI5QB?usp=sharing) -3. You can use the gradio demo locally by running [`python -m demos.musicgen_app --share`](../demos/musicgen_app.py). -4. You can play with MusicGen by running the jupyter notebook at [`demos/musicgen_demo.ipynb`](../demos/musicgen_demo.ipynb) locally (if you have a GPU). -5. Finally, checkout [@camenduru Colab page](https://github.com/camenduru/MusicGen-colab) -which is regularly updated with contributions from @camenduru and the community. - - -## API - -We provide a simple API and 4 pre-trained models. The pre trained models are: -- `facebook/musicgen-small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small) -- `facebook/musicgen-medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium) -- `facebook/musicgen-melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody) -- `facebook/musicgen-large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large) - -We observe the best trade-off between quality and compute with the `facebook/musicgen-medium` or `facebook/musicgen-melody` model. -In order to use MusicGen locally **you must have a GPU**. We recommend 16GB of memory, but smaller -GPUs will be able to generate short sequences, or longer sequences with the `facebook/musicgen-small` model. - -See after a quick example for using the API. - -```python -import torchaudio -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('facebook/musicgen-melody') -model.set_generation_params(duration=8) # generate 8 seconds. -wav = model.generate_unconditional(4) # generates 4 unconditional audio samples -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav = model.generate(descriptions) # generates 3 samples. - -melody, sr = torchaudio.load('./assets/bach.mp3') -# generates using the melody from the given audio and the provided descriptions. -wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - -## 🤗 Transformers Usage - -MusicGen is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring minimal dependencies -and additional packages. Steps to get started: - -1. 
First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: - -```shell -pip install git+https://github.com/huggingface/transformers.git -``` - -2. Run the following Python code to generate text-conditional audio samples: - -```py -from transformers import AutoProcessor, MusicgenForConditionalGeneration - - -processor = AutoProcessor.from_pretrained("facebook/musicgen-small") -model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") - -inputs = processor( - text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], - padding=True, - return_tensors="pt", -) - -audio_values = model.generate(**inputs, max_new_tokens=256) -``` - -3. Listen to the audio samples either in an ipynb notebook: - -```py -from IPython.display import Audio - -sampling_rate = model.config.audio_encoder.sampling_rate -Audio(audio_values[0].numpy(), rate=sampling_rate) -``` - -Or save them as a `.wav` file using a third-party library, e.g. `scipy`: - -```py -import scipy - -sampling_rate = model.config.audio_encoder.sampling_rate -scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) -``` - -For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the -[MusicGen docs](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen) or the hands-on -[Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb). - - -## Training - -The [MusicGenSolver](../audiocraft/solvers/musicgen.py) implements MusicGen's training pipeline. -It defines an autoregressive language modeling task over multiple streams of discrete tokens -extracted from a pre-trained EnCodec model (see [EnCodec documentation](./ENCODEC.md) -for more details on how to train such model). - -Note that **we do NOT provide any of the datasets** used for training MusicGen. -We provide a dummy dataset containing just a few examples for illustrative purposes. - -Please read first the [TRAINING documentation](./TRAINING.md), in particular the Environment Setup section. - -### Example configurations and grids - -We provide configurations to reproduce the released models and our research. -MusicGen solvers configuration are available in [config/solver/musicgen](../config/solver/musicgen), -in particular: -* MusicGen base model for text-to-music: -[`solver=musicgen/musicgen_base_32khz`](../config/solver/musicgen/musicgen_base_32khz.yaml) -* MusicGen model with chromagram-conditioning support: -[`solver=musicgen/musicgen_melody_32khz`](../config/solver/musicgen/musicgen_melody_32khz.yaml) - -We provide 3 different scales, e.g. `model/lm/model_scale=small` (300M), or `medium` (1.5B), and `large` (3.3B). - -Please find some example grids to train MusicGen at -[audiocraft/grids/musicgen](../audiocraft/grids/musicgen/). - -```shell -# text-to-music -dora grid musicgen.musicgen_base_32khz --dry_run --init -# melody-guided music generation -dora grid musicgen.musicgen_melody_base_32khz --dry_run --init -# Remove the `--dry_run --init` flags to actually schedule the jobs once everything is setup. -``` - -### Music dataset and metadata - -MusicGen's underlying dataset is an AudioDataset augmented with music-specific metadata. -The MusicGen dataset implementation expects the metadata to be available as `.json` files -at the same location as the audio files. Learn more in the [datasets section](./DATASETS.md). 
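For illustration, a per-track metadata file can be written next to the audio it describes. This is only a minimal sketch: the path and the field names used here (`title`, `artist`, `description`, `keywords`) are assumptions chosen for the example, and the authoritative schema is the one documented in the [datasets section](./DATASETS.md).

```python
# Minimal sketch: create a metadata JSON next to a training audio file.
# NOTE: the path and field names below are illustrative assumptions only;
# see ./DATASETS.md for the exact metadata schema the dataset loader expects.
import json
from pathlib import Path

audio_path = Path("dataset/train/track_0001.wav")  # hypothetical track
metadata = {
    "title": "Untitled Track",
    "artist": "Unknown Artist",
    "description": "energetic rock with driving drums and distorted guitars",
    "keywords": ["rock", "energetic", "guitar"],
}
# Write dataset/train/track_0001.json alongside the audio file.
audio_path.with_suffix(".json").write_text(json.dumps(metadata, indent=2))
```

Because the `.json` file sits at the same location as `track_0001.wav`, the dataset implementation described above can pick it up at load time.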
- - -### Audio tokenizers - -We support a number of audio tokenizers: either pretrained EnCodec models, [DAC](https://github.com/descriptinc/descript-audio-codec), or your own models. -The tokenizer is controlled with the setting `compression_model_checkpoint`. -For instance, - -```bash -# Using the 32kHz EnCodec trained on music -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained/facebook/encodec_32khz \ - transformer_lm.n_q=4 transformer_lm.card=2048 - -# Using DAC -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained/dac_44khz \ - transformer_lm.n_q=9 transformer_lm.card=1024 \ - 'codebooks_pattern.delay.delays=[0,1,2,3,4,5,6,7,8]' - -# Using your own model after export (see ENCODEC.md) -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained//checkpoints/my_audio_lm/compression_state_dict.bin \ - transformer_lm.n_q=... transformer_lm.card=... - -# Using your own model from its training checkpoint. -dora run solver=musicgen/debug \ - compression_model_checkpoint=//sig/SIG \ # where SIG is the Dora signature of the EnCodec XP. - transformer_lm.n_q=... transformer_lm.card=... -``` - -**Warning:** you are responsible for setting the proper value for `transformer_lm.n_q` and `transformer_lm.card` (cardinality of the codebooks). You also have to update the codebook_pattern to match `n_q` as shown in the example for using DAC. . - - -### Fine tuning existing models - -You can initialize your model to one of the pretrained models by using the `continue_from` argument, in particular - -```bash -# Using pretrained MusicGen model. -dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=//pretrained/facebook/musicgen-medium conditioner=text2music - -# Using another model you already trained with a Dora signature SIG. -dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=//sig/SIG conditioner=text2music - -# Or providing manually a path -dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=/checkpoints/my_other_xp/checkpoint.th -``` - -**Warning:** You are responsible for selecting the other parameters accordingly, in a way that make it compatible - with the model you are fine tuning. Configuration is NOT automatically inherited from the model you continue from. In particular make sure to select the proper `conditioner` and `model/lm/model_scale`. - -**Warning:** We currently do not support fine tuning a model with slightly different layers. If you decide - to change some parts, like the conditioning or some other parts of the model, you are responsible for manually crafting a checkpoint file from which we can safely run `load_state_dict`. - If you decide to do so, make sure your checkpoint is saved with `torch.save` and contains a dict - `{'best_state': {'model': model_state_dict_here}}`. Directly give the path to `continue_from` without a `//pretrained/` prefix. - -### Caching of EnCodec tokens - -It is possible to precompute the EnCodec tokens and other metadata. -An example of generating and using this cache provided in the [musicgen.musicgen_base_cached_32khz grid](../audiocraft/grids/musicgen/musicgen_base_cached_32khz.py). - -### Evaluation stage - -By default, evaluation stage is also computing the cross-entropy and the perplexity over the -evaluation dataset. Indeed the objective metrics used for evaluation can be costly to run -or require some extra dependencies. 
Please refer to the [metrics documentation](./METRICS.md) -for more details on the requirements for each metric. - -We provide an off-the-shelf configuration to enable running the objective metrics -for audio generation in -[config/solver/musicgen/evaluation/objective_eval](../config/solver/musicgen/evaluation/objective_eval.yaml). - -One can then activate evaluation the following way: -```shell -# using the configuration -dora run solver=musicgen/debug solver/musicgen/evaluation=objective_eval -# specifying each of the fields, e.g. to activate KL computation -dora run solver=musicgen/debug evaluate.metrics.kld=true -``` - -See [an example evaluation grid](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py). - -### Generation stage - -The generation stage allows to generate samples conditionally and/or unconditionally and to perform -audio continuation (from a prompt). We currently support greedy sampling (argmax), sampling -from softmax with a given temperature, top-K and top-P (nucleus) sampling. The number of samples -generated and the batch size used are controlled by the `dataset.generate` configuration -while the other generation parameters are defined in `generate.lm`. - -```shell -# control sampling parameters -dora run solver=musicgen/debug generate.lm.gen_duration=10 generate.lm.use_sampling=true generate.lm.top_k=15 -``` - -#### Listening to samples - -Note that generation happens automatically every 25 epochs. You can easily access and -compare samples between models (as long as they are trained) on the same dataset using the -MOS tool. For that first `pip install Flask gunicorn`. Then -``` -gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile - -``` -And access the tool at [https://127.0.0.1:8895](https://127.0.0.1:8895). - -### Playing with the model - -Once you have launched some experiments, you can easily get access -to the Solver with the latest trained model using the following snippet. - -```python -from audiocraft.solvers.musicgen import MusicGen - -solver = MusicGen.get_eval_solver_from_sig('SIG', device='cpu', batch_size=8) -solver.model -solver.dataloaders -``` - -### Importing / Exporting models - -We do not support currently loading a model from the Hugging Face implementation or exporting to it. -If you want to export your model in a way that is compatible with `audiocraft.models.MusicGen` -API, you can run: - -```python -from audiocraft.utils import export -from audiocraft import train -xp = train.main.get_xp_from_sig('SIG_OF_LM') -export.export_lm(xp.folder / 'checkpoint.th', '/checkpoints/my_audio_lm/state_dict.bin') -# You also need to bundle the EnCodec model you used !! -## Case 1) you trained your own -xp_encodec = train.main.get_xp_from_sig('SIG_OF_ENCODEC') -export.export_encodec(xp_encodec.folder / 'checkpoint.th', '/checkpoints/my_audio_lm/compression_state_dict.bin') -## Case 2) you used a pretrained model. Give the name you used without the //pretrained/ prefix. -## This will actually not dump the actual model, simply a pointer to the right model to download. -export.export_pretrained_compression_model('facebook/encodec_32khz', '/checkpoints/my_audio_lm/compression_state_dict.bin') -``` - -Now you can load your custom model with: -```python -import audiocraft.models -musicgen = audiocraft.models.MusicGen.get_pretrained('/checkpoints/my_audio_lm/') -``` - - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). 
- -## FAQ - -#### I need help on Windows - -@FurkanGozukara made a complete tutorial for [AudioCraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4) - -#### I need help for running the demo on Colab - -Check [@camenduru tutorial on YouTube](https://www.youtube.com/watch?v=EGfxuTy9Eeo). - -#### What are top-k, top-p, temperature and classifier-free guidance? - -Check out [@FurkanGozukara tutorial](https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/AI-Music-Generation-Audiocraft-Tutorial.md#more-info-about-top-k-top-p-temperature-and-classifier-free-guidance-from-chatgpt). - -#### Should I use FSDP or autocast ? - -The two are mutually exclusive (because FSDP does autocast on its own). -You can use autocast up to 1.5B (medium), if you have enough RAM on your GPU. -FSDP makes everything more complex but will free up some memory for the actual -activations by sharding the optimizer state. - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - - -## License - -See license information in the [model card](../model_cards/MUSICGEN_MODEL_CARD.md). - - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ diff --git a/spaces/brainblow/MusiCreator/audiocraft/__init__.py b/spaces/brainblow/MusiCreator/audiocraft/__init__.py deleted file mode 100644 index 6b8594f470200ff5c000542ef115375ed69b749c..0000000000000000000000000000000000000000 --- a/spaces/brainblow/MusiCreator/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import data, modules, models - -__version__ = '0.0.2a2' diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py deleted file mode 100644 index b867cc865e5ac4d7b70221da141894efd7cbd75c..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/projects/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/projects/README.md deleted file mode 100644 index 95afe7ff8c8a9bd2f56621fcc3c1bdac11c256a9..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/projects/README.md +++ /dev/null @@ -1,2 +0,0 @@ - -Projects live in the [`projects` directory](../../projects) under the root of this repository, but not here. 
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/README.md deleted file mode 100644 index 0a525e00e643017fc971566931936f1573d9b47c..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/ViTDet/README.md +++ /dev/null @@ -1,364 +0,0 @@ -# ViTDet: Exploring Plain Vision Transformer Backbones for Object Detection - -Yanghao Li, Hanzi Mao, Ross Girshick†, Kaiming He† - -[[`arXiv`](https://arxiv.org/abs/2203.16527)] [[`BibTeX`](#CitingViTDet)] - -In this repository, we provide configs and models in Detectron2 for ViTDet as well as MViTv2 and Swin backbones with our implementation and settings as described in [ViTDet](https://arxiv.org/abs/2203.16527) paper. - - -## Pretrained Models - -### COCO - -#### Mask R-CNN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | pre-train | train time (s/im) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViTDet, ViT-B | IN1K, MAE | 0.314 | 0.079 | 10.9 | 51.6 | 45.9 | 325346929 | model |
| ViTDet, ViT-L | IN1K, MAE | 0.603 | 0.125 | 20.9 | 55.5 | 49.2 | 325599698 | model |
| ViTDet, ViT-H | IN1K, MAE | 1.098 | 0.178 | 31.5 | 56.7 | 50.2 | 329145471 | model |
    - -#### Cascade Mask R-CNN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | pre-train | train time (s/im) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Swin-B | IN21K, sup | 0.389 | 0.077 | 8.7 | 53.9 | 46.2 | 342979038 | model |
| Swin-L | IN21K, sup | 0.508 | 0.097 | 12.6 | 55.0 | 47.2 | 342979186 | model |
| MViTv2-B | IN21K, sup | 0.475 | 0.090 | 8.9 | 55.6 | 48.1 | 325820315 | model |
| MViTv2-L | IN21K, sup | 0.844 | 0.157 | 19.7 | 55.7 | 48.3 | 325607715 | model |
| MViTv2-H | IN21K, sup | 1.655 | 0.285 | 18.4* | 55.9 | 48.3 | 326187358 | model |
| ViTDet, ViT-B | IN1K, MAE | 0.362 | 0.089 | 12.3 | 54.0 | 46.7 | 325358525 | model |
| ViTDet, ViT-L | IN1K, MAE | 0.643 | 0.142 | 22.3 | 57.6 | 50.0 | 328021305 | model |
| ViTDet, ViT-H | IN1K, MAE | 1.137 | 0.196 | 32.9 | 58.7 | 51.0 | 328730692 | model |
    - - -### LVIS - -#### Mask R-CNN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | pre-train | train time (s/im) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViTDet, ViT-B | IN1K, MAE | 0.317 | 0.085 | 14.4 | 40.2 | 38.2 | 329225748 | model |
| ViTDet, ViT-L | IN1K, MAE | 0.576 | 0.137 | 24.7 | 46.1 | 43.6 | 329211570 | model |
| ViTDet, ViT-H | IN1K, MAE | 1.059 | 0.186 | 35.3 | 49.1 | 46.0 | 332434656 | model |
    - -#### Cascade Mask R-CNN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | pre-train | train time (s/im) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Swin-B | IN21K, sup | 0.368 | 0.090 | 11.5 | 44.0 | 39.6 | 329222304 | model |
| Swin-L | IN21K, sup | 0.486 | 0.105 | 13.8 | 46.0 | 41.4 | 329222724 | model |
| MViTv2-B | IN21K, sup | 0.475 | 0.100 | 11.8 | 46.3 | 42.0 | 329477206 | model |
| MViTv2-L | IN21K, sup | 0.844 | 0.172 | 21.0 | 49.4 | 44.2 | 329661552 | model |
| MViTv2-H | IN21K, sup | 1.661 | 0.290 | 21.3* | 49.5 | 44.1 | 330445165 | model |
| ViTDet, ViT-B | IN1K, MAE | 0.356 | 0.099 | 15.2 | 43.0 | 38.9 | 329226874 | model |
| ViTDet, ViT-L | IN1K, MAE | 0.629 | 0.150 | 24.9 | 49.2 | 44.5 | 329042206 | model |
| ViTDet, ViT-H | IN1K, MAE | 1.100 | 0.204 | 35.5 | 51.5 | 46.6 | 332552778 | model |
    - -Note: Unlike the system-level comparisons in the paper, these models use a lower resolution (1024 instead of 1280) and standard NMS (instead of soft NMS). As a result, they have slightly lower box and mask AP. - -We observed higher variance on LVIS evalution results compared to COCO. For example, the standard deviations of box AP and mask AP were 0.30% (compared to 0.10% on COCO) when we trained ViTDet, ViT-B five times with varying random seeds. - -The above models were trained and measured on 8-node with 64 NVIDIA A100 GPUs in total. *: Activation checkpointing is used. - - -## Training -All configs can be trained with: - -``` -../../tools/lazyconfig_train_net.py --config-file configs/path/to/config.py -``` -By default, we use 64 GPUs with batch size as 64 for training. - -## Evaluation -Model evaluation can be done similarly: -``` -../../tools/lazyconfig_train_net.py --config-file configs/path/to/config.py --eval-only train.init_checkpoint=/path/to/model_checkpoint -``` - - -## Citing ViTDet - -If you use ViTDet, please use the following BibTeX entry. - -```BibTeX -@article{li2022exploring, - title={Exploring plain vision transformer backbones for object detection}, - author={Li, Yanghao and Mao, Hanzi and Girshick, Ross and He, Kaiming}, - journal={arXiv preprint arXiv:2203.16527}, - year={2022} -} -``` diff --git a/spaces/bruno16/massa_qa/config.py b/spaces/bruno16/massa_qa/config.py deleted file mode 100644 index 8c7ba27fa196bebf0c480323f0a4adf63407acf3..0000000000000000000000000000000000000000 --- a/spaces/bruno16/massa_qa/config.py +++ /dev/null @@ -1,20 +0,0 @@ -"""Configuration for the LLM Apps Course""" -from types import SimpleNamespace - -TEAM = None -PROJECT = "massa" -JOB_TYPE = "production" - -default_config = SimpleNamespace( - project=PROJECT, - entity=TEAM, - job_type=JOB_TYPE, - vector_store_artifact="cir-neige/massa/vector_store_massa:latest", - chat_prompt_artifact="cir-neige/massa/chat_prompt:latest", - chat_temperature=0.5, #0.3 - max_fallback_retries=3, -## model_name="gpt-4", - model_name="gpt-3.5-turbo", - eval_model="gpt-3.5-turbo", - eval_artifact="cir-neige/massa/generated_examples:v0", -) diff --git a/spaces/cadige/01ST-CSV-Dataset-Analyzer/README.md b/spaces/cadige/01ST-CSV-Dataset-Analyzer/README.md deleted file mode 100644 index cd7406111446d604d041583053cb3abefd833365..0000000000000000000000000000000000000000 --- a/spaces/cadige/01ST-CSV-Dataset-Analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 01ST CSV Dataset Analyzer -emoji: 🏢 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py b/spaces/caffeinum/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py deleted file mode 100644 index 815eca1905b7962a2314f6af3b3ab5daeb74a009..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/op_gpu/fused_act.py +++ /dev/null @@ -1,119 +0,0 @@ -import os - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - "fused", - sources=[ - os.path.join(module_path, "fused_bias_act.cpp"), - os.path.join(module_path, "fused_bias_act_kernel.cu"), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - 
@staticmethod - def forward(ctx, grad_output, out, bias, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output.contiguous(), empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - if bias: - grad_bias = grad_input.sum(dim).detach() - - else: - grad_bias = empty - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input.contiguous(), gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - ctx.bias = bias is not None - - if bias is None: - bias = empty - - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.bias, ctx.negative_slope, ctx.scale - ) - - if not ctx.bias: - grad_bias = None - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - if bias: - self.bias = nn.Parameter(torch.zeros(channel)) - - else: - self.bias = None - - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5): - if input.device.type == "cpu": - if bias is not None: - rest_dim = [1] * (input.ndim - bias.ndim - 1) - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=0.2 - ) - * scale - ) - - else: - return F.leaky_relu(input, negative_slope=0.2) * scale - - else: - return FusedLeakyReLUFunction.apply(input.contiguous(), bias, negative_slope, scale) diff --git a/spaces/cahya/persona-chatbot/app/js/chatbot.js b/spaces/cahya/persona-chatbot/app/js/chatbot.js deleted file mode 100644 index 3b0b3a4605a962848baea97b1c277ede69e25dcb..0000000000000000000000000000000000000000 --- a/spaces/cahya/persona-chatbot/app/js/chatbot.js +++ /dev/null @@ -1,403 +0,0 @@ -updateValue = function(id, value) { - document.getElementById(id).innerText = value; -} - -htmlToElement = function(html) { - let template = document.createElement('template'); - html = html.trim(); // Never return a text node of whitespace as the result - template.innerHTML = html; - return template.content; -} - -let websocket = null; -let currentPersonaID = null; -let persona_ids = {}; - -pageSetup = function() { - // const users = document.querySelector('.users'); - const userInput = document.querySelector('.user-input'); - const userInputButton = document.querySelector('.user-input-button'); - const serverMessageValue = document.querySelector('.server-message-value'); - const messages = document.getElementById('chat-messages'); - const friends = document.getElementById('friends'); - websocket = new WebSocket("wss://gpt2-chat.ai-research.id/"); - //websocket = new 
WebSocket("ws://localhost:8502/");
-    let persona_list;
-
-    let getParameters = function() {
-        return {
-            "do_sample": document.getElementById("doSample").checked,
-            "min_length": parseInt(document.getElementById("minLength").value),
-            "max_length": parseInt(document.getElementById("maxLength").value),
-            "temperature": parseFloat(document.getElementById("temperature").value),
-            "top_k": parseInt(document.getElementById("topK").value),
-            "top_p": parseFloat(document.getElementById("topP").value),
-        };
-    }
-
-    let createMessage = function (message, image, bot) {
-        let message_template = "";
-        if(bot) {
-            message_template += '
    '; - message_template += ' '; - } - else { - message_template += '
    '; - message_template += ' '; - } - message_template += '
    ' + message; - message_template += '
    '; - message_template += '
    '; - message_template += '
    '; - return message_template; - } - - let createFriends = function (persona_list) { - html_template = ''; - for (let i=0; i'; - html_template += '

    '; - html_template += ' ' + persona_list[i]["name"] + ''; - html_template += ' ' + persona_list[i]["email"]+ ''; - html_template += ' ' + persona_list[i]["id"]+ ''; - html_template += '

    '; - html_template += '
    '; - html_template += '
    '; - } - return html_template; - } - - let hoverMesssagePhoto = function (persona_id) { - let id = '#chat_message_' + persona_id; - let message_photo = $(id + ' .message:last-child img'); - message_photo.hover(function () { - let profile_photo_zoom = $('#photo-block img'); - profile_photo_zoom[0].src = message_photo[0].src; - $('#photo-block').fadeIn(); - }, function () { - $('#photo-block').fadeOut(800); - }) - } - - let processUserInput = function (userInput) { - let parameters = getParameters(); - parameters["action"] = "talk"; - parameters["persona_id"] = currentPersonaID; - parameters["utterance"] = userInput.value; - websocket.send(JSON.stringify(parameters)); - let message = createMessage(userInput.value, persona_ids[currentPersonaID]["image"], false); - const element = htmlToElement(message).firstChild; - userInput.value = ""; - let chat_message = $('#chat_message_' + currentPersonaID)[0]; - chat_message.appendChild(element); - const margin_top = element.childNodes[3].offsetHeight - 25; - element.childNodes[1].style = "margin-top:" + margin_top + "px"; - chat_message.scrollIntoView({behavior: "smooth", block: "end", inline: "nearest"}); - hoverMesssagePhoto(currentPersonaID); - } - - userInputButton.onclick = function () { - processUserInput(userInput); - } - - userInput.addEventListener("keyup", function(event) { - if (event.keyCode === 13) { - // Cancel the default action, if needed - event.preventDefault(); - processUserInput(userInput); - } - }); - - websocket.onmessage = function (event) { - let data = JSON.parse(event.data); - switch (data.type) { - case 'connection': - console.log(data.value) - websocket.send(JSON.stringify({action: 'dialog', personality: []})); - break; - case 'state': - console.log("stat: " + data.value) - break; - case 'users': - serverMessageValue.textContent = ( - data.count.toString() + " user" + - (data.count === 1 ? 
"" : "s") + " online"); - break; - case 'dialog': - console.log(data.message) - break; - case 'talk': - case 'persona_greeting': - let message = createMessage(data.message, persona_ids[currentPersonaID]["image"], true); - const element = htmlToElement(message).firstChild; - let chat_message = $('#chat_message_' + currentPersonaID)[0]; - chat_message.appendChild(element); - margin_top = element.childNodes[3].offsetHeight - 25; - element.childNodes[1].style = "margin-top:" + margin_top + "px"; - chat_message.scrollIntoView({behavior: "smooth", block: "end", inline: "nearest"}); - hoverMesssagePhoto(currentPersonaID); - break; - case 'personality': - const elements = document.querySelectorAll(".bot-personality input") - for (let i = 0; i < Math.min(elements.length, data.message.length); i++) { - elements[i].value = data.message[i]; - } - break; - case 'persona_list': - persona_list = data.message; - for(i=0; i'; - $('#chat-block').children().first().append(html_template) - chat_message = $('#chat_message_' + currentPersonaID); - websocket.send(JSON.stringify({action: 'persona_chosen', persona_id: currentPersonaID})); - } - else { - chat_message.show(400, function () { - chat_message[0].scrollIntoView({behavior: "auto", block: "end", inline: "nearest"}); - }); - } - - $(clone).css({'top': top}).addClass("floatingImg").appendTo("#chatbox"); - - setTimeout(function(){$("#profile p").addClass("animate");$("#profile").addClass("animate");}, 100); - setTimeout(function(){ - chat_message.addClass("animate"); - $('.cx, .cy').addClass('s1'); - setTimeout(function(){$('.cx, .cy').addClass('s2');}, 100); - setTimeout(function(){$('.cx, .cy').addClass('s3');}, 200); - }, 150); - - let profile_photo = $('.floatingImg'); - profile_photo.animate({ - 'width': "68px", - 'left':'15px', - 'top':'20px' - }, 200); - - profile_photo.hover(function () { - var profile_photo_zoom = $('#photo-block img'); - console.log(profile_photo_zoom); - profile_photo_zoom[0].src = profile_photo[0].src; - $('#photo-block').fadeIn(); - }, function () { - $('#photo-block').fadeOut(800); - }); - - var name = $(this).find("p strong").html(); - var email = $(this).find("p span").html(); - $("#profile p").html(name); - $("#profile span").html(email); - - $(".message").not(".right").find("img").attr("src", $(clone).attr("src")); - $('#friendslist').fadeOut(); - $('#chat-block').show(); - $('#config-block').hide(); - $('#chatview').fadeIn(); - - - $('#close').unbind("click").click(function(){ - chat_message.removeClass("animate"); - chat_message.hide(); - $("#profile, #profile p").removeClass("animate"); - $('.cx, .cy').removeClass("s1 s2 s3"); - $('.floatingImg').animate({ - 'width': "40px", - 'top':top, - 'left': '12px' - }, 200, function(){$('.floatingImg').remove()}); - - setTimeout(function(){ - $('#chatview').fadeOut(); - $('#friendslist').fadeIn(); - }, 50); - }); - - personalities = ["", "", "", "", ""]; - - $('#personalities').unbind("click").click(function(){ - personality_input = document.querySelectorAll(".bot-personality input") - for (let i = 0; i < Math.min(personality_input.length, persona_ids[currentPersonaID]["personality"].length); i++) { - personality_input[i].value = persona_ids[currentPersonaID]["personality"][i+3]; - } - setTimeout(function(){ - $('#server_view').fadeOut(400, function () { - $('#server_view').fadeIn(); - }); - $('#parameters_view').fadeOut(400, function (){ - $('#about_view').fadeOut(400, function () { - $('#personalities_view').fadeIn(); - }); - }); - $('#about_view').fadeOut(400); - 
$('#chat-block').fadeOut(400, function (){ - $('#config-block').fadeIn(); - }); - - }, 50); - const elements = document.querySelectorAll(".bot-personality input") - for (let i = 0; i < Math.min(elements.length, 5); i++) { - personalities[i] = elements[i].value; - } - }); - - $('#personalities_cancel').unbind("click").click(function(){ - const elements = document.querySelectorAll(".bot-personality input") - for (let i = 0; i < Math.min(elements.length, 5); i++) { - elements[i].value = personalities[i]; - } - setTimeout(function(){ - $('#config-block').fadeOut(400, function (){ - $('#chat-block').fadeIn(); - }); - }, 50); - }); - - $('#personalities_update').unbind("click").click(function(){ - const elements = document.querySelectorAll(".bot-personality input") - let data = { - "action": "personality", - "persona_id": currentPersonaID, - "message": [] - } - // persona_ids[currentPersonaID]["personality"] - for (let i = 0; i < Math.min(elements.length, 5); i++) { - if(elements[i].value.length >0) - persona_ids[currentPersonaID]["personality"][i+3] = elements[i].value; - data.message.push(elements[i].value); - } - websocket.send(JSON.stringify(data)); - setTimeout(function(){ - $('#config-block').fadeOut(400, function (){ - $('#chat-block').fadeIn(); - }); - }, 500); - }); - - $('#parameters').unbind("click").click(function(){ - setTimeout(function(){ - $('#server_view').fadeOut(400, function () { - $('#server_view').fadeIn(); - }); - $('#personalities_view').fadeOut(400, function (){ - $('#about_view').fadeOut(400, function () { - $('#parameters_view').fadeIn(); - }); - }); - $('#chat-block').fadeOut(400, function () { - $('#config-block').fadeIn(); - }); - }, 50); - }); - - $('#parameters_ok').unbind("click").click(function(){ - setTimeout(function(){ - $('#config-block').fadeOut(400, function (){ - $('#chat-block').fadeIn(); - }); - - }, 50); - }); - - $('#about').unbind("click").click(function(){ - setTimeout(function(){ - $('#server_view').fadeOut(400, function () { - $('#server_view').fadeIn(); - }); - $('#personalities_view').fadeOut(400, function (){ - $('#parameters_view').fadeOut(400, function (){ - $('#about_view').fadeIn(); - }); - }); - $('#chat-block').fadeOut(400, function () { - $('#config-block').fadeIn(); - }); - }, 50); - }); - - $('#about_close').unbind("click").click(function(){ - setTimeout(function(){ - $('#config-block').fadeOut(400, function (){ - $('#chat-block').fadeIn(); - }); - - }, 50); - }); - - }); - }); - - // $("#friends")[0].firstElementChild.click() -}; \ No newline at end of file diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py deleted file mode 100644 index 9d7e23b6b67a53e16d050d675a99d01d7d04d581..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/linear_probe.py +++ /dev/null @@ -1,66 +0,0 @@ -import numpy as np -import torch.nn.functional as F -from torch import nn -from .model import MLPLayers - - -class LinearProbe(nn.Module): - def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None): - """ - Args: - model: nn.Module - mlp: bool, if True, then use the MLP layer as the linear probe module - freeze: bool, if Ture, then freeze all the CLAP model's layers when training the linear probe - in_ch: int, the output channel from CLAP model - out_ch: int, the output channel from linear probe (class_num) - act: 
torch.nn.functional, the activation function before the loss function
-        """
-        super().__init__()
-        in_ch = 512
-        self.clap_model = model
-        self.clap_model.text_branch = None  # to save memory
-        self.freeze = freeze
-        if mlp:
-            self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch])
-        else:
-            self.lp_layer = nn.Linear(in_ch, out_ch)
-
-        if self.freeze:
-            for param in self.clap_model.parameters():
-                param.requires_grad = False
-
-        if act == "None":
-            self.act = None
-        elif act == "relu":
-            self.act = nn.ReLU()
-        elif act == "elu":
-            self.act = nn.ELU()
-        elif act == "prelu":
-            self.act = nn.PReLU(num_parameters=in_ch)
-        elif act == "softmax":
-            self.act = nn.Softmax(dim=-1)
-        elif act == "sigmoid":
-            self.act = nn.Sigmoid()
-
-    def forward(self, x, mix_lambda=None, device=None):
-        """
-        Args:
-            x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list
-            mix_lambda: torch.tensor [batch], the mixup lambda
-        Returns:
-            class_prob: torch.tensor [batch, class_num]
-
-        """
-        # batchnorm cancel gradient
-        if self.freeze:
-            self.clap_model.eval()
-
-        x = self.clap_model.audio_projection(
-            self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)[
-                "embedding"
-            ]
-        )
-        out = self.lp_layer(x)
-        if self.act is not None:
-            out = self.act(out)
-        return out
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/cse_confidence.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/cse_confidence.py
deleted file mode 100644
index ee5166f82d45ecb4ea829ec2ecab248161c19421..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/cse_confidence.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from dataclasses import make_dataclass
-from functools import lru_cache
-from typing import Any, Optional
-import torch
-
-
-@lru_cache(maxsize=None)
-def decorate_cse_predictor_output_class_with_confidences(BasePredictorOutput: type) -> type:
-    """
-    Create a new output class from an existing one by adding new attributes
-    related to confidence estimation:
-    - coarse_segm_confidence (tensor)
-
-    Details on confidence estimation parameters can be found in:
-    N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning
-    Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019
-    A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020
-
-    The new class inherits the provided `BasePredictorOutput` class,
-    its name is composed of the name of the provided class and
-    "WithConfidences" suffix.
- - Args: - BasePredictorOutput (type): output type to which confidence data - is to be added, assumed to be a dataclass - Return: - New dataclass derived from the provided one that has attributes - for confidence estimation - """ - - PredictorOutput = make_dataclass( - BasePredictorOutput.__name__ + "WithConfidences", - fields=[ - ("coarse_segm_confidence", Optional[torch.Tensor], None), - ], - bases=(BasePredictorOutput,), - ) - - # add possibility to index PredictorOutput - - def slice_if_not_none(data, item): - if data is None: - return None - if isinstance(item, int): - return data[item].unsqueeze(0) - return data[item] - - def PredictorOutput_getitem(self, item): - PredictorOutput = type(self) - base_predictor_output_sliced = super(PredictorOutput, self).__getitem__(item) - return PredictorOutput( - **base_predictor_output_sliced.__dict__, - coarse_segm_confidence=slice_if_not_none(self.coarse_segm_confidence, item), - ) - - PredictorOutput.__getitem__ = PredictorOutput_getitem - - def PredictorOutput_to(self, device: torch.device): - """ - Transfers all tensors to the given device - """ - PredictorOutput = type(self) - base_predictor_output_to = super(PredictorOutput, self).to(device) # pyre-ignore[16] - - def to_device_if_tensor(var: Any): - if isinstance(var, torch.Tensor): - return var.to(device) - return var - - return PredictorOutput( - **base_predictor_output_to.__dict__, - coarse_segm_confidence=to_device_if_tensor(self.coarse_segm_confidence), - ) - - PredictorOutput.to = PredictorOutput_to - return PredictorOutput diff --git a/spaces/chasemcdo/hf_localai/Makefile b/spaces/chasemcdo/hf_localai/Makefile deleted file mode 100644 index a898dc7cb96b9cef0d5ca7e753abde9f77117225..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/Makefile +++ /dev/null @@ -1,303 +0,0 @@ -GOCMD=go -GOTEST=$(GOCMD) test -GOVET=$(GOCMD) vet -BINARY_NAME=local-ai - -GOLLAMA_VERSION?=f104111358e8098aea69ce408b85b707528179ef -GPT4ALL_REPO?=https://github.com/nomic-ai/gpt4all -GPT4ALL_VERSION?=c1794597a7559d5616567e280b722231c624a57b -GOGGMLTRANSFORMERS_VERSION?=a459d2726792132541152c981ed9fbfe28f4fd20 -RWKV_REPO?=https://github.com/donomii/go-rwkv.cpp -RWKV_VERSION?=f5a8c45396741470583f59b916a2a7641e63bcd0 -WHISPER_CPP_VERSION?=72deb41eb26300f71c50febe29db8ffcce09256c -BERT_VERSION?=6069103f54b9969c02e789d0fb12a23bd614285f -PIPER_VERSION?=56b8a81b4760a6fbee1a82e62f007ae7e8f010a7 -BLOOMZ_VERSION?=1834e77b83faafe912ad4092ccf7f77937349e2f -export BUILD_TYPE?= -CGO_LDFLAGS?= -CUDA_LIBPATH?=/usr/local/cuda/lib64/ -STABLEDIFFUSION_VERSION?=d89260f598afb809279bc72aa0107b4292587632 -GO_TAGS?= -BUILD_ID?=git - -VERSION?=$(shell git describe --always --tags --dirty || echo "dev" ) -# go tool nm ./local-ai | grep Commit -LD_FLAGS?= -override LD_FLAGS += -X "github.com/go-skynet/LocalAI/internal.Version=$(VERSION)" -override LD_FLAGS += -X "github.com/go-skynet/LocalAI/internal.Commit=$(shell git rev-parse HEAD)" - -OPTIONAL_TARGETS?= -ESPEAK_DATA?= - -OS := $(shell uname -s) -ARCH := $(shell uname -m) -GREEN := $(shell tput -Txterm setaf 2) -YELLOW := $(shell tput -Txterm setaf 3) -WHITE := $(shell tput -Txterm setaf 7) -CYAN := $(shell tput -Txterm setaf 6) -RESET := $(shell tput -Txterm sgr0) - -C_INCLUDE_PATH=$(shell pwd)/go-llama:$(shell pwd)/go-stable-diffusion/:$(shell pwd)/gpt4all/gpt4all-bindings/golang/:$(shell pwd)/go-ggml-transformers:$(shell pwd)/go-rwkv:$(shell pwd)/whisper.cpp:$(shell pwd)/go-bert:$(shell pwd)/bloomz -LIBRARY_PATH=$(shell pwd)/go-piper:$(shell 
pwd)/go-llama:$(shell pwd)/go-stable-diffusion/:$(shell pwd)/gpt4all/gpt4all-bindings/golang/:$(shell pwd)/go-ggml-transformers:$(shell pwd)/go-rwkv:$(shell pwd)/whisper.cpp:$(shell pwd)/go-bert:$(shell pwd)/bloomz - -ifeq ($(BUILD_TYPE),openblas) - CGO_LDFLAGS+=-lopenblas -endif - -ifeq ($(BUILD_TYPE),cublas) - CGO_LDFLAGS+=-lcublas -lcudart -L$(CUDA_LIBPATH) - export LLAMA_CUBLAS=1 -endif - -ifeq ($(BUILD_TYPE),metal) - CGO_LDFLAGS+=-framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders - export LLAMA_METAL=1 -endif - -ifeq ($(BUILD_TYPE),clblas) - CGO_LDFLAGS+=-lOpenCL -lclblast -endif - -# glibc-static or glibc-devel-static required -ifeq ($(STATIC),true) - LD_FLAGS=-linkmode external -extldflags -static -endif - -ifeq ($(findstring stablediffusion,$(GO_TAGS)),stablediffusion) - OPTIONAL_TARGETS+=go-stable-diffusion/libstablediffusion.a -endif - -ifeq ($(findstring tts,$(GO_TAGS)),tts) - OPTIONAL_TARGETS+=go-piper/libpiper_binding.a - OPTIONAL_TARGETS+=backend-assets/espeak-ng-data -endif - -.PHONY: all test build vendor - -all: help - -## GPT4ALL -gpt4all: - git clone --recurse-submodules $(GPT4ALL_REPO) gpt4all - cd gpt4all && git checkout -b build $(GPT4ALL_VERSION) && git submodule update --init --recursive --depth 1 - # This is hackish, but needed as both go-llama and go-gpt4allj have their own version of ggml.. - @find ./gpt4all -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.m" -exec sed -i'' -e 's/ggml_/ggml_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.c" -exec sed -i'' -e 's/llama_/llama_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.cpp" -exec sed -i'' -e 's/llama_/llama_gpt4all_/g' {} + - @find ./gpt4all -type f -name "*.h" -exec sed -i'' -e 's/llama_/llama_gpt4all_/g' {} + - @find ./gpt4all/gpt4all-backend -type f -name "llama_util.h" -execdir mv {} "llama_gpt4all_util.h" \; - @find ./gpt4all -type f -name "*.cmake" -exec sed -i'' -e 's/llama_util/llama_gpt4all_util/g' {} + - @find ./gpt4all -type f -name "*.txt" -exec sed -i'' -e 's/llama_util/llama_gpt4all_util/g' {} + - @find ./gpt4all/gpt4all-bindings/golang -type f -name "*.cpp" -exec sed -i'' -e 's/load_model/load_gpt4all_model/g' {} + - @find ./gpt4all/gpt4all-bindings/golang -type f -name "*.go" -exec sed -i'' -e 's/load_model/load_gpt4all_model/g' {} + - @find ./gpt4all/gpt4all-bindings/golang -type f -name "*.h" -exec sed -i'' -e 's/load_model/load_gpt4all_model/g' {} + - -## go-piper -go-piper: - git clone --recurse-submodules https://github.com/mudler/go-piper go-piper - cd go-piper && git checkout -b build $(PIPER_VERSION) && git submodule update --init --recursive --depth 1 - -## BERT embeddings -go-bert: - git clone --recurse-submodules https://github.com/go-skynet/go-bert.cpp go-bert - cd go-bert && git checkout -b build $(BERT_VERSION) && git submodule update --init --recursive --depth 1 - @find ./go-bert -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_bert_/g' {} + - @find ./go-bert -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_bert_/g' {} + - @find ./go-bert -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_bert_/g' {} + - -## stable diffusion -go-stable-diffusion: - git clone --recurse-submodules https://github.com/mudler/go-stable-diffusion go-stable-diffusion - cd go-stable-diffusion && git checkout -b build 
$(STABLEDIFFUSION_VERSION) && git submodule update --init --recursive --depth 1 - -go-stable-diffusion/libstablediffusion.a: - $(MAKE) -C go-stable-diffusion libstablediffusion.a - -## RWKV -go-rwkv: - git clone --recurse-submodules $(RWKV_REPO) go-rwkv - cd go-rwkv && git checkout -b build $(RWKV_VERSION) && git submodule update --init --recursive --depth 1 - @find ./go-rwkv -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_rwkv_/g' {} + - @find ./go-rwkv -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_rwkv_/g' {} + - @find ./go-rwkv -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_rwkv_/g' {} + - -go-rwkv/librwkv.a: go-rwkv - cd go-rwkv && cd rwkv.cpp && cmake . -DRWKV_BUILD_SHARED_LIBRARY=OFF && cmake --build . && cp librwkv.a .. - -## bloomz -bloomz: - git clone --recurse-submodules https://github.com/go-skynet/bloomz.cpp bloomz - @find ./bloomz -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_bloomz_/g' {} + - @find ./bloomz -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_bloomz_/g' {} + - @find ./bloomz -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_bloomz_/g' {} + - @find ./bloomz -type f -name "*.cpp" -exec sed -i'' -e 's/gpt_/gpt_bloomz_/g' {} + - @find ./bloomz -type f -name "*.h" -exec sed -i'' -e 's/gpt_/gpt_bloomz_/g' {} + - @find ./bloomz -type f -name "*.cpp" -exec sed -i'' -e 's/void replace/void json_bloomz_replace/g' {} + - @find ./bloomz -type f -name "*.cpp" -exec sed -i'' -e 's/::replace/::json_bloomz_replace/g' {} + - -bloomz/libbloomz.a: bloomz - cd bloomz && make libbloomz.a - -go-bert/libgobert.a: go-bert - $(MAKE) -C go-bert libgobert.a - -backend-assets/gpt4all: gpt4all/gpt4all-bindings/golang/libgpt4all.a - mkdir -p backend-assets/gpt4all - @cp gpt4all/gpt4all-bindings/golang/buildllm/*.so backend-assets/gpt4all/ || true - @cp gpt4all/gpt4all-bindings/golang/buildllm/*.dylib backend-assets/gpt4all/ || true - @cp gpt4all/gpt4all-bindings/golang/buildllm/*.dll backend-assets/gpt4all/ || true - -backend-assets/espeak-ng-data: - mkdir -p backend-assets/espeak-ng-data -ifdef ESPEAK_DATA - @cp -rf $(ESPEAK_DATA)/. backend-assets/espeak-ng-data -else - @touch backend-assets/espeak-ng-data/keep -endif - -gpt4all/gpt4all-bindings/golang/libgpt4all.a: gpt4all - $(MAKE) -C gpt4all/gpt4all-bindings/golang/ libgpt4all.a - -## CEREBRAS GPT -go-ggml-transformers: - git clone --recurse-submodules https://github.com/go-skynet/go-ggml-transformers.cpp go-ggml-transformers - cd go-ggml-transformers && git checkout -b build $(GOGPT2_VERSION) && git submodule update --init --recursive --depth 1 - # This is hackish, but needed as both go-llama and go-gpt4allj have their own version of ggml.. 
- @find ./go-ggml-transformers -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_gpt2_/g' {} + - @find ./go-ggml-transformers -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_gpt2_/g' {} + - @find ./go-ggml-transformers -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_gpt2_/g' {} + - @find ./go-ggml-transformers -type f -name "*.cpp" -exec sed -i'' -e 's/gpt_print_usage/gpt2_print_usage/g' {} + - @find ./go-ggml-transformers -type f -name "*.h" -exec sed -i'' -e 's/gpt_print_usage/gpt2_print_usage/g' {} + - @find ./go-ggml-transformers -type f -name "*.cpp" -exec sed -i'' -e 's/gpt_params_parse/gpt2_params_parse/g' {} + - @find ./go-ggml-transformers -type f -name "*.h" -exec sed -i'' -e 's/gpt_params_parse/gpt2_params_parse/g' {} + - @find ./go-ggml-transformers -type f -name "*.cpp" -exec sed -i'' -e 's/gpt_random_prompt/gpt2_random_prompt/g' {} + - @find ./go-ggml-transformers -type f -name "*.h" -exec sed -i'' -e 's/gpt_random_prompt/gpt2_random_prompt/g' {} + - @find ./go-ggml-transformers -type f -name "*.cpp" -exec sed -i'' -e 's/json_/json_gpt2_/g' {} + - -go-ggml-transformers/libtransformers.a: go-ggml-transformers - $(MAKE) -C go-ggml-transformers libtransformers.a - -whisper.cpp: - git clone https://github.com/ggerganov/whisper.cpp.git - cd whisper.cpp && git checkout -b build $(WHISPER_CPP_VERSION) && git submodule update --init --recursive --depth 1 - @find ./whisper.cpp -type f -name "*.c" -exec sed -i'' -e 's/ggml_/ggml_whisper_/g' {} + - @find ./whisper.cpp -type f -name "*.cpp" -exec sed -i'' -e 's/ggml_/ggml_whisper_/g' {} + - @find ./whisper.cpp -type f -name "*.h" -exec sed -i'' -e 's/ggml_/ggml_whisper_/g' {} + - -whisper.cpp/libwhisper.a: whisper.cpp - cd whisper.cpp && make libwhisper.a - -go-llama: - git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp go-llama - cd go-llama && git checkout -b build $(GOLLAMA_VERSION) && git submodule update --init --recursive --depth 1 - -go-llama/libbinding.a: go-llama - $(MAKE) -C go-llama BUILD_TYPE=$(BUILD_TYPE) libbinding.a - -go-piper/libpiper_binding.a: - $(MAKE) -C go-piper libpiper_binding.a example/main - -get-sources: go-llama go-ggml-transformers gpt4all go-piper go-rwkv whisper.cpp go-bert bloomz go-stable-diffusion - touch $@ - -replace: - $(GOCMD) mod edit -replace github.com/go-skynet/go-llama.cpp=$(shell pwd)/go-llama - $(GOCMD) mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=$(shell pwd)/gpt4all/gpt4all-bindings/golang - $(GOCMD) mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=$(shell pwd)/go-ggml-transformers - $(GOCMD) mod edit -replace github.com/donomii/go-rwkv.cpp=$(shell pwd)/go-rwkv - $(GOCMD) mod edit -replace github.com/ggerganov/whisper.cpp=$(shell pwd)/whisper.cpp - $(GOCMD) mod edit -replace github.com/go-skynet/go-bert.cpp=$(shell pwd)/go-bert - $(GOCMD) mod edit -replace github.com/go-skynet/bloomz.cpp=$(shell pwd)/bloomz - $(GOCMD) mod edit -replace github.com/mudler/go-stable-diffusion=$(shell pwd)/go-stable-diffusion - $(GOCMD) mod edit -replace github.com/mudler/go-piper=$(shell pwd)/go-piper - -prepare-sources: get-sources replace - $(GOCMD) mod download - -## GENERIC -rebuild: ## Rebuilds the project - $(MAKE) -C go-llama clean - $(MAKE) -C gpt4all/gpt4all-bindings/golang/ clean - $(MAKE) -C go-ggml-transformers clean - $(MAKE) -C go-rwkv clean - $(MAKE) -C whisper.cpp clean - $(MAKE) -C go-stable-diffusion clean - $(MAKE) -C go-bert clean - $(MAKE) -C bloomz clean - $(MAKE) -C go-piper clean - $(MAKE) build - -prepare: 
prepare-sources backend-assets/gpt4all $(OPTIONAL_TARGETS) go-llama/libbinding.a go-bert/libgobert.a go-ggml-transformers/libtransformers.a go-rwkv/librwkv.a whisper.cpp/libwhisper.a bloomz/libbloomz.a ## Prepares for building - touch $@ - -clean: ## Remove build related file - rm -fr ./go-llama - rm -rf ./gpt4all - rm -rf ./go-gpt2 - rm -rf ./go-stable-diffusion - rm -rf ./go-ggml-transformers - rm -rf ./backend-assets - rm -rf ./go-rwkv - rm -rf ./go-bert - rm -rf ./bloomz - rm -rf ./whisper.cpp - rm -rf ./go-piper - rm -rf $(BINARY_NAME) - rm -rf release/ - -## Build: - -build: prepare ## Build the project - $(info ${GREEN}I local-ai build info:${RESET}) - $(info ${GREEN}I BUILD_TYPE: ${YELLOW}$(BUILD_TYPE)${RESET}) - $(info ${GREEN}I GO_TAGS: ${YELLOW}$(GO_TAGS)${RESET}) - $(info ${GREEN}I LD_FLAGS: ${YELLOW}$(LD_FLAGS)${RESET}) - - CGO_LDFLAGS="$(CGO_LDFLAGS)" C_INCLUDE_PATH=${C_INCLUDE_PATH} LIBRARY_PATH=${LIBRARY_PATH} $(GOCMD) build -ldflags "$(LD_FLAGS)" -tags "$(GO_TAGS)" -o $(BINARY_NAME) ./ -ifeq ($(BUILD_TYPE),metal) - cp go-llama/build/bin/ggml-metal.metal . -endif - -dist: build - mkdir -p release - cp $(BINARY_NAME) release/$(BINARY_NAME)-$(BUILD_ID)-$(OS)-$(ARCH) - -generic-build: ## Build the project using generic - BUILD_TYPE="generic" $(MAKE) build - -## Run -run: prepare ## run local-ai - CGO_LDFLAGS="$(CGO_LDFLAGS)" C_INCLUDE_PATH=${C_INCLUDE_PATH} LIBRARY_PATH=${LIBRARY_PATH} $(GOCMD) run ./ - -test-models/testmodel: - mkdir test-models - mkdir test-dir - wget https://huggingface.co/nnakasato/ggml-model-test/resolve/main/ggml-model-q4.bin -O test-models/testmodel - wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin -O test-models/whisper-en - wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O test-models/bert - wget https://cdn.openai.com/whisper/draft-20220913a/micro-machines.wav -O test-dir/audio.wav - wget https://huggingface.co/mudler/rwkv-4-raven-1.5B-ggml/resolve/main/RWKV-4-Raven-1B5-v11-Eng99%2525-Other1%2525-20230425-ctx4096_Q4_0.bin -O test-models/rwkv - wget https://raw.githubusercontent.com/saharNooby/rwkv.cpp/5eb8f09c146ea8124633ab041d9ea0b1f1db4459/rwkv/20B_tokenizer.json -O test-models/rwkv.tokenizer.json - cp tests/models_fixtures/* test-models - -test: prepare test-models/testmodel - cp -r backend-assets api - cp tests/models_fixtures/* test-models - C_INCLUDE_PATH=${C_INCLUDE_PATH} LIBRARY_PATH=${LIBRARY_PATH} TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models $(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="!gpt4all && !llama" --flake-attempts 5 -v -r ./api ./pkg - C_INCLUDE_PATH=${C_INCLUDE_PATH} LIBRARY_PATH=${LIBRARY_PATH} TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models $(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="gpt4all" --flake-attempts 5 -v -r ./api ./pkg - C_INCLUDE_PATH=${C_INCLUDE_PATH} LIBRARY_PATH=${LIBRARY_PATH} TEST_DIR=$(abspath ./)/test-dir/ FIXTURES=$(abspath ./)/tests/fixtures CONFIG_FILE=$(abspath ./)/test-models/config.yaml MODELS_PATH=$(abspath ./)/test-models $(GOCMD) run github.com/onsi/ginkgo/v2/ginkgo --label-filter="llama" --flake-attempts 5 -v -r ./api ./pkg - -## Help: -help: ## Show this help. 
- @echo '' - @echo 'Usage:' - @echo ' ${YELLOW}make${RESET} ${GREEN}${RESET}' - @echo '' - @echo 'Targets:' - @awk 'BEGIN {FS = ":.*?## "} { \ - if (/^[a-zA-Z_-]+:.*?##.*$$/) {printf " ${YELLOW}%-20s${GREEN}%s${RESET}\n", $$1, $$2} \ - else if (/^## .*$$/) {printf " ${CYAN}%s${RESET}\n", substr($$1,4)} \ - }' $(MAKEFILE_LIST) diff --git a/spaces/chendl/compositional_test/multimodal/range_vqa.sh b/spaces/chendl/compositional_test/multimodal/range_vqa.sh deleted file mode 100644 index 1062258553b419998ec2aad5f73d3ba45ce13a72..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/range_vqa.sh +++ /dev/null @@ -1,9 +0,0 @@ -sbatch -J vqa submit_eval.sh eval_vqav2.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_12000.pt -sbatch -J vqa submit_eval.sh eval_vqav2.sh checkpoints/091701_pythiaS_previsual_fix/checkpoint_18000.pt - - -sbatch -J vqa3B submit_eval.sh eval_vqav2_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_10000.pt -sbatch -J vqa3B submit_eval.sh eval_vqav2_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_12000.pt -sbatch -J vqa3B submit_eval.sh eval_vqav2_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_14000.pt -sbatch -J vqa3B submit_eval.sh eval_vqav2_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_16000.pt -sbatch -J vqa3B submit_eval.sh eval_vqav2_3b.sh checkpoints/091801_pythia3b_previsual_fix/checkpoint_18000.pt diff --git a/spaces/chiulori/bertopic-reviews/README.md b/spaces/chiulori/bertopic-reviews/README.md deleted file mode 100644 index 769726a4c8f2f71ebf2327cf761ba2ed5a59be83..0000000000000000000000000000000000000000 --- a/spaces/chiulori/bertopic-reviews/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bertopic Reviews -emoji: 😻 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/tls.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/tls.py deleted file mode 100644 index 9f9e9fd89c891dd6285789811f7ce29a7b86c00f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/streams/tls.py +++ /dev/null @@ -1,320 +0,0 @@ -from __future__ import annotations - -import logging -import re -import ssl -from dataclasses import dataclass -from functools import wraps -from typing import Any, Callable, Mapping, Tuple, TypeVar - -from .. import ( - BrokenResourceError, - EndOfStream, - aclose_forcefully, - get_cancelled_exc_class, -) -from .._core._typedattr import TypedAttributeSet, typed_attribute -from ..abc import AnyByteStream, ByteStream, Listener, TaskGroup - -T_Retval = TypeVar("T_Retval") -_PCTRTT = Tuple[Tuple[str, str], ...] -_PCTRTTT = Tuple[_PCTRTT, ...] 
- - -class TLSAttribute(TypedAttributeSet): - """Contains Transport Layer Security related attributes.""" - - #: the selected ALPN protocol - alpn_protocol: str | None = typed_attribute() - #: the channel binding for type ``tls-unique`` - channel_binding_tls_unique: bytes = typed_attribute() - #: the selected cipher - cipher: tuple[str, str, int] = typed_attribute() - #: the peer certificate in dictionary form (see :meth:`ssl.SSLSocket.getpeercert` - #: for more information) - peer_certificate: dict[str, str | _PCTRTTT | _PCTRTT] | None = typed_attribute() - #: the peer certificate in binary form - peer_certificate_binary: bytes | None = typed_attribute() - #: ``True`` if this is the server side of the connection - server_side: bool = typed_attribute() - #: ciphers shared by the client during the TLS handshake (``None`` if this is the - #: client side) - shared_ciphers: list[tuple[str, str, int]] | None = typed_attribute() - #: the :class:`~ssl.SSLObject` used for encryption - ssl_object: ssl.SSLObject = typed_attribute() - #: ``True`` if this stream does (and expects) a closing TLS handshake when the - #: stream is being closed - standard_compatible: bool = typed_attribute() - #: the TLS protocol version (e.g. ``TLSv1.2``) - tls_version: str = typed_attribute() - - -@dataclass(eq=False) -class TLSStream(ByteStream): - """ - A stream wrapper that encrypts all sent data and decrypts received data. - - This class has no public initializer; use :meth:`wrap` instead. - All extra attributes from :class:`~TLSAttribute` are supported. - - :var AnyByteStream transport_stream: the wrapped stream - - """ - - transport_stream: AnyByteStream - standard_compatible: bool - _ssl_object: ssl.SSLObject - _read_bio: ssl.MemoryBIO - _write_bio: ssl.MemoryBIO - - @classmethod - async def wrap( - cls, - transport_stream: AnyByteStream, - *, - server_side: bool | None = None, - hostname: str | None = None, - ssl_context: ssl.SSLContext | None = None, - standard_compatible: bool = True, - ) -> TLSStream: - """ - Wrap an existing stream with Transport Layer Security. - - This performs a TLS handshake with the peer. - - :param transport_stream: a bytes-transporting stream to wrap - :param server_side: ``True`` if this is the server side of the connection, - ``False`` if this is the client side (if omitted, will be set to ``False`` - if ``hostname`` has been provided, ``False`` otherwise). Used only to create - a default context when an explicit context has not been provided. 
- :param hostname: host name of the peer (if host name checking is desired) - :param ssl_context: the SSLContext object to use (if not provided, a secure - default will be created) - :param standard_compatible: if ``False``, skip the closing handshake when closing the - connection, and don't raise an exception if the peer does the same - :raises ~ssl.SSLError: if the TLS handshake fails - - """ - if server_side is None: - server_side = not hostname - - if not ssl_context: - purpose = ( - ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH - ) - ssl_context = ssl.create_default_context(purpose) - - # Re-enable detection of unexpected EOFs if it was disabled by Python - if hasattr(ssl, "OP_IGNORE_UNEXPECTED_EOF"): - ssl_context.options &= ~ssl.OP_IGNORE_UNEXPECTED_EOF - - bio_in = ssl.MemoryBIO() - bio_out = ssl.MemoryBIO() - ssl_object = ssl_context.wrap_bio( - bio_in, bio_out, server_side=server_side, server_hostname=hostname - ) - wrapper = cls( - transport_stream=transport_stream, - standard_compatible=standard_compatible, - _ssl_object=ssl_object, - _read_bio=bio_in, - _write_bio=bio_out, - ) - await wrapper._call_sslobject_method(ssl_object.do_handshake) - return wrapper - - async def _call_sslobject_method( - self, func: Callable[..., T_Retval], *args: object - ) -> T_Retval: - while True: - try: - result = func(*args) - except ssl.SSLWantReadError: - try: - # Flush any pending writes first - if self._write_bio.pending: - await self.transport_stream.send(self._write_bio.read()) - - data = await self.transport_stream.receive() - except EndOfStream: - self._read_bio.write_eof() - except OSError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - raise BrokenResourceError from exc - else: - self._read_bio.write(data) - except ssl.SSLWantWriteError: - await self.transport_stream.send(self._write_bio.read()) - except ssl.SSLSyscallError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - raise BrokenResourceError from exc - except ssl.SSLError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - if ( - isinstance(exc, ssl.SSLEOFError) - or "UNEXPECTED_EOF_WHILE_READING" in exc.strerror - ): - if self.standard_compatible: - raise BrokenResourceError from exc - else: - raise EndOfStream from None - - raise - else: - # Flush any pending writes first - if self._write_bio.pending: - await self.transport_stream.send(self._write_bio.read()) - - return result - - async def unwrap(self) -> tuple[AnyByteStream, bytes]: - """ - Does the TLS closing handshake. 
- - :return: a tuple of (wrapped byte stream, bytes left in the read buffer) - - """ - await self._call_sslobject_method(self._ssl_object.unwrap) - self._read_bio.write_eof() - self._write_bio.write_eof() - return self.transport_stream, self._read_bio.read() - - async def aclose(self) -> None: - if self.standard_compatible: - try: - await self.unwrap() - except BaseException: - await aclose_forcefully(self.transport_stream) - raise - - await self.transport_stream.aclose() - - async def receive(self, max_bytes: int = 65536) -> bytes: - data = await self._call_sslobject_method(self._ssl_object.read, max_bytes) - if not data: - raise EndOfStream - - return data - - async def send(self, item: bytes) -> None: - await self._call_sslobject_method(self._ssl_object.write, item) - - async def send_eof(self) -> None: - tls_version = self.extra(TLSAttribute.tls_version) - match = re.match(r"TLSv(\d+)(?:\.(\d+))?", tls_version) - if match: - major, minor = int(match.group(1)), int(match.group(2) or 0) - if (major, minor) < (1, 3): - raise NotImplementedError( - f"send_eof() requires at least TLSv1.3; current " - f"session uses {tls_version}" - ) - - raise NotImplementedError( - "send_eof() has not yet been implemented for TLS streams" - ) - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.transport_stream.extra_attributes, - TLSAttribute.alpn_protocol: self._ssl_object.selected_alpn_protocol, - TLSAttribute.channel_binding_tls_unique: self._ssl_object.get_channel_binding, - TLSAttribute.cipher: self._ssl_object.cipher, - TLSAttribute.peer_certificate: lambda: self._ssl_object.getpeercert(False), - TLSAttribute.peer_certificate_binary: lambda: self._ssl_object.getpeercert( - True - ), - TLSAttribute.server_side: lambda: self._ssl_object.server_side, - TLSAttribute.shared_ciphers: lambda: self._ssl_object.shared_ciphers() - if self._ssl_object.server_side - else None, - TLSAttribute.standard_compatible: lambda: self.standard_compatible, - TLSAttribute.ssl_object: lambda: self._ssl_object, - TLSAttribute.tls_version: self._ssl_object.version, - } - - -@dataclass(eq=False) -class TLSListener(Listener[TLSStream]): - """ - A convenience listener that wraps another listener and auto-negotiates a TLS session on every - accepted connection. - - If the TLS handshake times out or raises an exception, :meth:`handle_handshake_error` is - called to do whatever post-mortem processing is deemed necessary. - - Supports only the :attr:`~TLSAttribute.standard_compatible` extra attribute. - - :param Listener listener: the listener to wrap - :param ssl_context: the SSL context object - :param standard_compatible: a flag passed through to :meth:`TLSStream.wrap` - :param handshake_timeout: time limit for the TLS handshake - (passed to :func:`~anyio.fail_after`) - """ - - listener: Listener[Any] - ssl_context: ssl.SSLContext - standard_compatible: bool = True - handshake_timeout: float = 30 - - @staticmethod - async def handle_handshake_error(exc: BaseException, stream: AnyByteStream) -> None: - """ - Handle an exception raised during the TLS handshake. - - This method does 3 things: - - #. Forcefully closes the original stream - #. Logs the exception (unless it was a cancellation exception) using the - ``anyio.streams.tls`` logger - #. 
Reraises the exception if it was a base exception or a cancellation exception - - :param exc: the exception - :param stream: the original stream - - """ - await aclose_forcefully(stream) - - # Log all except cancellation exceptions - if not isinstance(exc, get_cancelled_exc_class()): - logging.getLogger(__name__).exception("Error during TLS handshake") - - # Only reraise base exceptions and cancellation exceptions - if not isinstance(exc, Exception) or isinstance(exc, get_cancelled_exc_class()): - raise - - async def serve( - self, - handler: Callable[[TLSStream], Any], - task_group: TaskGroup | None = None, - ) -> None: - @wraps(handler) - async def handler_wrapper(stream: AnyByteStream) -> None: - from .. import fail_after - - try: - with fail_after(self.handshake_timeout): - wrapped_stream = await TLSStream.wrap( - stream, - ssl_context=self.ssl_context, - standard_compatible=self.standard_compatible, - ) - except BaseException as exc: - await self.handle_handshake_error(exc, stream) - else: - await handler(wrapped_stream) - - await self.listener.serve(handler_wrapper, task_group) - - async def aclose(self) -> None: - await self.listener.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - TLSAttribute.standard_compatible: lambda: self.standard_compatible, - } diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/exceptions.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/exceptions.py deleted file mode 100644 index f233edc4e5b24559045447e8f8a66fcf297c8081..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/exceptions.py +++ /dev/null @@ -1,23 +0,0 @@ -# encoding: utf-8 - -""" -Exceptions specific the the image sub-package -""" - - -class InvalidImageStreamError(Exception): - """ - The recognized image stream appears to be corrupted - """ - - -class UnexpectedEndOfFileError(Exception): - """ - EOF was unexpectedly encountered while reading an image stream. - """ - - -class UnrecognizedImageError(Exception): - """ - The provided image stream could not be recognized. - """ diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c deleted file mode 100644 index c62288eb66721af85314bd75c3f98a622c112012..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c +++ /dev/null @@ -1,10242 +0,0 @@ -/* Generated by Cython 0.29.36 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "fontTools.pens.momentsPen", - "sources": [ - "Lib/fontTools/pens/momentsPen.py" - ] - }, - "module_name": "fontTools.pens.momentsPen" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. 
-#else -#define CYTHON_ABI "0_29_36" -#define CYTHON_HEX_VERSION 0x001D24F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #if PY_VERSION_HEX < 0x03090000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - 
#define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS (PY_VERSION_HEX < 0x030C00A5) - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #elif !defined(CYTHON_FAST_THREAD_STATE) - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define 
CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000) - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS ((PY_VERSION_HEX >= 0x030600B1) && (PY_VERSION_HEX < 0x030C00A5)) - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) 
- #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(0))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - 
} - return co; - } -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static 
CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define 
PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__fontTools__pens__momentsPen -#define __PYX_HAVE_API__fontTools__pens__momentsPen -/* Early includes */ -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - 
(!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) 
(Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if 
(!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "Lib/fontTools/pens/momentsPen.py", -}; - -/*--- Type declarations ---*/ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define 
__Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if CYTHON_FAST_PYCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // CYTHON_FAST_PYCALL -#endif - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectSetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL) -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value); -#else -#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n) -#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallNoArg.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); -#else -#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE 
PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject 
*type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* CalculateMetaclass.proto */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases); - -/* FetchCommonType.proto */ -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED 1 -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { - PyCFunctionObject func; -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; - PyObject *func_classobj; - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; -} __pyx_CyFunctionObject; -static PyTypeObject *__pyx_CyFunctionType = 0; -#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType)) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *self, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(void); - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* SetNameInClass.proto */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? 
_PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value)) -#elif CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value)) -#else -#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value) -#endif - -/* Py3ClassCreate.proto */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname, - PyObject *mkw, PyObject *modname, PyObject *doc); -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict, - PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass); - -/* IncludeStringH.proto */ -#include - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - - -/* Module declarations from 'cython' */ - -/* Module declarations from 'fontTools.pens.momentsPen' */ -#define __Pyx_MODULE_NAME "fontTools.pens.momentsPen" 
-extern int __pyx_module_is_main_fontTools__pens__momentsPen; -int __pyx_module_is_main_fontTools__pens__momentsPen = 0; - -/* Implementation of 'fontTools.pens.momentsPen' */ -static PyObject *__pyx_builtin_AttributeError; -static PyObject *__pyx_builtin_ImportError; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k_y[] = "y"; -static const char __pyx_k_p0[] = "p0"; -static const char __pyx_k_p1[] = "p1"; -static const char __pyx_k_p2[] = "p2"; -static const char __pyx_k_p3[] = "p3"; -static const char __pyx_k_r0[] = "r0"; -static const char __pyx_k_r1[] = "r1"; -static const char __pyx_k_r2[] = "r2"; -static const char __pyx_k_r3[] = "r3"; -static const char __pyx_k_r4[] = "r4"; -static const char __pyx_k_r5[] = "r5"; -static const char __pyx_k_r6[] = "r6"; -static const char __pyx_k_r7[] = "r7"; -static const char __pyx_k_r8[] = "r8"; -static const char __pyx_k_r9[] = "r9"; -static const char __pyx_k_x0[] = "x0"; -static const char __pyx_k_x1[] = "x1"; -static const char __pyx_k_x2[] = "x2"; -static const char __pyx_k_x3[] = "x3"; -static const char __pyx_k_y0[] = "y0"; -static const char __pyx_k_y1[] = "y1"; -static const char __pyx_k_y2[] = "y2"; -static const char __pyx_k_y3[] = "y3"; -static const char __pyx_k_all[] = "__all__"; -static const char __pyx_k_doc[] = "__doc__"; -static const char __pyx_k_r10[] = "r10"; -static const char __pyx_k_r11[] = "r11"; -static const char __pyx_k_r12[] = "r12"; -static const char __pyx_k_r13[] = "r13"; -static const char __pyx_k_r14[] = "r14"; -static const char __pyx_k_r15[] = "r15"; -static const char __pyx_k_r16[] = "r16"; -static const char __pyx_k_r17[] = "r17"; -static const char __pyx_k_r18[] = "r18"; -static const char __pyx_k_r19[] = "r19"; -static const char __pyx_k_r20[] = "r20"; -static const char __pyx_k_r21[] = "r21"; -static const char __pyx_k_r22[] = "r22"; -static const char __pyx_k_r23[] = "r23"; -static const char __pyx_k_r24[] = "r24"; -static const char __pyx_k_r25[] = "r25"; -static const char __pyx_k_r26[] = "r26"; -static const char __pyx_k_r27[] = "r27"; -static const char __pyx_k_r28[] = "r28"; -static const char __pyx_k_r29[] = "r29"; -static const char __pyx_k_r30[] = "r30"; -static const char __pyx_k_r31[] = "r31"; -static const char __pyx_k_r32[] = "r32"; -static const char __pyx_k_r33[] = "r33"; -static const char __pyx_k_r34[] = "r34"; -static const char __pyx_k_r35[] = "r35"; -static const char __pyx_k_r36[] = "r36"; -static const char __pyx_k_r37[] = "r37"; -static const char __pyx_k_r38[] = "r38"; -static const char __pyx_k_r39[] = "r39"; -static const char __pyx_k_r40[] = "r40"; -static const char __pyx_k_r41[] = "r41"; -static const char __pyx_k_r42[] = "r42"; -static const char __pyx_k_r43[] = "r43"; -static const char __pyx_k_r44[] = "r44"; -static const char __pyx_k_r45[] = "r45"; -static const char __pyx_k_r46[] = "r46"; -static const char __pyx_k_r47[] = "r47"; -static const char __pyx_k_r48[] = "r48"; -static const char __pyx_k_r49[] = "r49"; -static const char __pyx_k_r50[] = "r50"; -static const char __pyx_k_r51[] = "r51"; -static const char __pyx_k_r52[] = "r52"; -static const char __pyx_k_r53[] = "r53"; -static const char __pyx_k_r54[] = "r54"; -static const char __pyx_k_r55[] = "r55"; -static const char __pyx_k_r56[] = "r56"; -static const char __pyx_k_r57[] = "r57"; -static const char __pyx_k_r58[] = "r58"; -static const char __pyx_k_r59[] = "r59"; -static const char __pyx_k_r60[] = "r60"; -static const char __pyx_k_r61[] = "r61"; -static const char __pyx_k_r62[] = "r62"; -static 
const char __pyx_k_r63[] = "r63"; -static const char __pyx_k_r64[] = "r64"; -static const char __pyx_k_r65[] = "r65"; -static const char __pyx_k_r66[] = "r66"; -static const char __pyx_k_r67[] = "r67"; -static const char __pyx_k_r68[] = "r68"; -static const char __pyx_k_r69[] = "r69"; -static const char __pyx_k_r70[] = "r70"; -static const char __pyx_k_r71[] = "r71"; -static const char __pyx_k_r72[] = "r72"; -static const char __pyx_k_r73[] = "r73"; -static const char __pyx_k_r74[] = "r74"; -static const char __pyx_k_r75[] = "r75"; -static const char __pyx_k_r76[] = "r76"; -static const char __pyx_k_r77[] = "r77"; -static const char __pyx_k_r78[] = "r78"; -static const char __pyx_k_r79[] = "r79"; -static const char __pyx_k_r80[] = "r80"; -static const char __pyx_k_r81[] = "r81"; -static const char __pyx_k_r82[] = "r82"; -static const char __pyx_k_r83[] = "r83"; -static const char __pyx_k_r84[] = "r84"; -static const char __pyx_k_r85[] = "r85"; -static const char __pyx_k_r86[] = "r86"; -static const char __pyx_k_r87[] = "r87"; -static const char __pyx_k_r88[] = "r88"; -static const char __pyx_k_r89[] = "r89"; -static const char __pyx_k_r90[] = "r90"; -static const char __pyx_k_r91[] = "r91"; -static const char __pyx_k_r92[] = "r92"; -static const char __pyx_k_r93[] = "r93"; -static const char __pyx_k_r94[] = "r94"; -static const char __pyx_k_r95[] = "r95"; -static const char __pyx_k_r96[] = "r96"; -static const char __pyx_k_r97[] = "r97"; -static const char __pyx_k_r98[] = "r98"; -static const char __pyx_k_r99[] = "r99"; -static const char __pyx_k_area[] = "area"; -static const char __pyx_k_init[] = "__init__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_name[] = "__name__"; -static const char __pyx_k_r100[] = "r100"; -static const char __pyx_k_r101[] = "r101"; -static const char __pyx_k_r102[] = "r102"; -static const char __pyx_k_r103[] = "r103"; -static const char __pyx_k_r104[] = "r104"; -static const char __pyx_k_r105[] = "r105"; -static const char __pyx_k_r106[] = "r106"; -static const char __pyx_k_r107[] = "r107"; -static const char __pyx_k_r108[] = "r108"; -static const char __pyx_k_r109[] = "r109"; -static const char __pyx_k_r110[] = "r110"; -static const char __pyx_k_r111[] = "r111"; -static const char __pyx_k_r112[] = "r112"; -static const char __pyx_k_r113[] = "r113"; -static const char __pyx_k_r114[] = "r114"; -static const char __pyx_k_r115[] = "r115"; -static const char __pyx_k_r116[] = "r116"; -static const char __pyx_k_r117[] = "r117"; -static const char __pyx_k_r118[] = "r118"; -static const char __pyx_k_r119[] = "r119"; -static const char __pyx_k_r120[] = "r120"; -static const char __pyx_k_r121[] = "r121"; -static const char __pyx_k_r122[] = "r122"; -static const char __pyx_k_r123[] = "r123"; -static const char __pyx_k_r124[] = "r124"; -static const char __pyx_k_r125[] = "r125"; -static const char __pyx_k_r126[] = "r126"; -static const char __pyx_k_r127[] = "r127"; -static const char __pyx_k_r128[] = "r128"; -static const char __pyx_k_r129[] = "r129"; -static const char __pyx_k_r130[] = "r130"; -static const char __pyx_k_r131[] = "r131"; -static const char __pyx_k_r132[] = "r132"; -static const char __pyx_k_self[] = "self"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_cython[] = "cython"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_lineTo[] = "_lineTo"; -static const char __pyx_k_module[] = "__module__"; -static const char __pyx_k_moveTo[] = "_moveTo"; -static const char 
__pyx_k_BasePen[] = "BasePen"; -static const char __pyx_k_endPath[] = "_endPath"; -static const char __pyx_k_momentX[] = "momentX"; -static const char __pyx_k_momentY[] = "momentY"; -static const char __pyx_k_prepare[] = "__prepare__"; -static const char __pyx_k_COMPILED[] = "COMPILED"; -static const char __pyx_k_glyphset[] = "glyphset"; -static const char __pyx_k_momentXX[] = "momentXX"; -static const char __pyx_k_momentXY[] = "momentXY"; -static const char __pyx_k_momentYY[] = "momentYY"; -static const char __pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_closePath[] = "_closePath"; -static const char __pyx_k_metaclass[] = "__metaclass__"; -static const char __pyx_k_MomentsPen[] = "MomentsPen"; -static const char __pyx_k_curveToOne[] = "_curveToOne"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_qCurveToOne[] = "_qCurveToOne"; -static const char __pyx_k_printGreenPen[] = "printGreenPen"; -static const char __pyx_k_AttributeError[] = "AttributeError"; -static const char __pyx_k_fontTools_misc[] = "fontTools.misc"; -static const char __pyx_k_getCurrentPoint[] = "_getCurrentPoint"; -static const char __pyx_k_OpenContourError[] = "OpenContourError"; -static const char __pyx_k_MomentsPen___init[] = "MomentsPen.__init__"; -static const char __pyx_k_MomentsPen__lineTo[] = "MomentsPen._lineTo"; -static const char __pyx_k_MomentsPen__moveTo[] = "MomentsPen._moveTo"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_MomentsPen__endPath[] = "MomentsPen._endPath"; -static const char __pyx_k_MomentsPen__closePath[] = "MomentsPen._closePath"; -static const char __pyx_k_MomentsPen__curveToOne[] = "MomentsPen._curveToOne"; -static const char __pyx_k_MomentsPen__startPoint[] = "_MomentsPen__startPoint"; -static const char __pyx_k_fontTools_misc_symfont[] = "fontTools.misc.symfont"; -static const char __pyx_k_fontTools_pens_basePen[] = "fontTools.pens.basePen"; -static const char __pyx_k_MomentsPen__qCurveToOne[] = "MomentsPen._qCurveToOne"; -static const char __pyx_k_fontTools_pens_momentsPen[] = "fontTools.pens.momentsPen"; -static const char __pyx_k_Green_theorem_is_not_defined_on[] = "Green theorem is not defined on open contours."; -static const char __pyx_k_Lib_fontTools_pens_momentsPen_py[] = "Lib/fontTools/pens/momentsPen.py"; -static PyObject *__pyx_n_s_AttributeError; -static PyObject *__pyx_n_s_BasePen; -static PyObject *__pyx_n_s_COMPILED; -static PyObject *__pyx_kp_u_Green_theorem_is_not_defined_on; -static PyObject *__pyx_n_s_ImportError; -static PyObject *__pyx_kp_s_Lib_fontTools_pens_momentsPen_py; -static PyObject *__pyx_n_s_MomentsPen; -static PyObject *__pyx_n_u_MomentsPen; -static PyObject *__pyx_n_s_MomentsPen___init; -static PyObject *__pyx_n_s_MomentsPen__closePath; -static PyObject *__pyx_n_s_MomentsPen__curveToOne; -static PyObject *__pyx_n_s_MomentsPen__endPath; -static PyObject *__pyx_n_s_MomentsPen__lineTo; -static PyObject *__pyx_n_s_MomentsPen__moveTo; -static PyObject *__pyx_n_s_MomentsPen__qCurveToOne; -static PyObject *__pyx_n_s_MomentsPen__startPoint; -static PyObject *__pyx_n_s_OpenContourError; -static PyObject *__pyx_n_s_all; -static PyObject *__pyx_n_s_area; -static PyObject *__pyx_n_u_area; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_n_s_closePath; -static PyObject *__pyx_n_s_curveToOne; -static PyObject *__pyx_n_s_cython; -static PyObject *__pyx_n_s_doc; -static PyObject *__pyx_n_s_endPath; -static PyObject *__pyx_n_s_fontTools_misc; 
-static PyObject *__pyx_n_s_fontTools_misc_symfont; -static PyObject *__pyx_n_s_fontTools_pens_basePen; -static PyObject *__pyx_n_s_fontTools_pens_momentsPen; -static PyObject *__pyx_n_s_getCurrentPoint; -static PyObject *__pyx_n_s_glyphset; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_init; -static PyObject *__pyx_n_s_lineTo; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_u_main; -static PyObject *__pyx_n_s_metaclass; -static PyObject *__pyx_n_s_module; -static PyObject *__pyx_n_s_momentX; -static PyObject *__pyx_n_u_momentX; -static PyObject *__pyx_n_s_momentXX; -static PyObject *__pyx_n_u_momentXX; -static PyObject *__pyx_n_s_momentXY; -static PyObject *__pyx_n_u_momentXY; -static PyObject *__pyx_n_s_momentY; -static PyObject *__pyx_n_u_momentY; -static PyObject *__pyx_n_s_momentYY; -static PyObject *__pyx_n_u_momentYY; -static PyObject *__pyx_n_s_moveTo; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_p0; -static PyObject *__pyx_n_s_p1; -static PyObject *__pyx_n_s_p2; -static PyObject *__pyx_n_s_p3; -static PyObject *__pyx_n_s_prepare; -static PyObject *__pyx_n_s_printGreenPen; -static PyObject *__pyx_n_s_qCurveToOne; -static PyObject *__pyx_n_s_qualname; -static PyObject *__pyx_n_s_r0; -static PyObject *__pyx_n_s_r1; -static PyObject *__pyx_n_s_r10; -static PyObject *__pyx_n_s_r100; -static PyObject *__pyx_n_s_r101; -static PyObject *__pyx_n_s_r102; -static PyObject *__pyx_n_s_r103; -static PyObject *__pyx_n_s_r104; -static PyObject *__pyx_n_s_r105; -static PyObject *__pyx_n_s_r106; -static PyObject *__pyx_n_s_r107; -static PyObject *__pyx_n_s_r108; -static PyObject *__pyx_n_s_r109; -static PyObject *__pyx_n_s_r11; -static PyObject *__pyx_n_s_r110; -static PyObject *__pyx_n_s_r111; -static PyObject *__pyx_n_s_r112; -static PyObject *__pyx_n_s_r113; -static PyObject *__pyx_n_s_r114; -static PyObject *__pyx_n_s_r115; -static PyObject *__pyx_n_s_r116; -static PyObject *__pyx_n_s_r117; -static PyObject *__pyx_n_s_r118; -static PyObject *__pyx_n_s_r119; -static PyObject *__pyx_n_s_r12; -static PyObject *__pyx_n_s_r120; -static PyObject *__pyx_n_s_r121; -static PyObject *__pyx_n_s_r122; -static PyObject *__pyx_n_s_r123; -static PyObject *__pyx_n_s_r124; -static PyObject *__pyx_n_s_r125; -static PyObject *__pyx_n_s_r126; -static PyObject *__pyx_n_s_r127; -static PyObject *__pyx_n_s_r128; -static PyObject *__pyx_n_s_r129; -static PyObject *__pyx_n_s_r13; -static PyObject *__pyx_n_s_r130; -static PyObject *__pyx_n_s_r131; -static PyObject *__pyx_n_s_r132; -static PyObject *__pyx_n_s_r14; -static PyObject *__pyx_n_s_r15; -static PyObject *__pyx_n_s_r16; -static PyObject *__pyx_n_s_r17; -static PyObject *__pyx_n_s_r18; -static PyObject *__pyx_n_s_r19; -static PyObject *__pyx_n_s_r2; -static PyObject *__pyx_n_s_r20; -static PyObject *__pyx_n_s_r21; -static PyObject *__pyx_n_s_r22; -static PyObject *__pyx_n_s_r23; -static PyObject *__pyx_n_s_r24; -static PyObject *__pyx_n_s_r25; -static PyObject *__pyx_n_s_r26; -static PyObject *__pyx_n_s_r27; -static PyObject *__pyx_n_s_r28; -static PyObject *__pyx_n_s_r29; -static PyObject *__pyx_n_s_r3; -static PyObject *__pyx_n_s_r30; -static PyObject *__pyx_n_s_r31; -static PyObject *__pyx_n_s_r32; -static PyObject *__pyx_n_s_r33; -static PyObject *__pyx_n_s_r34; -static PyObject *__pyx_n_s_r35; -static PyObject *__pyx_n_s_r36; -static PyObject *__pyx_n_s_r37; -static PyObject *__pyx_n_s_r38; -static PyObject *__pyx_n_s_r39; -static PyObject *__pyx_n_s_r4; -static PyObject *__pyx_n_s_r40; -static PyObject 
*__pyx_n_s_r41; -static PyObject *__pyx_n_s_r42; -static PyObject *__pyx_n_s_r43; -static PyObject *__pyx_n_s_r44; -static PyObject *__pyx_n_s_r45; -static PyObject *__pyx_n_s_r46; -static PyObject *__pyx_n_s_r47; -static PyObject *__pyx_n_s_r48; -static PyObject *__pyx_n_s_r49; -static PyObject *__pyx_n_s_r5; -static PyObject *__pyx_n_s_r50; -static PyObject *__pyx_n_s_r51; -static PyObject *__pyx_n_s_r52; -static PyObject *__pyx_n_s_r53; -static PyObject *__pyx_n_s_r54; -static PyObject *__pyx_n_s_r55; -static PyObject *__pyx_n_s_r56; -static PyObject *__pyx_n_s_r57; -static PyObject *__pyx_n_s_r58; -static PyObject *__pyx_n_s_r59; -static PyObject *__pyx_n_s_r6; -static PyObject *__pyx_n_s_r60; -static PyObject *__pyx_n_s_r61; -static PyObject *__pyx_n_s_r62; -static PyObject *__pyx_n_s_r63; -static PyObject *__pyx_n_s_r64; -static PyObject *__pyx_n_s_r65; -static PyObject *__pyx_n_s_r66; -static PyObject *__pyx_n_s_r67; -static PyObject *__pyx_n_s_r68; -static PyObject *__pyx_n_s_r69; -static PyObject *__pyx_n_s_r7; -static PyObject *__pyx_n_s_r70; -static PyObject *__pyx_n_s_r71; -static PyObject *__pyx_n_s_r72; -static PyObject *__pyx_n_s_r73; -static PyObject *__pyx_n_s_r74; -static PyObject *__pyx_n_s_r75; -static PyObject *__pyx_n_s_r76; -static PyObject *__pyx_n_s_r77; -static PyObject *__pyx_n_s_r78; -static PyObject *__pyx_n_s_r79; -static PyObject *__pyx_n_s_r8; -static PyObject *__pyx_n_s_r80; -static PyObject *__pyx_n_s_r81; -static PyObject *__pyx_n_s_r82; -static PyObject *__pyx_n_s_r83; -static PyObject *__pyx_n_s_r84; -static PyObject *__pyx_n_s_r85; -static PyObject *__pyx_n_s_r86; -static PyObject *__pyx_n_s_r87; -static PyObject *__pyx_n_s_r88; -static PyObject *__pyx_n_s_r89; -static PyObject *__pyx_n_s_r9; -static PyObject *__pyx_n_s_r90; -static PyObject *__pyx_n_s_r91; -static PyObject *__pyx_n_s_r92; -static PyObject *__pyx_n_s_r93; -static PyObject *__pyx_n_s_r94; -static PyObject *__pyx_n_s_r95; -static PyObject *__pyx_n_s_r96; -static PyObject *__pyx_n_s_r97; -static PyObject *__pyx_n_s_r98; -static PyObject *__pyx_n_s_r99; -static PyObject *__pyx_n_s_self; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_n_s_x; -static PyObject *__pyx_n_s_x0; -static PyObject *__pyx_n_s_x1; -static PyObject *__pyx_n_s_x2; -static PyObject *__pyx_n_s_x3; -static PyObject *__pyx_n_s_y; -static PyObject *__pyx_n_s_y0; -static PyObject *__pyx_n_s_y1; -static PyObject *__pyx_n_s_y2; -static PyObject *__pyx_n_s_y3; -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2); /* proto */ -static PyObject 
*__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3); /* proto */ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_2; -static PyObject *__pyx_tuple_; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__16; -static PyObject *__pyx_codeobj__2; -static PyObject *__pyx_codeobj__5; -static PyObject *__pyx_codeobj__7; -static PyObject *__pyx_codeobj__9; -static PyObject *__pyx_codeobj__11; -static PyObject *__pyx_codeobj__13; -static PyObject *__pyx_codeobj__15; -/* Late includes */ - -/* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__[] = "MomentsPen.__init__(self, glyphset=None)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_glyphset = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_glyphset,0}; - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)((PyObject *)Py_None)); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_glyphset); - if (value) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 18, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_glyphset = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - 
__Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 18, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(__pyx_self, __pyx_v_self, __pyx_v_glyphset); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - - /* "fontTools/pens/momentsPen.py":19 - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) # <<<<<<<<<<<<<< - * - * self.area = 0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_1); - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_1); - } else - #endif - { - __pyx_t_5 = PyTuple_New(2+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (__pyx_t_2) { - __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); __pyx_t_2 = NULL; - } - __Pyx_INCREF(__pyx_v_self); - __Pyx_GIVEREF(__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_self); - __Pyx_INCREF(__pyx_v_glyphset); - __Pyx_GIVEREF(__pyx_v_glyphset); - PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_glyphset); - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":21 - * BasePen.__init__(self, glyphset) - * - * self.area = 0 # 
<<<<<<<<<<<<<< - * self.momentX = 0 - * self.momentY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_int_0) < 0) __PYX_ERR(0, 21, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":22 - * - * self.area = 0 - * self.momentX = 0 # <<<<<<<<<<<<<< - * self.momentY = 0 - * self.momentXX = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_int_0) < 0) __PYX_ERR(0, 22, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":23 - * self.area = 0 - * self.momentX = 0 - * self.momentY = 0 # <<<<<<<<<<<<<< - * self.momentXX = 0 - * self.momentXY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_int_0) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":24 - * self.momentX = 0 - * self.momentY = 0 - * self.momentXX = 0 # <<<<<<<<<<<<<< - * self.momentXY = 0 - * self.momentYY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_int_0) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":25 - * self.momentY = 0 - * self.momentXX = 0 - * self.momentXY = 0 # <<<<<<<<<<<<<< - * self.momentYY = 0 - * - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_int_0) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":26 - * self.momentXX = 0 - * self.momentXY = 0 - * self.momentYY = 0 # <<<<<<<<<<<<<< - * - * def _moveTo(self, p0): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_int_0) < 0) __PYX_ERR(0, 26, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo[] = "MomentsPen._moveTo(self, p0)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo = {"_moveTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p0 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_moveTo (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p0,0}; - PyObject* values[2] = {0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch 
(pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p0)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, 1); __PYX_ERR(0, 28, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_moveTo") < 0)) __PYX_ERR(0, 28, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_p0 = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 28, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(__pyx_self, __pyx_v_self, __pyx_v_p0); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_moveTo", 0); - - /* "fontTools/pens/momentsPen.py":29 - * - * def _moveTo(self, p0): - * self.__startPoint = p0 # <<<<<<<<<<<<<< - * - * def _closePath(self): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint, __pyx_v_p0) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath[] = "MomentsPen._closePath(self)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath = {"_closePath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath}; -static PyObject 
*__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_closePath (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(__pyx_self, ((PyObject *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_p0 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_closePath", 0); - - /* "fontTools/pens/momentsPen.py":32 - * - * def _closePath(self): - * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * if p0 != self.__startPoint: - * self._lineTo(self.__startPoint) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_p0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":33 - * def _closePath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * self._lineTo(self.__startPoint) - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_4) { - - /* "fontTools/pens/momentsPen.py":34 - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - * self._lineTo(self.__startPoint) # <<<<<<<<<<<<<< - * - * def _endPath(self): - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lineTo); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":33 - * def _closePath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * self._lineTo(self.__startPoint) - * - */ - } - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._closePath", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p0); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath[] = "MomentsPen._endPath(self)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath = {"_endPath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_endPath (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(__pyx_self, ((PyObject *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_p0 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_endPath", 0); - - /* "fontTools/pens/momentsPen.py":37 - * - * def _endPath(self): - * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * if p0 != self.__startPoint: - * # Green theorem is not defined on open contours. 
- */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_p0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":38 - * def _endPath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * # Green theorem is not defined on open contours. - * raise OpenContourError("Green theorem is not defined on open contours.") - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(__pyx_t_4)) { - - /* "fontTools/pens/momentsPen.py":40 - * if p0 != self.__startPoint: - * # Green theorem is not defined on open contours. - * raise OpenContourError("Green theorem is not defined on open contours.") # <<<<<<<<<<<<<< - * - * @cython.locals(r0=cython.double) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_kp_u_Green_theorem_is_not_defined_on) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_kp_u_Green_theorem_is_not_defined_on); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 40, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":38 - * def _endPath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * # Green theorem is not defined on open contours. 
- * raise OpenContourError("Green theorem is not defined on open contours.") - */ - } - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._endPath", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p0); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo[] = "MomentsPen._lineTo(self, p1)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo = {"_lineTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_lineTo (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,0}; - PyObject* values[2] = {0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, 1); __PYX_ERR(0, 57, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_lineTo") < 0)) __PYX_ERR(0, 57, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 57, __pyx_L3_error) - __pyx_L3_error:; - 
__Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(__pyx_self, __pyx_v_self, __pyx_v_p1); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1) { - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r12; - double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_lineTo", 0); - - /* "fontTools/pens/momentsPen.py":58 - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? 
__Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 58, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 58, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 58, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":59 - * def _lineTo(self, p1): - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * - * r0 = x1 * y0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 59, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = 
PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 59, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 59, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":61 - * x1, y1 = p1 - * - * r0 = x1 * y0 # <<<<<<<<<<<<<< - * r1 = x1 * y1 - * r2 = x1**2 - */ - __pyx_v_r0 = (__pyx_v_x1 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":62 - * - * r0 = x1 * y0 - * r1 = x1 * y1 # <<<<<<<<<<<<<< - * r2 = x1**2 - * r3 = r2 * y1 - */ - __pyx_v_r1 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":63 - * r0 = x1 * y0 - * r1 = x1 * y1 - * r2 = x1**2 # <<<<<<<<<<<<<< - * r3 = r2 * y1 - * r4 = y0 - y1 - */ - __pyx_v_r2 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":64 - * r1 = x1 * y1 - * r2 = x1**2 - * r3 = r2 * y1 # <<<<<<<<<<<<<< - * r4 = y0 - y1 - * r5 = r4 * x0 - */ - __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":65 - * r2 = x1**2 - * r3 = r2 * y1 - * r4 = y0 - y1 # <<<<<<<<<<<<<< - * r5 = r4 * x0 - * r6 = x0**2 - */ - __pyx_v_r4 = (__pyx_v_y0 - __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":66 - * r3 = r2 * y1 - * r4 = y0 - y1 - * r5 = r4 * x0 # <<<<<<<<<<<<<< - * r6 = x0**2 - * r7 = 2 * y0 - */ - __pyx_v_r5 = (__pyx_v_r4 * __pyx_v_x0); - - /* "fontTools/pens/momentsPen.py":67 - * r4 = y0 - y1 - * r5 = r4 * x0 - * r6 = x0**2 # <<<<<<<<<<<<<< - * r7 = 2 * y0 - * r8 = y0**2 - */ - __pyx_v_r6 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":68 - * r5 = r4 * x0 - * r6 = x0**2 - * r7 = 2 * y0 # <<<<<<<<<<<<<< - * r8 = y0**2 - * r9 = y1**2 - */ - __pyx_v_r7 = (2.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":69 - * r6 = x0**2 - * r7 = 2 * y0 - * r8 = y0**2 # <<<<<<<<<<<<<< - * r9 = y1**2 - * r10 = x1**3 - */ - __pyx_v_r8 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":70 - * r7 = 2 * y0 - * r8 = y0**2 - * r9 = y1**2 # <<<<<<<<<<<<<< - * r10 = x1**3 - * r11 = y0**3 - */ - __pyx_v_r9 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":71 - * r8 = y0**2 - * r9 = y1**2 - * r10 = x1**3 # <<<<<<<<<<<<<< - * r11 = y0**3 - * r12 = y1**3 - */ - __pyx_v_r10 = pow(__pyx_v_x1, 
3.0); - - /* "fontTools/pens/momentsPen.py":72 - * r9 = y1**2 - * r10 = x1**3 - * r11 = y0**3 # <<<<<<<<<<<<<< - * r12 = y1**3 - * - */ - __pyx_v_r11 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":73 - * r10 = x1**3 - * r11 = y0**3 - * r12 = y1**3 # <<<<<<<<<<<<<< - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - */ - __pyx_v_r12 = pow(__pyx_v_y1, 3.0); - - /* "fontTools/pens/momentsPen.py":75 - * r12 = y1**3 - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 # <<<<<<<<<<<<<< - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyFloat_FromDouble(((((-__pyx_v_r0) / 2.0) - (__pyx_v_r1 / 2.0)) + ((__pyx_v_x0 * (__pyx_v_y0 + __pyx_v_y1)) / 2.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":76 - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 # <<<<<<<<<<<<<< - * self.momentY += ( - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r2) * __pyx_v_y0) / 6.0) - (__pyx_v_r3 / 3.0)) - ((__pyx_v_r5 * __pyx_v_x1) / 6.0)) + ((__pyx_v_r6 * (__pyx_v_r7 + __pyx_v_y1)) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":77 - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( # <<<<<<<<<<<<<< - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":78 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r0) * __pyx_v_y1) / 6.0) - ((__pyx_v_r8 * __pyx_v_x1) / 6.0)) - ((__pyx_v_r9 * __pyx_v_x1) / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r8 + __pyx_v_r9) + (__pyx_v_y0 * __pyx_v_y1))) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 78, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* 
"fontTools/pens/momentsPen.py":77 - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( # <<<<<<<<<<<<<< - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":80 - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r10 * y0 / 12 - * - r10 * y1 / 4 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":85 - * - r2 * r5 / 12 - * - r4 * r6 * x1 / 12 - * + x0**3 * (3 * y0 + y1) / 12 # <<<<<<<<<<<<<< - * ) - * self.momentXY += ( - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r10) * __pyx_v_y0) / 12.0) - ((__pyx_v_r10 * __pyx_v_y1) / 4.0)) - ((__pyx_v_r2 * __pyx_v_r5) / 12.0)) - (((__pyx_v_r4 * __pyx_v_r6) * __pyx_v_x1) / 12.0)) + ((pow(__pyx_v_x0, 3.0) * ((3.0 * __pyx_v_y0) + __pyx_v_y1)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":80 - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r10 * y0 / 12 - * - r10 * y1 / 4 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":87 - * + x0**3 * (3 * y0 + y1) / 12 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r2 * r8 / 24 - * - r2 * r9 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":92 - * - r3 * r7 / 24 - * + r6 * (r7 * y1 + 3 * r8 + r9) / 24 - * - x0 * x1 * (r8 - r9) / 12 # <<<<<<<<<<<<<< - * ) - * self.momentYY += ( - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r2) * __pyx_v_r8) / 24.0) - ((__pyx_v_r2 * __pyx_v_r9) / 8.0)) - ((__pyx_v_r3 * __pyx_v_r7) / 24.0)) + ((__pyx_v_r6 * (((__pyx_v_r7 * __pyx_v_y1) + (3.0 * __pyx_v_r8)) + __pyx_v_r9)) / 24.0)) - (((__pyx_v_x0 * __pyx_v_x1) * (__pyx_v_r8 - __pyx_v_r9)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":87 - * + x0**3 * (3 * y0 + y1) / 12 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r2 * r8 / 24 - * - r2 * r9 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 87, 
__pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":94 - * - x0 * x1 * (r8 - r9) / 12 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r0 * r9 / 12 - * - r1 * r8 / 12 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":99 - * - r11 * x1 / 12 - * - r12 * x1 / 12 - * + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12 # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r0) * __pyx_v_r9) / 12.0) - ((__pyx_v_r1 * __pyx_v_r8) / 12.0)) - ((__pyx_v_r11 * __pyx_v_x1) / 12.0)) - ((__pyx_v_r12 * __pyx_v_x1) / 12.0)) + ((__pyx_v_x0 * (((__pyx_v_r11 + __pyx_v_r12) + (__pyx_v_r8 * __pyx_v_y1)) + (__pyx_v_r9 * __pyx_v_y0))) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":94 - * - x0 * x1 * (r8 - r9) / 12 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r0 * r9 / 12 - * - r1 * r8 / 12 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne[] = "MomentsPen._qCurveToOne(self, p1, p2)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne = {"_qCurveToOne", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - PyObject *__pyx_v_p2 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_qCurveToOne (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = 
{&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 1); __PYX_ERR(0, 159, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 2); __PYX_ERR(0, 159, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_qCurveToOne") < 0)) __PYX_ERR(0, 159, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - __pyx_v_p2 = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 159, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2) { - double __pyx_v_x2; - double __pyx_v_y2; - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r53; - double __pyx_v_r52; - double __pyx_v_r51; - double __pyx_v_r50; - double __pyx_v_r49; - double __pyx_v_r48; - double __pyx_v_r47; - double __pyx_v_r46; - double __pyx_v_r45; - double __pyx_v_r44; - double __pyx_v_r43; - double __pyx_v_r42; - double __pyx_v_r41; - double __pyx_v_r40; - double __pyx_v_r39; - double __pyx_v_r38; - double __pyx_v_r37; - double __pyx_v_r36; - double __pyx_v_r35; - double __pyx_v_r34; - double __pyx_v_r33; - double __pyx_v_r32; - double __pyx_v_r31; - double __pyx_v_r30; - double __pyx_v_r29; - double __pyx_v_r28; - double __pyx_v_r27; - double __pyx_v_r26; - double __pyx_v_r25; - double __pyx_v_r24; - double __pyx_v_r23; - double __pyx_v_r22; - double __pyx_v_r21; - double __pyx_v_r20; - double __pyx_v_r19; - double __pyx_v_r18; - double __pyx_v_r17; - double __pyx_v_r16; - double __pyx_v_r15; - double __pyx_v_r14; - double __pyx_v_r13; - double __pyx_v_r12; 
- double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_qCurveToOne", 0); - - /* "fontTools/pens/momentsPen.py":160 - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * x2, y2 = p2 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 160, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 160, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - 
__PYX_ERR(0, 160, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":161 - * def _qCurveToOne(self, p1, p2): - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * x2, y2 = p2 - * - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 161, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 161, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 161, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":162 - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - * x2, y2 = p2 # <<<<<<<<<<<<<< - * - * r0 = 2 * y1 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) { - PyObject* sequence = __pyx_v_p2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 162, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if 
(likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 162, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 162, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_x2 = __pyx_t_6; - __pyx_v_y2 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":164 - * x2, y2 = p2 - * - * r0 = 2 * y1 # <<<<<<<<<<<<<< - * r1 = r0 * x2 - * r2 = x2 * y2 - */ - __pyx_v_r0 = (2.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":165 - * - * r0 = 2 * y1 - * r1 = r0 * x2 # <<<<<<<<<<<<<< - * r2 = x2 * y2 - * r3 = 3 * r2 - */ - __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":166 - * r0 = 2 * y1 - * r1 = r0 * x2 - * r2 = x2 * y2 # <<<<<<<<<<<<<< - * r3 = 3 * r2 - * r4 = 2 * x1 - */ - __pyx_v_r2 = (__pyx_v_x2 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":167 - * r1 = r0 * x2 - * r2 = x2 * y2 - * r3 = 3 * r2 # <<<<<<<<<<<<<< - * r4 = 2 * x1 - * r5 = 3 * y0 - */ - __pyx_v_r3 = (3.0 * __pyx_v_r2); - - /* "fontTools/pens/momentsPen.py":168 - * r2 = x2 * y2 - * r3 = 3 * r2 - * r4 = 2 * x1 # <<<<<<<<<<<<<< - * r5 = 3 * y0 - * r6 = x1**2 - */ - __pyx_v_r4 = (2.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":169 - * r3 = 3 * r2 - * r4 = 2 * x1 - * r5 = 3 * y0 # <<<<<<<<<<<<<< - * r6 = x1**2 - * r7 = x2**2 - */ - __pyx_v_r5 = (3.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":170 - * r4 = 2 * x1 - * r5 = 3 * y0 - * r6 = x1**2 # <<<<<<<<<<<<<< - * r7 = x2**2 - * r8 = 4 * y1 - */ - __pyx_v_r6 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":171 - * r5 = 3 * y0 - * r6 = x1**2 - * r7 = x2**2 # <<<<<<<<<<<<<< - * r8 = 4 * y1 - * r9 = 10 * y2 - */ - __pyx_v_r7 = pow(__pyx_v_x2, 2.0); - - /* "fontTools/pens/momentsPen.py":172 - * r6 = x1**2 - * r7 = x2**2 - * r8 = 4 * y1 # <<<<<<<<<<<<<< - * r9 = 10 * y2 - * r10 = 2 * y2 - */ - __pyx_v_r8 = (4.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":173 - * r7 = 
x2**2 - * r8 = 4 * y1 - * r9 = 10 * y2 # <<<<<<<<<<<<<< - * r10 = 2 * y2 - * r11 = r4 * x2 - */ - __pyx_v_r9 = (10.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":174 - * r8 = 4 * y1 - * r9 = 10 * y2 - * r10 = 2 * y2 # <<<<<<<<<<<<<< - * r11 = r4 * x2 - * r12 = x0**2 - */ - __pyx_v_r10 = (2.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":175 - * r9 = 10 * y2 - * r10 = 2 * y2 - * r11 = r4 * x2 # <<<<<<<<<<<<<< - * r12 = x0**2 - * r13 = 10 * y0 - */ - __pyx_v_r11 = (__pyx_v_r4 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":176 - * r10 = 2 * y2 - * r11 = r4 * x2 - * r12 = x0**2 # <<<<<<<<<<<<<< - * r13 = 10 * y0 - * r14 = r4 * y2 - */ - __pyx_v_r12 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":177 - * r11 = r4 * x2 - * r12 = x0**2 - * r13 = 10 * y0 # <<<<<<<<<<<<<< - * r14 = r4 * y2 - * r15 = x2 * y0 - */ - __pyx_v_r13 = (10.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":178 - * r12 = x0**2 - * r13 = 10 * y0 - * r14 = r4 * y2 # <<<<<<<<<<<<<< - * r15 = x2 * y0 - * r16 = 4 * x1 - */ - __pyx_v_r14 = (__pyx_v_r4 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":179 - * r13 = 10 * y0 - * r14 = r4 * y2 - * r15 = x2 * y0 # <<<<<<<<<<<<<< - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 - */ - __pyx_v_r15 = (__pyx_v_x2 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":180 - * r14 = r4 * y2 - * r15 = x2 * y0 - * r16 = 4 * x1 # <<<<<<<<<<<<<< - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 - */ - __pyx_v_r16 = (4.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":181 - * r15 = x2 * y0 - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 # <<<<<<<<<<<<<< - * r18 = r2 * r8 - * r19 = y1**2 - */ - __pyx_v_r17 = ((__pyx_v_r0 * __pyx_v_x1) + __pyx_v_r2); - - /* "fontTools/pens/momentsPen.py":182 - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 # <<<<<<<<<<<<<< - * r19 = y1**2 - * r20 = 2 * r19 - */ - __pyx_v_r18 = (__pyx_v_r2 * __pyx_v_r8); - - /* "fontTools/pens/momentsPen.py":183 - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 - * r19 = y1**2 # <<<<<<<<<<<<<< - * r20 = 2 * r19 - * r21 = y2**2 - */ - __pyx_v_r19 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":184 - * r18 = r2 * r8 - * r19 = y1**2 - * r20 = 2 * r19 # <<<<<<<<<<<<<< - * r21 = y2**2 - * r22 = r21 * x2 - */ - __pyx_v_r20 = (2.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":185 - * r19 = y1**2 - * r20 = 2 * r19 - * r21 = y2**2 # <<<<<<<<<<<<<< - * r22 = r21 * x2 - * r23 = 5 * r22 - */ - __pyx_v_r21 = pow(__pyx_v_y2, 2.0); - - /* "fontTools/pens/momentsPen.py":186 - * r20 = 2 * r19 - * r21 = y2**2 - * r22 = r21 * x2 # <<<<<<<<<<<<<< - * r23 = 5 * r22 - * r24 = y0**2 - */ - __pyx_v_r22 = (__pyx_v_r21 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":187 - * r21 = y2**2 - * r22 = r21 * x2 - * r23 = 5 * r22 # <<<<<<<<<<<<<< - * r24 = y0**2 - * r25 = y0 * y2 - */ - __pyx_v_r23 = (5.0 * __pyx_v_r22); - - /* "fontTools/pens/momentsPen.py":188 - * r22 = r21 * x2 - * r23 = 5 * r22 - * r24 = y0**2 # <<<<<<<<<<<<<< - * r25 = y0 * y2 - * r26 = 5 * r24 - */ - __pyx_v_r24 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":189 - * r23 = 5 * r22 - * r24 = y0**2 - * r25 = y0 * y2 # <<<<<<<<<<<<<< - * r26 = 5 * r24 - * r27 = x1**3 - */ - __pyx_v_r25 = (__pyx_v_y0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":190 - * r24 = y0**2 - * r25 = y0 * y2 - * r26 = 5 * r24 # <<<<<<<<<<<<<< - * r27 = x1**3 - * r28 = x2**3 - */ - __pyx_v_r26 = (5.0 * __pyx_v_r24); - - /* "fontTools/pens/momentsPen.py":191 - * r25 = y0 * y2 - * r26 = 5 * r24 - * r27 = x1**3 # <<<<<<<<<<<<<< - * r28 = x2**3 - * r29 = 30 * 
y1 - */ - __pyx_v_r27 = pow(__pyx_v_x1, 3.0); - - /* "fontTools/pens/momentsPen.py":192 - * r26 = 5 * r24 - * r27 = x1**3 - * r28 = x2**3 # <<<<<<<<<<<<<< - * r29 = 30 * y1 - * r30 = 6 * y1 - */ - __pyx_v_r28 = pow(__pyx_v_x2, 3.0); - - /* "fontTools/pens/momentsPen.py":193 - * r27 = x1**3 - * r28 = x2**3 - * r29 = 30 * y1 # <<<<<<<<<<<<<< - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 - */ - __pyx_v_r29 = (30.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":194 - * r28 = x2**3 - * r29 = 30 * y1 - * r30 = 6 * y1 # <<<<<<<<<<<<<< - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 - */ - __pyx_v_r30 = (6.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":195 - * r29 = 30 * y1 - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 # <<<<<<<<<<<<<< - * r32 = 5 * y2 - * r33 = 12 * r6 - */ - __pyx_v_r31 = ((10.0 * __pyx_v_r7) * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":196 - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 # <<<<<<<<<<<<<< - * r33 = 12 * r6 - * r34 = 30 * x1 - */ - __pyx_v_r32 = (5.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":197 - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 - * r33 = 12 * r6 # <<<<<<<<<<<<<< - * r34 = 30 * x1 - * r35 = x1 * y1 - */ - __pyx_v_r33 = (12.0 * __pyx_v_r6); - - /* "fontTools/pens/momentsPen.py":198 - * r32 = 5 * y2 - * r33 = 12 * r6 - * r34 = 30 * x1 # <<<<<<<<<<<<<< - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 - */ - __pyx_v_r34 = (30.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":199 - * r33 = 12 * r6 - * r34 = 30 * x1 - * r35 = x1 * y1 # <<<<<<<<<<<<<< - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 - */ - __pyx_v_r35 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":200 - * r34 = 30 * x1 - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 # <<<<<<<<<<<<<< - * r37 = 12 * x1 - * r38 = 20 * r6 - */ - __pyx_v_r36 = (__pyx_v_r3 + (20.0 * __pyx_v_r35)); - - /* "fontTools/pens/momentsPen.py":201 - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 # <<<<<<<<<<<<<< - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 - */ - __pyx_v_r37 = (12.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":202 - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 - * r38 = 20 * r6 # <<<<<<<<<<<<<< - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 - */ - __pyx_v_r38 = (20.0 * __pyx_v_r6); - - /* "fontTools/pens/momentsPen.py":203 - * r37 = 12 * x1 - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 # <<<<<<<<<<<<<< - * r40 = r32 * r7 - * r41 = 60 * y1 - */ - __pyx_v_r39 = ((8.0 * __pyx_v_r6) * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":204 - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 # <<<<<<<<<<<<<< - * r41 = 60 * y1 - * r42 = 20 * r19 - */ - __pyx_v_r40 = (__pyx_v_r32 * __pyx_v_r7); - - /* "fontTools/pens/momentsPen.py":205 - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 - * r41 = 60 * y1 # <<<<<<<<<<<<<< - * r42 = 20 * r19 - * r43 = 4 * r19 - */ - __pyx_v_r41 = (60.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":206 - * r40 = r32 * r7 - * r41 = 60 * y1 - * r42 = 20 * r19 # <<<<<<<<<<<<<< - * r43 = 4 * r19 - * r44 = 15 * r21 - */ - __pyx_v_r42 = (20.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":207 - * r41 = 60 * y1 - * r42 = 20 * r19 - * r43 = 4 * r19 # <<<<<<<<<<<<<< - * r44 = 15 * r21 - * r45 = 12 * x2 - */ - __pyx_v_r43 = (4.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":208 - * r42 = 20 * r19 - * r43 = 4 * r19 - * r44 = 15 * r21 # <<<<<<<<<<<<<< - * r45 = 12 * x2 - * r46 = 12 * y2 - */ - __pyx_v_r44 = (15.0 * __pyx_v_r21); - - /* "fontTools/pens/momentsPen.py":209 - * r43 = 4 * r19 - * r44 = 15 * r21 - * r45 = 12 * x2 # <<<<<<<<<<<<<< - * r46 = 12 * y2 - 
* r47 = 6 * x1 - */ - __pyx_v_r45 = (12.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":210 - * r44 = 15 * r21 - * r45 = 12 * x2 - * r46 = 12 * y2 # <<<<<<<<<<<<<< - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 - */ - __pyx_v_r46 = (12.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":211 - * r45 = 12 * x2 - * r46 = 12 * y2 - * r47 = 6 * x1 # <<<<<<<<<<<<<< - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 - */ - __pyx_v_r47 = (6.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":212 - * r46 = 12 * y2 - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 # <<<<<<<<<<<<<< - * r49 = 8 * y1**3 - * r50 = y2**3 - */ - __pyx_v_r48 = (((8.0 * __pyx_v_r19) * __pyx_v_x1) + __pyx_v_r23); - - /* "fontTools/pens/momentsPen.py":213 - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 # <<<<<<<<<<<<<< - * r50 = y2**3 - * r51 = y0**3 - */ - __pyx_v_r49 = (8.0 * pow(__pyx_v_y1, 3.0)); - - /* "fontTools/pens/momentsPen.py":214 - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 - * r50 = y2**3 # <<<<<<<<<<<<<< - * r51 = y0**3 - * r52 = 10 * y1 - */ - __pyx_v_r50 = pow(__pyx_v_y2, 3.0); - - /* "fontTools/pens/momentsPen.py":215 - * r49 = 8 * y1**3 - * r50 = y2**3 - * r51 = y0**3 # <<<<<<<<<<<<<< - * r52 = 10 * y1 - * r53 = 12 * y1 - */ - __pyx_v_r51 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":216 - * r50 = y2**3 - * r51 = y0**3 - * r52 = 10 * y1 # <<<<<<<<<<<<<< - * r53 = 12 * y1 - * - */ - __pyx_v_r52 = (10.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":217 - * r51 = y0**3 - * r52 = 10 * y1 - * r53 = 12 * y1 # <<<<<<<<<<<<<< - * - * self.area += ( - */ - __pyx_v_r53 = (12.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":219 - * r53 = 12 * y1 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 6 - * - r3 / 6 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":224 - * + x0 * (r0 + r5 + y2) / 6 - * + x1 * y2 / 3 - * - y0 * (r4 + x2) / 6 # <<<<<<<<<<<<<< - * ) - * self.momentX += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((-__pyx_v_r1) / 6.0) - (__pyx_v_r3 / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r0 + __pyx_v_r5) + __pyx_v_y2)) / 6.0)) + ((__pyx_v_x1 * __pyx_v_y2) / 3.0)) - ((__pyx_v_y0 * (__pyx_v_r4 + __pyx_v_x2)) / 6.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":219 - * r53 = 12 * y1 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 6 - * - r3 / 6 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":226 - * - y0 * (r4 + x2) / 6 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * -r11 * (-r10 + y1) / 30 - * + r12 * (r13 + r8 + y2) / 30 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":233 - * - r7 * r9 / 30 - * + x0 * (r14 - r15 - r16 * y0 + r17) / 30 - * - y0 * (r11 + 2 * r6 + r7) / 30 # <<<<<<<<<<<<<< - * ) - * self.momentY += ( - */ - __pyx_t_3 = PyFloat_FromDouble((((((((((-__pyx_v_r11) * ((-__pyx_v_r10) + __pyx_v_y1)) / 
30.0) + ((__pyx_v_r12 * ((__pyx_v_r13 + __pyx_v_r8) + __pyx_v_y2)) / 30.0)) + ((__pyx_v_r6 * __pyx_v_y2) / 15.0)) - ((__pyx_v_r7 * __pyx_v_r8) / 30.0)) - ((__pyx_v_r7 * __pyx_v_r9) / 30.0)) + ((__pyx_v_x0 * (((__pyx_v_r14 - __pyx_v_r15) - (__pyx_v_r16 * __pyx_v_y0)) + __pyx_v_r17)) / 30.0)) - ((__pyx_v_y0 * ((__pyx_v_r11 + (2.0 * __pyx_v_r6)) + __pyx_v_r7)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":226 - * - y0 * (r4 + x2) / 6 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * -r11 * (-r10 + y1) / 30 - * + r12 * (r13 + r8 + y2) / 30 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_1) < 0) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":235 - * - y0 * (r11 + 2 * r6 + r7) / 30 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r18 / 30 - * - r20 * x2 / 30 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":242 - * + x0 * (r0 * y2 + r20 + r21 + r25 + r26 + r8 * y0) / 30 - * + x1 * y2 * (r10 + y1) / 15 - * - y0 * (r1 + r17) / 30 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((-__pyx_v_r18) / 30.0) - ((__pyx_v_r20 * __pyx_v_x2) / 30.0)) - (__pyx_v_r23 / 30.0)) - ((__pyx_v_r24 * (__pyx_v_r16 + __pyx_v_x2)) / 30.0)) + ((__pyx_v_x0 * ((((((__pyx_v_r0 * __pyx_v_y2) + __pyx_v_r20) + __pyx_v_r21) + __pyx_v_r25) + __pyx_v_r26) + (__pyx_v_r8 * __pyx_v_y0))) / 30.0)) + (((__pyx_v_x1 * __pyx_v_y2) * (__pyx_v_r10 + __pyx_v_y1)) / 15.0)) - ((__pyx_v_y0 * (__pyx_v_r1 + __pyx_v_r17)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":235 - * - y0 * (r11 + 2 * r6 + r7) / 30 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r18 / 30 - * - r20 * x2 / 30 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":244 - * - y0 * (r1 + r17) / 30 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - * + 2 * r27 * y2 / 105 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":264 - * ) - * / 420 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 # <<<<<<<<<<<<<< - * ) - * self.momentXY += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((__pyx_v_r1 - (5.0 * __pyx_v_r15)) - (__pyx_v_r34 * __pyx_v_y0)) + __pyx_v_r36) + (__pyx_v_r9 * __pyx_v_x1))) / 420.0) + (((2.0 * __pyx_v_r27) * __pyx_v_y2) / 105.0)) - ((__pyx_v_r28 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r28 * __pyx_v_y2) / 4.0)) - ((__pyx_v_r31 * (__pyx_v_r0 - (3.0 * __pyx_v_y2))) / 420.0)) - 
(((__pyx_v_r6 * __pyx_v_x2) * (__pyx_v_r0 - __pyx_v_r32)) / 105.0)) + ((pow(__pyx_v_x0, 3.0) * ((__pyx_v_r30 + (21.0 * __pyx_v_y0)) + __pyx_v_y2)) / 84.0)) - ((__pyx_v_x0 * ((((((((__pyx_v_r0 * __pyx_v_r7) + (__pyx_v_r15 * __pyx_v_r37)) - (__pyx_v_r2 * __pyx_v_r37)) - (__pyx_v_r33 * __pyx_v_y2)) + (__pyx_v_r38 * __pyx_v_y0)) - __pyx_v_r39) - __pyx_v_r40) + (__pyx_v_r5 * __pyx_v_r7))) / 420.0)) - ((__pyx_v_y0 * ((((8.0 * __pyx_v_r27) + (5.0 * __pyx_v_r28)) + __pyx_v_r31) + (__pyx_v_r33 * __pyx_v_x2))) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":244 - * - y0 * (r1 + r17) / 30 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - * + 2 * r27 * y2 / 105 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_1) < 0) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":266 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - * - r16 * x2 * (r43 - r44) / 840 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":286 - * ) - * / 420 - * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 # <<<<<<<<<<<<<< - * ) - * self.momentYY += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((((__pyx_v_r13 * __pyx_v_y2) + (3.0 * __pyx_v_r21)) + (105.0 * __pyx_v_r24)) + (__pyx_v_r41 * __pyx_v_y0)) + __pyx_v_r42) + (__pyx_v_r46 * __pyx_v_y1))) / 840.0) - (((__pyx_v_r16 * __pyx_v_x2) * (__pyx_v_r43 - __pyx_v_r44)) / 840.0)) - ((__pyx_v_r21 * __pyx_v_r7) / 8.0)) - ((__pyx_v_r24 * ((__pyx_v_r38 + (__pyx_v_r45 * __pyx_v_x1)) + (3.0 * __pyx_v_r7))) / 840.0)) - (((__pyx_v_r41 * __pyx_v_r7) * __pyx_v_y2) / 840.0)) - ((__pyx_v_r42 * __pyx_v_r7) / 840.0)) + (((__pyx_v_r6 * __pyx_v_y2) * (__pyx_v_r32 + __pyx_v_r8)) / 210.0)) + ((__pyx_v_x0 * (((((((((-__pyx_v_r15) * __pyx_v_r8) + (__pyx_v_r16 * __pyx_v_r25)) + __pyx_v_r18) + (__pyx_v_r21 * __pyx_v_r47)) - (__pyx_v_r24 * __pyx_v_r34)) - (__pyx_v_r26 * __pyx_v_x2)) + (__pyx_v_r35 * __pyx_v_r46)) + __pyx_v_r48)) / 420.0)) - ((__pyx_v_y0 * (((((__pyx_v_r16 * __pyx_v_r2) + (__pyx_v_r30 * __pyx_v_r7)) + (__pyx_v_r35 * __pyx_v_r45)) + __pyx_v_r39) + __pyx_v_r40)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":266 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - * - r16 * x2 * (r43 - r44) / 840 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":288 
- * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r2 * r42 / 420 - * - r22 * r29 / 420 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":310 - * / 420 - * + x1 * y2 * (r43 + r44 + r9 * y1) / 210 - * - y0 * (r19 * r45 + r2 * r53 - r21 * r4 + r48) / 420 # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_3 = PyFloat_FromDouble((((((((((((-__pyx_v_r2) * __pyx_v_r42) / 420.0) - ((__pyx_v_r22 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r24 * ((__pyx_v_r14 + __pyx_v_r36) + (__pyx_v_r52 * __pyx_v_x2))) / 420.0)) - ((__pyx_v_r49 * __pyx_v_x2) / 420.0)) - ((__pyx_v_r50 * __pyx_v_x2) / 12.0)) - ((__pyx_v_r51 * (__pyx_v_r47 + __pyx_v_x2)) / 84.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r19 * __pyx_v_r46) + (__pyx_v_r21 * __pyx_v_r5)) + (__pyx_v_r21 * __pyx_v_r52)) + (__pyx_v_r24 * __pyx_v_r29)) + (__pyx_v_r25 * __pyx_v_r53)) + (__pyx_v_r26 * __pyx_v_y2)) + (__pyx_v_r42 * __pyx_v_y0)) + __pyx_v_r49) + (5.0 * __pyx_v_r50)) + (35.0 * __pyx_v_r51))) / 420.0)) + (((__pyx_v_x1 * __pyx_v_y2) * ((__pyx_v_r43 + __pyx_v_r44) + (__pyx_v_r9 * __pyx_v_y1))) / 210.0)) - ((__pyx_v_y0 * ((((__pyx_v_r19 * __pyx_v_r45) + (__pyx_v_r2 * __pyx_v_r53)) - (__pyx_v_r21 * __pyx_v_r4)) + __pyx_v_r48)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":288 - * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r2 * r42 / 420 - * - r22 * r29 / 420 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_1) < 0) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne[] = "MomentsPen._curveToOne(self, p1, p2, p3)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne = {"_curveToOne", 
(PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - PyObject *__pyx_v_p2 = 0; - PyObject *__pyx_v_p3 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_curveToOne (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,&__pyx_n_s_p3,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 1); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 2); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p3)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 3); __PYX_ERR(0, 450, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_curveToOne") < 0)) __PYX_ERR(0, 450, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - __pyx_v_p2 = values[2]; - __pyx_v_p3 = values[3]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 450, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, 
PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3) { - double __pyx_v_x3; - double __pyx_v_y3; - double __pyx_v_x2; - double __pyx_v_y2; - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r132; - double __pyx_v_r131; - double __pyx_v_r130; - double __pyx_v_r129; - double __pyx_v_r128; - double __pyx_v_r127; - double __pyx_v_r126; - double __pyx_v_r125; - double __pyx_v_r124; - double __pyx_v_r123; - double __pyx_v_r122; - double __pyx_v_r121; - double __pyx_v_r120; - double __pyx_v_r119; - double __pyx_v_r118; - double __pyx_v_r117; - double __pyx_v_r116; - double __pyx_v_r115; - double __pyx_v_r114; - double __pyx_v_r113; - double __pyx_v_r112; - double __pyx_v_r111; - double __pyx_v_r110; - double __pyx_v_r109; - double __pyx_v_r108; - double __pyx_v_r107; - double __pyx_v_r106; - double __pyx_v_r105; - double __pyx_v_r104; - double __pyx_v_r103; - double __pyx_v_r102; - double __pyx_v_r101; - double __pyx_v_r100; - double __pyx_v_r99; - double __pyx_v_r98; - double __pyx_v_r97; - double __pyx_v_r96; - double __pyx_v_r95; - double __pyx_v_r94; - double __pyx_v_r93; - double __pyx_v_r92; - double __pyx_v_r91; - double __pyx_v_r90; - double __pyx_v_r89; - double __pyx_v_r88; - double __pyx_v_r87; - double __pyx_v_r86; - double __pyx_v_r85; - double __pyx_v_r84; - double __pyx_v_r83; - double __pyx_v_r82; - double __pyx_v_r81; - double __pyx_v_r80; - double __pyx_v_r79; - double __pyx_v_r78; - double __pyx_v_r77; - double __pyx_v_r76; - double __pyx_v_r75; - double __pyx_v_r74; - double __pyx_v_r73; - double __pyx_v_r72; - double __pyx_v_r71; - double __pyx_v_r70; - double __pyx_v_r69; - double __pyx_v_r68; - double __pyx_v_r67; - double __pyx_v_r66; - double __pyx_v_r65; - double __pyx_v_r64; - double __pyx_v_r63; - double __pyx_v_r62; - double __pyx_v_r61; - double __pyx_v_r60; - double __pyx_v_r59; - double __pyx_v_r58; - double __pyx_v_r57; - double __pyx_v_r56; - double __pyx_v_r55; - double __pyx_v_r54; - double __pyx_v_r53; - double __pyx_v_r52; - double __pyx_v_r51; - double __pyx_v_r50; - double __pyx_v_r49; - double __pyx_v_r48; - double __pyx_v_r47; - double __pyx_v_r46; - double __pyx_v_r45; - double __pyx_v_r44; - double __pyx_v_r43; - double __pyx_v_r42; - double __pyx_v_r41; - double __pyx_v_r40; - double __pyx_v_r39; - double __pyx_v_r38; - double __pyx_v_r37; - double __pyx_v_r36; - double __pyx_v_r35; - double __pyx_v_r34; - double __pyx_v_r33; - double __pyx_v_r32; - double __pyx_v_r31; - double __pyx_v_r30; - double __pyx_v_r29; - double __pyx_v_r28; - double __pyx_v_r27; - double __pyx_v_r26; - double __pyx_v_r25; - double __pyx_v_r24; - double __pyx_v_r23; - double __pyx_v_r22; - double __pyx_v_r21; - double __pyx_v_r20; - double __pyx_v_r19; - double __pyx_v_r18; - double __pyx_v_r17; - double __pyx_v_r16; - double __pyx_v_r15; - double __pyx_v_r14; - double __pyx_v_r13; - double __pyx_v_r12; - double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("_curveToOne", 0); - - /* "fontTools/pens/momentsPen.py":451 - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * x2, y2 = p2 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 451, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 451, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 451, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":452 - * def _curveToOne(self, p1, p2, p3): - * x0, y0 = 
self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * x2, y2 = p2 - * x3, y3 = p3 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 452, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 452, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 452, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":453 - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - * x2, y2 = p2 # <<<<<<<<<<<<<< - * x3, y3 = p3 - * - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) { - PyObject* sequence = __pyx_v_p2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 453, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else 
{ - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 453, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 453, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_x2 = __pyx_t_6; - __pyx_v_y2 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":454 - * x1, y1 = p1 - * x2, y2 = p2 - * x3, y3 = p3 # <<<<<<<<<<<<<< - * - * r0 = 6 * y2 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p3))) || (PyList_CheckExact(__pyx_v_p3))) { - PyObject* sequence = __pyx_v_p3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 454, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 454, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 454, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = 
__pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x3 = __pyx_t_7; - __pyx_v_y3 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":456 - * x3, y3 = p3 - * - * r0 = 6 * y2 # <<<<<<<<<<<<<< - * r1 = r0 * x3 - * r2 = 10 * y3 - */ - __pyx_v_r0 = (6.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":457 - * - * r0 = 6 * y2 - * r1 = r0 * x3 # <<<<<<<<<<<<<< - * r2 = 10 * y3 - * r3 = r2 * x3 - */ - __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":458 - * r0 = 6 * y2 - * r1 = r0 * x3 - * r2 = 10 * y3 # <<<<<<<<<<<<<< - * r3 = r2 * x3 - * r4 = 3 * y1 - */ - __pyx_v_r2 = (10.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":459 - * r1 = r0 * x3 - * r2 = 10 * y3 - * r3 = r2 * x3 # <<<<<<<<<<<<<< - * r4 = 3 * y1 - * r5 = 6 * x1 - */ - __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":460 - * r2 = 10 * y3 - * r3 = r2 * x3 - * r4 = 3 * y1 # <<<<<<<<<<<<<< - * r5 = 6 * x1 - * r6 = 3 * x2 - */ - __pyx_v_r4 = (3.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":461 - * r3 = r2 * x3 - * r4 = 3 * y1 - * r5 = 6 * x1 # <<<<<<<<<<<<<< - * r6 = 3 * x2 - * r7 = 6 * y1 - */ - __pyx_v_r5 = (6.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":462 - * r4 = 3 * y1 - * r5 = 6 * x1 - * r6 = 3 * x2 # <<<<<<<<<<<<<< - * r7 = 6 * y1 - * r8 = 3 * y2 - */ - __pyx_v_r6 = (3.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":463 - * r5 = 6 * x1 - * r6 = 3 * x2 - * r7 = 6 * y1 # <<<<<<<<<<<<<< - * r8 = 3 * y2 - * r9 = x2**2 - */ - __pyx_v_r7 = (6.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":464 - * r6 = 3 * x2 - * r7 = 6 * y1 - * r8 = 3 * y2 # <<<<<<<<<<<<<< - * r9 = x2**2 - * r10 = 45 * r9 - */ - __pyx_v_r8 = (3.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":465 - * r7 = 6 * y1 - * r8 = 3 * y2 - * r9 = x2**2 # <<<<<<<<<<<<<< - * r10 = 45 * r9 - * r11 = r10 * y3 - */ - __pyx_v_r9 = pow(__pyx_v_x2, 2.0); - - /* "fontTools/pens/momentsPen.py":466 - * r8 = 3 * y2 - * r9 = x2**2 - * r10 = 45 * r9 # <<<<<<<<<<<<<< - * r11 = r10 * y3 - * r12 = x3**2 - */ - __pyx_v_r10 = (45.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":467 - * r9 = x2**2 - * r10 = 45 * r9 - * r11 = r10 * y3 # <<<<<<<<<<<<<< - * r12 = x3**2 - * r13 = r12 * y2 - */ - __pyx_v_r11 = (__pyx_v_r10 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":468 - * r10 = 45 * r9 - * r11 = r10 * y3 - * r12 = x3**2 # <<<<<<<<<<<<<< - * r13 = r12 * y2 - * r14 = r12 * y3 - */ - __pyx_v_r12 = pow(__pyx_v_x3, 2.0); - - /* "fontTools/pens/momentsPen.py":469 - * r11 = r10 * y3 - * r12 = x3**2 - * r13 = r12 * y2 # <<<<<<<<<<<<<< - * r14 = r12 * y3 - * r15 = 7 * y3 - */ - __pyx_v_r13 = (__pyx_v_r12 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":470 - * r12 = x3**2 - * r13 = r12 * y2 - * r14 = r12 * y3 # <<<<<<<<<<<<<< - * r15 = 7 * y3 - * r16 = 15 * x3 - */ - __pyx_v_r14 = (__pyx_v_r12 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":471 - * r13 = r12 * y2 - * r14 = r12 * y3 - * r15 = 7 * y3 # <<<<<<<<<<<<<< - * r16 = 15 * x3 - * r17 = r16 * x2 - */ - __pyx_v_r15 = (7.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":472 - * r14 = r12 * y3 - * r15 = 7 * y3 - * r16 = 15 * x3 # <<<<<<<<<<<<<< - * r17 = r16 * x2 - * r18 = x1**2 - */ - __pyx_v_r16 = (15.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":473 - * r15 = 7 * y3 - * r16 = 15 * x3 - * r17 = r16 * x2 # <<<<<<<<<<<<<< - * r18 = x1**2 - * r19 = 9 * r18 - */ - __pyx_v_r17 = 
(__pyx_v_r16 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":474 - * r16 = 15 * x3 - * r17 = r16 * x2 - * r18 = x1**2 # <<<<<<<<<<<<<< - * r19 = 9 * r18 - * r20 = x0**2 - */ - __pyx_v_r18 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":475 - * r17 = r16 * x2 - * r18 = x1**2 - * r19 = 9 * r18 # <<<<<<<<<<<<<< - * r20 = x0**2 - * r21 = 21 * y1 - */ - __pyx_v_r19 = (9.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":476 - * r18 = x1**2 - * r19 = 9 * r18 - * r20 = x0**2 # <<<<<<<<<<<<<< - * r21 = 21 * y1 - * r22 = 9 * r9 - */ - __pyx_v_r20 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":477 - * r19 = 9 * r18 - * r20 = x0**2 - * r21 = 21 * y1 # <<<<<<<<<<<<<< - * r22 = 9 * r9 - * r23 = r7 * x3 - */ - __pyx_v_r21 = (21.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":478 - * r20 = x0**2 - * r21 = 21 * y1 - * r22 = 9 * r9 # <<<<<<<<<<<<<< - * r23 = r7 * x3 - * r24 = 9 * y2 - */ - __pyx_v_r22 = (9.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":479 - * r21 = 21 * y1 - * r22 = 9 * r9 - * r23 = r7 * x3 # <<<<<<<<<<<<<< - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 - */ - __pyx_v_r23 = (__pyx_v_r7 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":480 - * r22 = 9 * r9 - * r23 = r7 * x3 - * r24 = 9 * y2 # <<<<<<<<<<<<<< - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 - */ - __pyx_v_r24 = (9.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":481 - * r23 = r7 * x3 - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 # <<<<<<<<<<<<<< - * r26 = 9 * x2 - * r27 = x2 * y3 - */ - __pyx_v_r25 = ((__pyx_v_r24 * __pyx_v_x2) + __pyx_v_r3); - - /* "fontTools/pens/momentsPen.py":482 - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 # <<<<<<<<<<<<<< - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 - */ - __pyx_v_r26 = (9.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":483 - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 - * r27 = x2 * y3 # <<<<<<<<<<<<<< - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 - */ - __pyx_v_r27 = (__pyx_v_x2 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":484 - * r26 = 9 * x2 - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 # <<<<<<<<<<<<<< - * r29 = 3 * x1 - * r30 = 45 * x1 - */ - __pyx_v_r28 = (((-__pyx_v_r26) * __pyx_v_y1) + (15.0 * __pyx_v_r27)); - - /* "fontTools/pens/momentsPen.py":485 - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 # <<<<<<<<<<<<<< - * r30 = 45 * x1 - * r31 = 12 * x3 - */ - __pyx_v_r29 = (3.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":486 - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 - * r30 = 45 * x1 # <<<<<<<<<<<<<< - * r31 = 12 * x3 - * r32 = 45 * r18 - */ - __pyx_v_r30 = (45.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":487 - * r29 = 3 * x1 - * r30 = 45 * x1 - * r31 = 12 * x3 # <<<<<<<<<<<<<< - * r32 = 45 * r18 - * r33 = 5 * r12 - */ - __pyx_v_r31 = (12.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":488 - * r30 = 45 * x1 - * r31 = 12 * x3 - * r32 = 45 * r18 # <<<<<<<<<<<<<< - * r33 = 5 * r12 - * r34 = r8 * x3 - */ - __pyx_v_r32 = (45.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":489 - * r31 = 12 * x3 - * r32 = 45 * r18 - * r33 = 5 * r12 # <<<<<<<<<<<<<< - * r34 = r8 * x3 - * r35 = 105 * y0 - */ - __pyx_v_r33 = (5.0 * __pyx_v_r12); - - /* "fontTools/pens/momentsPen.py":490 - * r32 = 45 * r18 - * r33 = 5 * r12 - * r34 = r8 * x3 # <<<<<<<<<<<<<< - * r35 = 105 * y0 - * r36 = 30 * y0 - */ - __pyx_v_r34 = (__pyx_v_r8 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":491 - * r33 = 5 * r12 - * r34 = r8 * x3 - * r35 = 105 * y0 # <<<<<<<<<<<<<< - * r36 = 30 * y0 - * 
r37 = r36 * x2 - */ - __pyx_v_r35 = (105.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":492 - * r34 = r8 * x3 - * r35 = 105 * y0 - * r36 = 30 * y0 # <<<<<<<<<<<<<< - * r37 = r36 * x2 - * r38 = 5 * x3 - */ - __pyx_v_r36 = (30.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":493 - * r35 = 105 * y0 - * r36 = 30 * y0 - * r37 = r36 * x2 # <<<<<<<<<<<<<< - * r38 = 5 * x3 - * r39 = 15 * y3 - */ - __pyx_v_r37 = (__pyx_v_r36 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":494 - * r36 = 30 * y0 - * r37 = r36 * x2 - * r38 = 5 * x3 # <<<<<<<<<<<<<< - * r39 = 15 * y3 - * r40 = 5 * y3 - */ - __pyx_v_r38 = (5.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":495 - * r37 = r36 * x2 - * r38 = 5 * x3 - * r39 = 15 * y3 # <<<<<<<<<<<<<< - * r40 = 5 * y3 - * r41 = r40 * x3 - */ - __pyx_v_r39 = (15.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":496 - * r38 = 5 * x3 - * r39 = 15 * y3 - * r40 = 5 * y3 # <<<<<<<<<<<<<< - * r41 = r40 * x3 - * r42 = x2 * y2 - */ - __pyx_v_r40 = (5.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":497 - * r39 = 15 * y3 - * r40 = 5 * y3 - * r41 = r40 * x3 # <<<<<<<<<<<<<< - * r42 = x2 * y2 - * r43 = 18 * r42 - */ - __pyx_v_r41 = (__pyx_v_r40 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":498 - * r40 = 5 * y3 - * r41 = r40 * x3 - * r42 = x2 * y2 # <<<<<<<<<<<<<< - * r43 = 18 * r42 - * r44 = 45 * y1 - */ - __pyx_v_r42 = (__pyx_v_x2 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":499 - * r41 = r40 * x3 - * r42 = x2 * y2 - * r43 = 18 * r42 # <<<<<<<<<<<<<< - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 - */ - __pyx_v_r43 = (18.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":500 - * r42 = x2 * y2 - * r43 = 18 * r42 - * r44 = 45 * y1 # <<<<<<<<<<<<<< - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 - */ - __pyx_v_r44 = (45.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":501 - * r43 = 18 * r42 - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 # <<<<<<<<<<<<<< - * r46 = y2 * y3 - * r47 = r46 * x3 - */ - __pyx_v_r45 = ((__pyx_v_r41 + __pyx_v_r43) + (__pyx_v_r44 * __pyx_v_x1)); - - /* "fontTools/pens/momentsPen.py":502 - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 # <<<<<<<<<<<<<< - * r47 = r46 * x3 - * r48 = y2**2 - */ - __pyx_v_r46 = (__pyx_v_y2 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":503 - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 - * r47 = r46 * x3 # <<<<<<<<<<<<<< - * r48 = y2**2 - * r49 = 45 * r48 - */ - __pyx_v_r47 = (__pyx_v_r46 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":504 - * r46 = y2 * y3 - * r47 = r46 * x3 - * r48 = y2**2 # <<<<<<<<<<<<<< - * r49 = 45 * r48 - * r50 = r49 * x3 - */ - __pyx_v_r48 = pow(__pyx_v_y2, 2.0); - - /* "fontTools/pens/momentsPen.py":505 - * r47 = r46 * x3 - * r48 = y2**2 - * r49 = 45 * r48 # <<<<<<<<<<<<<< - * r50 = r49 * x3 - * r51 = y3**2 - */ - __pyx_v_r49 = (45.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":506 - * r48 = y2**2 - * r49 = 45 * r48 - * r50 = r49 * x3 # <<<<<<<<<<<<<< - * r51 = y3**2 - * r52 = r51 * x3 - */ - __pyx_v_r50 = (__pyx_v_r49 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":507 - * r49 = 45 * r48 - * r50 = r49 * x3 - * r51 = y3**2 # <<<<<<<<<<<<<< - * r52 = r51 * x3 - * r53 = y1**2 - */ - __pyx_v_r51 = pow(__pyx_v_y3, 2.0); - - /* "fontTools/pens/momentsPen.py":508 - * r50 = r49 * x3 - * r51 = y3**2 - * r52 = r51 * x3 # <<<<<<<<<<<<<< - * r53 = y1**2 - * r54 = 9 * r53 - */ - __pyx_v_r52 = (__pyx_v_r51 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":509 - * r51 = y3**2 - * r52 = r51 * x3 - * r53 = y1**2 # 
<<<<<<<<<<<<<< - * r54 = 9 * r53 - * r55 = y0**2 - */ - __pyx_v_r53 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":510 - * r52 = r51 * x3 - * r53 = y1**2 - * r54 = 9 * r53 # <<<<<<<<<<<<<< - * r55 = y0**2 - * r56 = 21 * x1 - */ - __pyx_v_r54 = (9.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":511 - * r53 = y1**2 - * r54 = 9 * r53 - * r55 = y0**2 # <<<<<<<<<<<<<< - * r56 = 21 * x1 - * r57 = 6 * x2 - */ - __pyx_v_r55 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":512 - * r54 = 9 * r53 - * r55 = y0**2 - * r56 = 21 * x1 # <<<<<<<<<<<<<< - * r57 = 6 * x2 - * r58 = r16 * y2 - */ - __pyx_v_r56 = (21.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":513 - * r55 = y0**2 - * r56 = 21 * x1 - * r57 = 6 * x2 # <<<<<<<<<<<<<< - * r58 = r16 * y2 - * r59 = r39 * y2 - */ - __pyx_v_r57 = (6.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":514 - * r56 = 21 * x1 - * r57 = 6 * x2 - * r58 = r16 * y2 # <<<<<<<<<<<<<< - * r59 = r39 * y2 - * r60 = 9 * r48 - */ - __pyx_v_r58 = (__pyx_v_r16 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":515 - * r57 = 6 * x2 - * r58 = r16 * y2 - * r59 = r39 * y2 # <<<<<<<<<<<<<< - * r60 = 9 * r48 - * r61 = r6 * y3 - */ - __pyx_v_r59 = (__pyx_v_r39 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":516 - * r58 = r16 * y2 - * r59 = r39 * y2 - * r60 = 9 * r48 # <<<<<<<<<<<<<< - * r61 = r6 * y3 - * r62 = 3 * y3 - */ - __pyx_v_r60 = (9.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":517 - * r59 = r39 * y2 - * r60 = 9 * r48 - * r61 = r6 * y3 # <<<<<<<<<<<<<< - * r62 = 3 * y3 - * r63 = r36 * y2 - */ - __pyx_v_r61 = (__pyx_v_r6 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":518 - * r60 = 9 * r48 - * r61 = r6 * y3 - * r62 = 3 * y3 # <<<<<<<<<<<<<< - * r63 = r36 * y2 - * r64 = y1 * y3 - */ - __pyx_v_r62 = (3.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":519 - * r61 = r6 * y3 - * r62 = 3 * y3 - * r63 = r36 * y2 # <<<<<<<<<<<<<< - * r64 = y1 * y3 - * r65 = 45 * r53 - */ - __pyx_v_r63 = (__pyx_v_r36 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":520 - * r62 = 3 * y3 - * r63 = r36 * y2 - * r64 = y1 * y3 # <<<<<<<<<<<<<< - * r65 = 45 * r53 - * r66 = 5 * r51 - */ - __pyx_v_r64 = (__pyx_v_y1 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":521 - * r63 = r36 * y2 - * r64 = y1 * y3 - * r65 = 45 * r53 # <<<<<<<<<<<<<< - * r66 = 5 * r51 - * r67 = x2**3 - */ - __pyx_v_r65 = (45.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":522 - * r64 = y1 * y3 - * r65 = 45 * r53 - * r66 = 5 * r51 # <<<<<<<<<<<<<< - * r67 = x2**3 - * r68 = x3**3 - */ - __pyx_v_r66 = (5.0 * __pyx_v_r51); - - /* "fontTools/pens/momentsPen.py":523 - * r65 = 45 * r53 - * r66 = 5 * r51 - * r67 = x2**3 # <<<<<<<<<<<<<< - * r68 = x3**3 - * r69 = 630 * y2 - */ - __pyx_v_r67 = pow(__pyx_v_x2, 3.0); - - /* "fontTools/pens/momentsPen.py":524 - * r66 = 5 * r51 - * r67 = x2**3 - * r68 = x3**3 # <<<<<<<<<<<<<< - * r69 = 630 * y2 - * r70 = 126 * x3 - */ - __pyx_v_r68 = pow(__pyx_v_x3, 3.0); - - /* "fontTools/pens/momentsPen.py":525 - * r67 = x2**3 - * r68 = x3**3 - * r69 = 630 * y2 # <<<<<<<<<<<<<< - * r70 = 126 * x3 - * r71 = x1**3 - */ - __pyx_v_r69 = (630.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":526 - * r68 = x3**3 - * r69 = 630 * y2 - * r70 = 126 * x3 # <<<<<<<<<<<<<< - * r71 = x1**3 - * r72 = 126 * x2 - */ - __pyx_v_r70 = (126.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":527 - * r69 = 630 * y2 - * r70 = 126 * x3 - * r71 = x1**3 # <<<<<<<<<<<<<< - * r72 = 126 * x2 - * r73 = 63 * r9 - */ - __pyx_v_r71 = pow(__pyx_v_x1, 3.0); - 
- /* "fontTools/pens/momentsPen.py":528 - * r70 = 126 * x3 - * r71 = x1**3 - * r72 = 126 * x2 # <<<<<<<<<<<<<< - * r73 = 63 * r9 - * r74 = r73 * x3 - */ - __pyx_v_r72 = (126.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":529 - * r71 = x1**3 - * r72 = 126 * x2 - * r73 = 63 * r9 # <<<<<<<<<<<<<< - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 - */ - __pyx_v_r73 = (63.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":530 - * r72 = 126 * x2 - * r73 = 63 * r9 - * r74 = r73 * x3 # <<<<<<<<<<<<<< - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 - */ - __pyx_v_r74 = (__pyx_v_r73 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":531 - * r73 = 63 * r9 - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 # <<<<<<<<<<<<<< - * r76 = 630 * x1 - * r77 = 14 * x3 - */ - __pyx_v_r75 = ((__pyx_v_r15 * __pyx_v_x3) + (15.0 * __pyx_v_r42)); - - /* "fontTools/pens/momentsPen.py":532 - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 # <<<<<<<<<<<<<< - * r77 = 14 * x3 - * r78 = 21 * r27 - */ - __pyx_v_r76 = (630.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":533 - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 - * r77 = 14 * x3 # <<<<<<<<<<<<<< - * r78 = 21 * r27 - * r79 = 42 * x1 - */ - __pyx_v_r77 = (14.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":534 - * r76 = 630 * x1 - * r77 = 14 * x3 - * r78 = 21 * r27 # <<<<<<<<<<<<<< - * r79 = 42 * x1 - * r80 = 42 * x2 - */ - __pyx_v_r78 = (21.0 * __pyx_v_r27); - - /* "fontTools/pens/momentsPen.py":535 - * r77 = 14 * x3 - * r78 = 21 * r27 - * r79 = 42 * x1 # <<<<<<<<<<<<<< - * r80 = 42 * x2 - * r81 = x1 * y2 - */ - __pyx_v_r79 = (42.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":536 - * r78 = 21 * r27 - * r79 = 42 * x1 - * r80 = 42 * x2 # <<<<<<<<<<<<<< - * r81 = x1 * y2 - * r82 = 63 * r42 - */ - __pyx_v_r80 = (42.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":537 - * r79 = 42 * x1 - * r80 = 42 * x2 - * r81 = x1 * y2 # <<<<<<<<<<<<<< - * r82 = 63 * r42 - * r83 = x1 * y1 - */ - __pyx_v_r81 = (__pyx_v_x1 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":538 - * r80 = 42 * x2 - * r81 = x1 * y2 - * r82 = 63 * r42 # <<<<<<<<<<<<<< - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 - */ - __pyx_v_r82 = (63.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":539 - * r81 = x1 * y2 - * r82 = 63 * r42 - * r83 = x1 * y1 # <<<<<<<<<<<<<< - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 - */ - __pyx_v_r83 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":540 - * r82 = 63 * r42 - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 # <<<<<<<<<<<<<< - * r85 = x2 * x3 - * r86 = r85 * y1 - */ - __pyx_v_r84 = ((__pyx_v_r41 + __pyx_v_r82) + (378.0 * __pyx_v_r83)); - - /* "fontTools/pens/momentsPen.py":541 - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 # <<<<<<<<<<<<<< - * r86 = r85 * y1 - * r87 = r27 * x3 - */ - __pyx_v_r85 = (__pyx_v_x2 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":542 - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 - * r86 = r85 * y1 # <<<<<<<<<<<<<< - * r87 = r27 * x3 - * r88 = 27 * r9 - */ - __pyx_v_r86 = (__pyx_v_r85 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":543 - * r85 = x2 * x3 - * r86 = r85 * y1 - * r87 = r27 * x3 # <<<<<<<<<<<<<< - * r88 = 27 * r9 - * r89 = r88 * y2 - */ - __pyx_v_r87 = (__pyx_v_r27 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":544 - * r86 = r85 * y1 - * r87 = r27 * x3 - * r88 = 27 * r9 # <<<<<<<<<<<<<< - * r89 = r88 * y2 - * r90 = 42 * r14 - */ - __pyx_v_r88 = (27.0 * __pyx_v_r9); - - /* 
"fontTools/pens/momentsPen.py":545 - * r87 = r27 * x3 - * r88 = 27 * r9 - * r89 = r88 * y2 # <<<<<<<<<<<<<< - * r90 = 42 * r14 - * r91 = 90 * x1 - */ - __pyx_v_r89 = (__pyx_v_r88 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":546 - * r88 = 27 * r9 - * r89 = r88 * y2 - * r90 = 42 * r14 # <<<<<<<<<<<<<< - * r91 = 90 * x1 - * r92 = 189 * r18 - */ - __pyx_v_r90 = (42.0 * __pyx_v_r14); - - /* "fontTools/pens/momentsPen.py":547 - * r89 = r88 * y2 - * r90 = 42 * r14 - * r91 = 90 * x1 # <<<<<<<<<<<<<< - * r92 = 189 * r18 - * r93 = 378 * r18 - */ - __pyx_v_r91 = (90.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":548 - * r90 = 42 * r14 - * r91 = 90 * x1 - * r92 = 189 * r18 # <<<<<<<<<<<<<< - * r93 = 378 * r18 - * r94 = r12 * y1 - */ - __pyx_v_r92 = (189.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":549 - * r91 = 90 * x1 - * r92 = 189 * r18 - * r93 = 378 * r18 # <<<<<<<<<<<<<< - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 - */ - __pyx_v_r93 = (378.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":550 - * r92 = 189 * r18 - * r93 = 378 * r18 - * r94 = r12 * y1 # <<<<<<<<<<<<<< - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 - */ - __pyx_v_r94 = (__pyx_v_r12 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":551 - * r93 = 378 * r18 - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 # <<<<<<<<<<<<<< - * r96 = r79 * x3 - * r97 = 30 * r85 - */ - __pyx_v_r95 = ((252.0 * __pyx_v_x1) * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":552 - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 # <<<<<<<<<<<<<< - * r97 = 30 * r85 - * r98 = r83 * x3 - */ - __pyx_v_r96 = (__pyx_v_r79 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":553 - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 - * r97 = 30 * r85 # <<<<<<<<<<<<<< - * r98 = r83 * x3 - * r99 = 30 * x3 - */ - __pyx_v_r97 = (30.0 * __pyx_v_r85); - - /* "fontTools/pens/momentsPen.py":554 - * r96 = r79 * x3 - * r97 = 30 * r85 - * r98 = r83 * x3 # <<<<<<<<<<<<<< - * r99 = 30 * x3 - * r100 = 42 * x3 - */ - __pyx_v_r98 = (__pyx_v_r83 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":555 - * r97 = 30 * r85 - * r98 = r83 * x3 - * r99 = 30 * x3 # <<<<<<<<<<<<<< - * r100 = 42 * x3 - * r101 = r42 * x1 - */ - __pyx_v_r99 = (30.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":556 - * r98 = r83 * x3 - * r99 = 30 * x3 - * r100 = 42 * x3 # <<<<<<<<<<<<<< - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - */ - __pyx_v_r100 = (42.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":557 - * r99 = 30 * x3 - * r100 = 42 * x3 - * r101 = r42 * x1 # <<<<<<<<<<<<<< - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 - */ - __pyx_v_r101 = (__pyx_v_r42 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":558 - * r100 = 42 * x3 - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 # <<<<<<<<<<<<<< - * r103 = 378 * r48 - * r104 = 18 * y1 - */ - __pyx_v_r102 = ((((__pyx_v_r10 * __pyx_v_y2) + (14.0 * __pyx_v_r14)) + ((126.0 * __pyx_v_r18) * __pyx_v_y1)) + (__pyx_v_r81 * __pyx_v_r99)); - - /* "fontTools/pens/momentsPen.py":559 - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 # <<<<<<<<<<<<<< - * r104 = 18 * y1 - * r105 = r104 * y2 - */ - __pyx_v_r103 = (378.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":560 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 - * r104 = 18 * y1 # <<<<<<<<<<<<<< - * r105 = r104 * y2 - * r106 = y0 * y1 - */ - __pyx_v_r104 = (18.0 * __pyx_v_y1); - - /* 
"fontTools/pens/momentsPen.py":561 - * r103 = 378 * r48 - * r104 = 18 * y1 - * r105 = r104 * y2 # <<<<<<<<<<<<<< - * r106 = y0 * y1 - * r107 = 252 * y2 - */ - __pyx_v_r105 = (__pyx_v_r104 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":562 - * r104 = 18 * y1 - * r105 = r104 * y2 - * r106 = y0 * y1 # <<<<<<<<<<<<<< - * r107 = 252 * y2 - * r108 = r107 * y0 - */ - __pyx_v_r106 = (__pyx_v_y0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":563 - * r105 = r104 * y2 - * r106 = y0 * y1 - * r107 = 252 * y2 # <<<<<<<<<<<<<< - * r108 = r107 * y0 - * r109 = y0 * y3 - */ - __pyx_v_r107 = (252.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":564 - * r106 = y0 * y1 - * r107 = 252 * y2 - * r108 = r107 * y0 # <<<<<<<<<<<<<< - * r109 = y0 * y3 - * r110 = 42 * r64 - */ - __pyx_v_r108 = (__pyx_v_r107 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":565 - * r107 = 252 * y2 - * r108 = r107 * y0 - * r109 = y0 * y3 # <<<<<<<<<<<<<< - * r110 = 42 * r64 - * r111 = 378 * r53 - */ - __pyx_v_r109 = (__pyx_v_y0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":566 - * r108 = r107 * y0 - * r109 = y0 * y3 - * r110 = 42 * r64 # <<<<<<<<<<<<<< - * r111 = 378 * r53 - * r112 = 63 * r48 - */ - __pyx_v_r110 = (42.0 * __pyx_v_r64); - - /* "fontTools/pens/momentsPen.py":567 - * r109 = y0 * y3 - * r110 = 42 * r64 - * r111 = 378 * r53 # <<<<<<<<<<<<<< - * r112 = 63 * r48 - * r113 = 27 * x2 - */ - __pyx_v_r111 = (378.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":568 - * r110 = 42 * r64 - * r111 = 378 * r53 - * r112 = 63 * r48 # <<<<<<<<<<<<<< - * r113 = 27 * x2 - * r114 = r27 * y2 - */ - __pyx_v_r112 = (63.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":569 - * r111 = 378 * r53 - * r112 = 63 * r48 - * r113 = 27 * x2 # <<<<<<<<<<<<<< - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 - */ - __pyx_v_r113 = (27.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":570 - * r112 = 63 * r48 - * r113 = 27 * x2 - * r114 = r27 * y2 # <<<<<<<<<<<<<< - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 - */ - __pyx_v_r114 = (__pyx_v_r27 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":571 - * r113 = 27 * x2 - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 # <<<<<<<<<<<<<< - * r116 = x3 * y3 - * r117 = 54 * r42 - */ - __pyx_v_r115 = ((__pyx_v_r113 * __pyx_v_r48) + (42.0 * __pyx_v_r52)); - - /* "fontTools/pens/momentsPen.py":572 - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 # <<<<<<<<<<<<<< - * r117 = 54 * r42 - * r118 = r51 * x1 - */ - __pyx_v_r116 = (__pyx_v_x3 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":573 - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 - * r117 = 54 * r42 # <<<<<<<<<<<<<< - * r118 = r51 * x1 - * r119 = r51 * x2 - */ - __pyx_v_r117 = (54.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":574 - * r116 = x3 * y3 - * r117 = 54 * r42 - * r118 = r51 * x1 # <<<<<<<<<<<<<< - * r119 = r51 * x2 - * r120 = r48 * x1 - */ - __pyx_v_r118 = (__pyx_v_r51 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":575 - * r117 = 54 * r42 - * r118 = r51 * x1 - * r119 = r51 * x2 # <<<<<<<<<<<<<< - * r120 = r48 * x1 - * r121 = 21 * x3 - */ - __pyx_v_r119 = (__pyx_v_r51 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":576 - * r118 = r51 * x1 - * r119 = r51 * x2 - * r120 = r48 * x1 # <<<<<<<<<<<<<< - * r121 = 21 * x3 - * r122 = r64 * x1 - */ - __pyx_v_r120 = (__pyx_v_r48 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":577 - * r119 = r51 * x2 - * r120 = r48 * x1 - * r121 = 21 * x3 # <<<<<<<<<<<<<< - * r122 = r64 * x1 - * r123 = r81 * y3 - */ - 
__pyx_v_r121 = (21.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":578 - * r120 = r48 * x1 - * r121 = 21 * x3 - * r122 = r64 * x1 # <<<<<<<<<<<<<< - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - */ - __pyx_v_r122 = (__pyx_v_r64 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":579 - * r121 = 21 * x3 - * r122 = r64 * x1 - * r123 = r81 * y3 # <<<<<<<<<<<<<< - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 - */ - __pyx_v_r123 = (__pyx_v_r81 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":580 - * r122 = r64 * x1 - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 # <<<<<<<<<<<<<< - * r125 = y2**3 - * r126 = y3**3 - */ - __pyx_v_r124 = (((((30.0 * __pyx_v_r27) * __pyx_v_y1) + (__pyx_v_r49 * __pyx_v_x2)) + (14.0 * __pyx_v_r52)) + ((126.0 * __pyx_v_r53) * __pyx_v_x1)); - - /* "fontTools/pens/momentsPen.py":581 - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 # <<<<<<<<<<<<<< - * r126 = y3**3 - * r127 = y1**3 - */ - __pyx_v_r125 = pow(__pyx_v_y2, 3.0); - - /* "fontTools/pens/momentsPen.py":582 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 - * r126 = y3**3 # <<<<<<<<<<<<<< - * r127 = y1**3 - * r128 = y0**3 - */ - __pyx_v_r126 = pow(__pyx_v_y3, 3.0); - - /* "fontTools/pens/momentsPen.py":583 - * r125 = y2**3 - * r126 = y3**3 - * r127 = y1**3 # <<<<<<<<<<<<<< - * r128 = y0**3 - * r129 = r51 * y2 - */ - __pyx_v_r127 = pow(__pyx_v_y1, 3.0); - - /* "fontTools/pens/momentsPen.py":584 - * r126 = y3**3 - * r127 = y1**3 - * r128 = y0**3 # <<<<<<<<<<<<<< - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 - */ - __pyx_v_r128 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":585 - * r127 = y1**3 - * r128 = y0**3 - * r129 = r51 * y2 # <<<<<<<<<<<<<< - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 - */ - __pyx_v_r129 = (__pyx_v_r51 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":586 - * r128 = y0**3 - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 # <<<<<<<<<<<<<< - * r131 = 189 * r53 - * r132 = 90 * y2 - */ - __pyx_v_r130 = ((__pyx_v_r112 * __pyx_v_y3) + (__pyx_v_r21 * __pyx_v_r51)); - - /* "fontTools/pens/momentsPen.py":587 - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 # <<<<<<<<<<<<<< - * r132 = 90 * y2 - * - */ - __pyx_v_r131 = (189.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":588 - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 - * r132 = 90 * y2 # <<<<<<<<<<<<<< - * - * self.area += ( - */ - __pyx_v_r132 = (90.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":590 - * r132 = 90 * y2 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 20 - * - r3 / 20 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":597 - * + 3 * x1 * (y2 + y3) / 20 - * + 3 * x2 * y3 / 10 - * - y0 * (r5 + r6 + x3) / 20 # <<<<<<<<<<<<<< - * ) - * self.momentX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((-__pyx_v_r1) / 20.0) - (__pyx_v_r3 / 20.0)) - ((__pyx_v_r4 * (__pyx_v_x2 + __pyx_v_x3)) / 20.0)) + ((__pyx_v_x0 * (((__pyx_v_r7 + __pyx_v_r8) + (10.0 * __pyx_v_y0)) + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x1) * (__pyx_v_y2 + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x2) * __pyx_v_y3) / 10.0)) - ((__pyx_v_y0 * ((__pyx_v_r5 + __pyx_v_r6) + __pyx_v_x3)) / 20.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 597, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":590 - * r132 = 90 * y2 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 20 - * - r3 / 20 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":599 - * - y0 * (r5 + r6 + x3) / 20 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * r11 / 840 - * - r13 / 8 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":621 - * ) - * / 840 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 # <<<<<<<<<<<<<< - * ) - * self.momentY += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((__pyx_v_r11 / 840.0) - (__pyx_v_r13 / 8.0)) - (__pyx_v_r14 / 3.0)) - ((__pyx_v_r17 * ((-__pyx_v_r15) + __pyx_v_r8)) / 840.0)) + ((__pyx_v_r19 * (__pyx_v_r8 + (2.0 * __pyx_v_y3))) / 840.0)) + ((__pyx_v_r20 * (((__pyx_v_r0 + __pyx_v_r21) + (56.0 * __pyx_v_y0)) + __pyx_v_y3)) / 168.0)) + ((__pyx_v_r29 * (((-__pyx_v_r23) + __pyx_v_r25) + __pyx_v_r28)) / 840.0)) - ((__pyx_v_r4 * (((10.0 * __pyx_v_r12) + __pyx_v_r17) + __pyx_v_r22)) / 840.0)) + ((__pyx_v_x0 * (((((((((12.0 * __pyx_v_r27) + (__pyx_v_r30 * __pyx_v_y2)) + __pyx_v_r34) - (__pyx_v_r35 * __pyx_v_x1)) - __pyx_v_r37) - (__pyx_v_r38 * __pyx_v_y0)) + (__pyx_v_r39 * __pyx_v_x1)) - (__pyx_v_r4 * __pyx_v_x3)) + __pyx_v_r45)) / 840.0)) - ((__pyx_v_y0 * (((((__pyx_v_r17 + (__pyx_v_r30 * __pyx_v_x2)) + (__pyx_v_r31 * __pyx_v_x1)) + __pyx_v_r32) + __pyx_v_r33) + (18.0 * __pyx_v_r9))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":599 - * - y0 * (r5 + r6 + x3) / 20 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * r11 / 840 - * - r13 / 8 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":623 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r4 * (r25 + r58) / 840 - * - r47 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":646 - * + x1 * (r24 * y1 + 10 * r51 + r59 + r60 + r7 * y3) / 280 - * + x2 * y3 * (r15 + r8) / 56 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((((-__pyx_v_r4) * (__pyx_v_r25 + __pyx_v_r58)) / 840.0) - (__pyx_v_r47 / 8.0)) - (__pyx_v_r50 / 840.0)) - (__pyx_v_r52 / 6.0)) - ((__pyx_v_r54 * (__pyx_v_r6 + (2.0 * __pyx_v_x3))) / 840.0)) - ((__pyx_v_r55 * ((__pyx_v_r56 + __pyx_v_r57) + __pyx_v_x3)) / 168.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r35 * __pyx_v_y1) + 
(__pyx_v_r40 * __pyx_v_y0)) + (__pyx_v_r44 * __pyx_v_y2)) + (18.0 * __pyx_v_r48)) + (140.0 * __pyx_v_r55)) + __pyx_v_r59) + __pyx_v_r63) + (12.0 * __pyx_v_r64)) + __pyx_v_r65) + __pyx_v_r66)) / 840.0)) + ((__pyx_v_x1 * (((((__pyx_v_r24 * __pyx_v_y1) + (10.0 * __pyx_v_r51)) + __pyx_v_r59) + __pyx_v_r60) + (__pyx_v_r7 * __pyx_v_y3))) / 280.0)) + (((__pyx_v_x2 * __pyx_v_y3) * (__pyx_v_r15 + __pyx_v_r8)) / 56.0)) - ((__pyx_v_y0 * ((((((__pyx_v_r16 * __pyx_v_y1) + (__pyx_v_r31 * __pyx_v_y2)) + (__pyx_v_r44 * __pyx_v_x2)) + __pyx_v_r45) + __pyx_v_r61) - (__pyx_v_r62 * __pyx_v_x1))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 646, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":623 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r4 * (r25 + r58) / 840 - * - r47 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":648 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r12 * r72 * (-r40 + r8) / 9240 - * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":706 - * ) - * / 9240 - * - y0 # <<<<<<<<<<<<<< - * * ( - * r12 * r56 - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((((((((-__pyx_v_r12) * __pyx_v_r72) * ((-__pyx_v_r40) + __pyx_v_r8)) / 9240.0) + (((3.0 * __pyx_v_r18) * (((__pyx_v_r28 + __pyx_v_r34) - (__pyx_v_r38 * __pyx_v_y1)) + __pyx_v_r75)) / 3080.0)) + ((__pyx_v_r20 * (((((((((__pyx_v_r24 * __pyx_v_x3) - (__pyx_v_r72 * __pyx_v_y0)) - (__pyx_v_r76 * __pyx_v_y0)) - (__pyx_v_r77 * __pyx_v_y0)) + __pyx_v_r78) + (__pyx_v_r79 * __pyx_v_y3)) + (__pyx_v_r80 * __pyx_v_y1)) + (210.0 * __pyx_v_r81)) + __pyx_v_r84)) / 9240.0)) - ((__pyx_v_r29 * ((((((((__pyx_v_r12 * __pyx_v_r21) + (14.0 * __pyx_v_r13)) + (__pyx_v_r44 * __pyx_v_r9)) - (__pyx_v_r73 * __pyx_v_y3)) + (54.0 * __pyx_v_r86)) - (84.0 * __pyx_v_r87)) - __pyx_v_r89) - __pyx_v_r90)) / 9240.0)) - ((__pyx_v_r4 * (((((70.0 * __pyx_v_r12) * __pyx_v_x2) + (27.0 * __pyx_v_r67)) + (42.0 * __pyx_v_r68)) + __pyx_v_r74)) / 9240.0)) + (((3.0 * __pyx_v_r67) * __pyx_v_y3) / 220.0)) - ((__pyx_v_r68 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r68 * __pyx_v_y3) / 4.0)) - (((__pyx_v_r70 * __pyx_v_r9) * ((-__pyx_v_r62) + __pyx_v_y2)) / 9240.0)) + (((3.0 * __pyx_v_r71) * (__pyx_v_r24 + __pyx_v_r40)) / 3080.0)) + ((pow(__pyx_v_x0, 3.0) * (((__pyx_v_r24 + __pyx_v_r44) + (165.0 * __pyx_v_y0)) + __pyx_v_y3)) / 660.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r100 * __pyx_v_r27) + (162.0 * __pyx_v_r101)) + __pyx_v_r102) + __pyx_v_r11) + ((63.0 * __pyx_v_r18) * __pyx_v_y3)) + (__pyx_v_r27 * __pyx_v_r91)) - (__pyx_v_r33 * __pyx_v_y0)) - (__pyx_v_r37 * __pyx_v_x3)) + (__pyx_v_r43 * __pyx_v_x3)) - (__pyx_v_r73 * __pyx_v_y0)) - (__pyx_v_r88 * __pyx_v_y1)) + (__pyx_v_r92 * __pyx_v_y2)) - (__pyx_v_r93 * __pyx_v_y0)) - (9.0 * __pyx_v_r94)) - (__pyx_v_r95 * __pyx_v_y0)) - (__pyx_v_r96 * __pyx_v_y0)) - (__pyx_v_r97 * __pyx_v_y1)) - (18.0 
* __pyx_v_r98)) + ((__pyx_v_r99 * __pyx_v_x1) * __pyx_v_y3))) / 9240.0)) - ((__pyx_v_y0 * ((((((((((__pyx_v_r12 * __pyx_v_r56) + (__pyx_v_r12 * __pyx_v_r80)) + (__pyx_v_r32 * __pyx_v_x3)) + (45.0 * __pyx_v_r67)) + (14.0 * __pyx_v_r68)) + (126.0 * __pyx_v_r71)) + __pyx_v_r74) + (__pyx_v_r85 * __pyx_v_r91)) + ((135.0 * __pyx_v_r9) * __pyx_v_x1)) + (__pyx_v_r92 * __pyx_v_x2))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 706, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":648 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r12 * r72 * (-r40 + r8) / 9240 - * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":721 - * / 9240 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r103 * r12 / 18480 - * - r12 * r51 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":783 - * ) - * / 3080 - * - y0 # <<<<<<<<<<<<<< - * * ( - * 54 * r101 - */ - __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r12) / 18480.0) - ((__pyx_v_r12 * __pyx_v_r51) / 8.0)) - (((3.0 * __pyx_v_r14) * __pyx_v_y2) / 44.0)) + (((3.0 * __pyx_v_r18) * ((((__pyx_v_r105 + (__pyx_v_r2 * __pyx_v_y1)) + (18.0 * __pyx_v_r46)) + (15.0 * __pyx_v_r48)) + (7.0 * __pyx_v_r51))) / 6160.0)) + ((__pyx_v_r20 * ((((((((((1260.0 * __pyx_v_r106) + (__pyx_v_r107 * __pyx_v_y1)) + __pyx_v_r108) + (28.0 * __pyx_v_r109)) + __pyx_v_r110) + __pyx_v_r111) + __pyx_v_r112) + (30.0 * __pyx_v_r46)) + (2310.0 * __pyx_v_r55)) + __pyx_v_r66)) / 18480.0)) - ((__pyx_v_r54 * (((7.0 * __pyx_v_r12) + (18.0 * __pyx_v_r85)) + (15.0 * __pyx_v_r9))) / 18480.0)) - ((__pyx_v_r55 * (((((__pyx_v_r33 + __pyx_v_r73) + __pyx_v_r93) + __pyx_v_r95) + __pyx_v_r96) + __pyx_v_r97)) / 18480.0)) - ((__pyx_v_r7 * (((((42.0 * __pyx_v_r13) + (__pyx_v_r82 * __pyx_v_x3)) + (28.0 * __pyx_v_r87)) + __pyx_v_r89) + __pyx_v_r90)) / 18480.0)) - (((3.0 * __pyx_v_r85) * (__pyx_v_r48 - __pyx_v_r66)) / 220.0)) + ((((3.0 * __pyx_v_r9) * __pyx_v_y3) * (__pyx_v_r62 + (2.0 * __pyx_v_y2))) / 440.0)) + ((__pyx_v_x0 * (((((((((((((((((((((((-__pyx_v_r1) * __pyx_v_y0) - ((84.0 * __pyx_v_r106) * __pyx_v_x2)) + (__pyx_v_r109 * __pyx_v_r56)) + (54.0 * __pyx_v_r114)) + (__pyx_v_r117 * __pyx_v_y1)) + (15.0 * __pyx_v_r118)) + (21.0 * __pyx_v_r119)) + (81.0 * __pyx_v_r120)) + (__pyx_v_r121 * __pyx_v_r46)) + (54.0 * __pyx_v_r122)) + (60.0 * __pyx_v_r123)) + __pyx_v_r124) - ((__pyx_v_r21 * __pyx_v_x3) * __pyx_v_y0)) + (__pyx_v_r23 * __pyx_v_y3)) - (__pyx_v_r54 * __pyx_v_x3)) - (__pyx_v_r55 * __pyx_v_r72)) - (__pyx_v_r55 * __pyx_v_r76)) - (__pyx_v_r55 * __pyx_v_r77)) + ((__pyx_v_r57 * __pyx_v_y0) * __pyx_v_y3)) + (__pyx_v_r60 * __pyx_v_x3)) + ((84.0 * __pyx_v_r81) * __pyx_v_y0)) + ((189.0 * __pyx_v_r81) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x1 * ((((((((__pyx_v_r104 * __pyx_v_r27) - (__pyx_v_r105 * __pyx_v_x3)) - (__pyx_v_r113 * __pyx_v_r53)) + (63.0 * __pyx_v_r114)) + __pyx_v_r115) - (__pyx_v_r16 * __pyx_v_r53)) + 
(28.0 * __pyx_v_r47)) + (__pyx_v_r51 * __pyx_v_r80))) / 3080.0)) - ((__pyx_v_y0 * (((((((((((((54.0 * __pyx_v_r101) + __pyx_v_r102) + (__pyx_v_r116 * __pyx_v_r5)) + (__pyx_v_r117 * __pyx_v_x3)) + (21.0 * __pyx_v_r13)) - (__pyx_v_r19 * __pyx_v_y3)) + (__pyx_v_r22 * __pyx_v_y3)) + (__pyx_v_r78 * __pyx_v_x3)) + ((189.0 * __pyx_v_r83) * __pyx_v_x2)) + (60.0 * __pyx_v_r86)) + ((81.0 * __pyx_v_r9) * __pyx_v_y1)) + (15.0 * __pyx_v_r94)) + (54.0 * __pyx_v_r98))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 783, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":721 - * / 9240 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r103 * r12 / 18480 - * - r12 * r51 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":801 - * / 9240 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r103 * r116 / 9240 - * - r125 * r70 / 9240 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":849 - * / 3080 - * + 3 * x2 * y3 * (r48 + r66 + r8 * y3) / 220 - * - y0 # <<<<<<<<<<<<<< - * * ( - * r100 * r46 - */ - __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r116) / 9240.0) - ((__pyx_v_r125 * __pyx_v_r70) / 9240.0)) - ((__pyx_v_r126 * __pyx_v_x3) / 12.0)) - (((3.0 * __pyx_v_r127) * (__pyx_v_r26 + __pyx_v_r38)) / 3080.0)) - ((__pyx_v_r128 * ((__pyx_v_r26 + __pyx_v_r30) + __pyx_v_x3)) / 660.0)) - ((__pyx_v_r4 * ((((__pyx_v_r112 * __pyx_v_x3) + __pyx_v_r115) - (14.0 * __pyx_v_r119)) + (84.0 * __pyx_v_r47))) / 9240.0)) - ((__pyx_v_r52 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r54 * ((__pyx_v_r58 + __pyx_v_r61) + __pyx_v_r75)) / 9240.0)) - ((__pyx_v_r55 * ((((((__pyx_v_r100 * __pyx_v_y1) + (__pyx_v_r121 * __pyx_v_y2)) + (__pyx_v_r26 * __pyx_v_y3)) + (__pyx_v_r79 * __pyx_v_y2)) + __pyx_v_r84) + ((210.0 * __pyx_v_x2) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r108 * __pyx_v_y1) + (__pyx_v_r110 * __pyx_v_y0)) + (__pyx_v_r111 * __pyx_v_y0)) + (__pyx_v_r112 * __pyx_v_y0)) + (45.0 * __pyx_v_r125)) + (14.0 * __pyx_v_r126)) + (126.0 * __pyx_v_r127)) + (770.0 * __pyx_v_r128)) + (42.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r131 * __pyx_v_y2)) + (__pyx_v_r132 * __pyx_v_r64)) + ((135.0 * __pyx_v_r48) * __pyx_v_y1)) + ((630.0 * __pyx_v_r55) * __pyx_v_y1)) + ((126.0 * __pyx_v_r55) * __pyx_v_y2)) + ((14.0 * __pyx_v_r55) * __pyx_v_y3)) + (__pyx_v_r63 * __pyx_v_y3)) + (__pyx_v_r65 * __pyx_v_y3)) + (__pyx_v_r66 * __pyx_v_y0))) / 9240.0)) + ((__pyx_v_x1 * ((((((((27.0 * __pyx_v_r125) + (42.0 * __pyx_v_r126)) + (70.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r39 * __pyx_v_r53)) + (__pyx_v_r44 * __pyx_v_r48)) + ((27.0 * __pyx_v_r53) * __pyx_v_y2)) + ((54.0 * __pyx_v_r64) * __pyx_v_y2))) / 3080.0)) + ((((3.0 * __pyx_v_x2) * __pyx_v_y3) * ((__pyx_v_r48 + __pyx_v_r66) + (__pyx_v_r8 * __pyx_v_y3))) / 220.0)) - ((__pyx_v_y0 * (((((((((((((__pyx_v_r100 * __pyx_v_r46) + (18.0 * __pyx_v_r114)) - (9.0 * __pyx_v_r118)) - (27.0 * __pyx_v_r120)) - (18.0 * __pyx_v_r122)) - (30.0 * __pyx_v_r123)) + __pyx_v_r124) + (__pyx_v_r131 * __pyx_v_x2)) + 
((__pyx_v_r132 * __pyx_v_x3) * __pyx_v_y1)) + ((162.0 * __pyx_v_r42) * __pyx_v_y1)) + __pyx_v_r50) + ((63.0 * __pyx_v_r53) * __pyx_v_x3)) + (__pyx_v_r64 * __pyx_v_r99))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 849, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":801 - * / 9240 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r103 * r116 / 9240 - * - r125 * r70 / 9240 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_momentsPen(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_momentsPen}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "momentsPen", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1}, - {&__pyx_n_s_BasePen, __pyx_k_BasePen, sizeof(__pyx_k_BasePen), 0, 0, 1, 1}, - {&__pyx_n_s_COMPILED, __pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 0, 1, 1}, - {&__pyx_kp_u_Green_theorem_is_not_defined_on, __pyx_k_Green_theorem_is_not_defined_on, sizeof(__pyx_k_Green_theorem_is_not_defined_on), 0, 1, 0, 0}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_k_Lib_fontTools_pens_momentsPen_py, sizeof(__pyx_k_Lib_fontTools_pens_momentsPen_py), 0, 0, 1, 0}, - {&__pyx_n_s_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 0, 1, 1}, - {&__pyx_n_u_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 1, 0, 1}, - 
{&__pyx_n_s_MomentsPen___init, __pyx_k_MomentsPen___init, sizeof(__pyx_k_MomentsPen___init), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__closePath, __pyx_k_MomentsPen__closePath, sizeof(__pyx_k_MomentsPen__closePath), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__curveToOne, __pyx_k_MomentsPen__curveToOne, sizeof(__pyx_k_MomentsPen__curveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__endPath, __pyx_k_MomentsPen__endPath, sizeof(__pyx_k_MomentsPen__endPath), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__lineTo, __pyx_k_MomentsPen__lineTo, sizeof(__pyx_k_MomentsPen__lineTo), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__moveTo, __pyx_k_MomentsPen__moveTo, sizeof(__pyx_k_MomentsPen__moveTo), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__qCurveToOne, __pyx_k_MomentsPen__qCurveToOne, sizeof(__pyx_k_MomentsPen__qCurveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__startPoint, __pyx_k_MomentsPen__startPoint, sizeof(__pyx_k_MomentsPen__startPoint), 0, 0, 1, 1}, - {&__pyx_n_s_OpenContourError, __pyx_k_OpenContourError, sizeof(__pyx_k_OpenContourError), 0, 0, 1, 1}, - {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1}, - {&__pyx_n_s_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 0, 1, 1}, - {&__pyx_n_u_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 1, 0, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_closePath, __pyx_k_closePath, sizeof(__pyx_k_closePath), 0, 0, 1, 1}, - {&__pyx_n_s_curveToOne, __pyx_k_curveToOne, sizeof(__pyx_k_curveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_cython, __pyx_k_cython, sizeof(__pyx_k_cython), 0, 0, 1, 1}, - {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {&__pyx_n_s_endPath, __pyx_k_endPath, sizeof(__pyx_k_endPath), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc, __pyx_k_fontTools_misc, sizeof(__pyx_k_fontTools_misc), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_symfont, __pyx_k_fontTools_misc_symfont, sizeof(__pyx_k_fontTools_misc_symfont), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_pens_basePen, __pyx_k_fontTools_pens_basePen, sizeof(__pyx_k_fontTools_pens_basePen), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_pens_momentsPen, __pyx_k_fontTools_pens_momentsPen, sizeof(__pyx_k_fontTools_pens_momentsPen), 0, 0, 1, 1}, - {&__pyx_n_s_getCurrentPoint, __pyx_k_getCurrentPoint, sizeof(__pyx_k_getCurrentPoint), 0, 0, 1, 1}, - {&__pyx_n_s_glyphset, __pyx_k_glyphset, sizeof(__pyx_k_glyphset), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {&__pyx_n_s_lineTo, __pyx_k_lineTo, sizeof(__pyx_k_lineTo), 0, 0, 1, 1}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 1}, - {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1}, - {&__pyx_n_s_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 0, 1, 1}, - {&__pyx_n_u_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 1, 0, 1}, - {&__pyx_n_s_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 0, 1, 1}, - {&__pyx_n_u_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 1, 0, 1}, - {&__pyx_n_s_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 0, 1, 1}, - {&__pyx_n_u_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 1, 0, 1}, - {&__pyx_n_s_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 0, 1, 1}, - {&__pyx_n_u_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 1, 0, 
1}, - {&__pyx_n_s_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 0, 1, 1}, - {&__pyx_n_u_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 1, 0, 1}, - {&__pyx_n_s_moveTo, __pyx_k_moveTo, sizeof(__pyx_k_moveTo), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_p0, __pyx_k_p0, sizeof(__pyx_k_p0), 0, 0, 1, 1}, - {&__pyx_n_s_p1, __pyx_k_p1, sizeof(__pyx_k_p1), 0, 0, 1, 1}, - {&__pyx_n_s_p2, __pyx_k_p2, sizeof(__pyx_k_p2), 0, 0, 1, 1}, - {&__pyx_n_s_p3, __pyx_k_p3, sizeof(__pyx_k_p3), 0, 0, 1, 1}, - {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {&__pyx_n_s_printGreenPen, __pyx_k_printGreenPen, sizeof(__pyx_k_printGreenPen), 0, 0, 1, 1}, - {&__pyx_n_s_qCurveToOne, __pyx_k_qCurveToOne, sizeof(__pyx_k_qCurveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {&__pyx_n_s_r0, __pyx_k_r0, sizeof(__pyx_k_r0), 0, 0, 1, 1}, - {&__pyx_n_s_r1, __pyx_k_r1, sizeof(__pyx_k_r1), 0, 0, 1, 1}, - {&__pyx_n_s_r10, __pyx_k_r10, sizeof(__pyx_k_r10), 0, 0, 1, 1}, - {&__pyx_n_s_r100, __pyx_k_r100, sizeof(__pyx_k_r100), 0, 0, 1, 1}, - {&__pyx_n_s_r101, __pyx_k_r101, sizeof(__pyx_k_r101), 0, 0, 1, 1}, - {&__pyx_n_s_r102, __pyx_k_r102, sizeof(__pyx_k_r102), 0, 0, 1, 1}, - {&__pyx_n_s_r103, __pyx_k_r103, sizeof(__pyx_k_r103), 0, 0, 1, 1}, - {&__pyx_n_s_r104, __pyx_k_r104, sizeof(__pyx_k_r104), 0, 0, 1, 1}, - {&__pyx_n_s_r105, __pyx_k_r105, sizeof(__pyx_k_r105), 0, 0, 1, 1}, - {&__pyx_n_s_r106, __pyx_k_r106, sizeof(__pyx_k_r106), 0, 0, 1, 1}, - {&__pyx_n_s_r107, __pyx_k_r107, sizeof(__pyx_k_r107), 0, 0, 1, 1}, - {&__pyx_n_s_r108, __pyx_k_r108, sizeof(__pyx_k_r108), 0, 0, 1, 1}, - {&__pyx_n_s_r109, __pyx_k_r109, sizeof(__pyx_k_r109), 0, 0, 1, 1}, - {&__pyx_n_s_r11, __pyx_k_r11, sizeof(__pyx_k_r11), 0, 0, 1, 1}, - {&__pyx_n_s_r110, __pyx_k_r110, sizeof(__pyx_k_r110), 0, 0, 1, 1}, - {&__pyx_n_s_r111, __pyx_k_r111, sizeof(__pyx_k_r111), 0, 0, 1, 1}, - {&__pyx_n_s_r112, __pyx_k_r112, sizeof(__pyx_k_r112), 0, 0, 1, 1}, - {&__pyx_n_s_r113, __pyx_k_r113, sizeof(__pyx_k_r113), 0, 0, 1, 1}, - {&__pyx_n_s_r114, __pyx_k_r114, sizeof(__pyx_k_r114), 0, 0, 1, 1}, - {&__pyx_n_s_r115, __pyx_k_r115, sizeof(__pyx_k_r115), 0, 0, 1, 1}, - {&__pyx_n_s_r116, __pyx_k_r116, sizeof(__pyx_k_r116), 0, 0, 1, 1}, - {&__pyx_n_s_r117, __pyx_k_r117, sizeof(__pyx_k_r117), 0, 0, 1, 1}, - {&__pyx_n_s_r118, __pyx_k_r118, sizeof(__pyx_k_r118), 0, 0, 1, 1}, - {&__pyx_n_s_r119, __pyx_k_r119, sizeof(__pyx_k_r119), 0, 0, 1, 1}, - {&__pyx_n_s_r12, __pyx_k_r12, sizeof(__pyx_k_r12), 0, 0, 1, 1}, - {&__pyx_n_s_r120, __pyx_k_r120, sizeof(__pyx_k_r120), 0, 0, 1, 1}, - {&__pyx_n_s_r121, __pyx_k_r121, sizeof(__pyx_k_r121), 0, 0, 1, 1}, - {&__pyx_n_s_r122, __pyx_k_r122, sizeof(__pyx_k_r122), 0, 0, 1, 1}, - {&__pyx_n_s_r123, __pyx_k_r123, sizeof(__pyx_k_r123), 0, 0, 1, 1}, - {&__pyx_n_s_r124, __pyx_k_r124, sizeof(__pyx_k_r124), 0, 0, 1, 1}, - {&__pyx_n_s_r125, __pyx_k_r125, sizeof(__pyx_k_r125), 0, 0, 1, 1}, - {&__pyx_n_s_r126, __pyx_k_r126, sizeof(__pyx_k_r126), 0, 0, 1, 1}, - {&__pyx_n_s_r127, __pyx_k_r127, sizeof(__pyx_k_r127), 0, 0, 1, 1}, - {&__pyx_n_s_r128, __pyx_k_r128, sizeof(__pyx_k_r128), 0, 0, 1, 1}, - {&__pyx_n_s_r129, __pyx_k_r129, sizeof(__pyx_k_r129), 0, 0, 1, 1}, - {&__pyx_n_s_r13, __pyx_k_r13, sizeof(__pyx_k_r13), 0, 0, 1, 1}, - {&__pyx_n_s_r130, __pyx_k_r130, sizeof(__pyx_k_r130), 0, 0, 1, 1}, - {&__pyx_n_s_r131, __pyx_k_r131, sizeof(__pyx_k_r131), 0, 0, 1, 1}, - {&__pyx_n_s_r132, __pyx_k_r132, 
sizeof(__pyx_k_r132), 0, 0, 1, 1}, - {&__pyx_n_s_r14, __pyx_k_r14, sizeof(__pyx_k_r14), 0, 0, 1, 1}, - {&__pyx_n_s_r15, __pyx_k_r15, sizeof(__pyx_k_r15), 0, 0, 1, 1}, - {&__pyx_n_s_r16, __pyx_k_r16, sizeof(__pyx_k_r16), 0, 0, 1, 1}, - {&__pyx_n_s_r17, __pyx_k_r17, sizeof(__pyx_k_r17), 0, 0, 1, 1}, - {&__pyx_n_s_r18, __pyx_k_r18, sizeof(__pyx_k_r18), 0, 0, 1, 1}, - {&__pyx_n_s_r19, __pyx_k_r19, sizeof(__pyx_k_r19), 0, 0, 1, 1}, - {&__pyx_n_s_r2, __pyx_k_r2, sizeof(__pyx_k_r2), 0, 0, 1, 1}, - {&__pyx_n_s_r20, __pyx_k_r20, sizeof(__pyx_k_r20), 0, 0, 1, 1}, - {&__pyx_n_s_r21, __pyx_k_r21, sizeof(__pyx_k_r21), 0, 0, 1, 1}, - {&__pyx_n_s_r22, __pyx_k_r22, sizeof(__pyx_k_r22), 0, 0, 1, 1}, - {&__pyx_n_s_r23, __pyx_k_r23, sizeof(__pyx_k_r23), 0, 0, 1, 1}, - {&__pyx_n_s_r24, __pyx_k_r24, sizeof(__pyx_k_r24), 0, 0, 1, 1}, - {&__pyx_n_s_r25, __pyx_k_r25, sizeof(__pyx_k_r25), 0, 0, 1, 1}, - {&__pyx_n_s_r26, __pyx_k_r26, sizeof(__pyx_k_r26), 0, 0, 1, 1}, - {&__pyx_n_s_r27, __pyx_k_r27, sizeof(__pyx_k_r27), 0, 0, 1, 1}, - {&__pyx_n_s_r28, __pyx_k_r28, sizeof(__pyx_k_r28), 0, 0, 1, 1}, - {&__pyx_n_s_r29, __pyx_k_r29, sizeof(__pyx_k_r29), 0, 0, 1, 1}, - {&__pyx_n_s_r3, __pyx_k_r3, sizeof(__pyx_k_r3), 0, 0, 1, 1}, - {&__pyx_n_s_r30, __pyx_k_r30, sizeof(__pyx_k_r30), 0, 0, 1, 1}, - {&__pyx_n_s_r31, __pyx_k_r31, sizeof(__pyx_k_r31), 0, 0, 1, 1}, - {&__pyx_n_s_r32, __pyx_k_r32, sizeof(__pyx_k_r32), 0, 0, 1, 1}, - {&__pyx_n_s_r33, __pyx_k_r33, sizeof(__pyx_k_r33), 0, 0, 1, 1}, - {&__pyx_n_s_r34, __pyx_k_r34, sizeof(__pyx_k_r34), 0, 0, 1, 1}, - {&__pyx_n_s_r35, __pyx_k_r35, sizeof(__pyx_k_r35), 0, 0, 1, 1}, - {&__pyx_n_s_r36, __pyx_k_r36, sizeof(__pyx_k_r36), 0, 0, 1, 1}, - {&__pyx_n_s_r37, __pyx_k_r37, sizeof(__pyx_k_r37), 0, 0, 1, 1}, - {&__pyx_n_s_r38, __pyx_k_r38, sizeof(__pyx_k_r38), 0, 0, 1, 1}, - {&__pyx_n_s_r39, __pyx_k_r39, sizeof(__pyx_k_r39), 0, 0, 1, 1}, - {&__pyx_n_s_r4, __pyx_k_r4, sizeof(__pyx_k_r4), 0, 0, 1, 1}, - {&__pyx_n_s_r40, __pyx_k_r40, sizeof(__pyx_k_r40), 0, 0, 1, 1}, - {&__pyx_n_s_r41, __pyx_k_r41, sizeof(__pyx_k_r41), 0, 0, 1, 1}, - {&__pyx_n_s_r42, __pyx_k_r42, sizeof(__pyx_k_r42), 0, 0, 1, 1}, - {&__pyx_n_s_r43, __pyx_k_r43, sizeof(__pyx_k_r43), 0, 0, 1, 1}, - {&__pyx_n_s_r44, __pyx_k_r44, sizeof(__pyx_k_r44), 0, 0, 1, 1}, - {&__pyx_n_s_r45, __pyx_k_r45, sizeof(__pyx_k_r45), 0, 0, 1, 1}, - {&__pyx_n_s_r46, __pyx_k_r46, sizeof(__pyx_k_r46), 0, 0, 1, 1}, - {&__pyx_n_s_r47, __pyx_k_r47, sizeof(__pyx_k_r47), 0, 0, 1, 1}, - {&__pyx_n_s_r48, __pyx_k_r48, sizeof(__pyx_k_r48), 0, 0, 1, 1}, - {&__pyx_n_s_r49, __pyx_k_r49, sizeof(__pyx_k_r49), 0, 0, 1, 1}, - {&__pyx_n_s_r5, __pyx_k_r5, sizeof(__pyx_k_r5), 0, 0, 1, 1}, - {&__pyx_n_s_r50, __pyx_k_r50, sizeof(__pyx_k_r50), 0, 0, 1, 1}, - {&__pyx_n_s_r51, __pyx_k_r51, sizeof(__pyx_k_r51), 0, 0, 1, 1}, - {&__pyx_n_s_r52, __pyx_k_r52, sizeof(__pyx_k_r52), 0, 0, 1, 1}, - {&__pyx_n_s_r53, __pyx_k_r53, sizeof(__pyx_k_r53), 0, 0, 1, 1}, - {&__pyx_n_s_r54, __pyx_k_r54, sizeof(__pyx_k_r54), 0, 0, 1, 1}, - {&__pyx_n_s_r55, __pyx_k_r55, sizeof(__pyx_k_r55), 0, 0, 1, 1}, - {&__pyx_n_s_r56, __pyx_k_r56, sizeof(__pyx_k_r56), 0, 0, 1, 1}, - {&__pyx_n_s_r57, __pyx_k_r57, sizeof(__pyx_k_r57), 0, 0, 1, 1}, - {&__pyx_n_s_r58, __pyx_k_r58, sizeof(__pyx_k_r58), 0, 0, 1, 1}, - {&__pyx_n_s_r59, __pyx_k_r59, sizeof(__pyx_k_r59), 0, 0, 1, 1}, - {&__pyx_n_s_r6, __pyx_k_r6, sizeof(__pyx_k_r6), 0, 0, 1, 1}, - {&__pyx_n_s_r60, __pyx_k_r60, sizeof(__pyx_k_r60), 0, 0, 1, 1}, - {&__pyx_n_s_r61, __pyx_k_r61, sizeof(__pyx_k_r61), 0, 0, 1, 1}, - {&__pyx_n_s_r62, __pyx_k_r62, 
sizeof(__pyx_k_r62), 0, 0, 1, 1}, - {&__pyx_n_s_r63, __pyx_k_r63, sizeof(__pyx_k_r63), 0, 0, 1, 1}, - {&__pyx_n_s_r64, __pyx_k_r64, sizeof(__pyx_k_r64), 0, 0, 1, 1}, - {&__pyx_n_s_r65, __pyx_k_r65, sizeof(__pyx_k_r65), 0, 0, 1, 1}, - {&__pyx_n_s_r66, __pyx_k_r66, sizeof(__pyx_k_r66), 0, 0, 1, 1}, - {&__pyx_n_s_r67, __pyx_k_r67, sizeof(__pyx_k_r67), 0, 0, 1, 1}, - {&__pyx_n_s_r68, __pyx_k_r68, sizeof(__pyx_k_r68), 0, 0, 1, 1}, - {&__pyx_n_s_r69, __pyx_k_r69, sizeof(__pyx_k_r69), 0, 0, 1, 1}, - {&__pyx_n_s_r7, __pyx_k_r7, sizeof(__pyx_k_r7), 0, 0, 1, 1}, - {&__pyx_n_s_r70, __pyx_k_r70, sizeof(__pyx_k_r70), 0, 0, 1, 1}, - {&__pyx_n_s_r71, __pyx_k_r71, sizeof(__pyx_k_r71), 0, 0, 1, 1}, - {&__pyx_n_s_r72, __pyx_k_r72, sizeof(__pyx_k_r72), 0, 0, 1, 1}, - {&__pyx_n_s_r73, __pyx_k_r73, sizeof(__pyx_k_r73), 0, 0, 1, 1}, - {&__pyx_n_s_r74, __pyx_k_r74, sizeof(__pyx_k_r74), 0, 0, 1, 1}, - {&__pyx_n_s_r75, __pyx_k_r75, sizeof(__pyx_k_r75), 0, 0, 1, 1}, - {&__pyx_n_s_r76, __pyx_k_r76, sizeof(__pyx_k_r76), 0, 0, 1, 1}, - {&__pyx_n_s_r77, __pyx_k_r77, sizeof(__pyx_k_r77), 0, 0, 1, 1}, - {&__pyx_n_s_r78, __pyx_k_r78, sizeof(__pyx_k_r78), 0, 0, 1, 1}, - {&__pyx_n_s_r79, __pyx_k_r79, sizeof(__pyx_k_r79), 0, 0, 1, 1}, - {&__pyx_n_s_r8, __pyx_k_r8, sizeof(__pyx_k_r8), 0, 0, 1, 1}, - {&__pyx_n_s_r80, __pyx_k_r80, sizeof(__pyx_k_r80), 0, 0, 1, 1}, - {&__pyx_n_s_r81, __pyx_k_r81, sizeof(__pyx_k_r81), 0, 0, 1, 1}, - {&__pyx_n_s_r82, __pyx_k_r82, sizeof(__pyx_k_r82), 0, 0, 1, 1}, - {&__pyx_n_s_r83, __pyx_k_r83, sizeof(__pyx_k_r83), 0, 0, 1, 1}, - {&__pyx_n_s_r84, __pyx_k_r84, sizeof(__pyx_k_r84), 0, 0, 1, 1}, - {&__pyx_n_s_r85, __pyx_k_r85, sizeof(__pyx_k_r85), 0, 0, 1, 1}, - {&__pyx_n_s_r86, __pyx_k_r86, sizeof(__pyx_k_r86), 0, 0, 1, 1}, - {&__pyx_n_s_r87, __pyx_k_r87, sizeof(__pyx_k_r87), 0, 0, 1, 1}, - {&__pyx_n_s_r88, __pyx_k_r88, sizeof(__pyx_k_r88), 0, 0, 1, 1}, - {&__pyx_n_s_r89, __pyx_k_r89, sizeof(__pyx_k_r89), 0, 0, 1, 1}, - {&__pyx_n_s_r9, __pyx_k_r9, sizeof(__pyx_k_r9), 0, 0, 1, 1}, - {&__pyx_n_s_r90, __pyx_k_r90, sizeof(__pyx_k_r90), 0, 0, 1, 1}, - {&__pyx_n_s_r91, __pyx_k_r91, sizeof(__pyx_k_r91), 0, 0, 1, 1}, - {&__pyx_n_s_r92, __pyx_k_r92, sizeof(__pyx_k_r92), 0, 0, 1, 1}, - {&__pyx_n_s_r93, __pyx_k_r93, sizeof(__pyx_k_r93), 0, 0, 1, 1}, - {&__pyx_n_s_r94, __pyx_k_r94, sizeof(__pyx_k_r94), 0, 0, 1, 1}, - {&__pyx_n_s_r95, __pyx_k_r95, sizeof(__pyx_k_r95), 0, 0, 1, 1}, - {&__pyx_n_s_r96, __pyx_k_r96, sizeof(__pyx_k_r96), 0, 0, 1, 1}, - {&__pyx_n_s_r97, __pyx_k_r97, sizeof(__pyx_k_r97), 0, 0, 1, 1}, - {&__pyx_n_s_r98, __pyx_k_r98, sizeof(__pyx_k_r98), 0, 0, 1, 1}, - {&__pyx_n_s_r99, __pyx_k_r99, sizeof(__pyx_k_r99), 0, 0, 1, 1}, - {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_x0, __pyx_k_x0, sizeof(__pyx_k_x0), 0, 0, 1, 1}, - {&__pyx_n_s_x1, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1}, - {&__pyx_n_s_x2, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1}, - {&__pyx_n_s_x3, __pyx_k_x3, sizeof(__pyx_k_x3), 0, 0, 1, 1}, - {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {&__pyx_n_s_y0, __pyx_k_y0, sizeof(__pyx_k_y0), 0, 0, 1, 1}, - {&__pyx_n_s_y1, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1}, - {&__pyx_n_s_y2, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1}, - {&__pyx_n_s_y3, __pyx_k_y3, sizeof(__pyx_k_y3), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_AttributeError = 
__Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 7, __pyx_L1_error) - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 7, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - __pyx_tuple_ = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_glyphset); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple_); - __Pyx_GIVEREF(__pyx_tuple_); - __pyx_codeobj__2 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple_, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_init, 18, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__2)) __PYX_ERR(0, 18, __pyx_L1_error) - __pyx_tuple__3 = PyTuple_Pack(1, ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - __pyx_tuple__4 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - __pyx_codeobj__5 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__4, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_moveTo, 28, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__5)) __PYX_ERR(0, 28, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_tuple__6 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - __pyx_codeobj__7 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__6, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_closePath, 31, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__7)) __PYX_ERR(0, 31, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_tuple__8 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - __pyx_codeobj__9 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__8, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_endPath, 36, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__9)) __PYX_ERR(0, 36, __pyx_L1_error) - - /* 
"fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__10 = PyTuple_Pack(19, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(2, 0, 19, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_lineTo, 57, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 57, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__12 = PyTuple_Pack(63, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(3, 0, 63, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_qCurveToOne, 159, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 159, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__14 = PyTuple_Pack(145, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r132, __pyx_n_s_r131, __pyx_n_s_r130, __pyx_n_s_r129, __pyx_n_s_r128, __pyx_n_s_r127, __pyx_n_s_r126, __pyx_n_s_r125, __pyx_n_s_r124, __pyx_n_s_r123, __pyx_n_s_r122, __pyx_n_s_r121, __pyx_n_s_r120, __pyx_n_s_r119, __pyx_n_s_r118, __pyx_n_s_r117, __pyx_n_s_r116, __pyx_n_s_r115, __pyx_n_s_r114, 
__pyx_n_s_r113, __pyx_n_s_r112, __pyx_n_s_r111, __pyx_n_s_r110, __pyx_n_s_r109, __pyx_n_s_r108, __pyx_n_s_r107, __pyx_n_s_r106, __pyx_n_s_r105, __pyx_n_s_r104, __pyx_n_s_r103, __pyx_n_s_r102, __pyx_n_s_r101, __pyx_n_s_r100, __pyx_n_s_r99, __pyx_n_s_r98, __pyx_n_s_r97, __pyx_n_s_r96, __pyx_n_s_r95, __pyx_n_s_r94, __pyx_n_s_r93, __pyx_n_s_r92, __pyx_n_s_r91, __pyx_n_s_r90, __pyx_n_s_r89, __pyx_n_s_r88, __pyx_n_s_r87, __pyx_n_s_r86, __pyx_n_s_r85, __pyx_n_s_r84, __pyx_n_s_r83, __pyx_n_s_r82, __pyx_n_s_r81, __pyx_n_s_r80, __pyx_n_s_r79, __pyx_n_s_r78, __pyx_n_s_r77, __pyx_n_s_r76, __pyx_n_s_r75, __pyx_n_s_r74, __pyx_n_s_r73, __pyx_n_s_r72, __pyx_n_s_r71, __pyx_n_s_r70, __pyx_n_s_r69, __pyx_n_s_r68, __pyx_n_s_r67, __pyx_n_s_r66, __pyx_n_s_r65, __pyx_n_s_r64, __pyx_n_s_r63, __pyx_n_s_r62, __pyx_n_s_r61, __pyx_n_s_r60, __pyx_n_s_r59, __pyx_n_s_r58, __pyx_n_s_r57, __pyx_n_s_r56, __pyx_n_s_r55, __pyx_n_s_r54, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - __pyx_codeobj__15 = (PyObject*)__Pyx_PyCode_New(4, 0, 145, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__14, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_curveToOne, 450, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__15)) __PYX_ERR(0, 450, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":875 - * "MomentsPen", - * [ - * ("area", 1), # <<<<<<<<<<<<<< - * ("momentX", x), - * ("momentY", y), - */ - __pyx_tuple__16 = PyTuple_Pack(2, __pyx_n_u_area, __pyx_int_1); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(0, 875, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static 
CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initmomentsPen(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initmomentsPen(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_momentsPen(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'momentsPen' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("momentsPen", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_fontTools__pens__momentsPen) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "fontTools.pens.momentsPen")) { - if (unlikely(PyDict_SetItemString(modules, "fontTools.pens.momentsPen", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - (void)__Pyx_modinit_type_init_code(); - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "fontTools/pens/momentsPen.py":1 - * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_BasePen); - __Pyx_GIVEREF(__pyx_n_s_BasePen); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_BasePen); - __Pyx_INCREF(__pyx_n_s_OpenContourError); - __Pyx_GIVEREF(__pyx_n_s_OpenContourError); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_OpenContourError); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_pens_basePen, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_BasePen, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_OpenContourError, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "fontTools/pens/momentsPen.py":6 - * import cython - * - * COMPILED = cython.compiled # <<<<<<<<<<<<<< - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_True) < 
0) __PYX_ERR(0, 6, __pyx_L2_error) - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":7 - * - * COMPILED = cython.compiled - * except (AttributeError, ImportError): # <<<<<<<<<<<<<< - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython - */ - __pyx_t_6 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_AttributeError) || __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ImportError); - if (__pyx_t_6) { - __Pyx_AddTraceback("fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_1, &__pyx_t_7) < 0) __PYX_ERR(0, 7, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/pens/momentsPen.py":9 - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython # <<<<<<<<<<<<<< - * - * COMPILED = False - */ - __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_n_s_cython); - __Pyx_GIVEREF(__pyx_n_s_cython); - PyList_SET_ITEM(__pyx_t_8, 0, __pyx_n_s_cython); - __pyx_t_9 = __Pyx_Import(__pyx_n_s_fontTools_misc, __pyx_t_8, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_ImportFrom(__pyx_t_9, __pyx_n_s_cython); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cython, __pyx_t_8) < 0) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":11 - * from fontTools.misc import cython - * - * COMPILED = False # <<<<<<<<<<<<<< - * - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_False) < 0) __PYX_ERR(0, 11, __pyx_L4_except_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L3_exception_handled; - } - goto __pyx_L4_except_error; - __pyx_L4_except_error:; - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - __pyx_L7_try_end:; - } - - /* "fontTools/pens/momentsPen.py":14 - * - * - * __all__ = ["MomentsPen"] # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = PyList_New(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_n_u_MomentsPen); - PyList_SET_ITEM(__pyx_t_7, 0, __pyx_n_u_MomentsPen); - if 
(PyDict_SetItem(__pyx_d, __pyx_n_s_all, __pyx_t_7) < 0) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/pens/momentsPen.py":17 - * - * - * class MomentsPen(BasePen): # <<<<<<<<<<<<<< - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_7); - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = __Pyx_Py3MetaclassPrepare(__pyx_t_7, __pyx_t_1, __pyx_n_s_MomentsPen, __pyx_n_s_MomentsPen, (PyObject *) NULL, __pyx_n_s_fontTools_pens_momentsPen, (PyObject *) NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, 0, __pyx_n_s_MomentsPen___init, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__2)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_9, __pyx_tuple__3); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_init, __pyx_t_9) < 0) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, 0, __pyx_n_s_MomentsPen__moveTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__5)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_moveTo, __pyx_t_9) < 0) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, 0, __pyx_n_s_MomentsPen__closePath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__7)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_closePath, __pyx_t_9) < 0) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, 0, __pyx_n_s_MomentsPen__endPath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__9)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, 
__pyx_n_s_endPath, __pyx_t_9) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, 0, __pyx_n_s_MomentsPen__lineTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__11)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_lineTo, __pyx_t_9) < 0) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, 0, __pyx_n_s_MomentsPen__qCurveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__13)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_qCurveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, 0, __pyx_n_s_MomentsPen__curveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__15)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_curveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":17 - * - * - * class MomentsPen(BasePen): # <<<<<<<<<<<<<< - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) - */ - __pyx_t_9 = __Pyx_Py3ClassCreate(__pyx_t_7, __pyx_n_s_MomentsPen, __pyx_t_1, __pyx_t_2, NULL, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_MomentsPen, __pyx_t_9) < 0) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":869 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * from fontTools.misc.symfont import x, y, printGreenPen - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_name); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 869, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_n_u_main, Py_EQ)); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 869, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_10) { - - /* "fontTools/pens/momentsPen.py":870 - * - * if __name__ == "__main__": - * from fontTools.misc.symfont import x, y, 
printGreenPen # <<<<<<<<<<<<<< - * - * printGreenPen( - */ - __pyx_t_1 = PyList_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_x); - __Pyx_GIVEREF(__pyx_n_s_x); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_x); - __Pyx_INCREF(__pyx_n_s_y); - __Pyx_GIVEREF(__pyx_n_s_y); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_y); - __Pyx_INCREF(__pyx_n_s_printGreenPen); - __Pyx_GIVEREF(__pyx_n_s_printGreenPen); - PyList_SET_ITEM(__pyx_t_1, 2, __pyx_n_s_printGreenPen); - __pyx_t_7 = __Pyx_Import(__pyx_n_s_fontTools_misc_symfont, __pyx_t_1, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_x, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_y, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_printGreenPen, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/pens/momentsPen.py":872 - * from fontTools.misc.symfont import x, y, printGreenPen - * - * printGreenPen( # <<<<<<<<<<<<<< - * "MomentsPen", - * [ - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/pens/momentsPen.py":876 - * [ - * ("area", 1), - * ("momentX", x), # <<<<<<<<<<<<<< - * ("momentY", y), - * ("momentXX", x**2), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 876, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 876, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_u_momentX); - __Pyx_GIVEREF(__pyx_n_u_momentX); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_n_u_momentX); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":877 - * ("area", 1), - * ("momentX", x), - * ("momentY", y), # <<<<<<<<<<<<<< - * ("momentXX", x**2), - * ("momentXY", x * y), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 877, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 877, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_INCREF(__pyx_n_u_momentY); - __Pyx_GIVEREF(__pyx_n_u_momentY); - PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_n_u_momentY); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":878 - * ("momentX", x), - * ("momentY", y), - * ("momentXX", x**2), # <<<<<<<<<<<<<< - * ("momentXY", x * y), - * ("momentYY", y**2), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = 
PyNumber_Power(__pyx_t_1, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_u_momentXX); - __Pyx_GIVEREF(__pyx_n_u_momentXX); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_n_u_momentXX); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":879 - * ("momentY", y), - * ("momentXX", x**2), - * ("momentXY", x * y), # <<<<<<<<<<<<<< - * ("momentYY", y**2), - * ], - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_x); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_y); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = PyNumber_Multiply(__pyx_t_8, __pyx_t_11); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_INCREF(__pyx_n_u_momentXY); - __Pyx_GIVEREF(__pyx_n_u_momentXY); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_n_u_momentXY); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/pens/momentsPen.py":880 - * ("momentXX", x**2), - * ("momentXY", x * y), - * ("momentYY", y**2), # <<<<<<<<<<<<<< - * ], - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_y); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_8 = PyNumber_Power(__pyx_t_12, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_n_u_momentYY); - __Pyx_GIVEREF(__pyx_n_u_momentYY); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_momentYY); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":874 - * printGreenPen( - * "MomentsPen", - * [ # <<<<<<<<<<<<<< - * ("area", 1), - * ("momentX", x), - */ - __pyx_t_8 = PyList_New(6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 874, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - PyList_SET_ITEM(__pyx_t_8, 0, __pyx_tuple__16); - __Pyx_GIVEREF(__pyx_t_2); - PyList_SET_ITEM(__pyx_t_8, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_9); - PyList_SET_ITEM(__pyx_t_8, 2, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_8, 3, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_11); - PyList_SET_ITEM(__pyx_t_8, 4, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_12); - PyList_SET_ITEM(__pyx_t_8, 5, __pyx_t_12); - __pyx_t_2 = 0; - __pyx_t_9 = 0; - __pyx_t_1 = 0; - __pyx_t_11 = 0; - __pyx_t_12 = 0; - - /* "fontTools/pens/momentsPen.py":872 - * from fontTools.misc.symfont import x, y, printGreenPen - * - * printGreenPen( # <<<<<<<<<<<<<< - * "MomentsPen", - * [ - */ - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_n_u_MomentsPen); 
- PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":869 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * from fontTools.misc.symfont import x, y, printGreenPen - * - */ - } - - /* "fontTools/pens/momentsPen.py":1 - * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_8 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_8) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.pens.momentsPen"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const 
char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? 
__PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectSetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* 
attr_name, PyObject* value) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_setattro)) - return tp->tp_setattro(obj, attr_name, value); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_setattr)) - return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value); -#endif - return PyObject_SetAttr(obj, attr_name, value); -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallNoArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, NULL, 0); - } -#endif -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func))) -#else - if (likely(PyCFunction_Check(func))) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (__Pyx_PyFastCFunction_Check(func)) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); 
- Py_DECREF(function); -done: - return result; -} - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R 
should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* exc_type = tstate->curexc_type; - if (unlikely(exc_type)) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) { - PyObject *exc_value, *exc_tb; - exc_value = tstate->curexc_value; - exc_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - Py_DECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); - return 0; - } else { - return -1; - } - } - return 0; -#else - if (unlikely(PyErr_Occurred())) { - if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) { - PyErr_Clear(); - return 0; - } else { - return -1; - } - } - return 0; -#endif -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - 
module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - 
PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* CalculateMetaclass */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) { - Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases); - for (i=0; i < nbases; i++) { - PyTypeObject *tmptype; - PyObject *tmp = PyTuple_GET_ITEM(bases, i); - tmptype = Py_TYPE(tmp); -#if PY_MAJOR_VERSION < 3 - if (tmptype == &PyClass_Type) - continue; -#endif - if (!metaclass) { - metaclass = tmptype; - continue; - } - if (PyType_IsSubtype(metaclass, tmptype)) - continue; - if (PyType_IsSubtype(tmptype, metaclass)) { - metaclass = tmptype; - continue; - } - PyErr_SetString(PyExc_TypeError, - "metaclass conflict: " - "the metaclass of a derived class " - "must be a (non-strict) subclass " - "of the metaclasses of all its bases"); - return NULL; - } - if (!metaclass) { -#if PY_MAJOR_VERSION < 3 - metaclass = &PyClass_Type; -#else - metaclass = &PyType_Type; -#endif - } - Py_INCREF((PyObject*) metaclass); - return (PyObject*) metaclass; -} - -/* FetchCommonType */ -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* fake_module; - PyTypeObject* cached_type = NULL; - fake_module = PyImport_AddModule((char*) "_cython_" CYTHON_ABI); - if (!fake_module) return NULL; - Py_INCREF(fake_module); - cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name); - if (cached_type) { - if (!PyType_Check((PyObject*)cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", - type->tp_name); - goto bad; - } - if (cached_type->tp_basicsize != type->tp_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - type->tp_name); - goto bad; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if 
(PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; - } -done: - Py_DECREF(fake_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} - -/* CythonFunctionShared */ -#include -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure) -{ - if (unlikely(op->func_doc == NULL)) { - if (op->func.m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(op->func.m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp = op->func_doc; - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - op->func_doc = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(op->func.m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - tmp = op->func_name; - Py_INCREF(value); - op->func_name = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - tmp = op->func_qualname; - Py_INCREF(value); - op->func_qualname = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure) -{ - PyObject *self; - self = m->func_closure; - if (self == NULL) - self = Py_None; - Py_INCREF(self); - return self; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - 
"setting function's dictionary to a non-dict"); - return -1; - } - tmp = op->func_dict; - Py_INCREF(value); - op->func_dict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyTuple_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_tuple; - op->defaults_tuple = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_tuple; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_kwdict; - op->defaults_kwdict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_kwdict; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value || value == Py_None) { - value = NULL; - } else if (!PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - tmp = op->func_annotations; - op->func_annotations = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * 
-__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->func_annotations; - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "__self__", (getter)__Pyx_CyFunction_get_self, 0, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0}, - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args) -{ -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(m->func.m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - op->func.m_ml = ml; - op->func.m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - op->func.m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; - op->func_classobj = NULL; - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - 
op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(m->func.m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); - Py_CLEAR(m->func_classobj); - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - PyObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); - Py_VISIT(m->func.m_module); - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); - Py_VISIT(m->func_classobj); - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type) -{ -#if PY_MAJOR_VERSION < 3 - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) { - Py_INCREF(func); - return func; - } - if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) { - if (type == NULL) - type = (PyObject *)(Py_TYPE(obj)); - return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type))); - } - if (obj == Py_None) - obj = NULL; -#endif - return __Pyx_PyMethod_New(func, obj, type); -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - Py_ssize_t size; - switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 0)) - return (*meth)(self, NULL); - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) 
== 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags in " - "__Pyx_CyFunction_Call. METH_OLDARGS is no " - "longer supported!"); - return NULL; - } - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw); -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, - 0, - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_CyFunction_descr_get, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -static int __pyx_CyFunction_init(void) { - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void 
*__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* Py3ClassCreate */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, - PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) { - PyObject *ns; - if (metaclass) { - PyObject *prep = __Pyx_PyObject_GetAttrStr(metaclass, __pyx_n_s_prepare); - if (prep) { - PyObject *pargs = PyTuple_Pack(2, name, bases); - if (unlikely(!pargs)) { - Py_DECREF(prep); - return NULL; - } - ns = PyObject_Call(prep, pargs, mkw); - Py_DECREF(prep); - Py_DECREF(pargs); - } else { - if (unlikely(!PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - PyErr_Clear(); - ns = PyDict_New(); - } - } else { - ns = PyDict_New(); - } - if (unlikely(!ns)) - return NULL; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad; - if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad; - return ns; -bad: - Py_DECREF(ns); - return NULL; -} -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, - PyObject *dict, PyObject *mkw, - int calculate_metaclass, int allow_py2_metaclass) { - PyObject *result, *margs; - PyObject *owned_metaclass = NULL; - if (allow_py2_metaclass) { - owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass); - if (owned_metaclass) { - metaclass = owned_metaclass; - } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) { - PyErr_Clear(); - } else { - return NULL; - } - } - if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) { - metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases); - Py_XDECREF(owned_metaclass); - if (unlikely(!metaclass)) - return NULL; - owned_metaclass = metaclass; - } - margs = PyTuple_Pack(3, name, bases, dict); - if (unlikely(!margs)) { - result = NULL; - } else { - result = PyObject_Call(metaclass, margs, mkw); - Py_DECREF(margs); - } - Py_XDECREF(owned_metaclass); - return result; -} - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return 
PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = 
code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? 
-c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if 
(likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | 
(unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC 
diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * 
sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - 
-/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED 
-static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? 
digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_D_E_F_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_D_E_F_.py deleted file mode 100644 index d8ae8b23bb6af53aeb08271c3d489f52a28a5e02..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G_D_E_F_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_G_D_E_F_(BaseTTXConverter): - pass diff --git a/spaces/cihyFjudo/fairness-paper-search/Cyborg-1989-Full-Movie-In-Hindi-Download-NEW.md b/spaces/cihyFjudo/fairness-paper-search/Cyborg-1989-Full-Movie-In-Hindi-Download-NEW.md deleted file mode 100644 index fa2906f56cd97184f186e9c0a664cdbb6048b833..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Cyborg-1989-Full-Movie-In-Hindi-Download-NEW.md +++ /dev/null @@ -1,60 +0,0 @@ -## cyborg 1989 full movie in hindi download - - - - - - - - - -**Download File >>> [https://smitodoutcu.blogspot.com/?c=2txl9c](https://smitodoutcu.blogspot.com/?c=2txl9c)** - - - - - - - - - - - - Here is a possible title and article with HTML formatting for the keyword "cyborg 1989 full movie in hindi download": - -# Cyborg (1989): A Sci-Fi Action Thriller Starring Jean-Claude Van Damme - - - -Cyborg is a 1989 American science fiction action film directed by Albert Pyun and starring Jean-Claude Van Damme as a martial artist who hunts a killer in a plague-infested urban wasteland of the future. The film was originally intended to be a sequel to Masters of the Universe, but the project was cancelled and reworked into a standalone film with a low budget of $500,000. Cyborg was released on April 7, 1989 and grossed over $10 million at the box office. It received mixed reviews from critics, who praised Van Damme's performance and action scenes, but criticized the plot, dialogue, and production values. - - - -The film is set in a post-apocalyptic world where a deadly plague has wiped out most of humanity. A group of scientists have developed a cure, but they need to transport it across the country to Atlanta. They hire a cyborg named Pearl Prophet (Dayle Haddon) to carry the data containing the cure in her neural implant. However, Pearl is captured by a gang of pirates led by Fender Tremolo (Vincent Klyn), who wants to use the cure for himself and his followers. Pearl manages to send a distress signal to a mercenary named Gibson Rickenbacker (Van Damme), who agrees to rescue her and escort her to Atlanta. Along the way, they encounter various obstacles and enemies, as well as allies such as Nady Simmons (Deborah Richter), a young woman who joins them on their mission. - - - -Cyborg is considered to be one of Van Damme's early breakthrough films, as it showcased his martial arts skills and charisma. The film also spawned two sequels: Cyborg 2 (1993) and Cyborg 3: The Recycler (1994), neither of which featured Van Damme or any of the original cast members. Cyborg has developed a cult following among fans of sci-fi and action genres, and has been referenced in various media such as video games, comics, and music. 
- - - -If you are looking for a way to watch Cyborg (1989) full movie in Hindi, you can download it from various online sources such as Mkvking.com, MoviesMint.com, Archive.org, Stream2023.iblogger.org, or Companiesvoper.weebly.com. These websites offer different quality and formats of the movie, such as BluRay, WebRip, MKV, MP4, etc. You can also find subtitles in English and other languages for the movie. However, be aware that some of these websites may contain ads or malware that could harm your device or data. Therefore, it is advisable to use a VPN service and an antivirus software before downloading any content from these websites. - -Here are a few more paragraphs with HTML formatting for the article: - -Cyborg was originally conceived as a sequel to Masters of the Universe (1987), a fantasy film based on the popular toy line and cartoon series. However, due to the financial troubles of Cannon Films, the project was cancelled and the sets and costumes were reused for a new film with a sci-fi theme. Cyborg was also intended to be a live-action adaptation of Spider-Man, but the rights to the character were not secured in time. The director, Albert Pyun, wrote the script for Cyborg under a pseudonym, using elements from two previous scripts he had written: Johnny Guitar and Alex Rain. Some network television channels still give the film's title as Masters of the Universe 2: Cyborg, leading some viewers to think it is a sequel. - - - -Cyborg was shot in 23 days on a budget of less than $500,000, which was very low for an action film at the time. The film features many scenes of hand-to-hand combat and martial arts, showcasing Van Damme's skills and athleticism. The film also has a dark and gritty tone, with a bleak vision of the future where society has collapsed and violence is rampant. The film was influenced by other post-apocalyptic films such as Mad Max (1979) and Escape from New York (1981), as well as comic books and anime. The film's score was composed by Kevin Bassinson, who used synthesizers and electric guitars to create a rock-inspired soundtrack. - - - -Cyborg was released on April 7, 1989 in the United States, where it grossed over $10 million at the box office. It received mixed reviews from critics, who praised Van Damme's performance and action scenes, but criticized the plot, dialogue, and production values. The film has a rating of 14% on Rotten Tomatoes based on 21 reviews, with an average score of 3.5/10. The consensus reads: \"Jean-Claude Van Damme kicks up his heels once again in this futuristic David-and-Goliath tale.\" The film was more successful overseas, especially in Europe and Asia, where Van Damme's popularity was growing. 
- - dfd1c89656 - - - - - diff --git a/spaces/cloudqi/CQI_Texto_para_imagem_PT_v0/README.md b/spaces/cloudqi/CQI_Texto_para_imagem_PT_v0/README.md deleted file mode 100644 index ca2075c8c65ad99241fb6f59e687041b6a478d9f..0000000000000000000000000000000000000000 --- a/spaces/cloudqi/CQI_Texto_para_imagem_PT_v0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CQI Texto Para Imagem PT V0 -emoji: 🌃 -colorFrom: gray -colorTo: black -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/payload_streamer.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/payload_streamer.py deleted file mode 100644 index 9f8b8bc57cc22fc693da1646bf806c2a6ca8d797..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/payload_streamer.py +++ /dev/null @@ -1,75 +0,0 @@ -""" -Payload implemenation for coroutines as data provider. - -As a simple case, you can upload data from file:: - - @aiohttp.streamer - async def file_sender(writer, file_name=None): - with open(file_name, 'rb') as f: - chunk = f.read(2**16) - while chunk: - await writer.write(chunk) - - chunk = f.read(2**16) - -Then you can use `file_sender` like this: - - async with session.post('http://httpbin.org/post', - data=file_sender(file_name='huge_file')) as resp: - print(await resp.text()) - -..note:: Coroutine must accept `writer` as first argument - -""" - -import types -import warnings -from typing import Any, Awaitable, Callable, Dict, Tuple - -from .abc import AbstractStreamWriter -from .payload import Payload, payload_type - -__all__ = ("streamer",) - - -class _stream_wrapper: - def __init__( - self, - coro: Callable[..., Awaitable[None]], - args: Tuple[Any, ...], - kwargs: Dict[str, Any], - ) -> None: - self.coro = types.coroutine(coro) - self.args = args - self.kwargs = kwargs - - async def __call__(self, writer: AbstractStreamWriter) -> None: - await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] - - -class streamer: - def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: - warnings.warn( - "@streamer is deprecated, use async generators instead", - DeprecationWarning, - stacklevel=2, - ) - self.coro = coro - - def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: - return _stream_wrapper(self.coro, args, kwargs) - - -@payload_type(_stream_wrapper) -class StreamWrapperPayload(Payload): - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) - - -@payload_type(streamer) -class StreamPayload(StreamWrapperPayload): - def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: - super().__init__(value(), *args, **kwargs) - - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/perimeterPen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/perimeterPen.py deleted file mode 100644 index efb2b2d14cc46dc51ff795cf7a1fb95bd6d63673..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/perimeterPen.py +++ /dev/null @@ -1,69 +0,0 @@ -# -*- coding: utf-8 -*- -"""Calculate the 
perimeter of a glyph.""" - -from fontTools.pens.basePen import BasePen -from fontTools.misc.bezierTools import ( - approximateQuadraticArcLengthC, - calcQuadraticArcLengthC, - approximateCubicArcLengthC, - calcCubicArcLengthC, -) -import math - - -__all__ = ["PerimeterPen"] - - -def _distance(p0, p1): - return math.hypot(p0[0] - p1[0], p0[1] - p1[1]) - - -class PerimeterPen(BasePen): - def __init__(self, glyphset=None, tolerance=0.005): - BasePen.__init__(self, glyphset) - self.value = 0 - self.tolerance = tolerance - - # Choose which algorithm to use for quadratic and for cubic. - # Quadrature is faster but has fixed error characteristic with no strong - # error bound. The cutoff points are derived empirically. - self._addCubic = ( - self._addCubicQuadrature if tolerance >= 0.0015 else self._addCubicRecursive - ) - self._addQuadratic = ( - self._addQuadraticQuadrature - if tolerance >= 0.00075 - else self._addQuadraticExact - ) - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _lineTo(self, p1): - p0 = self._getCurrentPoint() - self.value += _distance(p0, p1) - - def _addQuadraticExact(self, c0, c1, c2): - self.value += calcQuadraticArcLengthC(c0, c1, c2) - - def _addQuadraticQuadrature(self, c0, c1, c2): - self.value += approximateQuadraticArcLengthC(c0, c1, c2) - - def _qCurveToOne(self, p1, p2): - p0 = self._getCurrentPoint() - self._addQuadratic(complex(*p0), complex(*p1), complex(*p2)) - - def _addCubicRecursive(self, c0, c1, c2, c3): - self.value += calcCubicArcLengthC(c0, c1, c2, c3, self.tolerance) - - def _addCubicQuadrature(self, c0, c1, c2, c3): - self.value += approximateCubicArcLengthC(c0, c1, c2, c3) - - def _curveToOne(self, p1, p2, p3): - p0 = self._getCurrentPoint() - self._addCubic(complex(*p0), complex(*p1), complex(*p2), complex(*p3)) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bsf.c deleted file mode 100644 index 42cc1b5ab0c63a7e10e1449250596bb402b64c10..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/bsf.c +++ /dev/null @@ -1,562 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "config_components.h" - -#include "libavutil/avassert.h" -#include "libavutil/log.h" -#include "libavutil/mem.h" -#include "libavutil/opt.h" -#include "libavutil/avstring.h" -#include "libavutil/bprint.h" - -#include "bsf.h" -#include "bsf_internal.h" -#include "codec_desc.h" -#include "codec_par.h" - -#define IS_EMPTY(pkt) (!(pkt)->data && !(pkt)->side_data_elems) - -static av_always_inline const FFBitStreamFilter *ff_bsf(const AVBitStreamFilter *bsf) -{ - return (const FFBitStreamFilter*)bsf; -} - -typedef struct FFBSFContext { - AVBSFContext pub; - AVPacket *buffer_pkt; - int eof; -} FFBSFContext; - -static av_always_inline FFBSFContext *ffbsfcontext(AVBSFContext *ctx) -{ - return (FFBSFContext *)ctx; -} - -void av_bsf_free(AVBSFContext **pctx) -{ - AVBSFContext *ctx; - FFBSFContext *bsfi; - - if (!pctx || !*pctx) - return; - ctx = *pctx; - bsfi = ffbsfcontext(ctx); - - if (ctx->priv_data) { - if (ff_bsf(ctx->filter)->close) - ff_bsf(ctx->filter)->close(ctx); - if (ctx->filter->priv_class) - av_opt_free(ctx->priv_data); - av_freep(&ctx->priv_data); - } - av_packet_free(&bsfi->buffer_pkt); - - avcodec_parameters_free(&ctx->par_in); - avcodec_parameters_free(&ctx->par_out); - - av_freep(pctx); -} - -static void *bsf_child_next(void *obj, void *prev) -{ - AVBSFContext *ctx = obj; - if (!prev && ctx->filter->priv_class) - return ctx->priv_data; - return NULL; -} - -static const char *bsf_to_name(void *bsf) -{ - return ((AVBSFContext *)bsf)->filter->name; -} - -static const AVClass bsf_class = { - .class_name = "AVBSFContext", - .item_name = bsf_to_name, - .version = LIBAVUTIL_VERSION_INT, - .child_next = bsf_child_next, - .child_class_iterate = ff_bsf_child_class_iterate, - .category = AV_CLASS_CATEGORY_BITSTREAM_FILTER, -}; - -const AVClass *av_bsf_get_class(void) -{ - return &bsf_class; -} - -int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **pctx) -{ - AVBSFContext *ctx; - FFBSFContext *bsfi; - int ret; - - bsfi = av_mallocz(sizeof(*bsfi)); - if (!bsfi) - return AVERROR(ENOMEM); - ctx = &bsfi->pub; - - ctx->av_class = &bsf_class; - ctx->filter = filter; - - ctx->par_in = avcodec_parameters_alloc(); - ctx->par_out = avcodec_parameters_alloc(); - if (!ctx->par_in || !ctx->par_out) { - ret = AVERROR(ENOMEM); - goto fail; - } - /* allocate priv data and init private options */ - if (ff_bsf(filter)->priv_data_size) { - ctx->priv_data = av_mallocz(ff_bsf(filter)->priv_data_size); - if (!ctx->priv_data) { - ret = AVERROR(ENOMEM); - goto fail; - } - if (filter->priv_class) { - *(const AVClass **)ctx->priv_data = filter->priv_class; - av_opt_set_defaults(ctx->priv_data); - } - } - bsfi->buffer_pkt = av_packet_alloc(); - if (!bsfi->buffer_pkt) { - ret = AVERROR(ENOMEM); - goto fail; - } - - *pctx = ctx; - return 0; -fail: - av_bsf_free(&ctx); - return ret; -} - -int av_bsf_init(AVBSFContext *ctx) -{ - int ret, i; - - /* check that the codec is supported */ - if (ctx->filter->codec_ids) { - for (i = 0; ctx->filter->codec_ids[i] != AV_CODEC_ID_NONE; i++) - if (ctx->par_in->codec_id == ctx->filter->codec_ids[i]) - break; - if (ctx->filter->codec_ids[i] == AV_CODEC_ID_NONE) { - const AVCodecDescriptor *desc = avcodec_descriptor_get(ctx->par_in->codec_id); - av_log(ctx, AV_LOG_ERROR, "Codec '%s' (%d) is not supported by the " - "bitstream 
filter '%s'. Supported codecs are: ", - desc ? desc->name : "unknown", ctx->par_in->codec_id, ctx->filter->name); - for (i = 0; ctx->filter->codec_ids[i] != AV_CODEC_ID_NONE; i++) { - enum AVCodecID codec_id = ctx->filter->codec_ids[i]; - av_log(ctx, AV_LOG_ERROR, "%s (%d) ", - avcodec_get_name(codec_id), codec_id); - } - av_log(ctx, AV_LOG_ERROR, "\n"); - return AVERROR(EINVAL); - } - } - - /* initialize output parameters to be the same as input - * init below might overwrite that */ - ret = avcodec_parameters_copy(ctx->par_out, ctx->par_in); - if (ret < 0) - return ret; - - ctx->time_base_out = ctx->time_base_in; - - if (ff_bsf(ctx->filter)->init) { - ret = ff_bsf(ctx->filter)->init(ctx); - if (ret < 0) - return ret; - } - - return 0; -} - -void av_bsf_flush(AVBSFContext *ctx) -{ - FFBSFContext *const bsfi = ffbsfcontext(ctx); - - bsfi->eof = 0; - - av_packet_unref(bsfi->buffer_pkt); - - if (ff_bsf(ctx->filter)->flush) - ff_bsf(ctx->filter)->flush(ctx); -} - -int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt) -{ - FFBSFContext *const bsfi = ffbsfcontext(ctx); - int ret; - - if (!pkt || IS_EMPTY(pkt)) { - if (pkt) - av_packet_unref(pkt); - bsfi->eof = 1; - return 0; - } - - if (bsfi->eof) { - av_log(ctx, AV_LOG_ERROR, "A non-NULL packet sent after an EOF.\n"); - return AVERROR(EINVAL); - } - - if (!IS_EMPTY(bsfi->buffer_pkt)) - return AVERROR(EAGAIN); - - ret = av_packet_make_refcounted(pkt); - if (ret < 0) - return ret; - av_packet_move_ref(bsfi->buffer_pkt, pkt); - - return 0; -} - -int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt) -{ - return ff_bsf(ctx->filter)->filter(ctx, pkt); -} - -int ff_bsf_get_packet(AVBSFContext *ctx, AVPacket **pkt) -{ - FFBSFContext *const bsfi = ffbsfcontext(ctx); - AVPacket *tmp_pkt; - - if (bsfi->eof) - return AVERROR_EOF; - - if (IS_EMPTY(bsfi->buffer_pkt)) - return AVERROR(EAGAIN); - - tmp_pkt = av_packet_alloc(); - if (!tmp_pkt) - return AVERROR(ENOMEM); - - *pkt = bsfi->buffer_pkt; - bsfi->buffer_pkt = tmp_pkt; - - return 0; -} - -int ff_bsf_get_packet_ref(AVBSFContext *ctx, AVPacket *pkt) -{ - FFBSFContext *const bsfi = ffbsfcontext(ctx); - - if (bsfi->eof) - return AVERROR_EOF; - - if (IS_EMPTY(bsfi->buffer_pkt)) - return AVERROR(EAGAIN); - - av_packet_move_ref(pkt, bsfi->buffer_pkt); - - return 0; -} - -typedef struct BSFListContext { - const AVClass *class; - - AVBSFContext **bsfs; - int nb_bsfs; - - unsigned idx; // index of currently processed BSF - - char * item_name; -} BSFListContext; - - -static int bsf_list_init(AVBSFContext *bsf) -{ - BSFListContext *lst = bsf->priv_data; - int ret, i; - const AVCodecParameters *cod_par = bsf->par_in; - AVRational tb = bsf->time_base_in; - - for (i = 0; i < lst->nb_bsfs; ++i) { - ret = avcodec_parameters_copy(lst->bsfs[i]->par_in, cod_par); - if (ret < 0) - goto fail; - - lst->bsfs[i]->time_base_in = tb; - - ret = av_bsf_init(lst->bsfs[i]); - if (ret < 0) - goto fail; - - cod_par = lst->bsfs[i]->par_out; - tb = lst->bsfs[i]->time_base_out; - } - - bsf->time_base_out = tb; - ret = avcodec_parameters_copy(bsf->par_out, cod_par); - -fail: - return ret; -} - -static int bsf_list_filter(AVBSFContext *bsf, AVPacket *out) -{ - BSFListContext *lst = bsf->priv_data; - int ret, eof = 0; - - if (!lst->nb_bsfs) - return ff_bsf_get_packet_ref(bsf, out); - - while (1) { - /* get a packet from the previous filter up the chain */ - if (lst->idx) - ret = av_bsf_receive_packet(lst->bsfs[lst->idx-1], out); - else - ret = ff_bsf_get_packet_ref(bsf, out); - if (ret == AVERROR(EAGAIN)) { - if (!lst->idx) - 
return ret; - lst->idx--; - continue; - } else if (ret == AVERROR_EOF) { - eof = 1; - } else if (ret < 0) - return ret; - - /* send it to the next filter down the chain */ - if (lst->idx < lst->nb_bsfs) { - ret = av_bsf_send_packet(lst->bsfs[lst->idx], eof ? NULL : out); - av_assert1(ret != AVERROR(EAGAIN)); - if (ret < 0) { - av_packet_unref(out); - return ret; - } - lst->idx++; - eof = 0; - } else if (eof) { - return ret; - } else { - return 0; - } - } -} - -static void bsf_list_flush(AVBSFContext *bsf) -{ - BSFListContext *lst = bsf->priv_data; - - for (int i = 0; i < lst->nb_bsfs; i++) - av_bsf_flush(lst->bsfs[i]); - lst->idx = 0; -} - -static void bsf_list_close(AVBSFContext *bsf) -{ - BSFListContext *lst = bsf->priv_data; - int i; - - for (i = 0; i < lst->nb_bsfs; ++i) - av_bsf_free(&lst->bsfs[i]); - av_freep(&lst->bsfs); - av_freep(&lst->item_name); -} - -static const char *bsf_list_item_name(void *ctx) -{ - static const char *null_filter_name = "null"; - AVBSFContext *bsf_ctx = ctx; - BSFListContext *lst = bsf_ctx->priv_data; - - if (!lst->nb_bsfs) - return null_filter_name; - - if (!lst->item_name) { - int i; - AVBPrint bp; - av_bprint_init(&bp, 16, 128); - - av_bprintf(&bp, "bsf_list("); - for (i = 0; i < lst->nb_bsfs; i++) - av_bprintf(&bp, i ? ",%s" : "%s", lst->bsfs[i]->filter->name); - av_bprintf(&bp, ")"); - - av_bprint_finalize(&bp, &lst->item_name); - } - - return lst->item_name; -} - -static const AVClass bsf_list_class = { - .class_name = "bsf_list", - .item_name = bsf_list_item_name, - .version = LIBAVUTIL_VERSION_INT, -}; - -static const FFBitStreamFilter list_bsf = { - .p.name = "bsf_list", - .p.priv_class = &bsf_list_class, - .priv_data_size = sizeof(BSFListContext), - .init = bsf_list_init, - .filter = bsf_list_filter, - .flush = bsf_list_flush, - .close = bsf_list_close, -}; - -struct AVBSFList { - AVBSFContext **bsfs; - int nb_bsfs; -}; - -AVBSFList *av_bsf_list_alloc(void) -{ - return av_mallocz(sizeof(AVBSFList)); -} - -void av_bsf_list_free(AVBSFList **lst) -{ - int i; - - if (!*lst) - return; - - for (i = 0; i < (*lst)->nb_bsfs; ++i) - av_bsf_free(&(*lst)->bsfs[i]); - av_free((*lst)->bsfs); - av_freep(lst); -} - -int av_bsf_list_append(AVBSFList *lst, AVBSFContext *bsf) -{ - return av_dynarray_add_nofree(&lst->bsfs, &lst->nb_bsfs, bsf); -} - -static int bsf_list_append_internal(AVBSFList *lst, const char *bsf_name, const char *options, AVDictionary ** options_dict) -{ - int ret; - const AVBitStreamFilter *filter; - AVBSFContext *bsf; - - filter = av_bsf_get_by_name(bsf_name); - if (!filter) - return AVERROR_BSF_NOT_FOUND; - - ret = av_bsf_alloc(filter, &bsf); - if (ret < 0) - return ret; - - if (options && filter->priv_class) { - const AVOption *opt = av_opt_next(bsf->priv_data, NULL); - const char * shorthand[2] = {NULL}; - - if (opt) - shorthand[0] = opt->name; - - ret = av_opt_set_from_string(bsf->priv_data, options, shorthand, "=", ":"); - if (ret < 0) - goto end; - } - - if (options_dict) { - ret = av_opt_set_dict2(bsf, options_dict, AV_OPT_SEARCH_CHILDREN); - if (ret < 0) - goto end; - } - - ret = av_bsf_list_append(lst, bsf); - -end: - if (ret < 0) - av_bsf_free(&bsf); - - return ret; -} - -int av_bsf_list_append2(AVBSFList *lst, const char *bsf_name, AVDictionary ** options) -{ - return bsf_list_append_internal(lst, bsf_name, NULL, options); -} - -int av_bsf_list_finalize(AVBSFList **lst, AVBSFContext **bsf) -{ - int ret = 0; - BSFListContext *ctx; - - if ((*lst)->nb_bsfs == 1) { - *bsf = (*lst)->bsfs[0]; - av_freep(&(*lst)->bsfs); - (*lst)->nb_bsfs = 
0; - goto end; - } - - ret = av_bsf_alloc(&list_bsf.p, bsf); - if (ret < 0) - return ret; - - ctx = (*bsf)->priv_data; - - ctx->bsfs = (*lst)->bsfs; - ctx->nb_bsfs = (*lst)->nb_bsfs; - -end: - av_freep(lst); - return ret; -} - -static int bsf_parse_single(char *str, AVBSFList *bsf_lst) -{ - char *bsf_name, *bsf_options_str; - - bsf_name = av_strtok(str, "=", &bsf_options_str); - if (!bsf_name) - return AVERROR(EINVAL); - - return bsf_list_append_internal(bsf_lst, bsf_name, bsf_options_str, NULL); -} - -int av_bsf_list_parse_str(const char *str, AVBSFContext **bsf_lst) -{ - AVBSFList *lst; - int ret; - - if (!str) - return av_bsf_get_null_filter(bsf_lst); - - lst = av_bsf_list_alloc(); - if (!lst) - return AVERROR(ENOMEM); - - do { - char *bsf_str = av_get_token(&str, ","); - ret = bsf_parse_single(bsf_str, lst); - av_free(bsf_str); - if (ret < 0) - goto end; - } while (*str && *++str); - - ret = av_bsf_list_finalize(&lst, bsf_lst); -end: - if (ret < 0) - av_bsf_list_free(&lst); - return ret; -} - -int av_bsf_get_null_filter(AVBSFContext **bsf) -{ -#if CONFIG_NULL_BSF - extern const FFBitStreamFilter ff_null_bsf; - return av_bsf_alloc(&ff_null_bsf.p, bsf); -#else - return av_bsf_alloc(&list_bsf.p, bsf); -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/imc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/imc.c deleted file mode 100644 index 174332de4da869126f2d0446defb337b59f7c9e1..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/imc.c +++ /dev/null @@ -1,1058 +0,0 @@ -/* - * IMC compatible decoder - * Copyright (c) 2002-2004 Maxim Poliakovski - * Copyright (c) 2006 Benjamin Larsson - * Copyright (c) 2006 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * IMC - Intel Music Coder - * A mdct based codec using a 256 points large transform - * divided into 32 bands with some mix of scale factors. - * Only mono is supported. 
- */ - -#include "config_components.h" - -#include -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/ffmath.h" -#include "libavutil/float_dsp.h" -#include "libavutil/internal.h" -#include "libavutil/mem_internal.h" -#include "libavutil/thread.h" -#include "libavutil/tx.h" - -#include "avcodec.h" -#include "bswapdsp.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" -#include "sinewin.h" - -#include "imcdata.h" - -#define IMC_BLOCK_SIZE 64 -#define IMC_FRAME_ID 0x21 -#define BANDS 32 -#define COEFFS 256 - -typedef struct IMCChannel { - float old_floor[BANDS]; - float flcoeffs1[BANDS]; - float flcoeffs2[BANDS]; - float flcoeffs3[BANDS]; - float flcoeffs4[BANDS]; - float flcoeffs5[BANDS]; - float flcoeffs6[BANDS]; - DECLARE_ALIGNED(32, float, CWdecoded)[COEFFS]; - - int bandWidthT[BANDS]; ///< codewords per band - int bitsBandT[BANDS]; ///< how many bits per codeword in band - int CWlengthT[COEFFS]; ///< how many bits in each codeword - int levlCoeffBuf[BANDS]; - int bandFlagsBuf[BANDS]; ///< flags for each band - int sumLenArr[BANDS]; ///< bits for all coeffs in band - int skipFlagRaw[BANDS]; ///< skip flags are stored in raw form or not - int skipFlagBits[BANDS]; ///< bits used to code skip flags - int skipFlagCount[BANDS]; ///< skipped coefficients per band - int skipFlags[COEFFS]; ///< skip coefficient decoding or not - int codewords[COEFFS]; ///< raw codewords read from bitstream - - int decoder_reset; - DECLARE_ALIGNED(32, float, prev_win)[128]; -} IMCChannel; - -typedef struct IMCContext { - IMCChannel chctx[2]; - - /** MDCT tables */ - DECLARE_ALIGNED(32, float, mdct_sine_window)[COEFFS]; - - float sqrt_tab[30]; - GetBitContext gb; - - AVFloatDSPContext *fdsp; - BswapDSPContext bdsp; - AVTXContext *mdct; - av_tx_fn mdct_fn; - float *out_samples; - DECLARE_ALIGNED(32, float, temp)[256]; - - int coef0_pos; - - int8_t cyclTab[32], cyclTab2[32]; - float weights1[31], weights2[31]; - - AVCodecContext *avctx; -} IMCContext; - -static VLC huffman_vlc[4][4]; - -#define IMC_VLC_BITS 9 -#define VLC_TABLES_SIZE 9512 - -static VLCElem vlc_tables[VLC_TABLES_SIZE]; - -static inline double freq2bark(double freq) -{ - return 3.5 * atan((freq / 7500.0) * (freq / 7500.0)) + 13.0 * atan(freq * 0.00076); -} - -static av_cold void iac_generate_tabs(IMCContext *q, int sampling_rate) -{ - double freqmin[32], freqmid[32], freqmax[32]; - double scale = sampling_rate / (256.0 * 2.0 * 2.0); - double nyquist_freq = sampling_rate * 0.5; - double freq, bark, prev_bark = 0, tf, tb; - int i, j; - - for (i = 0; i < 32; i++) { - freq = (band_tab[i] + band_tab[i + 1] - 1) * scale; - bark = freq2bark(freq); - - if (i > 0) { - tb = bark - prev_bark; - q->weights1[i - 1] = ff_exp10(-1.0 * tb); - q->weights2[i - 1] = ff_exp10(-2.7 * tb); - } - prev_bark = bark; - - freqmid[i] = freq; - - tf = freq; - while (tf < nyquist_freq) { - tf += 0.5; - tb = freq2bark(tf); - if (tb > bark + 0.5) - break; - } - freqmax[i] = tf; - - tf = freq; - while (tf > 0.0) { - tf -= 0.5; - tb = freq2bark(tf); - if (tb <= bark - 0.5) - break; - } - freqmin[i] = tf; - } - - for (i = 0; i < 32; i++) { - freq = freqmax[i]; - for (j = 31; j > 0 && freq <= freqmid[j]; j--); - q->cyclTab[i] = j + 1; - - freq = freqmin[i]; - for (j = 0; j < 32 && freq >= freqmid[j]; j++); - q->cyclTab2[i] = j - 1; - } -} - -static av_cold void imc_init_static(void) -{ - /* initialize the VLC tables */ - for (int i = 0, offset = 0; i < 4 ; i++) { - for (int j = 0; j < 4; j++) { - huffman_vlc[i][j].table = &vlc_tables[offset]; 
- huffman_vlc[i][j].table_allocated = VLC_TABLES_SIZE - offset; - ff_init_vlc_from_lengths(&huffman_vlc[i][j], IMC_VLC_BITS, imc_huffman_sizes[i], - imc_huffman_lens[i][j], 1, - imc_huffman_syms[i][j], 1, 1, - 0, INIT_VLC_STATIC_OVERLONG, NULL); - offset += huffman_vlc[i][j].table_size; - } - } -} - -static av_cold int imc_decode_init(AVCodecContext *avctx) -{ - int i, j, ret; - IMCContext *q = avctx->priv_data; - static AVOnce init_static_once = AV_ONCE_INIT; - float scale = 1.0f / (16384); - - if (avctx->codec_id == AV_CODEC_ID_IAC && avctx->sample_rate > 96000) { - av_log(avctx, AV_LOG_ERROR, - "Strange sample rate of %i, file likely corrupt or " - "needing a new table derivation method.\n", - avctx->sample_rate); - return AVERROR_PATCHWELCOME; - } - - if (avctx->codec_id == AV_CODEC_ID_IMC) { - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - } - - if (avctx->ch_layout.nb_channels > 2) { - avpriv_request_sample(avctx, "Number of channels > 2"); - return AVERROR_PATCHWELCOME; - } - - for (j = 0; j < avctx->ch_layout.nb_channels; j++) { - q->chctx[j].decoder_reset = 1; - - for (i = 0; i < BANDS; i++) - q->chctx[j].old_floor[i] = 1.0; - } - - /* Build mdct window, a simple sine window normalized with sqrt(2) */ - ff_sine_window_init(q->mdct_sine_window, COEFFS); - for (i = 0; i < COEFFS; i++) - q->mdct_sine_window[i] *= sqrt(2.0); - - /* Generate a square root table */ - for (i = 0; i < 30; i++) - q->sqrt_tab[i] = sqrt(i); - - if (avctx->codec_id == AV_CODEC_ID_IAC) { - iac_generate_tabs(q, avctx->sample_rate); - } else { - memcpy(q->cyclTab, cyclTab, sizeof(cyclTab)); - memcpy(q->cyclTab2, cyclTab2, sizeof(cyclTab2)); - memcpy(q->weights1, imc_weights1, sizeof(imc_weights1)); - memcpy(q->weights2, imc_weights2, sizeof(imc_weights2)); - } - - q->fdsp = avpriv_float_dsp_alloc(avctx->flags & AV_CODEC_FLAG_BITEXACT); - if (!q->fdsp) - return AVERROR(ENOMEM); - - ret = av_tx_init(&q->mdct, &q->mdct_fn, AV_TX_FLOAT_MDCT, 1, COEFFS, &scale, 0); - if (ret < 0) - return ret; - - ff_bswapdsp_init(&q->bdsp); - - avctx->sample_fmt = AV_SAMPLE_FMT_FLTP; - - ff_thread_once(&init_static_once, imc_init_static); - - return 0; -} - -static void imc_calculate_coeffs(IMCContext *q, float *flcoeffs1, - float *flcoeffs2, int *bandWidthT, - float *flcoeffs3, float *flcoeffs5) -{ - float workT1[BANDS]; - float workT2[BANDS]; - float workT3[BANDS]; - float snr_limit = 1.e-30; - float accum = 0.0; - int i, cnt2; - - for (i = 0; i < BANDS; i++) { - flcoeffs5[i] = workT2[i] = 0.0; - if (bandWidthT[i]) { - workT1[i] = flcoeffs1[i] * flcoeffs1[i]; - flcoeffs3[i] = 2.0 * flcoeffs2[i]; - } else { - workT1[i] = 0.0; - flcoeffs3[i] = -30000.0; - } - workT3[i] = bandWidthT[i] * workT1[i] * 0.01; - if (workT3[i] <= snr_limit) - workT3[i] = 0.0; - } - - for (i = 0; i < BANDS; i++) { - for (cnt2 = i; cnt2 < q->cyclTab[i]; cnt2++) - flcoeffs5[cnt2] = flcoeffs5[cnt2] + workT3[i]; - workT2[cnt2 - 1] = workT2[cnt2 - 1] + workT3[i]; - } - - for (i = 1; i < BANDS; i++) { - accum = (workT2[i - 1] + accum) * q->weights1[i - 1]; - flcoeffs5[i] += accum; - } - - for (i = 0; i < BANDS; i++) - workT2[i] = 0.0; - - for (i = 0; i < BANDS; i++) { - for (cnt2 = i - 1; cnt2 > q->cyclTab2[i]; cnt2--) - flcoeffs5[cnt2] += workT3[i]; - workT2[cnt2+1] += workT3[i]; - } - - accum = 0.0; - - for (i = BANDS-2; i >= 0; i--) { - accum = (workT2[i+1] + accum) * q->weights2[i]; - flcoeffs5[i] += accum; - // there is missing code here, but it seems to never be triggered - } -} - - -static void 
imc_read_level_coeffs(IMCContext *q, int stream_format_code, - int *levlCoeffs) -{ - int i; - VLC *hufftab[4]; - int start = 0; - const uint8_t *cb_sel; - int s; - - s = stream_format_code >> 1; - hufftab[0] = &huffman_vlc[s][0]; - hufftab[1] = &huffman_vlc[s][1]; - hufftab[2] = &huffman_vlc[s][2]; - hufftab[3] = &huffman_vlc[s][3]; - cb_sel = imc_cb_select[s]; - - if (stream_format_code & 4) - start = 1; - if (start) - levlCoeffs[0] = get_bits(&q->gb, 7); - for (i = start; i < BANDS; i++) { - levlCoeffs[i] = get_vlc2(&q->gb, hufftab[cb_sel[i]]->table, - IMC_VLC_BITS, 2); - if (levlCoeffs[i] == 17) - levlCoeffs[i] += get_bits(&q->gb, 4); - } -} - -static void imc_read_level_coeffs_raw(IMCContext *q, int stream_format_code, - int *levlCoeffs) -{ - int i; - - q->coef0_pos = get_bits(&q->gb, 5); - levlCoeffs[0] = get_bits(&q->gb, 7); - for (i = 1; i < BANDS; i++) - levlCoeffs[i] = get_bits(&q->gb, 4); -} - -static void imc_decode_level_coefficients(IMCContext *q, int *levlCoeffBuf, - float *flcoeffs1, float *flcoeffs2) -{ - int i, level; - float tmp, tmp2; - // maybe some frequency division thingy - - flcoeffs1[0] = 20000.0 / exp2 (levlCoeffBuf[0] * 0.18945); // 0.18945 = log2(10) * 0.05703125 - flcoeffs2[0] = log2f(flcoeffs1[0]); - tmp = flcoeffs1[0]; - tmp2 = flcoeffs2[0]; - - for (i = 1; i < BANDS; i++) { - level = levlCoeffBuf[i]; - if (level == 16) { - flcoeffs1[i] = 1.0; - flcoeffs2[i] = 0.0; - } else { - if (level < 17) - level -= 7; - else if (level <= 24) - level -= 32; - else - level -= 16; - - tmp *= imc_exp_tab[15 + level]; - tmp2 += 0.83048 * level; // 0.83048 = log2(10) * 0.25 - flcoeffs1[i] = tmp; - flcoeffs2[i] = tmp2; - } - } -} - - -static void imc_decode_level_coefficients2(IMCContext *q, int *levlCoeffBuf, - float *old_floor, float *flcoeffs1, - float *flcoeffs2) -{ - int i; - /* FIXME maybe flag_buf = noise coding and flcoeffs1 = new scale factors - * and flcoeffs2 old scale factors - * might be incomplete due to a missing table that is in the binary code - */ - for (i = 0; i < BANDS; i++) { - flcoeffs1[i] = 0; - if (levlCoeffBuf[i] < 16) { - flcoeffs1[i] = imc_exp_tab2[levlCoeffBuf[i]] * old_floor[i]; - flcoeffs2[i] = (levlCoeffBuf[i] - 7) * 0.83048 + flcoeffs2[i]; // 0.83048 = log2(10) * 0.25 - } else { - flcoeffs1[i] = old_floor[i]; - } - } -} - -static void imc_decode_level_coefficients_raw(IMCContext *q, int *levlCoeffBuf, - float *flcoeffs1, float *flcoeffs2) -{ - int i, level, pos; - float tmp, tmp2; - - pos = q->coef0_pos; - flcoeffs1[pos] = 20000.0 / pow (2, levlCoeffBuf[0] * 0.18945); // 0.18945 = log2(10) * 0.05703125 - flcoeffs2[pos] = log2f(flcoeffs1[pos]); - tmp = flcoeffs1[pos]; - tmp2 = flcoeffs2[pos]; - - levlCoeffBuf++; - for (i = 0; i < BANDS; i++) { - if (i == pos) - continue; - level = *levlCoeffBuf++; - flcoeffs1[i] = tmp * powf(10.0, -level * 0.4375); //todo tab - flcoeffs2[i] = tmp2 - 1.4533435415 * level; // 1.4533435415 = log2(10) * 0.4375 - } -} - -/** - * Perform bit allocation depending on bits available - */ -static int bit_allocation(IMCContext *q, IMCChannel *chctx, - int stream_format_code, int freebits, int flag) -{ - int i, j; - const float limit = -1.e20; - float highest = 0.0; - int indx; - int t1 = 0; - int t2 = 1; - float summa = 0.0; - int iacc = 0; - int summer = 0; - int rres, cwlen; - float lowest = 1.e10; - int low_indx = 0; - float workT[32]; - int flg; - int found_indx = 0; - - for (i = 0; i < BANDS; i++) - highest = FFMAX(highest, chctx->flcoeffs1[i]); - - for (i = 0; i < BANDS - 1; i++) { - if (chctx->flcoeffs5[i] <= 0) { - 
av_log(q->avctx, AV_LOG_ERROR, "flcoeffs5 %f invalid\n", chctx->flcoeffs5[i]); - return AVERROR_INVALIDDATA; - } - chctx->flcoeffs4[i] = chctx->flcoeffs3[i] - log2f(chctx->flcoeffs5[i]); - } - chctx->flcoeffs4[BANDS - 1] = limit; - - highest = highest * 0.25; - - for (i = 0; i < BANDS; i++) { - indx = -1; - if ((band_tab[i + 1] - band_tab[i]) == chctx->bandWidthT[i]) - indx = 0; - - if ((band_tab[i + 1] - band_tab[i]) > chctx->bandWidthT[i]) - indx = 1; - - if (((band_tab[i + 1] - band_tab[i]) / 2) >= chctx->bandWidthT[i]) - indx = 2; - - if (indx == -1) - return AVERROR_INVALIDDATA; - - chctx->flcoeffs4[i] += xTab[(indx * 2 + (chctx->flcoeffs1[i] < highest)) * 2 + flag]; - } - - if (stream_format_code & 0x2) { - chctx->flcoeffs4[0] = limit; - chctx->flcoeffs4[1] = limit; - chctx->flcoeffs4[2] = limit; - chctx->flcoeffs4[3] = limit; - } - - for (i = (stream_format_code & 0x2) ? 4 : 0; i < BANDS - 1; i++) { - iacc += chctx->bandWidthT[i]; - summa += chctx->bandWidthT[i] * chctx->flcoeffs4[i]; - } - - if (!iacc) - return AVERROR_INVALIDDATA; - - chctx->bandWidthT[BANDS - 1] = 0; - summa = (summa * 0.5 - freebits) / iacc; - - - for (i = 0; i < BANDS / 2; i++) { - rres = summer - freebits; - if ((rres >= -8) && (rres <= 8)) - break; - - summer = 0; - iacc = 0; - - for (j = (stream_format_code & 0x2) ? 4 : 0; j < BANDS; j++) { - cwlen = av_clipf(((chctx->flcoeffs4[j] * 0.5) - summa + 0.5), 0, 6); - - chctx->bitsBandT[j] = cwlen; - summer += chctx->bandWidthT[j] * cwlen; - - if (cwlen > 0) - iacc += chctx->bandWidthT[j]; - } - - flg = t2; - t2 = 1; - if (freebits < summer) - t2 = -1; - if (i == 0) - flg = t2; - if (flg != t2) - t1++; - - summa = (float)(summer - freebits) / ((t1 + 1) * iacc) + summa; - } - - for (i = (stream_format_code & 0x2) ? 4 : 0; i < BANDS; i++) { - for (j = band_tab[i]; j < band_tab[i + 1]; j++) - chctx->CWlengthT[j] = chctx->bitsBandT[i]; - } - - if (freebits > summer) { - for (i = 0; i < BANDS; i++) { - workT[i] = (chctx->bitsBandT[i] == 6) ? -1.e20 - : (chctx->bitsBandT[i] * -2 + chctx->flcoeffs4[i] - 0.415); - } - - highest = 0.0; - - do { - if (highest <= -1.e20) - break; - - found_indx = 0; - highest = -1.e20; - - for (i = 0; i < BANDS; i++) { - if (workT[i] > highest) { - highest = workT[i]; - found_indx = i; - } - } - - if (highest > -1.e20) { - workT[found_indx] -= 2.0; - if (++chctx->bitsBandT[found_indx] == 6) - workT[found_indx] = -1.e20; - - for (j = band_tab[found_indx]; j < band_tab[found_indx + 1] && (freebits > summer); j++) { - chctx->CWlengthT[j]++; - summer++; - } - } - } while (freebits > summer); - } - if (freebits < summer) { - for (i = 0; i < BANDS; i++) { - workT[i] = chctx->bitsBandT[i] ? 
(chctx->bitsBandT[i] * -2 + chctx->flcoeffs4[i] + 1.585) - : 1.e20; - } - if (stream_format_code & 0x2) { - workT[0] = 1.e20; - workT[1] = 1.e20; - workT[2] = 1.e20; - workT[3] = 1.e20; - } - while (freebits < summer) { - lowest = 1.e10; - low_indx = 0; - for (i = 0; i < BANDS; i++) { - if (workT[i] < lowest) { - lowest = workT[i]; - low_indx = i; - } - } - // if (lowest >= 1.e10) - // break; - workT[low_indx] = lowest + 2.0; - - if (!--chctx->bitsBandT[low_indx]) - workT[low_indx] = 1.e20; - - for (j = band_tab[low_indx]; j < band_tab[low_indx+1] && (freebits < summer); j++) { - if (chctx->CWlengthT[j] > 0) { - chctx->CWlengthT[j]--; - summer--; - } - } - } - } - return 0; -} - -static void imc_get_skip_coeff(IMCContext *q, IMCChannel *chctx) -{ - int i, j; - - memset(chctx->skipFlagBits, 0, sizeof(chctx->skipFlagBits)); - memset(chctx->skipFlagCount, 0, sizeof(chctx->skipFlagCount)); - for (i = 0; i < BANDS; i++) { - if (!chctx->bandFlagsBuf[i] || !chctx->bandWidthT[i]) - continue; - - if (!chctx->skipFlagRaw[i]) { - chctx->skipFlagBits[i] = band_tab[i + 1] - band_tab[i]; - - for (j = band_tab[i]; j < band_tab[i + 1]; j++) { - chctx->skipFlags[j] = get_bits1(&q->gb); - if (chctx->skipFlags[j]) - chctx->skipFlagCount[i]++; - } - } else { - for (j = band_tab[i]; j < band_tab[i + 1] - 1; j += 2) { - if (!get_bits1(&q->gb)) { // 0 - chctx->skipFlagBits[i]++; - chctx->skipFlags[j] = 1; - chctx->skipFlags[j + 1] = 1; - chctx->skipFlagCount[i] += 2; - } else { - if (get_bits1(&q->gb)) { // 11 - chctx->skipFlagBits[i] += 2; - chctx->skipFlags[j] = 0; - chctx->skipFlags[j + 1] = 1; - chctx->skipFlagCount[i]++; - } else { - chctx->skipFlagBits[i] += 3; - chctx->skipFlags[j + 1] = 0; - if (!get_bits1(&q->gb)) { // 100 - chctx->skipFlags[j] = 1; - chctx->skipFlagCount[i]++; - } else { // 101 - chctx->skipFlags[j] = 0; - } - } - } - } - - if (j < band_tab[i + 1]) { - chctx->skipFlagBits[i]++; - if ((chctx->skipFlags[j] = get_bits1(&q->gb))) - chctx->skipFlagCount[i]++; - } - } - } -} - -/** - * Increase highest' band coefficient sizes as some bits won't be used - */ -static void imc_adjust_bit_allocation(IMCContext *q, IMCChannel *chctx, - int summer) -{ - float workT[32]; - int corrected = 0; - int i, j; - float highest = 0; - int found_indx = 0; - - for (i = 0; i < BANDS; i++) { - workT[i] = (chctx->bitsBandT[i] == 6) ? 
-1.e20 - : (chctx->bitsBandT[i] * -2 + chctx->flcoeffs4[i] - 0.415); - } - - while (corrected < summer) { - if (highest <= -1.e20) - break; - - highest = -1.e20; - - for (i = 0; i < BANDS; i++) { - if (workT[i] > highest) { - highest = workT[i]; - found_indx = i; - } - } - - if (highest > -1.e20) { - workT[found_indx] -= 2.0; - if (++(chctx->bitsBandT[found_indx]) == 6) - workT[found_indx] = -1.e20; - - for (j = band_tab[found_indx]; j < band_tab[found_indx+1] && (corrected < summer); j++) { - if (!chctx->skipFlags[j] && (chctx->CWlengthT[j] < 6)) { - chctx->CWlengthT[j]++; - corrected++; - } - } - } - } -} - -static int inverse_quant_coeff(IMCContext *q, IMCChannel *chctx, - int stream_format_code) -{ - int i, j; - int middle_value, cw_len, max_size; - const float *quantizer; - - for (i = 0; i < BANDS; i++) { - for (j = band_tab[i]; j < band_tab[i + 1]; j++) { - chctx->CWdecoded[j] = 0; - cw_len = chctx->CWlengthT[j]; - - if (cw_len <= 0 || chctx->skipFlags[j]) - continue; - - max_size = 1 << cw_len; - middle_value = max_size >> 1; - - if (chctx->codewords[j] >= max_size || chctx->codewords[j] < 0) - return AVERROR_INVALIDDATA; - - if (cw_len >= 4) { - quantizer = imc_quantizer2[(stream_format_code & 2) >> 1]; - if (chctx->codewords[j] >= middle_value) - chctx->CWdecoded[j] = quantizer[chctx->codewords[j] - 8] * chctx->flcoeffs6[i]; - else - chctx->CWdecoded[j] = -quantizer[max_size - chctx->codewords[j] - 8 - 1] * chctx->flcoeffs6[i]; - }else{ - quantizer = imc_quantizer1[((stream_format_code & 2) >> 1) | (chctx->bandFlagsBuf[i] << 1)]; - if (chctx->codewords[j] >= middle_value) - chctx->CWdecoded[j] = quantizer[chctx->codewords[j] - 1] * chctx->flcoeffs6[i]; - else - chctx->CWdecoded[j] = -quantizer[max_size - 2 - chctx->codewords[j]] * chctx->flcoeffs6[i]; - } - } - } - return 0; -} - - -static void imc_get_coeffs(AVCodecContext *avctx, - IMCContext *q, IMCChannel *chctx) -{ - int i, j, cw_len, cw; - - for (i = 0; i < BANDS; i++) { - if (!chctx->sumLenArr[i]) - continue; - if (chctx->bandFlagsBuf[i] || chctx->bandWidthT[i]) { - for (j = band_tab[i]; j < band_tab[i + 1]; j++) { - cw_len = chctx->CWlengthT[j]; - cw = 0; - - if (cw_len && (!chctx->bandFlagsBuf[i] || !chctx->skipFlags[j])) { - if (get_bits_count(&q->gb) + cw_len > 512) { - av_log(avctx, AV_LOG_WARNING, - "Potential problem on band %i, coefficient %i" - ": cw_len=%i\n", i, j, cw_len); - } else - cw = get_bits(&q->gb, cw_len); - } - - chctx->codewords[j] = cw; - } - } - } -} - -static void imc_refine_bit_allocation(IMCContext *q, IMCChannel *chctx) -{ - int i, j; - int summer; - - for (i = 0; i < BANDS; i++) { - chctx->sumLenArr[i] = 0; - chctx->skipFlagRaw[i] = 0; - for (j = band_tab[i]; j < band_tab[i + 1]; j++) - chctx->sumLenArr[i] += chctx->CWlengthT[j]; - if (chctx->bandFlagsBuf[i]) - if (((int)((band_tab[i + 1] - band_tab[i]) * 1.5) > chctx->sumLenArr[i]) && (chctx->sumLenArr[i] > 0)) - chctx->skipFlagRaw[i] = 1; - } - - imc_get_skip_coeff(q, chctx); - - for (i = 0; i < BANDS; i++) { - chctx->flcoeffs6[i] = chctx->flcoeffs1[i]; - /* band has flag set and at least one coded coefficient */ - if (chctx->bandFlagsBuf[i] && (band_tab[i + 1] - band_tab[i]) != chctx->skipFlagCount[i]) { - chctx->flcoeffs6[i] *= q->sqrt_tab[ band_tab[i + 1] - band_tab[i]] / - q->sqrt_tab[(band_tab[i + 1] - band_tab[i] - chctx->skipFlagCount[i])]; - } - } - - /* calculate bits left, bits needed and adjust bit allocation */ - summer = 0; - - for (i = 0; i < BANDS; i++) { - if (chctx->bandFlagsBuf[i]) { - for (j = band_tab[i]; j < band_tab[i + 
1]; j++) { - if (chctx->skipFlags[j]) { - summer += chctx->CWlengthT[j]; - chctx->CWlengthT[j] = 0; - } - } - summer -= chctx->skipFlagBits[i]; - } - } - imc_adjust_bit_allocation(q, chctx, summer); -} - -static int imc_decode_block(AVCodecContext *avctx, IMCContext *q, int ch) -{ - int stream_format_code; - int imc_hdr, i, j, ret; - int flag; - int bits; - int bitscount; - IMCChannel *chctx = q->chctx + ch; - - - /* Check the frame header */ - imc_hdr = get_bits(&q->gb, 9); - if (imc_hdr & 0x18) { - av_log(avctx, AV_LOG_ERROR, "frame header check failed!\n"); - av_log(avctx, AV_LOG_ERROR, "got %X.\n", imc_hdr); - return AVERROR_INVALIDDATA; - } - stream_format_code = get_bits(&q->gb, 3); - - if (stream_format_code & 0x04) - chctx->decoder_reset = 1; - - if (chctx->decoder_reset) { - for (i = 0; i < BANDS; i++) - chctx->old_floor[i] = 1.0; - for (i = 0; i < COEFFS; i++) - chctx->CWdecoded[i] = 0; - chctx->decoder_reset = 0; - } - - flag = get_bits1(&q->gb); - if (stream_format_code & 0x1) - imc_read_level_coeffs_raw(q, stream_format_code, chctx->levlCoeffBuf); - else - imc_read_level_coeffs(q, stream_format_code, chctx->levlCoeffBuf); - - if (stream_format_code & 0x1) - imc_decode_level_coefficients_raw(q, chctx->levlCoeffBuf, - chctx->flcoeffs1, chctx->flcoeffs2); - else if (stream_format_code & 0x4) - imc_decode_level_coefficients(q, chctx->levlCoeffBuf, - chctx->flcoeffs1, chctx->flcoeffs2); - else - imc_decode_level_coefficients2(q, chctx->levlCoeffBuf, chctx->old_floor, - chctx->flcoeffs1, chctx->flcoeffs2); - - for(i=0; iflcoeffs1[i] > INT_MAX) { - av_log(avctx, AV_LOG_ERROR, "scalefactor out of range\n"); - return AVERROR_INVALIDDATA; - } - } - - memcpy(chctx->old_floor, chctx->flcoeffs1, 32 * sizeof(float)); - - if (stream_format_code & 0x1) { - for (i = 0; i < BANDS; i++) { - chctx->bandWidthT[i] = band_tab[i + 1] - band_tab[i]; - chctx->bandFlagsBuf[i] = 0; - chctx->flcoeffs3[i] = chctx->flcoeffs2[i] * 2; - chctx->flcoeffs5[i] = 1.0; - } - } else { - for (i = 0; i < BANDS; i++) { - if (chctx->levlCoeffBuf[i] == 16) { - chctx->bandWidthT[i] = 0; - } else - chctx->bandWidthT[i] = band_tab[i + 1] - band_tab[i]; - } - - memset(chctx->bandFlagsBuf, 0, BANDS * sizeof(int)); - for (i = 0; i < BANDS - 1; i++) - if (chctx->bandWidthT[i]) - chctx->bandFlagsBuf[i] = get_bits1(&q->gb); - - imc_calculate_coeffs(q, chctx->flcoeffs1, chctx->flcoeffs2, - chctx->bandWidthT, chctx->flcoeffs3, - chctx->flcoeffs5); - } - - bitscount = 0; - /* first 4 bands will be assigned 5 bits per coefficient */ - if (stream_format_code & 0x2) { - bitscount += 15; - - chctx->bitsBandT[0] = 5; - chctx->CWlengthT[0] = 5; - chctx->CWlengthT[1] = 5; - chctx->CWlengthT[2] = 5; - for (i = 1; i < 4; i++) { - if (stream_format_code & 0x1) - bits = 5; - else - bits = (chctx->levlCoeffBuf[i] == 16) ? 
0 : 5; - chctx->bitsBandT[i] = bits; - for (j = band_tab[i]; j < band_tab[i + 1]; j++) { - chctx->CWlengthT[j] = bits; - bitscount += bits; - } - } - } - if (avctx->codec_id == AV_CODEC_ID_IAC) { - bitscount += !!chctx->bandWidthT[BANDS - 1]; - if (!(stream_format_code & 0x2)) - bitscount += 16; - } - - if ((ret = bit_allocation(q, chctx, stream_format_code, - 512 - bitscount - get_bits_count(&q->gb), - flag)) < 0) { - av_log(avctx, AV_LOG_ERROR, "Bit allocations failed\n"); - chctx->decoder_reset = 1; - return ret; - } - - if (stream_format_code & 0x1) { - for (i = 0; i < BANDS; i++) - chctx->skipFlags[i] = 0; - } else { - imc_refine_bit_allocation(q, chctx); - } - - for (i = 0; i < BANDS; i++) { - chctx->sumLenArr[i] = 0; - - for (j = band_tab[i]; j < band_tab[i + 1]; j++) - if (!chctx->skipFlags[j]) - chctx->sumLenArr[i] += chctx->CWlengthT[j]; - } - - memset(chctx->codewords, 0, sizeof(chctx->codewords)); - - imc_get_coeffs(avctx, q, chctx); - - if (inverse_quant_coeff(q, chctx, stream_format_code) < 0) { - av_log(avctx, AV_LOG_ERROR, "Inverse quantization of coefficients failed\n"); - chctx->decoder_reset = 1; - return AVERROR_INVALIDDATA; - } - - memset(chctx->skipFlags, 0, sizeof(chctx->skipFlags)); - - q->mdct_fn(q->mdct, q->temp, chctx->CWdecoded, sizeof(float)); - q->fdsp->vector_fmul_window(q->out_samples, chctx->prev_win, q->temp, - q->mdct_sine_window, 128); - memcpy(chctx->prev_win, q->temp + 128, sizeof(float)*128); - - return 0; -} - -static int imc_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - int ret, i; - - IMCContext *q = avctx->priv_data; - - LOCAL_ALIGNED_16(uint16_t, buf16, [(IMC_BLOCK_SIZE + AV_INPUT_BUFFER_PADDING_SIZE) / 2]); - - q->avctx = avctx; - - if (buf_size < IMC_BLOCK_SIZE * avctx->ch_layout.nb_channels) { - av_log(avctx, AV_LOG_ERROR, "frame too small!\n"); - return AVERROR_INVALIDDATA; - } - - /* get output buffer */ - frame->nb_samples = COEFFS; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - for (i = 0; i < avctx->ch_layout.nb_channels; i++) { - q->out_samples = (float *)frame->extended_data[i]; - - q->bdsp.bswap16_buf(buf16, (const uint16_t *) buf, IMC_BLOCK_SIZE / 2); - - init_get_bits(&q->gb, (const uint8_t*)buf16, IMC_BLOCK_SIZE * 8); - - buf += IMC_BLOCK_SIZE; - - if ((ret = imc_decode_block(avctx, q, i)) < 0) - return ret; - } - - if (avctx->ch_layout.nb_channels == 2) { - q->fdsp->butterflies_float((float *)frame->extended_data[0], - (float *)frame->extended_data[1], COEFFS); - } - - *got_frame_ptr = 1; - - return IMC_BLOCK_SIZE * avctx->ch_layout.nb_channels; -} - -static av_cold int imc_decode_close(AVCodecContext * avctx) -{ - IMCContext *q = avctx->priv_data; - - av_free(q->fdsp); - av_tx_uninit(&q->mdct); - - return 0; -} - -static av_cold void flush(AVCodecContext *avctx) -{ - IMCContext *q = avctx->priv_data; - - q->chctx[0].decoder_reset = - q->chctx[1].decoder_reset = 1; -} - -#if CONFIG_IMC_DECODER -const FFCodec ff_imc_decoder = { - .p.name = "imc", - CODEC_LONG_NAME("IMC (Intel Music Coder)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_IMC, - .priv_data_size = sizeof(IMCContext), - .init = imc_decode_init, - .close = imc_decode_close, - FF_CODEC_DECODE_CB(imc_decode_frame), - .flush = flush, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_FLTP, - AV_SAMPLE_FMT_NONE }, -}; -#endif -#if CONFIG_IAC_DECODER 
-const FFCodec ff_iac_decoder = { - .p.name = "iac", - CODEC_LONG_NAME("IAC (Indeo Audio Coder)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_IAC, - .priv_data_size = sizeof(IMCContext), - .init = imc_decode_init, - .close = imc_decode_close, - FF_CODEC_DECODE_CB(imc_decode_frame), - .flush = flush, - .p.capabilities = AV_CODEC_CAP_DR1, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_FLTP, - AV_SAMPLE_FMT_NONE }, -}; -#endif diff --git a/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK OBB - The Ultimate Soccer Game for Android 2023.md b/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK OBB - The Ultimate Soccer Game for Android 2023.md deleted file mode 100644 index 780a9a7f00863873bf113b4490237721d304e77c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK OBB - The Ultimate Soccer Game for Android 2023.md +++ /dev/null @@ -1,121 +0,0 @@ -
 

    How to Download and Install FIFA 2018 APK OBB FIFA 18 Android Game

 

 If you are a fan of soccer games, you might have heard of FIFA 2018 APK OBB FIFA 18 Android Game. This is one of the most popular and realistic soccer games for Android devices, with stunning graphics, smooth gameplay, and authentic teams and players. In this article, we will show you how to download and install the game on your device, and cover some of its features, tips, and tricks. 

 

    Introduction

 

 FIFA 2018 APK OBB FIFA 18 Android Game is a modified version of the original FIFA 18 game that was released for consoles and PC. It has been optimized for Android devices, with a reduced file size, improved performance, and an offline mode. You can enjoy playing with your favorite teams and players from around the world in various modes such as Career Mode, Tournament Mode, Online Mode, and more. You can also customize your controls, camera angles, difficulty levels, and other settings according to your preferences. 

 

    fifa 2018 apk obb fifa 18 android game download


    DOWNLOAD https://urlca.com/2uOe6c



 

    If you love soccer games, you should definitely download and install FIFA 2018 APK OBB FIFA 18 Android Game on your device. It will give you hours of fun and excitement, as well as challenge your skills and strategies. You can also compete with other players online, or play with your friends using Bluetooth or Wi-Fi.

 

    Requirements

 

    Before you download and install FIFA 2018 APK OBB FIFA 18 Android Game on your device, you need to make sure that your device meets the minimum and recommended specifications. Here are the requirements for this game:

 | Minimum Specifications | Recommended Specifications | | --- | --- | | Android version: 4.4 KitKat or higher | Android version: 6.0 Marshmallow or higher | | RAM: 1 GB or more | RAM: 2 GB or more | | Processor: Quad-core or higher | Processor: Octa-core or higher | | Graphics: Adreno or Mali GPU | Graphics: Adreno or Mali GPU | | Storage space: At least 3 GB free | Storage space: At least 5 GB free | 
 

 You also need a stable internet connection for downloading the files, verifying them, and playing the online mode. You can use Wi-Fi or mobile data, but make sure that you have enough data allowance or an unlimited plan. 

 

    Download Links

 

    Now that you know the requirements for FIFA 2018 APK OBB FIFA 18 Android Game, you can proceed to download the files for this game. You can find them on various websites online, but we recommend that you use this link for downloading the APK and OBB files. This link is from a trusted source that has verified the authenticity and safety of the files.

    After you download the files, you need to verify them before installing them on your device. To do this, you need to use a file manager app that can extract zip files, such as ZArchiver. You can download this app from the Google Play Store or from this link. Here are the steps to verify the files:

 1. Open ZArchiver and locate the downloaded zip file for FIFA 2018 APK OBB FIFA 18 Android Game. 2. Tap on the zip file and select "View" from the menu. 3. You should see two files inside the zip file: FIFA 2018.apk and FIFA 2018.obb. 4. Tap on FIFA 2018.apk and select "Properties" from the menu. 5. Check the size and MD5 checksum of the file. The size should be 66.9 MB and the MD5 checksum should be 9f2e7d9c5c0f28fbbd9cdea8f27d9141. 6. If the size and MD5 checksum match, tap on the back button and repeat the same steps for FIFA 2018.obb. 7. The size of FIFA 2018.obb should be 2.6 GB and the MD5 checksum should be 269d62f56ed318ec4b1d1b7f3b634a07. 8. If both files are verified, you can proceed to install them on your device. 

    Installation Steps

 

    Now that you have verified the files, you can install FIFA 2018 APK OBB FIFA 18 Android Game on your device. Here are the steps to install the game:

 1. Open ZArchiver and locate the downloaded zip file for FIFA 2018 APK OBB FIFA 18 Android Game. 2. Tap on the zip file and select "Extract" from the menu. 3. Select a folder where you want to extract the files, such as your internal storage or SD card. 4. Wait for the extraction process to finish. You should see two files in the folder: FIFA 2018.apk and FIFA 2018.obb. 5. Tap on FIFA 2018.apk and select "Install" from the menu. 6. Allow installation from unknown sources if prompted by your device settings. 7. Wait for the installation process to finish. You should see a message that says "App installed". 8. Do not open the app yet. Tap on the back button and go back to the folder where you extracted the files. 9. Tap on FIFA 2018.obb and select "Copy" from the menu. 10. Navigate to your internal storage or SD card and find a folder named "Android". 11. Open the folder and find a subfolder named "obb". If you don't see it, create one by tapping on the "+" icon and naming it "obb". 12. Open the obb folder and find a subfolder named "com.ea.game.fifa14_row". If you don't see it, create one by tapping on the "+" icon and naming it "com.ea.game.fifa14_row". 13. Paste FIFA 2018.obb in this folder by tapping on the clipboard icon and selecting "Paste". 14. Wait for the copying process to finish. You should see FIFA 2018.obb in this folder with a size of 2.6 GB. 15. You have successfully installed FIFA 2018 APK OBB FIFA 18 Android Game on your device. You can now launch the game by tapping on its icon in your app drawer or home screen. 

    Features

 

    FIFA 2018 APK OBB FIFA 18 Android Game has many features that make it one of the best soccer games for Android devices. Here are some of them:

 • Realistic graphics and animations: FIFA 2018 APK OBB FIFA 18 Android Game has stunning graphics and animations that make the game look and feel realistic. You can see the details of the players, stadiums, crowds, weather, and more. You can also enjoy different camera angles and replays that enhance your gaming experience. • Smooth gameplay and controls: FIFA 2018 APK OBB FIFA 18 Android Game has smooth gameplay and controls that make the game easy and fun to play. You can use the virtual joystick, buttons, or gestures to control your players and perform various actions such as passing, shooting, dribbling, tackling, and more. You can also customize your controls and sensitivity according to your preferences. • Authentic teams and players: FIFA 2018 APK OBB FIFA 18 Android Game has authentic teams and players from around the world, including the latest rosters, kits, ratings, and stats. You can choose from over 650 teams and 17,000 players from various leagues and competitions such as the Premier League, La Liga, Bundesliga, Serie A, Champions League, World Cup, and more. You can also create your own team and players using the editor mode. • Various modes and challenges: FIFA 2018 APK OBB FIFA 18 Android Game has various modes and challenges that suit your mood and skill level. You can play in Career Mode, where you can start as a rookie and become a legend by managing your team, transfers, contracts, training, and more. You can also play in Tournament Mode, where you can compete in various tournaments such as the World Cup, Champions League, Europa League, Copa America, and more. You can also play in Online Mode, where you can challenge other players from around the world in real-time matches. You can also play in Offline Mode, where you can play against the AI or with your friends using Bluetooth or Wi-Fi. • Customizable settings and preferences: FIFA 2018 APK OBB FIFA 18 Android Game has customizable settings and preferences that allow you to tailor the game to your liking. You can change the language, sound effects, music, commentary, difficulty level, game speed, camera angle, display settings, and more. You can also save your progress and settings using the cloud save feature. 

    Tips and Tricks


    FIFA 2018 APK OBB FIFA 18 Android Game is a fun and exciting game that will test your skills and strategies. Here are some tips and tricks that will help you improve your gameplay and performance:

    • Practice your skills: FIFA 2018 APK OBB FIFA 18 Android Game has a training mode where you can practice your skills such as passing, shooting, dribbling, free kicks, penalties, corners, and more. You can also learn new skills and tricks by watching tutorials and videos in the game.
    • Use the right tactics: FIFA 2018 APK OBB FIFA 18 Android Game has a tactical mode where you can choose from different formations, strategies, styles, and instructions for your team. You can also adjust your tactics during the game by using the quick menu or by pausing the game. You should use the right tactics depending on your opponent's strengths and weaknesses.
    • Manage your stamina: FIFA 2018 APK OBB FIFA 18 Android Game has a stamina system that affects your players' performance and fatigue. You should manage your stamina by using substitutions, resting your players, or using consumables such as energy drinks or fitness cards. You should also avoid sprinting too much or making unnecessary fouls.
    • Unlock achievements and rewards: FIFA 2018 APK OBB FIFA 18 Android Game has a reward system that gives you coins, points, packs, players, kits, badges, trophies, and more for completing various tasks and challenges in the game. You should unlock as many achievements and rewards as possible to improve your team and collection.

    Conclusion


    FIFA 2018 APK OBB FIFA 18 Android Game is one of the best soccer games for Android devices that you should not miss. It has realistic graphics, smooth gameplay, authentic teams and players, various modes and challenges, customizable settings and preferences, and many tips and tricks to help you enjoy the game. You can download and install this game on your device by following the steps we have provided in this article. You can also share your feedback and opinions about this game in the comments section below. We hope you have fun playing FIFA 2018 APK OBB FIFA 18 Android Game!


    FAQs


    Here are some common questions and answers about FIFA 2018 APK OBB FIFA 18 Android Game:


    Q: Is FIFA 2018 APK OBB FIFA 18 Android Game free to play?


    A: Yes, FIFA 2018 APK OBB FIFA 18 Android Game is free to play, but it contains some in-app purchases and ads that you can disable or remove by using a modded version or a patcher app.


    Q: How can I update FIFA 2018 APK OBB FIFA 18 Android Game?


    A: You can update FIFA 2018 APK OBB FIFA 18 Android Game by downloading the latest version of the APK and OBB files from the same source that you used before, and following the same installation steps. You can also check for updates within the game by going to the settings menu and tapping on the update button.


    Q: How can I fix FIFA 2018 APK OBB FIFA 18 Android Game errors and bugs?


    A: You can fix FIFA 2018 APK OBB FIFA 18 Android Game errors and bugs by following these steps:

    • Make sure that your device meets the requirements for the game and has enough storage space (see the sketch after this list for a quick way to check free space).
    • Make sure that you have downloaded and installed the correct and verified files for the game.
    • Make sure that you have granted all the necessary permissions and enabled all the required features for the game.
    • Make sure that you have a stable internet connection for downloading, verifying, and playing online mode.
    • Clear the cache and data of the game by going to your device settings, apps, FIFA 2018, and tapping on clear cache and clear data.
    • Restart your device and launch the game again.
    • If none of these steps work, you can contact the developer or the source of the game for further assistance.
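
    Checking free space is quick to script as well. This is a minimal Kotlin sketch, assuming the usual internal-storage mount point; the ~4 GB threshold is just a rough allowance for the APK, the OBB, and temporary extraction files, not an official requirement.

```kotlin
import java.io.File

fun main() {
    // Storage root assumed to be the typical internal-storage mount point; adjust for SD cards.
    val storage = File("/storage/emulated/0")

    val freeGb = storage.usableSpace / 1_000_000_000.0
    println("Free space: %.1f GB".format(freeGb))

    // The OBB alone is about 2.6 GB, so leave headroom for the APK and extraction.
    if (freeGb < 4.0) {
        println("Less than ~4 GB free - clear some space before reinstalling the game.")
    }
}
```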

    Q: How can I contact the developer or the source of FIFA 2018 APK OBB FIFA 18 Android Game?


    A: You can contact the developer or the source of FIFA 2018 APK OBB FIFA 18 Android Game by visiting their website, social media pages, or email address. You can also leave a comment or a review on their page or platform. Here are some of their contact details:

    • Website: https://fifa-2018-apk-obb-fifa-18-android-game.com/
    • Facebook: https://www.facebook.com/fifa-2018-apk-obb-fifa-18-android-game/
    • Twitter: https://twitter.com/fifa_2018_apk_obb_fifa_18_android_game/
    • Email: fifa-2018-apk-obb-fifa-18-android-game@gmail.com

    Q: How can I share my feedback and opinions about FIFA 2018 APK OBB FIFA 18 Android Game?


    A: You can share your feedback and opinions about FIFA 2018 APK OBB FIFA 18 Android Game by leaving a comment or a review on this article, or on the website, social media pages, or email address of the developer or the source of the game. You can also rate the game on the Google Play Store or other platforms where you downloaded it. We appreciate your feedback and opinions, as they help us improve our content and services.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Infinite Souls and Coins in Mortal Kombat X Mod Apk Download.md b/spaces/congsaPfin/Manga-OCR/logs/Get Infinite Souls and Coins in Mortal Kombat X Mod Apk Download.md deleted file mode 100644 index e1e3fd5cc4775c9bd0aabc6da87802e94f96ae36..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get Infinite Souls and Coins in Mortal Kombat X Mod Apk Download.md +++ /dev/null @@ -1,122 +0,0 @@ - -

    Mortal Kombat X Mod APK Download: Everything You Need to Know


    If you are a fan of fighting games, you must have heard of Mortal Kombat X, one of the most popular and brutal games in the genre. Mortal Kombat X is a game that combines stunning graphics, thrilling gameplay, and a rich story mode to deliver an immersive and satisfying experience. But what if you want to take your game to the next level? What if you want to enjoy unlimited resources, unlock all the characters, and have god mode enabled? Well, that's where Mortal Kombat X Mod APK comes in. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, installation guide, pros and cons, and more. So, without further ado, let's get started!


    What is Mortal Kombat X?


    A brief introduction to the game and its features


    Mortal Kombat X is a fighting game developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment in 2015. It is the tenth main installment in the Mortal Kombat series and a sequel to Mortal Kombat (2011). The game features a roster of over 30 characters, each with their own unique fighting style, moves, and fatalities. The game also has several modes, such as story mode, tower mode, online mode, faction wars, and more. The game is praised for its high-quality graphics, fluid animations, realistic physics, and gore effects. The game is available for various platforms, such as PlayStation 4, Xbox One, Windows PC, iOS, and Android.



    Why download Mortal Kombat X Mod APK?


    The benefits of using the modded version of the game


    While Mortal Kombat X is undoubtedly an amazing game, it also has some limitations and drawbacks. For instance, you need to spend real money to buy in-game currency (koins and souls) to unlock new characters, skins, equipment, and other items. You also need to grind a lot to level up your characters and complete challenges. Moreover, some characters are only available through special events or limited-time offers. And let's not forget about the annoying ads that pop up every now and then.


    That's why many players opt for Mortal Kombat X Mod APK, which is a modified version of the original game that gives you access to unlimited resources, god mode, unlocked characters, no ads, and more. With this modded version of the game, you can enjoy the following benefits:


    Unlimited money and souls


    Money and souls are the main currencies in Mortal Kombat X. You need them to buy new characters, skins, equipment, cards, packs, and other items. However, earning them in the game is not easy. You need to win battles, complete quests, participate in events, or spend real money. But with Mortal Kombat X Mod APK, you don't have to worry about that. You will get unlimited money and souls right from the start. You can use them to buy anything you want without any restrictions.


    God mode and unlocked characters


    One of the most exciting features of Mortal Kombat X Mod APK is god mode.

    God mode is a feature that makes you invincible in the game. You can take any amount of damage without dying. You can also deal massive damage to your opponents with every hit. This makes the game much easier and more fun. You can breeze through any challenge or difficulty level with god mode enabled.


    Unlocked characters is another feature that lets you access all the characters in the game, including the ones that are normally locked or exclusive. You can choose from over 30 characters, each with their own unique abilities, skills, and fatalities. You can also customize their appearance and equipment to suit your preferences. You can create your own dream team of fighters with unlocked characters.


    No ads and no root required


    Ads are one of the most annoying things in any game. They interrupt your gameplay, waste your time, and sometimes even force you to watch them. But with Mortal Kombat X Mod APK, you don't have to deal with any ads. The modded version of the game removes all the ads from the game, giving you a smooth and uninterrupted gaming experience.


    Another benefit of Mortal Kombat X Mod APK is that it does not require root access to work. Rooting is a process that gives you full control over your device, but it also voids your warranty and exposes you to security risks. Many modded games require root access to work, but not Mortal Kombat X Mod APK. You can install and play the modded game without rooting your device.


    How to download and install Mortal Kombat X Mod APK?


    A step-by-step guide with screenshots


    Now that you know the benefits of Mortal Kombat X Mod APK, you might be wondering how to download and install it on your device. Well, don't worry, we have got you covered. Here is a simple and easy guide on how to do it:


    Download the mod APK file from a trusted source


    The first step is to download the mod APK file from a reliable and safe source. There are many websites that claim to offer the modded version of the game, but not all of them are trustworthy. Some of them may contain viruses, malware, or fake files that can harm your device or steal your data. Therefore, you need to be careful and choose a reputable source.


    One of the best sources to download Mortal Kombat X Mod APK is [this website]. This website provides the latest and updated version of the modded game, along with detailed information, features, screenshots, and reviews. You can also find other modded games and apps on this website.


    To download the mod APK file from this website, follow these steps:

    1. Go to [this link] on your browser.
    2. Scroll down and click on the green "Download" button.
    3. Wait for a few seconds until the download link is generated.
    4. Click on the download link and save the file on your device.

    The mod APK file size is about 1 GB, so make sure you have enough space on your device before downloading it.
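
    Since the whole point of using a trusted source is avoiding tampered files, it is worth verifying the download before you install it. The sketch below is a rough Kotlin example, assuming the download page publishes a SHA-256 checksum you can compare against; the file path and the expected value are placeholders, not real data.

```kotlin
import java.io.File
import java.security.MessageDigest

// Compute the SHA-256 digest of a file as a lowercase hex string.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(64 * 1024)
        while (true) {
            val read = input.read(buffer)
            if (read < 0) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    // Placeholder path and checksum - substitute the values from your own download.
    val apk = File("/storage/emulated/0/Download/mortal-kombat-x-mod.apk")
    val expected = "<sha-256 value from the download page>"

    val actual = sha256Of(apk)
    println("SHA-256: $actual")
    if (actual != expected) {
        println("Checksum mismatch - do not install this file.")
    }
}
```

    If the page does not publish a checksum, at least compare the saved file's size against the size stated on the page before installing it.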


    Enable unknown sources on your device settings


    The next step is to enable unknown sources on your device settings. This is necessary because Android devices do not allow installing apps from sources other than Google Play Store by default. To enable unknown sources, follow these steps:

    1. Go to your device settings and tap on "Security".
    2. Find and toggle on "Unknown sources".
    3. A warning message will pop up. Tap on "OK" to confirm.

    This will allow you to install apps from sources other than Google Play Store.
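
    If you are comfortable with a bit of Android code, the same "unknown sources" state can also be checked programmatically. This is only a rough sketch, assuming an Android app context: on Android 8.0 (API 26) and newer the permission is granted per app, which is why the check goes through PackageManager instead of the old global toggle.

```kotlin
import android.content.Context
import android.os.Build
import android.provider.Settings

// Returns true if installing apps from outside the Play Store appears to be allowed.
fun canInstallUnknownApps(context: Context): Boolean =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // Android 8.0+ grants this per app ("Install unknown apps").
        context.packageManager.canRequestPackageInstalls()
    } else {
        // Older versions rely on the global "Unknown sources" switch toggled in the steps above.
        @Suppress("DEPRECATION")
        Settings.Secure.getInt(
            context.contentResolver,
            Settings.Secure.INSTALL_NON_MARKET_APPS,
            0
        ) == 1
    }
```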

    Install the mod APK file and launch the game


    The final step is to install the mod APK file and launch the game. To do this, follow these steps:

    1. Locate the downloaded mod APK file on your device storage.
    2. Tap on the file and select "Install" (the sketch after the next paragraph shows what this hand-off to the system installer looks like in code).
    3. Wait for the installation process to finish.
    4. Tap on "Open" to launch the game.

    Congratulations! You have successfully installed Mortal Kombat X Mod APK on your device. You can now enjoy the game with all the modded features and benefits.
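
    Tapping "Install" in step 2 simply hands the APK to Android's package installer. If you ever need to trigger the same flow from your own code, the sketch below shows one way to do it; it is an illustration only, and assumes a FileProvider with the authority used here is declared in the app's manifest and that the REQUEST_INSTALL_PACKAGES permission is requested on Android 8.0+.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Opens the system package installer for a downloaded APK file.
// Assumes a <provider> with authority "<applicationId>.fileprovider" is declared in the manifest.
fun launchInstaller(context: Context, apk: File) {
    val uri = FileProvider.getUriForFile(context, "${context.packageName}.fileprovider", apk)
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```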


    Pros and cons of Mortal Kombat X Mod APK


    A balanced review of the modded game


    Mortal Kombat X Mod APK is a great way to enhance your gaming experience and have more fun. However, it also has some drawbacks that you should be aware of. Here are some of the pros and cons of using the modded version of the game:


    Pros: enhanced graphics, gameplay, and features


    One of the main advantages of Mortal Kombat X Mod APK is that it improves the graphics, gameplay, and features of the game. The modded game has better graphics quality, smoother animations, and more realistic effects. The gameplay is also more exciting, challenging, and varied. You can choose from different modes, levels, and difficulties. The features are also more diverse, such as unlimited resources, god mode, unlocked characters, no ads, and more. These enhancements make the game more enjoyable and satisfying.


    Cons: possible compatibility issues, bugs, and bans


    One of the main disadvantages of Mortal Kombat X Mod APK is that it may cause some compatibility issues, bugs, and bans. The modded game may not work on all devices or Android versions. It may also crash or freeze at times. Moreover, the modded game may violate the terms and conditions of the original game. This may result in your account being banned or suspended by the developers. Therefore, you should use the modded game at your own risk and discretion.


    Conclusion


    A summary of the main points and a call to action


    Mortal Kombat X is one of the best fighting games ever made. It has stunning graphics, thrilling gameplay, and a rich story mode. However, if you want to take your game to the next level, you should try Mortal Kombat X Mod APK. This is a modified version of the game that gives you unlimited resources, god mode, unlocked characters, no ads, and more. You can download and install it easily by following our guide above. However, you should also be aware of the possible drawbacks of using the modded game, such as compatibility issues, bugs, and bans.


    If you are ready to experience Mortal Kombat X like never before, download Mortal Kombat X Mod APK today and enjoy!


    Frequently Asked Questions (FAQs)


    Some common questions and answers about Mortal Kombat X Mod APK
