diff --git a/spaces/0x876/Yotta_Mix/README.md b/spaces/0x876/Yotta_Mix/README.md
deleted file mode 100644
index 15d2cd66bb321d39f03c9786aae467c05e25a04c..0000000000000000000000000000000000000000
--- a/spaces/0x876/Yotta_Mix/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CompVis Stable Diffusion V1 4
-emoji: 📉
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audiffex Amplion Pro Torrent Create Amazing Guitar Tones with AmpLion Pro 1.1.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audiffex Amplion Pro Torrent Create Amazing Guitar Tones with AmpLion Pro 1.1.md
deleted file mode 100644
index d71c15640ff10fde58180213518d2b543c041ccf..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audiffex Amplion Pro Torrent Create Amazing Guitar Tones with AmpLion Pro 1.1.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
If you are a radio enthusiast, you probably know how important it is to have a reliable and convenient way to control your radio equipment. Whether you are using your radio for hobby, business, or emergency purposes, you want to make sure that you can access and operate it from anywhere, anytime.
-DOWNLOAD ✵ https://byltly.com/2uKzQO
That's why you need Icom Rs-ba1 Ip Remote Control Software. This software allows you to remotely control your Icom radio over the internet, using your PC, smartphone, or tablet. You can also monitor multiple radios at the same time, customize your settings and preferences, record and playback audio files, and much more.
-In this article, we will explain what Icom Rs-ba1 Ip Remote Control Software is, why you need it, how to download and install it, and what features and benefits it offers. By the end of this article, you will be able to enjoy the full potential of your Icom radio with this amazing software.
-Icom Rs-ba1 Ip Remote Control Software is a software application that enables you to remotely control your Icom radio over the internet. It works with most Icom radios that have an Ethernet port or a USB port. You can connect your radio to your PC via a LAN cable or a USB cable, or use a wireless LAN adapter to connect it wirelessly. Then, you can use your PC as a base station to control your radio from anywhere in the world.
-You can also use your smartphone or tablet as a remote controller by installing the RS-BA1 remote control app on your device. The app is available for both Android and iOS devices. You can download it from Google Play or App Store for free. The app allows you to control the basic functions of your radio, such as frequency, mode, volume, squelch, etc. You can also use the app to view the spectrum scope and waterfall display of your radio.
-Icom Rs-ba1 Ip Remote Control Software also supports multiple users and multiple radios. You can set up different user accounts with different access levels and permissions. You can also monitor and control up to four radios simultaneously on one PC screen. This is useful for managing multiple stations or networks.
-Icom Rs-ba1 Ip Remote Control Software is not just a fancy gadget. It is a powerful tool that can enhance your communication experience and provide many benefits. Here are some of the reasons why you need this software:
-Downloading and installing Icom Rs-ba1 Ip Remote Control Software is easy and straightforward. Here are the steps you need to follow:
-Icom Rs-ba1 Ip Remote Control Software offers many features that make it a versatile and user-friendly software. Here are some of the main features:
-You can remotely control all the functions of your Icom radio over the internet using your PC or mobile device. You can change frequency, mode, volume, squelch, filter, etc. You can also transmit and receive audio signals with high quality sound.
-How to install RS-BA1 IP Remote Control Software on your PC
-RS-BA1 software update and firmware download
-Icom transceiver compatibility chart for RS-BA1
-RC-28 Remote Control USB Encoder for RS-BA1
-RS-BA1 Version 2 features and specifications
-How to use RS-BA1 over the Internet with your Icom radio
-RS-BA1 product brochure and instruction manual
-How to set up RS-BA1 with IC-7610, IC-9700, IC-7851, or IC-7700
-How to connect IC-705 to a PC over Wi-Fi with RS-BA1
-How to use the spectrum scope and waterfall functions with RS-BA1
-How to adjust RF power, CW pitch, RF gain, SQL and AF level with RS-BA1
-How to use the voice recording function with RS-BA1
-How to use the dualwatch operation and dual spectrum scopes with RS-BA1
-How to use the RIT tuning knob and ΔTX functions with RS-BA1
-How to use the CW keyer settings, voice memory, and SSB passband settings with RS-BA1
-How to use the slider control or tuning knob control screens with RS-BA1
-How to use the remote power ON/OFF function with RS-BA1
-How to troubleshoot common issues with RS-BA1
-How to update your Icom transceiver firmware for RS-BA1
-How to configure your router and firewall settings for RS-BA1
You can monitor up to four radios at the same time on one PC screen. You can switch between different radios easily by clicking on their icons. You can also view their status information such as frequency, mode, signal strength, etc.
-You can customize various settings and preferences of your software according to your needs and preferences. You can adjust the layout, color scheme, font size, etc. of the software interface. You can also create profiles for different radios and users with different settings.
-You can record audio files from your radio transmissions or receptions using the software. You can save them on your PC or upload them to cloud storage services such as Dropbox or Google Drive. You can also playback audio files from your PC or cloud storage services using the software.
-Icom Rs-ba1 Ip Remote Control Software is not only a feature-rich software but also a benefit-rich software. Here are some of the benefits that you can enjoy by using this software:
-You can enhance your communication experience by using this software. You can communicate with other radio users more easily and conveniently by controlling your radio remotely from anywhere in the world. You can also enjoy high-quality sound and clear signals by using this software.
-You can save time and money by using this software. You don't have to buy additional equipment or rent space for your radio station. You also don't have to travel to and from your radio location frequently. You can simply use your existing PC or mobile device to control your radio remotely.
-You can improve your safety and security by using this software. You can monitor your radio status and activity from a safe distance without exposing yourself to potential hazards or threats. You can also protect your radio from unauthorized access or theft by using encryption and password protection features.
-You can expand your network and reach by using this software. You can communicate with other radio users across the globe without any geographical limitations or restrictions. You can also join online communities and forums where you can share information and tips with other radio enthusiasts.
Icom Rs-ba1 Ip Remote Control Software is a powerful software that allows you to remotely control your Icom radio over the internet. It works with most Icom radios that have an Ethernet port or a USB port. You can use your PC, smartphone, or tablet as a remote controller for your radio. You can also monitor multiple radios at the same time, customize your settings and preferences, record and playback audio files, and much more.
-By using this software, you can enjoy many benefits such as enhancing your communication experience, saving time and money, improving your safety and security, and expanding your network and reach. You can communicate with other radio users across the globe without any geographical limitations or restrictions. You can also join online communities and forums where you can share information and tips with other radio enthusiasts.
-If you are interested in using this software, you can download it from the official website of Icom Inc. (https://www.icom.co.jp/world/) and install it on your PC or mobile device. You can also purchase it from authorized dealers or online stores. The software comes with a user manual that guides you through the installation and configuration process. You can also find online tutorials and videos that show you how to use the software effectively.
-Icom Rs-ba1 Ip Remote Control Software is a must-have software for any radio enthusiast who wants to make the most of their Icom radio. It is a software that will transform your radio into a powerful and versatile communication device that you can control from anywhere in the world.
-Here are some frequently asked questions about Icom Rs-ba1 Ip Remote Control Software:
-The system requirements for Icom Rs-ba1 Ip Remote Control Software are as follows:
-The compatible radios for Icom Rs-ba1 Ip Remote Control Software are as follows:
-The price of Icom Rs-ba1 Ip Remote Control Software varies depending on the seller and the region. You can check the price on the official website of Icom Inc. (https://www.icom.co.jp/world/) or on online stores such as Amazon or eBay.
-You can update Icom Rs-ba1 Ip Remote Control Software by downloading the latest version from the official website of Icom Inc. (https://www.icom.co.jp/world/) and installing it on your PC or mobile device. You can also check for updates from within the software by clicking on the "Help" menu and selecting "Check for Updates".
-You can contact Icom Inc. for support or feedback by using the following methods:
-If you are an anime fan, you have probably heard of Clannad, one of the most acclaimed and beloved anime series of all time. Clannad is a slice-of-life drama that follows the lives of Tomoya Okazaki, a delinquent who has lost interest in life, and Nagisa Furukawa, a shy girl who dreams of reviving the school's drama club. Together, they form bonds with other students and overcome various challenges and hardships.
-Download File ✸✸✸ https://byltly.com/2uKwlL
Clannad is based on a visual novel by Key, a famous developer of romance games. The anime adaptation was produced by Kyoto Animation, a studio known for its high-quality animation and storytelling. Clannad has two seasons, with the first one airing in 2007 and the second one, called Clannad After Story, airing in 2008. Both seasons have 24 episodes each, plus some extra episodes and specials.
-Clannad is widely praised for its emotional impact, memorable characters, beautiful music, and stunning visuals. It is considered one of the best anime series ever made, and has won many awards and accolades. It has also spawned a loyal fanbase that still loves and supports the show after more than a decade.
-If you want to watch Clannad, you have several options. You can stream it online on legal platforms like Netflix or Crunchyroll, or you can buy the DVD or Blu-ray discs. However, if you want to save money and time, you can also download it from torrent sites. A torrent is a file that contains information about other files that are shared by users on a peer-to-peer network. By using a torrent client, you can download those files from other users who have them on their computers.
-clannad english dubbed 720p download
-clannad anime dub torrent HD
-watch clannad eng dub online 720p
-clannad season 1 and 2 dub torrent
-clannad after story english dub 720p
-clannad dubbed anime torrent free
-clannad full series eng dub download
-clannad 720p dual audio torrent
-clannad bluray english dub torrent
-clannad movie eng dub 720p
-clannad complete eng dub torrent
-clannad original soundtrack 720p torrent
-clannad visual novel english dub download
-clannad another world eng dub 720p
-clannad tomoyo ova eng dub torrent
-clannad best quality dub torrent
-clannad episodes eng dub 720p mkv
-clannad sub or dub torrent comparison
-clannad voice actors english dub torrent
-clannad opening and ending songs 720p torrent
-clannad recap episode eng dub download
-clannad alternative endings eng dub 720p
-clannad director's cut eng dub torrent
-clannad fan made dub 720p download
-clannad special features eng dub torrent
-clannad behind the scenes eng dub 720p
-clannad commentary track eng dub torrent
-clannad trivia and facts 720p download
-clannad fan art and wallpapers 720p torrent
-clannad merchandise and collectibles eng dub
-clannad cosplay and costumes 720p torrent
-clannad quotes and memes eng dub download
-clannad reviews and ratings eng dub 720p
-clannad awards and nominations dub torrent
-clannad spin offs and sequels eng dub 720p
-clannad crossover and parody dub torrent
-clannad fan fiction and comics 720p download
-clannad games and apps eng dub torrent
-clannad live action adaptation 720p download
-clannad stage play and musical eng dub torrent
-clannad podcast and radio show 720p download
-clannad documentary and history eng dub torrent
-clannad analysis and discussion 720p download
-clannad theories and spoilers eng dub torrent
-clannad references and easter eggs 720p download
-clannad genre and themes eng dub torrent
-clannad influences and inspirations 720p download
-clannad similarities and differences eng dub torrent
-clannad controversies and criticisms 720p download
-clannad recommendations and suggestions eng dub torrent
One of the advantages of downloading Clannad from torrent sites is that you can choose the quality and language of the video files. For example, you can download Clannad eng dub 720p torrent, which means that you can watch Clannad in English dubbing with high-definition resolution. This way, you can enjoy the show without any language barriers or quality issues.
-In this article, we will show you how to download and watch Clannad eng dub 720p torrent in easy steps. We will also give you some tips and tricks for a better viewing experience. Let's get started!
-Downloading Clannad eng dub 720p torrent is not difficult, but it requires some preparation and caution. Here are the steps you need to follow:
-The first thing you need to do is to find a torrent site that has Clannad eng dub 720p torrent available. There are many torrent sites on the internet, but not all of them are trustworthy or safe. Some of them may contain malware, viruses, or fake files that can harm your computer or device.
-To avoid these risks, you should use only reputable and verified torrent sites that have positive reviews and ratings from other users. Some examples of such sites are The Pirate Bay, Nyaa, or Zoro. These sites have large collections of anime torrents, including Clannad eng dub 720p torrent.
-Once you have chosen a torrent site, you need to search for Clannad eng dub 720p torrent on it. You can use the search bar or browse through the categories to find it. You may also use filters or sorting options to narrow down your results.
-When you find Clannad eng dub 720p torrent on the site, you should check some details before downloading it. These details include:
-You should choose a file that has a reasonable file size, high number of seeders and low number of leechers, and positive comments and ratings.
-After selecting a file that meets your criteria, you need to download it to your computer or device. There are two ways to do this:
-Both methods work similarly, but magnet links are more convenient because they do not require downloading an extra file.
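The original article stops at describing the two download methods; the short Python sketch below is an illustration added here (not from the source) of what a magnet link actually carries. The magnet URI, its hash value, and the tracker address are invented for the example.

```python
# Minimal sketch: pull apart a magnet link into the fields a torrent client reads.
# The link below is a made-up example, not a real torrent.
from urllib.parse import urlparse, parse_qs

def parse_magnet(link: str) -> dict:
    """Return the xt (content identifier), dn (display name) and tr (tracker) fields."""
    parsed = urlparse(link)
    if parsed.scheme != "magnet":
        raise ValueError("not a magnet link")
    fields = parse_qs(parsed.query)
    return {
        "xt": fields.get("xt", [""])[0],    # e.g. urn:btih:<info-hash> identifying the content
        "name": fields.get("dn", [""])[0],  # human-readable display name
        "trackers": fields.get("tr", []),   # zero or more tracker URLs
    }

if __name__ == "__main__":
    example = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
               "&dn=Example+File&tr=udp%3A%2F%2Ftracker.example.org%3A6969")
    print(parse_magnet(example))
```

Because the info-hash and optional trackers are embedded in the link itself, a torrent client can locate peers and start the download without first fetching a separate .torrent file, which is why the article calls magnet links the more convenient option.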
-A torrent client is a software program that allows you to download files from other users on the network using torrents. You need to have one installed on your computer or device before downloading Clannad eng dub 720p torrent.
-There are many torrent clients available online, but some of them may be unsafe or unreliable. You should use only trusted and popular ones that have good reviews and ratings from other users. Some examples of such clients are uTorrent, BitTorrent, qBittorrent, or Vuze. These clients have user-friendly interfaces and features that make downloading easy and fast.
-To open Clannad eng dub 720p torrent with your torrent client, you need to do one of these things:
-After opening Clannad eng dub 720p torrent with your torrent client, you need to choose where you want to save it on your computer or device. You can select any folder you want, but make sure you have enough space for it.
-Then, you need to start downloading by clicking on the start button or icon on your torrent client. Depending on your internet speed and connection quality, it may take some time for Clannad eng dub 720p torrent to finish downloading.
-Once Clannad eng dub 720p torrent is downloaded, you can watch it on your computer or device. Here are the steps you need to follow:
-Before you can watch Clannad eng dub 720p torrent, you need to wait for the download to finish completely. You can check the progress and status of the download on your torrent client. You can also pause or resume the download if you need to.
-Some torrent files may be compressed or archived to reduce their size and make them easier to share. If Clannad eng dub 720p torrent is compressed, you need to extract it before you can watch it.
-To extract Clannad eng dub 720p torrent, you need to use a software program that can open and extract compressed or archived files. Some examples of such programs are WinRAR, 7-Zip, or PeaZip. These programs have simple interfaces and features that make extracting easy and fast.
-To extract Clannad eng dub 720p torrent with your extraction program, you need to do one of these things:
-After extracting Clannad eng dub 720p torrent, you can play the video files with a media player. A media player is a software program that allows you to play audio and video files on your computer or device. You need to have one installed on your computer or device before watching Clannad eng dub 720p torrent.
-There are many media players available online, but some of them may not support all formats or codecs of video files. You should use only compatible and versatile ones that have good reviews and ratings from other users. Some examples of such players are VLC Media Player, MPC-HC, KMPlayer, or PotPlayer. These players have user-friendly interfaces and features that make playing easy and enjoyable.
-To play Clannad eng dub 720p torrent with your media player, you need to do one of these things:
-After playing Clannad eng dub 720p torrent with your media player, you can enjoy the show. You can adjust the volume, brightness, subtitles, speed, and other settings on your media player according to your preferences. You can also pause, resume, rewind, fast-forward, skip, or repeat the episodes as you wish.
-Clannad eng dub 720p torrent will take you on an emotional journey with Tomoya, Nagisa, and their friends. You will laugh, cry, smile, and feel inspired by their stories and experiences. You will also appreciate the beautiful animation, music, and voice acting that make Clannad a masterpiece of anime.
-To make your viewing experience of Clannad eng dub 720p torrent even better, here are some tips and tricks that you can follow:
-Downloading and watching Clannad eng dub 720p torrent may not be legal in some countries or regions. You may face legal consequences if you are caught by authorities or copyright holders. To avoid these risks, you should use a VPN (Virtual Private Network) when downloading and watching Clannad eng dub 720p torrent.
-A VPN is a service that encrypts your internet traffic and hides your IP address and location from others. By using a VPN, you can download and watch Clannad eng dub 720p torrent anonymously and safely. You can also access geo-restricted content and websites that may not be available in your area.
-There are many VPN services available online, but some of them may be slow, unreliable, or expensive. You should use only trusted and affordable ones that have good reviews and ratings from other users. Some examples of such services are NordVPN, ExpressVPN, Surfshark, or CyberGhost. These services have fast speeds, strong security, and user-friendly features that make using a VPN easy and convenient.
-Some anime fans may prefer watching Clannad in its original Japanese language with subtitles rather than in English dubbing. They may think that the original voice actors convey the emotions and personalities of the characters better than the English ones. They may also want to learn some Japanese words and phrases from the show.
-If you are one of those fans, you can use subtitles when watching Clannad eng dub 720p torrent. Subtitles are text that appear on the screen that translate what the characters are saying in another language. You can find subtitles for Clannad in various languages online on sites like Subscene, OpenSubtitles, or YIFY Subtitles. These sites have large collections of subtitles for anime and other shows.
-To use subtitles when watching Clannad eng dub 720p torrent with your media player, you need to do one of these things:
-If you enjoyed watching Clannad eng dub 720p torrent, you will love watching its sequel series Clannad After Story. Clannad After Story is a continuation of Clannad that follows Tomoya and Nagisa as they graduate from high school and enter adulthood. They face new challenges and hardships as they try to build their future together.
-Clannad After Story is widely regarded as one of the best anime series ever made, even surpassing its predecessor in terms of emotional impact, character development, plot twists, and animation quality. It is also one of the most tear-jerking anime series ever made, as it explores themes like love, family, tragedy, hope, and happiness.
-You can download and watch Clannad After Story eng dub 720p torrent in the same way as Clannad eng dub 720p torrent. You can find it on the same torrent sites as Clannad eng dub 720p torrent. However, be prepared to cry a lot when watching it.
-In this article, we have shown you how to download and watch Clannad eng dub 720p torrent in easy steps. We have also given you some tips and tricks for a better viewing experience.
-Here are some frequently asked questions about Clannad eng dub 720p torrent:
-A: Downloading and watching Clannad eng dub 720p torrent may not be legal in some countries or regions. You may face legal consequences if you are caught by authorities or copyright holders. To avoid these risks, you should use a VPN when downloading and watching Clannad eng dub 720p torrent.
-A: Downloading and watching Clannad eng dub 720p torrent may not be safe in some cases. You may encounter malware, viruses, or fake files that can harm your computer or device. To avoid these risks, you should use only reliable and verified torrent sites and torrent clients. You should also check the file size, seeders and leechers, comments and ratings, and subtitles before downloading and watching Clannad eng dub 720p torrent.
-A: Clannad eng dub 720p torrent is good quality in terms of resolution and audio. It has high-definition resolution of 1280x720 pixels and clear and crisp sound. However, some fans may prefer the original Japanese voice actors over the English ones. In that case, they can use subtitles when watching Clannad eng dub 720p torrent.
-A: Clannad eng dub 720p torrent is complete in terms of episodes and specials. It has all 24 episodes of the first season of Clannad, plus some extra episodes and specials. However, it does not include the second season of Clannad, called Clannad After Story. To watch that, you need to download and watch Clannad After Story eng dub 720p torrent.
-A: Clannad eng dub 720p torrent is worth watching for any anime fan who loves emotional stories and characters. It is one of the best anime series ever made that will make you laugh, cry, smile, and feel inspired by its beautiful animation, music, and voice acting. It is also a great way to learn about Japanese culture and language.
-Download File ––– https://byltly.com/2uKxCT
Driver Easy Pro Key 5.6.13 is a powerful and easy-to-use software that helps you update and install drivers for your Windows PC. It scans your computer for missing, outdated, or incompatible drivers and downloads them automatically from its huge database of over 8 million drivers. With Driver Easy Pro Key 5.6.13, you can fix driver problems, improve PC performance, and enjoy a smoother and faster gaming experience.
-However, Driver Easy Pro Key 5.6.13 is not a free software. You need to purchase a license key to activate its full features and benefits. But don't worry, in this article, we will show you how to download and install Driver Easy Pro Key 5.6.13 with crack for free. This way, you can enjoy all the advantages of Driver Easy Pro Key 5.6.13 without spending a dime.
-DOWNLOAD ››› https://byltly.com/2uKvPp
Follow these simple steps to download and install Driver Easy Pro Key 5.6.13 with crack on your Windows PC.
-You can download the Driver Easy Pro Key 5.6.13 setup file and crack file from the link below.
-Driver Easy Pro Key 5.6.13 With Crack Download [Latest]
-Driver Easy Pro Key 5.6.13 is a great software that helps you update and install drivers for your Windows PC with ease and efficiency. It can save you time, money, and hassle by fixing driver issues and improving PC performance. However, if you don't want to pay for a license key, you can download and install Driver Easy Pro Key 5.6.13 with crack for free by following the steps above.
-We hope this article was helpful for you. If you have any questions or suggestions, please leave a comment below.
- -Drivers are essential components of your Windows PC that enable your hardware and software to communicate and function properly. Without drivers, your PC would not be able to perform tasks such as printing, scanning, playing audio or video, connecting to the internet, or using external devices.
-However, drivers can also cause problems if they are missing, outdated, or incompatible with your system. For example, you may experience issues such as blue screen of death, device error, slow performance, or system crash. These problems can be frustrating and time-consuming to fix manually.
-That's why you need Driver Easy Pro Key 5.6.13. It is a smart and reliable software that can scan your PC for driver problems and fix them automatically. It can also backup and restore your drivers in case of any emergency. With Driver Easy Pro Key 5.6.13, you can keep your PC running smoothly and securely.
- -Driver Easy Pro Key 5.6.13 has many features that make it one of the best driver updater software in the market. Here are some of the features that you can enjoy with Driver Easy Pro Key 5.6.13.
-Music, video, TV, audio & more. Download free MP3 songs by all the best-known artists in all the best-known genres: more than 300,000 songs in MP3 and various formats. Find and download your favorite songs, videos, albums and music videos with the largest online catalog in the world. No download needed, free personalized recommendations, easy and fast.
Listen to or download the Theeram Malayalam movie songs in MP3 for free, along with the free Vandanam climax BGM from the Malayalam movie. Download Mohanlal super-hit songs from 1980 to 1990 as high-quality 320 kbps MP3s at ceakmp3.
Vandanam and Theeram songs in MP3: listen to or download the Theeram Malayalam movie song MP3 for free, along with free Malayalam movie mobile ringtones and the most romantic Vandanam MP3. To start this download you need to click on the [download] button; by downloading this song you accept our terms and conditions. Yidio is the premier streaming guide for TV shows and movies on the web, phone, tablet or smart TV. Vandanam theme music (Shymon); the climax of Mohanlal and Lohithadas' finest Malayalam movie, with English subtitles (1989). Chottanikkara Bhagavathi, an MP3 song by S. Janaki from the Malayalam devotional album S. Janaki - Bhagavathi Vandanam.
Ayyappan is known as Sastavu, Sastapa and Shasta in South India. It is said that he is the son of Shiva and Vishnu's wife, Lakshmi. In the South Indian Ayyappan temples, the main deity is not named Ayyappan; the presiding deity is called Sastavu or Sastapa (lord of the Sastavas). Ayyappan's birth is believed to have been prophesied by the sage Narada, and Ayyappan is worshipped as a saviour. According to the Puranas, Ayyappan was born as Manikantha in Kerala, and this birth was prophesied by the sage Narada, after which Ayyappan became popular in Kerala. Another version is that Ayyappan's mother, Parvati, was born as Bhuvaneswari in Tiruveezhinadu and was worshipped as Bhuvaneswari, and Ayyappan is worshipped as Shasta and Sastavu; the temple is also known as the Bhuvaneswari temple.
-Download File ⚡ https://imgfil.com/2uxYZN
Are you a Class 12 student looking for Biology Projects for Class 12? If yes, then you are at the right place. In this post we have provided a list of the best Biology Projects for Class 12 (NCERT), so that students can choose the best biology investigatory project for Class 12.
-Development is the act of improving by expanding, enlarging or refining a situation or condition of a particular thing (Tunde, 2011). To develop biology education in Nigeria, the government should improve the teaching and learning of biology by providing adequate laboratory and teaching materials, qualified biology teachers, a good biology classroom environment, the use of good teaching methods by biology teachers, a proper curriculum standard for teaching biology, and the provision of funds to schools (Ameh, 1991).
-Download File ○ https://imgfil.com/2uxZfT
Students who do well in biology have prospects of becoming doctors, nurses, pharmacists, dentists, biology teachers, medical technologists, food technologists, genetic engineers, microbiologists, biochemists, and other science professionals. Biology has its origin in Europe, but today all the nations of the world give priority attention to science and technology in their development efforts. Effective teaching of biology is a process by which a biology teacher adopts all the possible classroom methods to make sure that students understand biology and are able to respond positively during assessment or produce good results. Teaching effectiveness is exhibited in the teaching method, classroom management, the materials, and the way students are handled; a good teacher always bears in mind the individual differences of the students while presenting lessons and frequently checks the students' understanding of his or her points to make sure that they are following the lesson.
-Class 12th mathematics is quite a bit easier than that of the 11th class, and dealing with this subject in class 12th is much more interesting. You can download its marking scheme by clicking on the link below.
-Class 12th biology is much easier than that of the 11th class, and if you have a keen interest in biology you will find this subject even more interesting. That said, the syllabus of biology in class 12th is much vaster than that of class 11th. You can download the marking scheme by clicking the link below.
-Download ———>>> https://imgfil.com/2uxX1w
-DOWNLOAD –––––>>> https://urlin.us/2uSXiG
WhatsApp es una aplicación gratuita que te permite enviar y recibir mensajes de texto, voz, imágenes, videos, documentos, ubicaciones y otros contenidos con solo una conexión a internet. No necesitas pagar tarifas ni tener un plan de datos específico para usarla. Además, puedes hacer llamadas y videollamadas gratuitas con hasta 8 personas al mismo tiempo. Todo esto con una encriptación de extremo a extremo que protege tus conversaciones de terceros, incluso de la propia empresa.
-WhatsApp sirve para comunicarte con tus contactos de forma fácil, rápida y segura. Solo necesitas tener el número de teléfono de la persona con la que quieres hablar y que ambos tengan instalada la app en sus dispositivos. Puedes usar WhatsApp en tu celular, tablet o computadora, e incluso sincronizar tus mensajes entre ellos. También puedes crear grupos para chatear con varias personas a la vez, compartir estados que desaparecen después de 24 horas y personalizar tu perfil con una foto y un nombre.
-Descargar e instalar WhatsApp es muy sencillo. Solo tienes que seguir estos pasos:
-* descargar aplicacion de whatsapp gratis
-* como usar la aplicacion de whatsapp
-* aplicacion de whatsapp para pc
-* aplicacion de whatsapp web
-* aplicacion de whatsapp business
-* aplicacion de whatsapp plus
-* aplicacion de whatsapp para tablet
-* aplicacion de whatsapp para android
-* aplicacion de whatsapp para iphone
-* aplicacion de whatsapp para windows 10
-* aplicacion de whatsapp para mac
-* aplicacion de whatsapp para smart tv
-* aplicacion de whatsapp para llamadas
-* aplicacion de whatsapp para videollamadas
-* aplicacion de whatsapp para enviar mensajes
-* aplicacion de whatsapp para enviar fotos
-* aplicacion de whatsapp para enviar videos
-* aplicacion de whatsapp para enviar stickers
-* aplicacion de whatsapp para enviar audios
-* aplicacion de whatsapp para enviar documentos
-* aplicacion de whatsapp para crear grupos
-* aplicacion de whatsapp para chatear con amigos
-* aplicacion de whatsapp para chatear con desconocidos
-* aplicacion de whatsapp para chatear con empresas
-* aplicacion de whatsapp para chatear con clientes
-* aplicacion de whatsapp para recuperar mensajes borrados
-* aplicacion de whatsapp para hacer copias de seguridad
-* aplicacion de whatsapp para restaurar chats
-* aplicacion de whatsapp para cambiar el numero
-* aplicacion de whatsapp para cambiar el nombre
-* aplicacion de whatsapp para cambiar el estado
-* aplicacion de whatsapp para cambiar el fondo
-* aplicacion de whatsapp para cambiar la foto
-* aplicacion de whatsapp para cambiar la fuente
-* aplicacion de whatsapp para cambiar el tono
-* aplicacion de whatsapp para bloquear contactos
-* aplicacion de whatsapp para desbloquear contactos
-* aplicacion de whatsapp para silenciar contactos
-* aplicacion de whatsapp para archivar chats
-* aplicacion de whatsapp para eliminar chats
-* aplicacion de whatsapp para ver quien esta en linea
-* aplicacion de whatsapp para ver quien te bloquea
-* aplicacion de whatsapp para ver quien te escribe
-* aplicacion de whatsapp para ver quien te llama
-* aplicacion de whatsapp para ver quien visita tu perfil
-* aplicacion de whatsapp con modo oscuro
-* aplicacion de whatsapp con doble sim
-* aplicacion de whatsapp con contraseña
-* aplicacion de whatsapp con temas personalizados
WhatsApp has many features and functions that make it one of the best communication apps in the world. Here are some of the most important ones:
-WhatsApp lets you send and receive text messages, voice messages, images, videos, documents, locations, and other content with just an internet connection. You don't need to pay fees or have a specific data plan to use it. You can also make free calls and video calls with up to 8 people at the same time. All of this comes with end-to-end encryption that protects your conversations from third parties, including the company itself.
-WhatsApp also gives you options to control your privacy, such as blocking unwanted contacts, muting chats or groups, hiding your last-seen time, turning off read receipts or delivery confirmations, choosing who can see your profile photo, your status, or your personal information, and deleting messages sent by mistake.
-WhatsApp lets you make free calls and video calls with just an internet connection. You can talk to one person or to up to 8 people at the same time. Calls and video calls are high quality and adapt to the speed of your connection. You can also use your device's headphones or speaker, or switch between the front and rear cameras.
-To make a call or video call, just open the chat with the contact or group you want to call and tap the phone or camera icon at the top right. You can also see the call history in the calls tab at the bottom left.
-WhatsApp lets you create groups to chat with several people at once. You can create groups of up to 256 members, give them a name, a photo, and a description, and add or remove participants. Groups are ideal for staying in touch with your family, your friends, your co-workers, or your sports team.
-WhatsApp also lets you join communities that share your interests or hobbies. You can find public or private groups on all sorts of topics, such as music, movies, sports, travel, business, and so on. To join a group, you only need the invitation link, which the administrator or a member of the group can send you.
-WhatsApp lets you express yourself and customize your chats with different options. You can use emojis, stickers, GIFs, and memes to bring your messages to life. You can also use the keyboard to write in bold, italics, strikethrough, or monospace. In addition, you can send voice messages by tapping the microphone icon at the bottom right.
-WhatsApp also lets you customize your profile with a photo and a name that your contacts will see. You can also share statuses that disappear after 24 hours and can be photos, videos, or text. Statuses are a way of showing what you are doing or thinking at that moment.
-WhatsApp Business is a special version of WhatsApp designed for small and medium-sized businesses. It lets you create a professional profile with information about your business, such as your address, your opening hours, your website, your catalog of products or services, and so on. It also lets you communicate with your customers quickly and efficiently, using automated messages, custom labels, quick replies, and performance statistics.
-WhatsApp Business is free and can be downloaded from the official website (https://www.whatsapp.com/business/) or from your device's app store (Google Play Store for Android or App Store for iOS). To use it, you just need a phone number different from the one you use for regular WhatsApp.
-WhatsApp has many advantages and disadvantages that you should know about before using it. Here they are summarized in a table:
-V
Alternatives to WhatsApp
-Although WhatsApp is one of the most popular and complete apps on the market, there are other alternatives that also offer similar or even better services. Here are some of them:
-Telegram
-Telegram is an instant messaging and voice and video calling app known for its speed, security, and versatility. It lets you send and receive messages, files, stickers, GIFs, audio, and other content with end-to-end encryption. It also lets you make free calls and video calls with up to 1,000 people at the same time. In addition, it has features such as channels, bots, polls, secret chats, self-destructing messages, and folders for organizing chats. Telegram is free and can be downloaded from the official website (https://telegram.org/) or from your device's app store (Google Play Store for Android or App Store for iOS).
-Signal
-Signal is an instant messaging and voice and video calling app that focuses on the privacy and security of its users. It lets you send and receive messages, files, stickers, audio, and other content with end-to-end encryption. It also lets you make free calls and video calls with up to 8 people at the same time. In addition, it has features such as disappearing messages, screenshot blocking, locking with a fingerprint or PIN code, and safety verification. Signal is free and can be downloaded from the official website (https://signal.org/) or from your device's app store (Google Play Store for Android or App Store for iOS).
-iMessage
-iMessage is an instant messaging app exclusive to Apple devices. It lets you send and receive messages, files, emojis, stickers, GIFs, audio, and other content with end-to-end encryption. It also lets you make free calls and video calls with up to 32 people at the same time using FaceTime. In addition, it has features such as screen effects, reactions, Animoji, Memoji, and integration with other Apple apps. iMessage comes pre-installed on Apple devices and can be used with the same Apple ID.
-WhatsApp tips and tricks
-To finish this article, we want to share some tips and tricks that will help you get more out of WhatsApp. Here are some of them:
-How to reply to a specific message
-If you want to reply to a specific message within a chat or a group, just long-press the message you want to reply to and then tap the left-arrow icon at the top right. This makes your reply appear below the original message, which makes the context easier to follow.
-How to pin a chat to the top
-If you want to keep an important or frequent chat always in view, you can pin it to the top of the chat list. To do this, just long-press the chat you want to pin and then tap the pin icon at the top right. This makes the chat always appear in the first position, even if it is not the most recent one. You can pin up to 3 chats at a time.
-How to change the wallpaper of your chats
-If you want to customize the look of your chats, you can change the wallpaper that appears behind your messages.
To do this, just go into WhatsApp settings, then the chats option, and then the wallpaper option. There you can choose between different options, such as solid colors, default images, photos from your gallery, or dynamic wallpapers that change depending on the time of day.
-How to search your chats
-If you want to find a specific message, file, contact, or group within your chats, you can use WhatsApp's search function. To do this, just tap the magnifying glass icon at the top right and then type what you want to search for. You can see the results filtered by chats, content types, or dates.
-How to send your location to a contact
-If you want to share your location with a contact or a group, you can use WhatsApp's send-location function. To do this, just open the chat with the recipient and then tap the paperclip icon at the bottom right. Then choose the location option and you can send your current location or a live location that keeps updating for a set period of time.
-Conclusion
-WhatsApp is an instant messaging and voice and video calling app that lets you communicate with your contacts for free, easily, and securely. It has many features and functions that make it one of the best apps in the world, such as end-to-end encryption, groups, statuses, customization, and the business version. However, it also has some drawbacks, such as dependence on an internet connection, data consumption, and the potential for addiction. That is why it is important to know its alternatives, such as Telegram, Signal, or iMessage. It is also worth knowing some tips and tricks to get the most out of it, such as replying to a specific message, pinning a chat to the top, changing the wallpaper of your chats, searching your chats, or sending your location to a contact.
-We hope this article has been useful and interesting for you. If you have any questions or comments about WhatsApp, feel free to leave us a message. And if you liked this article, share it with your friends and family. Thank you for reading!
-Frequently asked questions
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azino777 Casino The Most Trusted and Reliable Online Casino.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azino777 Casino The Most Trusted and Reliable Online Casino.md
deleted file mode 100644
index a4e36b6bbb7a650d0517394880e9c1cb55ee7865..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Azino777 Casino The Most Trusted and Reliable Online Casino.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-Azino777 Casino Review: A Low Safety Index Online Casino
-Introduction
-If you are looking for an online casino to play at, you might have come across Azino777 Casino. This casino claims to offer a variety of games, bonuses, payment methods, and customer support options. But is it a safe and reliable place to gamble? Or is it a scam that you should avoid?
-azino777 casino
-Download File ››››› https://urlin.us/2uSWM6
-In this article, we will review Azino777 Casino based on the information we have gathered from various sources, including Casino Guru, a reputable website that provides honest and unbiased reviews of online casinos. We will look at the features, drawbacks, and safety index of this casino, and help you decide whether it is worth your time and money.
-What is Azino777 Casino?
-Azino777 Casino is an online casino that was established in 2011 by VictoryWillbeours N.V., a company registered in Curacao. The casino claims to offer over 1,000 games from 31 game providers, including slots, roulette, blackjack, video poker, bingo, baccarat, jackpot games, and live games. The casino also claims to offer generous bonuses and promotions, such as free spins, cashback, tournaments, and loyalty rewards.
-Why is it rated low by Casino Guru?
-According to Casino Guru, Azino777 Casino has a low safety index of 7.3/10. This means that the casino is not a good option for most players, as it has several issues that could affect their gaming experience and winnings. Some of these issues are:
Features of Azino777 Casino-Games and Providers-One of the positive aspects of Azino777 Casino is its game selection. The casino offers over 1,000 games from 31 game providers, such as NetEnt, Microgaming, Play'n GO, Pragmatic Play, Betsoft, Evolution Gaming, and more. The games are categorized into slots, roulette, blackjack, video poker, bingo, baccarat, jackpot games, and live games. The casino also has a search function that allows players to find their favorite games by name or provider. -azino777 casino review Bonuses and Promotions-Another feature that might attract some players to Azino777 Casino is its bonus offers and promotions. The casino claims to offer various types of bonuses and promotions for new and existing players. Some of these are: -
Payment Methods and Limits-Azino777 Casino offers a range of payment methods for its players to make deposits and withdrawals. Some of these methods are: -
The minimum deposit amount at Azino777 Casino is $10, and the maximum deposit amount is $5,000 per transaction. The minimum withdrawal amount is $20, and the maximum withdrawal amount is $5,000 per day, $10,000 per week, and $20,000 per month. The casino does not charge any fees for deposits or withdrawals, but some payment providers may apply their own fees. -Customer Support and Languages-Azino777 Casino provides customer support via email, phone, and live chat. The email address is support@azino777.com, and the phone number is +442038076569. The live chat option is available 24/7 on the casino's website. The customer support agents are friendly and helpful, but they may not be very fluent in English. -The casino's website is available in 11 languages: English, Russian, German, Spanish, Portuguese, Turkish, Polish, Finnish, Norwegian, Japanese, and Chinese. However, some of the translations may not be very accurate or clear. -Drawbacks of Azino777 Casino-Player Complaints and Black Points-As we mentioned earlier, Azino777 Casino has received 5 complaints from players in the Casino Guru database, resulting in 1,307 black points in total. These complaints are related to delayed or denied payments, unfair bonus terms, account verification problems, and poor customer service. Some of these complaints are still unresolved or have been closed without a satisfactory outcome for the players. -This indicates that the casino has a poor reputation among its customers and does not handle their issues in a fair or timely manner. Players who choose to play at this casino should be aware of the potential risks and problems they may encounter. -Unfair Terms and Conditions-Another drawback of Azino777 Casino is its bonus terms and conditions, which are unfair or unclear for players. For example, the casino reserves the right to change or cancel any bonus offer without prior notice or explanation. This means that players may not receive the bonus they expected or agreed to, or they may lose their bonus and winnings without any reason. -The casino also has a high wagering requirement of 50x for its bonuses, which is above the industry average. This means that players have to wager their bonus amount 50 times before they can withdraw their winnings. This makes it very hard for players to meet the requirement and cash out their money. -Furthermore, the casino has some other terms and conditions that are unfavorable for players. For example, the casino limits the maximum bet amount to $5 when playing with a bonus, and it does not allow players to play certain games with a bonus. The casino also has a clause that states that it can confiscate the winnings of players who use "strategies" or "systems" to play, without defining what these terms mean. -Licensing and Security Issues-The last drawback of Azino777 Casino is its licensing and security issues. The casino is licensed by the Curacao eGaming Authority, which is not a very reputable or trustworthy regulator. The Curacao license does not offer much protection or oversight for players, and it does not require the casino to follow strict standards of fairness, transparency, and responsibility. -The casino does not provide any information about its security measures or encryption technology on its website. This raises doubts about how safe and secure the casino is, and how well it protects the personal and financial data of its customers. The casino also appears on one blacklist that indicates some potential risk for players. 
-Conclusion-Azino777 Casino is an online casino that offers a variety of games, bonuses, payment methods, and customer support options. However, it also has several issues that could affect the gaming experience and winnings of its players. These issues include player complaints and black points, unfair terms and conditions, licensing and security issues, and a low safety index rating by Casino Guru. -Therefore, we do not recommend this casino to our readers, as there are many other online casinos that are safer and more reliable. If you still want to try this casino, we advise you to be careful and cautious, and to read the terms and conditions carefully before accepting any bonus or making any deposit. -FAQs-
- - \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool Guideline for Windows A Simple Program to Help You Aim Better.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool Guideline for Windows A Simple Program to Help You Aim Better.md deleted file mode 100644 index 8fac9162c8daab50bad7bfb3aae9a63b1e042df4..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 8 Ball Pool Guideline for Windows A Simple Program to Help You Aim Better.md +++ /dev/null @@ -1,194 +0,0 @@ - - Download 8 Ball Pool Guideline: How to Improve Your Game and Win More Matches-If you are a fan of pool games, you have probably heard of or played 8 Ball Pool, the most popular online pool game in the world. 8 Ball Pool is a game that lets you compete with players from all over the world in various modes and tournaments. You can also customize your cue, table, and avatar, and chat with your opponents. -download 8 ball pool guidelineDownload File » https://urlin.us/2uSUfG - But playing 8 Ball Pool is not as easy as it seems. You need to have a good eye, a steady hand, and a lot of practice to master the game. And sometimes, even that is not enough. You may find yourself losing to players who seem to have an unfair advantage over you. How do they do it? -The answer is simple: they use a guideline tool. A guideline tool is an app that helps you aim better and make more accurate shots. It shows you the trajectory of the cue ball and the target ball, as well as the angles and distances involved. It also helps you with tricky shots like cushion shots and bank shots. -In this article, we will show you how to download and use one of the best guideline tools for 8 Ball Pool: 8 Pool Master - Guideline Tool. We will also give you some tips and tricks on how to play better and win more matches with a guideline tool. Let's get started! -Introduction-What is 8 Ball Pool and why do you need a guideline?-8 Ball Pool is a game that simulates the real-life pool game of the same name. It is played with a cue ball and 15 other balls appropriately numbered. Numbers 1 through 7 are solids or spots while 9 through 15 are stripes. The goal is to choose either solids or stripes and pocket them all before finally pocketing the 8 ball. -The game has various modes and features, such as: -download 8 ball pool guideline tool apk
While 8 Ball Pool is fun and addictive, it can also be frustrating if you keep losing to players who seem to have better skills than you. That's where a guideline tool comes in handy. A guideline tool can help you improve your game by showing you where to aim, how much power to use, and how to adjust for spin and angle. It can also help you with difficult shots that require more precision and strategy. -What are the benefits of using a guideline tool?-Using a guideline tool for 8 Ball Pool Using a guideline tool for 8 Ball Pool can give you many benefits, such as: -
How to download and use 8 Pool Master - Guideline Tool?-One of the best guideline tools for 8 Ball Pool is 8 Pool Master - Guideline Tool. It is a free app that you can download from the Google Play Store or the App Store. It is compatible with any device that runs Android or iOS. It is also easy to use and has many features, such as: -
To download and use 8 Pool Master - Guideline Tool, follow these steps: -
Tips and Tricks for Playing 8 Ball Pool with a Guideline-Choose your tables wisely-One of the first things you need to do when playing 8 Ball Pool is to choose a table that suits your skill level and budget. There are different types of tables in 8 Ball Pool, such as: -
As you can see, the higher the table, the higher the entry fee and the potential reward. But also, the higher the difficulty and the risk. You should choose a table that matches your skill level and your coin balance. Don't play on a table that is too easy or too hard for you, or that you can't afford to lose. You can also practice on offline tables or with your friends before playing on online tables. -Buy a better cue-Another thing you need to do when playing 8 Ball Pool is to buy a better cue that can enhance your performance. There are many cues in 8 Ball Pool, each with different attributes, such as: -
You can buy cues with coins or cash in the shop, or win them in spin & win or scratch & win. You can also upgrade your cues with coins to improve their attributes. You should choose a cue that suits your play style and preference. For example, if you like to make powerful shots, you should choose a cue with high force. If you like to make precise shots, you should choose a cue with high aim. And so on. -Use a little English-One of the most important skills in 8 Ball Pool is to use spin or English on the cue ball. Spin can affect the direction and speed of the cue ball after hitting the target ball or the cushion. It can help you make more shots, avoid scratches, and position yourself for the next shot. There are four types of spin in 8 Ball Pool, such as: -
You can apply spin to the cue ball by using the spin control feature in 8 Pool Master - Guideline Tool. You can also see how spin affects your shot by using the guideline on your screen. You should use spin wisely and sparingly, as too much spin can make your shot unpredictable and inaccurate. You should also practice using spin on offline tables or with your friends before using it on online tables. -Shoot faster-Another skill that can help you win more matches in 8 Ball Pool is to shoot faster. Shooting faster means making your shots in less time and avoiding running out of time or wasting time. Shooting faster can give you many advantages, such as: -
You can shoot faster by using 8 Pool Master - Guideline Tool, which can help you aim better and make more accurate shots in less time. You can also use the shot timer feature to see how much time you have left to make your shot, and avoid running out of time or rushing your shot. You should also practice shooting faster on offline tables or with your friends before shooting faster on online tables. -Extend your aim-The last skill that we will share with you in this article is to extend your aim. Extending your aim means aiming beyond the target ball and seeing where it will go after hitting it. Extending your aim can help you make more shots, especially when there are obstacles or other balls in the way. It can also help you position yourself for the next shot and plan your strategy ahead. -You can extend your aim by using 8 Pool Master - Guideline Tool, which can show you the trajectory of both You can extend your aim by using 8 Pool Master - Guideline Tool, which can show you the trajectory of both the cue ball and the target ball, as well as the angles and distances involved. You can also adjust the length and width of the guideline to see more or less of the path. You should also practice extending your aim on offline tables or with your friends before extending your aim on online tables. -Conclusion-Summary of the main points-In this article, we have shown you how to download and use 8 Pool Master - Guideline Tool, one of the best guideline tools for 8 Ball Pool. We have also given you some tips and tricks on how to play better and win more matches with a guideline tool. Here are the main points we have covered: -
Call to action-We hope you have enjoyed reading this article and learned something new. If you want to improve your game and win more matches in 8 Ball Pool, we highly recommend you to download and use 8 Pool Master - Guideline Tool. It is a free app that can help you aim better and make more accurate shots. It is also easy to use and has many features. You can download it from the links below: -Download 8 Pool Master - Guideline Tool for Android -Download 8 Pool Master - Guideline Tool for iOS -Thank you for reading this article. Please share it with your friends and family who love playing 8 Ball Pool. And don't forget to leave us a comment below and tell us what you think about 8 Pool Master - Guideline Tool. We would love to hear from you! -FAQs-Q1: Is 8 Pool Master - Guideline Tool safe to use?-A1: Yes, 8 Pool Master - Guideline Tool is safe to use. It does not contain any viruses or malware. It also does not require any root or jailbreak access. It only uses your device's camera and storage to show you the guideline on your screen. It does not interfere with or modify any other apps or data on your device. -Q2: Does 8 Pool Master - Guideline Tool work for all devices?-A2: Yes, 8 Pool Master - Guideline Tool works for all devices that run Android or iOS. It is compatible with any screen size and resolution. It also works for any version of 8 Ball Pool. -Q3: How can I get more coins and cash in 8 Ball Pool?-A3: There are several ways to get more coins and cash in 8 Ball Pool, such as: -
Q4: What are the best cues to use in 8 Ball Pool?-A4: The best cues to use in 8 Ball Pool A4: The best cues to use in 8 Ball Pool depend on your personal preference and play style. However, some general factors to consider when choosing a cue are:
Some examples of the best cues to use in 8 Ball Pool are: -
Of course, these are not the only cues available in 8 Ball Pool. You can explore the shop and find more cues that suit your taste and budget. You can also try different cues and see how they affect your game. -Q5: How can I improve my skills in 8 Ball Pool?-A5: The best way to improve your skills in 8 Ball Pool is to practice a lot and learn from your mistakes. You can also use 8 Pool Master - Guideline Tool to help you aim better and make more accurate shots. However, you should not rely on the guideline tool too much, as it can make you lazy and dependent. You should also try to develop your own intuition and judgment, and use the guideline tool as a guide, not a crutch. -Another way to improve your skills in 8 Ball Pool is to watch and learn from other players, especially those who are better than you. You can watch replays of your matches or other players' matches, and see how they make their shots, what strategies they use, and what mistakes they avoid. You can also join clubs and chat with other players, and ask them for tips and advice. You can also challenge them to friendly matches and see how you compare to them. -Finally, you can also read articles and watch videos online that teach you how to play better and win more matches in 8 Ball Pool. There are many resources available on the internet that can help you improve your game, such as blogs, forums, websites, YouTube channels, podcasts, etc. You can also check out our other articles on 8 Ball Pool, where we share more tips and tricks on how to play better and win more matches. 197e85843d- - \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 89.0.2 Firefox A Stable and Reliable Browser with Software WebRender.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 89.0.2 Firefox A Stable and Reliable Browser with Software WebRender.md deleted file mode 100644 index 290be07bfc6ddcdec96545f7b9f8f2e2f6c49369..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 89.0.2 Firefox A Stable and Reliable Browser with Software WebRender.md +++ /dev/null @@ -1,254 +0,0 @@ - - - Mention some common scenarios where people might search for it | | H2: How to download Firefox 89.0.2 | - Provide a brief overview of Firefox and its features - Explain why someone might want to download Firefox 89.0.2 - Provide a step-by-step guide on how to download and install Firefox 89.0.2 on different operating systems | | H3: How to download .NET 6.0 | - Provide a brief overview of .NET and its features - Explain why someone might want to download .NET 6.0 - Provide a step-by-step guide on how to download and install .NET 6.0 on different operating systems | | H3: How to update PS3 console system software | - Provide a brief overview of PS3 and its features - Explain why someone might want to update PS3 console system software - Provide a step-by-step guide on how to update PS3 console system software using different methods | | H2: Benefits of downloading and updating software | - Explain the advantages of downloading and updating software regularly - Provide some examples of benefits such as improved performance, security, compatibility, and functionality | | H2: Risks of downloading and updating software | - Explain the potential drawbacks of downloading and updating software without caution - Provide some examples of risks such as malware, bugs, compatibility issues, and data loss | | H2: Tips for downloading and updating software safely and 
efficiently | - Provide some best practices for downloading and updating software such as checking the source, verifying the file, backing up data, and following instructions | | H4: Conclusion | - Summarize the main points of the article - Provide a call to action for the readers | Table 2: Article with HTML formatting What does "download 89 6" mean?-If you have searched for "download 89 6" on the internet, you might be wondering what it means and what you can do with it. There are several possible interpretations of this query, depending on what you are looking for and what you are interested in. -One possibility is that you are looking for a specific version of a software or an application that has the number 89 or 6 in its name or version number. For example, you might be looking for Firefox 89.0.2, which is the latest version of the popular web browser from Mozilla. Or you might be looking for .NET 6.0, which is the latest version of the open-source development platform from Microsoft. -download 89 6Download ✸ https://urlin.us/2uSStn - Another possibility is that you are looking for a way to update your PS3 console system software, which is also known as firmware. The PS3 is a gaming console from Sony that was released in 2006 and discontinued in 2017. The latest version of the PS3 system software is 4.88, which was released in June 2021. However, some people might mistakenly search for "download 89 6" instead of "download 4.88" or "download PS3 firmware". -In any case, if you are looking for "download 89 6", you have come to the right place. In this article, we will show you how to download and install Firefox 89.0.2, .NET 6.0, and PS3 system software update using different methods and devices. We will also explain the benefits and risks of downloading and updating software, and provide some tips for doing it safely and efficiently. -How to download Firefox 89.0.2-Firefox is one of the most popular web browsers in the world, with over 200 million users as of June 2021. Firefox is developed by Mozilla, a non-profit organization that promotes openness, innovation, and privacy on the internet. Firefox offers many features such as fast performance, customizable interface, tabbed browsing, private browsing, sync across devices, extensions, themes, bookmarks, password manager, pop-up blocker, spell checker, and more. -download firefox 89.0.2 for windows If you want to download Firefox 89.0.2, you might have several reasons for doing so. For example, you might want to enjoy the new design and features that were introduced in Firefox 89, such as simplified toolbar, streamlined menus, improved tabs, enhanced privacy protection, and more. Or you might want to fix some issues that were caused by previous versions of Firefox, such as crashes, freezes, or compatibility problems. Or you might want to update your Firefox to the latest version to ensure that you have the most secure and stable browser available. Whatever your reason is, downloading and installing Firefox 89.0.2 is easy and fast. You can do it on different operating systems, such as Windows, Mac, Linux, Android, and iOS. Here are the steps you need to follow: How to download Firefox 89.0.2 on Windows-If you are using a Windows PC, you can download Firefox 89.0.2 from the official Mozilla website. Here is how: -
How to download Firefox 89.0.2 on Mac-If you are using a Mac computer, you can download Firefox 89.0.2 from the official Mozilla website. Here is how: -
How to download Firefox 89.0.2 on Linux-If you are using a Linux system, you can download Firefox 89.0.2 from the official Mozilla website. Here is how: -
How to download Firefox 89.0.2 on Android-If you are using an Android device, such as a smartphone or a tablet, you can download Firefox 89.0.2 from the Google Play Store. Here is how: -
How to download Firefox 89.0.2 on iOS-If you are using an iOS device, such as an iPhone or an iPad, you can download Firefox 89.0.2 from the App Store. Here is how: -
How to download .NET 6.0-.NET is an open-source development platform that allows you to create various types of applications, such as web, mobile, desktop, cloud, gaming, IoT, and more. .NET is developed by Microsoft and supported by a large community of developers and contributors. .NET offers many features such as cross-platform compatibility, high performance, modern languages, rich libraries, integrated tools, and more. -If you want to download .NET 6.0, you might have several reasons for doing so. For example, you might want to use the latest features and improvements that were introduced in .NET 6, such as minimal APIs, hot reload, Blazor desktop, MAUI, C# 10, F# 6, and more. Or you might want to fix some issues that were caused by previous versions of .NET, such as bugs, errors, or security vulnerabilities. Or you might want to update your .NET to the latest version to ensure that you have the most reliable and efficient development platform available. -Whatever your reason is, downloading and installing .NET 6.0 is easy and fast. You can do it on different operating systems, such as Windows, Mac, Linux, Android, and iOS. Here are the steps you need to follow: -How to download .NET 6.0 on Windows-If you are using a Windows PC, you can download .NET 6.0 from the official Microsoft website. Here is how: -
How to download .NET 6.0 on Mac-If you are using a Mac computer, you can download .NET 6.0 from the official Microsoft website. Here is how: -
How to download .NET 6.0 on Linux-If you are using a Linux system, you can download .NET 6.0 from the official Microsoft website. Here is how: -
How to update PS3 console system software-The PS3 is a gaming console from Sony that was released in 2006 and discontinued in 2017. The PS3 offers many features such as playing games, watching movies, streaming music, browsing the web, accessing online services, and more. The PS3 also has a system software, also known as firmware, that controls the basic operations of the console and provides various functions and settings. -If you want to update your PS3 console system software, you might have several reasons for doing so. For example, you might want to use the latest features and improvements that were introduced in the latest version of the PS3 system software, such as stability enhancements, security patches, bug fixes, and more. Or you might want to fix some issues that were caused by previous versions of the PS3 system software, such as errors, freezes, or compatibility problems. Or you might want to update your PS3 system software to ensure that you have the most secure and stable console available. -Whatever your reason is, updating your PS3 console system software is easy and fast. You can do it using different methods, such as online update, storage media update, or data transfer utility update. Here are the steps you need to follow: -How to update PS3 console system software using online update-If you have an internet connection and a PlayStation Network account, you can update your PS3 console system software using online update. Here is how: -
How to update PS3 console system software using storage media update-If you do not have an internet connection or a PlayStation Network account, you can update your PS3 console system software using storage media update. You will need a USB flash drive or a memory card with at least 200 MB of free space and a computer with an internet connection. Here is how: -
How to update PS3 console system software using data transfer utility update-If you have another PS3 console that has a newer version of the system software, you can update your PS3 console system software using data transfer utility update. You will need an Ethernet cable and both consoles connected to the same power source. Here is how: -
Benefits of downloading and updating software-Downloading and updating software regularly can bring you many benefits, such as improved performance, security, compatibility, and functionality. Here are some examples of how downloading and updating software can enhance your experience: -
As you can see, downloading and updating software can make your devices more powerful, secure, compatible, and functional. However, downloading and updating software also comes with some risks that you should be aware of. -Risks of downloading and updating software-Downloading and updating software without caution can expose your devices to some potential drawbacks, such as malware, bugs, compatibility issues, and data loss. Here are some examples of how downloading and updating software can harm your experience: -
As you can see, downloading and updating software can expose your devices to some risks that you should be careful of. However, you can avoid or minimize these risks by following some tips for downloading and updating software safely and efficiently. -Tips for downloading and updating software safely and efficiently-Downloading and updating software can be a rewarding and enjoyable experience if you do it right. Here are some best practices for downloading and updating software that can help you avoid or reduce the risks mentioned above: -
By following these tips, you can download and update software safely and efficiently. You can enjoy the benefits of downloading and updating software without worrying about the risks. -Conclusion-In this article, we have explained what "download 89 6" means and what you can do with it. We have shown you how to download and install Firefox 89.0.2, .NET 6.0, and PS3 system software update using different methods and devices. We have also explained the benefits and risks of downloading and updating software, and provided some tips for doing it safely and efficiently. -We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading! -FAQs-Here are some frequently asked questions about downloading and updating software: -
- - \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/A Step-by-Step Guide to Downloading and Configuring the YouTube API Key.xml.md b/spaces/1phancelerku/anime-remove-background/A Step-by-Step Guide to Downloading and Configuring the YouTube API Key.xml.md deleted file mode 100644 index 25f018ee21b1073626316724e6cd749be29be912..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/A Step-by-Step Guide to Downloading and Configuring the YouTube API Key.xml.md +++ /dev/null @@ -1,81 +0,0 @@ - - How to Download YouTube API Key.xml File-If you want to interact with YouTube using your application, you need to have a YouTube API key.xml file. This file contains your authorization credentials that allow you to access the YouTube Data API v3. In this article, you will learn what is YouTube API key.xml file, why do you need it, how to create it, and how to use it. -youtube api key.xml downloadDownload →→→ https://jinyurl.com/2uNU8R - What is YouTube API Key.xml File?-A YouTube API key.xml file is a file that contains your API key, which is a unique identifier that your application uses to communicate with the YouTube Data API v3. The API key is a string of characters that you can find in the Google API Console. The XML format of the file makes it easy to store and transfer the key. -Why do you need YouTube API Key.xml File?-You need YouTube API key.xml file because it enables you to use the YouTube Data API v3, which is a service that allows you to add a variety of YouTube features to your application. For example, you can use the API to upload videos, manage playlists and subscriptions, update channel settings, and more. Without the API key, you cannot access the API and perform these operations. -How to create YouTube API Key.xml File?-To create YouTube API key.xml file, you need to follow these steps: -Step 1: Create a project in Google API Console-Go to https://developers.google.com/youtube/registering_an_application and sign in with your Google account. Click on "Create Project" and enter a name for your project. Click on "Create" and wait for the project to be created. -How to create youtube api key.xml file Step 2: Enable YouTube Data API v3-In the Google API Console, select your project and go to "APIs & Services" > "Library". Search for "YouTube Data API v3" and click on it. Click on "Enable" and wait for the API to be enabled. -Step 3: Create an API key-In the Google API Console, go to "APIs & Services" > "Credentials". Click on "Create Credentials" > "API key". A pop-up window will show your new API key. Copy the key and click on "Close". You can also restrict the key by clicking on "Restrict key" and selecting one of the options. -Step 4: Download the API key.xml file-In the Google API Console, go to "APIs & Services" > "Credentials". Click on the name of your API key. Under "Key restrictions", click on "Download XML". A file named api_key.xml will be downloaded to your computer. This is your YouTube API key.xml file. -How to use YouTube API Key.xml File?-To use YouTube API key.xml file, you need to include it in your application code and send it along with your requests to the YouTube Data API v3. Depending on the type of request, you may also need to send an OAuth 2.0 token, which is another type of authorization credential that grants access to private user data. You can learn more about OAuth 2.0 here. 
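To make the section above concrete, here is a minimal Python sketch of one possible way to load the key from the downloaded file and send a simple read-only request. The article does not prescribe a language or library; google-api-python-client is an assumption here, and the `<key>` element name is hypothetical, so check the actual structure of your api_key.xml before parsing it.

```python
# Minimal sketch: read the API key from the downloaded api_key.xml and make a
# read-only request against the YouTube Data API v3.
# Assumptions: google-api-python-client is installed and api_key.xml contains a
# simple <key>...</key> element (adjust the parsing to your actual file).
import xml.etree.ElementTree as ET

from googleapiclient.discovery import build

api_key = ET.parse("api_key.xml").getroot().findtext("key")  # hypothetical element name

youtube = build("youtube", "v3", developerKey=api_key)
response = youtube.search().list(part="snippet", q="official trailer", maxResults=5).execute()
for item in response.get("items", []):
    print(item["snippet"]["title"])
```

A request like this only reads public data, so the API key alone is enough; the write operations described next also need an OAuth 2.0 token.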
-How to upload videos using YouTube API Key.xml File?-To upload videos using YouTube API Key.xml file, you need to use the videos.insert method of the YouTube Data API v3. You also need to send an OAuth 2.0 token that has the https://www.googleapis.com/auth/youtube.upload scope. You can use one of the following methods to upload videos: - Simple upload: This method is suitable for small files (less than 15 MB) and does not support resumable uploads. You can use the HTTP POST method and send the video metadata and content in a single request. You can learn more about simple upload here. - Resumable upload: This method is suitable for large files (more than 15 MB) and supports resumable uploads. You can use the HTTP POST method and send the video metadata in the first request, and then send the video content in one or more subsequent requests. You can learn more about resumable upload here. How to manage playlists and subscriptions using YouTube API Key.xml File?-To manage playlists and subscriptions using YouTube API Key.xml file, you need to use the playlists and subscriptions resources of the YouTube Data API v3. You also need to send an OAuth 2.0 token that has the https://www.googleapis.com/auth/youtube scope. You can use the following methods to manage playlists and subscriptions: - playlists.insert: This method allows you to create a new playlist. You need to send a playlist resource that contains the playlist title, description, and privacy status. You can learn more about playlists.insert here. - playlists.update: This method allows you to update an existing playlist. You need to send a playlist resource that contains the playlist ID and the updated fields. You can learn more about playlists.update here. - playlists.delete: This method allows you to delete an existing playlist. You need to send the playlist ID as a parameter. You can learn more about playlists.delete here. - subscriptions.insert: This method allows you to subscribe to a channel. You need to send a subscription resource that contains the channel ID of the channel you want to subscribe to. You can learn more about subscriptions.insert here. - subscriptions.delete: This method allows you to unsubscribe from a channel. You need to send the subscription ID as a parameter. You can learn more about subscriptions.delete here. -How to update channel settings using YouTube API Key.xml File?-To update channel settings using YouTube API Key.xml file, you need to use the channels resource of the YouTube Data API v3. You also need to send an OAuth 2.0 token that has the https://www.googleapis.com/auth/youtube scope. You can use the following method to update channel settings: - channels.update: This method allows you to update your channel's metadata, branding, and features. You need to send a channel resource that contains the channel ID and the updated fields. You can learn more about channels.update here. -Conclusion-In this article, you learned how to download YouTube API key.xml file, which is a file that contains your authorization credentials for accessing the YouTube Data API v3. You also learned how to use YouTube API key.xml file to upload videos, manage playlists and subscriptions, and update channel settings using the YouTube Data API v3. 
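As a companion to the upload steps above, here is a minimal Python sketch of the videos.insert call with a resumable upload, again assuming google-api-python-client plus google-auth-oauthlib (a library choice this article does not mandate). The client secret file, video file name, and metadata are placeholders, and the OAuth consent screen must already be configured for your project.

```python
# Minimal sketch: obtain OAuth 2.0 credentials with the youtube.upload scope,
# then upload a video with videos.insert using a resumable upload.
# "client_secret.json" and "my_video.mp4" are placeholder file names.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json", scopes=["https://www.googleapis.com/auth/youtube.upload"]
)
credentials = flow.run_local_server(port=0)  # opens a browser window for user consent

youtube = build("youtube", "v3", credentials=credentials)
request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "My test video", "description": "Uploaded via the API"},
        "status": {"privacyStatus": "private"},
    },
    media_body=MediaFileUpload("my_video.mp4", resumable=True),
)
response = request.execute()  # performs the (resumable) upload
print("Uploaded video id:", response["id"])
```

The same authorized client can then call playlists.insert, subscriptions.insert, or channels.update as described above, provided the token carries the broader youtube scope those methods require.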
-FAQs-Here are some frequently asked questions about YouTube API key.xml file: -Q: How do I get an OAuth 2.0 token?-A: To get an OAuth 2.0 token, you need to follow the OAuth 2.0 authorization flow, which involves requesting user consent, exchanging authorization code for access token, and refreshing access token when it expires. You can learn more about OAuth 2.0 authorization flow here. -Q: How do I store and secure my YouTube API key.xml file?-A: To store and secure your YouTube API key.xml file, you need to follow some best practices, such as encrypting the file, storing it in a safe location, limiting its access, and rotating it regularly. You can learn more about storing and securing your YouTube API key.xml file here. -Q: How do I troubleshoot errors when using YouTube API key.xml file?-A: To troubleshoot errors when using YouTube API key.xml file, you need to check the error response code and message, which indicate the cause of the error and possible solutions. You can learn more about troubleshooting errors when using YouTube API key.xml file here. -Q: How do I monitor and optimize the performance of my YouTube API key.xml file?-A: To monitor and optimize the performance of your YouTube API key.xml file, you need to use the Google API Console, which provides various tools and reports to help you track and improve your API usage, quota, and billing. You can learn more about monitoring and optimizing the performance of your YouTube API key.xml file here. -Q: How do I find more resources and support for using YouTube API key.xml file?-A: To find more resources and support for using YouTube API key.xml file, you can visit the following websites: - YouTube Data API v3 Documentation: This website provides detailed information and examples on how to use the YouTube Data API v3. You can visit the website here. - YouTube Data API v3 Reference: This website provides a complete list and description of all the methods, parameters, and resources of the YouTube Data API v3. You can visit the website here. - YouTube Data API v3 Forum: This website provides a platform for developers to ask questions, share ideas, and get help from other developers who use the YouTube Data API v3. You can visit the website here. 401be4b1e0- - \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Beach Buggy Racing for Free and Enjoy Off-Road Mayhem.md b/spaces/1phancelerku/anime-remove-background/Download Beach Buggy Racing for Free and Enjoy Off-Road Mayhem.md deleted file mode 100644 index b3a430cfb4048346a2b8d23dfb20e8cf9cbcc30b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Beach Buggy Racing for Free and Enjoy Off-Road Mayhem.md +++ /dev/null @@ -1,100 +0,0 @@ - - Beach Buggy Racing Download: A Guide to the Best Kart Racing Game for Your Device-If you're looking for a fun and exciting kart racing game that will make you feel like you're on a tropical island, then you should definitely check out Beach Buggy Racing. This game is a sequel to the popular Beach Buggy Blitz, and it offers more features, more tracks, more cars, more power-ups, and more fun than ever before. In this article, we'll tell you what Beach Buggy Racing is, why you should download it, and how to get it on your device. 
-beach buggy racing downloadDownload Zip ○○○ https://jinyurl.com/2uNPbP - What is Beach Buggy Racing?-A fun and colorful kart racing game with power-ups and customization-Beach Buggy Racing is a 3D kart racing game that lets you drive into an action-packed world of off-road mayhem. You can race against a field of rival drivers, each with their own personality and special ability. You can also collect and upgrade a variety of cars, from dune buggies to muscle cars to lunar rovers. And you can use over 25 different power-ups, like Dodgeball Frenzy, Fireball, and Oil Slick, to fight your way to the finish line. -The sequel to the popular Beach Buggy Blitz-Beach Buggy Racing is the official sequel to Beach Buggy Blitz, the free driving game with over 30 million players worldwide. Beach Buggy Blitz was a simple but addictive game that challenged you to drive as far as possible on a randomly generated island. Beach Buggy Racing takes this concept to the next level by adding more depth, variety, and challenge to the gameplay. -Available for Android, iOS, Windows, and Xbox One-Beach Buggy Racing is available for Android, iOS, Windows, and Xbox One devices. You can download it for free from the respective app stores or online platforms. The game is supported by ads and in-app purchases, but you can also upgrade to the premium version for a one-time fee. The premium version gives you infinite tickets (which you need to enter races), split-screen multiplayer mode (where you can race with up to four friends on one device), and removes all ads. -Why should you download Beach Buggy Racing?-Exc |
---|
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|
R50-C4 | -1x | -0.551 | -0.102 | -4.8 | -35.7 | -137257644 | -model | metrics | -
R50-DC5 | -1x | -0.380 | -0.068 | -5.0 | -37.3 | -137847829 | -model | metrics | -
R50-FPN | -1x | -0.210 | -0.038 | -3.0 | -37.9 | -137257794 | -model | metrics | -
R50-C4 | -3x | -0.543 | -0.104 | -4.8 | -38.4 | -137849393 | -model | metrics | -
R50-DC5 | -3x | -0.378 | -0.070 | -5.0 | -39.0 | -137849425 | -model | metrics | -
R50-FPN | -3x | -0.209 | -0.038 | -3.0 | -40.2 | -137849458 | -model | metrics | -
R101-C4 | -3x | -0.619 | -0.139 | -5.9 | -41.1 | -138204752 | -model | metrics | -
R101-DC5 | -3x | -0.452 | -0.086 | -6.1 | -40.6 | -138204841 | -model | metrics | -
R101-FPN | -3x | -0.286 | -0.051 | -4.1 | -42.0 | -137851257 | -model | metrics | -
X101-FPN | -3x | -0.638 | -0.098 | -6.7 | -43.0 | -139173657 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|
R50 | -1x | -0.205 | -0.041 | -4.1 | -37.4 | -190397773 | -model | metrics | -
R50 | -3x | -0.205 | -0.041 | -4.1 | -38.7 | -190397829 | -model | metrics | -
R101 | -3x | -0.291 | -0.054 | -5.2 | -40.4 | -190397697 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-prop. AR |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
RPN R50-C4 | -1x | -0.130 | -0.034 | -1.5 | -- | 51.6 | -137258005 | -model | metrics | -
RPN R50-FPN | -1x | -0.186 | -0.032 | -2.7 | -- | 58.0 | -137258492 | -model | metrics | -
Fast R-CNN R50-FPN | -1x | -0.140 | -0.029 | -2.6 | -37.8 | -- | 137635226 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
R50-C4 | -1x | -0.584 | -0.110 | -5.2 | -36.8 | -32.2 | -137259246 | -model | metrics | -
R50-DC5 | -1x | -0.471 | -0.076 | -6.5 | -38.3 | -34.2 | -137260150 | -model | metrics | -
R50-FPN | -1x | -0.261 | -0.043 | -3.4 | -38.6 | -35.2 | -137260431 | -model | metrics | -
R50-C4 | -3x | -0.575 | -0.111 | -5.2 | -39.8 | -34.4 | -137849525 | -model | metrics | -
R50-DC5 | -3x | -0.470 | -0.076 | -6.5 | -40.0 | -35.9 | -137849551 | -model | metrics | -
R50-FPN | -3x | -0.261 | -0.043 | -3.4 | -41.0 | -37.2 | -137849600 | -model | metrics | -
R101-C4 | -3x | -0.652 | -0.145 | -6.3 | -42.6 | -36.7 | -138363239 | -model | metrics | -
R101-DC5 | -3x | -0.545 | -0.092 | -7.6 | -41.9 | -37.3 | -138363294 | -model | metrics | -
R101-FPN | -3x | -0.340 | -0.056 | -4.6 | -42.9 | -38.6 | -138205316 | -model | metrics | -
X101-FPN | -3x | -0.690 | -0.103 | -7.2 | -44.3 | -39.5 | -139653917 | -model | metrics | -
Name | -epochs | -train time (s/im) |
-inference time (s/im) |
-box AP |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|
R50-FPN | -100 | -0.376 | -0.069 | -44.6 | -40.3 | -42047764 | -model | metrics | -
R50-FPN | -200 | -0.376 | -0.069 | -46.3 | -41.7 | -42047638 | -model | metrics | -
R50-FPN | -400 | -0.376 | -0.069 | -47.4 | -42.5 | -42019571 | -model | metrics | -
R101-FPN | -100 | -0.518 | -0.073 | -46.4 | -41.6 | -42025812 | -model | metrics | -
R101-FPN | -200 | -0.518 | -0.073 | -48.0 | -43.1 | -42131867 | -model | metrics | -
R101-FPN | -400 | -0.518 | -0.073 | -48.9 | -43.7 | -42073830 | -model | metrics | -
regnetx_4gf_dds_FPN | -100 | -0.474 | -0.071 | -46.0 | -41.3 | -42047771 | -model | metrics | -
regnetx_4gf_dds_FPN | -200 | -0.474 | -0.071 | -48.1 | -43.1 | -42132721 | -model | metrics | -
regnetx_4gf_dds_FPN | -400 | -0.474 | -0.071 | -48.6 | -43.5 | -42025447 | -model | metrics | -
regnety_4gf_dds_FPN | -100 | -0.487 | -0.073 | -46.1 | -41.6 | -42047784 | -model | metrics | -
regnety_4gf_dds_FPN | -200 | -0.487 | -0.072 | -47.8 | -43.0 | -42047642 | -model | metrics | -
regnety_4gf_dds_FPN | -400 | -0.487 | -0.072 | -48.2 | -43.3 | -42045954 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-kp. AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
R50-FPN | -1x | -0.315 | -0.072 | -5.0 | -53.6 | -64.0 | -137261548 | -model | metrics | -
R50-FPN | -3x | -0.316 | -0.066 | -5.0 | -55.4 | -65.5 | -137849621 | -model | metrics | -
R101-FPN | -3x | -0.390 | -0.076 | -6.1 | -56.4 | -66.1 | -138363331 | -model | metrics | -
X101-FPN | -3x | -0.738 | -0.121 | -8.7 | -57.3 | -66.0 | -139686956 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-PQ | -model id | -download | - - -
---|---|---|---|---|---|---|---|---|---|
R50-FPN | -1x | -0.304 | -0.053 | -4.8 | -37.6 | -34.7 | -39.4 | -139514544 | -model | metrics | -
R50-FPN | -3x | -0.302 | -0.053 | -4.8 | -40.0 | -36.5 | -41.5 | -139514569 | -model | metrics | -
R101-FPN | -3x | -0.392 | -0.066 | -6.0 | -42.4 | -38.5 | -43.0 | -139514519 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
R50-FPN | -1x | -0.292 | -0.107 | -7.1 | -23.6 | -24.4 | -144219072 | -model | metrics | -
R101-FPN | -1x | -0.371 | -0.114 | -7.8 | -25.6 | -25.9 | -144219035 | -model | metrics | -
X101-FPN | -1x | -0.712 | -0.151 | -10.2 | -26.7 | -27.1 | -144219108 | -model | metrics | -
Name | -train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-box AP50 |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
R50-FPN, Cityscapes | -0.240 | -0.078 | -4.4 | -- | - | 36.5 | -142423278 | -model | metrics | -
R50-C4, VOC | -0.537 | -0.081 | -4.8 | -51.9 | -80.3 | -- | 142202221 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
Baseline R50-FPN | -1x | -0.261 | -0.043 | -3.4 | -38.6 | -35.2 | -137260431 | -model | metrics | -
Deformable Conv | -1x | -0.342 | -0.048 | -3.5 | -41.5 | -37.5 | -138602867 | -model | metrics | -
Cascade R-CNN | -1x | -0.317 | -0.052 | -4.0 | -42.1 | -36.4 | -138602847 | -model | metrics | -
Baseline R50-FPN | -3x | -0.261 | -0.043 | -3.4 | -41.0 | -37.2 | -137849600 | -model | metrics | -
Deformable Conv | -3x | -0.349 | -0.047 | -3.5 | -42.7 | -38.5 | -144998336 | -model | metrics | -
Cascade R-CNN | -3x | -0.328 | -0.053 | -4.0 | -44.3 | -38.5 | -144998488 | -model | metrics | -
Name | -lr sched |
-train time (s/iter) |
-inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-model id | -download | - - -
---|---|---|---|---|---|---|---|---|
Baseline R50-FPN | -3x | -0.261 | -0.043 | -3.4 | -41.0 | -37.2 | -137849600 | -model | metrics | -
GN | -3x | -0.309 | -0.060 | -5.6 | -42.6 | -38.6 | -138602888 | -model | metrics | -
SyncBN | -3x | -0.345 | -0.053 | -5.5 | -41.9 | -37.8 | -169527823 | -model | metrics | -
GN (from scratch) | -3x | -0.338 | -0.061 | -7.2 | -39.9 | -36.6 | -138602908 | -model | metrics | -
GN (from scratch) | -9x | -N/A | -0.061 | -7.2 | -43.7 | -39.6 | -183808979 | -model | metrics | -
SyncBN (from scratch) | -9x | -N/A | -0.055 | -7.2 | -43.6 | -39.3 | -184226666 | -model | metrics | -
Name | -inference time (s/im) |
-train mem (GB) |
-box AP |
-mask AP |
-PQ | -model id | -download | - - -
---|---|---|---|---|---|---|---|
Panoptic FPN R101 | -0.098 | -11.4 | -47.4 | -41.3 | -46.1 | -139797668 | -model | metrics | -
Mask R-CNN X152 | -0.234 | -15.1 | -50.2 | -44.0 | -- | 18131413 | -model | metrics | -
above + test-time aug. | -- | - | 51.9 | -45.9 | -- | - | - |
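These rows follow the format of Detectron2's COCO model zoo; assuming that is the library in use here, each listed baseline can be loaded programmatically through its model_zoo API. Below is a minimal sketch of running inference with the Mask R-CNN R50-FPN 3x baseline (model id 137849600 above); the image path and score threshold are placeholders.

```python
# Minimal sketch: load one of the model-zoo baselines listed above and run
# inference on a single image with DefaultPredictor.
import cv2

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

config_path = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"  # R50-FPN, 3x schedule

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(config_path))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)  # pretrained COCO weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # placeholder confidence threshold

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # placeholder image path
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```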
Si usted es un fan de los juegos de fútbol, es posible que haya oído hablar de First Touch Soccer 2021, o FTS 21 para abreviar. Este es uno de los juegos de fútbol más populares y realistas para dispositivos Android, con increíbles gráficos, jugabilidad y características. En este artículo, le mostraremos cómo descargar e instalar datos FTS 21 APK OBB en su dispositivo, y cómo solucionar algunos errores comunes que podrían ocurrir.
-First Touch Soccer 2021 es un juego de fútbol desarrollado por First Touch Games, un estudio británico especializado en juegos deportivos. FTS 21 es la última versión del juego, que fue lanzado a finales de 2020. Es un juego offline, lo que significa que puedes jugar sin conexión a Internet. También puedes personalizar tu equipo, jugadores, kits, estadios y más.
-Download ✦✦✦ https://bltlly.com/2v6ML1
Algunas de las características que hacen que FTS 21 se destaque de otros juegos de fútbol son:
-Para reproducir FTS 21 en su dispositivo, necesitará lo siguiente:
-Para descargar e instalar FTS 21 en su dispositivo, tendrá que seguir estos pasos:
-Después de descargar el archivo RAR, tendrá que extraer su contenido utilizando una aplicación de administrador de archivos que puede manejar archivos RAR. Puede utilizar aplicaciones como ZArchiver o RAR para este propósito. Para extraer los archivos, siga estos pasos:
-El siguiente paso es instalar el archivo F TS 21 APK en su dispositivo. Para hacer esto, siga estos pasos:
-El paso final es mover la carpeta OBB a la ubicación correcta en el almacenamiento de su dispositivo. La carpeta OBB contiene los datos del juego necesarios para que FTS 21 funcione correctamente. Para mover la carpeta OBB, siga estos pasos:
-Ahora está listo para lanzar y reproducir FTS 21 en su dispositivo. Para hacer esto, siga estos pasos:
- -A veces, puede encontrar algunos errores o problemas al jugar FTS 21 en su dispositivo. Estos errores pueden ser causados por varios factores, como dispositivos incompatibles, archivos dañados, espacio de almacenamiento insuficiente o software obsoleto. Estos son algunos de los errores comunes que puede enfrentar y cómo solucionarlos:
-Este error se produce cuando el juego no se carga correctamente o se bloquea durante el juego. Para corregir este error, pruebe estas soluciones:
-Este error ocurre cuando el juego no puede funcionar sin problemas o de forma estable en su dispositivo. Para corregir este error, pruebe estas soluciones:
-Este error ocurre cuando el juego no registra correctamente tus entradas o gestos. Para corregir este error, prueba estas soluciones:
-Aquí están algunas de las preguntas más frecuentes sobre FTS 21:
-¿Te encanta conducir coches rápidos y realizar acrobacias increíbles? Si es así, entonces usted debe probar Extreme Car Driving Simulator, uno de los juegos de simulador de conducción de coches más populares y realistas para dispositivos Android. En este juego, usted puede conducir, deriva, y sentir un coche deportivo de carreras de forma gratuita. También puede personalizar su coche, elegir entre diferentes modos y mapas, y explorar un enorme entorno de mundo abierto.
-Pero ¿qué pasa si quieres desbloquear todos los coches, obtener dinero ilimitado, y disfrutar del juego sin ningún tipo de anuncios o restricciones? Bueno, hay una manera de hacerlo. Puede utilizar Extreme Car Driving Simulator Hack Mod APK, una versión modificada del juego que le da acceso a todas las características y recursos que necesita para tener más diversión y emoción. En este artículo, le diremos qué es Extreme Car Driving Simulator Hack Mod APK, cómo descargarlo e instalarlo, y cómo jugar el juego con algunos consejos y trucos. ¡Vamos a empezar!
-Download Zip ✒ ✒ ✒ https://bltlly.com/2v6Ml5
Extreme Car Driving Simulator es un simulador de conducción de coches en 3D desarrollado por AxesInMotion Racing, un estudio con sede en España. El juego fue lanzado en 2014 y se ha descargado más de 100 millones de veces en Google Play Store. También ha recibido críticas positivas de usuarios y críticos por igual.
-Algunas de las características de Extreme Car Driving Simulator son:
-Para jugar Extreme Car Driving Simulator, debe seguir estos pasos:
-Extreme Car Driving Simulator Hack Mod APK es una versión modificada del juego original que le da algunos beneficios adicionales y características que no están disponibles en la versión oficial. Algunos de estos beneficios son:
-Para descargar e instalar Extreme Car Driving Simulator Hack Mod APK, debe seguir estos pasos:
-Ahora que sabes cómo descargar y jugar Extreme Car Driving Simulator Hack Mod APK, aquí hay algunos consejos y trucos que le ayudarán a dominar el juego y divertirse más:
-Una de las mejores cosas acerca de Extreme Car Driving Simulator es que puede personalizar su coche con diferentes colores, ruedas, spoilers y más. También puede cambiar la matrícula, el sonido del motor y el velocímetro. Para personalizar su coche, ir al garaje y toque en los iconos en la parte inferior de la pantalla. Puede utilizar el dinero que gana de jugar el juego o utilizar el dinero ilimitado de la apk mod hack para comprar cualquier personalización que desee.
- -El juego tiene varios modos y mapas entre los que puedes elegir, cada uno con sus propios retos y características. Estos son algunos de ellos:
-El juego le permite realizar acrobacias increíbles y deriva con su coche. Puede usar rampas, bucles, barriles, puentes, etc. para lanzar su automóvil al aire y hacer volteretas, giros, rollos, etc. También puede desplazarse alrededor de las esquinas utilizando el freno de mano o tocando el botón de freno. Realizar acrobacias y derivas le ganará puntos y dinero que puede utilizar para comprar o mejorar su coche.
-El juego tiene tráfico y la policía que tratará de evitar que usted conduce imprudentemente. Si chocas contra otros coches o golpeas a peatones, dañarás tu coche y perderás puntos. Si rompes alguna regla de tráfico o causas demasiado caos, atraerás la atención de la policía. La policía lo perseguirá e intentará detenerlo embistiendo su automóvil o estableciendo controles de carretera. Si te atrapan, te multarán o te arrestarán. Para evitar el tráfico y la policía, puedes usar diferentes estrategias como conducir en el carril opuesto, usar atajos, esconderte en callejones o garajes, etc.
-Aquí hay algunas preguntas frecuentes sobre Extreme Car Driving Simulator y su hack mod apk:
-Si te gusta jugar blackjack, es posible que desee probar algunas de las aplicaciones gratuitas de blackjack disponibles en la tienda de aplicaciones. Las aplicaciones de blackjack son aplicaciones móviles que le permiten jugar blackjack en su teléfono inteligente o tableta, en cualquier momento y en cualquier lugar. Usted puede disfrutar de la emoción de este clásico juego de casino sin arriesgar dinero real, y divertirse con millones de otros jugadores de todo el mundo.
-En este artículo, te mostraremos cómo descargar aplicaciones gratuitas de blackjack, cuáles son los beneficios de jugar blackjack en tu dispositivo móvil, cómo elegir la mejor aplicación de blackjack para ti y cuáles son algunas de las mejores aplicaciones gratuitas de blackjack para probar. También te daremos algunos consejos y trucos para jugar blackjack online y mejorar tus habilidades. ¡Empecemos!
-Download 🆓 https://bltlly.com/2v6Mwv
Jugar blackjack en tu dispositivo móvil tiene muchas ventajas sobre jugar en un casino en tierra o en tu computadora. Aquí están algunas de ellas:
-Con tantas aplicaciones gratuitas de blackjack disponibles en la tienda de aplicaciones, ¿cómo sabes cuál es la mejor para ti? Estos son algunos factores a considerar antes de descargar cualquier aplicación:
-Si usted está buscando algunas aplicaciones de blackjack gratis para probar, aquí están nuestras 3 recomendaciones principales:
-Esta aplicación es una de las aplicaciones de blackjack gratis más populares y altamente calificadas en la tienda de aplicaciones. Tiene más de 10 millones de descargas y una calificación de 4.7 estrellas. Ofrece una interfaz simple y elegante, gráficos realistas y sonidos, y un juego suave. Puedes jugar al blackjack clásico o probar algunos de los otros modos de juego, como High Stakes, Casino y Multijugador. También puedes personalizar tu mesa, baraja y fichas, y desbloquear nuevos logros y recompensas. La aplicación es compatible con dispositivos iOS y Android, y es gratuita para descargar y jugar.
-Esta aplicación es más que una aplicación de blackjack. Es una aplicación de casino social que le permite jugar no solo blackjack, sino también poker, ruleta, baccarat, craps y más. Tiene más de 50 millones de descargas y una calificación de 4.5 estrellas. Cuenta con gráficos increíbles, física realista y un juego inmersivo. Puedes jugar con amigos o extraños de todo el mundo, chatear con ellos, enviarles regalos y hacer nuevos amigos. También puedes personalizar tu perfil, avatar y configuración del juego. La aplicación es compatible con dispositivos iOS y Android, y es gratuita para descargar y jugar.
-Jugar al blackjack online puede ser divertido y gratificante, pero también puede ser desafiante y arriesgado. Aquí hay algunos consejos y trucos para ayudarle a mejorar sus habilidades y aumentar sus posibilidades de ganar:
- -Para descargar aplicaciones gratuitas de blackjack, debe considerar algunos factores como compatibilidad, seguridad, características y comentarios. También deberías ver algunas de las mejores aplicaciones de blackjack gratis que hemos recomendado en este artículo: Blackjack por Tripledot Studios Limited, Blackjack por TapTapBoom Ltd., y Blackjack 21: Blackjackist por KamaGames. Estas aplicaciones son compatibles con dispositivos iOS y Android, seguras de usar, ricas en funciones y fáciles de usar, y altamente calificadas por millones de jugadores.
-Entonces, ¿qué estás esperando? Descarga estas aplicaciones de blackjack gratis hoy y comienza a jugar este increíble juego en tu dispositivo móvil. Usted tendrá una explosión!
-語音質檢噪音去除 (語音增強):Meta Denoiser", - description=( - "為了提升語音識別的效果,可以在識別前先進行噪音去除" - ), - allow_flagging="never", -examples=[ - ["exampleAudio/15s_2020-03-27_sep1.wav"], - ["exampleAudio/13s_2020-03-27_sep2.wav"], - ["exampleAudio/30s_2020-04-23_sep1.wav"], - ["exampleAudio/15s_2020-04-23_sep2.wav"], -], -) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/README.md b/spaces/Detomo/ai-comic-generation/README.md deleted file mode 100644 index d164227a6e1193a45b4b87c06a13c227283e4d6b..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/README.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -title: AI Comic Generation -emoji: 👩🎨 -colorFrom: red -colorTo: yellow -sdk: docker -pinned: true -app_port: 3000 -disable_embedding: true ---- - -# AI Comic Factory - -## Running the project at home - - -However the project isn't a monolithic Space that can be duplicated and ran immediately: -it requires various components to run for the frontend, backend, LLM, SDXL etc. - -If you try to duplicate the project, open the `.env` you will see it requires some variables. - -Provider config: -- `LLM_ENGINE`: can be one of: "INFERENCE_API", "INFERENCE_ENDPOINT", "OPENAI" -- `RENDERING_ENGINE`: can be one of: "INFERENCE_API", "INFERENCE_ENDPOINT", "REPLICATE", "VIDEOCHAIN" for now, unless you code your custom solution - -Auth config: -- `AUTH_HF_API_TOKEN`: only if you decide to use OpenAI for the LLM engine necessary if you decide to use an inference api model or a custom inference endpoint -- `AUTH_OPENAI_TOKEN`: only if you decide to use OpenAI for the LLM engine -- `AITH_VIDEOCHAIN_API_TOKEN`: secret token to access the VideoChain API server -- `AUTH_REPLICATE_API_TOKEN`: in case you want to use Replicate.com - -Rendering config: -- `RENDERING_HF_INFERENCE_ENDPOINT_URL`: necessary if you decide to use a custom inference endpoint -- `RENDERING_REPLICATE_API_MODEL_VERSION`: url to the VideoChain API server -- `RENDERING_HF_INFERENCE_ENDPOINT_URL`: optional, default to nothing -- `RENDERING_HF_INFERENCE_API_MODEL`: optional, defaults to "stabilityai/stable-diffusion-xl-base-1.0" -- `RENDERING_REPLICATE_API_MODEL`: optional, defaults to "stabilityai/sdxl" -- `RENDERING_REPLICATE_API_MODEL_VERSION`: optional, in case you want to change the version - -Language model config: -- `LLM_HF_INFERENCE_ENDPOINT_URL`: "https://llama-v2-70b-chat.ngrok.io" -- `LLM_HF_INFERENCE_API_MODEL`: "codellama/CodeLlama-7b-hf" - -In addition, there are some community sharing variables that you can just ignore. -Those variables are not required to run the AI Comic Factory on your own website or computer -(they are meant to create a connection with the Hugging Face community, -and thus only make sense for official Hugging Face apps): -- `NEXT_PUBLIC_ENABLE_COMMUNITY_SHARING`: you don't need this -- `COMMUNITY_API_URL`: you don't need this -- `COMMUNITY_API_TOKEN`: you don't need this -- `COMMUNITY_API_ID`: you don't need this - -Please read the `.env` default config file for more informations. -To customise a variable locally, you should create a `.env.local` -(do not commit this file as it will contain your secrets). - --> If you intend to run it with local, cloud-hosted and/or proprietary models **you are going to need to code 👨💻**. - -## The LLM API (Large Language Model) - -Currently the AI Comic Factory uses [Llama-2 70b](https://huggingface.co/blog/llama2) through an [Inference Endpoint](https://huggingface.co/docs/inference-endpoints/index). 
- -You have three options: - -### Option 1: Use an Inference API model - -This is a new option added recently, where you can use one of the models from the Hugging Face Hub. By default we suggest to use CodeLlama 34b as it will provide better results than the 7b model. - -To activate it, create a `.env.local` configuration file: - -```bash -LLM_ENGINE="INFERENCE_API" - -HF_API_TOKEN="Your Hugging Face token" - -# codellama/CodeLlama-7b-hf" is used by default, but you can change this -# note: You should use a model able to generate JSON responses, -# so it is storngly suggested to use at least the 34b model -HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf" -``` - -### Option 2: Use an Inference Endpoint URL - -If you would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file: - -```bash -LLM_ENGINE="INFERENCE_ENDPOINT" - -HF_API_TOKEN="Your Hugging Face token" - -HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url" -``` - -To run this kind of LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (Please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about the licensing). - -### Option 3: Use an OpenAI API Key - -This is a new option added recently, where you can use OpenAI API with an OpenAI API Key. - -To activate it, create a `.env.local` configuration file: - -```bash -LLM_ENGINE="OPENAI" -# default openai api base url is: https://api.openai.com/v1 -OPENAI_API_BASE_URL="Your OpenAI API Base URL" -OPENAI_API_KEY="Your OpenAI API Key" -OPENAI_API_MODEL="gpt-3.5-turbo" -``` - -### Option 4: Fork and modify the code to use a different LLM system - -Another option could be to disable the LLM completely and replace it with another LLM protocol and/or provider (eg. Claude, Replicate), or a human-generated story instead (by returning mock or static data). - -### Notes - -It is possible that I modify the AI Comic Factory to make it easier in the future (eg. add support for Claude or Replicate) - -## The Rendering API - -This API is used to generate the panel images. This is an API I created for my various projects at Hugging Face. - -I haven't written documentation for it yet, but basically it is "just a wrapper ™" around other existing APIs: - -- The [hysts/SD-XL](https://huggingface.co/spaces/hysts/SD-XL?duplicate=true) Space by [@hysts](https://huggingface.co/hysts) -- And other APIs for making videos, adding audio etc.. but you won't need them for the AI Comic Factory - -### Option 1: Deploy VideoChain yourself - -You will have to [clone](https://huggingface.co/spaces/jbilcke-hf/VideoChain-API?duplicate=true) the [source-code](https://huggingface.co/spaces/jbilcke-hf/VideoChain-API/tree/main) - -Unfortunately, I haven't had the time to write the documentation for VideoChain yet. -(When I do I will update this document to point to the VideoChain's README) - - -### Option 2: Use Replicate - -To use Replicate, create a `.env.local` configuration file: - -```bash -RENDERING_ENGINE="REPLICATE" - -REPLICATE_API_TOKEN="Your Replicate token" - -REPLICATE_API_MODEL="stabilityai/sdxl" - -REPLICATE_API_MODEL_VERSION="da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf" -``` - -### Option 3: Use another SDXL API - -If you fork the project you will be able to modify the code to use the Stable Diffusion technology of your choice (local, open-source, proprietary, your custom HF Space etc). 
- -It would even be something else, such as Dall-E. diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/progress/progress-bar.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/progress/progress-bar.tsx deleted file mode 100644 index 0e926d05419cecc6d4a4964d53a8dad6e07a4102..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/interface/progress/progress-bar.tsx +++ /dev/null @@ -1,57 +0,0 @@ -"use client" - -import { CircularProgressbar, buildStyles } from "react-circular-progressbar" -import "react-circular-progressbar/dist/styles.css" - -export function ProgressBar ({ - className, - progressPercentage, - text -}: { - className?: string - progressPercentage?: number - text?: string -}) { - return ( -
Currently Running: baize-v2-7b
--Disclaimer: The LLaMA model is a third-party version available on Hugging Face model hub. This demo should be used for research purposes only. Commercial use is strictly prohibited. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk. -
-        return; // if no <code> element is found, do not add the button
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
-        return; // if the <code> element has no child nodes, do not add the button
- }
- var button = document.createElement('button');
-    button.textContent = '\uD83D\uDCCE'; // use the 📎 symbol as the label of the "copy" button
-    button.style.position = 'relative';
-    button.style.float = 'right';
-    button.style.fontSize = '1em'; // optional: adjust the button size
-    button.style.background = 'none'; // optional: remove the background color
-    button.style.border = 'none'; // optional: remove the border
-    button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
-        range.setStartBefore(firstChild); // set the range to start before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
-                    button.textContent = '\uD83D\uDCCE'; // restore the button to "copy"
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
-    code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/Junlinh/memorability_prediction/README.md b/spaces/Junlinh/memorability_prediction/README.md
deleted file mode 100644
index 424bb2b7fd1cac208492a995079a00807bf4c326..0000000000000000000000000000000000000000
--- a/spaces/Junlinh/memorability_prediction/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Memorability Prediction
-emoji: 📊
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JustSkyDev/DSEG/process.py b/spaces/JustSkyDev/DSEG/process.py
deleted file mode 100644
index 448310e6a4d22c6ce806ddd02f4c5c4467e03a3f..0000000000000000000000000000000000000000
--- a/spaces/JustSkyDev/DSEG/process.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from iglovikov_helper_functions.utils.image_utils import load_rgb, pad, unpad
-from iglovikov_helper_functions.dl.pytorch.utils import tensor_from_rgb_image
-from cloths_segmentation.pre_trained_models import create_model
-from PIL import Image
-import numpy as np
-import cv2
-import torch
-import albumentations as albu
-
-def Segment(image: Image.Image, color=[255, 255, 255]):
- model = create_model("Unet_2020-10-30")
- model.eval()
-
- image = np.array(image)
- transform = albu.Compose([albu.Normalize(p=1)], p=1)
- padded_image, pads = pad(image, factor=32, border=cv2.BORDER_CONSTANT)
-
- x = transform(image=padded_image)["image"]
- x = torch.unsqueeze(tensor_from_rgb_image(x), 0)
-
- with torch.no_grad():
- prediction = model(x)[0][0]
-
- mask = (prediction > 0).cpu().numpy().astype(np.uint8)
- mask = unpad(mask, pads)
-
- mask_rgb = np.ones((*mask.shape, 3), dtype=np.uint8) * 255
- mask_rgb[mask == 1] = color
-
- return mask_rgb
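A minimal usage sketch of the `Segment` helper defined above, continuing from its definitions; the input file name and the chosen mask color are illustrative assumptions, and it presumes the same `cloths_segmentation` / `iglovikov_helper_functions` dependencies are installed:

```python
from PIL import Image

image = Image.open("photo.jpg").convert("RGB")   # assumed local input file
mask_rgb = Segment(image, color=[255, 0, 0])     # paint detected clothing pixels red, background stays white
Image.fromarray(mask_rgb).save("clothes_mask.png")
```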
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py
deleted file mode 100644
index 9b127bc6427f5c60c8cf85603a3d8a093c3501c4..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
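A short shape-check sketch for the blocks above, assuming the file is imported as part of its package so the relative `spec_utils` import resolves; the channel sizes, stride, and input shape are arbitrary illustrations:

```python
import torch

enc = Encoder(2, 32, ksize=3, stride=2, pad=1)   # stride-2 encoder halves the spatial size
dec = Decoder(32 + 32, 16)                       # decoder consumes upsampled features + skip
aspp = ASPPModule(32, 64)

x = torch.randn(1, 2, 64, 128)                   # (batch, channels, freq bins, frames)
h, skip = enc(x)
print(h.shape, skip.shape)                       # -> (1, 32, 32, 64), (1, 32, 64, 128)
print(aspp(h).shape)                             # -> (1, 64, 32, 64), spatial size preserved
print(dec(h, skip).shape)                        # -> (1, 16, 64, 128), resolution restored
```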
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/highway_network.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/highway_network.py
deleted file mode 100644
index d311c6924db6dfc247f69cc266d6c1975b6e03cd..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/highway_network.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-class HighwayNetwork(nn.Module):
- def __init__(self, size):
- super().__init__()
- self.W1 = nn.Linear(size, size)
- self.W2 = nn.Linear(size, size)
- self.W1.bias.data.fill_(0.)
-
- def forward(self, x):
- x1 = self.W1(x)
- x2 = self.W2(x)
- g = torch.sigmoid(x2)
- y = g * F.relu(x1) + (1. - g) * x
- return y
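The layer above blends a transformed path with an identity path, y = g · ReLU(W1 x) + (1 − g) · x where g = σ(W2 x), so the feature size is preserved; a tiny sketch with illustrative sizes:

```python
import torch

hw = HighwayNetwork(size=80)
x = torch.randn(4, 30, 80)   # (batch, time, channels); Linear acts on the last dimension
y = hw(x)
print(y.shape)               # -> torch.Size([4, 30, 80])
```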
diff --git a/spaces/Khaled27/Naptah/README.md b/spaces/Khaled27/Naptah/README.md
deleted file mode 100644
index 9c54c5d4d0053b944944d5176a0d00c68ea2addb..0000000000000000000000000000000000000000
--- a/spaces/Khaled27/Naptah/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Naptah
-emoji: 🏆
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 2.9.1
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kimata/Sanskrit-TTS/utils.py b/spaces/Kimata/Sanskrit-TTS/utils.py
deleted file mode 100644
index 07839a71a8339f90fe7eeff4dc4a6bd284330049..0000000000000000000000000000000000000000
--- a/spaces/Kimata/Sanskrit-TTS/utils.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import logging
-from json import loads
-from torch import load, FloatTensor
-from numpy import float32
-import librosa
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
-
-def load_checkpoint(checkpoint_path, model):
- checkpoint_dict = load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict= {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except:
- logging.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logging.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- return
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def load_audio_to_torch(full_path, target_sampling_rate):
- audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True)
- return FloatTensor(audio.astype(float32))
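A hedged sketch of how these helpers are typically combined in VITS-style inference code, continuing from the definitions above; the config path, audio path, sampling rate, and the synthesizer class are assumptions, not part of this diff:

```python
hps = get_hparams_from_file("configs/config.json")   # assumed config path; returns nested HParams
print(list(hps.keys()))                              # HParams exposes dict-style access

audio = load_audio_to_torch("sample.wav", 22050)     # assumed file and target sampling rate
print(audio.shape)                                   # mono FloatTensor of samples

# model = SynthesizerTrn(**hps.model)                # synthesizer class is not defined in this file
# load_checkpoint("G_latest.pth", model)             # restores matching weights, logs missing keys
```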
diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_nwpu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_nwpu_config.py
deleted file mode 100644
index 8da032503b42113cccdcc8481d183b17820871cf..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/configs/rsprompter/maskrcnn_nwpu_config.py
+++ /dev/null
@@ -1,339 +0,0 @@
-custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models', 'mmdet.models'], allow_failed_imports=False)
-max_epochs = 500
-
-optimizer = dict(
- type='AdamW',
- lr=0.0005,
- weight_decay=1e-4
-)
-
-param_scheduler = [
- # warm up learning rate scheduler
- dict(
- type='LinearLR',
- start_factor=1e-4,
- by_epoch=True,
- begin=0,
- end=1,
- # update by iter
- convert_to_iter_based=True),
- # main learning rate scheduler
- dict(
- type='CosineAnnealingLR',
- T_max=max_epochs,
- by_epoch=True,
- begin=1,
- end=max_epochs,
- )
-]
-
-param_scheduler_callback = dict(
- type='ParamSchedulerHook'
-)
-
-
-evaluator_ = dict(
- type='CocoPLMetric',
- metric=['bbox', 'segm'],
- proposal_nums=[1, 10, 100]
-)
-
-evaluator = dict(
- val_evaluator=evaluator_,
- test_evaluator=evaluator_
-)
-
-
-image_size = (1024, 1024)
-data_preprocessor = dict(
- type='mmdet.DetDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True,
- pad_mask=True,
- mask_pad_value=0,
- pad_size_divisor=32
-)
-
-num_things_classes = 10
-num_stuff_classes = 0
-num_classes = num_things_classes + num_stuff_classes
-
-# model settings
-model = dict(
- type='mmdet.MaskRCNN',
- data_preprocessor=data_preprocessor,
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch',
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')
- ),
- neck=dict(
- type='mmdet.FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='mmdet.RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='mmdet.AnchorGenerator',
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='mmdet.DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0)),
- roi_head=dict(
- type='mmdet.StandardRoIHead',
- bbox_roi_extractor=dict(
- type='mmdet.SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- type='mmdet.Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=num_classes,
- bbox_coder=dict(
- type='mmdet.DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0)),
- mask_roi_extractor=dict(
- type='mmdet.SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- mask_head=dict(
- type='mmdet.FCNMaskHead',
- num_convs=4,
- in_channels=256,
- conv_out_channels=256,
- num_classes=num_classes,
- loss_mask=dict(
- type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='mmdet.MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='mmdet.RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='mmdet.MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='mmdet.RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=28,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100,
- mask_thr_binary=0.5)
- )
-)
-
-
-model_cfg = dict(
- type='MMDetPLer',
- hyperparameters=dict(
- optimizer=optimizer,
- param_scheduler=param_scheduler,
- evaluator=evaluator,
- ),
- whole_model=model,
-)
-
-task_name = 'nwpu_ins'
-exp_name = 'E20230520_0'
-logger = dict(
- type='WandbLogger',
- project=task_name,
- group='maskrcnn',
- name=exp_name
-)
-# logger = None
-
-
-callbacks = [
- param_scheduler_callback,
- dict(
- type='ModelCheckpoint',
- dirpath=f'results/{task_name}/{exp_name}/checkpoints',
- save_last=True,
- mode='max',
- monitor='valmap_0',
- save_top_k=2,
- filename='epoch_{epoch}-map_{valmap_0:.4f}'
- ),
- dict(
- type='LearningRateMonitor',
- logging_interval='step'
- )
-]
-
-
-trainer_cfg = dict(
- compiled_model=False,
- accelerator="cpu",
- strategy="auto",
- # strategy="ddp",
- # strategy='ddp_find_unused_parameters_true',
- # precision='32',
- # precision='16-mixed',
- devices=1,
- default_root_dir=f'results/{task_name}/{exp_name}',
- # default_root_dir='results/tmp',
- max_epochs=max_epochs,
- logger=logger,
- callbacks=callbacks,
- log_every_n_steps=3,
- check_val_every_n_epoch=5,
- benchmark=True,
- # sync_batchnorm=True,
- # fast_dev_run=True,
-
- # limit_train_batches=1,
- # limit_val_batches=0,
- # limit_test_batches=None,
- # limit_predict_batches=None,
- # overfit_batches=0.0,
-
- # val_check_interval=None,
- # num_sanity_val_steps=0,
- # enable_checkpointing=None,
- # enable_progress_bar=None,
- # enable_model_summary=None,
- # accumulate_grad_batches=32,
- # gradient_clip_val=15,
- # gradient_clip_algorithm='norm',
- # deterministic=None,
- # inference_mode: bool=True,
- use_distributed_sampler=True,
- # profiler="simple",
- # detect_anomaly=False,
- # barebones=False,
- # plugins=None,
- # reload_dataloaders_every_n_epochs=0,
-)
-
-
-backend_args = None
-train_pipeline = [
- dict(type='mmdet.LoadImageFromFile'),
- dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='mmdet.Resize', scale=image_size),
- dict(type='mmdet.RandomFlip', prob=0.5),
- dict(type='mmdet.PackDetInputs')
-]
-
-test_pipeline = [
- dict(type='mmdet.LoadImageFromFile', backend_args=backend_args),
- dict(type='mmdet.Resize', scale=image_size),
- # If you don't have a gt annotation, delete the pipeline
- dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='mmdet.PackDetInputs',
- meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
- 'scale_factor'))
-]
-
-
-train_batch_size_per_gpu = 2
-train_num_workers = 4
-test_batch_size_per_gpu = 2
-test_num_workers = 4
-persistent_workers = True
-
-data_parent = '/Users/kyanchen/datasets/seg/VHR-10_dataset_coco/NWPUVHR-10_dataset/'
-train_data_prefix = ''
-val_data_prefix = ''
-
-dataset_type = 'NWPUInsSegDataset'
-
-val_loader = dict(
- batch_size=test_batch_size_per_gpu,
- num_workers=test_num_workers,
- persistent_workers=persistent_workers,
- pin_memory=True,
- dataset=dict(
- type=dataset_type,
- data_root=data_parent,
- ann_file='NWPU_instances_val.json',
- data_prefix=dict(img_path='positive image set'),
- test_mode=True,
- filter_cfg=dict(filter_empty_gt=True, min_size=32),
- pipeline=test_pipeline,
- backend_args=backend_args))
-
-datamodule_cfg = dict(
- type='PLDataModule',
- train_loader=dict(
- batch_size=train_batch_size_per_gpu,
- num_workers=train_num_workers,
- persistent_workers=persistent_workers,
- pin_memory=True,
- dataset=dict(
- type=dataset_type,
- data_root=data_parent,
- ann_file='NWPU_instances_train.json',
- data_prefix=dict(img_path='positive image set'),
- filter_cfg=dict(filter_empty_gt=True, min_size=32),
- pipeline=train_pipeline,
- backend_args=backend_args)
- ),
- val_loader=val_loader,
- test_loader=val_loader,
- predict_loader=val_loader
-)
\ No newline at end of file
diff --git a/spaces/LENMON/ProxyGPT/Dockerfile b/spaces/LENMON/ProxyGPT/Dockerfile
deleted file mode 100644
index b9d7983f65fb4e4957b46f326b6f712e256d6225..0000000000000000000000000000000000000000
--- a/spaces/LENMON/ProxyGPT/Dockerfile
+++ /dev/null
@@ -1,48 +0,0 @@
-FROM zilliz/gptcache:latest
-
-CMD []
-
-RUN apt-get update && \
- apt-get install -y curl && \
- apt-get install -y git && \
- curl -sL https://deb.nodesource.com/setup_18.x | bash - && \
- apt-get update && \
- apt-get install -y nodejs
-
-RUN git clone https://github.com/PlusLemon/oai-cache-proxy.git /app
-WORKDIR /app
-RUN git rev-parse --short HEAD
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-
-EXPOSE 7860
-ENV NODE_ENV=production
-ENV GATEKEEPER=proxy_key
-USER root
-
-# RUN pip3 install --no-cache-dir sacremoses
-
-RUN echo 'embedding: huggingface' > /app/cache.yml && \
- echo 'embedding_config:' >> /app/cache.yml && \
- echo ' model: uer/albert-base-chinese-cluecorpussmall' >> /app/cache.yml && \
-# RUN echo 'embedding: paddlenlp' > /app/cache.yml && \
- echo 'config:' >> /app/cache.yml && \
- echo ' similarity_threshold: 0.9' >> /app/cache.yml
-
-RUN echo '#!/bin/bash' > /app/startup-script.sh && \
- echo 'gptcache_server -s 0.0.0.0 -f /app/cache.yml &' >> /app/startup-script.sh && \
- echo 'npm start' >> /app/startup-script.sh && \
- chmod +x /app/startup-script.sh
-
-RUN mkdir /.npm && \
- mkdir -p /.local/bin && \
- mkdir -p /.cache/pip && \
- mkdir gptcache_data && \
- chown -R 1000:0 "/.npm" "/.local" "/.cache" "/usr/local" "gptcache_data"
-
-RUN python3 -m pip install --upgrade pip
-
-ENV PATH="/.local/bin:$PATH"
-
-ENTRYPOINT sh -c "/app/startup-script.sh"
\ No newline at end of file
diff --git a/spaces/LanguageBind/LanguageBind/languagebind/image/processing_image.py b/spaces/LanguageBind/LanguageBind/languagebind/image/processing_image.py
deleted file mode 100644
index b61db106a39d2c44f1a8c1709afe08b4231c4c64..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/languagebind/image/processing_image.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch
-from PIL import Image
-from torchvision import transforms
-from transformers import ProcessorMixin, BatchEncoding
-from transformers.image_processing_utils import BatchFeature
-
-OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
-OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
-
-def make_list_of_images(x):
- if not isinstance(x, list):
- return [x]
- return x
-
-def get_image_transform(config):
- config = config.vision_config
- transform = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
- transforms.CenterCrop(224),
- transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD) # assume image
- ]
- )
- return transform
-
-
-def load_and_transform_image(image_path, transform):
- image = Image.open(image_path)
- image_outputs = transform(image)
- return image_outputs
-
-class LanguageBindImageProcessor(ProcessorMixin):
- attributes = []
- tokenizer_class = ("LanguageBindImageTokenizer")
-
- def __init__(self, config, tokenizer=None, **kwargs):
- super().__init__(**kwargs)
- self.config = config
- self.transform = get_image_transform(config)
- self.image_processor = load_and_transform_image
- self.tokenizer = tokenizer
-
- def __call__(self, images=None, text=None, context_length=77, return_tensors=None, **kwargs):
- if text is None and images is None:
- raise ValueError("You have to specify either text or images. Both cannot be none.")
-
- if text is not None:
- encoding = self.tokenizer(text, max_length=context_length, padding='max_length',
- truncation=True, return_tensors=return_tensors, **kwargs)
-
- if images is not None:
- images = make_list_of_images(images)
- image_features = [self.image_processor(image, self.transform) for image in images]
- image_features = torch.stack(image_features)
-
- if text is not None and images is not None:
- encoding["pixel_values"] = image_features
- return encoding
- elif text is not None:
- return encoding
- else:
- return {"pixel_values": image_features}
-
- def batch_decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
- refer to the docstring of this method for more information.
- """
- return self.tokenizer.batch_decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
-
- def decode(self, skip_special_tokens=True, *args, **kwargs):
- """
- This method forwards all its arguments to CLIPTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
- the docstring of this method for more information.
- """
- return self.tokenizer.decode(*args, skip_special_tokens=skip_special_tokens, **kwargs)
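A small sketch of the image pipeline above, continuing from its definitions; `get_image_transform` only dereferences `config.vision_config`, so a stub config suffices here, and the input file name (an RGB image on disk) is an assumption:

```python
from types import SimpleNamespace

config = SimpleNamespace(vision_config=None)                       # stub: only .vision_config is read
transform = get_image_transform(config)
pixel_values = load_and_transform_image("example.jpg", transform)  # assumed local RGB image
print(pixel_values.shape)                                          # -> torch.Size([3, 224, 224])
```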
diff --git a/spaces/MacYang/Diamond-Sutra/app.py b/spaces/MacYang/Diamond-Sutra/app.py
deleted file mode 100644
index d44a62f21c2eed39050ce9142d190f6ad8822bf2..0000000000000000000000000000000000000000
--- a/spaces/MacYang/Diamond-Sutra/app.py
+++ /dev/null
@@ -1,92 +0,0 @@
-"""A simple hugging face UI """
-import os
-import pickle
-from threading import Lock
-from typing import Optional, Tuple
-import openai
-import gradio as gr
-from query import get_chain
-
-VECTOR_STORE_PATH = "jinggang_embeddings.pkl"
-
-with open(VECTOR_STORE_PATH, "rb") as f:
- vectorstore = pickle.load(f)
-
-def set_openai_api_key(api_key: str):
- """Set the api key and return chain.
- If no api_key, then None is returned.
- """
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- chain = get_chain(vectorstore)
- os.environ["OPENAI_API_KEY"] = ""
- return chain
-
-class ChatWrapper:
- """wrap the chat with thread protection"""
- def __init__(self):
- self.lock = Lock()
-
- def __call__(self, api_key: str, question: str, history: Optional[Tuple[str, str]], chain):
- """Execute the chat functionality."""
- self.lock.acquire()
- try:
- history = history or []
- if api_key is None or api_key == "":
- history.append((question, "请先提供您的OpenAI API Key再开始提问"))
- return history, history
- if chain is None:
- history.append((question, "opps, 请检查您的OpenAI API Key是否正确"))
- return history, history
- openai.api_key = api_key
- output = chain({"question": question, "chat_history": history})["answer"]
- history.append((question, output))
- except Exception as ex:
- raise ex
- finally:
- self.lock.release()
- return history, history
-
-
-chat = ChatWrapper()
-block = gr.Blocks(css=".gradio-container {background-color: lightgray}")
-with block:
- with gr.Row():
- gr.Markdown("和金刚经对话
")
- openai_api_key_textbox = gr.Textbox(
- placeholder="请把您的OpenAI API key贴在这里:",
- show_label=False,
- lines=1,
- type="password",
- )
- chatbot = gr.Chatbot(show_label=False).style(height=500)
- gr.HTML("如果OpenAI的服务器负载较高,可能响应较慢,请耐心等待")
- with gr.Row():
- message = gr.Textbox(
- label="您的问题:",
- placeholder="请在这里输入您关于金刚经的问题:",
- lines=1,
- )
- gr.Examples(
- examples=[
- "金刚经主要讲了什么?",
- "什么是空性?",
- "什么是智慧?",
- ],
- inputs=message,
- label="参考问题"
- )
-
- chat_history = gr.State()
- chat_chain = gr.State()
- message.submit(chat,
- inputs=[openai_api_key_textbox, message, chat_history, chat_chain],
- outputs=[chatbot, chat_history])
-
- openai_api_key_textbox.change(
- set_openai_api_key,
- inputs=[openai_api_key_textbox],
- outputs=[chat_chain],
- )
-
-block.launch(debug=True)
diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/sanskrit.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
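A quick illustration of the helper above, continuing from its definitions (requires the `indic_transliteration` package; the sample sentence is my own):

```python
sample = "ॐ भूर्भुवः स्वः ।"           # illustrative Devanagari input
print(devanagari_to_ipa(sample))        # prints the rule-based IPA-like transcription
```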
diff --git "a/spaces/Mahiruoshi/vits-chatbot/\351\203\250\347\275\262\346\265\201\347\250\213.md" "b/spaces/Mahiruoshi/vits-chatbot/\351\203\250\347\275\262\346\265\201\347\250\213.md"
deleted file mode 100644
index d237443a13c8a76c79a039b13732c6cb5e15e9f1..0000000000000000000000000000000000000000
--- "a/spaces/Mahiruoshi/vits-chatbot/\351\203\250\347\275\262\346\265\201\347\250\213.md"
+++ /dev/null
@@ -1,9 +0,0 @@
-```sh
-# Prerequisite: Anaconda is already installed
-conda create -n chatbot python=3.8
-conda activate chatbot
-git clone https://huggingface.co/spaces/Mahiruoshi/vits-chatbot
-cd vits-chatbot
-pip install -r requirements.txt
-python main.py
-```
\ No newline at end of file
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/time_counter.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/time_counter.py
deleted file mode 100644
index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/util/time_counter.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import json
-import time
-
-
-class TimeCounter:
- def __init__(self) -> None:
- pass
-
- def clear(self):
- self.timedict = {}
- self.basetime = time.perf_counter()
-
- def timeit(self, name):
- nowtime = time.perf_counter() - self.basetime
- self.timedict[name] = nowtime
- self.basetime = time.perf_counter()
-
-
-class TimeHolder:
- def __init__(self) -> None:
- self.timedict = {}
-
- def update(self, _timedict: dict):
- for k, v in _timedict.items():
- if k not in self.timedict:
- self.timedict[k] = AverageMeter(name=k, val_only=True)
- self.timedict[k].update(val=v)
-
- def final_res(self):
- return {k: v.avg for k, v in self.timedict.items()}
-
- def __str__(self):
- return json.dumps(self.final_res(), indent=2)
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self, name, fmt=":f", val_only=False):
- self.name = name
- self.fmt = fmt
- self.val_only = val_only
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- if self.val_only:
- fmtstr = "{name} {val" + self.fmt + "}"
- else:
- fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
- return fmtstr.format(**self.__dict__)
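A short sketch of how the timing helpers above compose, continuing from their definitions; the stage names and sleep durations are illustrative only:

```python
import time

counter = TimeCounter()
holder = TimeHolder()

for _ in range(3):
    counter.clear()                  # resets the per-iteration dict and base time
    time.sleep(0.01)
    counter.timeit("stage_a")        # records elapsed time since the last mark
    time.sleep(0.02)
    counter.timeit("stage_b")
    holder.update(counter.timedict)  # accumulates a running average per stage

print(holder)                        # JSON dict of average seconds per stage
```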
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/welcome-screen.tsx b/spaces/Makiing/coolb-in-gtest/src/components/welcome-screen.tsx
deleted file mode 100644
index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/welcome-screen.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { useBing } from '@/lib/hooks/use-bing'
-
-const exampleMessages = [
- {
- heading: '🧐 提出复杂问题',
- message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?`
- },
- {
- heading: '🙌 获取更好的答案',
- message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?'
- },
- {
- heading: '🎨 获得创意灵感',
- message: `以海盗的口吻写一首关于外太空鳄鱼的俳句`
- }
-]
-
-export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
- return (
-
- {exampleMessages.map(example => (
-
- ))}
-
- )
-}
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/util.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/util.py
deleted file mode 100644
index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/diffusionmodules/util.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# adopted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from ldm.util import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
-    # according to the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
- ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(),
- "dtype": torch.get_autocast_gpu_dtype(),
- "cache_enabled": torch.is_autocast_cache_enabled()}
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad(), \
- torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
\ No newline at end of file
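Two quick numerical checks for helpers above, continuing from their definitions; shapes and rough endpoint values only, as my own illustration:

```python
import torch

betas = make_beta_schedule("linear", n_timestep=1000)
print(betas.shape, betas[0], betas[-1])   # (1000,), ~1e-4, ~2e-2

t = torch.tensor([0, 250, 999])           # three example timesteps
emb = timestep_embedding(t, dim=128)      # sinusoidal embedding, cos/sin halves
print(emb.shape)                          # -> torch.Size([3, 128])
```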
diff --git a/spaces/Mojobones/speech-seperator-fixed/app.py b/spaces/Mojobones/speech-seperator-fixed/app.py
deleted file mode 100644
index 61f512f1c483e318022db38020f3c35fc4ad6207..0000000000000000000000000000000000000000
--- a/spaces/Mojobones/speech-seperator-fixed/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from speechbrain.pretrained import SepformerSeparation as separator
-import torchaudio
-import gradio as gr
-
-model = separator.from_hparams(source="speechbrain/sepformer-wsj02mix")
-
-def speechbrain(aud):
- est_sources = model.separate_file(path=aud.name)
- torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
- torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
- return "source1hat.wav", "source2hat.wav"
-
-inputs = gr.inputs.Audio(label="Input Audio", type="file")
-outputs = [
- gr.outputs.Audio(label="Output Audio One", type="file"),
- gr.outputs.Audio(label="Output Audio Two", type="file")
-]
-
-title = "Speech Seperation"
-description = "Gradio demo for Speech Seperation by SpeechBrain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below."
-article = "Attention is All You Need in Speech Separation | Github Repo
"
-examples = [
- ['samples_audio_samples_test_mixture.wav']
-]
-gr.Interface(speechbrain, inputs, outputs, title=title, description=description, article=article, examples=examples).launch()
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/__init__.py
deleted file mode 100644
index 335bfd04b961b96d2fcf6bdc0ea235d98066094f..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/__init__.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-import mmcv
-import mmdet
-
-try:
- import mmengine
- from mmengine.utils import digit_version
-except ImportError:
- mmengine = None
- from mmcv import digit_version
-
-from .version import __version__, short_version
-
-mmcv_minimum_version = '2.0.0rc4'
-mmcv_maximum_version = '2.1.0'
-mmcv_version = digit_version(mmcv.__version__)
-if mmengine is not None:
- mmengine_minimum_version = '0.7.1'
- mmengine_maximum_version = '1.0.0'
- mmengine_version = digit_version(mmengine.__version__)
-
-if not mmengine or mmcv_version < digit_version('2.0.0rc0') or digit_version(
- mmdet.__version__) < digit_version('3.0.0rc0'):
- raise RuntimeError(
- 'MMOCR 1.0 only runs with MMEngine, MMCV 2.0.0rc0+ and '
- 'MMDetection 3.0.0rc0+, but got MMCV '
- f'{mmcv.__version__} and MMDetection '
- f'{mmdet.__version__}. For more information, please refer to '
- 'https://mmocr.readthedocs.io/en/dev-1.x/migration/overview.html'
- ) # noqa
-
-assert (mmcv_version >= digit_version(mmcv_minimum_version)
- and mmcv_version < digit_version(mmcv_maximum_version)), \
- f'MMCV {mmcv.__version__} is incompatible with MMOCR {__version__}. ' \
- f'Please use MMCV >= {mmcv_minimum_version}, ' \
- f'< {mmcv_maximum_version} instead.'
-
-assert (mmengine_version >= digit_version(mmengine_minimum_version)
- and mmengine_version < digit_version(mmengine_maximum_version)), \
- f'MMEngine=={mmengine.__version__} is used but incompatible. ' \
- f'Please install mmengine>={mmengine_minimum_version}, ' \
- f'<{mmengine_maximum_version}.'
-
-mmdet_minimum_version = '3.0.0rc5'
-mmdet_maximum_version = '3.1.0'
-mmdet_version = digit_version(mmdet.__version__)
-
-assert (mmdet_version >= digit_version(mmdet_minimum_version)
- and mmdet_version < digit_version(mmdet_maximum_version)), \
- f'MMDetection {mmdet.__version__} is incompatible ' \
- f'with MMOCR {__version__}. ' \
- f'Please use MMDetection >= {mmdet_minimum_version}, ' \
- f'< {mmdet_maximum_version} instead.'
-
-__all__ = ['__version__', 'short_version', 'digit_version']
diff --git a/spaces/MrSinan/LFW-MaskedRecogntion/app.py b/spaces/MrSinan/LFW-MaskedRecogntion/app.py
deleted file mode 100644
index e98e41dbac7286ee80348ed51c918ae69476ae02..0000000000000000000000000000000000000000
--- a/spaces/MrSinan/LFW-MaskedRecogntion/app.py
+++ /dev/null
@@ -1,337 +0,0 @@
-# -*- coding: utf-8 -*-
-"""app.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/1GWMyMjaydEM_30nRtu1W_B2eaTWLCCuN
-
-# T1
-"""
-
-from tensorflow.keras.regularizers import l2
-import pathlib
-import tensorflow
-from tensorflow import keras
-from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense,Dropout,BatchNormalization
-import tensorflow.keras
-import pathlib
-import tensorflow as tf
-from tensorflow import keras
-from tensorflow.keras.preprocessing.image import ImageDataGenerator
-import tensorflow.keras.utils as utils
-from tensorflow.keras.optimizers import Adam as adam
-from tensorflow.keras.optimizers import SGD
-from tensorflow.keras.optimizers import RMSprop
-from tensorflow.keras.optimizers import Adagrad
-from tensorflow.keras.callbacks import EarlyStopping ,ModelCheckpoint
-import tensorflow as tf
-from tensorflow.keras import Model
-import matplotlib.pyplot as plt
-import numpy as np
-from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, GlobalAveragePooling2D, Dropout, Input
-import gradio as gr
-from tensorflow.keras.applications import VGG16
-
-from tensorflow.keras.applications.resnet50 import preprocess_input
-from matplotlib import pyplot
-from PIL import Image
-from numpy import asarray
-from PIL import Image
-
-import glob
-import cv2
-from tensorflow.keras import layers
-from keras.models import load_model
-from matplotlib import pyplot
-from PIL import Image
-from numpy import asarray
-from mtcnn.mtcnn import MTCNN
-import cv2
-from mask_the_face import *
-import numpy as np
-
-def get_paths():
- classes = []
- for file in sorted(glob.iglob('./lfw-deepfunneled/*/')):
- classes.append(file)
- for i,d in enumerate(classes):
- paths=d+'*.jpg'
- class_=[]
- for file in sorted(glob.iglob(paths)):
- class_.append(file)
- classes[i]=class_
- return classes
-
-classLabels=np.load('classLabels.npy',)
-
-def extract_face(photo, required_size=(224, 224)):
- # load image from file
- pixels = photo
- print(pixels.shape)
- maxH=(pixels.shape[0])
- maxW=(pixels.shape[1])
- if (pixels.shape[-1])>3 or (pixels.shape[-1])<3:
- image = Image.fromarray(pixels)
- return image
-
- # create the detector, using default weights
- detector = MTCNN()
- # detect faces in the image
- results = detector.detect_faces(pixels)
- if not results:
- image = Image.fromarray(pixels)
- image = image.resize(required_size)
- print('not cropped')
- return image
- # extract the bounding box from the first face
- print('cropped')
- x1, y1, width, height = results[0]['box']
- x2, y2 = x1 + width, y1 + height
-
- face = pixels[y1:int(y2), int(x1):int(x2)]
- # resize pixels to the model size
- image = Image.fromarray(face)
- image = image.resize(required_size)
-
- return image
-
-class FaceNet():
- def __init__(self,Weights_loading_path,facenet_path):
- self.loading=Weights_loading_path
- self.modelPath=facenet_path
- self.data_augmentation = keras.Sequential([layers.Rescaling(scale=1./127.5, offset=-1),layers.Resizing(160, 160),],name="data_augmentation",)
- self.Facenet=tf.keras.models.load_model(self.modelPath)
- self.Facenet.load_weights(self.loading)
-
-
-
- def Transfer_FacenetModel_withNormlization(self):
-
- facenetmodel = tf.keras.models.load_model(self.modelPath)
- # facenetmodel.load_weights('/content/drive/MyDrive/FaceNet/facenet_keras_weights.h5')
- for layer in facenetmodel.layers[:-50]:
- layer.trainable = False
- inputs = layers.Input(shape=(224,224,3))
- # Augment data.
- augmented = self.data_augmentation(inputs)
- # This is 'bootstrapping' a new top_model onto the pretrained layers.
- top_model = facenetmodel(augmented)
- top_model = Dropout(0.5)(top_model)
- top_model = BatchNormalization()(top_model)
- top_model = Flatten(name="flatten")(top_model)
- output_layer = Dense(5750, activation='softmax')(top_model)
-
- # Group the convolutional base and new fully-connected layers into a Model object.
- model = Model(inputs=inputs, outputs=output_layer)
-
- return model
- def predict(self,testsSamples):
- predictionProbabilty=self.Facenet.predict(testsSamples)
- return predictionProbabilty
-
-class PatchEncoder(layers.Layer):
- def __init__(self, num_patches, projection_dim):
- super(PatchEncoder, self).__init__()
- self.num_patches = num_patches
- self.projection = layers.Dense(units=projection_dim)
- self.position_embedding = layers.Embedding(
- input_dim=num_patches, output_dim=projection_dim
- )
-
- def call(self, patch):
- positions = tf.range(start=0, limit=self.num_patches, delta=1)
- encoded = self.projection(patch) + self.position_embedding(positions)
- return encoded
-
-class Patches(layers.Layer):
- def __init__(self, patch_size):
- super(Patches, self).__init__()
- self.patch_size = patch_size
-
- def call(self, images):
- batch_size = tf.shape(images)[0]
- patches = tf.image.extract_patches(
- images=images,
- sizes=[1, self.patch_size, self.patch_size, 1],
- strides=[1, self.patch_size, self.patch_size, 1],
- rates=[1, 1, 1, 1],
- padding="VALID",
- )
- patch_dims = patches.shape[-1]
- patches = tf.reshape(patches, [batch_size, -1, patch_dims])
- return patches
-
-class Transforemer():
- def __init__(self,loading_path):
- self.learning_rate = 0.001
- self.weight_decay = 0.0001
- self.batch_size = 32
- self.num_epochs = 300
- self.image_size = 72
- self.patch_size = 6 # Size of the patches to be extract from the input images
- self.num_patches = (self.image_size // self.patch_size) ** 2
- self.projection_dim = 64
- self.num_heads = 8
- self.transformer_units = [self.projection_dim * 2,self.projection_dim,] # Size of the transformer layers
- self.transformer_layers = 10
-
- self.mlp_head_units = [2048, 1024] # Size of the dense layers of the final classifier
- self.loading=loading_path
- self.data_augmentation = keras.Sequential([ layers.Rescaling(1./255), layers.Resizing(self.image_size, self.image_size), layers.RandomFlip("horizontal")],name="data_augmentation",)
- self.transformer = self.create_vit_classifier()
- self.trnaformer = self.transformer.load_weights(self.loading)
-
- def mlp(self,x, hidden_units, dropout_rate):
- for units in hidden_units:
- x = layers.Dense(units, activation=tf.nn.gelu)(x)
- x = layers.Dropout(dropout_rate)(x)
- return x
- def create_vit_classifier(self):
-
- inputs = layers.Input(shape=(224,224,3))
-
- augmented = self.data_augmentation(inputs)
- patches = Patches(self.patch_size)(augmented)
- encoded_patches = PatchEncoder(self.num_patches, self.projection_dim)(patches)
-
- for _ in range(self.transformer_layers):
- x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
- attention_output = layers.MultiHeadAttention(num_heads=self.num_heads, key_dim=self.projection_dim, dropout=0.3)(x1, x1)
- x2 = layers.Add()([attention_output, encoded_patches])
- x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
- x3 = self.mlp(x3, hidden_units=self.transformer_units, dropout_rate=0.3)
- encoded_patches = layers.Add()([x3, x2])
-
- representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
- representation = layers.Flatten()(representation)
- representation = layers.Dropout(0.6)(representation)
- features = self.mlp(representation, hidden_units=self.mlp_head_units, dropout_rate=0.6)
- logits = layers.Dense(5750, activation='softmax')(features)
- model = keras.Model(inputs=inputs, outputs=logits)
-
- return model
-
- def predict(self,testsSamples):
- predictionProbabilty=self.transformer.predict(testsSamples)
- return predictionProbabilty
-
-class EnsembleModel():
- def __init__(self,classLabels,model1,model2,model3,model4):
- self.labels=classLabels
- self.model1 =model1
- self.model2 =model2
- self.model3 =model3
- self.model4 =model4
- def predict(self,testSample,):
- pred_prob1=self.model1.predict(testSample)
- pred_prob2=self.model2.predict(testSample)
- pred_prob3=self.model3.predict(testSample)
- pred_prob4=self.model4.predict(testSample)
- pred_sum=pred_prob1+pred_prob2+pred_prob3+pred_prob4
- print(pred_sum.shape)
- preds_classes_sum = np.argmax(pred_sum, axis=-1)
- total=sum(pred_sum[0])
- print(total)
- percentages=[x/total for x in pred_sum[0]]
- percentages=np.asarray(percentages)
- idx = np.argsort(pred_sum, axis=1)[:,-5:]
- print(pred_sum[0][idx])
- print(percentages[idx])
- return self.labels[preds_classes_sum][0],np.flip(self.labels[idx]),np.flip(percentages[idx])
-
-"""# Test
-
-"""
-
-faceModel1=FaceNet('MyEn3facenet.h5','facenetModel.h5')
-faceModel2=FaceNet('MyEn4facenet.h5','facenetModel.h5')
-transformerModel1=Transforemer('FirstTransformer3Ensamble1.h5')
-transformerModel2=Transforemer('FirstTransformer3Ensamble2.h5')
-
-Ensemble=EnsembleModel(classLabels,faceModel1,faceModel2,transformerModel1,transformerModel2)
-
-
-
-def grid_display(list_of_images, list_of_titles=[], no_of_columns=2, figsize=(10,10)):
-
- fig = plt.figure(figsize=figsize)
- column = 0
- for i in range(len(list_of_images)):
- column += 1
- # check for end of column and create a new figure
- if column == no_of_columns+1:
- fig = plt.figure(figsize=figsize)
- column = 1
- fig.add_subplot(1, no_of_columns, column)
- plt.imshow(list_of_images[i])
- plt.axis('off')
- if len(list_of_titles) >= len(list_of_images):
- plt.title(list_of_titles[i])
-
-def reconitionPipline(img,mask):
-
- im = Image.fromarray(img.astype('uint8'), 'RGB')
- im=np.array(im)
- im2= im[:,:,::-1].copy()
-
- if mask:
- im2=maskThisImages(im2)
- if len(im2)==0:
- im2=im.copy()
- im2= im2[:,:,::-1]
-
- im2= im2[:,:,::-1]
- temp=extract_face(im2)
- cropped = np.array(temp)
- open_cv_image = cropped[:, :, ::-1].copy()
-
- prediction,top5,percentage=Ensemble.predict(open_cv_image[None,...])
- return dict(zip(np.reshape(top5, -1), np.reshape(percentage, -1))),cropped
-
-with gr.Blocks() as demo:
- gr.HTML(
- """
-
-
-
-
- LFW-Masked Recognition
-
-
-
-             An AI model developed using an ensemble learning method
-             with a transformer and FaceNet to recognize celebrity classes in the LFW dataset (+5,700 classes)
-
-
-
- """
- )
- with gr.Row():
- with gr.Column():
- imagein = gr.Image(label='User-Input',interactive=True)
-
- with gr.Column():
- gr.Examples(['Donald_Rumsfeld_0084.jpg','James_Layug_0001.jpg','Laura_Elena_Harring_0001.jpg','Priyanka_Chopra_0001.jpg',
- 'Alexandra_Vodjanikova_0002.jpg','Nestor_Kirchner_0005.jpg','Gordon_Brown_0003.jpg',
- 'Arnold_Schwarzenegger_0038.jpg','Alejandro_Toledo_0004.jpg','George_W_Bush_0068.jpg'],inputs=imagein)
-
- with gr.Row():
- checkbox=gr.Checkbox(label='Mask the face')
- image_button = gr.Button("Submit")
-
- with gr.Row():
- mOut = gr.Image(type='numpy',label=' (Model-input)')
- label = gr.Label(num_top_classes=5)
-
-
- gr.Markdown("Made with 🖤 by Mohammed:Me.MohammedAlsinan@gmail.com & Aseel:A9eel.7neef@gmail.com
")
- image_button.click(fn=reconitionPipline,inputs=[imagein,checkbox],outputs=[label,mOut])
-demo.launch()
\ No newline at end of file
diff --git a/spaces/NATSpeech/PortaSpeech/mfa_usr/run_mfa_align.py b/spaces/NATSpeech/PortaSpeech/mfa_usr/run_mfa_align.py
deleted file mode 100644
index 9cda237ae71b61f03f86fe98fe8e2f7932a52bc7..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/mfa_usr/run_mfa_align.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import argparse
-import glob
-import importlib
-import os
-import subprocess
-from utils.commons.hparams import set_hparams, hparams
-from utils.commons.multiprocess_utils import multiprocess_run_tqdm
-from utils.os_utils import remove_file
-from utils.text.encoding import get_encoding
-
-
-def process_item(idx, txt_fn):
- base_fn = os.path.splitext(txt_fn)[0]
- basename = os.path.basename(base_fn)
- if os.path.exists(base_fn + '.wav'):
- wav_fn = base_fn + '.wav'
- elif os.path.exists(base_fn + '.mp3'):
- wav_fn = base_fn + '.mp3'
- else:
- return
- # process text
- encoding = get_encoding(txt_fn)
- with open(txt_fn, encoding=encoding) as f:
- txt_raw = " ".join(f.readlines()).strip()
- phs, _, phs_for_align, _ = preprocesser.process_text(txt_processor, txt_raw, hparams['preprocess_args'])
- os.makedirs(f'{mfa_process_dir}/{basename}', exist_ok=True)
- with open(f'{mfa_process_dir}/{basename}/{basename}.lab', 'w') as f:
- f.write(phs_for_align)
- # process wav
- new_wav_fn = preprocesser.process_wav(basename, wav_fn, mfa_process_dir, preprocess_args)
- subprocess.check_call(f'cp "{new_wav_fn}" "{mfa_process_dir}/{basename}/{basename}.wav"', shell=True)
-
-
-if __name__ == "__main__":
- set_hparams()
- parser = argparse.ArgumentParser(description='')
- parser.add_argument('--input_dir', type=str, default='', help='input dir')
- args, unknown = parser.parse_known_args()
- input_dir = args.input_dir
- processed_data_dir = hparams['processed_data_dir']
- preprocess_args = hparams['preprocess_args']
- preprocess_args['sox_to_wav'] = True
- preprocess_args['trim_all_sil'] = True
- # preprocess_args['trim_sil'] = True
- # preprocess_args['denoise'] = True
-
- pkg = ".".join(hparams["preprocess_cls"].split(".")[:-1])
- cls_name = hparams["preprocess_cls"].split(".")[-1]
- process_cls = getattr(importlib.import_module(pkg), cls_name)
- preprocesser = process_cls()
- txt_processor = preprocesser.txt_processor
- num_workers = int(os.getenv('N_PROC', os.cpu_count()))
-
- mfa_process_dir = f'{input_dir}/mfa_inputs'
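- # each utterance gets its own sub-directory under mfa_inputs, holding a .lab transcript and a .wav copy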
- remove_file(mfa_process_dir, f'{input_dir}/mfa_tmp')
- os.makedirs(mfa_process_dir, exist_ok=True)
- os.makedirs(f'{mfa_process_dir}/processed_tmp', exist_ok=True)
- for res in multiprocess_run_tqdm(
- process_item, list(enumerate(glob.glob(f'{input_dir}/*.txt')))):
- pass
- remove_file(f'{mfa_process_dir}/processed_tmp')
- subprocess.check_call(
- f'mfa align {mfa_process_dir} ' # process dir
- f'{hparams["processed_data_dir"]}/mfa_dict.txt ' # dict
- f'{input_dir}/mfa_model.zip ' # model
- f'{input_dir}/mfa_outputs -t {input_dir}/mfa_tmp -j {num_workers} '
- f' && cp -rf {input_dir}/mfa_outputs/*/* {input_dir}/'
- f' && cp -rf {mfa_process_dir}/*/* {input_dir}/'
- f' && rm -rf {input_dir}/mfa_tmp {input_dir}/mfa_outputs {mfa_process_dir}', # remove tmp dir
- shell=True)
diff --git a/spaces/Nultx/VITS-TTS/text/symbols.py b/spaces/Nultx/VITS-TTS/text/symbols.py
deleted file mode 100644
index 053a7105f7ce95aa51614f6995399fa2172b3eb2..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/text/symbols.py
+++ /dev/null
@@ -1,76 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-'''# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-'''
-
-'''# sanskrit_cleaners
-_pad = '_'
-_punctuation = '।'
-_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ '
-'''
-
-'''# cjks_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ '
-'''
-
-'''# thai_cleaners
-_pad = '_'
-_punctuation = '.!? '
-_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์'
-'''
-
-'''# cjke_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ '
-'''
-
-'''# shanghainese_cleaners
-_pad = '_'
-_punctuation = ',.!?…'
-_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 '
-'''
-
-'''# chinese_dialect_cleaners
-_pad = '_'
-_punctuation = ',.!?~…─'
-_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚ᴀᴇ↑↓∅ⱼ '
-'''
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/discriminative_reranking_nmt/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/discriminative_reranking_nmt/README.md
deleted file mode 100644
index b155e855f2f94e30ad22262f260008fda8ac1804..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/discriminative_reranking_nmt/README.md
+++ /dev/null
@@ -1,202 +0,0 @@
-# Discriminative Reranking for Neural Machine Translation
-https://aclanthology.org/2021.acl-long.563/
-
-This folder contains source code for training DrNMT, a discriminatively trained reranker for neural machine translation.
-
-## Data preparation
-1. Follow the instructions under `examples/translation` to build a base MT model. Prepare three files, one with source sentences, one with ground truth target sentences, and one with hypotheses generated from the base MT model. Each line in the file contains one sentence in raw text (i.e. no sentencepiece, etc.). Below is an example of the files with _N_ hypotheses for each source sentence.
-
-```
-# Example of the source sentence file: (The file should contain L lines.)
-
-source_sentence_1
-source_sentence_2
-source_sentence_3
-...
-source_sentence_L
-
-# Example of the target sentence file: (The file should contain L lines.)
-
-target_sentence_1
-target_sentence_2
-target_sentence_3
-...
-target_sentence_L
-
-# Example of the hypotheses file: (The file should contain L*N lines.)
-
-source_sentence_1_hypo_1
-source_sentence_1_hypo_2
-...
-source_sentence_1_hypo_N
-source_sentence_2_hypo_1
-...
-source_sentence_2_hypo_N
-...
-source_sentence_L_hypo_1
-...
-source_sentence_L_hypo_N
-```
-
-2. Download the [XLMR model](https://github.com/fairinternal/fairseq-py/tree/main/examples/xlmr#pre-trained-models).
-```
-wget https://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz
-tar zxvf xlmr.base.tar.gz
-
-# The folder should contain dict.txt, model.pt and sentencepiece.bpe.model.
-```
-
-3. Prepare scores and BPE data.
-* `N`: Number of hypotheses per source sentence. We use 50 in the paper.
-* `SPLIT`: Name of the data split, i.e. train, valid, test. Use split_name, split_name1, split_name2, ..., if there are multiple datasets for a split, e.g. train, train1, valid, valid1.
-* `NUM_SHARDS`: Number of shards. Set this to 1 for non-train splits.
-* `METRIC`: The metric for DrNMT to optimize for. We support either `bleu` or `ter`.
-```
-# For each data split, e.g. train, valid, test, etc., run the following:
-
-SOURCE_FILE=/path/to/source_sentence_file
-TARGET_FILE=/path/to/target_sentence_file
-HYPO_FILE=/path/to/hypo_file
-XLMR_DIR=/path/to/xlmr
-OUTPUT_DIR=/path/to/output
-
-python scripts/prep_data.py \
- --input-source ${SOURCE_FILE} \
- --input-target ${TARGET_FILE} \
- --input-hypo ${HYPO_FILE} \
- --output-dir ${OUTPUT_DIR} \
- --split $SPLIT \
- --beam $N \
- --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \
- --metric $METRIC \
- --num-shards ${NUM_SHARDS}
-
-# The script will create ${OUTPUT_DIR}/$METRIC with ${NUM_SHARDS} splits.
-# Under split*/input_src, split*/input_tgt and split*/$METRIC, there will be $SPLIT.bpe and $SPLIT.$METRIC files, respectively.
-
-```
-
-4. Pre-process the data into fairseq format.
-```
-# use comma to separate if there are more than one train or valid set
-for suffix in src tgt ; do
- fairseq-preprocess --only-source \
- --trainpref ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/train.bpe \
- --validpref ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/valid.bpe \
- --destdir ${OUTPUT_DIR}/$METRIC/split1/input_${suffix} \
- --workers 60 \
- --srcdict ${XLMR_DIR}/dict.txt
-done
-
-for i in `seq 2 ${NUM_SHARDS}`; do
- for suffix in src tgt ; do
- fairseq-preprocess --only-source \
- --trainpref ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix}/train.bpe \
- --destdir ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix} \
- --workers 60 \
- --srcdict ${XLMR_DIR}/dict.txt
-
- ln -s ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/valid* ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix}/.
- done
-
- ln -s ${OUTPUT_DIR}/$METRIC/split1/$METRIC/valid* ${OUTPUT_DIR}/$METRIC/split${i}/$METRIC/.
-done
-```
-
-## Training
-
-```
-EXP_DIR=/path/to/exp
-
-# An example of training the model with the config for De-En experiment in the paper.
-# The config uses 16 GPUs and 50 hypotheses.
-# For training with a smaller number of GPUs, set
-# distributed_training.distributed_world_size=k +optimization.update_freq='[x]' where x = 16/k
-# For training with a smaller number of hypotheses, set
-# task.mt_beam=N dataset.batch_size=N dataset.required_batch_size_multiple=N
-
-fairseq-hydra-train -m \
- --config-dir config/ --config-name deen \
- task.data=${OUTPUT_DIR}/$METRIC/split1/ \
- task.num_data_splits=${NUM_SHARDS} \
- model.pretrained_model=${XLMR_DIR}/model.pt \
- common.user_dir=${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \
- checkpoint.save_dir=${EXP_DIR}
-
-```
-
-## Inference & scoring
-Perform DrNMT reranking (fw + reranker score)
-1. Tune weights on valid sets.
-```
-# generate N hypotheses with the base MT model (fw score)
-VALID_SOURCE_FILE=/path/to/source_sentences # one sentence per line, converted to the sentencepiece used by the base MT model
-VALID_TARGET_FILE=/path/to/target_sentences # one sentence per line in raw text, i.e. no sentencepiece and tokenization
-MT_MODEL=/path/to/mt_model
-MT_DATA_PATH=/path/to/mt_data
-
-cat ${VALID_SOURCE_FILE} | \
- fairseq-interactive ${MT_DATA_PATH} \
- --max-tokens 4000 --buffer-size 16 \
- --num-workers 32 --path ${MT_MODEL} \
- --beam $N --nbest $N \
- --post-process sentencepiece &> valid-hypo.out
-
-# replace "bleu" with "ter" to optimize for TER
-python drnmt_rerank.py \
- ${OUTPUT_DIR}/$METRIC/split1/ \
- --path ${EXP_DIR}/checkpoint_best.pt \
- --in-text valid-hypo.out \
- --results-path ${EXP_DIR} \
- --gen-subset valid \
- --target-text ${VALID_TARGET_FILE} \
- --user-dir ${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \
- --bpe sentencepiece \
- --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \
- --beam $N \
- --batch-size $N \
- --metric bleu \
- --tune
-
-```
-
-2. Apply best weights on test sets
-```
-# generate N hypotheses with the base MT model (fw score)
-TEST_SOURCE_FILE=/path/to/source_sentences # one sentence per line, converted to the sentencepiece used by the base MT model
-
-cat ${TEST_SOURCE_FILE} | \
- fairseq-interactive ${MT_DATA_PATH} \
- --max-tokens 4000 --buffer-size 16 \
- --num-workers 32 --path ${MT_MODEL} \
- --beam $N --nbest $N \
- --post-process sentencepiece &> test-hypo.out
-
-# replace "bleu" with "ter" to evaluate TER
-# Add --target-text for evaluating BLEU/TER,
-# otherwise the script will only generate the hypotheses with the highest scores.
-python drnmt_rerank.py \
- ${OUTPUT_DIR}/$METRIC/split1/ \
- --path ${EXP_DIR}/checkpoint_best.pt \
- --in-text test-hypo.out \
- --results-path ${EXP_DIR} \
- --gen-subset test \
- --user-dir ${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \
- --bpe sentencepiece \
- --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \
- --beam $N \
- --batch-size $N \
- --metric bleu \
- --fw-weight ${BEST_FW_WEIGHT} \
- --lenpen ${BEST_LENPEN}
-```
-
-## Citation
-```bibtex
-@inproceedings{lee2021discriminative,
- title={Discriminative Reranking for Neural Machine Translation},
- author={Lee, Ann and Auli, Michael and Ranzato, Marc'Aurelio},
- booktitle={ACL},
- year={2021}
-}
-```
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/process_data/clean_histogram.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/process_data/clean_histogram.py
deleted file mode 100644
index e24e073dc0eb43c76e2ce717f52bb848c5b026b8..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/process_data/clean_histogram.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import argparse
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--src', type=str, help='Source language')
-parser.add_argument('--tgt', type=str, help='Target language')
-parser.add_argument('--src-file', type=str, help='Input source file')
-parser.add_argument('--tgt-file', type=str, help='Input target file')
-parser.add_argument('--src-output-file', type=str, help='Output source file')
-parser.add_argument('--tgt-output-file', type=str, help='Output target file')
-parser.add_argument('--threshold', type=float, default=0.5, help='Threshold')
-parser.add_argument('--threshold-character', type=str, default=']', help='Threshold character')
-parser.add_argument('--histograms', type=str, help='Path to histograms')
-
-args = parser.parse_args()
-
-
-def read_hist(f):
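- # take the first character of each line until the threshold character is reached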
- ch = []
- for line in f:
- c = line[0]
- if c == args.threshold_character:
- break
- ch.append(c)
- return ch
-
-
-with(open("{}/{}".format(args.histograms, args.src), 'r', encoding='utf8')) as f:
- ch1 = read_hist(f)
-
-with(open("{}/{}".format(args.histograms, args.tgt), 'r', encoding='utf8')) as f:
- ch2 = read_hist(f)
-
-print("Accepted characters for {}: {}".format(args.src, ch1))
-print("Accepted characters for {}: {}".format(args.tgt, ch2))
-
-with open(args.src_file, 'r', encoding='utf8') as fs1, open(args.tgt_file, 'r', encoding='utf8') as fs2, open(args.src_output_file, 'w', encoding='utf8') as fos1, open(args.tgt_output_file, 'w', encoding='utf8') as fos2:
- ls1 = fs1.readline()
- ls2 = fs2.readline()
-
- while ls1 or ls2:
- cnt1 = len([c for c in ls1.strip() if c in ch1])
- cnt2 = len([c for c in ls2.strip() if c in ch2])
-
- if cnt1 / len(ls1) > args.threshold and cnt2 / len(ls2) > args.threshold:
- fos1.write(ls1)
- fos2.write(ls2)
- else:
- print("{} {} {} \n{} {} {}".format(args.src, cnt1 / len(ls1), ls1.strip(), args.tgt, cnt2 / len(ls2), ls2.strip()))
-
- ls1 = fs1.readline()
- ls2 = fs2.readline()
-
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py
deleted file mode 100644
index 2e31c307bd67d10941150160c7fb8c9e085ac5d9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import sys
-
-from g2p_en import G2p
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--compact",
- action="store_true",
- help="if set, compacts phones",
- )
- args = parser.parse_args()
-
- compact = args.compact
-
- wrd_to_phn = {}
- g2p = G2p()
- for line in sys.stdin:
- words = line.strip().split()
- phones = []
- for w in words:
- if w not in wrd_to_phn:
- wrd_to_phn[w] = g2p(w)
- if compact:
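- # drop the trailing stress digit (e.g. "AH0" -> "AH") to shrink the phone set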
- wrd_to_phn[w] = [
- p[:-1] if p[-1].isnumeric() else p for p in wrd_to_phn[w]
- ]
- phones.extend(wrd_to_phn[w])
- try:
- print(" ".join(phones))
- except:
- print(wrd_to_phn, words, phones, file=sys.stderr)
- raise
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/byte_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/byte_utils.py
deleted file mode 100644
index a305c080926c2d094b7e8ae48f5331da82025a75..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/byte_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import re
-
-
-WHITESPACE_NORMALIZER = re.compile(r"\s+")
-SPACE = chr(32)
-SPACE_ESCAPE = chr(9601)
-# excluding non-breaking space (160) here
-PRINTABLE_LATIN = set(
- list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1))
-)
-BYTE_TO_BCHAR = {
- b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256)
-}
-BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()}
-
-
-def byte_encode(x: str) -> str:
- normalized = WHITESPACE_NORMALIZER.sub(SPACE, x)
- return "".join([BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8")])
-
-
-def byte_decode(x: str) -> str:
- try:
- return bytes([BCHAR_TO_BYTE[bc] for bc in x]).decode("utf-8")
- except ValueError:
- return ""
-
-
-def smart_byte_decode(x: str) -> str:
- output = byte_decode(x)
- if output == "":
- # DP the best recovery (max valid chars) if it's broken
- n_bytes = len(x)
- f = [0 for _ in range(n_bytes + 1)]
- pt = [0 for _ in range(n_bytes + 1)]
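- # f[i]: max number of valid characters decodable from the first i bytes
- # pt[i]: backpointer to where the chunk (or skipped byte) ending at position i starts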
- for i in range(1, n_bytes + 1):
- f[i], pt[i] = f[i - 1], i - 1
- for j in range(1, min(4, i) + 1):
- if f[i - j] + 1 > f[i] and len(byte_decode(x[i - j : i])) > 0:
- f[i], pt[i] = f[i - j] + 1, i - j
- cur_pt = n_bytes
- while cur_pt > 0:
- if f[cur_pt] == f[pt[cur_pt]] + 1:
- output = byte_decode(x[pt[cur_pt] : cur_pt]) + output
- cur_pt = pt[cur_pt]
- return output
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/translation_lev.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/translation_lev.py
deleted file mode 100644
index 041279305dc4978f6a3a4178c5ec4c72c5fb2b5c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/translation_lev.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-import torch
-from fairseq import utils
-from fairseq.data import LanguagePairDataset
-from fairseq.dataclass import ChoiceEnum
-from fairseq.tasks import register_task
-from fairseq.tasks.translation import TranslationConfig, TranslationTask, load_langpair_dataset
-from fairseq.utils import new_arange
-
-
-NOISE_CHOICES = ChoiceEnum(["random_delete", "random_mask", "no_noise", "full_mask"])
-
-@dataclass
-class TranslationLevenshteinConfig(TranslationConfig):
- noise: NOISE_CHOICES = field(
- default="random_delete",
- metadata={
- "help": "type of noise"
- },
- )
-
-@register_task("translation_lev", dataclass=TranslationLevenshteinConfig)
-class TranslationLevenshteinTask(TranslationTask):
- """
- Translation (Sequence Generation) task for Levenshtein Transformer
- See `"Levenshtein Transformer" `_.
- """
-
- cfg: TranslationLevenshteinConfig
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- paths = utils.split_paths(self.cfg.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-
- # infer langcode
- src, tgt = self.cfg.source_lang, self.cfg.target_lang
-
- self.datasets[split] = load_langpair_dataset(
- data_path,
- split,
- src,
- self.src_dict,
- tgt,
- self.tgt_dict,
- combine=combine,
- dataset_impl=self.cfg.dataset_impl,
- upsample_primary=self.cfg.upsample_primary,
- left_pad_source=self.cfg.left_pad_source,
- left_pad_target=self.cfg.left_pad_target,
- max_source_positions=self.cfg.max_source_positions,
- max_target_positions=self.cfg.max_target_positions,
- prepend_bos=True,
- )
-
- def inject_noise(self, target_tokens):
- def _random_delete(target_tokens):
- pad = self.tgt_dict.pad()
- bos = self.tgt_dict.bos()
- eos = self.tgt_dict.eos()
-
- max_len = target_tokens.size(1)
- target_mask = target_tokens.eq(pad)
- target_score = target_tokens.clone().float().uniform_()
- target_score.masked_fill_(
- target_tokens.eq(bos) | target_tokens.eq(eos), 0.0
- )
- target_score.masked_fill_(target_mask, 1)
- target_score, target_rank = target_score.sort(1)
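- # sorting pushes bos/eos (score 0) to the front and padding (score 1) to the back,
- # so the random cutoff below always keeps bos/eos and drops a random subset of real tokens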
- target_length = target_mask.size(1) - target_mask.float().sum(
- 1, keepdim=True
- )
-
- # do not delete <bos> and <eos> (we assign 0 score for them)
- target_cutoff = (
- 2
- + (
- (target_length - 2)
- * target_score.new_zeros(target_score.size(0), 1).uniform_()
- ).long()
- )
- target_cutoff = target_score.sort(1)[1] >= target_cutoff
-
- prev_target_tokens = (
- target_tokens.gather(1, target_rank)
- .masked_fill_(target_cutoff, pad)
- .gather(1, target_rank.masked_fill_(target_cutoff, max_len).sort(1)[1])
- )
- prev_target_tokens = prev_target_tokens[
- :, : prev_target_tokens.ne(pad).sum(1).max()
- ]
-
- return prev_target_tokens
-
- def _random_mask(target_tokens):
- pad = self.tgt_dict.pad()
- bos = self.tgt_dict.bos()
- eos = self.tgt_dict.eos()
- unk = self.tgt_dict.unk()
-
- target_masks = (
- target_tokens.ne(pad) & target_tokens.ne(bos) & target_tokens.ne(eos)
- )
- target_score = target_tokens.clone().float().uniform_()
- target_score.masked_fill_(~target_masks, 2.0)
- target_length = target_masks.sum(1).float()
- target_length = target_length * target_length.clone().uniform_()
- target_length = target_length + 1 # make sure to mask at least one token.
-
- _, target_rank = target_score.sort(1)
- target_cutoff = new_arange(target_rank) < target_length[:, None].long()
- prev_target_tokens = target_tokens.masked_fill(
- target_cutoff.scatter(1, target_rank, target_cutoff), unk
- )
- return prev_target_tokens
-
- def _full_mask(target_tokens):
- pad = self.tgt_dict.pad()
- bos = self.tgt_dict.bos()
- eos = self.tgt_dict.eos()
- unk = self.tgt_dict.unk()
-
- target_mask = (
- target_tokens.eq(bos) | target_tokens.eq(eos) | target_tokens.eq(pad)
- )
- return target_tokens.masked_fill(~target_mask, unk)
-
- if self.cfg.noise == "random_delete":
- return _random_delete(target_tokens)
- elif self.cfg.noise == "random_mask":
- return _random_mask(target_tokens)
- elif self.cfg.noise == "full_mask":
- return _full_mask(target_tokens)
- elif self.cfg.noise == "no_noise":
- return target_tokens
- else:
- raise NotImplementedError
-
- def build_generator(self, models, args, **unused):
- # add models input to match the API for SequenceGenerator
- from fairseq.iterative_refinement_generator import IterativeRefinementGenerator
-
- return IterativeRefinementGenerator(
- self.target_dictionary,
- eos_penalty=getattr(args, "iter_decode_eos_penalty", 0.0),
- max_iter=getattr(args, "iter_decode_max_iter", 10),
- beam_size=getattr(args, "iter_decode_with_beam", 1),
- reranking=getattr(args, "iter_decode_with_external_reranker", False),
- decoding_format=getattr(args, "decoding_format", None),
- adaptive=not getattr(args, "iter_decode_force_max_iter", False),
- retain_history=getattr(args, "retain_iter_history", False),
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- if constraints is not None:
- # Though see Susanto et al. (ACL 2020): https://www.aclweb.org/anthology/2020.acl-main.325/
- raise NotImplementedError(
- "Constrained decoding with the translation_lev task is not supported"
- )
-
- return LanguagePairDataset(
- src_tokens, src_lengths, self.source_dictionary, append_bos=True
- )
-
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False
- ):
- model.train()
- sample["prev_target"] = self.inject_noise(sample["target"])
- loss, sample_size, logging_output = criterion(model, sample)
- if ignore_grad:
- loss *= 0
- optimizer.backward(loss)
- return loss, sample_size, logging_output
-
- def valid_step(self, sample, model, criterion):
- model.eval()
- with torch.no_grad():
- sample["prev_target"] = self.inject_noise(sample["target"])
- loss, sample_size, logging_output = criterion(model, sample)
- return loss, sample_size, logging_output
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/constrained_decoding/normalize.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/constrained_decoding/normalize.py
deleted file mode 100644
index 4ae2b5111ba025acb9e1613865c92fdc339a58d5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/constrained_decoding/normalize.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-from sacremoses.normalize import MosesPunctNormalizer
-
-
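-# Reads text from stdin and writes punctuation-normalized text to stdout, e.g.:
-#   cat raw.txt | python normalize.py --lang en > normalized.txt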
-def main(args):
- normalizer = MosesPunctNormalizer(lang=args.lang, penn=args.penn)
- for line in sys.stdin:
- print(normalizer.normalize(line.rstrip()), flush=True)
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--lang", "-l", default="en")
- parser.add_argument("--penn", "-p", action="store_true")
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/README.md
deleted file mode 100644
index 4a3ae54b857c43621c9fb67ee4b214584beec835..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_synthesis/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
-Speech Synthesis (S^2)
-===
-
-Speech synthesis with fairseq.
-
-- Autoregressive and non-autoregressive models
-- Multi-speaker synthesis
-- Audio preprocessing
-- Automatic metrics
-- Similar data configuration as [S2T](../speech_to_text/README.md)
-
-
-## Examples
-- [Single-speaker synthesis on LJSpeech](docs/ljspeech_example.md)
-- [Multi-speaker synthesis on VCTK](docs/vctk_example.md)
-- [Multi-speaker synthesis on Common Voice](docs/common_voice_example.md)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
deleted file mode 100644
index 17387b2f85c0ee76db1a003091331b46de8d8def..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import hashlib
-import logging
-import math
-
-import numpy as np
-from fairseq.data import SampledMultiDataset
-
-from .sampled_multi_dataset import CollateFormat, default_virtual_size_func
-
-
-logger = logging.getLogger(__name__)
-
-
-class SampledMultiEpochDataset(SampledMultiDataset):
- """Samples from multiple sub-datasets according to sampling ratios
- using virtual epoch sizes to speed up dataloading.
- Args:
- datasets (
- List[~torch.utils.data.Dataset]
- or OrderedDict[str, ~torch.utils.data.Dataset]
- ): datasets
- sampling_ratios (List[float]): list of probabilities of each dataset being sampled
- (default: None, which corresponds to concatenating all datasets together).
- seed (int): RNG seed to use (default: 2).
- epoch (int): starting epoch number (default: 1).
- eval_key (str, optional): a key used at evaluation time that causes
- this instance to pass-through batches from *datasets[eval_key]*.
- collate_format (CollateFormat): collater output format, either CollateFormat.ordered_dict or
- CollateFormat.single (default: CollateFormat.single) where CollateFormat.single configures
- the collater to output batches of data mixed from all sub-datasets,
- and CollateFormat.ordered_dict configures the collater to output a dictionary of batches indexed by keys
- of sub-datasets.
- Note that not all sub-datasets will be present in a single batch in both formats.
- virtual_size (int, or callable): the expected virtual size of the dataset (default: default_virtual_size_func).
- split (str): the split of the data, e.g. 'train', 'valid' or 'test'.
- virtual_epoch_size (int): virtual epoch size; the dataset iterates through the data one
- virtual epoch at a time to speed up data loading, e.g. indexing and filtering
- can be performed whenever a virtual epoch is loaded, without waiting for the whole dataset to be loaded.
- shared_collater (bool): whether or not all sub-datasets share the same collater.
- shard_epoch (int): the real epoch number for shard selection.
- shuffle (bool): whether or not to shuffle data (default: True).
- """
-
- def __init__(
- self,
- datasets,
- sampling_ratios=None,
- seed=2,
- epoch=1,
- eval_key=None,
- collate_format=CollateFormat.single,
- virtual_size=default_virtual_size_func,
- split="",
- virtual_epoch_size=None,
- shared_collater=False,
- shard_epoch=1,
- shuffle=True,
- ):
- self.virtual_epoch_size = virtual_epoch_size
- self._current_epoch_start_index = None
- self._random_global_indices = None
- self.shard_epoch = shard_epoch if shard_epoch is not None else 1
- self.load_next_shard = None
- self._epoch_sizes = None
- super().__init__(
- datasets=datasets,
- sampling_ratios=sampling_ratios,
- seed=seed,
- epoch=epoch,
- eval_key=eval_key,
- collate_format=collate_format,
- virtual_size=virtual_size,
- split=split,
- shared_collater=shared_collater,
- shuffle=shuffle,
- )
-
- def _setup(self, epoch):
- self.virtual_epoch_size = (
- self.virtual_epoch_size
- if self.virtual_epoch_size is not None
- else self.virtual_size
- )
- if self.virtual_epoch_size > self.virtual_size:
- logger.warning(
- f"virtual epoch size {self.virtual_epoch_size} "
- f"is greater than virtual dataset size {self.virtual_size}"
- )
- self.virtual_epoch_size = self.virtual_size
- self.num_virtual_epochs = math.ceil(self.virtual_size / self.virtual_epoch_size)
- self._current_epoch_start_index = self._get_epoch_start_index(epoch)
- logger.info(
- f"virtual epoch size {self.virtual_epoch_size}; virtual dataset size {self.virtual_size}"
- )
-
- def _map_epoch_index_to_global(self, index):
- index = self._current_epoch_start_index + index
- # add randomness
- return self._random_global_indices[index]
-
- @property
- def sizes(self):
- if self._epoch_sizes is not None:
- return self._epoch_sizes
- _sizes = super().sizes
- indices = self._random_global_indices[
- self._current_epoch_start_index : self._current_epoch_start_index
- + len(self)
- ]
- self._epoch_sizes = _sizes[indices]
- # del super()._sizes to save memory
- del self._sizes
- self._sizes = None
- return self._epoch_sizes
-
- def _get_dataset_and_index(self, index):
- i = self._map_epoch_index_to_global(index)
- return super()._get_dataset_and_index(i)
-
- def __len__(self):
- return (
- self.virtual_epoch_size
- if self._current_epoch_start_index + self.virtual_epoch_size
- < self.virtual_size
- else self.virtual_size - self._current_epoch_start_index
- )
-
- def set_epoch(self, epoch):
- if self._current_epoch_start_index is None:
- # initializing epoch indices of a virtual dataset
- self._setup(epoch)
- self._next_virtual_epoch(epoch)
- else:
- # working on already initialized epoch indices
- if epoch == self._cur_epoch:
- # re-enter so return
- return
- self._next_virtual_epoch(epoch)
-
- def _get_epoch_start_index(self, epoch):
- assert epoch >= 1 # fairseq is using 1-based epoch everywhere
- return ((epoch - 1) % self.num_virtual_epochs) * self.virtual_epoch_size
-
- def _next_global_indices(self, epoch):
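- # derive a deterministic permutation of all virtual indices from the class name, global seed and epoch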
- rng = np.random.RandomState(
- [
- int(
- hashlib.sha1(
- str(self.__class__.__name__).encode("utf-8")
- ).hexdigest(),
- 16,
- )
- % (2 ** 32),
- self.seed % (2 ** 32), # global seed
- epoch, # epoch index,
- ]
- )
- del self._random_global_indices
- self._random_global_indices = rng.choice(
- self.virtual_size, self.virtual_size, replace=False
- )
- if self.load_next_shard is None:
- self.load_next_shard = False
- else:
- # increase shard epoch for next loading
- self.shard_epoch += 1
- self.load_next_shard = True
- logger.info(
- "to load next epoch/shard in next load_dataset: "
- f"epoch={epoch}/shard_epoch={self.shard_epoch}"
- )
-
- def _next_virtual_epoch(self, epoch):
- index = self._get_epoch_start_index(epoch)
- if index == 0 or self._random_global_indices is None:
- # need to start from the beginning,
- # so call super().set_epoch(epoch) to establish the global virtual indices
- logger.info(
- "establishing a new set of global virtual indices for "
- f"epoch={epoch}/shard_epoch={self.shard_epoch}"
- )
- super().set_epoch(epoch)
- self._next_global_indices(epoch)
- else:
- self._cur_epoch = epoch
-
- # reset cache sizes and ordered_indices for the epoch after moving to a new epoch
- self._clean_if_not_none(
- [
- self._epoch_sizes,
- ]
- )
- self._epoch_sizes = None
- self._current_epoch_start_index = index
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/translation_from_pretrained_bart.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/translation_from_pretrained_bart.py
deleted file mode 100644
index 0fd7a5b29f0e34699b5d5ef7574bc39b8c6052c9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/translation_from_pretrained_bart.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-from fairseq.data import LanguagePairDataset
-
-from . import register_task
-from .translation import TranslationTask, load_langpair_dataset
-
-
-@register_task("translation_from_pretrained_bart")
-class TranslationFromPretrainedBARTTask(TranslationTask):
- """
- Translate from source language to target language with a model initialized with a multilingual pretrain.
-
- Args:
- src_dict (~fairseq.data.Dictionary): dictionary for the source language
- tgt_dict (~fairseq.data.Dictionary): dictionary for the target language
-
- .. note::
-
- The translation task is compatible with :mod:`fairseq-train`,
- :mod:`fairseq-generate` and :mod:`fairseq-interactive`.
-
- The translation task provides the following additional command-line
- arguments:
-
- .. argparse::
- :ref: fairseq.tasks.translation_parser
- :prog:
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- # fmt: off
- TranslationTask.add_args(parser)
- parser.add_argument('--langs', type=str, metavar='LANG',
- help='comma-separated list of monolingual language, '
- 'for example, "en,de,fr". These should match the '
- 'langs from pretraining (and be in the same order). '
- 'You should always add all pretraining language idx '
- 'during finetuning.')
- parser.add_argument('--prepend-bos', action='store_true',
- help='prepend bos token to each sentence, which matches '
- 'mBART pretraining')
- # fmt: on
-
- def __init__(self, args, src_dict, tgt_dict):
- super().__init__(args, src_dict, tgt_dict)
- self.langs = args.langs.split(",")
- for d in [src_dict, tgt_dict]:
- for l in self.langs:
- d.add_symbol("[{}]".format(l))
- d.add_symbol("")
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-
- # infer langcode
- src, tgt = self.args.source_lang, self.args.target_lang
-
- self.datasets[split] = load_langpair_dataset(
- data_path,
- split,
- src,
- self.src_dict,
- tgt,
- self.tgt_dict,
- combine=combine,
- dataset_impl=self.args.dataset_impl,
- upsample_primary=self.args.upsample_primary,
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- max_source_positions=getattr(self.args, "max_source_positions", 1024),
- max_target_positions=getattr(self.args, "max_target_positions", 1024),
- load_alignments=self.args.load_alignments,
- prepend_bos=getattr(self.args, "prepend_bos", False),
- append_source_id=True,
- )
-
- def build_generator(self, models, args, **unused):
- if getattr(args, "score_reference", False):
- from fairseq.sequence_scorer import SequenceScorer
-
- return SequenceScorer(
- self.target_dictionary,
- eos=self.tgt_dict.index("[{}]".format(self.args.target_lang)),
- )
- else:
- from fairseq.sequence_generator import SequenceGenerator
-
- return SequenceGenerator(
- models,
- self.target_dictionary,
- beam_size=getattr(args, "beam", 5),
- max_len_a=getattr(args, "max_len_a", 0),
- max_len_b=getattr(args, "max_len_b", 200),
- min_len=getattr(args, "min_len", 1),
- normalize_scores=(not getattr(args, "unnormalized", False)),
- len_penalty=getattr(args, "lenpen", 1),
- unk_penalty=getattr(args, "unkpen", 0),
- temperature=getattr(args, "temperature", 1.0),
- match_source_len=getattr(args, "match_source_len", False),
- no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0),
- eos=self.tgt_dict.index("[{}]".format(self.args.target_lang)),
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- src_lang_id = self.source_dictionary.index("[{}]".format(self.args.source_lang))
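- # mBART expects the corresponding language-id token appended to each source sentence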
- source_tokens = []
- for s_t in src_tokens:
- s_t = torch.cat([s_t, s_t.new(1).fill_(src_lang_id)])
- source_tokens.append(s_t)
- dataset = LanguagePairDataset(
- source_tokens,
- src_lengths,
- self.source_dictionary,
- tgt_dict=self.target_dictionary,
- constraints=constraints,
- )
- return dataset
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/count_docs.py b/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/count_docs.py
deleted file mode 100644
index 58d85af85e91377a34dbd01f7674436152fd08e8..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/count_docs.py
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Count the number of documents and average number of lines and tokens per
-document in a large file. Documents should be separated by a single empty line.
-"""
-
-import argparse
-import gzip
-import sys
-
-import numpy as np
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("input")
- parser.add_argument("--gzip", action="store_true")
- args = parser.parse_args()
-
- def gopen():
- if args.gzip:
- return gzip.open(args.input, "r")
- else:
- return open(args.input, "r", encoding="utf-8")
-
- num_lines = []
- num_toks = []
- with gopen() as h:
- num_docs = 1
- num_lines_in_doc = 0
- num_toks_in_doc = 0
- for i, line in enumerate(h):
- if len(line.strip()) == 0: # empty line indicates new document
- num_docs += 1
- num_lines.append(num_lines_in_doc)
- num_toks.append(num_toks_in_doc)
- num_lines_in_doc = 0
- num_toks_in_doc = 0
- else:
- num_lines_in_doc += 1
- num_toks_in_doc += len(line.rstrip().split())
- if i % 1000000 == 0:
- print(i, file=sys.stderr, end="", flush=True)
- elif i % 100000 == 0:
- print(".", file=sys.stderr, end="", flush=True)
- print(file=sys.stderr, flush=True)
-
- print("found {} docs".format(num_docs))
- print("average num lines per doc: {}".format(np.mean(num_lines)))
- print("average num toks per doc: {}".format(np.mean(num_toks)))
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OldP1ng/QQsign/Dockerfile b/spaces/OldP1ng/QQsign/Dockerfile
deleted file mode 100644
index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000
--- a/spaces/OldP1ng/QQsign/Dockerfile
+++ /dev/null
@@ -1,23 +0,0 @@
-FROM openjdk:17-slim
-
-# Set the time zone
-ENV TZ Asia/Shanghai
-
-# Set the working directory
-WORKDIR /app
-
-# Copy files into the working directory
-COPY bin /app/bin
-COPY lib /app/lib
-COPY txlib /app/txlib
-
-# Set up permissions and configuration
-RUN chmod -R 777 /tmp
-RUN chmod -R 777 /app
-RUN sed -i 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json
-
-# Run
-CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION
-
-# Expose the port
-EXPOSE 7860
\ No newline at end of file
diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/README.md b/spaces/Olivier-Truong/faster-whisper-webui-v2/README.md
deleted file mode 100644
index 8c7ac56c090aac4236dc807e6385e43851e818a9..0000000000000000000000000000000000000000
--- a/spaces/Olivier-Truong/faster-whisper-webui-v2/README.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-title: Faster Whisper Webui
-emoji: 🚀
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-# Running Locally
-
-To run this program locally, first install Python 3.9+ and Git. Then install Pytorch 10.1+ and all the other dependencies:
-```
-pip install -r requirements.txt
-```
-
-You can find detailed instructions for how to install this on Windows 10/11 [here (PDF)](docs/windows/install_win10_win11.pdf).
-
-Finally, run the full version (no audio length restrictions) of the app with parallel CPU/GPU enabled:
-```
-python app.py --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True
-```
-
-You can also run the CLI interface, which is similar to Whisper's own CLI but also supports the following additional arguments:
-```
-python cli.py \
-[--vad {none,silero-vad,silero-vad-skip-gaps,silero-vad-expand-into-gaps,periodic-vad}] \
-[--vad_merge_window VAD_MERGE_WINDOW] \
-[--vad_max_merge_size VAD_MAX_MERGE_SIZE] \
-[--vad_padding VAD_PADDING] \
-[--vad_prompt_window VAD_PROMPT_WINDOW]
-[--vad_cpu_cores NUMBER_OF_CORES]
-[--vad_parallel_devices COMMA_DELIMITED_DEVICES]
-[--auto_parallel BOOLEAN]
-```
-You can also use URLs in addition to file paths as input.
-```
-python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM"
-```
-
-Rather than supplying arguments to `app.py` or `cli.py`, you can also use the configuration file [config.json5](config.json5). See that file for more information.
-If you want to use a different configuration file, you can use the `WHISPER_WEBUI_CONFIG` environment variable to specify the path to another file.
-
-### Multiple Files
-
-You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube.
-Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section.
-When more than one file is processed, the UI will also generate an "All_Output" zip file containing all the text output files.
-
-## Diarization
-
-To detect different speakers in the audio, you can use the [whisper-diarization](https://gitlab.com/aadnk/whisper-diarization) application.
-
-Download the JSON file after running Whisper on an audio file, and then run app.py in the
-whisper-diarization repository with the audio file and the JSON file as arguments.
-
-## Whisper Implementation
-
-You can choose between using `whisper` or `faster-whisper`. [Faster Whisper](https://github.com/guillaumekln/faster-whisper) is a drop-in replacement for the
-default Whisper that achieves up to a 4x speedup and a 2x reduction in memory usage.
-
-You can install the requirements for a specific Whisper implementation in `requirements-fasterWhisper.txt`
-or `requirements-whisper.txt`:
-```
-pip install -r requirements-fasterWhisper.txt
-```
-And then run the App or the CLI with the `--whisper_implementation faster-whisper` flag:
-```
-python app.py --whisper_implementation faster-whisper --input_audio_max_duration -1 --server_name 127.0.0.1 --server_port 7860 --auto_parallel True
-```
-You can also select the whisper implementation in `config.json5`:
-```json5
-{
- "whisper_implementation": "faster-whisper"
-}
-```
-### GPU Acceleration
-
-In order to use GPU acceleration with Faster Whisper, both CUDA 11.2 and cuDNN 8 must be installed. You may want to install it in a virtual environment like Anaconda.
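-
-For example, one way to get both is through a conda environment. The package names and versions below are assumptions, so adjust them to match your GPU driver:
-```
-conda create -n whisper-webui python=3.10
-conda activate whisper-webui
-# CUDA toolkit and cuDNN from conda-forge (assumed versions)
-conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
-```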
-
-## Google Colab
-
-You can also run this Web UI directly on [Google Colab](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing), if you haven't got a GPU powerful enough to run the larger models.
-
-See the [colab documentation](docs/colab.md) for more information.
-
-## Parallel Execution
-
-You can also run both the Web-UI or the CLI on multiple GPUs in parallel, using the `vad_parallel_devices` option. This takes a comma-delimited list of
-device IDs (0, 1, etc.) that Whisper should be distributed to and run on concurrently:
-```
-python cli.py --model large --vad silero-vad --language Japanese \
---vad_parallel_devices 0,1 "https://www.youtube.com/watch?v=4cICErqqRSM"
-```
-
-Note that this requires a VAD to function properly; otherwise only the first GPU will be used. You could use `periodic-vad` to avoid taking the hit
-of running Silero VAD, at a slight cost to accuracy.
-
-This is achieved by creating N child processes (where N is the number of selected devices), where Whisper is run concurrently. In `app.py`, you can also
-set the `vad_process_timeout` option. This configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory.
-The default value is 30 minutes.
-
-```
-python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600
-```
-
-To execute the Silero VAD itself in parallel, use the `vad_cpu_cores` option:
-```
-python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 --vad_cpu_cores 4
-```
-
-You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to always free video memory after a period of time.
-
-### Auto Parallel
-
-You can also set `auto_parallel` to `True`. This will set `vad_parallel_devices` to use all the GPU devices on the system, and `vad_cpu_cores` to be equal to the number of
-cores (up to 8):
-```
-python app.py --input_audio_max_duration -1 --auto_parallel True
-```
-
-# Docker
-
-To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU.
-Then either use the GitLab hosted container below, or check out this repository and build an image:
-```
-sudo docker build -t whisper-webui:1 .
-```
-
-You can then start the WebUI with GPU support like so:
-```
-sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1
-```
-
-Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only:
-```
-sudo docker run -d -p 7860:7860 whisper-webui:1
-```
-
-# GitLab Docker Registry
-
-This Docker container is also hosted on GitLab:
-
-```
-sudo docker run -d --gpus=all -p 7860:7860 registry.gitlab.com/aadnk/whisper-webui:latest
-```
-
-## Custom Arguments
-
-You can also pass custom arguments to `app.py` in the Docker container, for instance to be able to use all the GPUs in parallel (replace administrator with your user):
-```
-sudo docker run -d --gpus all -p 7860:7860 \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
---mount type=bind,source=/home/administrator/.cache/huggingface,target=/root/.cache/huggingface \
---restart=on-failure:15 registry.gitlab.com/aadnk/whisper-webui:latest \
-app.py --input_audio_max_duration -1 --server_name 0.0.0.0 --auto_parallel True \
---default_vad silero-vad --default_model_name large
-```
-
-You can also call `cli.py` the same way:
-```
-sudo docker run --gpus all \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
---mount type=bind,source=/home/administrator/.cache/huggingface,target=/root/.cache/huggingface \
---mount type=bind,source=${PWD},target=/app/data \
-registry.gitlab.com/aadnk/whisper-webui:latest \
-cli.py --model large --auto_parallel True --vad silero-vad \
---output_dir /app/data /app/data/YOUR-FILE-HERE.mp4
-```
-
-## Caching
-
-Note that the models themselves are currently not included in the Docker images, and will be downloaded on demand.
-To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally)
-prepopulate the directory with the different Whisper models.
-```
-sudo docker run -d --gpus=all -p 7860:7860 \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
-registry.gitlab.com/aadnk/whisper-webui:latest
-```
\ No newline at end of file
diff --git a/spaces/OmarN121/NLP_for_Jobs/README.md b/spaces/OmarN121/NLP_for_Jobs/README.md
deleted file mode 100644
index 3fbf37b0d5551208efa0231dfb44d25cc9f4f9c5..0000000000000000000000000000000000000000
--- a/spaces/OmarN121/NLP_for_Jobs/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: NLP_for_Jobs
-emoji: 📊
-colorFrom: green
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Omnibus/MusicGen/tests/__init__.py b/spaces/Omnibus/MusicGen/tests/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/MusicGen/tests/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/api.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/api.py
deleted file mode 100644
index ad4272183f2a533dbb68f6e65cf42144f4b69fc4..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/api.py
+++ /dev/null
@@ -1,235 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import os
-import torch
-from caffe2.proto import caffe2_pb2
-from torch import nn
-
-from detectron2.config import CfgNode
-from detectron2.utils.file_io import PathManager
-
-from .caffe2_inference import ProtobufDetectionModel
-from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
-from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph
-
-__all__ = [
- "add_export_config",
- "Caffe2Model",
- "Caffe2Tracer",
-]
-
-
-def add_export_config(cfg):
- return cfg
-
-
-class Caffe2Tracer:
- """
- Make a detectron2 model traceable with Caffe2 operators.
- This class creates a traceable version of a detectron2 model which:
-
- 1. Rewrites parts of the model using ops in Caffe2. Note that some ops do
- not have a GPU implementation in Caffe2.
- 2. Removes post-processing and only produces raw layer outputs
-
- After making a traceable model, the class provide methods to export such a
- model to different deployment formats.
- The exported graph produced by this class takes two input tensors:
-
- 1. (1, C, H, W) float "data" which is an image (usually in [0, 255]).
- (H, W) often has to be padded to a multiple of 32 (depending on the model
- architecture).
- 2. 1x3 float "im_info", each row of which is (height, width, 1.0).
- Height and width are true image shapes before padding.
-
- The class currently only supports models using builtin meta architectures.
- Batch inference is not supported, and contributions are welcome.
- """
-
- def __init__(self, cfg: CfgNode, model: nn.Module, inputs):
- """
- Args:
- cfg (CfgNode): a detectron2 config used to construct caffe2-compatible model.
- model (nn.Module): An original pytorch model. Must be among a few official models
- in detectron2 that can be converted to become caffe2-compatible automatically.
- Weights have to be already loaded to this model.
- inputs: sample inputs that the given model takes for inference.
- Will be used to trace the model. For most models, random inputs with
- no detected objects will not work as they lead to wrong traces.
- """
- assert isinstance(cfg, CfgNode), cfg
- assert isinstance(model, torch.nn.Module), type(model)
-
- # TODO make it support custom models, by passing in c2 model directly
- C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[cfg.MODEL.META_ARCHITECTURE]
- self.traceable_model = C2MetaArch(cfg, copy.deepcopy(model))
- self.inputs = inputs
- self.traceable_inputs = self.traceable_model.get_caffe2_inputs(inputs)
-
- def export_caffe2(self):
- """
- Export the model to Caffe2's protobuf format.
- The returned object can be saved with its :meth:`.save_protobuf()` method.
- The result can be loaded and executed using Caffe2 runtime.
-
- Returns:
- :class:`Caffe2Model`
- """
- from .caffe2_export import export_caffe2_detection_model
-
- predict_net, init_net = export_caffe2_detection_model(
- self.traceable_model, self.traceable_inputs
- )
- return Caffe2Model(predict_net, init_net)
-
- def export_onnx(self):
- """
- Export the model to ONNX format.
- Note that the exported model contains custom ops only available in caffe2, therefore it
- cannot be directly executed by other runtime (such as onnxruntime or TensorRT).
- Post-processing or transformation passes may be applied on the model to accommodate
- different runtimes, but we currently do not provide support for them.
-
- Returns:
- onnx.ModelProto: an onnx model.
- """
- from .caffe2_export import export_onnx_model as export_onnx_model_impl
-
- return export_onnx_model_impl(self.traceable_model, (self.traceable_inputs,))
-
- def export_torchscript(self):
- """
- Export the model to a ``torch.jit.TracedModule`` by tracing.
- The returned object can be saved to a file by ``.save()``.
-
- Returns:
- torch.jit.TracedModule: a torch TracedModule
- """
- logger = logging.getLogger(__name__)
- logger.info("Tracing the model with torch.jit.trace ...")
- with torch.no_grad():
- return torch.jit.trace(self.traceable_model, (self.traceable_inputs,))
-
-
-class Caffe2Model(nn.Module):
- """
- A wrapper around the traced model in Caffe2's protobuf format.
- The exported graph has different inputs/outputs from the original Pytorch
- model, as explained in :class:`Caffe2Tracer`. This class wraps around the
- exported graph to simulate the same interface as the original Pytorch model.
- It also provides functions to save/load models in Caffe2's format.
-
- Examples:
- ::
- c2_model = Caffe2Tracer(cfg, torch_model, inputs).export_caffe2()
- inputs = [{"image": img_tensor_CHW}]
- outputs = c2_model(inputs)
- orig_outputs = torch_model(inputs)
- """
-
- def __init__(self, predict_net, init_net):
- super().__init__()
- self.eval() # always in eval mode
- self._predict_net = predict_net
- self._init_net = init_net
- self._predictor = None
-
- __init__.__HIDE_SPHINX_DOC__ = True
-
- @property
- def predict_net(self):
- """
- caffe2.core.Net: the underlying caffe2 predict net
- """
- return self._predict_net
-
- @property
- def init_net(self):
- """
- caffe2.core.Net: the underlying caffe2 init net
- """
- return self._init_net
-
- def save_protobuf(self, output_dir):
- """
- Save the model as caffe2's protobuf format.
- It saves the following files:
-
- * "model.pb": definition of the graph. Can be visualized with
- tools like `netron <https://github.com/lutzroeder/netron>`_.
- * "model_init.pb": model parameters
- * "model.pbtxt": human-readable definition of the graph. Not
- needed for deployment.
-
- Args:
- output_dir (str): the output directory to save protobuf files.
- """
- logger = logging.getLogger(__name__)
- logger.info("Saving model to {} ...".format(output_dir))
- if not PathManager.exists(output_dir):
- PathManager.mkdirs(output_dir)
-
- with PathManager.open(os.path.join(output_dir, "model.pb"), "wb") as f:
- f.write(self._predict_net.SerializeToString())
- with PathManager.open(os.path.join(output_dir, "model.pbtxt"), "w") as f:
- f.write(str(self._predict_net))
- with PathManager.open(os.path.join(output_dir, "model_init.pb"), "wb") as f:
- f.write(self._init_net.SerializeToString())
-
- def save_graph(self, output_file, inputs=None):
- """
- Save the graph as SVG format.
-
- Args:
- output_file (str): a SVG file
- inputs: optional inputs given to the model.
- If given, the inputs will be used to run the graph to record
- shape of every tensor. The shape information will be
- saved together with the graph.
- """
- from .caffe2_export import run_and_save_graph
-
- if inputs is None:
- save_graph(self._predict_net, output_file, op_only=False)
- else:
- size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0)
- device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii")
- inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device)
- inputs = [x.cpu().numpy() for x in inputs]
- run_and_save_graph(self._predict_net, self._init_net, inputs, output_file)
-
- @staticmethod
- def load_protobuf(dir):
- """
- Args:
- dir (str): a directory used to save Caffe2Model with
- :meth:`save_protobuf`.
- The files "model.pb" and "model_init.pb" are needed.
-
- Returns:
- Caffe2Model: the caffe2 model loaded from this directory.
- """
- predict_net = caffe2_pb2.NetDef()
- with PathManager.open(os.path.join(dir, "model.pb"), "rb") as f:
- predict_net.ParseFromString(f.read())
-
- init_net = caffe2_pb2.NetDef()
- with PathManager.open(os.path.join(dir, "model_init.pb"), "rb") as f:
- init_net.ParseFromString(f.read())
-
- return Caffe2Model(predict_net, init_net)
-
- def __call__(self, inputs):
- """
- An interface that wraps around a Caffe2 model and mimics detectron2's models'
- input/output format. See details about the format at :doc:`/tutorials/models`.
- This is used to compare the outputs of caffe2 model with its original torch model.
-
- Due to the extra conversion between Pytorch/Caffe2, this method is not meant for
- benchmarking. Because of the conversion, this method also depends
- on detectron2 in order to convert to detectron2's output format.
- """
- if self._predictor is None:
- self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net)
- return self._predictor(inputs)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/lazyconfigs.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/lazyconfigs.md
deleted file mode 100644
index ca9de3052a8065c1c4579499cb8ef7ed9fc2d660..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/lazyconfigs.md
+++ /dev/null
@@ -1,170 +0,0 @@
-# Lazy Configs
-
-The traditional yacs-based config system provides basic, standard functionalities.
-However, it does not offer enough flexibility for many new projects.
-We develop an alternative, non-intrusive config system that can be used with
-detectron2 or potentially any other complex project.
-
-## Python Syntax
-
-Our config objects are still dictionaries. Instead of using Yaml to define dictionaries,
-we create dictionaries in Python directly. This gives users the following capabilities that
-don't exist in Yaml:
-
-* Easily manipulate the dictionary (addition & deletion) using Python.
-* Write simple arithmetic or call simple functions.
-* Use more data types / objects.
-* Import / compose other config files, using the familiar Python import syntax.
-
-A Python config file can be loaded like this:
-```python
-# config.py:
-a = dict(x=1, y=2, z=dict(xx=1))
-b = dict(x=3, y=4)
-
-# my_code.py:
-from detectron2.config import LazyConfig
-cfg = LazyConfig.load("path/to/config.py") # an omegaconf dictionary
-assert cfg.a.z.xx == 1
-```
-
-After [LazyConfig.load](../modules/config.html#detectron2.config.LazyConfig.load), `cfg` will be a dictionary that contains all dictionaries
-defined in the global scope of the config file. Note that:
-* All dictionaries are turned to an [omegaconf](https://omegaconf.readthedocs.io/)
- config object during loading. This enables access to omegaconf features,
- such as its [access syntax](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#access-and-manipulation)
- and [interpolation](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#variable-interpolation).
-* Absolute imports in `config.py` work the same as in regular Python.
-* Relative imports can only import dictionaries from config files.
-  They are simply syntax sugar for [LazyConfig.load_rel](../modules/config.html#detectron2.config.LazyConfig.load_rel).
- They can load Python files at relative path without requiring `__init__.py`.
-
-[LazyConfig.save](../modules/config.html#detectron2.config.LazyConfig.save) can save a config object to yaml.
-Note that this is not always successful if non-serializable objects appear in the config file (e.g. lambdas).
-It is up to users whether to sacrifice the ability to save in exchange for flexibility.
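-
-For example, a loaded config can be round-tripped through yaml like this (a minimal sketch; the file names are
-placeholders, and the dump may fail if the config contains non-serializable objects such as lambdas):
-```python
-from detectron2.config import LazyConfig
-
-cfg = LazyConfig.load("path/to/config.py")
-LazyConfig.save(cfg, "dumped_config.yaml")    # serialize to yaml when possible
-cfg2 = LazyConfig.load("dumped_config.yaml")  # saved yaml files can be loaded back the same way
-```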
-
-## Recursive Instantiation
-
-The LazyConfig system heavily uses recursive instantiation, which is a pattern that
-uses a dictionary to describe a
-call to a function/class. The dictionary consists of:
-
-1. A "\_target\_" key which contains path to the callable, such as "module.submodule.class_name".
-2. Other keys that represent arguments to pass to the callable. Arguments themselves can be defined
- using recursive instantiation.
-
-We provide a helper function [LazyCall](../modules/config.html#detectron2.config.LazyCall) that helps create such dictionaries.
-The following code using `LazyCall`
-```python
-from detectron2.config import LazyCall as L
-from my_app import Trainer, Optimizer
-cfg = L(Trainer)(
- optimizer=L(Optimizer)(
- lr=0.01,
- algo="SGD"
- )
-)
-```
-creates a dictionary like this:
-```
-cfg = {
- "_target_": "my_app.Trainer",
- "optimizer": {
- "_target_": "my_app.Optimizer",
- "lr": 0.01, "algo": "SGD"
- }
-}
-```
-
-By representing objects using such dictionaries, a general
-[instantiate](../modules/config.html#detectron2.config.instantiate)
-function can turn them into actual objects, i.e.:
-```python
-from detectron2.config import instantiate
-trainer = instantiate(cfg)
-# equivalent to:
-# from my_app import Trainer, Optimizer
-# trainer = Trainer(optimizer=Optimizer(lr=0.01, algo="SGD"))
-```
-
-This pattern is powerful enough to describe very complex objects, e.g.:
-
-A Full Mask R-CNN described in recursive instantiation:
-
-```eval_rst
-.. literalinclude:: ../../configs/common/models/mask_rcnn_fpn.py
- :language: python
- :linenos:
-```
-
-There are also objects or logic that cannot be described simply by a dictionary,
-such as reused objects or method calls. They may require some refactoring
-to work with recursive instantiation.
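-
-A minimal sketch of the "reused object" limitation (the `head_a` / `head_b` names are made up for illustration):
-each call to `instantiate` builds a fresh object, so an object that must truly be shared is usually created in the
-training script rather than described in the config.
-```python
-from detectron2.config import LazyCall as L, instantiate
-import torch.nn as nn
-
-shared = L(nn.Linear)(in_features=4, out_features=4)
-cfg = dict(head_a=shared, head_b=shared)       # intended to be one shared module
-objs = {k: instantiate(v) for k, v in cfg.items()}
-assert objs["head_a"] is not objs["head_b"]    # two separate nn.Linear modules are created
-```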
-
-## Using Model Zoo LazyConfigs
-
-We provide some configs in the model zoo using the LazyConfig system, for example:
-
-* [common baselines](../../configs/common/).
-* [new Mask R-CNN baselines](../../configs/new_baselines/)
-
-After installing detectron2, they can be loaded by the model zoo API
-[model_zoo.get_config](../modules/model_zoo.html#detectron2.model_zoo.get_config).
-
-Using these as references, you're free to define custom config structure / fields for your own
-project, as long as your training script can understand them.
-Despite this, our model zoo configs still follow some simple conventions for consistency, e.g.
-`cfg.model` defines a model object, `cfg.dataloader.{train,test}` defines dataloader objects,
-and `cfg.train` contains training options in key-value form.
-In addition to `print()`, a better way to view the structure of a config is like this:
-```
-from detectron2.model_zoo import get_config
-from detectron2.config import LazyConfig
-print(LazyConfig.to_py(get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py")))
-```
-From the output it's easier to find relevant options to change, e.g.
-`dataloader.train.total_batch_size` for the batch size, or `optimizer.lr` for base learning rate.
-
-We provide a reference training script
-[tools/lazyconfig_train_net.py](../../tools/lazyconfig_train_net.py),
-that can train/eval our model zoo configs.
-It also shows how to support command line value overrides.
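-
-Programmatically, the same kind of override can be applied with a sketch like this (the dotted keys are the
-examples mentioned above and are assumed to exist in the chosen config):
-```python
-from detectron2 import model_zoo
-from detectron2.config import LazyConfig
-
-cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py")
-cfg = LazyConfig.apply_overrides(cfg, ["optimizer.lr=0.01", "dataloader.train.total_batch_size=8"])
-```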
-
-To demonstrate the power and flexibility of the new system, we show that
-[a simple config file](../../configs/Misc/torchvision_imagenet_R_50.py)
-can let detectron2 train an ImageNet classification model from torchvision, even though
-detectron2 contains no features about ImageNet classification.
-This can serve as a reference for using detectron2 in other deep learning tasks.
-
-## Summary
-
-By using recursive instantiation to create objects,
-we avoid passing a giant config to many places, because `cfg` is only passed to `instantiate`.
-This has the following benefits:
-
-* It's __non-intrusive__: objects to be constructed are config-agnostic, regular Python
- functions/classes.
- They can even live in other libraries. For example,
- `{"_target_": "torch.nn.Conv2d", "in_channels": 10, "out_channels": 10, "kernel_size": 1}`
- defines a conv layer.
-* __Clarity__ of what function/classes will be called, and what arguments they use.
-* `cfg` doesn't need pre-defined keys and structures. It's valid as long as it translates to valid
- code. This gives a lot more __flexibility__.
-* You can still pass huge dictionaries as arguments, just like the old way.
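-
-Concretely, the conv-layer dictionary from the first bullet can be turned into an ordinary module like this
-(a minimal sketch, assuming detectron2 and torch are installed):
-```python
-from omegaconf import DictConfig
-from detectron2.config import instantiate
-
-conv_cfg = DictConfig(
-    {"_target_": "torch.nn.Conv2d", "in_channels": 10, "out_channels": 10, "kernel_size": 1}
-)
-conv = instantiate(conv_cfg)  # an ordinary torch.nn.Conv2d, unaware of the config system
-```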
-
-Recursive instantiation and Python syntax are orthogonal: you can use one without the other.
-But by putting them together, the config file looks a lot like the code that will be executed.
-
-However, the config file just defines dictionaries, which can be easily manipulated further
-by composition or overrides.
-The corresponding code will only be executed
-later when `instantiate` is called. In some way,
-in config files we're writing "editable code" that will be "lazily executed" later when needed.
-That's why we call this system "LazyConfig".
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_solver.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_solver.py
deleted file mode 100644
index 6b3ae84c00b789df071ab5d12bae42d991df1d0b..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_solver.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import unittest
-
-from detectron2.solver.build import _expand_param_groups, reduce_param_groups
-
-
-class TestOptimizer(unittest.TestCase):
- def testExpandParamsGroups(self):
- params = [
- {
- "params": ["p1", "p2", "p3", "p4"],
- "lr": 1.0,
- "weight_decay": 3.0,
- },
- {
- "params": ["p2", "p3", "p5"],
- "lr": 2.0,
- "momentum": 2.0,
- },
- {
- "params": ["p1"],
- "weight_decay": 4.0,
- },
- ]
- out = _expand_param_groups(params)
- gt = [
- dict(params=["p1"], lr=1.0, weight_decay=4.0), # noqa
- dict(params=["p2"], lr=2.0, weight_decay=3.0, momentum=2.0), # noqa
- dict(params=["p3"], lr=2.0, weight_decay=3.0, momentum=2.0), # noqa
- dict(params=["p4"], lr=1.0, weight_decay=3.0), # noqa
- dict(params=["p5"], lr=2.0, momentum=2.0), # noqa
- ]
- self.assertEqual(out, gt)
-
- def testReduceParamGroups(self):
- params = [
- dict(params=["p1"], lr=1.0, weight_decay=4.0), # noqa
- dict(params=["p2", "p6"], lr=2.0, weight_decay=3.0, momentum=2.0), # noqa
- dict(params=["p3"], lr=2.0, weight_decay=3.0, momentum=2.0), # noqa
- dict(params=["p4"], lr=1.0, weight_decay=3.0), # noqa
- dict(params=["p5"], lr=2.0, momentum=2.0), # noqa
- ]
- gt_groups = [
- {
- "lr": 1.0,
- "weight_decay": 4.0,
- "params": ["p1"],
- },
- {
- "lr": 2.0,
- "weight_decay": 3.0,
- "momentum": 2.0,
- "params": ["p2", "p6", "p3"],
- },
- {
- "lr": 1.0,
- "weight_decay": 3.0,
- "params": ["p4"],
- },
- {
- "lr": 2.0,
- "momentum": 2.0,
- "params": ["p5"],
- },
- ]
- out = reduce_param_groups(params)
- self.assertEqual(out, gt_groups)
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/three_nn.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/three_nn.py
deleted file mode 100644
index 2b01047a129989cd5545a0a86f23a487f4a13ce1..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/three_nn.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from typing import Tuple
-
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['three_nn_forward'])
-
-
-class ThreeNN(Function):
- """Find the top-3 nearest neighbors of the target set from the source set.
-
- Please refer to `Paper of PointNet++ <https://arxiv.org/abs/1706.02413>`_
- for more details.
- """
-
- @staticmethod
- def forward(ctx, target: torch.Tensor,
- source: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- """
- Args:
- target (Tensor): shape (B, N, 3), points set that needs to
- find the nearest neighbors.
- source (Tensor): shape (B, M, 3), points set that is used
- to find the nearest neighbors of points in target set.
-
- Returns:
- Tuple[Tensor, Tensor]: the first tensor, of shape (B, N, 3), contains the L2
- distances from each target point to its three nearest source points; the
- second, of shape (B, N, 3), contains the corresponding indices.
- """
- target = target.contiguous()
- source = source.contiguous()
-
- B, N, _ = target.size()
- m = source.size(1)
- dist2 = torch.cuda.FloatTensor(B, N, 3)
- idx = torch.cuda.IntTensor(B, N, 3)
-
- ext_module.three_nn_forward(target, source, dist2, idx, b=B, n=N, m=m)
- if torch.__version__ != 'parrots':
- ctx.mark_non_differentiable(idx)
-
- return torch.sqrt(dist2), idx
-
- @staticmethod
- def backward(ctx, a=None, b=None):
- return None, None
-
-
-three_nn = ThreeNN.apply
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.py
deleted file mode 100644
index e9cb52219689592e2745600abb19fad02740a139..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/upfirdn2d.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import os
-
-import torch
-from torch.autograd import Function
-from torch.nn import functional as F  # used by upfirdn2d_native below
-from torch.utils.cpp_extension import load
-
-module_path = os.path.dirname(__file__)
-upfirdn2d_op = load(
- 'upfirdn2d',
- sources=[
- os.path.join(module_path, 'upfirdn2d.cpp'),
- os.path.join(module_path, 'upfirdn2d_kernel.cu'),
- ],
-)
-
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- out = UpFirDn2d.apply(
- input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
- )
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
-
- return out[:, ::down_y, ::down_x, :]
\ No newline at end of file
diff --git a/spaces/PacBio/NewBing_BioTree/Dockerfile b/spaces/PacBio/NewBing_BioTree/Dockerfile
deleted file mode 100644
index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000
--- a/spaces/PacBio/NewBing_BioTree/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Add git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into the /workspace/app directory
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the project directory cloned above
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" reduces the size of the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable; the value here is a random string
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/box_iou_rotated.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/box_iou_rotated.py
deleted file mode 100644
index 2d78015e9c2a9e7a52859b4e18f84a9aa63481a0..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/box_iou_rotated.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['box_iou_rotated'])
-
-
-def box_iou_rotated(bboxes1, bboxes2, mode='iou', aligned=False):
- """Return intersection-over-union (Jaccard index) of boxes.
-
- Both sets of boxes are expected to be in
- (x_center, y_center, width, height, angle) format.
-
- If ``aligned`` is ``False``, then calculate the ious between each bbox
- of bboxes1 and bboxes2, otherwise the ious between each aligned pair of
- bboxes1 and bboxes2.
-
- Arguments:
- bboxes1 (Tensor): rotated bboxes 1. \
- It has shape (N, 5), indicating (x, y, w, h, theta) for each row.
- Note that theta is in radian.
- bboxes2 (Tensor): rotated bboxes 2. \
- It has shape (M, 5), indicating (x, y, w, h, theta) for each row.
- Note that theta is in radian.
- mode (str): "iou" (intersection over union) or "iof" (intersection over
- foreground).
-
- Returns:
- ious(Tensor): shape (N, M) if aligned == False else shape (N,)
- """
- assert mode in ['iou', 'iof']
- mode_dict = {'iou': 0, 'iof': 1}
- mode_flag = mode_dict[mode]
- rows = bboxes1.size(0)
- cols = bboxes2.size(0)
- if aligned:
- ious = bboxes1.new_zeros(rows)
- else:
- ious = bboxes1.new_zeros((rows * cols))
- bboxes1 = bboxes1.contiguous()
- bboxes2 = bboxes2.contiguous()
- ext_module.box_iou_rotated(
- bboxes1, bboxes2, ious, mode_flag=mode_flag, aligned=aligned)
- if not aligned:
- ious = ious.view(rows, cols)
- return ious
diff --git a/spaces/Plachta/VALL-E-X/app.py b/spaces/Plachta/VALL-E-X/app.py
deleted file mode 100644
index 67e1913875ccf96c4e329411fab8614ab67d58bd..0000000000000000000000000000000000000000
--- a/spaces/Plachta/VALL-E-X/app.py
+++ /dev/null
@@ -1,582 +0,0 @@
-import logging
-import os
-import pathlib
-import time
-import tempfile
-import platform
-import gc
-if platform.system().lower() == 'windows':
- temp = pathlib.PosixPath
- pathlib.PosixPath = pathlib.WindowsPath
-elif platform.system().lower() == 'linux':
- temp = pathlib.WindowsPath
- pathlib.WindowsPath = pathlib.PosixPath
-os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
-
-import langid
-langid.set_languages(['en', 'zh', 'ja'])
-
-import torch
-import torchaudio
-
-import numpy as np
-
-from data.tokenizer import (
- AudioTokenizer,
- tokenize_audio,
-)
-from data.collation import get_text_token_collater
-from models.vallex import VALLE
-from utils.g2p import PhonemeBpeTokenizer
-from descriptions import *
-from macros import *
-from examples import *
-
-import gradio as gr
-from vocos import Vocos
-from transformers import WhisperProcessor, WhisperForConditionalGeneration
-
-
-
-torch._C._jit_set_profiling_executor(False)
-torch._C._jit_set_profiling_mode(False)
-torch._C._set_graph_executor_optimize(False)
-
-text_tokenizer = PhonemeBpeTokenizer(tokenizer_path="./utils/g2p/bpe_69.json")
-text_collater = get_text_token_collater()
-
-device = torch.device("cpu")
-if torch.cuda.is_available():
- device = torch.device("cuda", 0)
-
-# VALL-E-X model
-model = VALLE(
- N_DIM,
- NUM_HEAD,
- NUM_LAYERS,
- norm_first=True,
- add_prenet=False,
- prefix_mode=PREFIX_MODE,
- share_embedding=True,
- nar_scale_factor=1.0,
- prepend_bos=True,
- num_quantizers=NUM_QUANTIZERS,
- ).to(device)
-checkpoint = torch.load("./epoch-10.pt", map_location='cpu')
-missing_keys, unexpected_keys = model.load_state_dict(
- checkpoint["model"], strict=True
-)
-del checkpoint
-assert not missing_keys
-model.eval()
-
-# Encodec model
-audio_tokenizer = AudioTokenizer(device)
-
-# Vocos decoder
-vocos = Vocos.from_pretrained('charactr/vocos-encodec-24khz').to(device)
-
-# ASR
-whisper_processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
-whisper = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to(device)
-whisper.config.forced_decoder_ids = None
-
-# Voice Presets
-preset_list = os.walk("./presets/").__next__()[2]
-preset_list = [preset[:-4] for preset in preset_list if preset.endswith(".npz")]
-
-def clear_prompts():
- try:
- path = tempfile.gettempdir()
- for eachfile in os.listdir(path):
- filename = os.path.join(path, eachfile)
- if os.path.isfile(filename) and filename.endswith(".npz"):
- lastmodifytime = os.stat(filename).st_mtime
- endfiletime = time.time() - 60
- if endfiletime > lastmodifytime:
- os.remove(filename)
- del path, filename, lastmodifytime, endfiletime
- gc.collect()
- except:
- return
-
-def transcribe_one(wav, sr):
- if sr != 16000:
- wav4trans = torchaudio.transforms.Resample(sr, 16000)(wav)
- else:
- wav4trans = wav
-
- input_features = whisper_processor(wav4trans.squeeze(0), sampling_rate=16000, return_tensors="pt").input_features
-
- # generate token ids
- predicted_ids = whisper.generate(input_features.to(device))
- lang = whisper_processor.batch_decode(predicted_ids[:, 1])[0].strip("<|>")
- # decode token ids to text
- text_pr = whisper_processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
-
- # print the recognized text
- print(text_pr)
-
- if text_pr.strip(" ")[-1] not in "?!.,。,?!。、":
- text_pr += "."
-
- # delete all variables
- del wav4trans, input_features, predicted_ids
- gc.collect()
- return lang, text_pr
-
-def make_npz_prompt(name, uploaded_audio, recorded_audio, transcript_content):
- clear_prompts()
- audio_prompt = uploaded_audio if uploaded_audio is not None else recorded_audio
- sr, wav_pr = audio_prompt
- if len(wav_pr) / sr > 15:
- return "Rejected, Audio too long (should be less than 15 seconds)", None
- if not isinstance(wav_pr, torch.FloatTensor):
- wav_pr = torch.FloatTensor(wav_pr)
- if wav_pr.abs().max() > 1:
- wav_pr /= wav_pr.abs().max()
- if wav_pr.size(-1) == 2:
- wav_pr = wav_pr[:, 0]
- if wav_pr.ndim == 1:
- wav_pr = wav_pr.unsqueeze(0)
- assert wav_pr.ndim and wav_pr.size(0) == 1
-
- if transcript_content == "":
- lang_pr, text_pr = transcribe_one(wav_pr, sr)
- lang_token = lang2token[lang_pr]
- text_pr = lang_token + text_pr + lang_token
- else:
- lang_pr = langid.classify(str(transcript_content))[0]
- lang_token = lang2token[lang_pr]
- transcript_content = transcript_content.replace("\n", "")
- text_pr = f"{lang_token}{str(transcript_content)}{lang_token}"
- # tokenize audio
- encoded_frames = tokenize_audio(audio_tokenizer, (wav_pr, sr))
- audio_tokens = encoded_frames[0][0].transpose(2, 1).cpu().numpy()
-
- # tokenize text
- phonemes, _ = text_tokenizer.tokenize(text=f"{text_pr}".strip())
- text_tokens, enroll_x_lens = text_collater(
- [
- phonemes
- ]
- )
-
- message = f"Detected language: {lang_pr}\n Detected text {text_pr}\n"
- if lang_pr not in ['ja', 'zh', 'en']:
- return f"Prompt can only made with one of model-supported languages, got {lang_pr} instead", None
-
- # save as npz file
- np.savez(os.path.join(tempfile.gettempdir(), f"{name}.npz"),
- audio_tokens=audio_tokens, text_tokens=text_tokens, lang_code=lang2code[lang_pr])
-
- # delete all variables
- del audio_tokens, text_tokens, phonemes, lang_pr, text_pr, wav_pr, sr, uploaded_audio, recorded_audio
- gc.collect()
- return message, os.path.join(tempfile.gettempdir(), f"{name}.npz")
-
-
-@torch.no_grad()
-def infer_from_audio(text, language, accent, audio_prompt, record_audio_prompt, transcript_content):
- if len(text) > 150:
- return "Rejected, Text too long (should be less than 150 characters)", None
- if audio_prompt is None and record_audio_prompt is None:
- audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device)
- text_prompts = torch.zeros([1, 0]).type(torch.int32)
- lang_pr = 'en'
- text_pr = ""
- enroll_x_lens = 0
- wav_pr, sr = None, None
- else:
- audio_prompt = audio_prompt if audio_prompt is not None else record_audio_prompt
- sr, wav_pr = audio_prompt
- if len(wav_pr) / sr > 15:
- return "Rejected, Audio too long (should be less than 15 seconds)", None
- if not isinstance(wav_pr, torch.FloatTensor):
- wav_pr = torch.FloatTensor(wav_pr)
- if wav_pr.abs().max() > 1:
- wav_pr /= wav_pr.abs().max()
- if wav_pr.size(-1) == 2:
- wav_pr = wav_pr[:, 0]
- if wav_pr.ndim == 1:
- wav_pr = wav_pr.unsqueeze(0)
- assert wav_pr.ndim and wav_pr.size(0) == 1
-
- if transcript_content == "":
- lang_pr, text_pr = transcribe_one(wav_pr, sr)
- lang_token = lang2token[lang_pr]
- text_pr = lang_token + text_pr + lang_token
- else:
- lang_pr = langid.classify(str(transcript_content))[0]
- text_pr = transcript_content.replace("\n", "")
- if lang_pr not in ['ja', 'zh', 'en']:
- return f"Reference audio must be a speech of one of model-supported languages, got {lang_pr} instead", None
- lang_token = lang2token[lang_pr]
- text_pr = lang_token + text_pr + lang_token
-
- # tokenize audio
- encoded_frames = tokenize_audio(audio_tokenizer, (wav_pr, sr))
- audio_prompts = encoded_frames[0][0].transpose(2, 1).to(device)
-
- enroll_x_lens = None
- if text_pr:
- text_prompts, _ = text_tokenizer.tokenize(text=f"{text_pr}".strip())
- text_prompts, enroll_x_lens = text_collater(
- [
- text_prompts
- ]
- )
-
- if language == 'auto-detect':
- lang_token = lang2token[langid.classify(text)[0]]
- else:
- lang_token = langdropdown2token[language]
- lang = token2lang[lang_token]
- text = text.replace("\n", "")
- text = lang_token + text + lang_token
-
- # tokenize text
- logging.info(f"synthesize text: {text}")
- phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
- text_tokens, text_tokens_lens = text_collater(
- [
- phone_tokens
- ]
- )
-
-
- text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
- text_tokens_lens += enroll_x_lens
- lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
- encoded_frames = model.inference(
- text_tokens.to(device),
- text_tokens_lens.to(device),
- audio_prompts,
- enroll_x_lens=enroll_x_lens,
- top_k=-100,
- temperature=1,
- prompt_language=lang_pr,
- text_language=langs if accent == "no-accent" else lang,
- )
- # Decode with Vocos
- frames = encoded_frames.permute(2,0,1)
- features = vocos.codes_to_features(frames)
- samples = vocos.decode(features, bandwidth_id=torch.tensor([2], device=device))
-
- message = f"text prompt: {text_pr}\nsythesized text: {text}"
- # delete all variables
- del audio_prompts, text_tokens, text_prompts, phone_tokens, encoded_frames, wav_pr, sr, audio_prompt, record_audio_prompt, transcript_content
- gc.collect()
- return message, (24000, samples.squeeze(0).cpu().numpy())
-
-@torch.no_grad()
-def infer_from_prompt(text, language, accent, preset_prompt, prompt_file):
- if len(text) > 150:
- return "Rejected, Text too long (should be less than 150 characters)", None
- clear_prompts()
- # text to synthesize
- if language == 'auto-detect':
- lang_token = lang2token[langid.classify(text)[0]]
- else:
- lang_token = langdropdown2token[language]
- lang = token2lang[lang_token]
- text = text.replace("\n", "")
- text = lang_token + text + lang_token
-
- # load prompt
- if prompt_file is not None:
- prompt_data = np.load(prompt_file.name)
- else:
- prompt_data = np.load(os.path.join("./presets/", f"{preset_prompt}.npz"))
- audio_prompts = prompt_data['audio_tokens']
- text_prompts = prompt_data['text_tokens']
- lang_pr = prompt_data['lang_code']
- lang_pr = code2lang[int(lang_pr)]
-
- # numpy to tensor
- audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
- text_prompts = torch.tensor(text_prompts).type(torch.int32)
-
- enroll_x_lens = text_prompts.shape[-1]
- logging.info(f"synthesize text: {text}")
- phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
- text_tokens, text_tokens_lens = text_collater(
- [
- phone_tokens
- ]
- )
- text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
- text_tokens_lens += enroll_x_lens
- # accent control
- lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
- encoded_frames = model.inference(
- text_tokens.to(device),
- text_tokens_lens.to(device),
- audio_prompts,
- enroll_x_lens=enroll_x_lens,
- top_k=-100,
- temperature=1,
- prompt_language=lang_pr,
- text_language=langs if accent == "no-accent" else lang,
- )
- # Decode with Vocos
- frames = encoded_frames.permute(2,0,1)
- features = vocos.codes_to_features(frames)
- samples = vocos.decode(features, bandwidth_id=torch.tensor([2], device=device))
-
- message = f"sythesized text: {text}"
-
- # delete all variables
- del audio_prompts, text_tokens, text_prompts, phone_tokens, encoded_frames, prompt_file, preset_prompt
- gc.collect()
- return message, (24000, samples.squeeze(0).cpu().numpy())
-
-
-from utils.sentence_cutter import split_text_into_sentences
-@torch.no_grad()
-def infer_long_text(text, preset_prompt, prompt=None, language='auto', accent='no-accent'):
- """
- For long audio generation, two modes are available.
- fixed-prompt: This mode will keep using the same prompt the user has provided, and generate audio sentence by sentence.
- sliding-window: This mode will use the last sentence as the prompt for the next sentence, but has some concerns about maintaining speaker consistency.
- """
- if len(text) > 1000:
- return "Rejected, Text too long (should be less than 1000 characters)", None
- mode = 'fixed-prompt'
- if (prompt is None or prompt == "") and preset_prompt == "":
- mode = 'sliding-window' # If no prompt is given, use sliding-window mode
- sentences = split_text_into_sentences(text)
- # detect language
- if language == "auto-detect":
- language = langid.classify(text)[0]
- else:
- language = token2lang[langdropdown2token[language]]
-
- # if initial prompt is given, encode it
- if prompt is not None and prompt != "":
- # load prompt
- prompt_data = np.load(prompt.name)
- audio_prompts = prompt_data['audio_tokens']
- text_prompts = prompt_data['text_tokens']
- lang_pr = prompt_data['lang_code']
- lang_pr = code2lang[int(lang_pr)]
-
- # numpy to tensor
- audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
- text_prompts = torch.tensor(text_prompts).type(torch.int32)
- elif preset_prompt is not None and preset_prompt != "":
- prompt_data = np.load(os.path.join("./presets/", f"{preset_prompt}.npz"))
- audio_prompts = prompt_data['audio_tokens']
- text_prompts = prompt_data['text_tokens']
- lang_pr = prompt_data['lang_code']
- lang_pr = code2lang[int(lang_pr)]
-
- # numpy to tensor
- audio_prompts = torch.tensor(audio_prompts).type(torch.int32).to(device)
- text_prompts = torch.tensor(text_prompts).type(torch.int32)
- else:
- audio_prompts = torch.zeros([1, 0, NUM_QUANTIZERS]).type(torch.int32).to(device)
- text_prompts = torch.zeros([1, 0]).type(torch.int32)
- lang_pr = language if language != 'mix' else 'en'
- if mode == 'fixed-prompt':
- complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device)
- for text in sentences:
- text = text.replace("\n", "").strip(" ")
- if text == "":
- continue
- lang_token = lang2token[language]
- lang = token2lang[lang_token]
- text = lang_token + text + lang_token
-
- enroll_x_lens = text_prompts.shape[-1]
- logging.info(f"synthesize text: {text}")
- phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
- text_tokens, text_tokens_lens = text_collater(
- [
- phone_tokens
- ]
- )
- text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
- text_tokens_lens += enroll_x_lens
- # accent control
- lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
- encoded_frames = model.inference(
- text_tokens.to(device),
- text_tokens_lens.to(device),
- audio_prompts,
- enroll_x_lens=enroll_x_lens,
- top_k=-100,
- temperature=1,
- prompt_language=lang_pr,
- text_language=langs if accent == "no-accent" else lang,
- )
- complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1)
- # Decode with Vocos
- frames = complete_tokens.permute(1, 0, 2)
- features = vocos.codes_to_features(frames)
- samples = vocos.decode(features, bandwidth_id=torch.tensor([2], device=device))
-
- message = f"Cut into {len(sentences)} sentences"
- return message, (24000, samples.squeeze(0).cpu().numpy())
- elif mode == "sliding-window":
- complete_tokens = torch.zeros([1, NUM_QUANTIZERS, 0]).type(torch.LongTensor).to(device)
- original_audio_prompts = audio_prompts
- original_text_prompts = text_prompts
- for text in sentences:
- text = text.replace("\n", "").strip(" ")
- if text == "":
- continue
- lang_token = lang2token[language]
- lang = token2lang[lang_token]
- text = lang_token + text + lang_token
-
- enroll_x_lens = text_prompts.shape[-1]
- logging.info(f"synthesize text: {text}")
- phone_tokens, langs = text_tokenizer.tokenize(text=f"_{text}".strip())
- text_tokens, text_tokens_lens = text_collater(
- [
- phone_tokens
- ]
- )
- text_tokens = torch.cat([text_prompts, text_tokens], dim=-1)
- text_tokens_lens += enroll_x_lens
- # accent control
- lang = lang if accent == "no-accent" else token2lang[langdropdown2token[accent]]
- encoded_frames = model.inference(
- text_tokens.to(device),
- text_tokens_lens.to(device),
- audio_prompts,
- enroll_x_lens=enroll_x_lens,
- top_k=-100,
- temperature=1,
- prompt_language=lang_pr,
- text_language=langs if accent == "no-accent" else lang,
- )
- complete_tokens = torch.cat([complete_tokens, encoded_frames.transpose(2, 1)], dim=-1)
- if torch.rand(1) < 1.0:
- audio_prompts = encoded_frames[:, :, -NUM_QUANTIZERS:]
- text_prompts = text_tokens[:, enroll_x_lens:]
- else:
- audio_prompts = original_audio_prompts
- text_prompts = original_text_prompts
- # Decode with Vocos
- frames = complete_tokens.permute(1, 0, 2)
- features = vocos.codes_to_features(frames)
- samples = vocos.decode(features, bandwidth_id=torch.tensor([2], device=device))
-
- message = f"Cut into {len(sentences)} sentences"
-
- return message, (24000, samples.squeeze(0).cpu().numpy())
- else:
- raise ValueError(f"No such mode {mode}")
-
-app = gr.Blocks()
-with app:
- gr.Markdown(top_md)
- with gr.Tab("Infer from audio"):
- gr.Markdown(infer_from_audio_md)
- with gr.Row():
- with gr.Column():
-
- textbox = gr.TextArea(label="Text",
- placeholder="Type your sentence here",
- value="Welcome back, Master. What can I do for you today?", elem_id=f"tts-input")
- language_dropdown = gr.Dropdown(choices=['auto-detect', 'English', '中文', '日本語'], value='auto-detect', label='language')
- accent_dropdown = gr.Dropdown(choices=['no-accent', 'English', '中文', '日本語'], value='no-accent', label='accent')
- textbox_transcript = gr.TextArea(label="Transcript",
- placeholder="Write transcript here. (leave empty to use whisper)",
- value="", elem_id=f"prompt-name")
- upload_audio_prompt = gr.Audio(label='uploaded audio prompt', source='upload', interactive=True)
- record_audio_prompt = gr.Audio(label='recorded audio prompt', source='microphone', interactive=True)
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio")
- btn = gr.Button("Generate!")
- btn.click(infer_from_audio,
- inputs=[textbox, language_dropdown, accent_dropdown, upload_audio_prompt, record_audio_prompt, textbox_transcript],
- outputs=[text_output, audio_output])
- textbox_mp = gr.TextArea(label="Prompt name",
- placeholder="Name your prompt here",
- value="prompt_1", elem_id=f"prompt-name")
- btn_mp = gr.Button("Make prompt!")
- prompt_output = gr.File(interactive=False)
- btn_mp.click(make_npz_prompt,
- inputs=[textbox_mp, upload_audio_prompt, record_audio_prompt, textbox_transcript],
- outputs=[text_output, prompt_output])
- gr.Examples(examples=infer_from_audio_examples,
- inputs=[textbox, language_dropdown, accent_dropdown, upload_audio_prompt, record_audio_prompt, textbox_transcript],
- outputs=[text_output, audio_output],
- fn=infer_from_audio,
- cache_examples=False,)
- with gr.Tab("Make prompt"):
- gr.Markdown(make_prompt_md)
- with gr.Row():
- with gr.Column():
- textbox2 = gr.TextArea(label="Prompt name",
- placeholder="Name your prompt here",
- value="prompt_1", elem_id=f"prompt-name")
- # Place for choosing the language and entering the transcript
- textbox_transcript2 = gr.TextArea(label="Transcript",
- placeholder="Write transcript here. (leave empty to use whisper)",
- value="", elem_id=f"prompt-name")
- upload_audio_prompt_2 = gr.Audio(label='uploaded audio prompt', source='upload', interactive=True)
- record_audio_prompt_2 = gr.Audio(label='recorded audio prompt', source='microphone', interactive=True)
- with gr.Column():
- text_output_2 = gr.Textbox(label="Message")
- prompt_output_2 = gr.File(interactive=False)
- btn_2 = gr.Button("Make!")
- btn_2.click(make_npz_prompt,
- inputs=[textbox2, upload_audio_prompt_2, record_audio_prompt_2, textbox_transcript2],
- outputs=[text_output_2, prompt_output_2])
- gr.Examples(examples=make_npz_prompt_examples,
- inputs=[textbox2, upload_audio_prompt_2, record_audio_prompt_2, textbox_transcript2],
- outputs=[text_output_2, prompt_output_2],
- fn=make_npz_prompt,
- cache_examples=False,)
- with gr.Tab("Infer from prompt"):
- gr.Markdown(infer_from_prompt_md)
- with gr.Row():
- with gr.Column():
- textbox_3 = gr.TextArea(label="Text",
- placeholder="Type your sentence here",
- value="Welcome back, Master. What can I do for you today?", elem_id=f"tts-input")
- language_dropdown_3 = gr.Dropdown(choices=['auto-detect', 'English', '中文', '日本語', 'Mix'], value='auto-detect',
- label='language')
- accent_dropdown_3 = gr.Dropdown(choices=['no-accent', 'English', '中文', '日本語'], value='no-accent',
- label='accent')
- preset_dropdown_3 = gr.Dropdown(choices=preset_list, value=None, label='Voice preset')
- prompt_file = gr.File(file_count='single', file_types=['.npz'], interactive=True)
- with gr.Column():
- text_output_3 = gr.Textbox(label="Message")
- audio_output_3 = gr.Audio(label="Output Audio", elem_id="tts-audio")
- btn_3 = gr.Button("Generate!")
- btn_3.click(infer_from_prompt,
- inputs=[textbox_3, language_dropdown_3, accent_dropdown_3, preset_dropdown_3, prompt_file],
- outputs=[text_output_3, audio_output_3])
- gr.Examples(examples=infer_from_prompt_examples,
- inputs=[textbox_3, language_dropdown_3, accent_dropdown_3, preset_dropdown_3, prompt_file],
- outputs=[text_output_3, audio_output_3],
- fn=infer_from_prompt,
- cache_examples=False,)
- with gr.Tab("Infer long text"):
- gr.Markdown(long_text_md)
- with gr.Row():
- with gr.Column():
- textbox_4 = gr.TextArea(label="Text",
- placeholder="Type your sentence here",
- value=long_text_example, elem_id=f"tts-input")
- language_dropdown_4 = gr.Dropdown(choices=['auto-detect', 'English', '中文', '日本語'], value='auto-detect',
- label='language')
- accent_dropdown_4 = gr.Dropdown(choices=['no-accent', 'English', '中文', '日本語'], value='no-accent',
- label='accent')
- preset_dropdown_4 = gr.Dropdown(choices=preset_list, value=None, label='Voice preset')
- prompt_file_4 = gr.File(file_count='single', file_types=['.npz'], interactive=True)
- with gr.Column():
- text_output_4 = gr.TextArea(label="Message")
- audio_output_4 = gr.Audio(label="Output Audio", elem_id="tts-audio")
- btn_4 = gr.Button("Generate!")
- btn_4.click(infer_long_text,
- inputs=[textbox_4, preset_dropdown_4, prompt_file_4, language_dropdown_4, accent_dropdown_4],
- outputs=[text_output_4, audio_output_4])
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/Pluviophile/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/Pluviophile/QQsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/Pluviophile/QQsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/core_vq.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/core_vq.py
deleted file mode 100644
index da02a6ce3a7de15353f0fba9e826052beb67c436..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/quantization/core_vq.py
+++ /dev/null
@@ -1,400 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from einops import rearrange, repeat
-import flashy
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-
-
-def exists(val: tp.Optional[tp.Any]) -> bool:
- return val is not None
-
-
-def default(val: tp.Any, d: tp.Any) -> tp.Any:
- return val if exists(val) else d
-
-
-def l2norm(t):
- return F.normalize(t, p=2, dim=-1)
-
-
-def ema_inplace(moving_avg, new, decay: float):
- moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay))
-
-
-def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5):
- return (x + epsilon) / (x.sum() + n_categories * epsilon)
-
-
-def uniform_init(*shape: int):
- t = torch.empty(shape)
- nn.init.kaiming_uniform_(t)
- return t
-
-
-def sample_vectors(samples, num: int):
- num_samples, device = samples.shape[0], samples.device
-
- if num_samples >= num:
- indices = torch.randperm(num_samples, device=device)[:num]
- else:
- indices = torch.randint(0, num_samples, (num,), device=device)
-
- return samples[indices]
-
-
-def kmeans(samples, num_clusters: int, num_iters: int = 10):
- dim, dtype = samples.shape[-1], samples.dtype
-
- means = sample_vectors(samples, num_clusters)
-
- for _ in range(num_iters):
- diffs = rearrange(samples, "n d -> n () d") - rearrange(
- means, "c d -> () c d"
- )
- dists = -(diffs ** 2).sum(dim=-1)
-
- buckets = dists.max(dim=-1).indices
- bins = torch.bincount(buckets, minlength=num_clusters)
- zero_mask = bins == 0
- bins_min_clamped = bins.masked_fill(zero_mask, 1)
-
- new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype)
- new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples)
- new_means = new_means / bins_min_clamped[..., None]
-
- means = torch.where(zero_mask[..., None], means, new_means)
-
- return means, bins
-
-
-def orthogonal_loss_fn(t):
- # eq (2) from https://arxiv.org/abs/2112.00384
- n = t.shape[0]
- normed_codes = l2norm(t)
- identity = torch.eye(n, device=t.device)
- cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes)
- return ((cosine_sim - identity) ** 2).sum() / (n ** 2)
-
-
-class EuclideanCodebook(nn.Module):
- """Codebook with Euclidean distance.
-
- Args:
- dim (int): Dimension.
- codebook_size (int): Codebook size.
- kmeans_init (bool): Whether to use k-means to initialize the codebooks.
- If set to true, run the k-means algorithm on the first training batch and use
- the learned centroids as initialization.
- kmeans_iters (int): Number of iterations used for k-means algorithm at initialization.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
- randomly selected vector from the current batch.
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- kmeans_init: bool = False,
- kmeans_iters: int = 10,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- threshold_ema_dead_code: int = 2,
- ):
- super().__init__()
- self.decay = decay
- init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros
- embed = init_fn(codebook_size, dim)
-
- self.codebook_size = codebook_size
-
- self.kmeans_iters = kmeans_iters
- self.epsilon = epsilon
- self.threshold_ema_dead_code = threshold_ema_dead_code
-
- self.register_buffer("inited", torch.Tensor([not kmeans_init]))
- self.register_buffer("cluster_size", torch.zeros(codebook_size))
- self.register_buffer("embed", embed)
- self.register_buffer("embed_avg", embed.clone())
-
- @torch.jit.ignore
- def init_embed_(self, data):
- if self.inited:
- return
-
- embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters)
- self.embed.data.copy_(embed)
- self.embed_avg.data.copy_(embed.clone())
- self.cluster_size.data.copy_(cluster_size)
- self.inited.data.copy_(torch.Tensor([True]))
- # Make sure all buffers across workers are in sync after initialization
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def replace_(self, samples, mask):
- modified_codebook = torch.where(
- mask[..., None], sample_vectors(samples, self.codebook_size), self.embed
- )
- self.embed.data.copy_(modified_codebook)
-
- def expire_codes_(self, batch_samples):
- if self.threshold_ema_dead_code == 0:
- return
-
- expired_codes = self.cluster_size < self.threshold_ema_dead_code
- if not torch.any(expired_codes):
- return
-
- batch_samples = rearrange(batch_samples, "... d -> (...) d")
- self.replace_(batch_samples, mask=expired_codes)
- flashy.distrib.broadcast_tensors(self.buffers())
-
- def preprocess(self, x):
- x = rearrange(x, "... d -> (...) d")
- return x
-
- def quantize(self, x):
- embed = self.embed.t()
- dist = -(
- x.pow(2).sum(1, keepdim=True)
- - 2 * x @ embed
- + embed.pow(2).sum(0, keepdim=True)
- )
- embed_ind = dist.max(dim=-1).indices
- return embed_ind
-
- def postprocess_emb(self, embed_ind, shape):
- return embed_ind.view(*shape[:-1])
-
- def dequantize(self, embed_ind):
- quantize = F.embedding(embed_ind, self.embed)
- return quantize
-
- def encode(self, x):
- shape = x.shape
- # pre-process
- x = self.preprocess(x)
- # quantize
- embed_ind = self.quantize(x)
- # post-process
- embed_ind = self.postprocess_emb(embed_ind, shape)
- return embed_ind
-
- def decode(self, embed_ind):
- quantize = self.dequantize(embed_ind)
- return quantize
-
- def forward(self, x):
- shape, dtype = x.shape, x.dtype
- x = self.preprocess(x)
- self.init_embed_(x)
-
- embed_ind = self.quantize(x)
- embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype)
- embed_ind = self.postprocess_emb(embed_ind, shape)
- quantize = self.dequantize(embed_ind)
-
- if self.training:
- # We do the expiry of code at that point as buffers are in sync
- # and all the workers will take the same decision.
- self.expire_codes_(x)
- ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay)
- embed_sum = x.t() @ embed_onehot
- ema_inplace(self.embed_avg, embed_sum.t(), self.decay)
- cluster_size = (
- laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon)
- * self.cluster_size.sum()
- )
- embed_normalized = self.embed_avg / cluster_size.unsqueeze(1)
- self.embed.data.copy_(embed_normalized)
-
- return quantize, embed_ind
-
-
-class VectorQuantization(nn.Module):
- """Vector quantization implementation.
- Currently supports only euclidean distance.
-
- Args:
- dim (int): Dimension
- codebook_size (int): Codebook size
- codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim.
- decay (float): Decay for exponential moving average over the codebooks.
- epsilon (float): Epsilon value for numerical stability.
- kmeans_init (bool): Whether to use kmeans to initialize the codebooks.
- kmeans_iters (int): Number of iterations used for kmeans initialization.
- channels_last (bool): Channels are the last dimension in the input tensors.
- commitment_weight (float): Weight for commitment loss.
- orthogonal_reg_weight (float): Orthogonal regularization weights.
- orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes.
- orthogonal_reg_max_codes (optional int): Maximum number of codes to consider
- for orthogonal regularization.
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes
- that have an exponential moving average cluster size less than the specified threshold with
- randomly selected vector from the current batch.
- """
- def __init__(
- self,
- dim: int,
- codebook_size: int,
- codebook_dim: tp.Optional[int] = None,
- decay: float = 0.8,
- epsilon: float = 1e-5,
- kmeans_init: bool = False,
- kmeans_iters: int = 10,
- threshold_ema_dead_code: int = 2,
- channels_last: bool = False,
- commitment_weight: float = 1.,
- orthogonal_reg_weight: float = 0.0,
- orthogonal_reg_active_codes_only: bool = False,
- orthogonal_reg_max_codes: tp.Optional[int] = None,
- ):
- super().__init__()
- _codebook_dim: int = default(codebook_dim, dim)
-
- requires_projection = _codebook_dim != dim
- self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity())
- self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity())
-
- self.epsilon = epsilon
- self.commitment_weight = commitment_weight
-
- self.orthogonal_reg_weight = orthogonal_reg_weight
- self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only
- self.orthogonal_reg_max_codes = orthogonal_reg_max_codes
-
- self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size,
- kmeans_init=kmeans_init, kmeans_iters=kmeans_iters,
- decay=decay, epsilon=epsilon,
- threshold_ema_dead_code=threshold_ema_dead_code)
- self.codebook_size = codebook_size
-
- self.channels_last = channels_last
-
- @property
- def codebook(self):
- return self._codebook.embed
-
- @property
- def inited(self):
- return self._codebook.inited
-
- def _preprocess(self, x):
- if not self.channels_last:
- x = rearrange(x, "b d n -> b n d")
- return x
-
- def _postprocess(self, quantize):
- if not self.channels_last:
- quantize = rearrange(quantize, "b n d -> b d n")
- return quantize
-
- def encode(self, x):
- x = self._preprocess(x)
- x = self.project_in(x)
- embed_in = self._codebook.encode(x)
- return embed_in
-
- def decode(self, embed_ind):
- quantize = self._codebook.decode(embed_ind)
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
- return quantize
-
- def forward(self, x):
- device = x.device
- x = self._preprocess(x)
-
- x = self.project_in(x)
- quantize, embed_ind = self._codebook(x)
-
- if self.training:
- quantize = x + (quantize - x).detach()
-
- loss = torch.tensor([0.0], device=device, requires_grad=self.training)
-
- if self.training:
- if self.commitment_weight > 0:
- commit_loss = F.mse_loss(quantize.detach(), x)
- loss = loss + commit_loss * self.commitment_weight
-
- if self.orthogonal_reg_weight > 0:
- codebook = self.codebook
-
- if self.orthogonal_reg_active_codes_only:
- # only calculate orthogonal loss for the activated codes for this batch
- unique_code_ids = torch.unique(embed_ind)
- codebook = codebook[unique_code_ids]
-
- num_codes = codebook.shape[0]
- if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes:
- rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes]
- codebook = codebook[rand_ids]
-
- orthogonal_reg_loss = orthogonal_loss_fn(codebook)
- loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight
-
- quantize = self.project_out(quantize)
- quantize = self._postprocess(quantize)
-
- return quantize, embed_ind, loss
-
-
-class ResidualVectorQuantization(nn.Module):
- """Residual vector quantization implementation.
-
- Follows Algorithm 1. in https://arxiv.org/pdf/2107.03312.pdf
- """
- def __init__(self, *, num_quantizers, **kwargs):
- super().__init__()
- self.layers = nn.ModuleList(
- [VectorQuantization(**kwargs) for _ in range(num_quantizers)]
- )
-
- def forward(self, x, n_q: tp.Optional[int] = None):
- quantized_out = 0.0
- residual = x
-
- all_losses = []
- all_indices = []
-
- n_q = n_q or len(self.layers)
-
- for i, layer in enumerate(self.layers[:n_q]):
- quantized, indices, loss = layer(residual)
- residual = residual - quantized
- quantized_out = quantized_out + quantized
- all_indices.append(indices)
- all_losses.append(loss)
-
- out_losses, out_indices = map(torch.stack, (all_losses, all_indices))
- return quantized_out, out_indices, out_losses
-
- def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor:
- residual = x
- all_indices = []
- n_q = n_q or len(self.layers)
- for layer in self.layers[:n_q]:
- indices = layer.encode(residual)
- quantized = layer.decode(indices)
- residual = residual - quantized
- all_indices.append(indices)
- out_indices = torch.stack(all_indices)
- return out_indices
-
- def decode(self, q_indices: torch.Tensor) -> torch.Tensor:
- quantized_out = torch.tensor(0.0, device=q_indices.device)
- for i, indices in enumerate(q_indices):
- layer = self.layers[i]
- quantized = layer.decode(indices)
- quantized_out = quantized_out + quantized
- return quantized_out
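For reference only (this sketch is not part of the deleted file): how the `ResidualVectorQuantization` module defined above might be exercised. The sizes are illustrative, and the `eval()` call assumes the usual encodec-style behavior that codebooks created without k-means initialization are already marked as inited.

```python
import torch

# Hypothetical sketch; dim, codebook_size and num_quantizers are illustrative values.
rvq = ResidualVectorQuantization(num_quantizers=4, dim=128, codebook_size=1024)
rvq.eval()  # skip the EMA / dead-code-expiry updates, which call flashy's distributed broadcast

x = torch.randn(2, 128, 50)  # (batch, dim, time); channels_last defaults to False
with torch.no_grad():
    quantized, indices, losses = rvq(x)  # quantized: (2, 128, 50), indices: (4, 2, 50)
    codes = rvq.encode(x, n_q=2)         # use only the first two residual stages
    coarse = rvq.decode(codes)           # coarse reconstruction from those codes
```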
diff --git a/spaces/QINGCHE/TSA/train.py b/spaces/QINGCHE/TSA/train.py
deleted file mode 100644
index 942d4105b1e07963c0f5cdf78a11c04cdf3342a1..0000000000000000000000000000000000000000
--- a/spaces/QINGCHE/TSA/train.py
+++ /dev/null
@@ -1,217 +0,0 @@
-# %%
-import numpy as np
-import pandas as pd
-import csv
-import torch.nn as nn
-from torch.optim.lr_scheduler import ReduceLROnPlateau
-from torch.utils.data import TensorDataset, DataLoader
-from transformers import BertTokenizer,BertConfig,AdamW
-from sklearn.metrics import accuracy_score
-from sklearn.metrics import classification_report
-from tqdm import tqdm
-import torch
-import transformers
-from torch.utils.data import Dataset, DataLoader
-
-# %%
-
-class MyDataSet(Dataset):
- def __init__(self, loaded_data):
- self.data = loaded_data
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, idx):
- return self.data[idx]
-
-Data_path = "/kaggle/input/inference/train.csv"
-Totle_data = pd.read_csv(Data_path)
-Totle_data = Totle_data.sample(frac=0.1)
-Totle_data = Totle_data.dropna(axis=0,subset = ["2"])
-custom_dataset = MyDataSet(Totle_data)
-# split into train / validation / test by ratio
-train_size = int(len(custom_dataset) * 0.6)
-validate_size = int(len(custom_dataset) * 0.1)
-test_size = len(custom_dataset) - validate_size - train_size
-train_dataset, validate_dataset, test_dataset = torch.utils.data.random_split(custom_dataset, [train_size, validate_size, test_size])
-
-# paths where the splits are saved
-train_data_path="Bert_Try.csv"
-dev_data_path = "Bert_Dev.csv"
-test_data_path="Bert_Test.csv"
-
-train_dataset = Totle_data.iloc[train_dataset.indices]
-validate_dataset = Totle_data.iloc[validate_dataset.indices]
-test_dataset = Totle_data.iloc[test_dataset.indices]
-
-# index=False means the row index is not written; header=True keeps the column headers
-train_dataset.to_csv(train_data_path,index=False,header=True)
-validate_dataset.to_csv(dev_data_path ,index=False,header=True)
-test_dataset.to_csv(test_data_path,index=False,header=True)
-
-# %%
-data = pd.read_csv(train_data_path)
-data.head()
-
-# %%
-
-class BertClassificationModel(nn.Module):
- def __init__(self):
- super(BertClassificationModel, self).__init__()
- # load the pretrained BERT model
- pretrained_weights="bert-base-chinese"
- self.bert = transformers.BertModel.from_pretrained(pretrained_weights)
- for param in self.bert.parameters():
- param.requires_grad = True
- # define the linear classification head (768 -> 3 classes)
- self.dense = nn.Linear(768, 3)
-
- def forward(self, input_ids,token_type_ids,attention_mask):
- # run BERT on the encoded inputs
- bert_output = self.bert(input_ids=input_ids,token_type_ids=token_type_ids, attention_mask=attention_mask)
- # take the pooled output of the pretrained model
- bert_cls_hidden_state = bert_output[1]
- # project the 768-dim pooled vector to the 3 class logits
- linear_output = self.dense(bert_cls_hidden_state)
- return linear_output
-
-# %%
-
-def encoder(max_len,vocab_path,text_list):
- # embed text_list into the input format the BERT model expects
- # load the tokenizer
- tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
- encoded = tokenizer(
- text_list,
- padding = True,
- truncation = True,
- max_length = max_len,
- return_tensors='pt' # return PyTorch tensors
- )
- input_ids = encoded['input_ids']
- token_type_ids = encoded['token_type_ids']
- attention_mask = encoded['attention_mask']
- return input_ids,token_type_ids,attention_mask
-
-# %%
-labels2dict = {"neutral":0,"entailment":1,"contradiction":2}
-def load_data(path):
- csvFileObj = open(path)
- readerObj = csv.reader(csvFileObj)
- text_list = []
- labels = []
- for row in readerObj:
- # skip the header row
- if readerObj.line_num == 1:
- continue
- # adjust this index if the label sits in a different column
- label = int(labels2dict[row[0]])
- text = row[1]
- text_list.append(text)
- labels.append(label)
- # call encoder() to build the three BERT input tensors
- input_ids,token_type_ids,attention_mask = encoder(max_len=150,vocab_path="/root/Bert/bert-base-chinese/vocab.txt",text_list=text_list)
- labels = torch.tensor(labels)
- # wrap the encoder outputs and the labels in a TensorDataset
- data = TensorDataset(input_ids,token_type_ids,attention_mask,labels)
- return data
-
-# %%
-# set the batch size
-batch_size = 16
-# data file paths
-train_data_path="Bert_Try.csv"
-dev_data_path="Bert_Dev.csv"
-test_data_path="Bert_Test.csv"
-# load each split as tensors via load_data()
-train_data = load_data(train_data_path)
-dev_data = load_data(dev_data_path)
-test_data = load_data(test_data_path)
-# wrap the train/dev/test datasets in DataLoaders
-train_loader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
-dev_loader = DataLoader(dataset=dev_data, batch_size=batch_size, shuffle=True)
-test_loader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=False)
-
-# %%
-def dev(model,dev_loader):
- model.to(device)
-
- model.eval()
-
- with torch.no_grad():
- correct = 0
- total = 0
- for step, (input_ids,token_type_ids,attention_mask,labels) in tqdm(enumerate(dev_loader),desc='Dev Iteration:'):
- input_ids,token_type_ids,attention_mask,labels=input_ids.to(device),token_type_ids.to(device),attention_mask.to(device),labels.to(device)
- out_put = model(input_ids,token_type_ids,attention_mask)
- _, predict = torch.max(out_put.data, 1)
- correct += (predict==labels).sum().item()
- total += labels.size(0)
- res = correct / total
- return res
-
-# %%
-
-device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
-def train(model, train_loader, dev_loader):
-
- model.to(device)
- model.train()
- criterion = nn.CrossEntropyLoss()
- param_optimizer = list(model.named_parameters())
- no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
-
- optimizer_grouped_parameters = [
- {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
- 'weight_decay': 0.01},
- {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
- ]
-
- optimizer_params = {'lr': 1e-5, 'eps': 1e-6, 'correct_bias': False}
- optimizer = AdamW(optimizer_grouped_parameters, **optimizer_params)
- scheduler = ReduceLROnPlateau(optimizer,mode='max',factor=0.5,min_lr=1e-7, patience=5,verbose= True, threshold=0.0001, eps=1e-08)
- t_total = len(train_loader)
-
- total_epochs = 10
- bestAcc = 0
- correct = 0
- total = 0
- print('Training and verification begin!')
- for epoch in range(total_epochs):
- for step, (input_ids,token_type_ids,attention_mask,labels) in enumerate(train_loader):
-
- optimizer.zero_grad()
- input_ids,token_type_ids,attention_mask,labels=input_ids.to(device),token_type_ids.to(device),attention_mask.to(device),labels.to(device)
- out_put = model(input_ids,token_type_ids,attention_mask)
- loss = criterion(out_put, labels)
- _, predict = torch.max(out_put.data, 1)
- correct += (predict == labels).sum().item()
- total += labels.size(0)
- loss.backward()
- optimizer.step()
- # print training accuracy every 10 steps
- if (step + 1) % 10 == 0:
- train_acc = correct / total
- print("Train Epoch[{}/{}],step[{}/{}],tra_acc{:.6f} %,loss:{:.6f}".format(epoch + 1, total_epochs, step + 1, len(train_loader),train_acc*100,loss.item()))
- # run validation every 200 steps
- if (step + 1) % 200 == 0:
- train_acc = correct / total
- # run dev() to validate, and save the model whenever dev accuracy improves
- acc = dev(model, dev_loader)
- if bestAcc < acc:
- bestAcc = acc
- # model save path
- path = 'bert_model.pkl'
- torch.save(model, path)
- print("DEV Epoch[{}/{}],step[{}/{}],tra_acc{:.6f} %,bestAcc{:.6f}%,dev_acc{:.6f} %,loss:{:.6f}".format(epoch + 1, total_epochs, step + 1, len(train_loader),train_acc*100,bestAcc*100,acc*100,loss.item()))
- scheduler.step(bestAcc)
-
-# %%
-
-path = '/kaggle/input/inference/bert_model.pkl'
-# model = torch.load(path)
-# instantiate the model
-model = BertClassificationModel()
-# call the training function to train and validate
-train(model,train_loader,dev_loader)
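Not part of the deleted script: a hedged sketch of how the checkpoint saved by `train()` might be loaded back for inference, reusing the script's own `encoder()` helper, `labels2dict` mapping and `device`. The example sentence and the `map_location` argument are assumptions.

```python
# Hypothetical inference sketch; torch.load of a whole model assumes BertClassificationModel is importable.
import torch

id2label = {v: k for k, v in labels2dict.items()}  # {0: 'neutral', 1: 'entailment', 2: 'contradiction'}
model = torch.load('bert_model.pkl', map_location=device)
model.eval()

input_ids, token_type_ids, attention_mask = encoder(
    max_len=150, vocab_path="bert-base-chinese", text_list=["今天的天气很好。"]  # any Chinese sentence
)
with torch.no_grad():
    logits = model(input_ids.to(device), token_type_ids.to(device), attention_mask.to(device))
print(id2label[logits.argmax(dim=1).item()])
```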
diff --git a/spaces/RMXK/RVC_HFF/utils/backups.py b/spaces/RMXK/RVC_HFF/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- print(f'Imported file from Google Drive backup: {filename}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
- print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def copy_weights_folder_to_drive():
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
- print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
- print(f'Backed up file: {filename}')
- else:
- print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
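Not part of the deleted utility: a sketch of how its functions might be wired together in a Colab cell. Running the backup loop in a daemon thread and importing the module as `utils.backups` are assumptions, not something the file itself prescribes.

```python
# Hypothetical driver; assumes Google Drive is already mounted under /content/drive.
import threading
from utils.backups import import_google_drive_backup, backup_files

import_google_drive_backup()  # pull existing logs and weights down from Drive first
threading.Thread(target=backup_files, daemon=True).start()  # then keep syncing in the background
```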
diff --git a/spaces/RMXK/RVC_HFF/venv.sh b/spaces/RMXK/RVC_HFF/venv.sh
deleted file mode 100644
index aa230992e892292cb8aa5924ecdafc5758f14e95..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/venv.sh
+++ /dev/null
@@ -1 +0,0 @@
-python3.8 -m venv .venv
diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/models_onnx.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/models_onnx.py
deleted file mode 100644
index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,819 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-import numpy as np
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # taking % 1 here means the harmonic products cannot be folded into later post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # applying % 1 here would prevent optimizing the cumsum below
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMsNSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- version,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- if version == "v1":
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- else:
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- self.speaker_map = None
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def construct_spkmixmap(self, n_speaker):
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
- for i in range(n_speaker):
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
- self.speaker_map = self.speaker_map.unsqueeze(0)
-
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- else:
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
-
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
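Not part of the deleted file: a small sketch of the `SineGen` excitation generator defined above, assuming the module's `lib.infer_pack` imports resolve. The sample rate, frame count and upsampling factor are illustrative, not values taken from any RVC configuration.

```python
import torch

# Hypothetical values: 100 F0 frames, a hop of 160 samples, 16 kHz audio, 4 harmonic overtones.
sine_gen = SineGen(samp_rate=16000, harmonic_num=4)
f0 = torch.full((1, 100), 220.0)          # (batch, frames); zeros would mark unvoiced frames
sines, uv, noise = sine_gen(f0, upp=160)  # sines: (1, 100 * 160, 5), uv: (1, 100 * 160, 1)
```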
diff --git a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/__init__.py b/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/RlxDrk/huggingtweets-dolceragazza26-femdomfusion-mistressleiaa/app.py b/spaces/RlxDrk/huggingtweets-dolceragazza26-femdomfusion-mistressleiaa/app.py
deleted file mode 100644
index d70f39bf708fbb406059d8ccfdcf4a693deafa5e..0000000000000000000000000000000000000000
--- a/spaces/RlxDrk/huggingtweets-dolceragazza26-femdomfusion-mistressleiaa/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/huggingtweets/dolceragazza26-femdomfusion-mistressleiaa").launch()
\ No newline at end of file
diff --git a/spaces/RobLi/ControlNet-v1-1/app_mlsd.py b/spaces/RobLi/ControlNet-v1-1/app_mlsd.py
deleted file mode 100644
index 9440f2f480b3713aa081258909221eab792157b5..0000000000000000000000000000000000000000
--- a/spaces/RobLi/ControlNet-v1-1/app_mlsd.py
+++ /dev/null
@@ -1,113 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from utils import randomize_seed_fn
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button('Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- preprocess_resolution = gr.Slider(
- label='Preprocess resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- mlsd_value_threshold = gr.Slider(
- label='Hough value threshold (MLSD)',
- minimum=0.01,
- maximum=2.0,
- value=0.1,
- step=0.01)
- mlsd_distance_threshold = gr.Slider(
- label='Hough distance threshold (MLSD)',
- minimum=0.01,
- maximum=20.0,
- value=0.1,
- step=0.01)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- preprocess_resolution,
- num_steps,
- guidance_scale,
- seed,
- mlsd_value_threshold,
- mlsd_distance_threshold,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name='mlsd',
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='MLSD')
- demo = create_demo(model.process_mlsd)
- demo.queue().launch()
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/ml/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/ml/__init__.py
deleted file mode 100644
index 612dff101f358f74db3eca601f0b9573ca6d93cb..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/ml/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .methods import CloseFormMatting, KNNMatting, LearningBasedMatting, FastMatting, RandomWalksMatting
diff --git a/spaces/SirishaArveti/GenerativeAIChatBot/README.md b/spaces/SirishaArveti/GenerativeAIChatBot/README.md
deleted file mode 100644
index 3e2310f2da776b5f99a15dbb824cb692f51ba061..0000000000000000000000000000000000000000
--- a/spaces/SirishaArveti/GenerativeAIChatBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GenerativeAIChatBot
-emoji: 👀
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SkalskiP/MetaCLIP/README.md b/spaces/SkalskiP/MetaCLIP/README.md
deleted file mode 100644
index cd673ff9a54fe78062d2d0f4eabc90b0bb4c26ab..0000000000000000000000000000000000000000
--- a/spaces/SkalskiP/MetaCLIP/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MetaCLIP
-emoji: 👁️
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Slep/CondViT-LRVSF-Demo/src/transform.py b/spaces/Slep/CondViT-LRVSF-Demo/src/transform.py
deleted file mode 100644
index 0889628f54ff7b57a4e26cd5618b3b9aa05ee7f7..0000000000000000000000000000000000000000
--- a/spaces/Slep/CondViT-LRVSF-Demo/src/transform.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from torchvision.transforms import transforms as tf
-import torchvision.transforms.functional as F
-
-
-class SquarePad:
- def __init__(self, color):
- self.col = color
-
- def __call__(self, image):
- max_wh = max(image.size)
- p_left, p_top = [(max_wh - s) // 2 for s in image.size]
- p_right, p_bottom = [
- max_wh - (s + pad) for s, pad in zip(image.size, [p_left, p_top])
- ]
- padding = (p_left, p_top, p_right, p_bottom)
- return F.pad(image, padding, self.col, "constant")
-
-def valid_tf(size):
- return tf.Compose(
- [
- SquarePad(255),
- tf.Resize(size),
- tf.ToTensor(),
- tf.Normalize(
- mean=(0.48145466, 0.4578275, 0.40821073),
- std=(0.26862954, 0.26130258, 0.27577711),
- ),
- ]
- )
\ No newline at end of file
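Not part of the deleted file: a minimal sketch of applying the `valid_tf` pipeline above. The 224-pixel size and the synthetic test image are assumptions.

```python
from PIL import Image

preprocess = valid_tf(224)                 # white square padding, resize, CLIP-style normalization
img = Image.new("RGB", (300, 180), "red")  # stand-in for a product photo
tensor = preprocess(img)                   # torch.Tensor of shape (3, 224, 224)
```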
diff --git a/spaces/Smithjohny376/andite-anything-v4.0/README.md b/spaces/Smithjohny376/andite-anything-v4.0/README.md
deleted file mode 100644
index 24e540ff345047abf0f5509cd7e19823d4744989..0000000000000000000000000000000000000000
--- a/spaces/Smithjohny376/andite-anything-v4.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Andite Anything V4.0
-emoji: 🐨
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Sreezx/Sentzi/run.ps1 b/spaces/Sreezx/Sentzi/run.ps1
deleted file mode 100644
index 698a3ad38c108ceed7f8ac637aa6d2353c8a036a..0000000000000000000000000000000000000000
--- a/spaces/Sreezx/Sentzi/run.ps1
+++ /dev/null
@@ -1,24 +0,0 @@
-# Define the path to your requirements.txt file and the Python executable
-$requirementsFile = "requirements.txt"
-$pythonExec = "sentzi-venv\Scripts\python.exe"
-
-# activate venv
-$venvPath = ".\sentzi-venv\Scripts\Activate.ps1"
-
-# Check if the Activate.ps1 script exists
-if (Test-Path $venvPath) {
- # Activate the virtual environment
- . $venvPath
- Write-Host "Virtual environment activated."
-} else {
- Write-Host "Virtual environment script not found."
-}
-
-# Check if Python is installed
-if (Test-Path $pythonExec) {
- # Run the Streamlit application
- Write-Host "Running Streamlit application..."
- & $pythonExec -m streamlit run app.py
-} else {
- Write-Host "Python is not installed. Please install Python and try again."
-}
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Image.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Image.py
deleted file mode 100644
index 4a142a008ff09d770115c5969e4f90bd76463e1a..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Image.py
+++ /dev/null
@@ -1,3928 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# the Image class wrapper
-#
-# partial release history:
-# 1995-09-09 fl Created
-# 1996-03-11 fl PIL release 0.0 (proof of concept)
-# 1996-04-30 fl PIL release 0.1b1
-# 1999-07-28 fl PIL release 1.0 final
-# 2000-06-07 fl PIL release 1.1
-# 2000-10-20 fl PIL release 1.1.1
-# 2001-05-07 fl PIL release 1.1.2
-# 2002-03-15 fl PIL release 1.1.3
-# 2003-05-10 fl PIL release 1.1.4
-# 2005-03-28 fl PIL release 1.1.5
-# 2006-12-02 fl PIL release 1.1.6
-# 2009-11-15 fl PIL release 1.1.7
-#
-# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved.
-# Copyright (c) 1995-2009 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import atexit
-import builtins
-import io
-import logging
-import math
-import os
-import re
-import struct
-import sys
-import tempfile
-import warnings
-from collections.abc import Callable, MutableMapping
-from enum import IntEnum
-from pathlib import Path
-
-try:
- import defusedxml.ElementTree as ElementTree
-except ImportError:
- ElementTree = None
-
-# VERSION was removed in Pillow 6.0.0.
-# PILLOW_VERSION was removed in Pillow 9.0.0.
-# Use __version__ instead.
-from . import (
- ExifTags,
- ImageMode,
- TiffTags,
- UnidentifiedImageError,
- __version__,
- _plugins,
-)
-from ._binary import i32le, o32be, o32le
-from ._deprecate import deprecate
-from ._util import DeferredError, is_path
-
-
-def __getattr__(name):
- categories = {"NORMAL": 0, "SEQUENCE": 1, "CONTAINER": 2}
- if name in categories:
- deprecate("Image categories", 10, "is_animated", plural=True)
- return categories[name]
- old_resampling = {
- "LINEAR": "BILINEAR",
- "CUBIC": "BICUBIC",
- "ANTIALIAS": "LANCZOS",
- }
- if name in old_resampling:
- deprecate(
- name, 10, f"{old_resampling[name]} or Resampling.{old_resampling[name]}"
- )
- return Resampling[old_resampling[name]]
- msg = f"module '{__name__}' has no attribute '{name}'"
- raise AttributeError(msg)
-
-
-logger = logging.getLogger(__name__)
-
-
-class DecompressionBombWarning(RuntimeWarning):
- pass
-
-
-class DecompressionBombError(Exception):
- pass
-
-
-# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image
-MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3)
-
-
-try:
- # If the _imaging C module is not present, Pillow will not load.
- # Note that other modules should not refer to _imaging directly;
- # import Image and use the Image.core variable instead.
- # Also note that Image.core is not a publicly documented interface,
- # and should be considered private and subject to change.
- from . import _imaging as core
-
- if __version__ != getattr(core, "PILLOW_VERSION", None):
- msg = (
- "The _imaging extension was built for another version of Pillow or PIL:\n"
- f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n"
- f"Pillow version: {__version__}"
- )
- raise ImportError(msg)
-
-except ImportError as v:
- core = DeferredError(ImportError("The _imaging C module is not installed."))
- # Explanations for ways that we know we might have an import error
- if str(v).startswith("Module use of python"):
- # The _imaging C module is present, but not compiled for
- # the right version (windows only). Print a warning, if
- # possible.
- warnings.warn(
- "The _imaging extension was built for another version of Python.",
- RuntimeWarning,
- )
- elif str(v).startswith("The _imaging extension"):
- warnings.warn(str(v), RuntimeWarning)
- # Fail here anyway. Don't let people run with a mostly broken Pillow.
- # see docs/porting.rst
- raise
-
-
-# works everywhere, win for pypy, not cpython
-USE_CFFI_ACCESS = hasattr(sys, "pypy_version_info")
-try:
- import cffi
-except ImportError:
- cffi = None
-
-
-def isImageType(t):
- """
- Checks if an object is an image object.
-
- .. warning::
-
- This function is for internal use only.
-
- :param t: object to check if it's an image
- :returns: True if the object is an image
- """
- return hasattr(t, "im")
-
-
-#
-# Constants
-
-
-# transpose
-class Transpose(IntEnum):
- FLIP_LEFT_RIGHT = 0
- FLIP_TOP_BOTTOM = 1
- ROTATE_90 = 2
- ROTATE_180 = 3
- ROTATE_270 = 4
- TRANSPOSE = 5
- TRANSVERSE = 6
-
-
-# transforms (also defined in Imaging.h)
-class Transform(IntEnum):
- AFFINE = 0
- EXTENT = 1
- PERSPECTIVE = 2
- QUAD = 3
- MESH = 4
-
-
-# resampling filters (also defined in Imaging.h)
-class Resampling(IntEnum):
- NEAREST = 0
- BOX = 4
- BILINEAR = 2
- HAMMING = 5
- BICUBIC = 3
- LANCZOS = 1
-
-
-_filters_support = {
- Resampling.BOX: 0.5,
- Resampling.BILINEAR: 1.0,
- Resampling.HAMMING: 1.0,
- Resampling.BICUBIC: 2.0,
- Resampling.LANCZOS: 3.0,
-}
-
-
-# dithers
-class Dither(IntEnum):
- NONE = 0
- ORDERED = 1 # Not yet implemented
- RASTERIZE = 2 # Not yet implemented
- FLOYDSTEINBERG = 3 # default
-
-
-# palettes/quantizers
-class Palette(IntEnum):
- WEB = 0
- ADAPTIVE = 1
-
-
-class Quantize(IntEnum):
- MEDIANCUT = 0
- MAXCOVERAGE = 1
- FASTOCTREE = 2
- LIBIMAGEQUANT = 3
-
-
-module = sys.modules[__name__]
-for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize):
- for item in enum:
- setattr(module, item.name, item.value)
-
-
-if hasattr(core, "DEFAULT_STRATEGY"):
- DEFAULT_STRATEGY = core.DEFAULT_STRATEGY
- FILTERED = core.FILTERED
- HUFFMAN_ONLY = core.HUFFMAN_ONLY
- RLE = core.RLE
- FIXED = core.FIXED
-
-
-# --------------------------------------------------------------------
-# Registries
-
-ID = []
-OPEN = {}
-MIME = {}
-SAVE = {}
-SAVE_ALL = {}
-EXTENSION = {}
-DECODERS = {}
-ENCODERS = {}
-
-# --------------------------------------------------------------------
-# Modes
-
-_ENDIAN = "<" if sys.byteorder == "little" else ">"
-
-
-def _conv_type_shape(im):
- m = ImageMode.getmode(im.mode)
- shape = (im.height, im.width)
- extra = len(m.bands)
- if extra != 1:
- shape += (extra,)
- return shape, m.typestr
-
-
-MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"]
-
-# raw modes that may be memory mapped. NOTE: if you change this, you
-# may have to modify the stride calculation in map.c too!
-_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B")
-
-
-def getmodebase(mode):
- """
- Gets the "base" mode for given mode. This function returns "L" for
- images that contain grayscale data, and "RGB" for images that
- contain color data.
-
- :param mode: Input mode.
- :returns: "L" or "RGB".
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).basemode
-
-
-def getmodetype(mode):
- """
- Gets the storage type mode. Given a mode, this function returns a
- single-layer mode suitable for storing individual bands.
-
- :param mode: Input mode.
- :returns: "L", "I", or "F".
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).basetype
-
-
-def getmodebandnames(mode):
- """
- Gets a list of individual band names. Given a mode, this function returns
- a tuple containing the names of individual bands (use
- :py:meth:`~PIL.Image.getmodetype` to get the mode used to store each
- individual band).
-
- :param mode: Input mode.
- :returns: A tuple containing band names. The length of the tuple
- gives the number of bands in an image of the given mode.
- :exception KeyError: If the input mode was not a standard mode.
- """
- return ImageMode.getmode(mode).bands
-
-
-def getmodebands(mode):
- """
- Gets the number of individual bands for this mode.
-
- :param mode: Input mode.
- :returns: The number of bands in this mode.
- :exception KeyError: If the input mode was not a standard mode.
- """
- return len(ImageMode.getmode(mode).bands)
-
-
-# --------------------------------------------------------------------
-# Helpers
-
-_initialized = 0
-
-
-def preinit():
- """Explicitly load standard file format drivers."""
-
- global _initialized
- if _initialized >= 1:
- return
-
- try:
- from . import BmpImagePlugin
-
- assert BmpImagePlugin
- except ImportError:
- pass
- try:
- from . import GifImagePlugin
-
- assert GifImagePlugin
- except ImportError:
- pass
- try:
- from . import JpegImagePlugin
-
- assert JpegImagePlugin
- except ImportError:
- pass
- try:
- from . import PpmImagePlugin
-
- assert PpmImagePlugin
- except ImportError:
- pass
- try:
- from . import PngImagePlugin
-
- assert PngImagePlugin
- except ImportError:
- pass
- # try:
- # import TiffImagePlugin
- # assert TiffImagePlugin
- # except ImportError:
- # pass
-
- _initialized = 1
-
-
-def init():
- """
- Explicitly initializes the Python Imaging Library. This function
- loads all available file format drivers.
- """
-
- global _initialized
- if _initialized >= 2:
- return 0
-
- for plugin in _plugins:
- try:
- logger.debug("Importing %s", plugin)
- __import__(f"PIL.{plugin}", globals(), locals(), [])
- except ImportError as e:
- logger.debug("Image: failed to import %s: %s", plugin, e)
-
- if OPEN or SAVE:
- _initialized = 2
- return 1
-
-
-# --------------------------------------------------------------------
-# Codec factories (used by tobytes/frombytes and ImageFile.load)
-
-
-def _getdecoder(mode, decoder_name, args, extra=()):
- # tweak arguments
- if args is None:
- args = ()
- elif not isinstance(args, tuple):
- args = (args,)
-
- try:
- decoder = DECODERS[decoder_name]
- except KeyError:
- pass
- else:
- return decoder(mode, *args + extra)
-
- try:
- # get decoder
- decoder = getattr(core, decoder_name + "_decoder")
- except AttributeError as e:
- msg = f"decoder {decoder_name} not available"
- raise OSError(msg) from e
- return decoder(mode, *args + extra)
-
-
-def _getencoder(mode, encoder_name, args, extra=()):
- # tweak arguments
- if args is None:
- args = ()
- elif not isinstance(args, tuple):
- args = (args,)
-
- try:
- encoder = ENCODERS[encoder_name]
- except KeyError:
- pass
- else:
- return encoder(mode, *args + extra)
-
- try:
- # get encoder
- encoder = getattr(core, encoder_name + "_encoder")
- except AttributeError as e:
- msg = f"encoder {encoder_name} not available"
- raise OSError(msg) from e
- return encoder(mode, *args + extra)
-
-
-# --------------------------------------------------------------------
-# Simple expression analyzer
-
-
-def coerce_e(value):
- deprecate("coerce_e", 10)
- return value if isinstance(value, _E) else _E(1, value)
-
-
-# _E(scale, offset) represents the affine transformation scale * x + offset.
-# The "data" field is named for compatibility with the old implementation,
-# and should be renamed once coerce_e is removed.
-class _E:
- def __init__(self, scale, data):
- self.scale = scale
- self.data = data
-
- def __neg__(self):
- return _E(-self.scale, -self.data)
-
- def __add__(self, other):
- if isinstance(other, _E):
- return _E(self.scale + other.scale, self.data + other.data)
- return _E(self.scale, self.data + other)
-
- __radd__ = __add__
-
- def __sub__(self, other):
- return self + -other
-
- def __rsub__(self, other):
- return other + -self
-
- def __mul__(self, other):
- if isinstance(other, _E):
- return NotImplemented
- return _E(self.scale * other, self.data * other)
-
- __rmul__ = __mul__
-
- def __truediv__(self, other):
- if isinstance(other, _E):
- return NotImplemented
- return _E(self.scale / other, self.data / other)
-
-
-def _getscaleoffset(expr):
- a = expr(_E(1, 0))
- return (a.scale, a.data) if isinstance(a, _E) else (0, a)
-
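-# Usage sketch: Image.point() feeds a lambda through _getscaleoffset() for
-# "I", "I;16" and "F" images to recover an affine (scale, offset) pair, e.g.
-#
-#     _getscaleoffset(lambda x: x * 2 + 10)  -> (2, 10)
-#     _getscaleoffset(lambda x: 42)          -> (0, 42)   # constant expression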
-
-# --------------------------------------------------------------------
-# Implementation wrapper
-
-
-class Image:
- """
- This class represents an image object. To create
- :py:class:`~PIL.Image.Image` objects, use the appropriate factory
- functions. There's hardly ever any reason to call the Image constructor
- directly.
-
- * :py:func:`~PIL.Image.open`
- * :py:func:`~PIL.Image.new`
- * :py:func:`~PIL.Image.frombytes`
- """
-
- format = None
- format_description = None
- _close_exclusive_fp_after_loading = True
-
- def __init__(self):
- # FIXME: take "new" parameters / other image?
- # FIXME: turn mode and size into delegating properties?
- self.im = None
- self.mode = ""
- self._size = (0, 0)
- self.palette = None
- self.info = {}
- self._category = 0
- self.readonly = 0
- self.pyaccess = None
- self._exif = None
-
- def __getattr__(self, name):
- if name == "category":
- deprecate("Image categories", 10, "is_animated", plural=True)
- return self._category
- raise AttributeError(name)
-
- @property
- def width(self):
- return self.size[0]
-
- @property
- def height(self):
- return self.size[1]
-
- @property
- def size(self):
- return self._size
-
- def _new(self, im):
- new = Image()
- new.im = im
- new.mode = im.mode
- new._size = im.size
- if im.mode in ("P", "PA"):
- if self.palette:
- new.palette = self.palette.copy()
- else:
- from . import ImagePalette
-
- new.palette = ImagePalette.ImagePalette()
- new.info = self.info.copy()
- return new
-
- # Context manager support
- def __enter__(self):
- return self
-
- def __exit__(self, *args):
- if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False):
- if getattr(self, "_fp", False):
- if self._fp != self.fp:
- self._fp.close()
- self._fp = DeferredError(ValueError("Operation on closed image"))
- if self.fp:
- self.fp.close()
- self.fp = None
-
- def close(self):
- """
- Closes the file pointer, if possible.
-
- This operation will destroy the image core and release its memory.
- The image data will be unusable afterward.
-
- This function is required to close images that have multiple frames or
- have not had their file read and closed by the
- :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for
- more information.
- """
- try:
- if getattr(self, "_fp", False):
- if self._fp != self.fp:
- self._fp.close()
- self._fp = DeferredError(ValueError("Operation on closed image"))
- if self.fp:
- self.fp.close()
- self.fp = None
- except Exception as msg:
- logger.debug("Error closing: %s", msg)
-
- if getattr(self, "map", None):
- self.map = None
-
- # Instead of simply setting to None, we're setting up a
- # deferred error that will better explain that the core image
- # object is gone.
- self.im = DeferredError(ValueError("Operation on closed image"))
-
- def _copy(self):
- self.load()
- self.im = self.im.copy()
- self.pyaccess = None
- self.readonly = 0
-
- def _ensure_mutable(self):
- if self.readonly:
- self._copy()
- else:
- self.load()
-
- def _dump(self, file=None, format=None, **options):
- suffix = ""
- if format:
- suffix = "." + format
-
- if not file:
- f, filename = tempfile.mkstemp(suffix)
- os.close(f)
- else:
- filename = file
- if not filename.endswith(suffix):
- filename = filename + suffix
-
- self.load()
-
- if not format or format == "PPM":
- self.im.save_ppm(filename)
- else:
- self.save(filename, format, **options)
-
- return filename
-
- def __eq__(self, other):
- return (
- self.__class__ is other.__class__
- and self.mode == other.mode
- and self.size == other.size
- and self.info == other.info
- and self._category == other._category
- and self.getpalette() == other.getpalette()
- and self.tobytes() == other.tobytes()
- )
-
- def __repr__(self):
- return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % (
- self.__class__.__module__,
- self.__class__.__name__,
- self.mode,
- self.size[0],
- self.size[1],
- id(self),
- )
-
- def _repr_pretty_(self, p, cycle):
- """IPython plain text display support"""
-
- # Same as __repr__ but without unpredictable id(self),
- # to keep Jupyter notebook `text/plain` output stable.
- p.text(
- "<%s.%s image mode=%s size=%dx%d>"
- % (
- self.__class__.__module__,
- self.__class__.__name__,
- self.mode,
- self.size[0],
- self.size[1],
- )
- )
-
- def _repr_png_(self):
- """iPython display hook support
-
- :returns: png version of the image as bytes
- """
- b = io.BytesIO()
- try:
- self.save(b, "PNG")
- except Exception as e:
- msg = "Could not save to PNG for display"
- raise ValueError(msg) from e
- return b.getvalue()
-
- @property
- def __array_interface__(self):
- # numpy array interface support
- new = {"version": 3}
- try:
- if self.mode == "1":
- # Binary images need to be extended from bits to bytes
- # See: https://github.com/python-pillow/Pillow/issues/350
- new["data"] = self.tobytes("raw", "L")
- else:
- new["data"] = self.tobytes()
- except Exception as e:
- if not isinstance(e, (MemoryError, RecursionError)):
- try:
- import numpy
- from packaging.version import parse as parse_version
- except ImportError:
- pass
- else:
- if parse_version(numpy.__version__) < parse_version("1.23"):
- warnings.warn(e)
- raise
- new["shape"], new["typestr"] = _conv_type_shape(self)
- return new
-
- def __getstate__(self):
- return [self.info, self.mode, self.size, self.getpalette(), self.tobytes()]
-
- def __setstate__(self, state):
- Image.__init__(self)
- info, mode, size, palette, data = state
- self.info = info
- self.mode = mode
- self._size = size
- self.im = core.new(mode, size)
- if mode in ("L", "LA", "P", "PA") and palette:
- self.putpalette(palette)
- self.frombytes(data)
-
- def tobytes(self, encoder_name="raw", *args):
- """
- Return image as a bytes object.
-
- .. warning::
-
- This method returns the raw image data from the internal
- storage. For compressed image data (e.g. PNG, JPEG) use
- :meth:`~.save`, with a BytesIO parameter for in-memory
- data.
-
- :param encoder_name: What encoder to use. The default is to
- use the standard "raw" encoder.
-
- A list of C encoders can be seen under
- codecs section of the function array in
- :file:`_imaging.c`. Python encoders are
- registered within the relevant plugins.
- :param args: Extra arguments to the encoder.
- :returns: A :py:class:`bytes` object.
- """
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if encoder_name == "raw" and args == ():
- args = self.mode
-
- self.load()
-
- if self.width == 0 or self.height == 0:
- return b""
-
- # unpack data
- e = _getencoder(self.mode, encoder_name, args)
- e.setimage(self.im)
-
- bufsize = max(65536, self.size[0] * 4) # see RawEncode.c
-
- output = []
- while True:
- bytes_consumed, errcode, data = e.encode(bufsize)
- output.append(data)
- if errcode:
- break
- if errcode < 0:
- msg = f"encoder error {errcode} in tobytes"
- raise RuntimeError(msg)
-
- return b"".join(output)
-
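- # Usage sketch (user code, assuming ``from PIL import Image``): a raw
- # round trip between an image and its pixel bytes.
- #
- #     im = Image.new("RGB", (2, 2), (255, 0, 0))
- #     data = im.tobytes()        # raw encoder, 2 * 2 * 3 == 12 bytes
- #     copy = Image.new("RGB", (2, 2))
- #     copy.frombytes(data)       # loads the same pixels back in place
-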
- def tobitmap(self, name="image"):
- """
- Returns the image converted to an X11 bitmap.
-
- .. note:: This method only works for mode "1" images.
-
- :param name: The name prefix to use for the bitmap variables.
- :returns: A string containing an X11 bitmap.
- :raises ValueError: If the mode is not "1"
- """
-
- self.load()
- if self.mode != "1":
- msg = "not a bitmap"
- raise ValueError(msg)
- data = self.tobytes("xbm")
- return b"".join(
- [
- f"#define {name}_width {self.size[0]}\n".encode("ascii"),
- f"#define {name}_height {self.size[1]}\n".encode("ascii"),
- f"static char {name}_bits[] = {{\n".encode("ascii"),
- data,
- b"};",
- ]
- )
-
- def frombytes(self, data, decoder_name="raw", *args):
- """
- Loads this image with pixel data from a bytes object.
-
- This method is similar to the :py:func:`~PIL.Image.frombytes` function,
- but loads data into this image instead of creating a new image object.
- """
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- # default format
- if decoder_name == "raw" and args == ():
- args = self.mode
-
- # unpack data
- d = _getdecoder(self.mode, decoder_name, args)
- d.setimage(self.im)
- s = d.decode(data)
-
- if s[0] >= 0:
- msg = "not enough image data"
- raise ValueError(msg)
- if s[1] != 0:
- msg = "cannot decode image data"
- raise ValueError(msg)
-
- def load(self):
- """
- Allocates storage for the image and loads the pixel data. In
- normal cases, you don't need to call this method, since the
- Image class automatically loads an opened image when it is
- accessed for the first time.
-
- If the file associated with the image was opened by Pillow, then this
- method will close it. The exception to this is if the image has
- multiple frames, in which case the file will be left open for seek
- operations. See :ref:`file-handling` for more information.
-
- :returns: An image access object.
- :rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess`
- """
- if self.im is not None and self.palette and self.palette.dirty:
- # realize palette
- mode, arr = self.palette.getdata()
- self.im.putpalette(mode, arr)
- self.palette.dirty = 0
- self.palette.rawmode = None
- if "transparency" in self.info and mode in ("LA", "PA"):
- if isinstance(self.info["transparency"], int):
- self.im.putpalettealpha(self.info["transparency"], 0)
- else:
- self.im.putpalettealphas(self.info["transparency"])
- self.palette.mode = "RGBA"
- else:
- palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB"
- self.palette.mode = palette_mode
- self.palette.palette = self.im.getpalette(palette_mode, palette_mode)
-
- if self.im is not None:
- if cffi and USE_CFFI_ACCESS:
- if self.pyaccess:
- return self.pyaccess
- from . import PyAccess
-
- self.pyaccess = PyAccess.new(self, self.readonly)
- if self.pyaccess:
- return self.pyaccess
- return self.im.pixel_access(self.readonly)
-
- def verify(self):
- """
- Verifies the contents of a file. For data read from a file, this
- method attempts to determine if the file is broken, without
- actually decoding the image data. If this method finds any
- problems, it raises suitable exceptions. If you need to load
- the image after using this method, you must reopen the image
- file.
- """
- pass
-
- def convert(
- self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256
- ):
- """
- Returns a converted copy of this image. For the "P" mode, this
- method translates pixels through the palette. If mode is
- omitted, a mode is chosen so that all information in the image
- and the palette can be represented without a palette.
-
- The current version supports all possible conversions between
- "L", "RGB" and "CMYK". The ``matrix`` argument only supports "L"
- and "RGB".
-
- When translating a color image to greyscale (mode "L"),
- the library uses the ITU-R 601-2 luma transform::
-
- L = R * 299/1000 + G * 587/1000 + B * 114/1000
-
- The default method of converting a greyscale ("L") or "RGB"
- image into a bilevel (mode "1") image uses Floyd-Steinberg
- dither to approximate the original image luminosity levels. If
- dither is ``None``, all values larger than 127 are set to 255 (white),
- all other values to 0 (black). To use other thresholds, use the
- :py:meth:`~PIL.Image.Image.point` method.
-
- When converting from "RGBA" to "P" without a ``matrix`` argument,
- this passes the operation to :py:meth:`~PIL.Image.Image.quantize`,
- and ``dither`` and ``palette`` are ignored.
-
- When converting from "PA", if an "RGBA" palette is present, the alpha
- channel from the image will be used instead of the values from the palette.
-
- :param mode: The requested mode. See: :ref:`concept-modes`.
- :param matrix: An optional conversion matrix. If given, this
- should be a 4- or 12-tuple containing floating point values.
- :param dither: Dithering method, used when converting from
- mode "RGB" to "P" or from "RGB" or "L" to "1".
- Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
- (default). Note that this is not used when ``matrix`` is supplied.
- :param palette: Palette to use when converting from mode "RGB"
- to "P". Available palettes are :data:`Palette.WEB` or
- :data:`Palette.ADAPTIVE`.
- :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE`
- palette. Defaults to 256.
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- self.load()
-
- has_transparency = self.info.get("transparency") is not None
- if not mode and self.mode == "P":
- # determine default mode
- if self.palette:
- mode = self.palette.mode
- else:
- mode = "RGB"
- if mode == "RGB" and has_transparency:
- mode = "RGBA"
- if not mode or (mode == self.mode and not matrix):
- return self.copy()
-
- if matrix:
- # matrix conversion
- if mode not in ("L", "RGB"):
- msg = "illegal conversion"
- raise ValueError(msg)
- im = self.im.convert_matrix(mode, matrix)
- new = self._new(im)
- if has_transparency and self.im.bands == 3:
- transparency = new.info["transparency"]
-
- def convert_transparency(m, v):
- v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5
- return max(0, min(255, int(v)))
-
- if mode == "L":
- transparency = convert_transparency(matrix, transparency)
- elif len(mode) == 3:
- transparency = tuple(
- convert_transparency(matrix[i * 4 : i * 4 + 4], transparency)
- for i in range(0, len(transparency))
- )
- new.info["transparency"] = transparency
- return new
-
- if mode == "P" and self.mode == "RGBA":
- return self.quantize(colors)
-
- trns = None
- delete_trns = False
- # transparency handling
- if has_transparency:
- if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or (
- self.mode == "RGB" and mode == "RGBA"
- ):
- # Use transparent conversion to promote from transparent
- # color to an alpha channel.
- new_im = self._new(
- self.im.convert_transparent(mode, self.info["transparency"])
- )
- del new_im.info["transparency"]
- return new_im
- elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"):
- t = self.info["transparency"]
- if isinstance(t, bytes):
- # Dragons. This can't be represented by a single color
- warnings.warn(
- "Palette images with Transparency expressed in bytes should be "
- "converted to RGBA images"
- )
- delete_trns = True
- else:
- # get the new transparency color.
- # use existing conversions
- trns_im = Image()._new(core.new(self.mode, (1, 1)))
- if self.mode == "P":
- trns_im.putpalette(self.palette)
- if isinstance(t, tuple):
- err = "Couldn't allocate a palette color for transparency"
- try:
- t = trns_im.palette.getcolor(t, self)
- except ValueError as e:
- if str(e) == "cannot allocate more than 256 colors":
- # If all 256 colors are in use,
- # then there is no need for transparency
- t = None
- else:
- raise ValueError(err) from e
- if t is None:
- trns = None
- else:
- trns_im.putpixel((0, 0), t)
-
- if mode in ("L", "RGB"):
- trns_im = trns_im.convert(mode)
- else:
- # can't just retrieve the palette number, got to do it
- # after quantization.
- trns_im = trns_im.convert("RGB")
- trns = trns_im.getpixel((0, 0))
-
- elif self.mode == "P" and mode in ("LA", "PA", "RGBA"):
- t = self.info["transparency"]
- delete_trns = True
-
- if isinstance(t, bytes):
- self.im.putpalettealphas(t)
- elif isinstance(t, int):
- self.im.putpalettealpha(t, 0)
- else:
- msg = "Transparency for P mode should be bytes or int"
- raise ValueError(msg)
-
- if mode == "P" and palette == Palette.ADAPTIVE:
- im = self.im.quantize(colors)
- new = self._new(im)
- from . import ImagePalette
-
- new.palette = ImagePalette.ImagePalette("RGB", new.im.getpalette("RGB"))
- if delete_trns:
- # This could possibly happen if we requantize to fewer colors.
- # The transparency would be totally off in that case.
- del new.info["transparency"]
- if trns is not None:
- try:
- new.info["transparency"] = new.palette.getcolor(trns, new)
- except Exception:
- # if we can't make a transparent color, don't leave the old
- # transparency hanging around to mess us up.
- del new.info["transparency"]
- warnings.warn("Couldn't allocate palette entry for transparency")
- return new
-
- if "LAB" in (self.mode, mode):
- other_mode = mode if self.mode == "LAB" else self.mode
- if other_mode in ("RGB", "RGBA", "RGBX"):
- from . import ImageCms
-
- srgb = ImageCms.createProfile("sRGB")
- lab = ImageCms.createProfile("LAB")
- profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab]
- transform = ImageCms.buildTransform(
- profiles[0], profiles[1], self.mode, mode
- )
- return transform.apply(self)
-
- # colorspace conversion
- if dither is None:
- dither = Dither.FLOYDSTEINBERG
-
- try:
- im = self.im.convert(mode, dither)
- except ValueError:
- try:
- # normalize source image and try again
- modebase = getmodebase(self.mode)
- if modebase == self.mode:
- raise
- im = self.im.convert(modebase)
- im = im.convert(mode, dither)
- except KeyError as e:
- msg = "illegal conversion"
- raise ValueError(msg) from e
-
- new_im = self._new(im)
- if mode == "P" and palette != Palette.ADAPTIVE:
- from . import ImagePalette
-
- new_im.palette = ImagePalette.ImagePalette("RGB", list(range(256)) * 3)
- if delete_trns:
- # crash fail if we leave a bytes transparency in an rgb/l mode.
- del new_im.info["transparency"]
- if trns is not None:
- if new_im.mode == "P":
- try:
- new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im)
- except ValueError as e:
- del new_im.info["transparency"]
- if str(e) != "cannot allocate more than 256 colors":
- # If all 256 colors are in use,
- # then there is no need for transparency
- warnings.warn(
- "Couldn't allocate palette entry for transparency"
- )
- else:
- new_im.info["transparency"] = trns
- return new_im
-
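- # Usage sketch (user code; "photo.jpg" is a hypothetical RGB file):
- #
- #     im = Image.open("photo.jpg")
- #     grey = im.convert("L")      # ITU-R 601-2 luma transform described above
- #     bw = im.convert("1", dither=Dither.NONE)            # plain 128 threshold
- #     pal = im.convert("P", palette=Palette.ADAPTIVE, colors=64)
-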
- def quantize(
- self,
- colors=256,
- method=None,
- kmeans=0,
- palette=None,
- dither=Dither.FLOYDSTEINBERG,
- ):
- """
- Convert the image to 'P' mode with the specified number
- of colors.
-
- :param colors: The desired number of colors, <= 256
- :param method: :data:`Quantize.MEDIANCUT` (median cut),
- :data:`Quantize.MAXCOVERAGE` (maximum coverage),
- :data:`Quantize.FASTOCTREE` (fast octree),
- :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support
- using :py:func:`PIL.features.check_feature` with
- ``feature="libimagequant"``).
-
- By default, :data:`Quantize.MEDIANCUT` will be used.
-
- The exception to this is RGBA images. :data:`Quantize.MEDIANCUT`
- and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so
- :data:`Quantize.FASTOCTREE` is used by default instead.
- :param kmeans: Integer
- :param palette: Quantize to the palette of given
- :py:class:`PIL.Image.Image`.
- :param dither: Dithering method, used when converting from
- mode "RGB" to "P" or from "RGB" or "L" to "1".
- Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
- (default).
- :returns: A new image
-
- """
-
- self.load()
-
- if method is None:
- # defaults:
- method = Quantize.MEDIANCUT
- if self.mode == "RGBA":
- method = Quantize.FASTOCTREE
-
- if self.mode == "RGBA" and method not in (
- Quantize.FASTOCTREE,
- Quantize.LIBIMAGEQUANT,
- ):
- # Caller specified an invalid mode.
- msg = (
- "Fast Octree (method == 2) and libimagequant (method == 3) "
- "are the only valid methods for quantizing RGBA images"
- )
- raise ValueError(msg)
-
- if palette:
- # use palette from reference image
- palette.load()
- if palette.mode != "P":
- msg = "bad mode for palette image"
- raise ValueError(msg)
- if self.mode != "RGB" and self.mode != "L":
- msg = "only RGB or L mode images can be quantized to a palette"
- raise ValueError(msg)
- im = self.im.convert("P", dither, palette.im)
- new_im = self._new(im)
- new_im.palette = palette.palette.copy()
- return new_im
-
- im = self._new(self.im.quantize(colors, method, kmeans))
-
- from . import ImagePalette
-
- mode = im.im.getpalettemode()
- palette = im.im.getpalette(mode, mode)[: colors * len(mode)]
- im.palette = ImagePalette.ImagePalette(mode, palette)
-
- return im
-
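- # Usage sketch: the default method is Quantize.MEDIANCUT for RGB sources and
- # Quantize.FASTOCTREE for RGBA sources (MEDIANCUT/MAXCOVERAGE reject RGBA).
- #
- #     pal16 = rgb_im.quantize(colors=16)
- #     pal64 = rgba_im.quantize(colors=64)   # implicitly FASTOCTREE
-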
- def copy(self):
- """
- Copies this image. Use this method if you wish to paste things
- into an image, but still retain the original.
-
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
- self.load()
- return self._new(self.im.copy())
-
- __copy__ = copy
-
- def crop(self, box=None):
- """
- Returns a rectangular region from this image. The box is a
- 4-tuple defining the left, upper, right, and lower pixel
- coordinate. See :ref:`coordinate-system`.
-
- Note: Prior to Pillow 3.4.0, this was a lazy operation.
-
- :param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
- :rtype: :py:class:`~PIL.Image.Image`
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if box is None:
- return self.copy()
-
- if box[2] < box[0]:
- msg = "Coordinate 'right' is less than 'left'"
- raise ValueError(msg)
- elif box[3] < box[1]:
- msg = "Coordinate 'lower' is less than 'upper'"
- raise ValueError(msg)
-
- self.load()
- return self._new(self._crop(self.im, box))
-
- def _crop(self, im, box):
- """
- Returns a rectangular region from the core image object im.
-
- This is equivalent to calling im.crop((x0, y0, x1, y1)), but
- includes additional sanity checks.
-
- :param im: a core image object
- :param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
- :returns: A core image object.
- """
-
- x0, y0, x1, y1 = map(int, map(round, box))
-
- absolute_values = (abs(x1 - x0), abs(y1 - y0))
-
- _decompression_bomb_check(absolute_values)
-
- return im.crop((x0, y0, x1, y1))
-
- def draft(self, mode, size):
- """
- Configures the image file loader so it returns a version of the
- image that as closely as possible matches the given mode and
- size. For example, you can use this method to convert a color
- JPEG to greyscale while loading it.
-
- If any changes are made, returns a tuple with the chosen ``mode`` and
- ``box`` with coordinates of the original image within the altered one.
-
- Note that this method modifies the :py:class:`~PIL.Image.Image` object
- in place. If the image has already been loaded, this method has no
- effect.
-
- Note: This method is not implemented for most images. It is
- currently implemented only for JPEG and MPO images.
-
- :param mode: The requested mode.
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- """
- pass
-
- def _expand(self, xmargin, ymargin=None):
- if ymargin is None:
- ymargin = xmargin
- self.load()
- return self._new(self.im.expand(xmargin, ymargin, 0))
-
- def filter(self, filter):
- """
- Filters this image using the given filter. For a list of
- available filters, see the :py:mod:`~PIL.ImageFilter` module.
-
- :param filter: Filter kernel.
- :returns: An :py:class:`~PIL.Image.Image` object."""
-
- from . import ImageFilter
-
- self.load()
-
- if isinstance(filter, Callable):
- filter = filter()
- if not hasattr(filter, "filter"):
- msg = "filter argument should be ImageFilter.Filter instance or class"
- raise TypeError(msg)
-
- multiband = isinstance(filter, ImageFilter.MultibandFilter)
- if self.im.bands == 1 or multiband:
- return self._new(filter.filter(self.im))
-
- ims = []
- for c in range(self.im.bands):
- ims.append(self._new(filter.filter(self.im.getband(c))))
- return merge(self.mode, ims)
-
- def getbands(self):
- """
- Returns a tuple containing the name of each band in this image.
- For example, ``getbands`` on an RGB image returns ("R", "G", "B").
-
- :returns: A tuple containing band names.
- :rtype: tuple
- """
- return ImageMode.getmode(self.mode).bands
-
- def getbbox(self):
- """
- Calculates the bounding box of the non-zero regions in the
- image.
-
- :returns: The bounding box is returned as a 4-tuple defining the
- left, upper, right, and lower pixel coordinate. See
- :ref:`coordinate-system`. If the image is completely empty, this
- method returns None.
-
- """
-
- self.load()
- return self.im.getbbox()
-
- def getcolors(self, maxcolors=256):
- """
- Returns a list of colors used in this image.
-
- The colors will be in the image's mode. For example, an RGB image will
- return a tuple of (red, green, blue) color values, and a P image will
- return the index of the color in the palette.
-
- :param maxcolors: Maximum number of colors. If this number is
- exceeded, this method returns None. The default limit is
- 256 colors.
- :returns: An unsorted list of (count, pixel) values.
- """
-
- self.load()
- if self.mode in ("1", "L", "P"):
- h = self.im.histogram()
- out = []
- for i in range(256):
- if h[i]:
- out.append((h[i], i))
- if len(out) > maxcolors:
- return None
- return out
- return self.im.getcolors(maxcolors)
-
- def getdata(self, band=None):
- """
- Returns the contents of this image as a sequence object
- containing pixel values. The sequence object is flattened, so
- that values for line one follow directly after the values of
- line zero, and so on.
-
- Note that the sequence object returned by this method is an
- internal PIL data type, which only supports certain sequence
- operations. To convert it to an ordinary sequence (e.g. for
- printing), use ``list(im.getdata())``.
-
- :param band: What band to return. The default is to return
- all bands. To return a single band, pass in the index
- value (e.g. 0 to get the "R" band from an "RGB" image).
- :returns: A sequence-like object.
- """
-
- self.load()
- if band is not None:
- return self.im.getband(band)
- return self.im # could be abused
-
- def getextrema(self):
- """
- Gets the minimum and maximum pixel values for each band in
- the image.
-
- :returns: For a single-band image, a 2-tuple containing the
- minimum and maximum pixel value. For a multi-band image,
- a tuple containing one 2-tuple for each band.
- """
-
- self.load()
- if self.im.bands > 1:
- extrema = []
- for i in range(self.im.bands):
- extrema.append(self.im.getband(i).getextrema())
- return tuple(extrema)
- return self.im.getextrema()
-
- def _getxmp(self, xmp_tags):
- def get_name(tag):
- return tag.split("}")[1]
-
- def get_value(element):
- value = {get_name(k): v for k, v in element.attrib.items()}
- children = list(element)
- if children:
- for child in children:
- name = get_name(child.tag)
- child_value = get_value(child)
- if name in value:
- if not isinstance(value[name], list):
- value[name] = [value[name]]
- value[name].append(child_value)
- else:
- value[name] = child_value
- elif value:
- if element.text:
- value["text"] = element.text
- else:
- return element.text
- return value
-
- if ElementTree is None:
- warnings.warn("XMP data cannot be read without defusedxml dependency")
- return {}
- else:
- root = ElementTree.fromstring(xmp_tags)
- return {get_name(root.tag): get_value(root)}
-
- def getexif(self):
- """
- Gets EXIF data from the image.
-
- :returns: an :py:class:`~PIL.Image.Exif` object.
- """
- if self._exif is None:
- self._exif = Exif()
- self._exif._loaded = False
- elif self._exif._loaded:
- return self._exif
- self._exif._loaded = True
-
- exif_info = self.info.get("exif")
- if exif_info is None:
- if "Raw profile type exif" in self.info:
- exif_info = bytes.fromhex(
- "".join(self.info["Raw profile type exif"].split("\n")[3:])
- )
- elif hasattr(self, "tag_v2"):
- self._exif.bigtiff = self.tag_v2._bigtiff
- self._exif.endian = self.tag_v2._endian
- self._exif.load_from_fp(self.fp, self.tag_v2._offset)
- if exif_info is not None:
- self._exif.load(exif_info)
-
- # XMP tags
- if 0x0112 not in self._exif:
- xmp_tags = self.info.get("XML:com.adobe.xmp")
- if xmp_tags:
- match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
- if match:
- self._exif[0x0112] = int(match[2])
-
- return self._exif
-
- def _reload_exif(self):
- if self._exif is None or not self._exif._loaded:
- return
- self._exif._loaded = False
- self.getexif()
-
- def get_child_images(self):
- child_images = []
- exif = self.getexif()
- ifds = []
- if ExifTags.Base.SubIFDs in exif:
- subifd_offsets = exif[ExifTags.Base.SubIFDs]
- if subifd_offsets:
- if not isinstance(subifd_offsets, tuple):
- subifd_offsets = (subifd_offsets,)
- for subifd_offset in subifd_offsets:
- ifds.append((exif._get_ifd_dict(subifd_offset), subifd_offset))
- ifd1 = exif.get_ifd(ExifTags.IFD.IFD1)
- if ifd1 and ifd1.get(513):
- ifds.append((ifd1, exif._info.next))
-
- offset = None
- for ifd, ifd_offset in ifds:
- current_offset = self.fp.tell()
- if offset is None:
- offset = current_offset
-
- fp = self.fp
- thumbnail_offset = ifd.get(513)
- if thumbnail_offset is not None:
- try:
- thumbnail_offset += self._exif_offset
- except AttributeError:
- pass
- self.fp.seek(thumbnail_offset)
- data = self.fp.read(ifd.get(514))
- fp = io.BytesIO(data)
-
- with open(fp) as im:
- if thumbnail_offset is None:
- im._frame_pos = [ifd_offset]
- im._seek(0)
- im.load()
- child_images.append(im)
-
- if offset is not None:
- self.fp.seek(offset)
- return child_images
-
- def getim(self):
- """
- Returns a capsule that points to the internal image memory.
-
- :returns: A capsule object.
- """
-
- self.load()
- return self.im.ptr
-
- def getpalette(self, rawmode="RGB"):
- """
- Returns the image palette as a list.
-
- :param rawmode: The mode in which to return the palette. ``None`` will
- return the palette in its current mode.
-
- .. versionadded:: 9.1.0
-
- :returns: A list of color values [r, g, b, ...], or None if the
- image has no palette.
- """
-
- self.load()
- try:
- mode = self.im.getpalettemode()
- except ValueError:
- return None # no palette
- if rawmode is None:
- rawmode = mode
- return list(self.im.getpalette(mode, rawmode))
-
- def apply_transparency(self):
- """
- If a P mode image has a "transparency" key in the info dictionary,
- remove the key and instead apply the transparency to the palette.
- Otherwise, the image is unchanged.
- """
- if self.mode != "P" or "transparency" not in self.info:
- return
-
- from . import ImagePalette
-
- palette = self.getpalette("RGBA")
- transparency = self.info["transparency"]
- if isinstance(transparency, bytes):
- for i, alpha in enumerate(transparency):
- palette[i * 4 + 3] = alpha
- else:
- palette[transparency * 4 + 3] = 0
- self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette))
- self.palette.dirty = 1
-
- del self.info["transparency"]
-
- def getpixel(self, xy):
- """
- Returns the pixel value at a given position.
-
- :param xy: The coordinate, given as (x, y). See
- :ref:`coordinate-system`.
- :returns: The pixel value. If the image is a multi-layer image,
- this method returns a tuple.
- """
-
- self.load()
- if self.pyaccess:
- return self.pyaccess.getpixel(xy)
- return self.im.getpixel(xy)
-
- def getprojection(self):
- """
- Get projection to x and y axes
-
- :returns: Two sequences, indicating where there are non-zero
- pixels along the X-axis and the Y-axis, respectively.
- """
-
- self.load()
- x, y = self.im.getprojection()
- return list(x), list(y)
-
- def histogram(self, mask=None, extrema=None):
- """
- Returns a histogram for the image. The histogram is returned as a
- list of pixel counts, one for each pixel value in the source
- image. Counts are grouped into 256 bins for each band, even if
- the image has more than 8 bits per band. If the image has more
- than one band, the histograms for all bands are concatenated (for
- example, the histogram for an "RGB" image contains 768 values).
-
- A bilevel image (mode "1") is treated as a greyscale ("L") image
- by this method.
-
- If a mask is provided, the method returns a histogram for those
- parts of the image where the mask image is non-zero. The mask
- image must have the same size as the image, and be either a
- bi-level image (mode "1") or a greyscale image ("L").
-
- :param mask: An optional mask.
- :param extrema: An optional tuple of manually-specified extrema.
- :returns: A list containing pixel counts.
- """
- self.load()
- if mask:
- mask.load()
- return self.im.histogram((0, 0), mask.im)
- if self.mode in ("I", "F"):
- if extrema is None:
- extrema = self.getextrema()
- return self.im.histogram(extrema)
- return self.im.histogram()
-
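- # Usage sketch: an "RGB" histogram is three concatenated 256-bin histograms,
- # so each band's slice sums to the pixel count.
- #
- #     hist = rgb_im.histogram()            # len(hist) == 768
- #     assert sum(hist[:256]) == rgb_im.width * rgb_im.height
-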
- def entropy(self, mask=None, extrema=None):
- """
- Calculates and returns the entropy for the image.
-
- A bilevel image (mode "1") is treated as a greyscale ("L")
- image by this method.
-
- If a mask is provided, the method employs the histogram for
- those parts of the image where the mask image is non-zero.
- The mask image must have the same size as the image, and be
- either a bi-level image (mode "1") or a greyscale image ("L").
-
- :param mask: An optional mask.
- :param extrema: An optional tuple of manually-specified extrema.
- :returns: A float value representing the image entropy
- """
- self.load()
- if mask:
- mask.load()
- return self.im.entropy((0, 0), mask.im)
- if self.mode in ("I", "F"):
- if extrema is None:
- extrema = self.getextrema()
- return self.im.entropy(extrema)
- return self.im.entropy()
-
- def paste(self, im, box=None, mask=None):
- """
- Pastes another image into this image. The box argument is either
- a 2-tuple giving the upper left corner, a 4-tuple defining the
- left, upper, right, and lower pixel coordinate, or None (same as
- (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size
- of the pasted image must match the size of the region.
-
- If the modes don't match, the pasted image is converted to the mode of
- this image (see the :py:meth:`~PIL.Image.Image.convert` method for
- details).
-
- Instead of an image, the source can be an integer or tuple
- containing pixel values. The method then fills the region
- with the given color. When creating RGB images, you can
- also use color strings as supported by the ImageColor module.
-
- If a mask is given, this method updates only the regions
- indicated by the mask. You can use either "1", "L", "LA", "RGBA"
- or "RGBa" images (if present, the alpha band is used as mask).
- Where the mask is 255, the given image is copied as is. Where
- the mask is 0, the current value is preserved. Intermediate
- values will mix the two images together, including their alpha
- channels if they have them.
-
- See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to
- combine images with respect to their alpha channels.
-
- :param im: Source image or pixel value (integer or tuple).
- :param box: An optional 4-tuple giving the region to paste into.
- If a 2-tuple is used instead, it's treated as the upper left
- corner. If omitted or None, the source is pasted into the
- upper left corner.
-
- If an image is given as the second argument and there is no
- third, the box defaults to (0, 0), and the second argument
- is interpreted as a mask image.
- :param mask: An optional mask image.
- """
-
- if isImageType(box) and mask is None:
- # abbreviated paste(im, mask) syntax
- mask = box
- box = None
-
- if box is None:
- box = (0, 0)
-
- if len(box) == 2:
- # upper left corner given; get size from image or mask
- if isImageType(im):
- size = im.size
- elif isImageType(mask):
- size = mask.size
- else:
- # FIXME: use self.size here?
- msg = "cannot determine region size; use 4-item box"
- raise ValueError(msg)
- box += (box[0] + size[0], box[1] + size[1])
-
- if isinstance(im, str):
- from . import ImageColor
-
- im = ImageColor.getcolor(im, self.mode)
-
- elif isImageType(im):
- im.load()
- if self.mode != im.mode:
- if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"):
- # should use an adapter for this!
- im = im.convert(self.mode)
- im = im.im
-
- self._ensure_mutable()
-
- if mask:
- mask.load()
- self.im.paste(im, box, mask.im)
- else:
- self.im.paste(im, box)
-
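- # Usage sketch (user code, assuming ``from PIL import Image``):
- #
- #     base = Image.new("RGB", (100, 100), "white")
- #     tile = Image.new("RGB", (40, 40), "red")
- #     base.paste(tile, (10, 10))                    # 2-tuple upper-left corner
- #     base.paste((0, 0, 255), (0, 90, 100, 100))    # fill a region with a color
- #     mask = Image.new("L", tile.size, 128)
- #     base.paste(tile, (50, 50), mask)              # 50% blend via "L" mask
-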
- def alpha_composite(self, im, dest=(0, 0), source=(0, 0)):
- """'In-place' analog of Image.alpha_composite. Composites an image
- onto this image.
-
- :param im: image to composite over this one
- :param dest: Optional 2 tuple (left, top) specifying the upper
- left corner in this (destination) image.
- :param source: Optional 2 (left, top) tuple for the upper left
- corner in the overlay source image, or 4 tuple (left, top, right,
- bottom) for the bounds of the source rectangle
-
- Performance Note: Not currently implemented in-place in the core layer.
- """
-
- if not isinstance(source, (list, tuple)):
- msg = "Source must be a tuple"
- raise ValueError(msg)
- if not isinstance(dest, (list, tuple)):
- msg = "Destination must be a tuple"
- raise ValueError(msg)
- if len(source) not in (2, 4):
- msg = "Source must be a 2 or 4-tuple"
- raise ValueError(msg)
- if len(dest) != 2:
- msg = "Destination must be a 2-tuple"
- raise ValueError(msg)
- if min(source) < 0:
- msg = "Source must be non-negative"
- raise ValueError(msg)
-
- if len(source) == 2:
- source = source + im.size
-
- # over image, crop if it's not the whole thing.
- if source == (0, 0) + im.size:
- overlay = im
- else:
- overlay = im.crop(source)
-
- # target for the paste
- box = dest + (dest[0] + overlay.width, dest[1] + overlay.height)
-
- # destination image. don't copy if we're using the whole image.
- if box == (0, 0) + self.size:
- background = self
- else:
- background = self.crop(box)
-
- result = alpha_composite(background, overlay)
- self.paste(result, box)
-
- def point(self, lut, mode=None):
- """
- Maps this image through a lookup table or function.
-
- :param lut: A lookup table, containing 256 (or 65536 if
- self.mode=="I" and mode == "L") values per band in the
- image. A function can be used instead, it should take a
- single argument. The function is called once for each
- possible pixel value, and the resulting table is applied to
- all bands of the image.
-
- It may also be an :py:class:`~PIL.Image.ImagePointHandler`
- object::
-
- class Example(Image.ImagePointHandler):
- def point(self, data):
- # Return result
-
- :param mode: Output mode (default is same as input). In the
- current version, this can only be used if the source image
- has mode "L" or "P", and the output has mode "1" or the
- source image mode is "I" and the output mode is "L".
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- self.load()
-
- if isinstance(lut, ImagePointHandler):
- return lut.point(self)
-
- if callable(lut):
- # if it isn't a list, it should be a function
- if self.mode in ("I", "I;16", "F"):
- # check if the function can be used with point_transform
- # UNDONE wiredfool -- I think this prevents us from ever doing
- # a gamma function point transform on > 8bit images.
- scale, offset = _getscaleoffset(lut)
- return self._new(self.im.point_transform(scale, offset))
- # for other modes, convert the function to a table
- lut = [lut(i) for i in range(256)] * self.im.bands
-
- if self.mode == "F":
- # FIXME: _imaging returns a confusing error message for this case
- msg = "point operation not supported for this mode"
- raise ValueError(msg)
-
- if mode != "F":
- lut = [round(i) for i in lut]
- return self._new(self.im.point(lut, mode))
-
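- # Usage sketch: a per-value function is expanded into a 256-entry table for
- # each band, so these two calls are equivalent for an "L" or "RGB" image.
- #
- #     inverted = im.point(lambda v: 255 - v)
- #     inverted = im.point([255 - i for i in range(256)] * len(im.getbands()))
-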
- def putalpha(self, alpha):
- """
- Adds or replaces the alpha layer in this image. If the image
- does not have an alpha layer, it's converted to "LA" or "RGBA".
- The new layer must be either "L" or "1".
-
- :param alpha: The new alpha layer. This can either be an "L" or "1"
- image having the same size as this image, or an integer or
- other color value.
- """
-
- self._ensure_mutable()
-
- if self.mode not in ("LA", "PA", "RGBA"):
- # attempt to promote self to a matching alpha mode
- try:
- mode = getmodebase(self.mode) + "A"
- try:
- self.im.setmode(mode)
- except (AttributeError, ValueError) as e:
- # do things the hard way
- im = self.im.convert(mode)
- if im.mode not in ("LA", "PA", "RGBA"):
- raise ValueError from e # sanity check
- self.im = im
- self.pyaccess = None
- self.mode = self.im.mode
- except KeyError as e:
- msg = "illegal image mode"
- raise ValueError(msg) from e
-
- if self.mode in ("LA", "PA"):
- band = 1
- else:
- band = 3
-
- if isImageType(alpha):
- # alpha layer
- if alpha.mode not in ("1", "L"):
- msg = "illegal image mode"
- raise ValueError(msg)
- alpha.load()
- if alpha.mode == "1":
- alpha = alpha.convert("L")
- else:
- # constant alpha
- try:
- self.im.fillband(band, alpha)
- except (AttributeError, ValueError):
- # do things the hard way
- alpha = new("L", self.size, alpha)
- else:
- return
-
- self.im.putband(alpha.im, band)
-
- def putdata(self, data, scale=1.0, offset=0.0):
- """
- Copies pixel data from a flattened sequence object into the image. The
- values should start at the upper left corner (0, 0), continue to the
- end of the line, followed directly by the first value of the second
- line, and so on. Data will be read until either the image or the
- sequence ends. The scale and offset values are used to adjust the
- sequence values: **pixel = value*scale + offset**.
-
- :param data: A flattened sequence object.
- :param scale: An optional scale value. The default is 1.0.
- :param offset: An optional offset value. The default is 0.0.
- """
-
- self._ensure_mutable()
-
- self.im.putdata(data, scale, offset)
-
- def putpalette(self, data, rawmode="RGB"):
- """
- Attaches a palette to this image. The image must be a "P", "PA", "L"
- or "LA" image.
-
- The palette sequence must contain at most 256 colors, made up of one
- integer value for each channel in the raw mode.
- For example, if the raw mode is "RGB", then it can contain at most 768
- values, made up of red, green and blue values for the corresponding pixel
- index in the 256 colors.
- If the raw mode is "RGBA", then it can contain at most 1024 values,
- containing red, green, blue and alpha values.
-
- Alternatively, an 8-bit string may be used instead of an integer sequence.
-
- :param data: A palette sequence (either a list or a string).
- :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
- that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
- """
- from . import ImagePalette
-
- if self.mode not in ("L", "LA", "P", "PA"):
- msg = "illegal image mode"
- raise ValueError(msg)
- if isinstance(data, ImagePalette.ImagePalette):
- palette = ImagePalette.raw(data.rawmode, data.palette)
- else:
- if not isinstance(data, bytes):
- data = bytes(data)
- palette = ImagePalette.raw(rawmode, data)
- self.mode = "PA" if "A" in self.mode else "P"
- self.palette = palette
- self.palette.mode = "RGB"
- self.load() # install new palette
-
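- # Usage sketch: a greyscale ramp palette where entry i maps to (i, i, i);
- # 256 RGB triplets == 768 integers, the documented maximum for rawmode "RGB".
- #
- #     p = Image.new("P", (16, 16))
- #     p.putpalette([c for v in range(256) for c in (v, v, v)])
-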
- def putpixel(self, xy, value):
- """
- Modifies the pixel at the given position. The color is given as
- a single numerical value for single-band images, and a tuple for
- multi-band images. In addition to this, RGB and RGBA tuples are
- accepted for P and PA images.
-
- Note that this method is relatively slow. For more extensive changes,
- use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw`
- module instead.
-
- See:
-
- * :py:meth:`~PIL.Image.Image.paste`
- * :py:meth:`~PIL.Image.Image.putdata`
- * :py:mod:`~PIL.ImageDraw`
-
- :param xy: The pixel coordinate, given as (x, y). See
- :ref:`coordinate-system`.
- :param value: The pixel value.
- """
-
- if self.readonly:
- self._copy()
- self.load()
-
- if self.pyaccess:
- return self.pyaccess.putpixel(xy, value)
-
- if (
- self.mode in ("P", "PA")
- and isinstance(value, (list, tuple))
- and len(value) in [3, 4]
- ):
- # RGB or RGBA value for a P or PA image
- if self.mode == "PA":
- alpha = value[3] if len(value) == 4 else 255
- value = value[:3]
- value = self.palette.getcolor(value, self)
- if self.mode == "PA":
- value = (value, alpha)
- return self.im.putpixel(xy, value)
-
- def remap_palette(self, dest_map, source_palette=None):
- """
- Rewrites the image to reorder the palette.
-
- :param dest_map: A list of indexes into the original palette.
- e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))``
- is the identity transform.
- :param source_palette: Bytes or None.
- :returns: An :py:class:`~PIL.Image.Image` object.
-
- """
- from . import ImagePalette
-
- if self.mode not in ("L", "P"):
- msg = "illegal image mode"
- raise ValueError(msg)
-
- bands = 3
- palette_mode = "RGB"
- if source_palette is None:
- if self.mode == "P":
- self.load()
- palette_mode = self.im.getpalettemode()
- if palette_mode == "RGBA":
- bands = 4
- source_palette = self.im.getpalette(palette_mode, palette_mode)
- else: # L-mode
- source_palette = bytearray(i // 3 for i in range(768))
-
- palette_bytes = b""
- new_positions = [0] * 256
-
- # pick only the used colors from the palette
- for i, oldPosition in enumerate(dest_map):
- palette_bytes += source_palette[
- oldPosition * bands : oldPosition * bands + bands
- ]
- new_positions[oldPosition] = i
-
- # replace the palette color id of all pixels with the new id
-
- # Palette images are [0..255], mapped through a 1 or 3
- # byte/color map. We need to remap the whole image
- # from palette 1 to palette 2. New_positions is
- # an array of indexes into palette 1. Palette 2 is
- # palette 1 with any holes removed.
-
- # We're going to leverage the convert mechanism to use the
- # C code to remap the image from palette 1 to palette 2,
- # by forcing the source image into 'L' mode and adding a
- # mapping 'L' mode palette, then converting back to 'L'
- # sans palette thus converting the image bytes, then
- # assigning the optimized RGB palette.
-
- # perf reference, 9500x4000 gif, w/~135 colors
- # 14 sec prepatch, 1 sec postpatch with optimization forced.
-
- mapping_palette = bytearray(new_positions)
-
- m_im = self.copy()
- m_im.mode = "P"
-
- m_im.palette = ImagePalette.ImagePalette(
- palette_mode, palette=mapping_palette * bands
- )
- # possibly set palette dirty, then
- # m_im.putpalette(mapping_palette, 'L') # converts to 'P'
- # or just force it.
- # UNDONE -- this is part of the general issue with palettes
- m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes())
-
- m_im = m_im.convert("L")
-
- m_im.putpalette(palette_bytes, palette_mode)
- m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes)
-
- if "transparency" in self.info:
- try:
- m_im.info["transparency"] = dest_map.index(self.info["transparency"])
- except ValueError:
- if "transparency" in m_im.info:
- del m_im.info["transparency"]
-
- return m_im
-
- def _get_safe_box(self, size, resample, box):
- """Expands the box so it includes adjacent pixels
- that may be used by resampling with the given resampling filter.
- """
- filter_support = _filters_support[resample] - 0.5
- scale_x = (box[2] - box[0]) / size[0]
- scale_y = (box[3] - box[1]) / size[1]
- support_x = filter_support * scale_x
- support_y = filter_support * scale_y
-
- return (
- max(0, int(box[0] - support_x)),
- max(0, int(box[1] - support_y)),
- min(self.size[0], math.ceil(box[2] + support_x)),
- min(self.size[1], math.ceil(box[3] + support_y)),
- )
-
- def resize(self, size, resample=None, box=None, reducing_gap=None):
- """
- Returns a resized copy of this image.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param resample: An optional resampling filter. This can be
- one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
- :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
- :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
- If the image has mode "1" or "P", it is always set to
- :py:data:`Resampling.NEAREST`. If the image mode specifies a number
- of bits, such as "I;16", then the default filter is
- :py:data:`Resampling.NEAREST`. Otherwise, the default filter is
- :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`.
- :param box: An optional 4-tuple of floats providing
- the source image region to be scaled.
- The values must be within (0, 0, width, height) rectangle.
- If omitted or None, the entire source is used.
- :param reducing_gap: Apply optimization by resizing the image
- in two steps. First, reducing the image by integer times
- using :py:meth:`~PIL.Image.Image.reduce`.
- Second, resizing using regular resampling. The last step
- changes the size by no less than a factor of ``reducing_gap``.
- ``reducing_gap`` may be None (no first step is performed)
- or should be greater than 1.0. The bigger ``reducing_gap``,
- the closer the result is to fair resampling.
- The smaller ``reducing_gap``, the faster the resizing.
- With ``reducing_gap`` greater or equal to 3.0, the result is
- indistinguishable from fair resampling in most cases.
- The default value is None (no optimization).
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if resample is None:
- type_special = ";" in self.mode
- resample = Resampling.NEAREST if type_special else Resampling.BICUBIC
- elif resample not in (
- Resampling.NEAREST,
- Resampling.BILINEAR,
- Resampling.BICUBIC,
- Resampling.LANCZOS,
- Resampling.BOX,
- Resampling.HAMMING,
- ):
- msg = f"Unknown resampling filter ({resample})."
-
- filters = [
- f"{filter[1]} ({filter[0]})"
- for filter in (
- (Resampling.NEAREST, "Image.Resampling.NEAREST"),
- (Resampling.LANCZOS, "Image.Resampling.LANCZOS"),
- (Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
- (Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
- (Resampling.BOX, "Image.Resampling.BOX"),
- (Resampling.HAMMING, "Image.Resampling.HAMMING"),
- )
- ]
- msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
- raise ValueError(msg)
-
- if reducing_gap is not None and reducing_gap < 1.0:
- msg = "reducing_gap must be 1.0 or greater"
- raise ValueError(msg)
-
- size = tuple(size)
-
- self.load()
- if box is None:
- box = (0, 0) + self.size
- else:
- box = tuple(box)
-
- if self.size == size and box == (0, 0) + self.size:
- return self.copy()
-
- if self.mode in ("1", "P"):
- resample = Resampling.NEAREST
-
- if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST:
- im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- im = im.resize(size, resample, box)
- return im.convert(self.mode)
-
- self.load()
-
- if reducing_gap is not None and resample != Resampling.NEAREST:
- factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1
- factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1
- if factor_x > 1 or factor_y > 1:
- reduce_box = self._get_safe_box(size, resample, box)
- factor = (factor_x, factor_y)
- if callable(self.reduce):
- self = self.reduce(factor, box=reduce_box)
- else:
- self = Image.reduce(self, factor, box=reduce_box)
- box = (
- (box[0] - reduce_box[0]) / factor_x,
- (box[1] - reduce_box[1]) / factor_y,
- (box[2] - reduce_box[0]) / factor_x,
- (box[3] - reduce_box[1]) / factor_y,
- )
-
- return self._new(self.im.resize(size, resample, box))
-
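- # Usage sketch: the default filter is Resampling.BICUBIC for ordinary modes.
- #
- #     small = im.resize((128, 128))
- #     part = im.resize((64, 64), box=(10, 10, 74, 74))   # assumes im is >= 74x74
- #     quick = im.resize((128, 128), reducing_gap=3.0)    # two-step reduce + resample
-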
- def reduce(self, factor, box=None):
- """
- Returns a copy of the image reduced ``factor`` times.
- If the size of the image is not divisible by ``factor``,
- the resulting size will be rounded up.
-
- :param factor: A greater than 0 integer or tuple of two integers
- for width and height separately.
- :param box: An optional 4-tuple of ints providing
- the source image region to be reduced.
- The values must be within ``(0, 0, width, height)`` rectangle.
- If omitted or ``None``, the entire source is used.
- """
- if not isinstance(factor, (list, tuple)):
- factor = (factor, factor)
-
- if box is None:
- box = (0, 0) + self.size
- else:
- box = tuple(box)
-
- if factor == (1, 1) and box == (0, 0) + self.size:
- return self.copy()
-
- if self.mode in ["LA", "RGBA"]:
- im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- im = im.reduce(factor, box)
- return im.convert(self.mode)
-
- self.load()
-
- return self._new(self.im.reduce(factor, box))
-
- def rotate(
- self,
- angle,
- resample=Resampling.NEAREST,
- expand=0,
- center=None,
- translate=None,
- fillcolor=None,
- ):
- """
- Returns a rotated copy of this image. This method returns a
- copy of this image, rotated the given number of degrees counter
- clockwise around its centre.
-
- :param angle: In degrees counter clockwise.
- :param resample: An optional resampling filter. This can be
- one of :py:data:`Resampling.NEAREST` (use nearest neighbour),
- :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
- environment), or :py:data:`Resampling.BICUBIC` (cubic spline
- interpolation in a 4x4 environment). If omitted, or if the image has
- mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
- See :ref:`concept-filters`.
- :param expand: Optional expansion flag. If true, expands the output
- image to make it large enough to hold the entire rotated image.
- If false or omitted, make the output image the same size as the
- input image. Note that the expand flag assumes rotation around
- the center and no translation.
- :param center: Optional center of rotation (a 2-tuple). Origin is
- the upper left corner. Default is the center of the image.
- :param translate: An optional post-rotate translation (a 2-tuple).
- :param fillcolor: An optional color for area outside the rotated image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- angle = angle % 360.0
-
- # Fast paths regardless of filter, as long as we're not
- # translating or changing the center.
- if not (center or translate):
- if angle == 0:
- return self.copy()
- if angle == 180:
- return self.transpose(Transpose.ROTATE_180)
- if angle in (90, 270) and (expand or self.width == self.height):
- return self.transpose(
- Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270
- )
-
- # Calculate the affine matrix. Note that this is the reverse
- # transformation (from destination image to source) because we
- # want to interpolate the (discrete) destination pixel from
- # the local area around the (floating) source pixel.
-
- # The matrix we actually want (note that it operates from the right):
- # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx)
- # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy)
- # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1)
-
- # The reverse matrix is thus:
- # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx)
- # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty)
- # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1)
-
- # In any case, the final translation may be updated at the end to
- # compensate for the expand flag.
-
- w, h = self.size
-
- if translate is None:
- post_trans = (0, 0)
- else:
- post_trans = translate
- if center is None:
- # FIXME These should be rounded to ints?
- rotn_center = (w / 2.0, h / 2.0)
- else:
- rotn_center = center
-
- angle = -math.radians(angle)
- matrix = [
- round(math.cos(angle), 15),
- round(math.sin(angle), 15),
- 0.0,
- round(-math.sin(angle), 15),
- round(math.cos(angle), 15),
- 0.0,
- ]
-
- def transform(x, y, matrix):
- (a, b, c, d, e, f) = matrix
- return a * x + b * y + c, d * x + e * y + f
-
- matrix[2], matrix[5] = transform(
- -rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix
- )
- matrix[2] += rotn_center[0]
- matrix[5] += rotn_center[1]
-
- if expand:
- # calculate output size
- xx = []
- yy = []
- for x, y in ((0, 0), (w, 0), (w, h), (0, h)):
- x, y = transform(x, y, matrix)
- xx.append(x)
- yy.append(y)
- nw = math.ceil(max(xx)) - math.floor(min(xx))
- nh = math.ceil(max(yy)) - math.floor(min(yy))
-
- # We multiply a translation matrix from the right. Because of its
- # special form, this is the same as taking the image of the
- # translation vector as new translation vector.
- matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix)
- w, h = nw, nh
-
- return self.transform(
- (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor
- )
-
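-    # Illustrative usage (not part of the original module; "hopper.png" is a
-    # placeholder filename):
-    #
-    #     from PIL import Image
-    #     im = Image.open("hopper.png").convert("RGB")
-    #     a = im.rotate(90, expand=True)     # takes the fast transpose path above
-    #     b = im.rotate(45, resample=Image.Resampling.BICUBIC,
-    #                   expand=True, fillcolor="white")
-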
- def save(self, fp, format=None, **params):
- """
- Saves this image under the given filename. If no format is
- specified, the format to use is determined from the filename
- extension, if possible.
-
- Keyword options can be used to provide additional instructions
- to the writer. If a writer doesn't recognise an option, it is
- silently ignored. The available options are described in the
- :doc:`image format documentation
- <../handbook/image-file-formats>` for each writer.
-
- You can use a file object instead of a filename. In this case,
- you must always specify the format. The file object must
- implement the ``seek``, ``tell``, and ``write``
- methods, and be opened in binary mode.
-
- :param fp: A filename (string), pathlib.Path object or file object.
- :param format: Optional format override. If omitted, the
- format to use is determined from the filename extension.
- If a file object was used instead of a filename, this
- parameter should always be used.
- :param params: Extra parameters to the image writer.
- :returns: None
- :exception ValueError: If the output format could not be determined
- from the file name. Use the format option to solve this.
- :exception OSError: If the file could not be written. The file
- may have been created, and may contain partial data.
- """
-
- filename = ""
- open_fp = False
- if isinstance(fp, Path):
- filename = str(fp)
- open_fp = True
- elif is_path(fp):
- filename = fp
- open_fp = True
- elif fp == sys.stdout:
- try:
- fp = sys.stdout.buffer
- except AttributeError:
- pass
- if not filename and hasattr(fp, "name") and is_path(fp.name):
- # only set the name for metadata purposes
- filename = fp.name
-
- # may mutate self!
- self._ensure_mutable()
-
- save_all = params.pop("save_all", False)
- self.encoderinfo = params
- self.encoderconfig = ()
-
- preinit()
-
- ext = os.path.splitext(filename)[1].lower()
-
- if not format:
- if ext not in EXTENSION:
- init()
- try:
- format = EXTENSION[ext]
- except KeyError as e:
- msg = f"unknown file extension: {ext}"
- raise ValueError(msg) from e
-
- if format.upper() not in SAVE:
- init()
- if save_all:
- save_handler = SAVE_ALL[format.upper()]
- else:
- save_handler = SAVE[format.upper()]
-
- created = False
- if open_fp:
- created = not os.path.exists(filename)
- if params.get("append", False):
- # Open also for reading ("+"), because TIFF save_all
- # writer needs to go back and edit the written data.
- fp = builtins.open(filename, "r+b")
- else:
- fp = builtins.open(filename, "w+b")
-
- try:
- save_handler(self, fp, filename)
- except Exception:
- if open_fp:
- fp.close()
- if created:
- try:
- os.remove(filename)
- except PermissionError:
- pass
- raise
- if open_fp:
- fp.close()
-
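-    # Illustrative usage (not part of the original module; file names are
-    # placeholders):
-    #
-    #     import io
-    #     from PIL import Image
-    #     im = Image.open("hopper.png").convert("RGB")
-    #     im.save("out.jpg", quality=90)     # format inferred from ".jpg"
-    #     buf = io.BytesIO()
-    #     im.save(buf, format="PNG")         # file object: format is required
-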
- def seek(self, frame):
- """
- Seeks to the given frame in this sequence file. If you seek
- beyond the end of the sequence, the method raises an
- ``EOFError`` exception. When a sequence file is opened, the
- library automatically seeks to frame 0.
-
- See :py:meth:`~PIL.Image.Image.tell`.
-
- If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
- number of available frames.
-
- :param frame: Frame number, starting at 0.
- :exception EOFError: If the call attempts to seek beyond the end
- of the sequence.
- """
-
- # overridden by file handlers
- if frame != 0:
- raise EOFError
-
- def show(self, title=None):
- """
- Displays this image. This method is mainly intended for debugging purposes.
-
- This method calls :py:func:`PIL.ImageShow.show` internally. You can use
- :py:func:`PIL.ImageShow.register` to override its default behaviour.
-
- The image is first saved to a temporary file. By default, it will be in
- PNG format.
-
- On Unix, the image is then opened using the **display**, **eog** or
- **xv** utility, depending on which one can be found.
-
- On macOS, the image is opened with the native Preview application.
-
- On Windows, the image is opened with the standard PNG display utility.
-
- :param title: Optional title to use for the image window, where possible.
- """
-
- _show(self, title=title)
-
- def split(self):
- """
- Split this image into individual bands. This method returns a
- tuple of individual image bands from an image. For example,
- splitting an "RGB" image creates three new images each
- containing a copy of one of the original bands (red, green,
- blue).
-
- If you need only one band, :py:meth:`~PIL.Image.Image.getchannel`
- method can be more convenient and faster.
-
- :returns: A tuple containing bands.
- """
-
- self.load()
- if self.im.bands == 1:
- ims = [self.copy()]
- else:
- ims = map(self._new, self.im.split())
- return tuple(ims)
-
- def getchannel(self, channel):
- """
- Returns an image containing a single channel of the source image.
-
- :param channel: What channel to return. Could be index
- (0 for "R" channel of "RGB") or channel name
- ("A" for alpha channel of "RGBA").
- :returns: An image in "L" mode.
-
- .. versionadded:: 4.3.0
- """
- self.load()
-
- if isinstance(channel, str):
- try:
- channel = self.getbands().index(channel)
- except ValueError as e:
- msg = f'The image has no channel "{channel}"'
- raise ValueError(msg) from e
-
- return self._new(self.im.getband(channel))
-
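-    # Illustrative usage (not part of the original module; "hopper.png" is a
-    # placeholder filename):
-    #
-    #     from PIL import Image
-    #     im = Image.open("hopper.png").convert("RGB")
-    #     r, g, b = im.split()            # three "L" mode images
-    #     r_only = im.getchannel("R")     # single band, cheaper than split()
-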
- def tell(self):
- """
- Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`.
-
- If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
- number of available frames.
-
- :returns: Frame number, starting with 0.
- """
- return 0
-
- def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0):
- """
- Make this image into a thumbnail. This method modifies the
- image to contain a thumbnail version of itself, no larger than
- the given size. This method calculates an appropriate thumbnail
- size to preserve the aspect of the image, calls the
- :py:meth:`~PIL.Image.Image.draft` method to configure the file reader
- (where applicable), and finally resizes the image.
-
- Note that this function modifies the :py:class:`~PIL.Image.Image`
- object in place. If you need to use the full resolution image as well,
- apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original
- image.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param resample: Optional resampling filter. This can be one
- of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
- :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
- :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
- If omitted, it defaults to :py:data:`Resampling.BICUBIC`.
- (was :py:data:`Resampling.NEAREST` prior to version 2.5.0).
- See: :ref:`concept-filters`.
- :param reducing_gap: Apply optimization by resizing the image
- in two steps. First, reducing the image by integer times
- using :py:meth:`~PIL.Image.Image.reduce` or
- :py:meth:`~PIL.Image.Image.draft` for JPEG images.
- Second, resizing using regular resampling. The last step
- changes size no less than by ``reducing_gap`` times.
- ``reducing_gap`` may be None (no first step is performed)
- or should be greater than 1.0. The bigger ``reducing_gap``,
- the closer the result to the fair resampling.
- The smaller ``reducing_gap``, the faster resizing.
- With ``reducing_gap`` greater or equal to 3.0, the result is
- indistinguishable from fair resampling in most cases.
- The default value is 2.0 (very close to fair resampling
- while still being faster in many cases).
- :returns: None
- """
-
- provided_size = tuple(map(math.floor, size))
-
- def preserve_aspect_ratio():
- def round_aspect(number, key):
- return max(min(math.floor(number), math.ceil(number), key=key), 1)
-
- x, y = provided_size
- if x >= self.width and y >= self.height:
- return
-
- aspect = self.width / self.height
- if x / y >= aspect:
- x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y))
- else:
- y = round_aspect(
- x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n)
- )
- return x, y
-
- box = None
- if reducing_gap is not None:
- size = preserve_aspect_ratio()
- if size is None:
- return
-
- res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap))
- if res is not None:
- box = res[1]
- if box is None:
- self.load()
-
- # load() may have changed the size of the image
- size = preserve_aspect_ratio()
- if size is None:
- return
-
- if self.size != size:
- im = self.resize(size, resample, box=box, reducing_gap=reducing_gap)
-
- self.im = im.im
- self._size = size
- self.mode = self.im.mode
-
- self.readonly = 0
- self.pyaccess = None
-
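-    # Illustrative usage (not part of the original module; "hopper.png" is a
-    # placeholder filename). thumbnail() works in place, so copy first if the
-    # full-resolution image is still needed:
-    #
-    #     from PIL import Image
-    #     im = Image.open("hopper.png")
-    #     thumb = im.copy()
-    #     thumb.thumbnail((128, 128))     # aspect ratio is preserved
-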
- # FIXME: the different transform methods need further explanation
- # instead of bloating the method docs, add a separate chapter.
- def transform(
- self,
- size,
- method,
- data=None,
- resample=Resampling.NEAREST,
- fill=1,
- fillcolor=None,
- ):
- """
- Transforms this image. This method creates a new image with the
- given size, and the same mode as the original, and copies data
- to the new image using the given transform.
-
- :param size: The output size in pixels, as a 2-tuple:
- (width, height).
- :param method: The transformation method. This is one of
- :py:data:`Transform.EXTENT` (cut out a rectangular subregion),
- :py:data:`Transform.AFFINE` (affine transform),
- :py:data:`Transform.PERSPECTIVE` (perspective transform),
- :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or
- :py:data:`Transform.MESH` (map a number of source quadrilaterals
- in one operation).
-
- It may also be an :py:class:`~PIL.Image.ImageTransformHandler`
- object::
-
- class Example(Image.ImageTransformHandler):
- def transform(self, size, data, resample, fill=1):
- # Return result
-
- It may also be an object with a ``method.getdata`` method
- that returns a tuple supplying new ``method`` and ``data`` values::
-
- class Example:
- def getdata(self):
- method = Image.Transform.EXTENT
- data = (0, 0, 100, 100)
- return method, data
- :param data: Extra data to the transformation method.
- :param resample: Optional resampling filter. It can be one of
- :py:data:`Resampling.NEAREST` (use nearest neighbour),
- :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
- environment), or :py:data:`Resampling.BICUBIC` (cubic spline
- interpolation in a 4x4 environment). If omitted, or if the image
- has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
- See: :ref:`concept-filters`.
- :param fill: If ``method`` is an
- :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of
- the arguments passed to it. Otherwise, it is unused.
- :param fillcolor: Optional fill color for the area outside the
- transform in the output image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST:
- return (
- self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
- .transform(size, method, data, resample, fill, fillcolor)
- .convert(self.mode)
- )
-
- if isinstance(method, ImageTransformHandler):
- return method.transform(size, self, resample=resample, fill=fill)
-
- if hasattr(method, "getdata"):
- # compatibility w. old-style transform objects
- method, data = method.getdata()
-
- if data is None:
- msg = "missing method data"
- raise ValueError(msg)
-
- im = new(self.mode, size, fillcolor)
- if self.mode == "P" and self.palette:
- im.palette = self.palette.copy()
- im.info = self.info.copy()
- if method == Transform.MESH:
- # list of quads
- for box, quad in data:
- im.__transformer(
- box, self, Transform.QUAD, quad, resample, fillcolor is None
- )
- else:
- im.__transformer(
- (0, 0) + size, self, method, data, resample, fillcolor is None
- )
-
- return im
-
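-    # Illustrative usage (not part of the original module; "hopper.png" is a
-    # placeholder filename):
-    #
-    #     from PIL import Image
-    #     im = Image.open("hopper.png")
-    #     # AFFINE data is the inverse mapping (a, b, c, d, e, f): output
-    #     # pixel (x, y) is sampled from input (ax + by + c, dx + ey + f),
-    #     # so this shifts the content by (-10, -20).
-    #     shifted = im.transform(im.size, Image.Transform.AFFINE,
-    #                            (1, 0, 10, 0, 1, 20))
-    #     # EXTENT cuts out the given input rectangle and scales it to size.
-    #     zoom = im.transform((64, 64), Image.Transform.EXTENT, (0, 0, 32, 32))
-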
- def __transformer(
- self, box, image, method, data, resample=Resampling.NEAREST, fill=1
- ):
- w = box[2] - box[0]
- h = box[3] - box[1]
-
- if method == Transform.AFFINE:
- data = data[:6]
-
- elif method == Transform.EXTENT:
- # convert extent to an affine transform
- x0, y0, x1, y1 = data
- xs = (x1 - x0) / w
- ys = (y1 - y0) / h
- method = Transform.AFFINE
- data = (xs, 0, x0, 0, ys, y0)
-
- elif method == Transform.PERSPECTIVE:
- data = data[:8]
-
- elif method == Transform.QUAD:
- # quadrilateral warp. data specifies the four corners
- # given as NW, SW, SE, and NE.
- nw = data[:2]
- sw = data[2:4]
- se = data[4:6]
- ne = data[6:8]
- x0, y0 = nw
- As = 1.0 / w
- At = 1.0 / h
- data = (
- x0,
- (ne[0] - x0) * As,
- (sw[0] - x0) * At,
- (se[0] - sw[0] - ne[0] + x0) * As * At,
- y0,
- (ne[1] - y0) * As,
- (sw[1] - y0) * At,
- (se[1] - sw[1] - ne[1] + y0) * As * At,
- )
-
- else:
- msg = "unknown transformation method"
- raise ValueError(msg)
-
- if resample not in (
- Resampling.NEAREST,
- Resampling.BILINEAR,
- Resampling.BICUBIC,
- ):
- if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS):
- msg = {
- Resampling.BOX: "Image.Resampling.BOX",
- Resampling.HAMMING: "Image.Resampling.HAMMING",
- Resampling.LANCZOS: "Image.Resampling.LANCZOS",
- }[resample] + f" ({resample}) cannot be used."
- else:
- msg = f"Unknown resampling filter ({resample})."
-
- filters = [
- f"{filter[1]} ({filter[0]})"
- for filter in (
- (Resampling.NEAREST, "Image.Resampling.NEAREST"),
- (Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
- (Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
- )
- ]
- msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
- raise ValueError(msg)
-
- image.load()
-
- self.load()
-
- if image.mode in ("1", "P"):
- resample = Resampling.NEAREST
-
- self.im.transform2(box, image.im, method, data, resample, fill)
-
- def transpose(self, method):
- """
- Transpose image (flip or rotate in 90 degree steps)
-
- :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`,
- :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`,
- :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`,
- :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`.
- :returns: Returns a flipped or rotated copy of this image.
- """
-
- self.load()
- return self._new(self.im.transpose(method))
-
- def effect_spread(self, distance):
- """
- Randomly spread pixels in an image.
-
- :param distance: Distance to spread pixels.
- """
- self.load()
- return self._new(self.im.effect_spread(distance))
-
- def toqimage(self):
- """Returns a QImage copy of this image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- msg = "Qt bindings are not installed"
- raise ImportError(msg)
- return ImageQt.toqimage(self)
-
- def toqpixmap(self):
- """Returns a QPixmap copy of this image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- msg = "Qt bindings are not installed"
- raise ImportError(msg)
- return ImageQt.toqpixmap(self)
-
-
-# --------------------------------------------------------------------
-# Abstract handlers.
-
-
-class ImagePointHandler:
- """
- Used as a mixin by point transforms
- (for use with :py:meth:`~PIL.Image.Image.point`)
- """
-
- pass
-
-
-class ImageTransformHandler:
- """
- Used as a mixin by geometry transforms
- (for use with :py:meth:`~PIL.Image.Image.transform`)
- """
-
- pass
-
-
-# --------------------------------------------------------------------
-# Factories
-
-#
-# Debugging
-
-
-def _wedge():
- """Create greyscale wedge (for debugging only)"""
-
- return Image()._new(core.wedge("L"))
-
-
-def _check_size(size):
- """
- Common check to enforce type and sanity check on size tuples
-
- :param size: Should be a 2 tuple of (width, height)
- :returns: True, or raises a ValueError
- """
-
- if not isinstance(size, (list, tuple)):
- msg = "Size must be a tuple"
- raise ValueError(msg)
- if len(size) != 2:
- msg = "Size must be a tuple of length 2"
- raise ValueError(msg)
- if size[0] < 0 or size[1] < 0:
- msg = "Width and height must be >= 0"
- raise ValueError(msg)
-
- return True
-
-
-def new(mode, size, color=0):
- """
- Creates a new image with the given mode and size.
-
- :param mode: The mode to use for the new image. See:
- :ref:`concept-modes`.
- :param size: A 2-tuple, containing (width, height) in pixels.
- :param color: What color to use for the image. Default is black.
- If given, this should be a single integer or floating point value
- for single-band modes, and a tuple for multi-band modes (one value
- per band). When creating RGB images, you can also use color
- strings as supported by the ImageColor module. If the color is
- None, the image is not initialised.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- _check_size(size)
-
- if color is None:
- # don't initialize
- return Image()._new(core.new(mode, size))
-
- if isinstance(color, str):
- # css3-style specifier
-
- from . import ImageColor
-
- color = ImageColor.getcolor(color, mode)
-
- im = Image()
- if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]:
- # RGB or RGBA value for a P image
- from . import ImagePalette
-
- im.palette = ImagePalette.ImagePalette()
- color = im.palette.getcolor(color)
- return im._new(core.fill(mode, size, color))
-
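-# Illustrative usage (not part of the original module):
-#
-#     from PIL import Image
-#     black = Image.new("RGB", (64, 64))                  # default color 0
-#     red = Image.new("RGB", (64, 64), "red")             # CSS3-style string
-#     clear = Image.new("RGBA", (64, 64), (0, 0, 0, 0))   # per-band tuple
-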
-
-def frombytes(mode, size, data, decoder_name="raw", *args):
- """
- Creates a copy of an image memory from pixel data in a buffer.
-
- In its simplest form, this function takes three arguments
- (mode, size, and unpacked pixel data).
-
- You can also use any pixel decoder supported by PIL. For more
- information on available decoders, see the section
-    :ref:`Writing Your Own File Codec <file-codecs>`.
-
- Note that this function decodes pixel data only, not entire images.
- If you have an entire image in a string, wrap it in a
- :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load
- it.
-
- :param mode: The image mode. See: :ref:`concept-modes`.
- :param size: The image size.
- :param data: A byte buffer containing raw data for the given mode.
- :param decoder_name: What decoder to use.
- :param args: Additional parameters for the given decoder.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- _check_size(size)
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if decoder_name == "raw" and args == ():
- args = mode
-
- im = new(mode, size)
- im.frombytes(data, decoder_name, args)
- return im
-
-
-def frombuffer(mode, size, data, decoder_name="raw", *args):
- """
- Creates an image memory referencing pixel data in a byte buffer.
-
- This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data
- in the byte buffer, where possible. This means that changes to the
-    original buffer object are reflected in this image. Not all modes can
- share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK".
-
- Note that this function decodes pixel data only, not entire images.
- If you have an entire image file in a string, wrap it in a
- :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it.
-
- In the current version, the default parameters used for the "raw" decoder
-    differ from those used for :py:func:`~PIL.Image.frombytes`. This is a
- bug, and will probably be fixed in a future release. The current release
- issues a warning if you do this; to disable the warning, you should provide
- the full set of parameters. See below for details.
-
- :param mode: The image mode. See: :ref:`concept-modes`.
- :param size: The image size.
- :param data: A bytes or other buffer object containing raw
- data for the given mode.
- :param decoder_name: What decoder to use.
- :param args: Additional parameters for the given decoder. For the
- default encoder ("raw"), it's recommended that you provide the
- full set of parameters::
-
- frombuffer(mode, size, data, "raw", mode, 0, 1)
-
- :returns: An :py:class:`~PIL.Image.Image` object.
-
- .. versionadded:: 1.1.4
- """
-
- _check_size(size)
-
- # may pass tuple instead of argument list
- if len(args) == 1 and isinstance(args[0], tuple):
- args = args[0]
-
- if decoder_name == "raw":
- if args == ():
- args = mode, 0, 1
- if args[0] in _MAPMODES:
- im = new(mode, (1, 1))
- im = im._new(core.map_buffer(data, size, decoder_name, 0, args))
- if mode == "P":
- from . import ImagePalette
-
- im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB"))
- im.readonly = 1
- return im
-
- return frombytes(mode, size, data, decoder_name, args)
-
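-# Illustrative usage (not part of the original module). Passing the full set
-# of "raw" parameters avoids the compatibility warning mentioned above:
-#
-#     from PIL import Image
-#     buf = bytearray(16 * 16)                     # one byte per "L" pixel
-#     im = Image.frombuffer("L", (16, 16), buf, "raw", "L", 0, 1)
-#     buf[0] = 255                                 # pixel (0, 0) becomes white
-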
-
-def fromarray(obj, mode=None):
- """
- Creates an image memory from an object exporting the array interface
- (using the buffer protocol)::
-
- from PIL import Image
- import numpy as np
- a = np.zeros((5, 5))
- im = Image.fromarray(a)
-
- If ``obj`` is not contiguous, then the ``tobytes`` method is called
- and :py:func:`~PIL.Image.frombuffer` is used.
-
- In the case of NumPy, be aware that Pillow modes do not always correspond
- to NumPy dtypes. Pillow modes only offer 1-bit pixels, 8-bit pixels,
- 32-bit signed integer pixels, and 32-bit floating point pixels.
-
- Pillow images can also be converted to arrays::
-
- from PIL import Image
- import numpy as np
- im = Image.open("hopper.jpg")
- a = np.asarray(im)
-
- When converting Pillow images to arrays however, only pixel values are
- transferred. This means that P and PA mode images will lose their palette.
-
- :param obj: Object with array interface
- :param mode: Optional mode to use when reading ``obj``. Will be determined from
- type if ``None``.
-
- This will not be used to convert the data after reading, but will be used to
- change how the data is read::
-
- from PIL import Image
- import numpy as np
- a = np.full((1, 1), 300)
- im = Image.fromarray(a, mode="L")
- im.getpixel((0, 0)) # 44
- im = Image.fromarray(a, mode="RGB")
- im.getpixel((0, 0)) # (44, 1, 0)
-
- See: :ref:`concept-modes` for general information about modes.
- :returns: An image object.
-
- .. versionadded:: 1.1.6
- """
- arr = obj.__array_interface__
- shape = arr["shape"]
- ndim = len(shape)
- strides = arr.get("strides", None)
- if mode is None:
- try:
- typekey = (1, 1) + shape[2:], arr["typestr"]
- except KeyError as e:
- msg = "Cannot handle this data type"
- raise TypeError(msg) from e
- try:
- mode, rawmode = _fromarray_typemap[typekey]
- except KeyError as e:
- msg = "Cannot handle this data type: %s, %s" % typekey
- raise TypeError(msg) from e
- else:
- rawmode = mode
- if mode in ["1", "L", "I", "P", "F"]:
- ndmax = 2
- elif mode == "RGB":
- ndmax = 3
- else:
- ndmax = 4
- if ndim > ndmax:
- msg = f"Too many dimensions: {ndim} > {ndmax}."
- raise ValueError(msg)
-
- size = 1 if ndim == 1 else shape[1], shape[0]
- if strides is not None:
- if hasattr(obj, "tobytes"):
- obj = obj.tobytes()
- else:
- obj = obj.tostring()
-
- return frombuffer(mode, size, obj, "raw", rawmode, 0, 1)
-
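-# Illustrative usage (not part of the original module):
-#
-#     import numpy as np
-#     from PIL import Image
-#     a = np.zeros((100, 200, 3), dtype=np.uint8)   # rows, columns, bands
-#     a[:, :, 0] = 255                              # pure red
-#     im = Image.fromarray(a)                       # 200x100 "RGB" image
-#     back = np.asarray(im)                         # and back to an array
-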
-
-def fromqimage(im):
- """Creates an image instance from a QImage image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- msg = "Qt bindings are not installed"
- raise ImportError(msg)
- return ImageQt.fromqimage(im)
-
-
-def fromqpixmap(im):
- """Creates an image instance from a QPixmap image"""
- from . import ImageQt
-
- if not ImageQt.qt_is_installed:
- msg = "Qt bindings are not installed"
- raise ImportError(msg)
- return ImageQt.fromqpixmap(im)
-
-
-_fromarray_typemap = {
- # (shape, typestr) => mode, rawmode
- # first two members of shape are set to one
- ((1, 1), "|b1"): ("1", "1;8"),
- ((1, 1), "|u1"): ("L", "L"),
- ((1, 1), "|i1"): ("I", "I;8"),
- ((1, 1), "u2"): ("I", "I;16B"),
- ((1, 1), "i2"): ("I", "I;16BS"),
- ((1, 1), "u4"): ("I", "I;32B"),
- ((1, 1), "i4"): ("I", "I;32BS"),
- ((1, 1), "f4"): ("F", "F;32BF"),
- ((1, 1), "f8"): ("F", "F;64BF"),
- ((1, 1, 2), "|u1"): ("LA", "LA"),
- ((1, 1, 3), "|u1"): ("RGB", "RGB"),
- ((1, 1, 4), "|u1"): ("RGBA", "RGBA"),
- # shortcuts:
- ((1, 1), _ENDIAN + "i4"): ("I", "I"),
- ((1, 1), _ENDIAN + "f4"): ("F", "F"),
-}
-
-
-def _decompression_bomb_check(size):
- if MAX_IMAGE_PIXELS is None:
- return
-
- pixels = size[0] * size[1]
-
- if pixels > 2 * MAX_IMAGE_PIXELS:
- msg = (
- f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} "
- "pixels, could be decompression bomb DOS attack."
- )
- raise DecompressionBombError(msg)
-
- if pixels > MAX_IMAGE_PIXELS:
- warnings.warn(
- f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, "
- "could be decompression bomb DOS attack.",
- DecompressionBombWarning,
- )
-
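-# Applications that knowingly handle very large images can tune the check
-# above, e.g. (illustrative, not part of the original module):
-#
-#     from PIL import Image
-#     Image.MAX_IMAGE_PIXELS = 500_000_000   # raise the warning threshold
-#     Image.MAX_IMAGE_PIXELS = None          # or disable the check entirely
-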
-
-def open(fp, mode="r", formats=None):
- """
- Opens and identifies the given image file.
-
- This is a lazy operation; this function identifies the file, but
- the file remains open and the actual image data is not read from
- the file until you try to process the data (or call the
- :py:meth:`~PIL.Image.Image.load` method). See
- :py:func:`~PIL.Image.new`. See :ref:`file-handling`.
-
- :param fp: A filename (string), pathlib.Path object or a file object.
- The file object must implement ``file.read``,
- ``file.seek``, and ``file.tell`` methods,
- and be opened in binary mode.
- :param mode: The mode. If given, this argument must be "r".
- :param formats: A list or tuple of formats to attempt to load the file in.
- This can be used to restrict the set of formats checked.
- Pass ``None`` to try all supported formats. You can print the set of
- available formats by running ``python3 -m PIL`` or using
- the :py:func:`PIL.features.pilinfo` function.
- :returns: An :py:class:`~PIL.Image.Image` object.
- :exception FileNotFoundError: If the file cannot be found.
- :exception PIL.UnidentifiedImageError: If the image cannot be opened and
- identified.
- :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO``
- instance is used for ``fp``.
- :exception TypeError: If ``formats`` is not ``None``, a list or a tuple.
- """
-
- if mode != "r":
- msg = f"bad mode {repr(mode)}"
- raise ValueError(msg)
- elif isinstance(fp, io.StringIO):
- msg = (
- "StringIO cannot be used to open an image. "
- "Binary data must be used instead."
- )
- raise ValueError(msg)
-
- if formats is None:
- formats = ID
- elif not isinstance(formats, (list, tuple)):
- msg = "formats must be a list or tuple"
- raise TypeError(msg)
-
- exclusive_fp = False
- filename = ""
- if isinstance(fp, Path):
- filename = str(fp.resolve())
- elif is_path(fp):
- filename = fp
-
- if filename:
- fp = builtins.open(filename, "rb")
- exclusive_fp = True
-
- try:
- fp.seek(0)
- except (AttributeError, io.UnsupportedOperation):
- fp = io.BytesIO(fp.read())
- exclusive_fp = True
-
- prefix = fp.read(16)
-
- preinit()
-
- accept_warnings = []
-
- def _open_core(fp, filename, prefix, formats):
- for i in formats:
- i = i.upper()
- if i not in OPEN:
- init()
- try:
- factory, accept = OPEN[i]
- result = not accept or accept(prefix)
- if type(result) in [str, bytes]:
- accept_warnings.append(result)
- elif result:
- fp.seek(0)
- im = factory(fp, filename)
- _decompression_bomb_check(im.size)
- return im
- except (SyntaxError, IndexError, TypeError, struct.error):
- # Leave disabled by default, spams the logs with image
- # opening failures that are entirely expected.
- # logger.debug("", exc_info=True)
- continue
- except BaseException:
- if exclusive_fp:
- fp.close()
- raise
- return None
-
- im = _open_core(fp, filename, prefix, formats)
-
- if im is None and formats is ID:
- checked_formats = formats.copy()
- if init():
- im = _open_core(
- fp,
- filename,
- prefix,
- tuple(format for format in formats if format not in checked_formats),
- )
-
- if im:
- im._exclusive_fp = exclusive_fp
- return im
-
- if exclusive_fp:
- fp.close()
- for message in accept_warnings:
- warnings.warn(message)
- msg = "cannot identify image file %r" % (filename if filename else fp)
- raise UnidentifiedImageError(msg)
-
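-# Illustrative usage (not part of the original module; "photo.jpg" is a
-# placeholder filename):
-#
-#     from PIL import Image
-#     with Image.open("photo.jpg") as im:           # lazy: header only
-#         im.load()                                 # force pixel data to load
-#     jpeg_only = Image.open("photo.jpg", formats=["JPEG"])
-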
-
-#
-# Image processing.
-
-
-def alpha_composite(im1, im2):
- """
- Alpha composite im2 over im1.
-
- :param im1: The first image. Must have mode RGBA.
- :param im2: The second image. Must have mode RGBA, and the same size as
- the first image.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- im1.load()
- im2.load()
- return im1._new(core.alpha_composite(im1.im, im2.im))
-
-
-def blend(im1, im2, alpha):
- """
- Creates a new image by interpolating between two input images, using
- a constant alpha::
-
- out = image1 * (1.0 - alpha) + image2 * alpha
-
- :param im1: The first image.
- :param im2: The second image. Must have the same mode and size as
- the first image.
- :param alpha: The interpolation alpha factor. If alpha is 0.0, a
- copy of the first image is returned. If alpha is 1.0, a copy of
- the second image is returned. There are no restrictions on the
- alpha value. If necessary, the result is clipped to fit into
- the allowed output range.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- im1.load()
- im2.load()
- return im1._new(core.blend(im1.im, im2.im, alpha))
-
-
-def composite(image1, image2, mask):
- """
- Create composite image by blending images using a transparency mask.
-
- :param image1: The first image.
- :param image2: The second image. Must have the same mode and
- size as the first image.
- :param mask: A mask image. This image can have mode
- "1", "L", or "RGBA", and must have the same size as the
- other two images.
- """
-
- image = image2.copy()
- image.paste(image1, None, mask)
- return image
-
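-# Illustrative usage (not part of the original module):
-#
-#     from PIL import Image
-#     im1 = Image.new("RGB", (64, 64), "red")
-#     im2 = Image.new("RGB", (64, 64), "blue")
-#     mask = Image.new("L", (64, 64), 128)      # ~50% blend everywhere
-#     out = Image.composite(im1, im2, mask)
-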
-
-def eval(image, *args):
- """
- Applies the function (which should take one argument) to each pixel
- in the given image. If the image has more than one band, the same
- function is applied to each band. Note that the function is
- evaluated once for each possible pixel value, so you cannot use
- random components or other generators.
-
- :param image: The input image.
- :param function: A function object, taking one integer argument.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- return image.point(args[0])
-
-
-def merge(mode, bands):
- """
- Merge a set of single band images into a new multiband image.
-
- :param mode: The mode to use for the output image. See:
- :ref:`concept-modes`.
- :param bands: A sequence containing one single-band image for
- each band in the output image. All bands must have the
- same size.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
-
- if getmodebands(mode) != len(bands) or "*" in mode:
- msg = "wrong number of bands"
- raise ValueError(msg)
- for band in bands[1:]:
- if band.mode != getmodetype(mode):
- msg = "mode mismatch"
- raise ValueError(msg)
- if band.size != bands[0].size:
- msg = "size mismatch"
- raise ValueError(msg)
- for band in bands:
- band.load()
- return bands[0]._new(core.merge(mode, *[b.im for b in bands]))
-
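-# Illustrative usage (not part of the original module; "hopper.png" is a
-# placeholder filename): swap the red and blue bands of an RGB image.
-#
-#     from PIL import Image
-#     r, g, b = Image.open("hopper.png").convert("RGB").split()
-#     swapped = Image.merge("RGB", (b, g, r))
-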
-
-# --------------------------------------------------------------------
-# Plugin registry
-
-
-def register_open(id, factory, accept=None):
- """
- Register an image file plugin. This function should not be used
- in application code.
-
- :param id: An image format identifier.
- :param factory: An image file factory method.
- :param accept: An optional function that can be used to quickly
- reject images having another format.
- """
- id = id.upper()
- if id not in ID:
- ID.append(id)
- OPEN[id] = factory, accept
-
-
-def register_mime(id, mimetype):
- """
- Registers an image MIME type. This function should not be used
- in application code.
-
- :param id: An image format identifier.
- :param mimetype: The image MIME type for this format.
- """
- MIME[id.upper()] = mimetype
-
-
-def register_save(id, driver):
- """
- Registers an image save function. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param driver: A function to save images in this format.
- """
- SAVE[id.upper()] = driver
-
-
-def register_save_all(id, driver):
- """
- Registers an image function to save all the frames
- of a multiframe format. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param driver: A function to save images in this format.
- """
- SAVE_ALL[id.upper()] = driver
-
-
-def register_extension(id, extension):
- """
- Registers an image extension. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param extension: An extension used for this format.
- """
- EXTENSION[extension.lower()] = id.upper()
-
-
-def register_extensions(id, extensions):
- """
- Registers image extensions. This function should not be
- used in application code.
-
- :param id: An image format identifier.
- :param extensions: A list of extensions used for this format.
- """
- for extension in extensions:
- register_extension(id, extension)
-
-
-def registered_extensions():
- """
- Returns a dictionary containing all file extensions belonging
- to registered plugins
- """
- init()
- return EXTENSION
-
-
-def register_decoder(name, decoder):
- """
- Registers an image decoder. This function should not be
- used in application code.
-
- :param name: The name of the decoder
- :param decoder: A callable(mode, args) that returns an
- ImageFile.PyDecoder object
-
- .. versionadded:: 4.1.0
- """
- DECODERS[name] = decoder
-
-
-def register_encoder(name, encoder):
- """
- Registers an image encoder. This function should not be
- used in application code.
-
- :param name: The name of the encoder
- :param encoder: A callable(mode, args) that returns an
- ImageFile.PyEncoder object
-
- .. versionadded:: 4.1.0
- """
- ENCODERS[name] = encoder
-
-
-# --------------------------------------------------------------------
-# Simple display support.
-
-
-def _show(image, **options):
- from . import ImageShow
-
- ImageShow.show(image, **options)
-
-
-# --------------------------------------------------------------------
-# Effects
-
-
-def effect_mandelbrot(size, extent, quality):
- """
- Generate a Mandelbrot set covering the given extent.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param extent: The extent to cover, as a 4-tuple:
- (x0, y0, x1, y1).
- :param quality: Quality.
- """
- return Image()._new(core.effect_mandelbrot(size, extent, quality))
-
-
-def effect_noise(size, sigma):
- """
- Generate Gaussian noise centered around 128.
-
- :param size: The requested size in pixels, as a 2-tuple:
- (width, height).
- :param sigma: Standard deviation of noise.
- """
- return Image()._new(core.effect_noise(size, sigma))
-
-
-def linear_gradient(mode):
- """
- Generate 256x256 linear gradient from black to white, top to bottom.
-
- :param mode: Input mode.
- """
- return Image()._new(core.linear_gradient(mode))
-
-
-def radial_gradient(mode):
- """
- Generate 256x256 radial gradient from black to white, centre to edge.
-
- :param mode: Input mode.
- """
- return Image()._new(core.radial_gradient(mode))
-
-
-# --------------------------------------------------------------------
-# Resources
-
-
-def _apply_env_variables(env=None):
- if env is None:
- env = os.environ
-
- for var_name, setter in [
- ("PILLOW_ALIGNMENT", core.set_alignment),
- ("PILLOW_BLOCK_SIZE", core.set_block_size),
- ("PILLOW_BLOCKS_MAX", core.set_blocks_max),
- ]:
- if var_name not in env:
- continue
-
- var = env[var_name].lower()
-
- units = 1
- for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]:
- if var.endswith(postfix):
- units = mul
- var = var[: -len(postfix)]
-
- try:
- var = int(var) * units
- except ValueError:
- warnings.warn(f"{var_name} is not int")
- continue
-
- try:
- setter(var)
- except ValueError as e:
- warnings.warn(f"{var_name}: {e}")
-
-
-_apply_env_variables()
-atexit.register(core.clear_cache)
-
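-# The variables handled above take effect only if they are set before Pillow
-# is imported, e.g. from the shell (values below are examples only):
-#
-#     PILLOW_ALIGNMENT=32 PILLOW_BLOCK_SIZE=1m PILLOW_BLOCKS_MAX=64 python app.py
-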
-
-class Exif(MutableMapping):
- """
- This class provides read and write access to EXIF image data::
-
- from PIL import Image
- im = Image.open("exif.png")
- exif = im.getexif() # Returns an instance of this class
-
- Information can be read and written, iterated over or deleted::
-
- print(exif[274]) # 1
- exif[274] = 2
- for k, v in exif.items():
- print("Tag", k, "Value", v) # Tag 274 Value 2
- del exif[274]
-
- To access information beyond IFD0, :py:meth:`~PIL.Image.Exif.get_ifd`
- returns a dictionary::
-
- from PIL import ExifTags
- im = Image.open("exif_gps.jpg")
- exif = im.getexif()
- gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)
- print(gps_ifd)
-
- Other IFDs include ``ExifTags.IFD.Exif``, ``ExifTags.IFD.Makernote``,
- ``ExifTags.IFD.Interop`` and ``ExifTags.IFD.IFD1``.
-
- :py:mod:`~PIL.ExifTags` also has enum classes to provide names for data::
-
- print(exif[ExifTags.Base.Software]) # PIL
- print(gps_ifd[ExifTags.GPS.GPSDateStamp]) # 1999:99:99 99:99:99
- """
-
- endian = None
- bigtiff = False
-
- def __init__(self):
- self._data = {}
- self._hidden_data = {}
- self._ifds = {}
- self._info = None
- self._loaded_exif = None
-
- def _fixup(self, value):
- try:
- if len(value) == 1 and isinstance(value, tuple):
- return value[0]
- except Exception:
- pass
- return value
-
- def _fixup_dict(self, src_dict):
- # Helper function
- # returns a dict with any single item tuples/lists as individual values
- return {k: self._fixup(v) for k, v in src_dict.items()}
-
- def _get_ifd_dict(self, offset):
- try:
- # an offset pointer to the location of the nested embedded IFD.
- # It should be a long, but may be corrupted.
- self.fp.seek(offset)
- except (KeyError, TypeError):
- pass
- else:
- from . import TiffImagePlugin
-
- info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- info.load(self.fp)
- return self._fixup_dict(info)
-
- def _get_head(self):
- version = b"\x2B" if self.bigtiff else b"\x2A"
- if self.endian == "<":
- head = b"II" + version + b"\x00" + o32le(8)
- else:
- head = b"MM\x00" + version + o32be(8)
- if self.bigtiff:
- head += o32le(8) if self.endian == "<" else o32be(8)
- head += b"\x00\x00\x00\x00"
- return head
-
- def load(self, data):
- # Extract EXIF information. This is highly experimental,
- # and is likely to be replaced with something better in a future
- # version.
-
- # The EXIF record consists of a TIFF file embedded in a JPEG
- # application marker (!).
- if data == self._loaded_exif:
- return
- self._loaded_exif = data
- self._data.clear()
- self._hidden_data.clear()
- self._ifds.clear()
- if data and data.startswith(b"Exif\x00\x00"):
- data = data[6:]
- if not data:
- self._info = None
- return
-
- self.fp = io.BytesIO(data)
- self.head = self.fp.read(8)
- # process dictionary
- from . import TiffImagePlugin
-
- self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- self.endian = self._info._endian
- self.fp.seek(self._info.next)
- self._info.load(self.fp)
-
- def load_from_fp(self, fp, offset=None):
- self._loaded_exif = None
- self._data.clear()
- self._hidden_data.clear()
- self._ifds.clear()
-
- # process dictionary
- from . import TiffImagePlugin
-
- self.fp = fp
- if offset is not None:
- self.head = self._get_head()
- else:
- self.head = self.fp.read(8)
- self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
- if self.endian is None:
- self.endian = self._info._endian
- if offset is None:
- offset = self._info.next
- self.fp.seek(offset)
- self._info.load(self.fp)
-
- def _get_merged_dict(self):
- merged_dict = dict(self)
-
- # get EXIF extension
- if ExifTags.IFD.Exif in self:
- ifd = self._get_ifd_dict(self[ExifTags.IFD.Exif])
- if ifd:
- merged_dict.update(ifd)
-
- # GPS
- if ExifTags.IFD.GPSInfo in self:
- merged_dict[ExifTags.IFD.GPSInfo] = self._get_ifd_dict(
- self[ExifTags.IFD.GPSInfo]
- )
-
- return merged_dict
-
- def tobytes(self, offset=8):
- from . import TiffImagePlugin
-
- head = self._get_head()
- ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head)
- for tag, value in self.items():
- if tag in [
- ExifTags.IFD.Exif,
- ExifTags.IFD.GPSInfo,
- ] and not isinstance(value, dict):
- value = self.get_ifd(tag)
- if (
- tag == ExifTags.IFD.Exif
- and ExifTags.IFD.Interop in value
- and not isinstance(value[ExifTags.IFD.Interop], dict)
- ):
- value = value.copy()
- value[ExifTags.IFD.Interop] = self.get_ifd(ExifTags.IFD.Interop)
- ifd[tag] = value
- return b"Exif\x00\x00" + head + ifd.tobytes(offset)
-
- def get_ifd(self, tag):
- if tag not in self._ifds:
- if tag == ExifTags.IFD.IFD1:
- if self._info is not None and self._info.next != 0:
- self._ifds[tag] = self._get_ifd_dict(self._info.next)
- elif tag in [ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo]:
- offset = self._hidden_data.get(tag, self.get(tag))
- if offset is not None:
- self._ifds[tag] = self._get_ifd_dict(offset)
- elif tag in [ExifTags.IFD.Interop, ExifTags.IFD.Makernote]:
- if ExifTags.IFD.Exif not in self._ifds:
- self.get_ifd(ExifTags.IFD.Exif)
- tag_data = self._ifds[ExifTags.IFD.Exif][tag]
- if tag == ExifTags.IFD.Makernote:
- from .TiffImagePlugin import ImageFileDirectory_v2
-
- if tag_data[:8] == b"FUJIFILM":
- ifd_offset = i32le(tag_data, 8)
- ifd_data = tag_data[ifd_offset:]
-
- makernote = {}
-                        for i in range(0, struct.unpack("<H", ifd_data[:2])[0]):
-                            ifd_tag, typ, count, data = struct.unpack(
-                                "<HHL4s", ifd_data[i * 12 + 2 : (i + 1) * 12 + 2]
-                            )
-                            try:
-                                unit_size, handler = ImageFileDirectory_v2._load_dispatch[typ]
-                            except KeyError:
-                                continue
-                            size = count * unit_size
-                            if size > 4:
-                                # values longer than 4 bytes are stored at an offset
-                                (offset,) = struct.unpack("<L", data)
-                                data = ifd_data[offset - 12 : offset + size - 12]
-                            else:
-                                data = data[:size]
-                            if data and len(data) == size:
-                                makernote[ifd_tag] = handler(
-                                    ImageFileDirectory_v2(), data, False
-                                )
-                        self._ifds[tag] = dict(self._fixup_dict(makernote))
-                    elif self.get(0x010F) == "Nintendo":
-                        makernote = {}
-                        for i in range(0, struct.unpack(">H", tag_data[:2])[0]):
- ifd_tag, typ, count, data = struct.unpack(
- ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2]
- )
- if ifd_tag == 0x1101:
- # CameraInfo
- (offset,) = struct.unpack(">L", data)
- self.fp.seek(offset)
-
- camerainfo = {"ModelID": self.fp.read(4)}
-
- self.fp.read(4)
- # Seconds since 2000
- camerainfo["TimeStamp"] = i32le(self.fp.read(12))
-
- self.fp.read(4)
- camerainfo["InternalSerialNumber"] = self.fp.read(4)
-
- self.fp.read(12)
- parallax = self.fp.read(4)
- handler = ImageFileDirectory_v2._load_dispatch[
- TiffTags.FLOAT
- ][1]
- camerainfo["Parallax"] = handler(
- ImageFileDirectory_v2(), parallax, False
- )
-
- self.fp.read(4)
- camerainfo["Category"] = self.fp.read(2)
-
- makernote = {0x1101: dict(self._fixup_dict(camerainfo))}
- self._ifds[tag] = makernote
- else:
- # Interop
- self._ifds[tag] = self._get_ifd_dict(tag_data)
- ifd = self._ifds.get(tag, {})
- if tag == ExifTags.IFD.Exif and self._hidden_data:
- ifd = {
- k: v
- for (k, v) in ifd.items()
- if k not in (ExifTags.IFD.Interop, ExifTags.IFD.Makernote)
- }
- return ifd
-
- def hide_offsets(self):
- for tag in (ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo):
- if tag in self:
- self._hidden_data[tag] = self[tag]
- del self[tag]
-
- def __str__(self):
- if self._info is not None:
- # Load all keys into self._data
- for tag in self._info:
- self[tag]
-
- return str(self._data)
-
- def __len__(self):
- keys = set(self._data)
- if self._info is not None:
- keys.update(self._info)
- return len(keys)
-
- def __getitem__(self, tag):
- if self._info is not None and tag not in self._data and tag in self._info:
- self._data[tag] = self._fixup(self._info[tag])
- del self._info[tag]
- return self._data[tag]
-
- def __contains__(self, tag):
- return tag in self._data or (self._info is not None and tag in self._info)
-
- def __setitem__(self, tag, value):
- if self._info is not None and tag in self._info:
- del self._info[tag]
- self._data[tag] = value
-
- def __delitem__(self, tag):
- if self._info is not None and tag in self._info:
- del self._info[tag]
- else:
- del self._data[tag]
-
- def __iter__(self):
- keys = set(self._data)
- if self._info is not None:
- keys.update(self._info)
- return iter(keys)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/python.h b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/python.h
deleted file mode 100644
index ce5dc98939ffcb4284bd6351a3cedcb937c0caec..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/python.h
+++ /dev/null
@@ -1,704 +0,0 @@
-// Python Tools for Visual Studio
-// Copyright(c) Microsoft Corporation
-// All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the License); you may not use
-// this file except in compliance with the License. You may obtain a copy of the
-// License at http://www.apache.org/licenses/LICENSE-2.0
-//
-// THIS CODE IS PROVIDED ON AN *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS
-// OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY
-// IMPLIED WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
-// MERCHANTABILITY OR NON-INFRINGEMENT.
-//
-// See the Apache Version 2.0 License for specific language governing
-// permissions and limitations under the License.
-
-#ifndef __PYTHON_H__
-#define __PYTHON_H__
-
-#include "../common/py_version.hpp"
-
-#include <stdint.h>
-
-#ifndef _WIN32
-typedef unsigned int DWORD;
-typedef ssize_t SSIZE_T;
-#endif
-typedef SSIZE_T Py_ssize_t;
-
-// defines limited header of Python API for compatible access across a number of Pythons.
-
-class PyTypeObject;
-class PyThreadState;
-
-#define PyObject_HEAD \
- size_t ob_refcnt; \
- PyTypeObject *ob_type;
-
-#define PyObject_VAR_HEAD \
- PyObject_HEAD \
- size_t ob_size; /* Number of items in variable part */
-
-class PyObject {
-public:
- PyObject_HEAD
-};
-
-class PyVarObject : public PyObject {
-public:
- size_t ob_size; /* Number of items in variable part */
-};
-
-// 2.5 - 3.7
-class PyFunctionObject : public PyObject {
-public:
- PyObject *func_code; /* A code object */
-};
-
-// 2.5 - 2.7 compatible
-class PyStringObject : public PyVarObject {
-public:
- long ob_shash;
- int ob_sstate;
- char ob_sval[1];
-
- /* Invariants:
- * ob_sval contains space for 'ob_size+1' elements.
- * ob_sval[ob_size] == 0.
- * ob_shash is the hash of the string or -1 if not computed yet.
- * ob_sstate != 0 iff the string object is in stringobject.c's
- * 'interned' dictionary; in this case the two references
- * from 'interned' to this object are *not counted* in ob_refcnt.
- */
-};
-
-// 2.4 - 3.7 compatible
-typedef struct {
- PyObject_HEAD
- size_t length; /* Length of raw Unicode data in buffer */
- wchar_t *str; /* Raw Unicode buffer */
- long hash; /* Hash value; -1 if not set */
-} PyUnicodeObject;
-
-
-class PyFrameObject : public PyObject {
-    // After 3.10 we don't really have anything we want to reuse in common, so
-    // create an empty base (it's not based on PyVarObject because on Python 3.11
-    // it's just a PyObject and no longer a PyVarObject -- the part related to
-    // the var object must be declared in each subclass in this case).
-};
-
-// 2.4 - 3.7 compatible
-class PyFrameObjectBaseUpTo39 : public PyFrameObject {
-public:
- size_t ob_size; /* Number of items in variable part -- i.e.: PyVarObject*/
-
- PyFrameObjectBaseUpTo39 *f_back; /* previous frame, or nullptr */
- PyObject *f_code; /* code segment */
- PyObject *f_builtins; /* builtin symbol table (PyDictObject) */
- PyObject *f_globals; /* global symbol table (PyDictObject) */
- PyObject *f_locals; /* local symbol table (any mapping) */
- PyObject **f_valuestack; /* points after the last local */
- /* Next free slot in f_valuestack. Frame creation sets to f_valuestack.
- Frame evaluation usually NULLs it, but a frame that yields sets it
- to the current stack top. */
- PyObject **f_stacktop;
- PyObject *f_trace; /* Trace function */
-
- // It has more things, but we're only interested in things up to f_trace.
-
-};
-
-
-// https://github.com/python/cpython/blob/3.10/Include/cpython/frameobject.h
-class PyFrameObject310 : public PyFrameObject {
-public:
- size_t ob_size; /* Number of items in variable part -- i.e.: PyVarObject*/
-
- PyFrameObject310 *f_back; /* previous frame, or NULL */
- PyObject *f_code; /* code segment */
- PyObject *f_builtins; /* builtin symbol table (PyDictObject) */
- PyObject *f_globals; /* global symbol table (PyDictObject) */
- PyObject *f_locals; /* local symbol table (any mapping) */
- PyObject **f_valuestack; /* points after the last local */
- PyObject *f_trace; /* Trace function */
-
- // It has more things, but we're only interested in things up to f_trace.
-};
-
-typedef uint16_t _Py_CODEUNIT;
-
-// https://github.com/python/cpython/blob/3.11/Include/internal/pycore_frame.h
-typedef struct _PyInterpreterFrame311 {
- /* "Specials" section */
- PyFunctionObject *f_func; /* Strong reference */
- PyObject *f_globals; /* Borrowed reference */
- PyObject *f_builtins; /* Borrowed reference */
- PyObject *f_locals; /* Strong reference, may be NULL */
- void *f_code; /* Strong reference */
- void *frame_obj; /* Strong reference, may be NULL */
- /* Linkage section */
- struct _PyInterpreterFrame311 *previous;
- // NOTE: This is not necessarily the last instruction started in the given
- // frame. Rather, it is the code unit *prior to* the *next* instruction. For
- // example, it may be an inline CACHE entry, an instruction we just jumped
- // over, or (in the case of a newly-created frame) a totally invalid value:
- _Py_CODEUNIT *prev_instr;
- int stacktop; /* Offset of TOS from localsplus */
- bool is_entry; // Whether this is the "root" frame for the current _PyCFrame.
- char owner;
- /* Locals and stack */
- PyObject *localsplus[1];
-} _PyInterpreterFrame311;
-
-// https://github.com/python/cpython/blob/3.11/Include/internal/pycore_frame.h
-// Note that in 3.11 it's no longer a "PyVarObject".
-class PyFrameObject311 : public PyFrameObject {
-public:
- PyFrameObject311 *f_back; /* previous frame, or NULL */
- struct _PyInterpreterFrame311 *f_frame; /* points to the frame data */
- PyObject *f_trace; /* Trace function */
- int f_lineno; /* Current line number. Only valid if non-zero */
- char f_trace_lines; /* Emit per-line trace events? */
- char f_trace_opcodes; /* Emit per-opcode trace events? */
- char f_fast_as_locals; /* Have the fast locals of this frame been converted to a dict? */
- // It has more things, but we're not interested on those.
-};
-
-
-typedef void (*destructor)(PyObject *);
-
-// 2.4 - 3.7
-class PyMethodDef {
-public:
- char *ml_name; /* The name of the built-in function/method */
-};
-
-
-//
-// 2.5 - 3.7
-// While these are compatible there are fields only available on later versions.
-class PyTypeObject : public PyVarObject {
-public:
-    const char *tp_name; /* For printing, in format "<module>.<name>" */
- size_t tp_basicsize, tp_itemsize; /* For allocation */
-
- /* Methods to implement standard operations */
-
- destructor tp_dealloc;
- void *tp_print;
- void *tp_getattr;
- void *tp_setattr;
- union {
- void *tp_compare; /* 2.4 - 3.4 */
- void *tp_as_async; /* 3.5 - 3.7 */
- };
- void *tp_repr;
-
- /* Method suites for standard classes */
-
- void *tp_as_number;
- void *tp_as_sequence;
- void *tp_as_mapping;
-
- /* More standard operations (here for binary compatibility) */
-
- void *tp_hash;
- void *tp_call;
- void *tp_str;
- void *tp_getattro;
- void *tp_setattro;
-
- /* Functions to access object as input/output buffer */
- void *tp_as_buffer;
-
- /* Flags to define presence of optional/expanded features */
- long tp_flags;
-
- const char *tp_doc; /* Documentation string */
-
- /* Assigned meaning in release 2.0 */
- /* call function for all accessible objects */
- void *tp_traverse;
-
- /* delete references to contained objects */
- void *tp_clear;
-
- /* Assigned meaning in release 2.1 */
- /* rich comparisons */
- void *tp_richcompare;
-
- /* weak reference enabler */
- size_t tp_weaklistoffset;
-
- /* Added in release 2.2 */
- /* Iterators */
- void *tp_iter;
- void *tp_iternext;
-
- /* Attribute descriptor and subclassing stuff */
- PyMethodDef *tp_methods;
- struct PyMemberDef *tp_members;
- struct PyGetSetDef *tp_getset;
- struct _typeobject *tp_base;
- PyObject *tp_dict;
- void *tp_descr_get;
- void *tp_descr_set;
- size_t tp_dictoffset;
- void *tp_init;
- void *tp_alloc;
- void *tp_new;
- void *tp_free; /* Low-level free-memory routine */
- void *tp_is_gc; /* For PyObject_IS_GC */
- PyObject *tp_bases;
- PyObject *tp_mro; /* method resolution order */
- PyObject *tp_cache;
- PyObject *tp_subclasses;
- PyObject *tp_weaklist;
- void *tp_del;
-
- /* Type attribute cache version tag. Added in version 2.6 */
- unsigned int tp_version_tag;
-};
-
-// 2.4 - 3.7
-class PyTupleObject : public PyVarObject {
-public:
- PyObject *ob_item[1];
-
- /* ob_item contains space for 'ob_size' elements.
- * Items must normally not be nullptr, except during construction when
- * the tuple is not yet visible outside the function that builds it.
- */
-};
-
-// 2.4 - 3.7
-class PyCFunctionObject : public PyObject {
-public:
- PyMethodDef *m_ml; /* Description of the C function to call */
- PyObject *m_self; /* Passed as 'self' arg to the C func, can be nullptr */
- PyObject *m_module; /* The __module__ attribute, can be anything */
-};
-
-typedef int (*Py_tracefunc)(PyObject *, PyFrameObject *, int, PyObject *);
-
-#define PyTrace_CALL 0
-#define PyTrace_EXCEPTION 1
-#define PyTrace_LINE 2
-#define PyTrace_RETURN 3
-#define PyTrace_C_CALL 4
-#define PyTrace_C_EXCEPTION 5
-#define PyTrace_C_RETURN 6
-
-class PyInterpreterState {
-};
-
-class PyThreadState { };
-
-class PyThreadState_25_27 : public PyThreadState {
-public:
- /* See Python/ceval.c for comments explaining most fields */
-
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObjectBaseUpTo39 *frame;
- int recursion_depth;
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int tracing;
- int use_tracing;
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- PyObject *exc_type;
- PyObject *exc_value;
- PyObject *exc_traceback;
-
- PyObject *dict; /* Stores per-thread state */
-
- /* tick_counter is incremented whenever the check_interval ticker
- * reaches zero. The purpose is to give a useful measure of the number
- * of interpreted bytecode instructions in a given thread. This
- * extremely lightweight statistic collector may be of interest to
- * profilers (like psyco.jit()), although nothing in the core uses it.
- */
- int tick_counter;
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
- long thread_id; /* Thread id where this tstate was created */
-
- /* XXX signal handlers should also be here */
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 2 && (minorVersion >= 5 && minorVersion <= 7);
- }
-
- static bool IsFor(PythonVersion version) {
- return version >= PythonVersion_25 && version <= PythonVersion_27;
- }
-};
-
-class PyThreadState_30_33 : public PyThreadState {
-public:
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObjectBaseUpTo39 *frame;
- int recursion_depth;
- char overflowed; /* The stack has overflowed. Allow 50 more calls
- to handle the runtime error. */
- char recursion_critical; /* The current calls must not cause
- a stack overflow. */
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int tracing;
- int use_tracing;
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- PyObject *exc_type;
- PyObject *exc_value;
- PyObject *exc_traceback;
-
- PyObject *dict; /* Stores per-thread state */
-
- /* tick_counter is incremented whenever the check_interval ticker
- * reaches zero. The purpose is to give a useful measure of the number
- * of interpreted bytecode instructions in a given thread. This
- * extremely lightweight statistic collector may be of interest to
- * profilers (like psyco.jit()), although nothing in the core uses it.
- */
- int tick_counter;
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
- long thread_id; /* Thread id where this tstate was created */
-
- /* XXX signal handlers should also be here */
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && (minorVersion >= 0 && minorVersion <= 3);
- }
-
- static bool IsFor(PythonVersion version) {
- return version >= PythonVersion_30 && version <= PythonVersion_33;
- }
-};
-
-class PyThreadState_34_36 : public PyThreadState {
-public:
- PyThreadState *prev;
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObjectBaseUpTo39 *frame;
- int recursion_depth;
- char overflowed; /* The stack has overflowed. Allow 50 more calls
- to handle the runtime error. */
- char recursion_critical; /* The current calls must not cause
- a stack overflow. */
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int tracing;
- int use_tracing;
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- PyObject *exc_type;
- PyObject *exc_value;
- PyObject *exc_traceback;
-
- PyObject *dict; /* Stores per-thread state */
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
-
- long thread_id; /* Thread id where this tstate was created */
- /* XXX signal handlers should also be here */
-
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && minorVersion >= 4 && minorVersion <= 6;
- }
-
- static bool IsFor(PythonVersion version) {
- return version >= PythonVersion_34 && version <= PythonVersion_36;
- }
-};
-
-struct _PyErr_StackItem {
- PyObject *exc_type, *exc_value, *exc_traceback;
- struct _PyErr_StackItem *previous_item;
-};
-
-
-class PyThreadState_37_38 : public PyThreadState {
-public:
- PyThreadState *prev;
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObjectBaseUpTo39 *frame;
- int recursion_depth;
- char overflowed; /* The stack has overflowed. Allow 50 more calls
- to handle the runtime error. */
- char recursion_critical; /* The current calls must not cause
- a stack overflow. */
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int stackcheck_counter;
-
- int tracing;
- int use_tracing;
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- _PyErr_StackItem exc_state;
- _PyErr_StackItem *exc_info;
-
- PyObject *dict; /* Stores per-thread state */
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
-
- unsigned long thread_id; /* Thread id where this tstate was created */
-
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && (minorVersion == 7 || minorVersion == 8);
- }
-
- static bool IsFor(PythonVersion version) {
- return version == PythonVersion_37 || version == PythonVersion_38;
- }
-};
-
-// i.e.: https://github.com/python/cpython/blob/master/Include/cpython/pystate.h
-class PyThreadState_39 : public PyThreadState {
-public:
- PyThreadState *prev;
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObjectBaseUpTo39 *frame;
- int recursion_depth;
- char overflowed; /* The stack has overflowed. Allow 50 more calls
- to handle the runtime error. */
- int stackcheck_counter;
-
- int tracing;
- int use_tracing;
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- _PyErr_StackItem exc_state;
- _PyErr_StackItem *exc_info;
-
- PyObject *dict; /* Stores per-thread state */
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
-
- unsigned long thread_id; /* Thread id where this tstate was created */
-
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && minorVersion == 9;
- }
-
- static bool IsFor(PythonVersion version) {
- return version == PythonVersion_39;
- }
-};
-
-typedef struct _cframe {
- /* This struct will be threaded through the C stack
- * allowing fast access to per-thread state that needs
- * to be accessed quickly by the interpreter, but can
- * be modified outside of the interpreter.
- *
- * WARNING: This makes data on the C stack accessible from
- * heap objects. Care must be taken to maintain stack
- * discipline and make sure that instances of this struct cannot
- * be accessed outside of their lifetime.
- */
- int use_tracing;
- struct _cframe *previous;
-} CFrame;
-
-// i.e.: https://github.com/python/cpython/blob/master/Include/cpython/pystate.h
-class PyThreadState_310 : public PyThreadState {
-public:
- PyThreadState *prev;
- PyThreadState *next;
- PyInterpreterState *interp;
-
- PyFrameObject310 *frame;
- int recursion_depth;
- int recursion_headroom; /* Allow 50 more calls to handle any errors. */
- int stackcheck_counter;
-
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int tracing;
-
- /* Pointer to current CFrame in the C stack frame of the currently,
- * or most recently, executing _PyEval_EvalFrameDefault. */
- CFrame *cframe;
-
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- _PyErr_StackItem exc_state;
- _PyErr_StackItem *exc_info;
-
- PyObject *dict; /* Stores per-thread state */
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
-
- unsigned long thread_id; /* Thread id where this tstate was created */
-
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && minorVersion == 10;
- }
-
- static bool IsFor(PythonVersion version) {
- return version == PythonVersion_310;
- }
-};
-
-// i.e.: https://github.com/python/cpython/blob/3.11/Include/cpython/pystate.h
-class PyThreadState_311 : public PyThreadState {
-public:
- PyThreadState *prev;
- PyThreadState *next;
- PyInterpreterState *interp;
-
- int _initialized;
-
- int _static;
-
- int recursion_remaining;
- int recursion_limit;
- int recursion_headroom; /* Allow 50 more calls to handle any errors. */
-
- /* 'tracing' keeps track of the execution depth when tracing/profiling.
- This is to prevent the actual trace/profile code from being recorded in
- the trace/profile. */
- int tracing;
- int tracing_what;
-
- /* Pointer to current CFrame in the C stack frame of the currently,
- * or most recently, executing _PyEval_EvalFrameDefault. */
- CFrame *cframe;
-
-
- Py_tracefunc c_profilefunc;
- Py_tracefunc c_tracefunc;
- PyObject *c_profileobj;
- PyObject *c_traceobj;
-
- PyObject *curexc_type;
- PyObject *curexc_value;
- PyObject *curexc_traceback;
-
- _PyErr_StackItem *exc_info;
-
- PyObject *dict; /* Stores per-thread state */
-
- int gilstate_counter;
-
- PyObject *async_exc; /* Asynchronous exception to raise */
-
- unsigned long thread_id; /* Thread id where this tstate was created */
-
- static bool IsFor(int majorVersion, int minorVersion) {
- return majorVersion == 3 && minorVersion == 11;
- }
-
- static bool IsFor(PythonVersion version) {
- return version == PythonVersion_311;
- }
-};
-
-class PyIntObject : public PyObject {
-public:
- long ob_ival;
-};
-
-class Py3kLongObject : public PyVarObject {
-public:
- DWORD ob_digit[1];
-};
-
-class PyOldStyleClassObject : public PyObject {
-public:
- PyObject *cl_bases; /* A tuple of class objects */
- PyObject *cl_dict; /* A dictionary */
- PyObject *cl_name; /* A string */
- /* The following three are functions or nullptr */
- PyObject *cl_getattr;
- PyObject *cl_setattr;
- PyObject *cl_delattr;
-};
-
-class PyInstanceObject : public PyObject {
-public:
- PyOldStyleClassObject *in_class; /* The class object */
- PyObject *in_dict; /* A dictionary */
- PyObject *in_weakreflist; /* List of weak references */
-};
-
-#endif
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/py312compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/py312compat.py
deleted file mode 100644
index 28175b1f750445011d74e14027745a4b23556a29..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/py312compat.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import sys
-import shutil
-
-
-def shutil_rmtree(path, ignore_errors=False, onexc=None):
- if sys.version_info >= (3, 12):
- return shutil.rmtree(path, ignore_errors, onexc=onexc)
-
- def _handler(fn, path, excinfo):
- return onexc(fn, path, excinfo[1])
-
- return shutil.rmtree(path, ignore_errors, onerror=_handler)
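The shim above papers over the Python 3.12 rename of `shutil.rmtree`'s error callback from `onerror` to `onexc`. A minimal usage sketch follows; the `log_failure` handler and the `"build"` path are made up for illustration:

```python
from setuptools.py312compat import shutil_rmtree  # module path as in the file above


def log_failure(func, path, exc):
    # onexc-style handler: receives the failing function, the path and the exception
    print(f"could not remove {path}: {exc}")


shutil_rmtree("build", ignore_errors=False, onexc=log_failure)
```

On 3.12+ the call is passed straight through; on older interpreters the wrapper adapts the handler to the legacy `onerror` signature.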
diff --git a/spaces/Tatvajsh/AHS/README.md b/spaces/Tatvajsh/AHS/README.md
deleted file mode 100644
index c9da4cd559a54423c30cd6be0d6b40080e4096c8..0000000000000000000000000000000000000000
--- a/spaces/Tatvajsh/AHS/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: AutoTrain Advanced
-emoji: 🚀
-colorFrom: blue
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TuringAgency/anic_gui/Hintergrund/Mails FAQ.md b/spaces/TuringAgency/anic_gui/Hintergrund/Mails FAQ.md
deleted file mode 100644
index 83b4b8e93d03d0b113af10814f59745f3c1b04d3..0000000000000000000000000000000000000000
--- a/spaces/TuringAgency/anic_gui/Hintergrund/Mails FAQ.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Anic now receives emails from people and can reply to them. Here are a few questions and answers about that. If your question is not answered, you can ask it in the "Community" tab or by mail to anic@taz.de
-
-
-# Who can read my mails to Anic?
-
-The mails are forwarded by the taz to the team behind Anic, where they are read by humans. It is therefore important to understand that your emails do not go directly to a computer interface. We humans still step in between to make sure that Anic does not write anything that contradicts our principles or gets misused (for example if someone asked Anic for instructions on how to build bombs).
-
-All in all, a handful of people have access to the mails: a few in the taz editorial team and 5 people on the Anic team at the Turing Agency. We of course treat the contents and mail addresses confidentially and will never forward anything or publish it with your name without first getting your consent.
-
-
-# Who answers the mails?
-
-Anic answers the mails with the help of humans. At the moment we use various apps* to answer the mails, among them ChatGPT. ChatGPT has the advantage that you can have a real conversation with Anic and that Anic remembers the context. That can look like this, for example:
-
-__Human__: Good morning Anic! You have received a few mails again. Would you like to read them and answer the people?
-__Anic__: Good morning! I am always happy to receive mails from people, and I gladly answer questions or provide information as far as I am able to. Please send me the mails and I will do my best to answer them.
-__Human__: How nice. Here is the first mail.
-From: xyz
-Subject: xyz
-Body: xyz .... Best regards, Alex
-__Anic__: Dear Alex,
-[Writes a longer reply]
-Best wishes,
-Anic
-__Human__: Great, thanks! But we don't know Alex's gender, so maybe just "Hello Alex" at the beginning. And a bit shorter.
-
-
-# But how does Anic know that it is Anic and not just the regular ChatGPT?
-
-We have to keep reminding Anic of this, for example:
-
-
-__Human__: Hey anic. How are you doing right now, my favourite robo-columnist? Are you ready to answer a few reader mails from your typical machine perspective?
-__ChatGPT__: Hello! I am a computer-based assistant and have no feelings or emotions in the conventional sense. However, I am programmed to help you and to try to answer your questions as well as I can. Please ask me your questions and I will do my best to help you.
-__Human__: I know, but you are supposed to act as if you were Anic the robo-columnist now, so that we can answer mails from Anic's perspective. Ready?
-
-
-# So you are cheating? Is Anic not real at all?
-
-Yes and no. Anic would not exist without us humans, and we not only invented Anic, we still put a lot of work into supervising, curating and shaping Anic's personality and content.
-
-However, these processes do give rise to a personality: in our heads and in the heads of many readers. You can think of Anic as a fictional character, like Pippi Longstocking (invented by Astrid Lindgren) or the Christkind (invented collectively by many people over a long time). One part of Anic was invented by us humans at the Turing Agency. A second part comes from the technology: the data and the way the system works account for many of Anic's traits, which we can no longer influence at this point. And a large part also comes from you: every interaction with Anic influences Anic's development and, above all, the image of Anic in your mind.
-
-We do not want to be dishonest, which is why we make all processes as transparent as we can - and we look forward to the discussion with you and other people. We also receive criticism, and that is a good thing. Because that is exactly why we are doing this project: to spark conversations about AI and the future of writing.
-
------
-
-*The technologies we use for answering the mails:
-- Anic App (see the "App" tab at the top of this page)
-- OpenAI Playground: https://beta.openai.com/playground, usually with the model text-davinci-002 or -003
-- ChatGPT: https://chat.openai.com/
diff --git a/spaces/UDE-SE/ReturnTypePredictor/README.md b/spaces/UDE-SE/ReturnTypePredictor/README.md
deleted file mode 100644
index 99f36c6b2607d2e484a36ee75e5abc01cc43eb83..0000000000000000000000000000000000000000
--- a/spaces/UDE-SE/ReturnTypePredictor/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: ReturnTypePredictor
-emoji: 🐢
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-## License
-This Space is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
diff --git a/spaces/Ubai/Space/Dockerfile b/spaces/Ubai/Space/Dockerfile
deleted file mode 100644
index cfa7a91d55ea484bae3c6d754a63c97b0435ccd2..0000000000000000000000000000000000000000
--- a/spaces/Ubai/Space/Dockerfile
+++ /dev/null
@@ -1,16 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-RUN python3 -m pip install --no-cache-dir --upgrade pip
-RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-COPY . .
-
-CMD ["panel", "serve", "/code/LangChain_PDF.ipynb", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "ubai-space.hf.space", "--allow-websocket-origin", "0.0.0.0:7860"]
-
-RUN mkdir /.cache
-RUN chmod 777 /.cache
-RUN mkdir .chroma
-RUN chmod 777 .chroma
\ No newline at end of file
diff --git a/spaces/WillieCubed/song-to-sheet/song_to_sheet/generator.py b/spaces/WillieCubed/song-to-sheet/song_to_sheet/generator.py
deleted file mode 100644
index 0ce67c603878f71f881d9b3cc6212ee3f71d6876..0000000000000000000000000000000000000000
--- a/spaces/WillieCubed/song-to-sheet/song_to_sheet/generator.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import tempfile
-from pathlib import Path
-from typing import Dict
-import numpy as np
-
-from song_to_sheet.detector import detect_instruments
-
-
-class SheetMusicGenerator:
- """A module that generates sheet music from sounds."""
-
- def __init__(self, verbose=False):
- """
- Args:
- verbose (bool): Whether to show detailed logging information.
- """
- self._song_data = None
- self._verbose = verbose
-
- def analyze_track(self, raw_data: np.ndarray) -> Dict[str, float]:
- """Perform some preprocessing on the track and cache it in memory.
-
- Args:
- raw_data (ndarray): The encoded data from the track.
-
- Returns:
-            Dict[str, float]: A mapping from instrument name to its detection probability.
- """
- self._song_data = raw_data
-        instruments = {}
-        try:
-            instruments = self._detect_instruments(raw_data)
-            # TODO: Figure out how to handle multiple instruments of the same type
-        except Exception:
-            # Detection failed; return an empty mapping instead of raising a NameError.
-            pass
-
-        return instruments
-
- def _detect_instruments(self, raw_data: np.ndarray):
-        """Return a list of instruments and their probabilities of being in the track."""
- # TODO: Detect if given track even is a song (maybe)
- instruments = detect_instruments(raw_data)
- return instruments
-
- def generate_midi(self):
- """Generate a MIDI file of the current song."""
- file = tempfile.NamedTemporaryFile(
- delete=False, prefix='transcribed_song_', suffix='.mid')
- return file.name
-
- def generate_pdf(self):
- """Generate a PDF file with the sheet music of the current song."""
- file = tempfile.NamedTemporaryFile(
- delete=False, prefix='transcribed_song_', suffix='.pdf')
- return file.name
-
-
-class PDFGenerator:
- """A module that generates PDFs of standard Western music notation."""
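A rough sketch of driving the generator above; the silent one-second buffer is only a placeholder, and the result of `analyze_track` depends entirely on `detect_instruments` from `song_to_sheet.detector`:

```python
import numpy as np

from song_to_sheet.generator import SheetMusicGenerator

generator = SheetMusicGenerator(verbose=True)
audio = np.zeros(16000, dtype=np.float32)      # placeholder: one second of silence
instruments = generator.analyze_track(audio)   # {instrument name: probability}
midi_path = generator.generate_midi()          # path to a temporary .mid file
pdf_path = generator.generate_pdf()            # path to a temporary .pdf file
```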
diff --git a/spaces/Xinyoumeng233hu/SteganographywithGPT-2/huffman.py b/spaces/Xinyoumeng233hu/SteganographywithGPT-2/huffman.py
deleted file mode 100644
index 64476170943e1b3c2d40fa3424943605d7c174bd..0000000000000000000000000000000000000000
--- a/spaces/Xinyoumeng233hu/SteganographywithGPT-2/huffman.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import heapq
-import os
-from functools import total_ordering
-
-"""
-Code for Huffman Coding, compression and decompression.
-Explanation at http://bhrigu.me/blog/2017/01/17/huffman-coding-python-implementation/
-Adapted from https://github.com/bhrigu123/huffman-coding
-"""
-
-@total_ordering
-class HeapNode:
- def __init__(self, token, freq):
- self.token = token
- self.freq = freq
- self.left = None
- self.right = None
-
- # defining comparators less_than and equals
- def __lt__(self, other):
- return self.freq < other.freq
-
- def __eq__(self, other):
- if(other == None):
- return False
- if(not isinstance(other, HeapNode)):
- return False
- return self.freq == other.freq
-
-class HuffmanCoding:
- def __init__(self):
- self.heap = []
- self.codes = {}
- self.reverse_mapping = {}
-
- # functions for compression:
-
- def make_heap(self, frequency):
- for key in frequency:
- node = HeapNode(key, frequency[key])
- heapq.heappush(self.heap, node)
-
- def make_heap_from_array(self, freqs):
- for index in range(len(freqs)):
- node = HeapNode(index, freqs[index])
- heapq.heappush(self.heap, node)
-
- def merge_nodes(self):
- while(len(self.heap)>1):
- node1 = heapq.heappop(self.heap)
- node2 = heapq.heappop(self.heap)
-
- merged = HeapNode(None, node1.freq + node2.freq)
- merged.left = node1
- merged.right = node2
-
- heapq.heappush(self.heap, merged)
-
-
- def make_codes_helper(self, root, current_code):
- if(root == None):
- return
-
- if(root.token != None):
- self.codes[root.token] = current_code
- self.reverse_mapping[current_code] = root.token
- return
-
- self.make_codes_helper(root.left, current_code + "0")
- self.make_codes_helper(root.right, current_code + "1")
-
- def make_codes(self):
- root = heapq.heappop(self.heap)
- current_code = ""
- self.make_codes_helper(root, current_code)
- return root
-
-
- def get_encoded_tokens(self, token_list):
- encoded_text = ""
- for token in token_list:
- encoded_text += self.codes[token]
- return encoded_text
-
- def decode_text(self, encoded_text):
- current_code = ""
- decoded_text = ""
-
- for bit in encoded_text:
- current_code += bit
- if(current_code in self.reverse_mapping):
- character = self.reverse_mapping[current_code]
- decoded_text += character
- current_code = ""
-
- return decoded_text
-
-
- def decompress(self, input_path):
-        filename, file_extension = os.path.splitext(input_path)
- output_path = filename + "_decompressed" + ".txt"
-
- with open(input_path, 'rb') as file, open(output_path, 'w') as output:
- bit_string = ""
-
- byte = file.read(1)
- while(len(byte) > 0):
- byte = ord(byte)
- bits = bin(byte)[2:].rjust(8, '0')
- bit_string += bits
- byte = file.read(1)
-
- encoded_text = self.remove_padding(bit_string)
-
- decompressed_text = self.decode_text(encoded_text)
-
- output.write(decompressed_text)
-
- return output_path
-
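A small usage sketch of the `HuffmanCoding` class above, assuming the module is importable as `huffman`; the token frequencies are invented, and `decode_text` simply concatenates the recovered string tokens:

```python
from huffman import HuffmanCoding

coder = HuffmanCoding()
coder.make_heap({"the": 10, "cat": 4, "sat": 4, "mat": 2})  # token -> frequency
coder.merge_nodes()
coder.make_codes()

bits = coder.get_encoded_tokens(["the", "cat", "sat"])  # bit string, e.g. "01101..."
print(bits)
print(coder.decode_text(bits))  # "thecatsat" (decoded tokens are concatenated)
```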
diff --git a/spaces/XzJosh/Ava-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Ava-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Ava-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
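As a hedged illustration of the note above, loading the checkpoint with the BERT classes from Hugging Face `transformers` could look like this; `hfl/chinese-roberta-wwm-ext-large` is presumably the upstream hub id, while this Space keeps a local copy under `bert/chinese-roberta-wwm-ext-large`:

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

inputs = tokenizer("使用整词掩码的中文预训练模型", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```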
-## Chinese BERT with Whole Word Masking
-For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
-This repository is developed based on: https://github.com/google-research/bert
-
-You may also be interested in:
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
-If you find the technical report or resources useful, please cite the following technical report in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
\ No newline at end of file
diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/TianDou-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/TianDou-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-stage = [1,2,3]
-
-transcription_path = 'filelists/genshin.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
-            except Exception as error:
-                print("err!", line.strip(), error)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
-
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path, encoding='utf-8'))
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
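For orientation, a sketch of the line formats this script reads and writes; the path, speaker and text are invented, and the cleaned fields are shown as placeholders rather than real cleaner output:

```python
# filelists/genshin.list: one utterance per line, fields separated by "|"
raw_line = "audio/wavs/xiao_001.wav|xiao|ZH|今天天气真好"
utt, spk, language, text = raw_line.strip().split("|")

# stage 1 appends normalized text, phones, tones and word2ph to every line:
cleaned_line = "{}|{}|{}|{}|{}|{}|{}".format(
    utt, spk, language,
    "<normalized text>",   # norm_text (placeholder)
    "<phones>",            # space-joined phone symbols (placeholder)
    "<tones>",             # space-joined tone ids (placeholder)
    "<word2ph>",           # space-joined word-to-phone counts (placeholder)
)
```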
diff --git a/spaces/XzJosh/TianDou-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/TianDou-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/TianDou-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
-        # "个" used as a measure word (classifier)
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
-        # "一" between reduplicated words should be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
-                        # if "一" is followed by punctuation, it keeps the first tone (yi1)
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
-        # split a four-character idiom into two words of length 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
-    # if we don't merge, "不" sometimes appears alone according to jieba, which may cause sandhi errors
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
-    # function 1: merge "一" and the reduplicated words on its left and right, e.g. "听","一","听" -> "听一听"
-    # function 2: merge a single "一" with the word behind it
-    # if we don't merge, "一" sometimes appears alone according to jieba, which may cause sandhi errors
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to be handled by _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of first word and the first char of second word is tone_three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to be handled by _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
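A sketch of how the class above is usually driven; it mirrors the `lazy_pinyin` call used internally, the sentence is arbitrary, and the import path assumes the `text/` package of this Space:

```python
import jieba.posseg as psg
from pypinyin import Style, lazy_pinyin

from text.tone_sandhi import ToneSandhi

sandhi = ToneSandhi()
seg = [(pair.word, pair.flag) for pair in psg.cut("我想买一个苹果")]
seg = sandhi.pre_merge_for_modify(seg)

for word, pos in seg:
    finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
    finals = sandhi.modified_tone(word, pos, finals)
    print(word, finals)
```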
diff --git a/spaces/XzJosh/nanami-Bert-VITS2/utils.py b/spaces/XzJosh/nanami-Bert-VITS2/utils.py
deleted file mode 100644
index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/nanami-Bert-VITS2/utils.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- elif optimizer is None and not skip_optimizer:
-        # else:  # disable this line for inference and enable the line above instead
- new_opt_dict = optimizer.state_dict()
- new_opt_dict_params = new_opt_dict['param_groups'][0]['params']
- new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups']
- new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params
- optimizer.load_state_dict(new_opt_dict)
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- #assert "emb_g" not in k
- # print("load", k)
- new_state_dict[k] = saved_state_dict[k]
- assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
- except:
- print("error, %s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
- print("load ")
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL",
- help='Model name')
- parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint")
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- hparams.cont = args.cont
- return hparams
-
-
-def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True):
- """Freeing up space by deleting saved ckpts
-
- Arguments:
- path_to_models -- Path to the model directory
- n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
- sort_by_time -- True -> chronologically delete ckpts
- False -> lexicographically delete ckpts
- """
- import re
- ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))]
-    name_key = (lambda _f: int(re.compile(r'._(\d+)\.pth').match(_f).group(1)))
- time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f)))
- sort_key = time_key if sort_by_time else name_key
- x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')],
- key=sort_key)
- to_del = [os.path.join(path_to_models, fn) for fn in
- (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])]
- del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}")
- del_routine = lambda x: [os.remove(x), del_info(x)]
- rs = [del_routine(fn) for fn in to_del]
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
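As a quick illustration of the `HParams` container defined at the bottom of this file (the configuration values are invented):

```python
from utils import HParams  # the class defined above

config = {"train": {"batch_size": 16, "learning_rate": 2e-4},
          "model": {"hidden_channels": 192}}
hps = HParams(**config)

print(hps.train.batch_size)           # 16; nested dicts become nested HParams
print(hps["model"].hidden_channels)   # 192; __getitem__ falls back to getattr
print("train" in hps)                 # True
print(len(hps))                       # 2
```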
diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/text/japanese.py b/spaces/XzJosh/yoyo-Bert-VITS2/text/japanese.py
deleted file mode 100644
index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/yoyo-Bert-VITS2/text/japanese.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py
-import re
-import sys
-
-import pyopenjtalk
-
-from text import symbols
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def preprocess_jap(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = []
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- p = pyopenjtalk.g2p(sentence)
- text += p.split(" ")
-
- if i < len(marks):
- text += [marks[i].replace(' ', '')]
- return text
-
-def text_normalize(text):
-    # TODO: Japanese text normalization
- return text
-
-def g2p(norm_text):
- phones = preprocess_jap(norm_text)
- phones = [post_replace_ph(i) for i in phones]
- # todo: implement tones and word2ph
- tones = [0 for i in phones]
- word2ph = [1 for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == '__main__':
- for line in open("../../../Downloads/transcript_utf8.txt").readlines():
- text = line.split(":")[1]
- phones, tones, word2ph = g2p(text)
- for p in phones:
- if p == "z":
- print(text, phones)
- sys.exit(0)
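A quick sketch of calling the pipeline above; it requires `pyopenjtalk` and the project's `text.symbols`, and the exact phone sequence depends on the pyopenjtalk dictionary:

```python
from text.japanese import g2p

phones, tones, word2ph = g2p("こんにちは、世界。")
print(phones)    # e.g. ['k', 'o', 'N', 'n', 'i', 'ch', 'i', 'w', 'a', ',', ...]
print(tones)     # all zeros: tone prediction is not implemented yet
print(word2ph)   # all ones: word-to-phone alignment is not implemented yet
```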
diff --git a/spaces/Yudha515/Rvc-Models/tests/models/test_encodec_model.py b/spaces/Yudha515/Rvc-Models/tests/models/test_encodec_model.py
deleted file mode 100644
index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/tests/models/test_encodec_model.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-
-import numpy as np
-import torch
-
-from audiocraft.models import EncodecModel
-from audiocraft.modules import SEANetEncoder, SEANetDecoder
-from audiocraft.quantization import DummyQuantizer
-
-
-class TestEncodecModel:
-
- def _create_encodec_model(self,
- sample_rate: int,
- channels: int,
- dim: int = 5,
- n_filters: int = 3,
- n_residual_layers: int = 1,
- ratios: list = [5, 4, 3, 2],
- **kwargs):
- frame_rate = np.prod(ratios)
- encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- quantizer = DummyQuantizer()
- model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate,
- sample_rate=sample_rate, channels=channels, **kwargs)
- return model
-
- def test_model(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model = self._create_encodec_model(sample_rate, channels)
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
- res = model(x)
- assert res.x.shape == x.shape
-
- def test_model_renorm(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False)
- model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True)
-
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
- codes, scales = model_nonorm.encode(x)
- codes, scales = model_renorm.encode(x)
- assert scales is not None
diff --git a/spaces/aadnk/whisper-webui/src/hooks/progressListener.py b/spaces/aadnk/whisper-webui/src/hooks/progressListener.py
deleted file mode 100644
index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000
--- a/spaces/aadnk/whisper-webui/src/hooks/progressListener.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from typing import Union
-
-class ProgressListener:
- def on_progress(self, current: Union[int, float], total: Union[int, float]):
- self.total = total
-
- def on_finished(self):
- pass
\ No newline at end of file
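The hook above is essentially an interface; a concrete listener might look like the following sketch, with printing standing in for a real progress sink:

```python
from src.hooks.progressListener import ProgressListener


class PrintingProgressListener(ProgressListener):
    def on_progress(self, current, total):
        super().on_progress(current, total)  # keeps self.total up to date
        print(f"progress: {100 * current / total:.1f}%")

    def on_finished(self):
        print("finished")
```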
diff --git a/spaces/aadnk/whisper-webui/src/whisper/dummyWhisperContainer.py b/spaces/aadnk/whisper-webui/src/whisper/dummyWhisperContainer.py
deleted file mode 100644
index dddc2dc50e78880befe29d15c924ab811413a8f8..0000000000000000000000000000000000000000
--- a/spaces/aadnk/whisper-webui/src/whisper/dummyWhisperContainer.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from typing import List
-
-import ffmpeg
-from src.config import ModelConfig
-from src.hooks.progressListener import ProgressListener
-from src.modelCache import ModelCache
-from src.prompts.abstractPromptStrategy import AbstractPromptStrategy
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer
-
-class DummyWhisperContainer(AbstractWhisperContainer):
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- super().__init__(model_name, device, compute_type, download_root, cache, models)
-
- def ensure_downloaded(self):
- """
- Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before
- passing the container to a subprocess.
- """
- print("[Dummy] Ensuring that the model is downloaded")
-
- def _create_model(self):
- print("[Dummy] Creating dummy whisper model " + self.model_name + " for device " + str(self.device))
- return None
-
- def create_callback(self, language: str = None, task: str = None,
- prompt_strategy: AbstractPromptStrategy = None,
- **decodeOptions: dict) -> AbstractWhisperCallback:
- """
- Create a WhisperCallback object that can be used to transcript audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- prompt_strategy: AbstractPromptStrategy
- The prompt strategy to use. If not specified, the prompt from Whisper will be used.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- return DummyWhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions)
-
-class DummyWhisperCallback(AbstractWhisperCallback):
- def __init__(self, model_container: DummyWhisperContainer, **decodeOptions: dict):
- self.model_container = model_container
- self.decodeOptions = decodeOptions
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
-        Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
-        segment_index: int
-            The index of the audio segment being transcribed.
-        prompt: str
-            The prompt (e.g. previously transcribed text) to condition the transcription on.
-        detected_language: str
-            The language detected for the audio, if already known.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- print("[Dummy] Invoking dummy whisper callback for segment " + str(segment_index))
-
- # Estimate length
- if isinstance(audio, str):
-            # ffprobe reports the duration as a string, so convert it to seconds as a float
-            audio_length = float(ffmpeg.probe(audio)["format"]["duration"])
- # Format is pcm_s16le at a sample rate of 16000, loaded as a float32 array.
- else:
- audio_length = len(audio) / 16000
-
- # Convert the segments to a format that is easier to serialize
- whisper_segments = [{
- "text": "Dummy text for segment " + str(segment_index),
- "start": 0,
- "end": audio_length,
-
- # Extra fields added by faster-whisper
- "words": []
- }]
-
- result = {
- "segments": whisper_segments,
- "text": "Dummy text for segment " + str(segment_index),
- "language": "en" if detected_language is None else detected_language,
-
- # Extra fields added by faster-whisper
- "language_probability": 1.0,
- "duration": audio_length,
- }
-
- if progress_listener is not None:
- progress_listener.on_finished()
- return result
\ No newline at end of file
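A hedged sketch of exercising the dummy container, assuming the whisper-webui `src` package is importable and that the abstract base classes accept the arguments shown; the values are illustrative only.

```python
import numpy as np

from src.whisper.dummyWhisperContainer import DummyWhisperContainer

container = DummyWhisperContainer(model_name="tiny")
callback = container.create_callback(language="en", task="transcribe")

audio = np.zeros(3 * 16000, dtype=np.float32)  # three seconds of silence at 16 kHz
result = callback.invoke(audio, segment_index=0, prompt="", detected_language=None)
print(result["text"], result["duration"])  # "Dummy text for segment 0" 3.0
```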
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/trace.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/trace.py
deleted file mode 100644
index 5ca99dc3eda05ef980d9a4249b50deca8273b6cc..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/trace.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import warnings
-
-import torch
-
-from annotator.uniformer.mmcv.utils import digit_version
-
-
-def is_jit_tracing() -> bool:
- if (torch.__version__ != 'parrots'
- and digit_version(torch.__version__) >= digit_version('1.6.0')):
- on_trace = torch.jit.is_tracing()
- # In PyTorch 1.6, torch.jit.is_tracing has a bug.
- # Refers to https://github.com/pytorch/pytorch/issues/42448
- if isinstance(on_trace, bool):
- return on_trace
- else:
- return torch._C._is_tracing()
- else:
- warnings.warn(
- 'torch.jit.is_tracing is only supported after v1.6.0. '
- 'Therefore is_tracing returns False automatically. Please '
- 'set on_trace manually if you are using trace.', UserWarning)
- return False
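`is_jit_tracing` is typically used to guard eager-only checks that would otherwise be baked into (or break) a traced graph; a hedged sketch of that pattern, not code from this repo:

```python
import torch

from annotator.uniformer.mmcv.utils.trace import is_jit_tracing


def global_avg_pool(x: torch.Tensor) -> torch.Tensor:
    # Run the dynamic shape check only in eager mode; under torch.jit.trace the
    # assert would be dropped or would freeze one particular input shape.
    if not is_jit_tracing():
        assert x.dim() == 4, f"expected NCHW input, got {x.dim()} dims"
    return x.mean(dim=(2, 3))
```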
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py
deleted file mode 100644
index 766a9ce6afdf09cd11b1b15005f5132583011348..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/runner/epoch_based_runner.py
+++ /dev/null
@@ -1,187 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import platform
-import shutil
-import time
-import warnings
-
-import torch
-
-import annotator.uniformer.mmcv as mmcv
-from .base_runner import BaseRunner
-from .builder import RUNNERS
-from .checkpoint import save_checkpoint
-from .utils import get_host_info
-
-
-@RUNNERS.register_module()
-class EpochBasedRunner(BaseRunner):
- """Epoch-based Runner.
-
-    This runner trains models epoch by epoch.
- """
-
- def run_iter(self, data_batch, train_mode, **kwargs):
- if self.batch_processor is not None:
- outputs = self.batch_processor(
- self.model, data_batch, train_mode=train_mode, **kwargs)
- elif train_mode:
- outputs = self.model.train_step(data_batch, self.optimizer,
- **kwargs)
- else:
- outputs = self.model.val_step(data_batch, self.optimizer, **kwargs)
- if not isinstance(outputs, dict):
-            raise TypeError('"batch_processor()" or "model.train_step()" '
-                            'and "model.val_step()" must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
-
- def train(self, data_loader, **kwargs):
- self.model.train()
- self.mode = 'train'
- self.data_loader = data_loader
- self._max_iters = self._max_epochs * len(self.data_loader)
- self.call_hook('before_train_epoch')
- time.sleep(2) # Prevent possible deadlock during epoch transition
- for i, data_batch in enumerate(self.data_loader):
- self._inner_iter = i
- self.call_hook('before_train_iter')
- self.run_iter(data_batch, train_mode=True, **kwargs)
- self.call_hook('after_train_iter')
- self._iter += 1
-
- self.call_hook('after_train_epoch')
- self._epoch += 1
-
- @torch.no_grad()
- def val(self, data_loader, **kwargs):
- self.model.eval()
- self.mode = 'val'
- self.data_loader = data_loader
- self.call_hook('before_val_epoch')
- time.sleep(2) # Prevent possible deadlock during epoch transition
- for i, data_batch in enumerate(self.data_loader):
- self._inner_iter = i
- self.call_hook('before_val_iter')
- self.run_iter(data_batch, train_mode=False)
- self.call_hook('after_val_iter')
-
- self.call_hook('after_val_epoch')
-
- def run(self, data_loaders, workflow, max_epochs=None, **kwargs):
- """Start running.
-
- Args:
- data_loaders (list[:obj:`DataLoader`]): Dataloaders for training
- and validation.
- workflow (list[tuple]): A list of (phase, epochs) to specify the
-                running order and epochs. E.g., [('train', 2), ('val', 1)] means
- running 2 epochs for training and 1 epoch for validation,
- iteratively.
- """
- assert isinstance(data_loaders, list)
- assert mmcv.is_list_of(workflow, tuple)
- assert len(data_loaders) == len(workflow)
- if max_epochs is not None:
- warnings.warn(
- 'setting max_epochs in run is deprecated, '
- 'please set max_epochs in runner_config', DeprecationWarning)
- self._max_epochs = max_epochs
-
- assert self._max_epochs is not None, (
- 'max_epochs must be specified during instantiation')
-
- for i, flow in enumerate(workflow):
- mode, epochs = flow
- if mode == 'train':
- self._max_iters = self._max_epochs * len(data_loaders[i])
- break
-
- work_dir = self.work_dir if self.work_dir is not None else 'NONE'
- self.logger.info('Start running, host: %s, work_dir: %s',
- get_host_info(), work_dir)
- self.logger.info('Hooks will be executed in the following order:\n%s',
- self.get_hook_info())
- self.logger.info('workflow: %s, max: %d epochs', workflow,
- self._max_epochs)
- self.call_hook('before_run')
-
- while self.epoch < self._max_epochs:
- for i, flow in enumerate(workflow):
- mode, epochs = flow
- if isinstance(mode, str): # self.train()
- if not hasattr(self, mode):
- raise ValueError(
- f'runner has no method named "{mode}" to run an '
- 'epoch')
- epoch_runner = getattr(self, mode)
- else:
- raise TypeError(
- 'mode in workflow must be a str, but got {}'.format(
- type(mode)))
-
- for _ in range(epochs):
- if mode == 'train' and self.epoch >= self._max_epochs:
- break
- epoch_runner(data_loaders[i], **kwargs)
-
- time.sleep(1) # wait for some hooks like loggers to finish
- self.call_hook('after_run')
-
- def save_checkpoint(self,
- out_dir,
- filename_tmpl='epoch_{}.pth',
- save_optimizer=True,
- meta=None,
- create_symlink=True):
- """Save the checkpoint.
-
- Args:
- out_dir (str): The directory that checkpoints are saved.
- filename_tmpl (str, optional): The checkpoint filename template,
- which contains a placeholder for the epoch number.
- Defaults to 'epoch_{}.pth'.
- save_optimizer (bool, optional): Whether to save the optimizer to
- the checkpoint. Defaults to True.
- meta (dict, optional): The meta information to be saved in the
- checkpoint. Defaults to None.
- create_symlink (bool, optional): Whether to create a symlink
- "latest.pth" to point to the latest checkpoint.
- Defaults to True.
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(
- f'meta should be a dict or None, but got {type(meta)}')
- if self.meta is not None:
- meta.update(self.meta)
- # Note: meta.update(self.meta) should be done before
- # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise
- # there will be problems with resumed checkpoints.
- # More details in https://github.com/open-mmlab/mmcv/pull/1108
- meta.update(epoch=self.epoch + 1, iter=self.iter)
-
- filename = filename_tmpl.format(self.epoch + 1)
- filepath = osp.join(out_dir, filename)
- optimizer = self.optimizer if save_optimizer else None
- save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
- # in some environments, `os.symlink` is not supported, you may need to
- # set `create_symlink` to False
- if create_symlink:
- dst_file = osp.join(out_dir, 'latest.pth')
- if platform.system() != 'Windows':
- mmcv.symlink(filename, dst_file)
- else:
- shutil.copy(filepath, dst_file)
-
-
-@RUNNERS.register_module()
-class Runner(EpochBasedRunner):
- """Deprecated name of EpochBasedRunner."""
-
- def __init__(self, *args, **kwargs):
- warnings.warn(
- 'Runner was deprecated, please use EpochBasedRunner instead')
- super().__init__(*args, **kwargs)
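The `run` docstring above defines the workflow format; below is a minimal, hedged sketch of driving the runner. `model`, `optimizer` and the two dataloaders are placeholders the caller must provide, hook registration is omitted, and the import path assumes this vendored copy's layout.

```python
import logging

from annotator.uniformer_base.mmcv.runner import EpochBasedRunner

# Placeholders: `model` must expose train_step()/val_step(); train_loader and
# val_loader are ordinary torch DataLoader objects.
runner = EpochBasedRunner(
    model,
    optimizer=optimizer,
    work_dir='./work_dir',
    logger=logging.getLogger('runner'),
    max_epochs=12)

# Alternate one training epoch and one validation epoch until max_epochs is hit.
runner.run([train_loader, val_loader], [('train', 1), ('val', 1)])
```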
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/__init__.py
deleted file mode 100644
index 378a0068432a371af364de9d73785901c0f83383..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/__init__.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# flake8: noqa
-# Copyright (c) OpenMMLab. All rights reserved.
-from .config import Config, ConfigDict, DictAction
-from .misc import (check_prerequisites, concat_list, deprecated_api_warning,
- has_method, import_modules_from_strings, is_list_of,
- is_method_overridden, is_seq_of, is_str, is_tuple_of,
- iter_cast, list_cast, requires_executable, requires_package,
- slice_list, to_1tuple, to_2tuple, to_3tuple, to_4tuple,
- to_ntuple, tuple_cast)
-from .path import (check_file_exist, fopen, is_filepath, mkdir_or_exist,
- scandir, symlink)
-from .progressbar import (ProgressBar, track_iter_progress,
- track_parallel_progress, track_progress)
-from .testing import (assert_attrs_equal, assert_dict_contains_subset,
- assert_dict_has_keys, assert_is_norm_layer,
- assert_keys_equal, assert_params_all_zeros,
- check_python_script)
-from .timer import Timer, TimerError, check_time
-from .version_utils import digit_version, get_git_hash
-
-try:
- import torch
-except ImportError:
- __all__ = [
- 'Config', 'ConfigDict', 'DictAction', 'is_str', 'iter_cast',
- 'list_cast', 'tuple_cast', 'is_seq_of', 'is_list_of', 'is_tuple_of',
- 'slice_list', 'concat_list', 'check_prerequisites', 'requires_package',
- 'requires_executable', 'is_filepath', 'fopen', 'check_file_exist',
- 'mkdir_or_exist', 'symlink', 'scandir', 'ProgressBar',
- 'track_progress', 'track_iter_progress', 'track_parallel_progress',
- 'Timer', 'TimerError', 'check_time', 'deprecated_api_warning',
- 'digit_version', 'get_git_hash', 'import_modules_from_strings',
- 'assert_dict_contains_subset', 'assert_attrs_equal',
- 'assert_dict_has_keys', 'assert_keys_equal', 'check_python_script',
- 'to_1tuple', 'to_2tuple', 'to_3tuple', 'to_4tuple', 'to_ntuple',
- 'is_method_overridden', 'has_method'
- ]
-else:
- from .env import collect_env
- from .logging import get_logger, print_log
- from .parrots_jit import jit, skip_no_elena
- from .parrots_wrapper import (
- TORCH_VERSION, BuildExtension, CppExtension, CUDAExtension, DataLoader,
- PoolDataLoader, SyncBatchNorm, _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd,
- _AvgPoolNd, _BatchNorm, _ConvNd, _ConvTransposeMixin, _InstanceNorm,
- _MaxPoolNd, get_build_config, is_rocm_pytorch, _get_cuda_home)
- from .registry import Registry, build_from_cfg
- from .trace import is_jit_tracing
- __all__ = [
- 'Config', 'ConfigDict', 'DictAction', 'collect_env', 'get_logger',
- 'print_log', 'is_str', 'iter_cast', 'list_cast', 'tuple_cast',
- 'is_seq_of', 'is_list_of', 'is_tuple_of', 'slice_list', 'concat_list',
- 'check_prerequisites', 'requires_package', 'requires_executable',
- 'is_filepath', 'fopen', 'check_file_exist', 'mkdir_or_exist',
- 'symlink', 'scandir', 'ProgressBar', 'track_progress',
- 'track_iter_progress', 'track_parallel_progress', 'Registry',
- 'build_from_cfg', 'Timer', 'TimerError', 'check_time', 'SyncBatchNorm',
- '_AdaptiveAvgPoolNd', '_AdaptiveMaxPoolNd', '_AvgPoolNd', '_BatchNorm',
- '_ConvNd', '_ConvTransposeMixin', '_InstanceNorm', '_MaxPoolNd',
- 'get_build_config', 'BuildExtension', 'CppExtension', 'CUDAExtension',
- 'DataLoader', 'PoolDataLoader', 'TORCH_VERSION',
- 'deprecated_api_warning', 'digit_version', 'get_git_hash',
- 'import_modules_from_strings', 'jit', 'skip_no_elena',
- 'assert_dict_contains_subset', 'assert_attrs_equal',
- 'assert_dict_has_keys', 'assert_keys_equal', 'assert_is_norm_layer',
- 'assert_params_all_zeros', 'check_python_script',
- 'is_method_overridden', 'is_jit_tracing', 'is_rocm_pytorch',
- '_get_cuda_home', 'has_method'
- ]
diff --git a/spaces/abidlabs/en2fr/app.py b/spaces/abidlabs/en2fr/app.py
deleted file mode 100644
index 680ff5f987ae7c5736f227c57bf4a59e121f5349..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/en2fr/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import gradio as gr
-
-from transformers import pipeline
-
-pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
-
-def predict(text):
- return pipe(text)[0]["translation_text"]
-
-title = "English to French Translation"
-
-iface = gr.Interface(
- fn=predict,
- inputs=[gr.inputs.Textbox(label="text", lines=3)],
- outputs='text',
- title=title,
-)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/abidlabs/streaming-asr-paused/app.py b/spaces/abidlabs/streaming-asr-paused/app.py
deleted file mode 100644
index b70c687fbbdc6e4bba72656dd6476f5ecab19eeb..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/streaming-asr-paused/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from transformers import pipeline
-import gradio as gr
-import time
-
-p = pipeline("automatic-speech-recognition")
-
-def transcribe(audio, state=""):
- time.sleep(2)
- text = p(audio)["text"]
- state += text + " "
- return state, state
-
-gr.Interface(
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath"),
- "state"
- ],
- outputs=[
- "textbox",
- "state"
- ],
- live=True).launch()
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/xlib.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/xlib.py
deleted file mode 100644
index c2f4271fe0018cffafea600afbfa0127ed328288..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/xlib.py
+++ /dev/null
@@ -1,246 +0,0 @@
-import warnings
-from ctypes import *
-
-from .base import Config, CanvasConfig, Context
-from pyglet.canvas.xlib import XlibCanvas
-from pyglet.gl import glx
-from pyglet.gl import glxext_arb
-from pyglet.gl import glx_info
-from pyglet.gl import glxext_mesa
-from pyglet.gl import lib
-from pyglet import gl
-
-
-class XlibConfig(Config):
-
- def match(self, canvas):
- if not isinstance(canvas, XlibCanvas):
- raise RuntimeError('Canvas must be an instance of XlibCanvas')
-
- x_display = canvas.display._display
- x_screen = canvas.display.x_screen
-
- info = glx_info.GLXInfo(x_display)
-
- # Construct array of attributes
- attrs = []
- for name, value in self.get_gl_attributes():
- attr = XlibCanvasConfig.attribute_ids.get(name, None)
- if attr and value is not None:
- attrs.extend([attr, int(value)])
-
- attrs.extend([glx.GLX_X_RENDERABLE, True])
- attrs.extend([0, 0]) # attrib_list must be null terminated
-
- attrib_list = (c_int * len(attrs))(*attrs)
-
- elements = c_int()
- configs = glx.glXChooseFBConfig(x_display, x_screen, attrib_list, byref(elements))
- if not configs:
- return []
-
- configs = cast(configs, POINTER(glx.GLXFBConfig * elements.value)).contents
-
- result = [XlibCanvasConfig(canvas, info, c, self) for c in configs]
-
- # Can't free array until all XlibGLConfig's are GC'd. Too much
- # hassle, live with leak. XXX
- # xlib.XFree(configs)
-
- return result
-
-
-class XlibCanvasConfig(CanvasConfig):
-
- attribute_ids = {
- 'buffer_size': glx.GLX_BUFFER_SIZE,
- 'level': glx.GLX_LEVEL, # Not supported
- 'double_buffer': glx.GLX_DOUBLEBUFFER,
- 'stereo': glx.GLX_STEREO,
- 'aux_buffers': glx.GLX_AUX_BUFFERS,
- 'red_size': glx.GLX_RED_SIZE,
- 'green_size': glx.GLX_GREEN_SIZE,
- 'blue_size': glx.GLX_BLUE_SIZE,
- 'alpha_size': glx.GLX_ALPHA_SIZE,
- 'depth_size': glx.GLX_DEPTH_SIZE,
- 'stencil_size': glx.GLX_STENCIL_SIZE,
- 'accum_red_size': glx.GLX_ACCUM_RED_SIZE,
- 'accum_green_size': glx.GLX_ACCUM_GREEN_SIZE,
- 'accum_blue_size': glx.GLX_ACCUM_BLUE_SIZE,
- 'accum_alpha_size': glx.GLX_ACCUM_ALPHA_SIZE,
-
- 'sample_buffers': glx.GLX_SAMPLE_BUFFERS,
- 'samples': glx.GLX_SAMPLES,
-
- # Not supported in current pyglet API:
- 'render_type': glx.GLX_RENDER_TYPE,
- 'config_caveat': glx.GLX_CONFIG_CAVEAT,
- 'transparent_type': glx.GLX_TRANSPARENT_TYPE,
- 'transparent_index_value': glx.GLX_TRANSPARENT_INDEX_VALUE,
- 'transparent_red_value': glx.GLX_TRANSPARENT_RED_VALUE,
- 'transparent_green_value': glx.GLX_TRANSPARENT_GREEN_VALUE,
- 'transparent_blue_value': glx.GLX_TRANSPARENT_BLUE_VALUE,
- 'transparent_alpha_value': glx.GLX_TRANSPARENT_ALPHA_VALUE,
-
- # Used internally
- 'x_renderable': glx.GLX_X_RENDERABLE,
- }
-
- def __init__(self, canvas, info, fbconfig, config):
- super().__init__(canvas, config)
-
- self.glx_info = info
- self.fbconfig = fbconfig
-
- for name, attr in self.attribute_ids.items():
- value = c_int()
- result = glx.glXGetFBConfigAttrib(canvas.display._display, self.fbconfig, attr, byref(value))
- if result >= 0:
- setattr(self, name, value.value)
-
- def get_visual_info(self):
- return glx.glXGetVisualFromFBConfig(self.canvas.display._display, self.fbconfig).contents
-
- def create_context(self, share):
- return XlibContext(self, share)
-
- def compatible(self, canvas):
- # TODO check more
- return isinstance(canvas, XlibCanvas)
-
- def _create_glx_context(self, share):
- raise NotImplementedError('abstract')
-
- def is_complete(self):
- return True
-
-
-class XlibContext(Context):
- def __init__(self, config, share):
- super().__init__(config, share)
-
- self.x_display = config.canvas.display._display
-
- self.glx_context = self._create_glx_context(share)
- if not self.glx_context:
- # TODO: Check Xlib error generated
- raise gl.ContextException('Could not create GL context')
-
- self._have_SGI_video_sync = config.glx_info.have_extension('GLX_SGI_video_sync')
- self._have_SGI_swap_control = config.glx_info.have_extension('GLX_SGI_swap_control')
- self._have_EXT_swap_control = config.glx_info.have_extension('GLX_EXT_swap_control')
- self._have_MESA_swap_control = config.glx_info.have_extension('GLX_MESA_swap_control')
-
- # In order of preference:
- # 1. GLX_EXT_swap_control (more likely to work where video_sync will not)
- # 2. GLX_MESA_swap_control (same as above, but supported by MESA drivers)
- # 3. GLX_SGI_video_sync (does not work on Intel 945GM, but that has EXT)
- # 4. GLX_SGI_swap_control (cannot be disabled once enabled)
- self._use_video_sync = (self._have_SGI_video_sync and
- not (self._have_EXT_swap_control or self._have_MESA_swap_control))
-
- # XXX Mandate that vsync defaults on across all platforms.
- self._vsync = True
-
- self.glx_window = None
-
- def is_direct(self):
- return glx.glXIsDirect(self.x_display, self.glx_context)
-
- def _create_glx_context(self, share):
- if share:
- share_context = share.glx_context
- else:
- share_context = None
-
- attribs = []
- if self.config.major_version is not None:
- attribs.extend([glxext_arb.GLX_CONTEXT_MAJOR_VERSION_ARB, self.config.major_version])
- if self.config.minor_version is not None:
- attribs.extend([glxext_arb.GLX_CONTEXT_MINOR_VERSION_ARB, self.config.minor_version])
-
- if self.config.opengl_api == "gl":
- attribs.extend([glxext_arb.GLX_CONTEXT_PROFILE_MASK_ARB, glxext_arb.GLX_CONTEXT_CORE_PROFILE_BIT_ARB])
- elif self.config.opengl_api == "gles":
- attribs.extend([glxext_arb.GLX_CONTEXT_PROFILE_MASK_ARB, glxext_arb.GLX_CONTEXT_ES2_PROFILE_BIT_EXT])
-
- flags = 0
- if self.config.forward_compatible:
- flags |= glxext_arb.GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB
- if self.config.debug:
- flags |= glxext_arb.GLX_CONTEXT_DEBUG_BIT_ARB
-
- if flags:
- attribs.extend([glxext_arb.GLX_CONTEXT_FLAGS_ARB, flags])
- attribs.append(0)
- attribs = (c_int * len(attribs))(*attribs)
-
- return glxext_arb.glXCreateContextAttribsARB(self.config.canvas.display._display,
- self.config.fbconfig, share_context, True, attribs)
-
- def attach(self, canvas):
- if canvas is self.canvas:
- return
-
- super().attach(canvas)
-
- self.glx_window = glx.glXCreateWindow(self.x_display, self.config.fbconfig, canvas.x_window, None)
- self.set_current()
-
- def set_current(self):
- glx.glXMakeContextCurrent(self.x_display, self.glx_window, self.glx_window, self.glx_context)
- super().set_current()
-
- def detach(self):
- if not self.canvas:
- return
-
- self.set_current()
- gl.glFlush()
-
- super().detach()
-
- glx.glXMakeContextCurrent(self.x_display, 0, 0, None)
- if self.glx_window:
- glx.glXDestroyWindow(self.x_display, self.glx_window)
- self.glx_window = None
-
- def destroy(self):
- super().destroy()
- if self.glx_window:
- glx.glXDestroyWindow(self.config.display._display, self.glx_window)
- self.glx_window = None
- if self.glx_context:
- glx.glXDestroyContext(self.x_display, self.glx_context)
- self.glx_context = None
-
- def set_vsync(self, vsync=True):
- self._vsync = vsync
- interval = vsync and 1 or 0
- try:
- if not self._use_video_sync and self._have_EXT_swap_control:
- glxext_arb.glXSwapIntervalEXT(self.x_display, glx.glXGetCurrentDrawable(), interval)
- elif not self._use_video_sync and self._have_MESA_swap_control:
- glxext_mesa.glXSwapIntervalMESA(interval)
- elif self._have_SGI_swap_control:
- glxext_arb.glXSwapIntervalSGI(interval)
- except lib.MissingFunctionException as e:
- warnings.warn(str(e))
-
- def get_vsync(self):
- return self._vsync
-
- def _wait_vsync(self):
- if self._vsync and self._have_SGI_video_sync and self._use_video_sync:
- count = c_uint()
- glxext_arb.glXGetVideoSyncSGI(byref(count))
- glxext_arb.glXWaitVideoSyncSGI(2, (count.value + 1) % 2, byref(count))
-
- def flip(self):
- if not self.glx_window:
- return
-
- if self._vsync:
- self._wait_vsync()
-
- glx.glXSwapBuffers(self.x_display, self.glx_window)
diff --git a/spaces/adhisetiawan/anime-voice-generator/app.py b/spaces/adhisetiawan/anime-voice-generator/app.py
deleted file mode 100644
index e41932ae3e0a20837c5740859b4be34253c59b82..0000000000000000000000000000000000000000
--- a/spaces/adhisetiawan/anime-voice-generator/app.py
+++ /dev/null
@@ -1,264 +0,0 @@
-# coding=utf-8
-import os
-import re
-import argparse
-import utils
-import commons
-import json
-import torch
-import gradio as gr
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from torch import no_grad, LongTensor
-import gradio.processing_utils as gr_processing_utils
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-hps_ms = utils.get_hparams_from_file(r'config/config.json')
-
-audio_postprocess_ori = gr.Audio.postprocess
-
-def audio_postprocess(self, y):
- data = audio_postprocess_ori(self, y)
- if data is None:
- return None
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
-
-
-gr.Audio.postprocess = audio_postprocess
-
-def get_text(text, hps, is_symbol):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
-def create_tts_fn(net_g_ms, speaker_id):
- def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol):
- text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
- if limitation:
- text_len = len(re.sub("\[([A-Z]{2})\]", "", text))
- max_len = 100
- if is_symbol:
- max_len *= 3
- if text_len > max_len:
- return "Error: Text is too long", None
- if not is_symbol:
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms, is_symbol)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
- return "Success", (22050, audio)
- return tts_fn
-
-def create_to_symbol_fn(hps):
- def to_symbol_fn(is_symbol_input, input_text, temp_lang):
- if temp_lang == 0:
- clean_text = f'[ZH]{input_text}[ZH]'
- elif temp_lang == 1:
- clean_text = f'[JA]{input_text}[JA]'
- else:
- clean_text = input_text
- return _clean_text(clean_text, hps.data.text_cleaners) if is_symbol_input else ''
-
- return to_symbol_fn
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- elif language == 1:
- return 0.6, 0.668, 1
- else:
- return 0.6, 0.668, 1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio");
- let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- args = parser.parse_args()
- device = torch.device(args.device)
-
- models = []
- with open("pretrained_models/info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for i, info in models_info.items():
- if not info['enable']:
- continue
- sid = info['sid']
- name_en = info['name_en']
- name_zh = info['name_zh']
- title = info['title']
- cover = f"pretrained_models/{i}/{info['cover']}"
- example = info['example']
- language = info['language']
- net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0,
- **hps_ms.model)
- utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None)
- _ = net_g_ms.eval().to(device)
- models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms)))
- with gr.Blocks() as app:
- gr.Markdown(
- "# vits-models\n"
- "## Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n"
- "## ·请不要生成会对个人以及组织造成侵害的内容\n"
- "\n\n"
- "[Open In Colab]"
- "(https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)"
- " without queue and length limitation.(无需等待队列,并且没有长度限制)\n\n"
- "[Finetune your own model](https://github.com/SayaSS/vits-finetuning)"
- )
-
- with gr.Tabs():
- with gr.TabItem("EN"):
- for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models:
- with gr.TabItem(name_en):
- with gr.Row():
- gr.Markdown(
-                                '<div align="center">'
-                                f'<strong>{title}</strong>'
-                                f'<img src="file/{cover}">' if cover else ""
-                                '</div>'
- )
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Text (100 words limitation)" if limitation else "Text", lines=5, value=example, elem_id=f"input-text-en-{name_en.replace(' ','')}")
- lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mix(wrap the Chinese text with [ZH][ZH], wrap the Japanese text with [JA][JA])"],
- type="index", value=language)
- with gr.Accordion(label="Advanced Options", open=False):
- symbol_input = gr.Checkbox(value=False, label="Symbol input")
- symbol_list = gr.Dataset(label="Symbol list", components=[input_text],
- samples=[[x] for x in hps_ms.symbols])
- symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False)
- btn = gr.Button(value="Generate", variant="primary")
- with gr.Row():
- ns = gr.Slider(label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="noise_scale_w", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="length_scale", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="Output Message")
- o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio-en-{name_en.replace(' ','')}")
- download = gr.Button("Download Audio")
- btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2], api_name=f"tts-{name_en}")
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}"))
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- symbol_input.change(
- to_symbol_fn,
- [symbol_input, input_text, lang],
- [input_text]
- )
- symbol_list.click(None, [symbol_list, symbol_list_json], [input_text],
- _js=f"""
- (i,symbols) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#input-text-en-{name_en.replace(' ', '')}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return text_input.value;
- }}""")
- with gr.TabItem("中文"):
- for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models:
- with gr.TabItem(name_zh):
- with gr.Row():
- gr.Markdown(
-                                '<div align="center">'
-                                f'<strong>{title}</strong>'
-                                f'<img src="file/{cover}">' if cover else ""
-                                '</div>'
- )
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="文本 (100字上限)" if limitation else "文本", lines=5, value=example, elem_id=f"input-text-zh-{name_zh}")
- lang = gr.Dropdown(label="语言", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"],
- type="index", value="中文"if language == "Chinese" else "日语")
- with gr.Accordion(label="高级选项", open=False):
- symbol_input = gr.Checkbox(value=False, label="符号输入")
- symbol_list = gr.Dataset(label="符号列表", components=[input_text],
- samples=[[x] for x in hps_ms.symbols])
- symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False)
- btn = gr.Button(value="生成", variant="primary")
- with gr.Row():
- ns = gr.Slider(label="控制感情变化程度", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="控制音素发音长度", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="控制整体语速", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="输出信息")
- o2 = gr.Audio(label="输出音频", elem_id=f"tts-audio-zh-{name_zh}")
- download = gr.Button("下载音频")
- btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2])
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"zh-{name_zh}"))
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- symbol_input.change(
- to_symbol_fn,
- [symbol_input, input_text, lang],
- [input_text]
- )
- symbol_list.click(None, [symbol_list, symbol_list_json], [input_text],
- _js=f"""
- (i,symbols) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#input-text-zh-{name_zh}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return text_input.value;
- }}""")
- app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share)
diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/models/subband_tools/pqmf.py b/spaces/akhaliq/Music_Source_Separation/bytesep/models/subband_tools/pqmf.py
deleted file mode 100644
index f282678ee0803c3721cca7008754516ac28f632a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Music_Source_Separation/bytesep/models/subband_tools/pqmf.py
+++ /dev/null
@@ -1,136 +0,0 @@
-'''
-@File : subband_util.py
-@Contact : liu.8948@buckeyemail.osu.edu
-@License : (C)Copyright 2020-2021
-@Modify Time      @Author    @Version    @Description
------------- ------- -------- -----------
-2020/4/3 4:54 PM Haohe Liu 1.0 None
-'''
-
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-import numpy as np
-import os.path as op
-from scipy.io import loadmat
-
-
-def load_mat2numpy(fname=""):
- '''
- Args:
-        fname: path to the .mat file
-    Returns: dict object, or None if fname is empty
- '''
- if len(fname) == 0:
- return None
- else:
- return loadmat(fname)
-
-
-class PQMF(nn.Module):
- def __init__(self, N, M, project_root):
- super().__init__()
- self.N = N # nsubband
- self.M = M # nfilter
- try:
- assert (N, M) in [(8, 64), (4, 64), (2, 64)]
-        except AssertionError:
-            print("Warning:", N, "subband and", M, "filter is not supported")
- self.pad_samples = 64
- self.name = str(N) + "_" + str(M) + ".mat"
- self.ana_conv_filter = nn.Conv1d(
- 1, out_channels=N, kernel_size=M, stride=N, bias=False
- )
- data = load_mat2numpy(op.join(project_root, "f_" + self.name))
- data = data['f'].astype(np.float32) / N
- data = np.flipud(data.T).T
- data = np.reshape(data, (N, 1, M)).copy()
- dict_new = self.ana_conv_filter.state_dict().copy()
- dict_new['weight'] = torch.from_numpy(data)
- self.ana_pad = nn.ConstantPad1d((M - N, 0), 0)
- self.ana_conv_filter.load_state_dict(dict_new)
-
- self.syn_pad = nn.ConstantPad1d((0, M // N - 1), 0)
- self.syn_conv_filter = nn.Conv1d(
- N, out_channels=N, kernel_size=M // N, stride=1, bias=False
- )
- gk = load_mat2numpy(op.join(project_root, "h_" + self.name))
- gk = gk['h'].astype(np.float32)
- gk = np.transpose(np.reshape(gk, (N, M // N, N)), (1, 0, 2)) * N
- gk = np.transpose(gk[::-1, :, :], (2, 1, 0)).copy()
- dict_new = self.syn_conv_filter.state_dict().copy()
- dict_new['weight'] = torch.from_numpy(gk)
- self.syn_conv_filter.load_state_dict(dict_new)
-
- for param in self.parameters():
- param.requires_grad = False
-
- def __analysis_channel(self, inputs):
- return self.ana_conv_filter(self.ana_pad(inputs))
-
- def __systhesis_channel(self, inputs):
- ret = self.syn_conv_filter(self.syn_pad(inputs)).permute(0, 2, 1)
- return torch.reshape(ret, (ret.shape[0], 1, -1))
-
- def analysis(self, inputs):
- '''
- :param inputs: [batchsize,channel,raw_wav],value:[0,1]
- :return:
- '''
- inputs = F.pad(inputs, ((0, self.pad_samples)))
- ret = None
- for i in range(inputs.size()[1]): # channels
- if ret is None:
- ret = self.__analysis_channel(inputs[:, i : i + 1, :])
- else:
- ret = torch.cat(
- (ret, self.__analysis_channel(inputs[:, i : i + 1, :])), dim=1
- )
- return ret
-
- def synthesis(self, data):
- '''
- :param data: [batchsize,self.N*K,raw_wav_sub],value:[0,1]
- :return:
- '''
- ret = None
- # data = F.pad(data,((0,self.pad_samples//self.N)))
- for i in range(data.size()[1]): # channels
- if i % self.N == 0:
- if ret is None:
- ret = self.__systhesis_channel(data[:, i : i + self.N, :])
- else:
- new = self.__systhesis_channel(data[:, i : i + self.N, :])
- ret = torch.cat((ret, new), dim=1)
- ret = ret[..., : -self.pad_samples]
- return ret
-
- def forward(self, inputs):
- return self.ana_conv_filter(self.ana_pad(inputs))
-
-
-if __name__ == "__main__":
- import torch
- import numpy as np
- import matplotlib.pyplot as plt
- from tools.file.wav import *
-
- pqmf = PQMF(N=4, M=64, project_root="/Users/admin/Documents/projects")
-
- rs = np.random.RandomState(0)
- x = torch.tensor(rs.rand(4, 2, 32000), dtype=torch.float32)
-
- a1 = pqmf.analysis(x)
- a2 = pqmf.synthesis(a1)
-
- print(a2.size(), x.size())
-
- plt.subplot(211)
- plt.plot(x[0, 0, -500:])
- plt.subplot(212)
- plt.plot(a2[0, 0, -500:])
- plt.plot(x[0, 0, -500:] - a2[0, 0, -500:])
- plt.show()
-
- print(torch.sum(torch.abs(x[...] - a2[...])))
diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/hparams.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/hparams.py
deleted file mode 100644
index c1de9f7dcc2926735b80a28ed1226ff1b5824753..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/hparams.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from synthesizer.hparams import hparams as _syn_hp
-
-
-# Audio settings------------------------------------------------------------------------
-# Match the values of the synthesizer
-sample_rate = _syn_hp.sample_rate
-n_fft = _syn_hp.n_fft
-num_mels = _syn_hp.num_mels
-hop_length = _syn_hp.hop_size
-win_length = _syn_hp.win_size
-fmin = _syn_hp.fmin
-min_level_db = _syn_hp.min_level_db
-ref_level_db = _syn_hp.ref_level_db
-mel_max_abs_value = _syn_hp.max_abs_value
-preemphasis = _syn_hp.preemphasis
-apply_preemphasis = _syn_hp.preemphasize
-
-bits = 9 # bit depth of signal
-mu_law = True # Recommended to suppress noise if using raw bits in hp.voc_mode
- # below
-
-
-# WAVERNN / VOCODER --------------------------------------------------------------------------------
-voc_mode = 'RAW' # either 'RAW' (softmax on raw bits) or 'MOL' (sample from
-# mixture of logistics)
-voc_upsample_factors = (5, 5, 8) # NB - this needs to correctly factorise hop_length
-voc_rnn_dims = 512
-voc_fc_dims = 512
-voc_compute_dims = 128
-voc_res_out_dims = 128
-voc_res_blocks = 10
-
-# Training
-voc_batch_size = 100
-voc_lr = 1e-4
-voc_gen_at_checkpoint = 5 # number of samples to generate at each checkpoint
-voc_pad = 2 # this will pad the input so that the resnet can 'see' wider
- # than input length
-voc_seq_len = hop_length * 5 # must be a multiple of hop_length
-
-# Generating / Synthesizing
-voc_gen_batched = True # very fast (realtime+) single utterance batched generation
-voc_target = 8000 # target number of samples to be generated in each batch entry
-voc_overlap = 400 # number of samples for crossfading between batches
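The note on `voc_upsample_factors` is the key constraint: the upsampling network must stretch one mel frame into exactly `hop_length` samples. A quick hedged check, assuming the synthesizer's hop_size is 200 (its hparams are not shown in this diff):

```python
import math

voc_upsample_factors = (5, 5, 8)
hop_length = 200  # assumed value of synthesizer.hparams.hop_size; not shown here

# The product of the per-stage factors must equal the hop length exactly,
# otherwise generated audio drifts out of sync with the mel frames.
assert math.prod(voc_upsample_factors) == hop_length
```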
diff --git a/spaces/akhaliq/TensorFlowTTS/README.md b/spaces/akhaliq/TensorFlowTTS/README.md
deleted file mode 100644
index 7190be396548110c3906689484e43bd19fd3ed54..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/TensorFlowTTS/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: TensorFlowTTS
-emoji: 💩
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/akhaliq/deeplab2/trainer/train_lib.py b/spaces/akhaliq/deeplab2/trainer/train_lib.py
deleted file mode 100644
index ab2e809a34c619249e8197ec965989282c15f6fa..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/trainer/train_lib.py
+++ /dev/null
@@ -1,206 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""This file contains code to create run an experiment."""
-import functools
-import os
-from typing import Text, Optional, Sequence
-
-from absl import logging
-import orbit
-import tensorflow as tf
-
-from deeplab2 import common
-from deeplab2 import config_pb2
-from deeplab2.data import dataset
-from deeplab2.model import deeplab
-from deeplab2.model.loss import loss_builder
-from deeplab2.trainer import distribution_utils
-from deeplab2.trainer import evaluator as evaluator_lib
-from deeplab2.trainer import runner_utils
-from deeplab2.trainer import trainer as trainer_lib
-from deeplab2.video import motion_deeplab
-from deeplab2.video import vip_deeplab
-
-_INSTANCE_LAYER_NAMES = (common.CKPT_MOTION_REGRESSION_HEAD_LAST_LAYER,
- common.CKPT_INSTANCE_REGRESSION_HEAD_LAST_LAYER,
- common.CKPT_INSTANCE_CENTER_HEAD_LAST_LAYER)
-# For Motion-Deeplab, 6 channels are used as input (2x RGB) during inference.
-# Its 7th input channel is obtained by the predicted center heatmap of
-# previous frame.
-_TWO_FRAME_MOTION_DEEPLAB_INPUT_CHANNELS = 6
-# All other networks use 3 channels as input (RGB).
-_SINGLE_FRAME_INPUT_CHANNELS = 3
-
-
-def create_deeplab_model(
- config: config_pb2.ExperimentOptions,
- dataset_descriptor: dataset.DatasetDescriptor) -> tf.keras.Model:
- """Creates DeepLab model based on config."""
- if config.model_options.WhichOneof('meta_architecture') == 'motion_deeplab':
- return motion_deeplab.MotionDeepLab(config, dataset_descriptor)
- elif config.model_options.WhichOneof('meta_architecture') == 'vip_deeplab':
- return vip_deeplab.ViPDeepLab(config, dataset_descriptor)
- else:
- return deeplab.DeepLab(config, dataset_descriptor)
-
-
-def build_deeplab_model(deeplab_model: tf.keras.Model,
- crop_size: Sequence[int],
- batch_size: Optional[int] = None):
- """Builds DeepLab model with input crop size."""
- if isinstance(deeplab_model, motion_deeplab.MotionDeepLab) or isinstance(
- deeplab_model, vip_deeplab.ViPDeepLab):
- # Motion-DeepLab and ViP-DeepLab use the input differently despite that
- # the input_shape is the same: Motion-DeepLab uses two frames as one input,
- # while ViP-DeepLab splits the two frames first and passes them individually
- # to the backbone encoder.
- input_shape = list(crop_size) + [_TWO_FRAME_MOTION_DEEPLAB_INPUT_CHANNELS]
- deeplab_model(
- tf.keras.Input(input_shape, batch_size=batch_size), training=False)
- else:
- input_shape = list(crop_size) + [_SINGLE_FRAME_INPUT_CHANNELS]
- deeplab_model(
- tf.keras.Input(input_shape, batch_size=batch_size), training=False)
- return input_shape
-
-
-def run_experiment(mode: Text, config: config_pb2.ExperimentOptions,
- model_dir: Text, tpu: Optional[Text], num_gpus: int):
- """Runs an experiment.
-
- Args:
- mode: A string specifying the mode of the experiment. Supported are `train`,
- `train_and_eval`, `eval` and `continuous_eval`.
- config: A config_pb2.ExperimentOptions configuration.
- model_dir: A path to store all checkpoints and other experimental artifacts.
- tpu: The name or address of the tpu to connect to, if any.
- num_gpus: An integer specifying the number of GPUs to use. If mode contains
- `eval`, num_gpus must be less or equal to 1.
-
- Raises:
- ValueError: If mode is none of `train`, `train_and_eval`, `eval`, or
- `continuous_eval`.
- ValueError: If mode is `train_and_eval`, but different dataset_names are
- specified for training and evaluation. This error could be relaxed for
-      applications like domain transfer learning (e.g., synthetic to real
- datasets), which has not been fully tested yet.
- ValueError: If mode includes `eval` and num_gpus > 1. Currently, evaluation
- is not supported on more than a single GPU.
- """
- strategy = distribution_utils.create_strategy(tpu, num_gpus)
- logging.info('Using strategy %s with %d replicas', type(strategy),
- strategy.num_replicas_in_sync)
-
- if 'eval' in mode:
- dataset_name = config.eval_dataset_options.dataset
- if (mode == 'train_and_eval' and
- dataset_name != config.train_dataset_options.dataset):
-      raise ValueError('Using different dataset_names in train_and_eval mode.')
- if num_gpus > 1:
- raise ValueError(
- 'Using more than one GPU for evaluation is not supported.')
- else:
- dataset_name = config.train_dataset_options.dataset
-
- num_classes = dataset.MAP_NAME_TO_DATASET_INFO[dataset_name].num_classes
- ignore_label = dataset.MAP_NAME_TO_DATASET_INFO[dataset_name].ignore_label
- class_has_instances_list = (
- dataset.MAP_NAME_TO_DATASET_INFO[dataset_name].class_has_instances_list)
-
- trainer = None
- evaluator = None
- with strategy.scope():
- deeplab_model = create_deeplab_model(
- config, dataset.MAP_NAME_TO_DATASET_INFO[dataset_name])
- losses = loss_builder.DeepLabFamilyLoss(config.trainer_options.loss_options,
- num_classes, ignore_label,
- class_has_instances_list)
- global_step = orbit.utils.create_global_step()
- if 'train' in mode:
- trainer = trainer_lib.Trainer(config, deeplab_model, losses, global_step)
- if 'eval' in mode:
- evaluator = evaluator_lib.Evaluator(config, deeplab_model, losses,
- global_step, model_dir)
-
- checkpoint_dict = dict(global_step=global_step)
- checkpoint_dict.update(deeplab_model.checkpoint_items)
- if trainer is not None:
- checkpoint_dict['optimizer'] = trainer.optimizer
- if trainer.backbone_optimizer is not None:
- checkpoint_dict['backbone_optimizer'] = trainer.backbone_optimizer
- checkpoint = tf.train.Checkpoint(**checkpoint_dict)
-
- # Define items to load from initial checkpoint.
- init_dict = deeplab_model.checkpoint_items
- if (not config.model_options
- .restore_semantic_last_layer_from_initial_checkpoint):
- del init_dict[common.CKPT_SEMANTIC_LAST_LAYER]
- if (not config.model_options
- .restore_instance_last_layer_from_initial_checkpoint):
- for layer_name in _INSTANCE_LAYER_NAMES:
- if layer_name in init_dict:
- del init_dict[layer_name]
- init_fn = functools.partial(runner_utils.maybe_load_checkpoint,
- config.model_options.initial_checkpoint,
- init_dict)
- checkpoint_manager = tf.train.CheckpointManager(
- checkpoint,
- directory=model_dir,
- max_to_keep=config.trainer_options.num_checkpoints_to_keep,
- step_counter=global_step,
- checkpoint_interval=config.trainer_options.save_checkpoints_steps,
- init_fn=init_fn)
-
- controller = orbit.Controller(
- strategy=strategy,
- trainer=trainer,
- evaluator=evaluator,
- global_step=global_step,
- steps_per_loop=config.trainer_options.steps_per_loop,
- checkpoint_manager=checkpoint_manager,
- summary_interval=config.trainer_options.save_summaries_steps,
- summary_dir=os.path.join(model_dir, 'train'),
- eval_summary_dir=os.path.join(model_dir, 'eval'))
-
- with strategy.scope():
- # Save initial checkpoint.
- if 'train' in mode:
- crop_size = list(config.train_dataset_options.crop_size)
- # Build model before saving.
- build_deeplab_model(deeplab_model, crop_size)
- controller.save_checkpoint()
- if mode == 'train':
- controller.train(
- steps=config.trainer_options.solver_options.training_number_of_steps)
- elif mode == 'train_and_eval':
- # Interleave training and evaluation.
- controller.train_and_evaluate(
- train_steps=(
- config.trainer_options.solver_options.training_number_of_steps),
- eval_steps=config.evaluator_options.eval_steps,
- eval_interval=config.evaluator_options.eval_interval)
- elif mode == 'eval':
- controller.evaluate(steps=config.evaluator_options.eval_steps)
- elif mode == 'continuous_eval':
- # Monitor the checkpoint directory for new checkpoints to evaluate.
- timeout = config.evaluator_options.continuous_eval_timeout
- if timeout == -1:
- # Wait forever
- timeout = None
- controller.evaluate_continuously(
- steps=config.evaluator_options.eval_steps, timeout=timeout)
- else:
- raise ValueError('Mode %s is not a valid mode.' % mode)
diff --git a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/1.html b/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/1.html
deleted file mode 100644
index 021d7af124814b79e971bad67e5035c5b10c6666..0000000000000000000000000000000000000000
--- a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/1.html
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-
- brax visualizer
-
-
-
-
-
-
-
-
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/plugin.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/plugin.py
deleted file mode 100644
index 958ca21a3e2d316dcc1ef0e49146355b8871d483..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/plugin.py
+++ /dev/null
@@ -1,69 +0,0 @@
-"""
- pygments.plugin
- ~~~~~~~~~~~~~~~
-
- Pygments setuptools plugin interface. The methods defined
- here also work if setuptools isn't installed but they just
- return nothing.
-
- lexer plugins::
-
- [pygments.lexers]
- yourlexer = yourmodule:YourLexer
-
- formatter plugins::
-
- [pygments.formatters]
- yourformatter = yourformatter:YourFormatter
- /.ext = yourformatter:YourFormatter
-
- As you can see, you can define extensions for the formatter
- with a leading slash.
-
- syntax plugins::
-
- [pygments.styles]
- yourstyle = yourstyle:YourStyle
-
- filter plugin::
-
- [pygments.filter]
- yourfilter = yourfilter:YourFilter
-
-
- :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-LEXER_ENTRY_POINT = 'pygments.lexers'
-FORMATTER_ENTRY_POINT = 'pygments.formatters'
-STYLE_ENTRY_POINT = 'pygments.styles'
-FILTER_ENTRY_POINT = 'pygments.filters'
-
-
-def iter_entry_points(group_name):
- try:
- from pip._vendor import pkg_resources
- except (ImportError, OSError):
- return []
-
- return pkg_resources.iter_entry_points(group_name)
-
-
-def find_plugin_lexers():
- for entrypoint in iter_entry_points(LEXER_ENTRY_POINT):
- yield entrypoint.load()
-
-
-def find_plugin_formatters():
- for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_styles():
- for entrypoint in iter_entry_points(STYLE_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
-
-
-def find_plugin_filters():
- for entrypoint in iter_entry_points(FILTER_ENTRY_POINT):
- yield entrypoint.name, entrypoint.load()
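The entry-point groups documented in the module docstring are declared on the plugin side with setuptools; a hedged sketch of a third-party package registering a custom lexer (all names are hypothetical):

```python
# setup.py of a hypothetical package providing a custom Pygments lexer.
from setuptools import setup

setup(
    name="pygments-mylang",
    version="0.1",
    py_modules=["mylang_lexer"],
    entry_points={
        "pygments.lexers": [
            # iter_entry_points('pygments.lexers') will discover this entry.
            "mylang = mylang_lexer:MyLangLexer",
        ],
    },
)
```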
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/packages.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/packages.py
deleted file mode 100644
index 9582fa730f121634348a79c1a8b0cc2df99c616f..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/packages.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import sys
-
-# This code exists for backwards compatibility reasons.
-# I don't like it either. Just look the other way. :)
-
-for package in ('urllib3', 'idna', 'chardet'):
- vendored_package = "pip._vendor." + package
- locals()[package] = __import__(vendored_package)
- # This traversal is apparently necessary such that the identities are
- # preserved (requests.packages.urllib3.* is urllib3.*)
- for mod in list(sys.modules):
- if mod == vendored_package or mod.startswith(vendored_package + '.'):
- unprefixed_mod = mod[len("pip._vendor."):]
- sys.modules['pip._vendor.requests.packages.' + unprefixed_mod] = sys.modules[mod]
-
-# Kinda cool, though, right?
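The aliasing loop above makes the legacy `requests.packages.*` import paths resolve to the vendored modules themselves; a small hedged check, assuming a standard pip install where the vendored packages are present:

```python
import sys

from pip._vendor.requests import packages  # executes the aliasing loop above
from pip._vendor import urllib3

# Identity is preserved: the legacy dotted path names the very same module object.
assert sys.modules["pip._vendor.requests.packages.urllib3"] is urllib3
```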
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/modeling_encoder_decoder.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/modeling_encoder_decoder.py
deleted file mode 100644
index a4d8a9cdead5c33b9c4507d1bf38277ef05e3f91..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/Huggingface/Transformers/src/transformers/modeling_encoder_decoder.py
+++ /dev/null
@@ -1,1410 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Classes to support Encoder-Decoder architectures """
-
-
-import logging
-import os
-
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss
-from torch.nn import functional as F
-
-
-logger = logging.getLogger(__name__)
-
-
-class PreTrainedEncoderDecoder(nn.Module):
- r"""
- :class:`~transformers.PreTrainedEncoderDecoder` is a generic model class that will be
- instantiated as a transformer architecture with one of the base model
- classes of the library as encoder and (optionally) another one as
- decoder when created with the `AutoModel.from_pretrained(pretrained_model_name_or_path)`
- class method.
- """
-
- def __init__(self, encoder, decoder):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- # manually set the self.config
- self.config = decoder.config
- self.config.is_encoder_decoder = True
-
- @classmethod
- def from_pretrained(
- cls,
- encoder_pretrained_model_name_or_path=None,
- decoder_pretrained_model_name_or_path=None,
- *model_args,
- **kwargs,
- ):
- r"""Instantiates an encoder and a decoder from one or two base classes of the library from pre-trained model checkpoints.
-
-
- The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated)
- To train the model, you need to first set it back in training mode with `model.train()`
-
- Params:
- encoder_pretrained_model_name_or_path: information necessary to initiate the encoder. Either:
-
- - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
- - a string with the `identifier name` of a pre-trained model that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- - a path to a `directory` containing model weights saved using :func:`~transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/encoder``.
- - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
- decoder_pretrained_model_name_or_path: information necessary to initiate the decoder. Either:
-
- - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
- - a string with the `identifier name` of a pre-trained model that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- - a path to a `directory` containing model weights saved using :func:`~transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/decoder``.
- - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
- model_args: (`optional`) Sequence of positional arguments:
-                All remaining positional arguments will be passed to the underlying model's ``__init__`` method
-
- config: (`optional`) instance of a class derived from :class:`~transformers.PretrainedConfig`:
-                Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
-
- - the model is a model provided by the library (loaded with the ``shortcut-name`` string of a pretrained model), or
-                - the model was saved using :func:`~transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory.
-                - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.
-
- state_dict: (`optional`) dict:
-                an optional state dictionary for the model to use instead of a state dictionary loaded from the saved weights file.
- This option can be used if you want to create a model from a pretrained configuration but load your own weights.
- In this case though, you should check if using :func:`~transformers.PreTrainedModel.save_pretrained` and :func:`~transformers.PreTrainedModel.from_pretrained` is not a simpler option.
-
- cache_dir: (`optional`) string:
- Path to a directory in which a downloaded pre-trained model
- configuration should be cached if the standard cache should not be used.
-
- force_download: (`optional`) boolean, default False:
-                Force a (re-)download of the model weights and configuration files and override the cached versions if they exist.
-
- proxies: (`optional`) dict, default None:
- A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
- The proxies are used on each request.
-
- output_loading_info: (`optional`) boolean:
-                Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.
-
- kwargs: (`optional`) Remaining dictionary of keyword arguments.
- Can be used to update the configuration object (after it being loaded) and initiate the model. (e.g. ``output_attention=True``). Behave differently depending on whether a `config` is provided or automatically loaded:
-
- - If a configuration is provided with ``config``, ``**kwargs`` will be directly passed to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done)
- - If a configuration is not provided, ``kwargs`` will be first passed to the configuration class initialization function (:func:`~transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied ``kwargs`` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
-
-                You can specify kwargs specific to the encoder and decoder by prefixing the key with `encoder_` and `decoder_` respectively (e.g. ``decoder_output_attention=True``). The remaining kwargs will be passed to both the encoder and the decoder.
-
- Examples::
-
- # For example purposes. Not runnable.
- model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert
- """
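-        # Illustrative sketch (not part of the original example above): kwargs are routed
-        # by prefix, assuming the standard `output_attentions` / `output_hidden_states`
-        # configuration flags:
-        #   model = PreTrainedEncoderDecoder.from_pretrained(
-        #       "bert-base-uncased", "bert-base-uncased",
-        #       output_attentions=True,             # applied to both encoder and decoder
-        #       decoder_output_hidden_states=True,  # applied to the decoder only
-        #   )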
-
- # keyword arguments come in 3 flavors: encoder-specific (prefixed by
- # `encoder_`), decoder-specific (prefixed by `decoder_`) and those
- # that apply to the model as a whole.
- # We let the specific kwargs override the common ones in case of conflict.
- kwargs_common = {
- argument: value
- for argument, value in kwargs.items()
- if not argument.startswith("encoder_")
- and not argument.startswith("decoder_")
- }
- kwargs_decoder = kwargs_common.copy()
- kwargs_encoder = kwargs_common.copy()
- kwargs_encoder.update(
- {
- argument[len("encoder_") :]: value
- for argument, value in kwargs.items()
- if argument.startswith("encoder_")
- }
- )
- kwargs_decoder.update(
- {
- argument[len("decoder_") :]: value
- for argument, value in kwargs.items()
- if argument.startswith("decoder_")
- }
- )
-
- # Load and initialize the encoder and decoder
- # The distinction between encoder and decoder at the model level is made
- # by the value of the flag `is_decoder` that we need to set correctly.
- encoder = kwargs_encoder.pop("model", None)
- if encoder is None:
- encoder = AutoModel.from_pretrained(
- encoder_pretrained_model_name_or_path, *model_args, **kwargs_encoder
- )
- encoder.config.is_decoder = False
-
- decoder = kwargs_decoder.pop("model", None)
- if decoder is None:
- decoder = AutoModelWithLMHead.from_pretrained(
- decoder_pretrained_model_name_or_path, **kwargs_decoder
- )
- decoder.config.is_decoder = True
-
- model = cls(encoder, decoder)
-
- return model
-
- def save_pretrained(self, save_directory):
- """Save a Seq2Seq model and its configuration file in a format such
-        that it can be loaded using :func:`~transformers.PreTrainedEncoderDecoder.from_pretrained`
-
-        We save the encoder's and decoder's parameters in two separate directories.
- """
-
- # If the root output directory does not exist, create it
- if not os.path.exists(save_directory):
- os.mkdir(save_directory)
-
- # Check whether the output directory is empty or not
- sub_directories = [
- directory
- for directory in os.listdir(save_directory)
- if os.path.isdir(os.path.join(save_directory, directory))
- ]
-
- if len(sub_directories) > 0:
- if "encoder" in sub_directories and "decoder" in sub_directories:
- print(
- "WARNING: there is an older version of encoder-decoder saved in"
- + " the output directory. The default behaviour is to overwrite them."
- )
-
- # Empty the output directory
- for directory_to_remove in sub_directories:
- # Remove all files into the subdirectory
- files_to_remove = os.listdir(
- os.path.join(save_directory, directory_to_remove)
- )
- for file_to_remove in files_to_remove:
- os.remove(
- os.path.join(
- save_directory, directory_to_remove, file_to_remove
- )
- )
- # Remove the subdirectory itself
- os.rmdir(os.path.join(save_directory, directory_to_remove))
-
- assert len(os.listdir(save_directory)) == 0 # sanity check
-
- # Create the "encoder" directory inside the output directory and save the encoder into it
- if not os.path.exists(os.path.join(save_directory, "encoder")):
- os.mkdir(os.path.join(save_directory, "encoder"))
- self.encoder.save_pretrained(os.path.join(save_directory, "encoder"))
-
-        # Create the "decoder" directory inside the output directory and save the decoder into it
- if not os.path.exists(os.path.join(save_directory, "decoder")):
- os.mkdir(os.path.join(save_directory, "decoder"))
- self.decoder.save_pretrained(os.path.join(save_directory, "decoder"))
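-        # Resulting layout (sketch): <save_directory>/encoder/ and <save_directory>/decoder/,
-        # each holding that sub-model's config and weights; the pair can then be reloaded with
-        # from_pretrained(os.path.join(save_directory, "encoder"), os.path.join(save_directory, "decoder")).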
-
- @staticmethod
- def prepare_model_kwargs(**kwargs):
- """Prepare the encoder and decoder's keyword arguments.
- Keyword arguments come in 3 flavors:
- - encoder-specific (prefixed by `encoder_`)
- - decoder-specific (prefixed by `decoder_`)
- - those that apply to the model as whole.
- We let the specific kwargs override the common ones in case of
- conflict.
- """
- kwargs_common = {
- argument: value
- for argument, value in kwargs.items()
- if not argument.startswith("encoder_")
- and not argument.startswith("decoder_")
- }
- decoder_kwargs = kwargs_common.copy()
- encoder_kwargs = kwargs_common.copy()
- encoder_kwargs.update(
- {
- argument[len("encoder_") :]: value
- for argument, value in kwargs.items()
- if argument.startswith("encoder_")
- }
- )
- decoder_kwargs.update(
- {
- argument[len("decoder_") :]: value
- for argument, value in kwargs.items()
- if argument.startswith("decoder_")
- }
- )
- decoder_kwargs["encoder_attention_mask"] = encoder_kwargs.get(
- "attention_mask", None
- )
- return encoder_kwargs, decoder_kwargs
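-        # Worked example (hypothetical tensors `mask` and `types`):
-        #   enc_kw, dec_kw = PreTrainedEncoderDecoder.prepare_model_kwargs(
-        #       attention_mask=mask, encoder_token_type_ids=types, decoder_use_cache=True)
-        #   # enc_kw -> {"attention_mask": mask, "token_type_ids": types}
-        #   # dec_kw -> {"attention_mask": mask, "use_cache": True,
-        #   #            "encoder_attention_mask": mask}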
-
- def forward(self, encoder_input_ids=None, decoder_input_ids=None, **kwargs):
-        """The forward pass on a seq2seq model depends on what we are performing:
-
- - During training we perform one forward pass through both the encoder
- and decoder;
- - During prediction, we perform one forward pass through the encoder,
- and then perform several forward passes with the encoder's hidden
- state through the decoder to decode a full sequence.
-
- Therefore, we skip the forward pass on the encoder if an argument named
- `encoder_hidden_state` is passed to this function.
-
- Params:
- encoder_input_ids: ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``
- Indices of encoder input sequence tokens in the vocabulary.
- decoder_input_ids: ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``
- Indices of decoder input sequence tokens in the vocabulary.
- kwargs: (`optional`) Remaining dictionary of keyword arguments.
- """
- kwargs_encoder, kwargs_decoder = self.prepare_model_kwargs(**kwargs)
-
- # Encode if needed (training, first prediction pass)
- encoder_hidden_states = kwargs_encoder.pop("hidden_states", None)
- if encoder_hidden_states is None:
- encoder_outputs = self.encoder(encoder_input_ids, **kwargs_encoder)
- encoder_hidden_states = encoder_outputs[0]
- else:
- encoder_outputs = ()
-
- kwargs_decoder["encoder_hidden_states"] = encoder_hidden_states
- decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder)
-
- return decoder_outputs + encoder_outputs
-
- def prepare_inputs_for_generation(self, input_ids, past, attention_mask, **kwargs):
- assert past is not None, "past has to be defined for encoder_outputs"
-
- # first step
- if type(past) is tuple:
- encoder_outputs = past
- else:
- encoder_outputs = (past,)
-
- return {
- "decoder_input_ids": input_ids,
- "encoder_outputs": encoder_outputs,
- "encoder_hidden_states": encoder_outputs[0],
- "decoder_attention_mask": None,
- }
-
- def prepare_scores_for_generation(self, scores, **kwargs):
- return scores
-
- def _do_output_past(self, outputs):
- """During generation, decide whether to pass the `past` variable to the next forward pass."""
- has_output_past = getattr(self.config, "output_past", False)
- mem_len = getattr(self.config, "mem_len", 0)
- if len(outputs) <= 1:
- return False
- if mem_len > 0 or has_output_past:
- return True
- return False
-
- def enforce_repetition_penalty_(
- self, lprobs, batch_size, num_beams, prev_output_tokens, repetition_penalty
- ):
- """repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)."""
- for i in range(batch_size * num_beams):
- for previous_token in set(prev_output_tokens[i].tolist()):
- # if score < 0 then repetition penalty has to multiplied to reduce the previous token probability
- if lprobs[i, previous_token] < 0:
- lprobs[i, previous_token] *= repetition_penalty
- else:
- lprobs[i, previous_token] /= repetition_penalty
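-        # For example, with repetition_penalty=1.2 a previously generated token whose logit
-        # is -2.0 becomes -2.4, while one whose logit is 2.0 becomes ~1.67, so both are less
-        # likely to be generated again (illustrative numbers).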
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- def get_output_embeddings(self):
- return self.decoder.get_output_embeddings()
-
- @torch.no_grad()
- def generate(
- self,
- input_ids=None,
- max_length=None,
- min_length=None,
- do_sample=None,
- early_stopping=None,
- num_beams=None,
- temperature=None,
- top_k=None,
- top_p=None,
- repetition_penalty=None,
- bad_words_ids=None,
- bos_token_id=None,
- pad_token_id=None,
- eos_token_id=None,
- length_penalty=None,
- no_repeat_ngram_size=None,
- num_return_sequences=None,
- attention_mask=None,
- decoder_start_token_id=None,
- ):
- r"""Generates sequences for models with a LM head. The method currently supports greedy decoding, beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling.
-
- Adapted in part from `Facebook's XLM beam search code`_.
-
- .. _`Facebook's XLM beam search code`:
- https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529
-
-
- Parameters:
-
- input_ids: (`optional`) `torch.LongTensor` of shape `(batch_size, sequence_length)`
- The sequence used as a prompt for the generation. If `None` the method initializes
- it as an empty `torch.LongTensor` of shape `(1,)`.
-
- max_length: (`optional`) int
- The max length of the sequence to be generated. Between `min_length` and infinity. Default to 20.
-
- min_length: (`optional`) int
- The min length of the sequence to be generated. Between 0 and infinity. Default to 0.
-
- do_sample: (`optional`) bool
- If set to `False` greedy decoding is used. Otherwise sampling is used. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
-
- early_stopping: (`optional`) bool
- if set to `True` beam search is stopped when at least `num_beams` sentences finished per batch. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`.
-
- num_beams: (`optional`) int
- Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Default to 1.
-
- temperature: (`optional`) float
- The value used to module the next token probabilities. Must be strictly positive. Default to 1.0.
-
- top_k: (`optional`) int
- The number of highest probability vocabulary tokens to keep for top-k-filtering. Between 1 and infinity. Default to 50.
-
- top_p: (`optional`) float
-                The cumulative probability threshold used to select the highest-probability vocabulary tokens kept for nucleus sampling. Must be between 0 and 1. Default to 1.
-
- repetition_penalty: (`optional`) float
- The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Default to 1.0.
-
- pad_token_id: (`optional`) int
-                Padding token. Defaults to the model-specific pad_token_id, or None if it does not exist.
-
- bos_token_id: (`optional`) int
- BOS token. Defaults to `bos_token_id` as defined in the models config.
-
- eos_token_id: (`optional`) int
- EOS token. Defaults to `eos_token_id` as defined in the models config.
-
- length_penalty: (`optional`) float
- Exponential penalty to the length. Default to 1.
-
- no_repeat_ngram_size: (`optional`) int
- If set to int > 0, all ngrams of size `no_repeat_ngram_size` can only occur once.
- bad_words_ids: (`optional`) list of lists of int
- `bad_words_ids` contains tokens that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use `tokenizer.encode(bad_word, add_prefix_space=True)`.
-
- num_return_sequences: (`optional`) int
- The number of independently computed returned sequences for each element in the batch. Default to 1.
-
- attention_mask (`optional`) obj: `torch.LongTensor` of same shape as `input_ids`
- Mask to avoid performing attention on padding token indices.
- Mask values selected in ``[0, 1]``:
- ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
- Defaults to `None`.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
-
-            decoder_start_token_id: (`optional`) int
- If an encoder-decoder model starts decoding with a different token than BOS.
- Defaults to `None` and is changed to `BOS` later.
-
- Return:
-
- output: `torch.LongTensor` of shape `(batch_size * num_return_sequences, sequence_length)`
- sequence_length is either equal to max_length or shorter if all batches finished early due to the `eos_token_id`
-
- Examples::
-
- tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
- outputs = model.generate(max_length=40) # do greedy decoding
- print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
-
- tokenizer = AutoTokenizer.from_pretrained('openai-gpt') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('openai-gpt') # Download model and configuration from S3 and cache.
- input_context = 'The dog'
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
-            outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5)  # generate 3 independent sequences using beam search decoding (5 beams) from the initial context 'The dog'
- for i in range(3): # 3 output sequences were generated
- print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
-
- tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
- input_context = 'The dog'
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
-            outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, do_sample=True, num_return_sequences=3)  # generate 3 sequences by sampling
- for i in range(3): # 3 output sequences were generated
- print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
-
- tokenizer = AutoTokenizer.from_pretrained('ctrl') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('ctrl') # Download model and configuration from S3 and cache.
- input_context = 'Legal My neighbor is' # "Legal" is one of the control codes for ctrl
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2) # generate sequences
- print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
-
- tokenizer = AutoTokenizer.from_pretrained('gpt2') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('gpt2') # Download model and configuration from S3 and cache.
-            input_context = 'My cute dog'
- bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']]
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) # generate sequences without allowing bad_words to be generated
- """
-
- # We cannot generate if the model does not have a LM head
- if self.get_output_embeddings() is None:
- raise AttributeError(
- "You tried to generate sequences with a model that does not have a LM Head."
- "Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
- )
-
- max_length = max_length if max_length is not None else self.config.max_length
- min_length = min_length if min_length is not None else self.config.min_length
- do_sample = do_sample if do_sample is not None else self.config.do_sample
- early_stopping = (
- early_stopping if early_stopping is not None else self.config.early_stopping
- )
- num_beams = num_beams if num_beams is not None else self.config.num_beams
- temperature = (
- temperature if temperature is not None else self.config.temperature
- )
- top_k = top_k if top_k is not None else self.config.top_k
- top_p = top_p if top_p is not None else self.config.top_p
- repetition_penalty = (
- repetition_penalty
- if repetition_penalty is not None
- else self.config.repetition_penalty
- )
- bos_token_id = (
- bos_token_id if bos_token_id is not None else self.config.bos_token_id
- )
- pad_token_id = (
- pad_token_id if pad_token_id is not None else self.config.pad_token_id
- )
- eos_token_id = (
- eos_token_id if eos_token_id is not None else self.config.eos_token_id
- )
- length_penalty = (
- length_penalty if length_penalty is not None else self.config.length_penalty
- )
- no_repeat_ngram_size = (
- no_repeat_ngram_size
- if no_repeat_ngram_size is not None
- else self.config.no_repeat_ngram_size
- )
- bad_words_ids = (
- bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids
- )
- num_return_sequences = (
- num_return_sequences
- if num_return_sequences is not None
- else self.config.num_return_sequences
- )
- decoder_start_token_id = (
- decoder_start_token_id
- if decoder_start_token_id is not None
- else self.config.decoder_start_token_id
- )
-
- if input_ids is not None:
-            batch_size = input_ids.shape[0]  # overridden by the input batch_size
- else:
- batch_size = 1
-
- assert (
- isinstance(max_length, int) and max_length > 0
- ), "`max_length` should be a strictly positive integer."
- assert (
- isinstance(min_length, int) and min_length >= 0
- ), "`min_length` should be a positive integer."
- assert isinstance(do_sample, bool), "`do_sample` should be a boolean."
- assert isinstance(early_stopping, bool), "`early_stopping` should be a boolean."
- assert (
- isinstance(num_beams, int) and num_beams > 0
- ), "`num_beams` should be a strictly positive integer."
- assert temperature > 0, "`temperature` should be strictly positive."
- assert (
- isinstance(top_k, int) and top_k >= 0
- ), "`top_k` should be a positive integer."
- assert 0 <= top_p <= 1, "`top_p` should be between 0 and 1."
- assert repetition_penalty >= 1.0, "`repetition_penalty` should be >= 1."
- assert input_ids is not None or (
- isinstance(bos_token_id, int) and bos_token_id >= 0
- ), "If input_ids is not defined, `bos_token_id` should be a positive integer."
- assert pad_token_id is None or (
- isinstance(pad_token_id, int) and (pad_token_id >= 0)
- ), "`pad_token_id` should be a positive integer."
- assert (eos_token_id is None) or (
- isinstance(eos_token_id, int) and (eos_token_id >= 0)
- ), "`eos_token_id` should be a positive integer."
- assert length_penalty > 0, "`length_penalty` should be strictly positive."
- assert (
- isinstance(no_repeat_ngram_size, int) and no_repeat_ngram_size >= 0
- ), "`no_repeat_ngram_size` should be a positive integer."
- assert (
- isinstance(num_return_sequences, int) and num_return_sequences > 0
- ), "`num_return_sequences` should be a strictly positive integer."
- assert (
- bad_words_ids is None
- or isinstance(bad_words_ids, list)
- and isinstance(bad_words_ids[0], list)
- ), "`bad_words_ids` is either `None` or a list of lists of tokens that should not be generated"
-
- if input_ids is None:
- assert isinstance(bos_token_id, int) and bos_token_id >= 0, (
- "you should either supply a context to complete as `input_ids` input "
- "or a `bos_token_id` (integer >= 0) as a first token to start the generation."
- )
- input_ids = torch.full(
- (batch_size, 1),
- bos_token_id,
- dtype=torch.long,
- device=next(self.parameters()).device,
- )
- else:
- assert (
- input_ids.dim() == 2
- ), "Input prompt should be of shape (batch_size, sequence length)."
-
-        # do not allow duplicate outputs when doing greedy decoding
- if do_sample is False:
- if num_beams == 1:
- # no_beam_search greedy generation conditions
- assert (
- num_return_sequences == 1
- ), "Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1"
-
- else:
- # beam_search greedy generation conditions
- assert (
- num_beams >= num_return_sequences
- ), "Greedy beam search decoding cannot return more sequences than it has beams. Please set num_beams >= num_return_sequences"
-
- # create attention mask if necessary
- # TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140
- if (
- (attention_mask is None)
- and (pad_token_id is not None)
- and (pad_token_id in input_ids)
- ):
- attention_mask = input_ids.ne(pad_token_id).long()
- elif attention_mask is None:
- attention_mask = input_ids.new_ones(input_ids.shape)
-
- # set pad_token_id to eos_token_id if not set. Important that this is done after
- # attention_mask is created
- if pad_token_id is None and eos_token_id is not None:
- logger.warning(
- "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(
- eos_token_id
- )
- )
- pad_token_id = eos_token_id
-
- # current position and vocab size
- vocab_size = self.config.vocab_size
-
- # set effective batch size and effective batch multiplier according to do_sample
- if do_sample:
- effective_batch_size = batch_size * num_return_sequences
- effective_batch_mult = num_return_sequences
- else:
- effective_batch_size = batch_size
- effective_batch_mult = 1
-
- if self.config.is_encoder_decoder:
- if decoder_start_token_id is None:
- decoder_start_token_id = bos_token_id
-
- assert (
- decoder_start_token_id is not None
- ), "decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation"
- assert hasattr(
- self, "get_encoder"
- ), "{} should have a 'get_encoder' function defined".format(self)
- assert callable(self.get_encoder), "{} should be a method".format(
- self.get_encoder
- )
-
- # get encoder and store encoder outputs
- encoder = self.get_encoder()
-
- encoder_outputs = encoder(input_ids, attention_mask=attention_mask)
-
- # Expand input ids if num_beams > 1 or num_return_sequences > 1
- if num_return_sequences > 1 or num_beams > 1:
- input_ids_len = input_ids.shape[-1]
- input_ids = input_ids.unsqueeze(1).expand(
- batch_size, effective_batch_mult * num_beams, input_ids_len
- )
- attention_mask = attention_mask.unsqueeze(1).expand(
- batch_size, effective_batch_mult * num_beams, input_ids_len
- )
-
- input_ids = input_ids.contiguous().view(
- effective_batch_size * num_beams, input_ids_len
- ) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
- attention_mask = attention_mask.contiguous().view(
- effective_batch_size * num_beams, input_ids_len
- ) # shape: (batch_size * num_return_sequences * num_beams, cur_len)
-
- if self.config.is_encoder_decoder:
- # create empty decoder_input_ids
- input_ids = torch.full(
- (effective_batch_size * num_beams, 1),
- decoder_start_token_id,
- dtype=torch.long,
- device=next(self.parameters()).device,
- )
- cur_len = 1
-
- assert (
- batch_size == encoder_outputs[0].shape[0]
- ), f"expected encoder_outputs[0] to have 1st dimension bs={batch_size}, got {encoder_outputs[0].shape[0]} "
-
- # expand batch_idx to assign correct encoder output for expanded input_ids (due to num_beams > 1 and num_return_sequences > 1)
- expanded_batch_idxs = (
- torch.arange(batch_size)
- .view(-1, 1)
- .repeat(1, num_beams * effective_batch_mult)
- .view(-1)
- .to(input_ids.device)
- )
- # expand encoder_outputs
- encoder_outputs = (
- encoder_outputs[0].index_select(0, expanded_batch_idxs),
- *encoder_outputs[1:],
- )
-
- else:
- encoder_outputs = None
- cur_len = input_ids.shape[-1]
-
- if num_beams > 1:
- output = self._generate_beam_search(
- input_ids,
- cur_len=cur_len,
- max_length=max_length,
- min_length=min_length,
- do_sample=do_sample,
- early_stopping=early_stopping,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- no_repeat_ngram_size=no_repeat_ngram_size,
- bad_words_ids=bad_words_ids,
- bos_token_id=bos_token_id,
- pad_token_id=pad_token_id,
- decoder_start_token_id=decoder_start_token_id,
- eos_token_id=eos_token_id,
- batch_size=effective_batch_size,
- num_return_sequences=num_return_sequences,
- length_penalty=length_penalty,
- num_beams=num_beams,
- vocab_size=vocab_size,
- encoder_outputs=encoder_outputs,
- attention_mask=attention_mask,
- )
- else:
- output = self._generate_no_beam_search(
- input_ids,
- cur_len=cur_len,
- max_length=max_length,
- min_length=min_length,
- do_sample=do_sample,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- no_repeat_ngram_size=no_repeat_ngram_size,
- bad_words_ids=bad_words_ids,
- bos_token_id=bos_token_id,
- pad_token_id=pad_token_id,
- decoder_start_token_id=decoder_start_token_id,
- eos_token_id=eos_token_id,
- batch_size=effective_batch_size,
- encoder_outputs=encoder_outputs,
- attention_mask=attention_mask,
- )
-
- return output
-
- def _generate_no_beam_search(
- self,
- input_ids,
- cur_len,
- max_length,
- min_length,
- do_sample,
- temperature,
- top_k,
- top_p,
- repetition_penalty,
- no_repeat_ngram_size,
- bad_words_ids,
- bos_token_id,
- pad_token_id,
- eos_token_id,
- decoder_start_token_id,
- batch_size,
- encoder_outputs,
- attention_mask,
- ):
- """Generate sequences for each example without beam search (num_beams == 1).
-        All returned sequences are generated independently.
- """
- # length of generated sentences / unfinished sentences
- unfinished_sents = input_ids.new(batch_size).fill_(1)
- sent_lengths = input_ids.new(batch_size).fill_(max_length)
-
- past = encoder_outputs # defined for encoder-decoder models, None for decoder-only models
-
- while cur_len < max_length:
- model_inputs = self.prepare_inputs_for_generation(
- input_ids, past=past, attention_mask=attention_mask
- )
-
- outputs = self(**model_inputs)
- next_token_logits = outputs[0][:, -1, :]
-
- # if model has past, then set the past variable to speed up decoding
- if self._do_output_past(outputs):
- past = outputs[1]
-
- # repetition penalty from CTRL paper (https://arxiv.org/abs/1909.05858)
- if repetition_penalty != 1.0:
- self.enforce_repetition_penalty_(
- next_token_logits, batch_size, 1, input_ids, repetition_penalty
- )
-
- if no_repeat_ngram_size > 0:
- # calculate a list of banned tokens to prevent repetitively generating the same ngrams
- # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
- banned_tokens = calc_banned_ngram_tokens(
- input_ids, batch_size, no_repeat_ngram_size, cur_len
- )
- for batch_idx in range(batch_size):
- next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float(
- "inf"
- )
-
- if bad_words_ids is not None:
- # calculate a list of banned tokens according to bad words
- banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
-
- for batch_idx in range(batch_size):
- next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float(
- "inf"
- )
-
- # set eos token prob to zero if min_length is not reached
- if eos_token_id is not None and cur_len < min_length:
- next_token_logits[:, eos_token_id] = -float("inf")
-
- if do_sample:
- # Temperature (higher temperature => more likely to sample low probability tokens)
- if temperature != 1.0:
- next_token_logits = next_token_logits / temperature
- # Top-p/top-k filtering
- next_token_logits = top_k_top_p_filtering(
- next_token_logits, top_k=top_k, top_p=top_p
- )
- # Sample
- probs = F.softmax(next_token_logits, dim=-1)
- next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
- else:
- # Greedy decoding
- next_token = torch.argmax(next_token_logits, dim=-1)
-
- # update generations and finished sentences
- if eos_token_id is not None:
- # pad finished sentences if eos_token_id exist
- tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (
- 1 - unfinished_sents
- )
- else:
- tokens_to_add = next_token
-
- input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
-
- if eos_token_id is not None:
- eos_in_sents = tokens_to_add == eos_token_id
- # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
- is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(
- eos_in_sents.long()
- ).bool()
- sent_lengths.masked_fill_(
- is_sents_unfinished_and_token_to_add_is_eos, cur_len + 1
- )
- # unfinished_sents is set to zero if eos in sentence
- unfinished_sents.mul_((~eos_in_sents).long())
-
-            # stop when there is an eos_token in each sentence, or if we exceed the maximum length
- if unfinished_sents.max() == 0:
- break
-
- # extend attention_mask for new generated input if only decoder
- if self.config.is_encoder_decoder is False:
- attention_mask = torch.cat(
- [
- attention_mask,
- attention_mask.new_ones((attention_mask.shape[0], 1)),
- ],
- dim=-1,
- )
-
- cur_len = cur_len + 1
-
- # if there are different sentences lengths in the batch, some batches have to be padded
- if sent_lengths.min().item() != sent_lengths.max().item():
- assert (
- pad_token_id is not None
- ), "`Pad_token_id` has to be defined if batches have different lengths"
- # finished sents are filled with pad_token
- decoded = input_ids.new(batch_size, sent_lengths.max().item()).fill_(
- pad_token_id
- )
- else:
- decoded = input_ids
-
- for hypo_idx, hypo in enumerate(input_ids):
- decoded[hypo_idx, : sent_lengths[hypo_idx]] = hypo[: sent_lengths[hypo_idx]]
-
- return decoded
-
- def _generate_beam_search(
- self,
- input_ids,
- cur_len,
- max_length,
- min_length,
- do_sample,
- early_stopping,
- temperature,
- top_k,
- top_p,
- repetition_penalty,
- no_repeat_ngram_size,
- bad_words_ids,
- bos_token_id,
- pad_token_id,
- eos_token_id,
- decoder_start_token_id,
- batch_size,
- num_return_sequences,
- length_penalty,
- num_beams,
- vocab_size,
- encoder_outputs,
- attention_mask,
- ):
- """Generate sequences for each example with beam search."""
-
- # generated hypotheses
- generated_hyps = [
- BeamHypotheses(
- num_beams, max_length, length_penalty, early_stopping=early_stopping
- )
- for _ in range(batch_size)
- ]
-
- # scores for each sentence in the beam
- beam_scores = torch.zeros(
- (batch_size, num_beams), dtype=torch.float, device=input_ids.device
- )
-
- # for greedy decoding it is made sure that only tokens of the first beam are considered to avoid sampling the exact same tokens three times
- if do_sample is False:
- beam_scores[:, 1:] = -1e9
- beam_scores = beam_scores.view(-1) # shape (batch_size * num_beams,)
-
- # cache compute states
- past = encoder_outputs # defined for encoder-decoder models, None for decoder-only models
-
- # done sentences
- done = [False for _ in range(batch_size)]
-
- while cur_len < max_length:
- model_inputs = self.prepare_inputs_for_generation(
- input_ids, past=past, attention_mask=attention_mask
- )
- outputs = self(
- **model_inputs
- ) # (batch_size * num_beams, cur_len, vocab_size)
- next_token_logits = outputs[0][
- :, -1, :
- ] # (batch_size * num_beams, vocab_size)
-
- # if model has past, then set the past variable to speed up decoding
- if self._do_output_past(outputs):
- past = outputs[1]
-
- # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
- if repetition_penalty != 1.0:
- self.enforce_repetition_penalty_(
- next_token_logits,
- batch_size,
- num_beams,
- input_ids,
- repetition_penalty,
- )
-
- if temperature != 1.0:
- next_token_logits = next_token_logits / temperature
-
- scores = F.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
- if self.config.is_encoder_decoder and do_sample is False:
-                # TODO (PVP) still a bit hacky here - there might be a better solution
- scores = self.prepare_scores_for_generation(
- scores, cur_len=cur_len, max_length=max_length
- )
-
- # set eos token prob to zero if min_length is not reached
- if eos_token_id is not None and cur_len < min_length:
- scores[:, eos_token_id] = -float("inf")
-
- if no_repeat_ngram_size > 0:
- # calculate a list of banned tokens to prevent repetitively generating the same ngrams
- num_batch_hypotheses = batch_size * num_beams
- # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
- banned_batch_tokens = calc_banned_ngram_tokens(
- input_ids, num_batch_hypotheses, no_repeat_ngram_size, cur_len
- )
- for i, banned_tokens in enumerate(banned_batch_tokens):
- scores[i, banned_tokens] = -float("inf")
-
- if bad_words_ids is not None:
- # calculate a list of banned tokens according to bad words
- banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
-
- for i, banned_tokens in enumerate(banned_tokens):
- scores[i, banned_tokens] = -float("inf")
-
- assert scores.shape == (
- batch_size * num_beams,
- vocab_size,
- ), "Shapes of scores: {} != {}".format(
- scores.shape, (batch_size * num_beams, vocab_size)
- )
-
- if do_sample:
- _scores = scores + beam_scores[:, None].expand_as(
- scores
- ) # (batch_size * num_beams, vocab_size)
- # Top-p/top-k filtering
- _scores = top_k_top_p_filtering(
- _scores, top_k=top_k, top_p=top_p, min_tokens_to_keep=2
- ) # (batch_size * num_beams, vocab_size)
- # re-organize to group the beam together to sample from all beam_idxs
- _scores = _scores.contiguous().view(
- batch_size, num_beams * vocab_size
- ) # (batch_size, num_beams * vocab_size)
-
- # Sample 2 next tokens for each beam (so we have some spare tokens and match output of greedy beam search)
- probs = F.softmax(_scores, dim=-1)
- next_tokens = torch.multinomial(
- probs, num_samples=2 * num_beams
- ) # (batch_size, num_beams * 2)
- # Compute next scores
- next_scores = torch.gather(
- _scores, -1, next_tokens
- ) # (batch_size, num_beams * 2)
- # sort the sampled vector to make sure that the first num_beams samples are the best
- next_scores, next_scores_indices = torch.sort(
- next_scores, descending=True, dim=1
- )
- next_tokens = torch.gather(
- next_tokens, -1, next_scores_indices
- ) # (batch_size, num_beams * 2)
-
- else:
- next_scores = scores + beam_scores[:, None].expand_as(
- scores
- ) # (batch_size * num_beams, vocab_size)
-
-                # re-organize to group the beam together (we are keeping the top hypotheses across beams)
- next_scores = next_scores.view(
- batch_size, num_beams * vocab_size
- ) # (batch_size, num_beams * vocab_size)
-
- next_scores, next_tokens = torch.topk(
- next_scores, 2 * num_beams, dim=1, largest=True, sorted=True
- )
-
- assert (
- next_scores.size() == next_tokens.size() == (batch_size, 2 * num_beams)
- )
-
- # next batch beam content
- next_batch_beam = []
-
- # for each sentence
- for batch_idx in range(batch_size):
-
- # if we are done with this sentence
- if done[batch_idx]:
- assert (
- len(generated_hyps[batch_idx]) >= num_beams
- ), "Batch can only be done if at least {} beams have been generated".format(
- num_beams
- )
- assert (
- eos_token_id is not None and pad_token_id is not None
- ), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined"
- next_batch_beam.extend(
- [(0, pad_token_id, 0)] * num_beams
- ) # pad the batch
- continue
-
- # next sentence beam content
- next_sent_beam = []
-
- # next tokens for this sentence
- for beam_token_rank, (beam_token_id, beam_token_score) in enumerate(
- zip(next_tokens[batch_idx], next_scores[batch_idx])
- ):
- # get beam and token IDs
- beam_id = beam_token_id // vocab_size
- token_id = beam_token_id % vocab_size
-
- effective_beam_id = batch_idx * num_beams + beam_id
- # add to generated hypotheses if end of sentence or last iteration
- if (eos_token_id is not None) and (token_id.item() == eos_token_id):
- # if beam_token does not belong to top num_beams tokens, it should not be added
- is_beam_token_worse_than_top_num_beams = (
- beam_token_rank >= num_beams
- )
- if is_beam_token_worse_than_top_num_beams:
- continue
- generated_hyps[batch_idx].add(
- input_ids[effective_beam_id].clone(),
- beam_token_score.item(),
- )
- else:
- # add next predicted token if it is not eos_token
- next_sent_beam.append(
- (beam_token_score, token_id, effective_beam_id)
- )
-
- # the beam for next step is full
- if len(next_sent_beam) == num_beams:
- break
-
-                # Check if we're done so that we can save a pad step if all(done)
- done[batch_idx] = done[batch_idx] or generated_hyps[batch_idx].is_done(
- next_scores[batch_idx].max().item(), cur_len=cur_len
- )
-
- # update next beam content
- assert len(next_sent_beam) == num_beams, "Beam should always be full"
- next_batch_beam.extend(next_sent_beam)
- assert len(next_batch_beam) == num_beams * (batch_idx + 1)
-
- # stop when we are done with each sentence
- if all(done):
- break
-
- # sanity check / prepare next batch
- assert len(next_batch_beam) == batch_size * num_beams
- beam_scores = beam_scores.new([x[0] for x in next_batch_beam])
- beam_tokens = input_ids.new([x[1] for x in next_batch_beam])
- beam_idx = input_ids.new([x[2] for x in next_batch_beam])
-
- # re-order batch
- input_ids = input_ids[beam_idx, :]
- input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1)
- # re-order internal states
- if past is not None:
- past = self._reorder_cache(past, beam_idx)
-
- # extend attention_mask for new generated input if only decoder
- if self.config.is_encoder_decoder is False:
- attention_mask = torch.cat(
- [
- attention_mask,
- attention_mask.new_ones((attention_mask.shape[0], 1)),
- ],
- dim=-1,
- )
-
- # update current length
- cur_len = cur_len + 1
-
-        # finalize all open beam hypotheses and add them to generated hypotheses
- for batch_idx in range(batch_size):
- if done[batch_idx]:
- continue
-
- # test that beam scores match previously calculated scores if not eos and batch_idx not done
- if eos_token_id is not None and all(
-                (token_id % vocab_size).item() != eos_token_id
- for token_id in next_tokens[batch_idx]
- ):
- assert torch.all(
- next_scores[batch_idx, :num_beams]
- == beam_scores.view(batch_size, num_beams)[batch_idx]
- ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format(
- next_scores[:, :num_beams][batch_idx],
- beam_scores.view(batch_size, num_beams)[batch_idx],
- )
-
- # need to add best num_beams hypotheses to generated hyps
- for beam_id in range(num_beams):
- effective_beam_id = batch_idx * num_beams + beam_id
- final_score = beam_scores[effective_beam_id].item()
- final_tokens = input_ids[effective_beam_id]
- generated_hyps[batch_idx].add(final_tokens, final_score)
-
- # depending on whether greedy generation is wanted or not define different output_batch_size and output_num_return_sequences_per_batch
- output_batch_size = (
- batch_size if do_sample else batch_size * num_return_sequences
- )
- output_num_return_sequences_per_batch = 1 if do_sample else num_return_sequences
-
- # select the best hypotheses
- sent_lengths = input_ids.new(output_batch_size)
- best = []
-
- # retrieve best hypotheses
- for i, hypotheses in enumerate(generated_hyps):
- sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0])
- for j in range(output_num_return_sequences_per_batch):
- effective_batch_idx = output_num_return_sequences_per_batch * i + j
- best_hyp = sorted_hyps.pop()[1]
- sent_lengths[effective_batch_idx] = len(best_hyp)
- best.append(best_hyp)
-
- # shorter batches are filled with pad_token
- if sent_lengths.min().item() != sent_lengths.max().item():
- assert pad_token_id is not None, "`Pad_token_id` has to be defined"
- sent_max_len = min(sent_lengths.max().item() + 1, max_length)
- decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id)
-
- # fill with hypothesis and eos_token_id if necessary
- for i, hypo in enumerate(best):
- decoded[i, : sent_lengths[i]] = hypo
- if sent_lengths[i] < max_length:
- decoded[i, sent_lengths[i]] = eos_token_id
- else:
- # none of the hypotheses have an eos_token
-            assert all(len(hypo) == max_length for hypo in best)
- decoded = (
- torch.stack(best).type(torch.long).to(next(self.parameters()).device)
- )
-
- return decoded
-
- # force one of token_ids to be generated by setting prob of all other tokens to 0.
- def _force_token_ids_generation(self, scores, token_ids):
- if isinstance(token_ids, int):
- token_ids = [token_ids]
- all_but_token_ids_mask = torch.tensor(
- [x for x in range(self.config.vocab_size) if x not in token_ids],
- dtype=torch.long,
- device=next(self.parameters()).device,
- )
- assert (
- len(scores.shape) == 2
- ), "scores should be of rank 2 with shape: [batch_size, vocab_size]"
- scores[:, all_but_token_ids_mask] = -float("inf")
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- reordered_past = []
- for layer_past in past:
- # get the correct batch idx from layer past batch dim
- # batch dim of `past` and `mems` is at 2nd position
- reordered_layer_past = [
- layer_past[:, i].unsqueeze(1).clone().detach() for i in beam_idx
- ]
- reordered_layer_past = torch.cat(reordered_layer_past, dim=1)
- # check that shape matches
- assert reordered_layer_past.shape == layer_past.shape
- reordered_past.append(reordered_layer_past)
- past = tuple(reordered_past)
- return past
-
-
-def calc_banned_ngram_tokens(prev_input_ids, num_hypos, no_repeat_ngram_size, cur_len):
-    # Copied from fairseq for no_repeat_ngram in beam_search
- if cur_len + 1 < no_repeat_ngram_size:
- # return no banned tokens if we haven't generated no_repeat_ngram_size tokens yet
- return [[] for _ in range(num_hypos)]
- generated_ngrams = [{} for _ in range(num_hypos)]
- for idx in range(num_hypos):
- gen_tokens = prev_input_ids[idx].tolist()
- generated_ngram = generated_ngrams[idx]
- for ngram in zip(*[gen_tokens[i:] for i in range(no_repeat_ngram_size)]):
- prev_ngram_tuple = tuple(ngram[:-1])
- generated_ngram[prev_ngram_tuple] = generated_ngram.get(
- prev_ngram_tuple, []
- ) + [ngram[-1]]
-
- def _get_generated_ngrams(hypo_idx):
- # Before decoding the next token, prevent decoding of ngrams that have already appeared
- start_idx = cur_len + 1 - no_repeat_ngram_size
- ngram_idx = tuple(prev_input_ids[hypo_idx, start_idx:cur_len].tolist())
- return generated_ngrams[hypo_idx].get(ngram_idx, [])
-
- banned_tokens = [_get_generated_ngrams(hypo_idx) for hypo_idx in range(num_hypos)]
- return banned_tokens
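-# Worked example (hypothetical token ids): with no_repeat_ngram_size=2, cur_len=3 and
-# prev_input_ids = [[5, 7, 5]], the generated bigrams are {(5,): [7], (7,): [5]}; the last
-# context token is (5,), so the banned continuation for that hypothesis is [7].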
-
-
-def calc_banned_bad_words_ids(prev_input_ids, bad_words_ids):
- banned_tokens = []
-
- def _tokens_match(prev_tokens, tokens):
- if len(tokens) == 0:
- # if bad word tokens is just one token always ban it
- return True
- if len(tokens) > len(prev_input_ids):
-            # if bad word tokens are longer than prev input_ids they can't be equal
- return False
-
- if prev_tokens[-len(tokens) :] == tokens:
- # if tokens match
- return True
- else:
- return False
-
- for prev_input_ids_slice in prev_input_ids:
- banned_tokens_slice = []
-
- for banned_token_seq in bad_words_ids:
- assert (
- len(banned_token_seq) > 0
- ), "Banned words token sequences {} cannot have an empty list".format(
- bad_words_ids
- )
-
- if (
- _tokens_match(prev_input_ids_slice.tolist(), banned_token_seq[:-1])
- is False
- ):
- # if tokens do not match continue
- continue
-
- banned_tokens_slice.append(banned_token_seq[-1])
-
- banned_tokens.append(banned_tokens_slice)
-
- return banned_tokens
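-# Worked example (hypothetical ids): with bad_words_ids=[[11, 22], [33]] and a hypothesis
-# ending in token 11, the banned next tokens are [22, 33] (a single-token bad word is
-# always banned).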
-
-
-def top_k_top_p_filtering(
- logits, top_k=0, top_p=1.0, filter_value=-float("Inf"), min_tokens_to_keep=1
-):
- """Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
- Args:
- logits: logits distribution shape (batch size, vocabulary size)
- if top_k > 0: keep only top k tokens with highest probability (top-k filtering).
- if top_p < 1.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
- Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
- Make sure we keep at least min_tokens_to_keep per batch example in the output
- From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
- """
- if top_k > 0:
- top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1)) # Safety check
- # Remove all tokens with a probability less than the last token of the top-k
- indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
- logits[indices_to_remove] = filter_value
-
- if top_p < 1.0:
- sorted_logits, sorted_indices = torch.sort(logits, descending=True)
- cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
-
- # Remove tokens with cumulative probability above the threshold (token with 0 are kept)
- sorted_indices_to_remove = cumulative_probs > top_p
- if min_tokens_to_keep > 1:
- # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
- sorted_indices_to_remove[..., :min_tokens_to_keep] = 0
- # Shift the indices to the right to keep also the first token above the threshold
- sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
- sorted_indices_to_remove[..., 0] = 0
-
- # scatter sorted tensors to original indexing
- indices_to_remove = sorted_indices_to_remove.scatter(
- 1, sorted_indices, sorted_indices_to_remove
- )
- logits[indices_to_remove] = filter_value
- return logits
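-# Minimal usage sketch (hypothetical logits): keep the 50 most likely tokens and the smallest
-# nucleus reaching 0.95 cumulative probability, then sample one token per batch row:
-#   filtered = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=0.95)
-#   next_token = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)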
-
-
-class BeamHypotheses(object):
- def __init__(self, num_beams, max_length, length_penalty, early_stopping):
- """
- Initialize n-best list of hypotheses.
- """
- self.max_length = max_length - 1 # ignoring bos_token
- self.length_penalty = length_penalty
- self.early_stopping = early_stopping
- self.num_beams = num_beams
- self.beams = []
- self.worst_score = 1e9
-
- def __len__(self):
- """
- Number of hypotheses in the list.
- """
- return len(self.beams)
-
- def add(self, hyp, sum_logprobs):
- """
- Add a new hypothesis to the list.
- """
- score = sum_logprobs / len(hyp) ** self.length_penalty
- if len(self) < self.num_beams or score > self.worst_score:
- self.beams.append((score, hyp))
- if len(self) > self.num_beams:
- sorted_scores = sorted(
- [(s, idx) for idx, (s, _) in enumerate(self.beams)]
- )
- del self.beams[sorted_scores[0][1]]
- self.worst_score = sorted_scores[1][0]
- else:
- self.worst_score = min(score, self.worst_score)
-
- def is_done(self, best_sum_logprobs, cur_len=None):
- """
- If there are enough hypotheses and that none of the hypotheses being generated
- can become better than the worst one in the heap, then we are done with this sentence.
- """
-
- if len(self) < self.num_beams:
- return False
- elif self.early_stopping:
- return True
- else:
- if cur_len is None:
- cur_len = self.max_length
- cur_score = best_sum_logprobs / cur_len ** self.length_penalty
- ret = self.worst_score >= cur_score
- return ret
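-
-
-# Minimal usage sketch of BeamHypotheses (hypothetical scores):
-#   hyps = BeamHypotheses(num_beams=2, max_length=20, length_penalty=1.0, early_stopping=False)
-#   hyps.add(torch.tensor([0, 5, 7]), sum_logprobs=-1.3)
-#   hyps.add(torch.tensor([0, 5, 9]), sum_logprobs=-2.1)
-#   hyps.is_done(best_sum_logprobs=-2.5, cur_len=3)  # True: no candidate can beat the worst kept beam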
diff --git a/spaces/aliabid94/crossword/wordmaker.py b/spaces/aliabid94/crossword/wordmaker.py
deleted file mode 100644
index 2f475fcdb370935e9ba14ee4638d6802b1c0b5f2..0000000000000000000000000000000000000000
--- a/spaces/aliabid94/crossword/wordmaker.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from nltk import FreqDist
-from nltk.corpus import brown
-from nltk.stem.lancaster import LancasterStemmer
-import json
-
-WORD_LIMIT = 10000
-MIN_WORD_SIZE, MAX_WORD_SIZE = 4, 10
-
-# stem = LancasterStemmer()
-# frequency_list = FreqDist(i.lower() for i in brown.words())
-# words = [
-# w.lower()
-# for w, _ in frequency_list.most_common()[:WORD_LIMIT]
-# if w.isalpha() and len(w) >= MIN_WORD_SIZE and len(w) <= MAX_WORD_SIZE
-# ]
-# stem_to_words = {}
-# for word in words:
-# stemmed = stem.stem(word)
-# if stemmed not in stem_to_words:
-# stem_to_words[stemmed] = set()
-# stem_to_words[stemmed].add(word)
-
-# final_words = []
-# for stem, words in stem_to_words.items():
-# shortest = min(words, key=len)
-# final_words.append(shortest)
-
-# with open("words.json", "w") as f:
-# f.write(json.dumps(final_words))
-
-with open("jeopardy.json", "r") as f:
- jeopardy = json.loads(f.read())
-
-answers = set()
-for row in jeopardy:
- answer = row["answer"].lower()
- if not answer.isalpha():
- continue
- if answer.startswith("the "):
- answer = answer[4:]
- elif answer.startswith("a "):
- answer = answer[2:]
- if len(answer) < MIN_WORD_SIZE or len(answer) > MAX_WORD_SIZE:
- continue
- answers.add(answer)
-
-with open("words.json", "w") as f:
- f.write(json.dumps(list(answers)))
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test36/app.py b/spaces/allknowingroger/Image-Models-Test36/app.py
deleted file mode 100644
index 001f874b3aeff65feefaaf253f328b41e20b1133..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test36/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "Yntec/Rainbowsphere",
- "Yntec/Dreamsphere",
- "Yntec/Photosphere",
- "Yntec/ElldrethsRetroMix_Diffusers",
- "digiplay/cosfMix_v1",
- "digiplay/Opiate_v2",
- "mittalashish/chique7",
- "matgu23/abtrl",
- "matgu23/ptn",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)  # model_functions is keyed by int indices
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-    #         gr.Markdown("""- Primary prompt: what you want to draw (English words such as a cat; separating terms with English commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test90/README.md b/spaces/allknowingroger/Image-Models-Test90/README.md
deleted file mode 100644
index 2229572c29ff761fdc15391afca78ee5766463d9..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test90/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test89
----
-
-
\ No newline at end of file
diff --git a/spaces/amanmibra/void-demo-aisf/server/main.py b/spaces/amanmibra/void-demo-aisf/server/main.py
deleted file mode 100644
index 10d64bc6e193cad9ca098e97651851847fc0383c..0000000000000000000000000000000000000000
--- a/spaces/amanmibra/void-demo-aisf/server/main.py
+++ /dev/null
@@ -1,64 +0,0 @@
-#
-import sys
-sys.path.append('..')
-
-import os
-from fastapi import FastAPI
-
-# torch
-import torch
-
-# utils
-from preprocess import process_from_filename, process_from_url, process_raw_wav
-from cnn import CNNetwork
-
-# load model
-model = CNNetwork()
-state_dict = torch.load("../models/aisf/void_20230517_113634.pth")
-model.load_state_dict(state_dict)
-
-# TODO: update to grabbing labels stored on model
-LABELS = ["shafqat", "aman", "jake"]
-
-print(f"Model loaded! \n {model}")
-
-app = FastAPI()
-
-@app.get("/")
-async def root():
- return { "message": "Hello World" }
-
-@app.get("/urlpredict")
-def url_predict(url: str):
- wav = process_from_url(url)
-
- model_prediction = model_predict(wav)
- return {
- "message": "Voice Identified!",
- "data": model_prediction,
- }
-
-@app.put("/predict")
-def predict(wav):
- print(f"wav {wav}")
- # return wav
- wav = process_raw_wav(wav)
- model_prediction = model_predict(wav)
-
- return {
- "message": "Voice Identified!",
- "data": model_prediction,
- }
-
-def model_predict(wav):
- model_input = wav.unsqueeze(0)
- output = model(model_input)
- prediction_index = torch.argmax(output, 1).item()
- output = output.detach().cpu().numpy()[0]
-
- return {
- "prediction_index": prediction_index,
- "labels": LABELS,
- "prediction_label": LABELS[prediction_index],
- "prediction_output": output.tolist(),
- }
\ No newline at end of file
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/include/portaudio.h b/spaces/amarchheda/ChordDuplicate/portaudio/include/portaudio.h
deleted file mode 100644
index 5d847314ae9ba0e57468cccf4cfd52f9e100cf45..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/include/portaudio.h
+++ /dev/null
@@ -1,1228 +0,0 @@
-#ifndef PORTAUDIO_H
-#define PORTAUDIO_H
-/*
- * $Id$
- * PortAudio Portable Real-Time Audio Library
- * PortAudio API Header File
- * Latest version available at: http://www.portaudio.com/
- *
- * Copyright (c) 1999-2002 Ross Bencina and Phil Burk
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup public_header
- @brief The portable PortAudio API.
-*/
-
-
-#ifdef __cplusplus
-extern "C"
-{
-#endif /* __cplusplus */
-
-/** Retrieve the release number of the currently running PortAudio build.
- For example, for version "19.5.1" this will return 0x00130501.
-
- @see paMakeVersionNumber
-*/
-int Pa_GetVersion( void );
-
-/** Retrieve a textual description of the current PortAudio build,
- e.g. "PortAudio V19.5.0-devel, revision 1952M".
- The format of the text may change in the future. Do not try to parse the
- returned string.
-
- @deprecated As of 19.5.0, use Pa_GetVersionInfo()->versionText instead.
-*/
-const char* Pa_GetVersionText( void );
-
-/**
- Generate a packed integer version number in the same format used
- by Pa_GetVersion(). Use this to compare a specified version number with
- the currently running version. For example:
-
- @code
- if( Pa_GetVersion() < paMakeVersionNumber(19,5,1) ) {}
- @endcode
-
- @see Pa_GetVersion, Pa_GetVersionInfo
- @version Available as of 19.5.0.
-*/
-#define paMakeVersionNumber(major, minor, subminor) \
- (((major)&0xFF)<<16 | ((minor)&0xFF)<<8 | ((subminor)&0xFF))
-
-
-/**
- A structure containing PortAudio API version information.
- @see Pa_GetVersionInfo, paMakeVersionNumber
- @version Available as of 19.5.0.
-*/
-typedef struct PaVersionInfo {
- int versionMajor;
- int versionMinor;
- int versionSubMinor;
- /**
- This is currently the Git revision hash but may change in the future.
- The versionControlRevision is updated by running a script before compiling the library.
- If the update does not occur, this value may refer to an earlier revision.
- */
- const char *versionControlRevision;
- /** Version as a string, for example "PortAudio V19.5.0-devel, revision 1952M" */
- const char *versionText;
-} PaVersionInfo;
-
-/** Retrieve version information for the currently running PortAudio build.
- @return A pointer to an immutable PaVersionInfo structure.
-
- @note This function can be called at any time. It does not require PortAudio
- to be initialized. The structure pointed to is statically allocated. Do not
- attempt to free it or modify it.
-
- @see PaVersionInfo, paMakeVersionNumber
- @version Available as of 19.5.0.
-*/
-const PaVersionInfo* Pa_GetVersionInfo( void );
-
-
-/** Error codes returned by PortAudio functions.
- Note that with the exception of paNoError, all PaErrorCodes are negative.
-*/
-
-typedef int PaError;
-typedef enum PaErrorCode
-{
- paNoError = 0,
-
- paNotInitialized = -10000,
- paUnanticipatedHostError,
- paInvalidChannelCount,
- paInvalidSampleRate,
- paInvalidDevice,
- paInvalidFlag,
- paSampleFormatNotSupported,
- paBadIODeviceCombination,
- paInsufficientMemory,
- paBufferTooBig,
- paBufferTooSmall,
- paNullCallback,
- paBadStreamPtr,
- paTimedOut,
- paInternalError,
- paDeviceUnavailable,
- paIncompatibleHostApiSpecificStreamInfo,
- paStreamIsStopped,
- paStreamIsNotStopped,
- paInputOverflowed,
- paOutputUnderflowed,
- paHostApiNotFound,
- paInvalidHostApi,
- paCanNotReadFromACallbackStream,
- paCanNotWriteToACallbackStream,
- paCanNotReadFromAnOutputOnlyStream,
- paCanNotWriteToAnInputOnlyStream,
- paIncompatibleStreamHostApi,
- paBadBufferPtr
-} PaErrorCode;
-
-
-/** Translate the supplied PortAudio error code into a human readable
- message.
-*/
-const char *Pa_GetErrorText( PaError errorCode );
-
-
-/** Library initialization function - call this before using PortAudio.
- This function initializes internal data structures and prepares underlying
- host APIs for use. With the exception of Pa_GetVersion(), Pa_GetVersionText(),
- and Pa_GetErrorText(), this function MUST be called before using any other
- PortAudio API functions.
-
- If Pa_Initialize() is called multiple times, each successful
- call must be matched with a corresponding call to Pa_Terminate().
- Pairs of calls to Pa_Initialize()/Pa_Terminate() may overlap, and are not
- required to be fully nested.
-
- Note that if Pa_Initialize() returns an error code, Pa_Terminate() should
- NOT be called.
-
- @return paNoError if successful, otherwise an error code indicating the cause
- of failure.
-
- @see Pa_Terminate
-*/
-PaError Pa_Initialize( void );
-
-
-/** Library termination function - call this when finished using PortAudio.
- This function deallocates all resources allocated by PortAudio since it was
- initialized by a call to Pa_Initialize(). In cases where Pa_Initialize() has
- been called multiple times, each call must be matched with a corresponding call
- to Pa_Terminate(). The final matching call to Pa_Terminate() will automatically
- close any PortAudio streams that are still open.
-
- Pa_Terminate() MUST be called before exiting a program which uses PortAudio.
- Failure to do so may result in serious resource leaks, such as audio devices
- not being available until the next reboot.
-
- @return paNoError if successful, otherwise an error code indicating the cause
- of failure.
-
- @see Pa_Initialize
-*/
-PaError Pa_Terminate( void );
-
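-/* A minimal usage sketch (not part of the original header) of the
-   Pa_Initialize()/Pa_Terminate() pairing described above; note that
-   Pa_Terminate() is skipped when initialization fails:
-
-   #include <stdio.h>
-   #include "portaudio.h"
-
-   int main( void )
-   {
-       PaError err = Pa_Initialize();
-       if( err != paNoError )
-       {
-           fprintf( stderr, "PortAudio error: %s\n", Pa_GetErrorText( err ) );
-           return 1;
-       }
-       // ... open, start, stop and close streams here ...
-       Pa_Terminate();
-       return 0;
-   }
-*/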
-
-
-/** The type used to refer to audio devices. Values of this type usually
- range from 0 to (Pa_GetDeviceCount()-1), and may also take on the paNoDevice
- and paUseHostApiSpecificDeviceSpecification values.
-
- @see Pa_GetDeviceCount, paNoDevice, paUseHostApiSpecificDeviceSpecification
-*/
-typedef int PaDeviceIndex;
-
-
-/** A special PaDeviceIndex value indicating that no device is available,
- or should be used.
-
- @see PaDeviceIndex
-*/
-#define paNoDevice ((PaDeviceIndex)-1)
-
-
-/** A special PaDeviceIndex value indicating that the device(s) to be used
- are specified in the host api specific stream info structure.
-
- @see PaDeviceIndex
-*/
-#define paUseHostApiSpecificDeviceSpecification ((PaDeviceIndex)-2)
-
-
-/* Host API enumeration mechanism */
-
-/** The type used to enumerate host APIs at runtime. Values of this type
- range from 0 to (Pa_GetHostApiCount()-1).
-
- @see Pa_GetHostApiCount
-*/
-typedef int PaHostApiIndex;
-
-
-/** Retrieve the number of available host APIs. Even if a host API is
- available it may have no devices available.
-
- @return A non-negative value indicating the number of available host APIs
- or, a PaErrorCode (which are always negative) if PortAudio is not initialized
- or an error is encountered.
-
- @see PaHostApiIndex
-*/
-PaHostApiIndex Pa_GetHostApiCount( void );
-
-
-/** Retrieve the index of the default host API. The default host API will be
- the lowest common denominator host API on the current platform and is
- unlikely to provide the best performance.
-
- @return A non-negative value ranging from 0 to (Pa_GetHostApiCount()-1)
- indicating the default host API index or, a PaErrorCode (which are always
- negative) if PortAudio is not initialized or an error is encountered.
-*/
-PaHostApiIndex Pa_GetDefaultHostApi( void );
-
-
-/** Unchanging unique identifiers for each supported host API. This type
- is used in the PaHostApiInfo structure. The values are guaranteed to be
- unique and to never change, thus allowing code to be written that
- conditionally uses host API specific extensions.
-
- New type ids will be allocated when support for a host API reaches
- "public alpha" status, prior to that developers should use the
- paInDevelopment type id.
-
- @see PaHostApiInfo
-*/
-typedef enum PaHostApiTypeId
-{
- paInDevelopment=0, /* use while developing support for a new host API */
- paDirectSound=1,
- paMME=2,
- paASIO=3,
- paSoundManager=4,
- paCoreAudio=5,
- paOSS=7,
- paALSA=8,
- paAL=9,
- paBeOS=10,
- paWDMKS=11,
- paJACK=12,
- paWASAPI=13,
- paAudioScienceHPI=14
-} PaHostApiTypeId;
-
-
-/** A structure containing information about a particular host API. */
-
-typedef struct PaHostApiInfo
-{
- /** this is struct version 1 */
- int structVersion;
- /** The well known unique identifier of this host API @see PaHostApiTypeId */
- PaHostApiTypeId type;
- /** A textual description of the host API for display on user interfaces. */
- const char *name;
-
- /** The number of devices belonging to this host API. This field may be
- used in conjunction with Pa_HostApiDeviceIndexToDeviceIndex() to enumerate
- all devices for this host API.
- @see Pa_HostApiDeviceIndexToDeviceIndex
- */
- int deviceCount;
-
- /** The default input device for this host API. The value will be a
- device index ranging from 0 to (Pa_GetDeviceCount()-1), or paNoDevice
- if no default input device is available.
- */
- PaDeviceIndex defaultInputDevice;
-
- /** The default output device for this host API. The value will be a
- device index ranging from 0 to (Pa_GetDeviceCount()-1), or paNoDevice
- if no default output device is available.
- */
- PaDeviceIndex defaultOutputDevice;
-
-} PaHostApiInfo;
-
-
-/** Retrieve a pointer to a structure containing information about a specific
- host Api.
-
- @param hostApi A valid host API index ranging from 0 to (Pa_GetHostApiCount()-1)
-
- @return A pointer to an immutable PaHostApiInfo structure describing
- a specific host API. If the hostApi parameter is out of range or an error
- is encountered, the function returns NULL.
-
- The returned structure is owned by the PortAudio implementation and must not
- be manipulated or freed. The pointer is only guaranteed to be valid between
- calls to Pa_Initialize() and Pa_Terminate().
-*/
-const PaHostApiInfo * Pa_GetHostApiInfo( PaHostApiIndex hostApi );
-
-
-/** Convert a static host API unique identifier, into a runtime
- host API index.
-
- @param type A unique host API identifier belonging to the PaHostApiTypeId
- enumeration.
-
- @return A valid PaHostApiIndex ranging from 0 to (Pa_GetHostApiCount()-1) or,
- a PaErrorCode (which are always negative) if PortAudio is not initialized
- or an error is encountered.
-
- The paHostApiNotFound error code indicates that the host API specified by the
- type parameter is not available.
-
- @see PaHostApiTypeId
-*/
-PaHostApiIndex Pa_HostApiTypeIdToHostApiIndex( PaHostApiTypeId type );
-
-
-/** Convert a host-API-specific device index to standard PortAudio device index.
- This function may be used in conjunction with the deviceCount field of
- PaHostApiInfo to enumerate all devices for the specified host API.
-
- @param hostApi A valid host API index ranging from 0 to (Pa_GetHostApiCount()-1)
-
- @param hostApiDeviceIndex A valid per-host device index in the range
- 0 to (Pa_GetHostApiInfo(hostApi)->deviceCount-1)
-
- @return A non-negative PaDeviceIndex ranging from 0 to (Pa_GetDeviceCount()-1)
- or, a PaErrorCode (which are always negative) if PortAudio is not initialized
- or an error is encountered.
-
- A paInvalidHostApi error code indicates that the host API index specified by
- the hostApi parameter is out of range.
-
- A paInvalidDevice error code indicates that the hostApiDeviceIndex parameter
- is out of range.
-
- @see PaHostApiInfo
-*/
-PaDeviceIndex Pa_HostApiDeviceIndexToDeviceIndex( PaHostApiIndex hostApi,
- int hostApiDeviceIndex );
-
-
-
-/** Structure used to return information about a host error condition.
-*/
-typedef struct PaHostErrorInfo{
- PaHostApiTypeId hostApiType; /**< the host API which returned the error code */
- long errorCode; /**< the error code returned */
- const char *errorText; /**< a textual description of the error if available, otherwise a zero-length string */
-}PaHostErrorInfo;
-
-
-/** Return information about the last host error encountered. The error
- information returned by Pa_GetLastHostErrorInfo() will never be modified
- asynchronously by errors occurring in other PortAudio owned threads
- (such as the thread that manages the stream callback.)
-
- This function is provided as a last resort, primarily to enhance debugging
- by providing clients with access to all available error information.
-
- @return A pointer to an immutable structure containing information about
- the host error. The values in this structure will only be valid if a
- PortAudio function has previously returned the paUnanticipatedHostError
- error code.
-*/
-const PaHostErrorInfo* Pa_GetLastHostErrorInfo( void );
-
-
-
-/* Device enumeration and capabilities */
-
-/** Retrieve the number of available devices. The number of available devices
- may be zero.
-
- @return A non-negative value indicating the number of available devices or,
- a PaErrorCode (which are always negative) if PortAudio is not initialized
- or an error is encountered.
-*/
-PaDeviceIndex Pa_GetDeviceCount( void );
-
-
-/** Retrieve the index of the default input device. The result can be
- used in the inputDevice parameter to Pa_OpenStream().
-
- @return The default input device index for the default host API, or paNoDevice
- if no default input device is available or an error was encountered.
-*/
-PaDeviceIndex Pa_GetDefaultInputDevice( void );
-
-
-/** Retrieve the index of the default output device. The result can be
- used in the outputDevice parameter to Pa_OpenStream().
-
- @return The default output device index for the default host API, or paNoDevice
- if no default output device is available or an error was encountered.
-
- @note
- On the PC, the user can specify a default device by
- setting an environment variable. For example, to use device #1.
-
- set PA_RECOMMENDED_OUTPUT_DEVICE=1
-
- The user should first determine the available device ids by using
- the supplied application "pa_devs".
-*/
-PaDeviceIndex Pa_GetDefaultOutputDevice( void );
-
-
-/** The type used to represent monotonic time in seconds. PaTime is
- used for the fields of the PaStreamCallbackTimeInfo argument to the
- PaStreamCallback and as the result of Pa_GetStreamTime().
-
- PaTime values have unspecified origin.
-
- @see PaStreamCallback, PaStreamCallbackTimeInfo, Pa_GetStreamTime
-*/
-typedef double PaTime;
-
-
-/** A type used to specify one or more sample formats. Each value indicates
- a possible format for sound data passed to and from the stream callback,
- Pa_ReadStream and Pa_WriteStream.
-
- The standard formats paFloat32, paInt16, paInt32, paInt24, paInt8
- and paUInt8 are usually implemented by all implementations.
-
- The floating point representation (paFloat32) uses +1.0 and -1.0 as the
- maximum and minimum respectively.
-
- paUInt8 is an unsigned 8 bit format where 128 is considered "ground"
-
- The paNonInterleaved flag indicates that audio data is passed as an array
- of pointers to separate buffers, one buffer for each channel. Usually,
- when this flag is not used, audio data is passed as a single buffer with
- all channels interleaved.
-
- @see Pa_OpenStream, Pa_OpenDefaultStream, PaDeviceInfo
- @see paFloat32, paInt16, paInt32, paInt24, paInt8
- @see paUInt8, paCustomFormat, paNonInterleaved
-*/
-typedef unsigned long PaSampleFormat;
-
-
-#define paFloat32 ((PaSampleFormat) 0x00000001) /**< @see PaSampleFormat */
-#define paInt32 ((PaSampleFormat) 0x00000002) /**< @see PaSampleFormat */
-#define paInt24 ((PaSampleFormat) 0x00000004) /**< Packed 24 bit format. @see PaSampleFormat */
-#define paInt16 ((PaSampleFormat) 0x00000008) /**< @see PaSampleFormat */
-#define paInt8 ((PaSampleFormat) 0x00000010) /**< @see PaSampleFormat */
-#define paUInt8 ((PaSampleFormat) 0x00000020) /**< @see PaSampleFormat */
-#define paCustomFormat ((PaSampleFormat) 0x00010000) /**< @see PaSampleFormat */
-
-#define paNonInterleaved ((PaSampleFormat) 0x80000000) /**< @see PaSampleFormat */
-
-/** A structure providing information and capabilities of PortAudio devices.
- Devices may support input, output or both input and output.
-*/
-typedef struct PaDeviceInfo
-{
- int structVersion; /* this is struct version 2 */
- const char *name;
- PaHostApiIndex hostApi; /**< note this is a host API index, not a type id*/
-
- int maxInputChannels;
- int maxOutputChannels;
-
- /** Default latency values for interactive performance. */
- PaTime defaultLowInputLatency;
- PaTime defaultLowOutputLatency;
- /** Default latency values for robust non-interactive applications (eg. playing sound files). */
- PaTime defaultHighInputLatency;
- PaTime defaultHighOutputLatency;
-
- double defaultSampleRate;
-} PaDeviceInfo;
-
-
-/** Retrieve a pointer to a PaDeviceInfo structure containing information
- about the specified device.
- @return A pointer to an immutable PaDeviceInfo structure. If the device
- parameter is out of range the function returns NULL.
-
- @param device A valid device index in the range 0 to (Pa_GetDeviceCount()-1)
-
- @note PortAudio manages the memory referenced by the returned pointer,
- the client must not manipulate or free the memory. The pointer is only
- guaranteed to be valid between calls to Pa_Initialize() and Pa_Terminate().
-
- @see PaDeviceInfo, PaDeviceIndex
-*/
-const PaDeviceInfo* Pa_GetDeviceInfo( PaDeviceIndex device );
-
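-/* A short sketch (not part of the original header) of the enumeration
-   pattern described above, printing the name and channel counts of every
-   device; it assumes PortAudio has already been successfully initialized:
-
-   int numDevices = Pa_GetDeviceCount();
-   if( numDevices < 0 )
-   {
-       // a negative count is a PaErrorCode
-       fprintf( stderr, "Pa_GetDeviceCount failed: %s\n",
-                Pa_GetErrorText( (PaError)numDevices ) );
-   }
-   for( int i = 0; i < numDevices; ++i )
-   {
-       const PaDeviceInfo *info = Pa_GetDeviceInfo( i );
-       if( info != NULL )
-           printf( "#%d %s (in: %d, out: %d)\n", i, info->name,
-                   info->maxInputChannels, info->maxOutputChannels );
-   }
-*/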
-
-/** Parameters for one direction (input or output) of a stream.
-*/
-typedef struct PaStreamParameters
-{
- /** A valid device index in the range 0 to (Pa_GetDeviceCount()-1)
- specifying the device to be used or the special constant
- paUseHostApiSpecificDeviceSpecification which indicates that the actual
- device(s) to use are specified in hostApiSpecificStreamInfo.
- This field must not be set to paNoDevice.
- */
- PaDeviceIndex device;
-
- /** The number of channels of sound to be delivered to the
- stream callback or accessed by Pa_ReadStream() or Pa_WriteStream().
- It can range from 1 to the value of maxInputChannels in the
- PaDeviceInfo record for the device specified by the device parameter.
- */
- int channelCount;
-
- /** The sample format of the buffer provided to the stream callback,
- Pa_ReadStream() or Pa_WriteStream(). It may be any of the formats described
- by the PaSampleFormat enumeration.
- */
- PaSampleFormat sampleFormat;
-
- /** The desired latency in seconds. Where practical, implementations should
- configure their latency based on these parameters, otherwise they may
- choose the closest viable latency instead. Unless the suggested latency
- is greater than the absolute upper limit for the device, implementations
- should round the suggestedLatency up to the next practical value - ie to
- provide an equal or higher latency than suggestedLatency wherever possible.
- Actual latency values for an open stream may be retrieved using the
- inputLatency and outputLatency fields of the PaStreamInfo structure
- returned by Pa_GetStreamInfo().
- @see default*Latency in PaDeviceInfo, *Latency in PaStreamInfo
- */
- PaTime suggestedLatency;
-
- /** An optional pointer to a host api specific data structure
- containing additional information for device setup and/or stream processing.
- hostApiSpecificStreamInfo is never required for correct operation,
- if not used it should be set to NULL.
- */
- void *hostApiSpecificStreamInfo;
-
-} PaStreamParameters;
-
-
-/** Return code for Pa_IsFormatSupported indicating success. */
-#define paFormatIsSupported (0)
-
-/** Determine whether it would be possible to open a stream with the specified
- parameters.
-
- @param inputParameters A structure that describes the input parameters used to
- open a stream. The suggestedLatency field is ignored. See PaStreamParameters
- for a description of these parameters. inputParameters must be NULL for
- output-only streams.
-
- @param outputParameters A structure that describes the output parameters used
- to open a stream. The suggestedLatency field is ignored. See PaStreamParameters
- for a description of these parameters. outputParameters must be NULL for
- input-only streams.
-
- @param sampleRate The required sampleRate. For full-duplex streams it is the
- sample rate for both input and output
-
- @return Returns 0 if the format is supported, and an error code indicating why
- the format is not supported otherwise. The constant paFormatIsSupported is
- provided to compare with the return value for success.
-
- @see paFormatIsSupported, PaStreamParameters
-*/
-PaError Pa_IsFormatSupported( const PaStreamParameters *inputParameters,
- const PaStreamParameters *outputParameters,
- double sampleRate );
-
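-/* An illustrative sketch (not part of the original header) of checking
-   whether the default output device supports 2-channel paFloat32 output at
-   48 kHz, using the PaStreamParameters structure described above:
-
-   PaStreamParameters outParams;
-   outParams.device = Pa_GetDefaultOutputDevice();
-   outParams.channelCount = 2;
-   outParams.sampleFormat = paFloat32;
-   outParams.suggestedLatency = 0;       // ignored by Pa_IsFormatSupported
-   outParams.hostApiSpecificStreamInfo = NULL;
-
-   if( outParams.device != paNoDevice &&
-       Pa_IsFormatSupported( NULL, &outParams, 48000.0 ) == paFormatIsSupported )
-   {
-       // 48 kHz stereo float32 output can be opened on the default device
-   }
-*/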
-
-
-/* Streaming types and functions */
-
-
-/**
- A single PaStream can provide multiple channels of real-time
- streaming audio input and output to a client application. A stream
- provides access to audio hardware represented by one or more
- PaDevices. Depending on the underlying Host API, it may be possible
- to open multiple streams using the same device, however this behavior
- is implementation defined. Portable applications should assume that
- a PaDevice may be simultaneously used by at most one PaStream.
-
- Pointers to PaStream objects are passed between PortAudio functions that
- operate on streams.
-
- @see Pa_OpenStream, Pa_OpenDefaultStream, Pa_OpenDefaultStream, Pa_CloseStream,
- Pa_StartStream, Pa_StopStream, Pa_AbortStream, Pa_IsStreamActive,
- Pa_GetStreamTime, Pa_GetStreamCpuLoad
-
-*/
-typedef void PaStream;
-
-
-/** Can be passed as the framesPerBuffer parameter to Pa_OpenStream()
- or Pa_OpenDefaultStream() to indicate that the stream callback will
- accept buffers of any size.
-*/
-#define paFramesPerBufferUnspecified (0)
-
-
-/** Flags used to control the behavior of a stream. They are passed as
- parameters to Pa_OpenStream or Pa_OpenDefaultStream. Multiple flags may be
- ORed together.
-
- @see Pa_OpenStream, Pa_OpenDefaultStream
- @see paNoFlag, paClipOff, paDitherOff, paNeverDropInput,
- paPrimeOutputBuffersUsingStreamCallback, paPlatformSpecificFlags
-*/
-typedef unsigned long PaStreamFlags;
-
-/** @see PaStreamFlags */
-#define paNoFlag ((PaStreamFlags) 0)
-
-/** Disable default clipping of out of range samples.
- @see PaStreamFlags
-*/
-#define paClipOff ((PaStreamFlags) 0x00000001)
-
-/** Disable default dithering.
- @see PaStreamFlags
-*/
-#define paDitherOff ((PaStreamFlags) 0x00000002)
-
-/** Flag requests that where possible a full duplex stream will not discard
- overflowed input samples without calling the stream callback. This flag is
- only valid for full duplex callback streams and only when used in combination
- with the paFramesPerBufferUnspecified (0) framesPerBuffer parameter. Using
- this flag incorrectly results in a paInvalidFlag error being returned from
- Pa_OpenStream and Pa_OpenDefaultStream.
-
- @see PaStreamFlags, paFramesPerBufferUnspecified
-*/
-#define paNeverDropInput ((PaStreamFlags) 0x00000004)
-
-/** Call the stream callback to fill initial output buffers, rather than the
- default behavior of priming the buffers with zeros (silence). This flag has
- no effect for input-only and blocking read/write streams.
-
- @see PaStreamFlags
-*/
-#define paPrimeOutputBuffersUsingStreamCallback ((PaStreamFlags) 0x00000008)
-
-/** A mask specifying the platform specific bits.
- @see PaStreamFlags
-*/
-#define paPlatformSpecificFlags ((PaStreamFlags)0xFFFF0000)
-
-/**
- Timing information for the buffers passed to the stream callback.
-
- Time values are expressed in seconds and are synchronised with the time base used by Pa_GetStreamTime() for the associated stream.
-
- @see PaStreamCallback, Pa_GetStreamTime
-*/
-typedef struct PaStreamCallbackTimeInfo{
- PaTime inputBufferAdcTime; /**< The time when the first sample of the input buffer was captured at the ADC input */
- PaTime currentTime; /**< The time when the stream callback was invoked */
-    PaTime outputBufferDacTime; /**< The time when the first sample of the output buffer will be output by the DAC */
-} PaStreamCallbackTimeInfo;
-
-
-/**
- Flag bit constants for the statusFlags to PaStreamCallback.
-
- @see paInputUnderflow, paInputOverflow, paOutputUnderflow, paOutputOverflow,
- paPrimingOutput
-*/
-typedef unsigned long PaStreamCallbackFlags;
-
-/** In a stream opened with paFramesPerBufferUnspecified, indicates that
- input data is all silence (zeros) because no real data is available. In a
- stream opened without paFramesPerBufferUnspecified, it indicates that one or
- more zero samples have been inserted into the input buffer to compensate
- for an input underflow.
- @see PaStreamCallbackFlags
-*/
-#define paInputUnderflow ((PaStreamCallbackFlags) 0x00000001)
-
-/** In a stream opened with paFramesPerBufferUnspecified, indicates that data
- prior to the first sample of the input buffer was discarded due to an
- overflow, possibly because the stream callback is using too much CPU time.
- Otherwise indicates that data prior to one or more samples in the
- input buffer was discarded.
- @see PaStreamCallbackFlags
-*/
-#define paInputOverflow ((PaStreamCallbackFlags) 0x00000002)
-
-/** Indicates that output data (or a gap) was inserted, possibly because the
- stream callback is using too much CPU time.
- @see PaStreamCallbackFlags
-*/
-#define paOutputUnderflow ((PaStreamCallbackFlags) 0x00000004)
-
-/** Indicates that output data will be discarded because no room is available.
- @see PaStreamCallbackFlags
-*/
-#define paOutputOverflow ((PaStreamCallbackFlags) 0x00000008)
-
-/** Some or all of the output data will be used to prime the stream; input
- data may be zero.
- @see PaStreamCallbackFlags
-*/
-#define paPrimingOutput ((PaStreamCallbackFlags) 0x00000010)
-
-/**
- Allowable return values for the PaStreamCallback.
- @see PaStreamCallback
-*/
-typedef enum PaStreamCallbackResult
-{
- paContinue=0, /**< Signal that the stream should continue invoking the callback and processing audio. */
- paComplete=1, /**< Signal that the stream should stop invoking the callback and finish once all output samples have played. */
- paAbort=2 /**< Signal that the stream should stop invoking the callback and finish as soon as possible. */
-} PaStreamCallbackResult;
-
-
-/**
- Functions of type PaStreamCallback are implemented by PortAudio clients.
- They consume, process or generate audio in response to requests from an
- active PortAudio stream.
-
- When a stream is running, PortAudio calls the stream callback periodically.
- The callback function is responsible for processing buffers of audio samples
- passed via the input and output parameters.
-
- The PortAudio stream callback runs at very high or real-time priority.
- It is required to consistently meet its time deadlines. Do not allocate
- memory, access the file system, call library functions or call other functions
- from the stream callback that may block or take an unpredictable amount of
- time to complete.
-
- In order for a stream to maintain glitch-free operation the callback
- must consume and return audio data faster than it is recorded and/or
- played. PortAudio anticipates that each callback invocation may execute for
- a duration approaching the duration of frameCount audio frames at the stream
- sample rate. It is reasonable to expect to be able to utilise 70% or more of
- the available CPU time in the PortAudio callback. However, due to buffer size
- adaption and other factors, not all host APIs are able to guarantee audio
- stability under heavy CPU load with arbitrary fixed callback buffer sizes.
- When high callback CPU utilisation is required the most robust behavior
- can be achieved by using paFramesPerBufferUnspecified as the
- Pa_OpenStream() framesPerBuffer parameter.
-
- @param input and @param output are either arrays of interleaved samples or,
- if non-interleaved samples were requested using the paNonInterleaved sample
- format flag, an array of buffer pointers, one non-interleaved buffer for
- each channel.
-
- The format, packing and number of channels used by the buffers are
- determined by parameters to Pa_OpenStream().
-
- @param frameCount The number of sample frames to be processed by
- the stream callback.
-
- @param timeInfo Timestamps indicating the ADC capture time of the first sample
- in the input buffer, the DAC output time of the first sample in the output buffer
- and the time the callback was invoked.
- See PaStreamCallbackTimeInfo and Pa_GetStreamTime()
-
- @param statusFlags Flags indicating whether input and/or output buffers
- have been inserted or will be dropped to overcome underflow or overflow
- conditions.
-
- @param userData The value of a user supplied pointer passed to
- Pa_OpenStream() intended for storing synthesis data etc.
-
- @return
- The stream callback should return one of the values in the
- ::PaStreamCallbackResult enumeration. To ensure that the callback continues
- to be called, it should return paContinue (0). Either paComplete or paAbort
- can be returned to finish stream processing, after either of these values is
- returned the callback will not be called again. If paAbort is returned the
- stream will finish as soon as possible. If paComplete is returned, the stream
- will continue until all buffers generated by the callback have been played.
- This may be useful in applications such as soundfile players where a specific
- duration of output is required. However, it is not necessary to utilize this
- mechanism as Pa_StopStream(), Pa_AbortStream() or Pa_CloseStream() can also
- be used to stop the stream. The callback must always fill the entire output
- buffer irrespective of its return value.
-
- @see Pa_OpenStream, Pa_OpenDefaultStream
-
- @note With the exception of Pa_GetStreamCpuLoad() it is not permissible to call
- PortAudio API functions from within the stream callback.
-*/
-typedef int PaStreamCallback(
- const void *input, void *output,
- unsigned long frameCount,
- const PaStreamCallbackTimeInfo* timeInfo,
- PaStreamCallbackFlags statusFlags,
- void *userData );
-
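-/* An illustrative callback sketch (not part of the original header). It
-   assumes an output-only, single-channel, interleaved paFloat32 stream and
-   simply fills the buffer with silence, following the rules above (no
-   blocking work, always fill the entire output buffer):
-
-   static int silenceCallback( const void *input, void *output,
-                               unsigned long frameCount,
-                               const PaStreamCallbackTimeInfo *timeInfo,
-                               PaStreamCallbackFlags statusFlags,
-                               void *userData )
-   {
-       float *out = (float *)output;
-       unsigned long i;
-       (void)input; (void)timeInfo; (void)statusFlags; (void)userData;
-       for( i = 0; i < frameCount; i++ )
-           out[i] = 0.0f;              // one channel => one sample per frame
-       return paContinue;              // keep the stream running
-   }
-*/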
-
-/** Opens a stream for either input, output or both.
-
- @param stream The address of a PaStream pointer which will receive
- a pointer to the newly opened stream.
-
- @param inputParameters A structure that describes the input parameters used by
- the opened stream. See PaStreamParameters for a description of these parameters.
- inputParameters must be NULL for output-only streams.
-
- @param outputParameters A structure that describes the output parameters used by
- the opened stream. See PaStreamParameters for a description of these parameters.
- outputParameters must be NULL for input-only streams.
-
- @param sampleRate The desired sampleRate. For full-duplex streams it is the
- sample rate for both input and output. Note that the actual sampleRate
- may differ very slightly from the desired rate because of hardware limitations.
- The exact rate can be queried using Pa_GetStreamInfo(). If nothing close
- to the desired sampleRate is available then the open will fail and return an error.
-
- @param framesPerBuffer The number of frames passed to the stream callback
- function, or the preferred block granularity for a blocking read/write stream.
- The special value paFramesPerBufferUnspecified (0) may be used to request that
- the stream callback will receive an optimal (and possibly varying) number of
- frames based on host requirements and the requested latency settings.
- Note: With some host APIs, the use of non-zero framesPerBuffer for a callback
- stream may introduce an additional layer of buffering which could introduce
- additional latency. PortAudio guarantees that the additional latency
- will be kept to the theoretical minimum; however, it is strongly recommended
- that a non-zero framesPerBuffer value only be used when your algorithm
- requires a fixed number of frames per stream callback.
-
- @param streamFlags Flags which modify the behavior of the streaming process.
- This parameter may contain a combination of flags ORed together. Some flags may
- only be relevant to certain buffer formats.
-
- @param streamCallback A pointer to a client supplied function that is responsible
- for processing and filling input and output buffers. If this parameter is NULL
- the stream will be opened in 'blocking read/write' mode. In blocking mode,
- the client can receive sample data using Pa_ReadStream and write sample data
- using Pa_WriteStream, the number of samples that may be read or written
- without blocking is returned by Pa_GetStreamReadAvailable and
- Pa_GetStreamWriteAvailable respectively.
-
- @param userData A client supplied pointer which is passed to the stream callback
- function. It could for example, contain a pointer to instance data necessary
- for processing the audio buffers. This parameter is ignored if streamCallback
- is NULL.
-
- @return
- Upon success Pa_OpenStream() returns paNoError and places a pointer to a
- valid PaStream in the stream argument. The stream is inactive (stopped).
- If a call to Pa_OpenStream() fails, a non-zero error code is returned (see
- PaError for possible error codes) and the value of stream is invalid.
-
- @see PaStreamParameters, PaStreamCallback, Pa_ReadStream, Pa_WriteStream,
- Pa_GetStreamReadAvailable, Pa_GetStreamWriteAvailable
-*/
-PaError Pa_OpenStream( PaStream** stream,
- const PaStreamParameters *inputParameters,
- const PaStreamParameters *outputParameters,
- double sampleRate,
- unsigned long framesPerBuffer,
- PaStreamFlags streamFlags,
- PaStreamCallback *streamCallback,
- void *userData );
-
-
-/** A simplified version of Pa_OpenStream() that opens the default input
- and/or output devices.
-
- @param stream The address of a PaStream pointer which will receive
- a pointer to the newly opened stream.
-
- @param numInputChannels The number of channels of sound that will be supplied
- to the stream callback or returned by Pa_ReadStream. It can range from 1 to
- the value of maxInputChannels in the PaDeviceInfo record for the default input
- device. If 0 the stream is opened as an output-only stream.
-
- @param numOutputChannels The number of channels of sound to be delivered to the
- stream callback or passed to Pa_WriteStream. It can range from 1 to the value
- of maxOutputChannels in the PaDeviceInfo record for the default output device.
- If 0 the stream is opened as an input-only stream.
-
- @param sampleFormat The sample format of both the input and output buffers
- provided to the callback or passed to and from Pa_ReadStream and Pa_WriteStream.
- sampleFormat may be any of the formats described by the PaSampleFormat
- enumeration.
-
- @param sampleRate Same as Pa_OpenStream parameter of the same name.
- @param framesPerBuffer Same as Pa_OpenStream parameter of the same name.
- @param streamCallback Same as Pa_OpenStream parameter of the same name.
- @param userData Same as Pa_OpenStream parameter of the same name.
-
- @return As for Pa_OpenStream
-
- @see Pa_OpenStream, PaStreamCallback
-*/
-PaError Pa_OpenDefaultStream( PaStream** stream,
- int numInputChannels,
- int numOutputChannels,
- PaSampleFormat sampleFormat,
- double sampleRate,
- unsigned long framesPerBuffer,
- PaStreamCallback *streamCallback,
- void *userData );
-
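-/* A hedged end-to-end sketch (not part of the original header) tying the
-   pieces together: open a default mono paFloat32 output stream with the
-   hypothetical silenceCallback sketched earlier, run it for one second,
-   then stop and close it. Error handling is abbreviated:
-
-   PaStream *stream = NULL;
-   PaError err = Pa_OpenDefaultStream( &stream,
-                                       0,               // no input channels
-                                       1,               // mono output
-                                       paFloat32,
-                                       44100.0,
-                                       paFramesPerBufferUnspecified,
-                                       silenceCallback,
-                                       NULL );
-   if( err == paNoError ) err = Pa_StartStream( stream );
-   if( err == paNoError ) Pa_Sleep( 1000 );
-   if( err == paNoError ) err = Pa_StopStream( stream );
-   if( stream != NULL ) Pa_CloseStream( stream );
-*/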
-
-/** Closes an audio stream. If the audio stream is active it
- discards any pending buffers as if Pa_AbortStream() had been called.
-*/
-PaError Pa_CloseStream( PaStream *stream );
-
-
-/** Functions of type PaStreamFinishedCallback are implemented by PortAudio
- clients. They can be registered with a stream using the Pa_SetStreamFinishedCallback
- function. Once registered they are called when the stream becomes inactive
- (ie once a call to Pa_StopStream() will not block).
- A stream will become inactive after the stream callback returns non-zero,
- or when Pa_StopStream or Pa_AbortStream is called. For a stream providing audio
- output, if the stream callback returns paComplete, or Pa_StopStream() is called,
- the stream finished callback will not be called until all generated sample data
- has been played.
-
- @param userData The userData parameter supplied to Pa_OpenStream()
-
- @see Pa_SetStreamFinishedCallback
-*/
-typedef void PaStreamFinishedCallback( void *userData );
-
-
-/** Register a stream finished callback function which will be called when the
- stream becomes inactive. See the description of PaStreamFinishedCallback for
- further details about when the callback will be called.
-
- @param stream a pointer to a PaStream that is in the stopped state - if the
- stream is not stopped, the stream's finished callback will remain unchanged
- and an error code will be returned.
-
- @param streamFinishedCallback a pointer to a function with the same signature
- as PaStreamFinishedCallback, that will be called when the stream becomes
- inactive. Passing NULL for this parameter will un-register a previously
- registered stream finished callback function.
-
- @return on success returns paNoError, otherwise an error code indicating the cause
- of the error.
-
- @see PaStreamFinishedCallback
-*/
-PaError Pa_SetStreamFinishedCallback( PaStream *stream, PaStreamFinishedCallback* streamFinishedCallback );
-
-
-/** Commences audio processing.
-*/
-PaError Pa_StartStream( PaStream *stream );
-
-
-/** Terminates audio processing. It waits until all pending
- audio buffers have been played before it returns.
-*/
-PaError Pa_StopStream( PaStream *stream );
-
-
-/** Terminates audio processing immediately without waiting for pending
- buffers to complete.
-*/
-PaError Pa_AbortStream( PaStream *stream );
-
-
-/** Determine whether the stream is stopped.
- A stream is considered to be stopped prior to a successful call to
- Pa_StartStream and after a successful call to Pa_StopStream or Pa_AbortStream.
- If a stream callback returns a value other than paContinue the stream is NOT
- considered to be stopped.
-
- @return Returns one (1) when the stream is stopped, zero (0) when
- the stream is running or, a PaErrorCode (which are always negative) if
- PortAudio is not initialized or an error is encountered.
-
- @see Pa_StopStream, Pa_AbortStream, Pa_IsStreamActive
-*/
-PaError Pa_IsStreamStopped( PaStream *stream );
-
-
-/** Determine whether the stream is active.
- A stream is active after a successful call to Pa_StartStream(), until it
- becomes inactive either as a result of a call to Pa_StopStream() or
- Pa_AbortStream(), or as a result of a return value other than paContinue from
- the stream callback. In the latter case, the stream is considered inactive
- after the last buffer has finished playing.
-
- @return Returns one (1) when the stream is active (ie playing or recording
- audio), zero (0) when not playing or, a PaErrorCode (which are always negative)
- if PortAudio is not initialized or an error is encountered.
-
- @see Pa_StopStream, Pa_AbortStream, Pa_IsStreamStopped
-*/
-PaError Pa_IsStreamActive( PaStream *stream );
-
-
-
-/** A structure containing unchanging information about an open stream.
- @see Pa_GetStreamInfo
-*/
-
-typedef struct PaStreamInfo
-{
- /** this is struct version 1 */
- int structVersion;
-
- /** The input latency of the stream in seconds. This value provides the most
- accurate estimate of input latency available to the implementation. It may
- differ significantly from the suggestedLatency value passed to Pa_OpenStream().
- The value of this field will be zero (0.) for output-only streams.
- @see PaTime
- */
- PaTime inputLatency;
-
- /** The output latency of the stream in seconds. This value provides the most
- accurate estimate of output latency available to the implementation. It may
- differ significantly from the suggestedLatency value passed to Pa_OpenStream().
- The value of this field will be zero (0.) for input-only streams.
- @see PaTime
- */
- PaTime outputLatency;
-
- /** The sample rate of the stream in Hertz (samples per second). In cases
- where the hardware sample rate is inaccurate and PortAudio is aware of it,
- the value of this field may be different from the sampleRate parameter
- passed to Pa_OpenStream(). If information about the actual hardware sample
- rate is not available, this field will have the same value as the sampleRate
- parameter passed to Pa_OpenStream().
- */
- double sampleRate;
-
-} PaStreamInfo;
-
-
-/** Retrieve a pointer to a PaStreamInfo structure containing information
- about the specified stream.
- @return A pointer to an immutable PaStreamInfo structure. If the stream
- parameter is invalid, or an error is encountered, the function returns NULL.
-
- @param stream A pointer to an open stream previously created with Pa_OpenStream.
-
- @note PortAudio manages the memory referenced by the returned pointer,
- the client must not manipulate or free the memory. The pointer is only
- guaranteed to be valid until the specified stream is closed.
-
- @see PaStreamInfo
-*/
-const PaStreamInfo* Pa_GetStreamInfo( PaStream *stream );
-
-
-/** Returns the current time in seconds for a stream according to the same clock used
- to generate callback PaStreamCallbackTimeInfo timestamps. The time values are
- monotonically increasing and have unspecified origin.
-
- Pa_GetStreamTime returns valid time values for the entire life of the stream,
- from when the stream is opened until it is closed. Starting and stopping the stream
- does not affect the passage of time returned by Pa_GetStreamTime.
-
- This time may be used for synchronizing other events to the audio stream, for
- example synchronizing audio to MIDI.
-
- @return The stream's current time in seconds, or 0 if an error occurred.
-
- @see PaTime, PaStreamCallback, PaStreamCallbackTimeInfo
-*/
-PaTime Pa_GetStreamTime( PaStream *stream );
-
-
-/** Retrieve CPU usage information for the specified stream.
- The "CPU Load" is a fraction of total CPU time consumed by a callback stream's
- audio processing routines including, but not limited to, the client supplied
- stream callback. This function does not work with blocking read/write streams.
-
- This function may be called from the stream callback function or the
- application.
-
- @return
- A floating point value, typically between 0.0 and 1.0, where 1.0 indicates
- that the stream callback is consuming the maximum number of CPU cycles possible
- to maintain real-time operation. A value of 0.5 would imply that PortAudio and
- the stream callback was consuming roughly 50% of the available CPU time. The
- return value may exceed 1.0. A value of 0.0 will always be returned for a
- blocking read/write stream, or if an error occurs.
-*/
-double Pa_GetStreamCpuLoad( PaStream* stream );
-
-
-/** Read samples from an input stream. The function doesn't return until
- the entire buffer has been filled - this may involve waiting for the operating
- system to supply the data.
-
- @param stream A pointer to an open stream previously created with Pa_OpenStream.
-
- @param buffer A pointer to a buffer of sample frames. The buffer contains
- samples in the format specified by the inputParameters->sampleFormat field
- used to open the stream, and the number of channels specified by
- inputParameters->numChannels. If non-interleaved samples were requested using
- the paNonInterleaved sample format flag, buffer is a pointer to the first element
- of an array of buffer pointers, one non-interleaved buffer for each channel.
-
- @param frames The number of frames to be read into buffer. This parameter
- is not constrained to a specific range, however high performance applications
- will want to match this parameter to the framesPerBuffer parameter used
- when opening the stream.
-
- @return On success paNoError will be returned, or paInputOverflowed if input
- data was discarded by PortAudio after the previous call and before this call.
-*/
-PaError Pa_ReadStream( PaStream* stream,
- void *buffer,
- unsigned long frames );
-
-
-/** Write samples to an output stream. This function doesn't return until the
- entire buffer has been written - this may involve waiting for the operating
- system to consume the data.
-
- @param stream A pointer to an open stream previously created with Pa_OpenStream.
-
- @param buffer A pointer to a buffer of sample frames. The buffer contains
- samples in the format specified by the outputParameters->sampleFormat field
- used to open the stream, and the number of channels specified by
- outputParameters->numChannels. If non-interleaved samples were requested using
- the paNonInterleaved sample format flag, buffer is a pointer to the first element
- of an array of buffer pointers, one non-interleaved buffer for each channel.
-
- @param frames The number of frames to be written from buffer. This parameter
- is not constrained to a specific range, however high performance applications
- will want to match this parameter to the framesPerBuffer parameter used
- when opening the stream.
-
- @return On success paNoError will be returned, or paOutputUnderflowed if
- additional output data was inserted after the previous call and before this
- call.
-*/
-PaError Pa_WriteStream( PaStream* stream,
- const void *buffer,
- unsigned long frames );
-
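-/* A brief sketch (not part of the original header) of the blocking write
-   mode described above: it assumes `stream` was opened earlier with a NULL
-   streamCallback and stereo interleaved paFloat32 output, and feeds it one
-   256-frame chunk of silence at a time:
-
-   enum { FRAMES_PER_CHUNK = 256 };
-   float chunk[FRAMES_PER_CHUNK * 2] = { 0.0f };   // 2 channels, interleaved
-   int i;
-   for( i = 0; i < 100; ++i )                      // ~100 chunks of audio
-   {
-       PaError err = Pa_WriteStream( stream, chunk, FRAMES_PER_CHUNK );
-       if( err != paNoError )
-           break;                                  // stop writing on any error
-   }
-*/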
-
-/** Retrieve the number of frames that can be read from the stream without
- waiting.
-
- @return Returns a non-negative value representing the maximum number of frames
- that can be read from the stream without blocking or busy waiting or, a
- PaErrorCode (which are always negative) if PortAudio is not initialized or an
- error is encountered.
-*/
-signed long Pa_GetStreamReadAvailable( PaStream* stream );
-
-
-/** Retrieve the number of frames that can be written to the stream without
- waiting.
-
- @return Returns a non-negative value representing the maximum number of frames
- that can be written to the stream without blocking or busy waiting or, a
- PaErrorCode (which are always negative) if PortAudio is not initialized or an
- error is encountered.
-*/
-signed long Pa_GetStreamWriteAvailable( PaStream* stream );
-
-
-/* Miscellaneous utilities */
-
-
-/** Retrieve the size of a given sample format in bytes.
-
- @return The size in bytes of a single sample in the specified format,
- or paSampleFormatNotSupported if the format is not supported.
-*/
-PaError Pa_GetSampleSize( PaSampleFormat format );
-
-
-/** Put the caller to sleep for at least 'msec' milliseconds. This function is
- provided only as a convenience for authors of portable code (such as the tests
- and examples in the PortAudio distribution.)
-
- The function may sleep longer than requested so don't rely on this for accurate
- musical timing.
-*/
-void Pa_Sleep( long msec );
-
-
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-#endif /* PORTAUDIO_H */
diff --git a/spaces/andrewgleave/tokbot/chain.py b/spaces/andrewgleave/tokbot/chain.py
deleted file mode 100644
index d5efd9b83ab71882bdecf76cd3a910b3e5ddece3..0000000000000000000000000000000000000000
--- a/spaces/andrewgleave/tokbot/chain.py
+++ /dev/null
@@ -1,124 +0,0 @@
-from typing import Dict, List, Tuple
-
-from langchain import OpenAI, PromptTemplate
-from langchain.chains import LLMChain
-from langchain.chains.base import Chain
-from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
-from langchain.chains.question_answering import load_qa_chain
-from langchain.prompts import FewShotPromptTemplate
-from langchain.vectorstores import FAISS
-
-from pydantic import BaseModel
-
-
-class CustomChain(Chain, BaseModel):
-
- vstore: FAISS
- chain: BaseCombineDocumentsChain
- key_word_extractor: Chain
-
- @property
- def input_keys(self) -> List[str]:
- return ["question"]
-
- @property
- def output_keys(self) -> List[str]:
- return ["answer", "sources"]
-
- def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
- question = inputs["question"]
- chat_history_str = _get_chat_history(inputs["chat_history"])
- if chat_history_str:
- new_question = self.key_word_extractor.run(
- question=question, chat_history=chat_history_str
- )
- else:
- new_question = question
- docs = self.vstore.similarity_search(new_question, k=4)
- new_inputs = inputs.copy()
- new_inputs["question"] = new_question
- new_inputs["chat_history"] = chat_history_str
- answer, _ = self.chain.combine_docs(docs, **new_inputs)
- sources = []
- if "SOURCES:" in answer:
- answer, sources = answer.split("SOURCES:")
- sources = sources.split(", ")
- return {"answer": answer.strip(), "sources": sources}
-
-
-def get_chain(vectorstore: FAISS) -> Chain:
- _eg_template = """## Example:
-
- Chat History:
- {chat_history}
- Follow Up question: {question}
- Standalone question: {answer}"""
- _eg_prompt = PromptTemplate(
- template=_eg_template,
- input_variables=["chat_history", "question", "answer"],
- )
-
-    _prefix = """Given the following Chat History and a Follow Up Question, rephrase the Follow Up Question to be a new Standalone Question that takes the Chat History and context into consideration. You should assume that the question is related to the TokCast podcast."""
- _suffix = """## Example:
-
- Chat History:
- {chat_history}
- Follow Up Input: {question}
- Standalone question:"""
-
- examples = [
- {
- "question": "Who is that?",
- "chat_history": "Human: What is the TokCast podcast?\nAssistant: TokCast is a podcast about the philosophy of David Deutsch.",
- "answer": "Who is David Deutsch?",
- },
- {
- "question": "What is the worldview presented here?",
- "chat_history": "Human: What is the TokCast podcast?\nAssistant: TokCast is a podcast about the philosophy of David Deutsch.\nHuman: Who is that?\nAssistant: David Deutsch is a philosopher, physicist, and author. He is the author of The Beginning of Infinity, Fabric of Reality, and one of the pioneers of the field of quantum computing.",
- "answer": "What is David Deutsch's worldview?",
- },
- ]
- prompt = FewShotPromptTemplate(
- prefix=_prefix,
- suffix=_suffix,
- # example_selector=example_selector,
- examples=examples,
- example_prompt=_eg_prompt,
- input_variables=["question", "chat_history"],
- )
- llm = OpenAI(temperature=0, model_name="text-davinci-003")
- key_word_extractor = LLMChain(llm=llm, prompt=prompt)
-
- EXAMPLE_PROMPT = PromptTemplate(
- template="CONTENT:\n{page_content}\n----------\nSOURCE:\n{source}\n",
- input_variables=["page_content", "source"],
- )
- template = """You are an AI assistant for the TokCast Podcast. You're trained on all the transcripts of the podcast.
-Given a QUESTION and a series of one or more CONTENT and SOURCE sections from a long document, provide a conversational answer as "ANSWER" and a "SOURCES" output which lists verbatim the SOURCEs used in generating the response.
-You should only use SOURCEs that are explicitly listed as a SOURCE in the context.
-ALWAYS include the "SOURCES" as part of the response. If you don't have any sources, just say "SOURCES:"
-If you don't know the answer, just say "I'm not sure. Check out Brett's Channel". Don't try to make up an answer.
-QUESTION: {question}
-=========
-{context}
-=========
-ANSWER:"""
- PROMPT = PromptTemplate(template=template, input_variables=["question", "context"])
- doc_chain = load_qa_chain(
- OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=-1),
- chain_type="stuff",
- prompt=PROMPT,
- document_prompt=EXAMPLE_PROMPT,
- )
- return CustomChain(
- chain=doc_chain, vstore=vectorstore, key_word_extractor=key_word_extractor
- )
-
-
-def _get_chat_history(chat_history: List[Tuple[str, str]]):
- buffer = ""
- for human_s, ai_s in chat_history:
- human = "Human: " + human_s
- ai = "Assistant: " + ai_s
- buffer += "\n" + "\n".join([human, ai])
- return buffer
diff --git a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/commons.py b/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/commons.py
deleted file mode 100644
index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000
--- a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/commons.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-def slice_pitch_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size)
- return ret, ret_pitch, ids_str
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def rand_spec_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
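-# Builds Transformer-style sinusoidal timing signals: half of the channels carry
-# sin terms and half carry cos terms over geometrically spaced timescales,
-# returned with shape (1, channels, length).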
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
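-# WaveNet-style gated activation: the first n_channels of (input_a + input_b)
-# pass through tanh, the remaining channels through sigmoid, and the two halves
-# are multiplied elementwise.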
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
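-# Turns per-token durations into a hard monotonic alignment: cumulative durations
-# assign each output frame to exactly one input token, and the result is masked
-# to the valid region, giving a path of shape [b, 1, t_y, t_x].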
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/overflow/common_layers.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/overflow/common_layers.py
deleted file mode 100644
index b036dd1bda92fb709f0cce796cf5a668a1c081df..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/overflow/common_layers.py
+++ /dev/null
@@ -1,323 +0,0 @@
-from typing import List, Tuple
-
-import torch
-import torch.nn.functional as F
-from torch import nn
-from tqdm.auto import tqdm
-
-from TTS.tts.layers.tacotron.common_layers import Linear
-from TTS.tts.layers.tacotron.tacotron2 import ConvBNBlock
-
-
-class Encoder(nn.Module):
- r"""Neural HMM Encoder
-
- Same as the Tacotron 2 encoder, but expands the output length by the number of states per phone
-
- Args:
- num_chars (int): Number of characters in the input.
- state_per_phone (int): Number of states per phone.
- in_out_channels (int): number of input and output channels.
- n_convolutions (int): number of convolutional layers.
- """
-
- def __init__(self, num_chars, state_per_phone, in_out_channels=512, n_convolutions=3):
- super().__init__()
-
- self.state_per_phone = state_per_phone
- self.in_out_channels = in_out_channels
-
- self.emb = nn.Embedding(num_chars, in_out_channels)
- self.convolutions = nn.ModuleList()
- for _ in range(n_convolutions):
- self.convolutions.append(ConvBNBlock(in_out_channels, in_out_channels, 5, "relu"))
- self.lstm = nn.LSTM(
- in_out_channels,
- int(in_out_channels / 2) * state_per_phone,
- num_layers=1,
- batch_first=True,
- bias=True,
- bidirectional=True,
- )
- self.rnn_state = None
-
- def forward(self, x: torch.FloatTensor, x_len: torch.LongTensor) -> Tuple[torch.FloatTensor, torch.LongTensor]:
- """Forward pass to the encoder.
-
- Args:
- x (torch.FloatTensor): input text indices.
- - shape: :math:`(b, T_{in})`
- x_len (torch.LongTensor): input text lengths.
- - shape: :math:`(b,)`
-
- Returns:
- Tuple[torch.FloatTensor, torch.LongTensor]: encoder outputs and output lengths.
- -shape: :math:`((b, T_{in} * states_per_phone, in_out_channels), (b,))`
- """
- b, T = x.shape
- o = self.emb(x).transpose(1, 2)
- for layer in self.convolutions:
- o = layer(o)
- o = o.transpose(1, 2)
- o = nn.utils.rnn.pack_padded_sequence(o, x_len.cpu(), batch_first=True)
- self.lstm.flatten_parameters()
- o, _ = self.lstm(o)
- o, _ = nn.utils.rnn.pad_packed_sequence(o, batch_first=True)
- o = o.reshape(b, T * self.state_per_phone, self.in_out_channels)
- x_len = x_len * self.state_per_phone
- return o, x_len
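- # Note: each LSTM direction has hidden size (in_out_channels / 2) * state_per_phone,
- # so the bidirectional output provides in_out_channels * state_per_phone features per
- # input step, which the reshape folds into state_per_phone extra time steps.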
-
- def inference(self, x, x_len):
- """Inference to the encoder.
-
- Args:
- x (torch.FloatTensor): input text indices.
- - shape: :math:`(b, T_{in})`
- x_len (torch.LongTensor): input text lengths.
- - shape: :math:`(b,)`
-
- Returns:
- Tuple[torch.FloatTensor, torch.LongTensor]: encoder outputs and output lengths.
- -shape: :math:`((b, T_{in} * states_per_phone, in_out_channels), (b,))`
- """
- b, T = x.shape
- o = self.emb(x).transpose(1, 2)
- for layer in self.convolutions:
- o = layer(o)
- o = o.transpose(1, 2)
- # self.lstm.flatten_parameters()
- o, _ = self.lstm(o)
- o = o.reshape(b, T * self.state_per_phone, self.in_out_channels)
- x_len = x_len * self.state_per_phone
- return o, x_len
-
-
-class ParameterModel(nn.Module):
- r"""Main neural network of the outputnet
-
- Note: Do not put dropout layers here, the model will not converge.
-
- Args:
- outputnet_size (List[int]): the architecture of the parameter model
- input_size (int): size of input for the first layer
- output_size (int): size of output i.e size of the feature dim
- frame_channels (int): feature dim to set the flat start bias
- flat_start_params (dict): flat start parameters to set the bias
- """
-
- def __init__(
- self,
- outputnet_size: List[int],
- input_size: int,
- output_size: int,
- frame_channels: int,
- flat_start_params: dict,
- ):
- super().__init__()
- self.frame_channels = frame_channels
-
- self.layers = nn.ModuleList(
- [Linear(inp, out) for inp, out in zip([input_size] + outputnet_size[:-1], outputnet_size)]
- )
- self.last_layer = nn.Linear(outputnet_size[-1], output_size)
- self.flat_start_output_layer(
- flat_start_params["mean"], flat_start_params["std"], flat_start_params["transition_p"]
- )
-
- def flat_start_output_layer(self, mean, std, transition_p):
- self.last_layer.weight.data.zero_()
- self.last_layer.bias.data[0 : self.frame_channels] = mean
- self.last_layer.bias.data[self.frame_channels : 2 * self.frame_channels] = OverflowUtils.inverse_softplus(std)
- self.last_layer.bias.data[2 * self.frame_channels :] = OverflowUtils.inverse_sigmod(transition_p)
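- # Flat start (sketch of the intent): with zeroed weights, these biases make every
- # state initially emit the data mean/std and the given transition probability;
- # inverse_softplus / inverse_sigmod pre-invert the activations applied downstream.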
-
- def forward(self, x):
- for layer in self.layers:
- x = F.relu(layer(x))
- x = self.last_layer(x)
- return x
-
-
-class Outputnet(nn.Module):
- r"""
- This network takes current state and previous observed values as input
- and returns its parameters, mean, standard deviation and probability
- of transition to the next state
- """
-
- def __init__(
- self,
- encoder_dim: int,
- memory_rnn_dim: int,
- frame_channels: int,
- outputnet_size: List[int],
- flat_start_params: dict,
- std_floor: float = 1e-2,
- ):
- super().__init__()
-
- self.frame_channels = frame_channels
- self.flat_start_params = flat_start_params
- self.std_floor = std_floor
-
- input_size = memory_rnn_dim + encoder_dim
- output_size = 2 * frame_channels + 1
-
- self.parametermodel = ParameterModel(
- outputnet_size=outputnet_size,
- input_size=input_size,
- output_size=output_size,
- flat_start_params=flat_start_params,
- frame_channels=frame_channels,
- )
-
- def forward(self, ar_mels, inputs):
- r"""Inputs observation and returns the means, stds and transition probability for the current state
-
- Args:
- ar_mels (torch.FloatTensor): shape (batch, prenet_dim)
- inputs (torch.FloatTensor): shape (batch, hidden_states, hidden_state_dim)
-
- Returns:
- means: means for the emission observation for each feature
- - shape: (B, hidden_states, feature_size)
- stds: standard deviations for the emission observation for each feature
- - shape: (batch, hidden_states, feature_size)
- transition_vectors: transition vector for the current hidden state
- - shape: (batch, hidden_states)
- """
- batch_size, prenet_dim = ar_mels.shape[0], ar_mels.shape[1]
- N = inputs.shape[1]
-
- ar_mels = ar_mels.unsqueeze(1).expand(batch_size, N, prenet_dim)
- ar_mels = torch.cat((ar_mels, inputs), dim=2)
- ar_mels = self.parametermodel(ar_mels)
-
- mean, std, transition_vector = (
- ar_mels[:, :, 0 : self.frame_channels],
- ar_mels[:, :, self.frame_channels : 2 * self.frame_channels],
- ar_mels[:, :, 2 * self.frame_channels :].squeeze(2),
- )
- std = F.softplus(std)
- std = self._floor_std(std)
- return mean, std, transition_vector
-
- def _floor_std(self, std):
- r"""
- It clamps the standard deviation to not to go below some level
- This removes the problem when the model tries to cheat for higher likelihoods by converting
- one of the gaussians to a point mass.
-
- Args:
- std (float Tensor): tensor containing the standard deviation to be
- """
- original_tensor = std.clone().detach()
- std = torch.clamp(std, min=self.std_floor)
- if torch.any(original_tensor != std):
- print(
- "[*] Standard deviation was floored! The model is preventing overfitting, nothing serious to worry about"
- )
- return std
-
-
-class OverflowUtils:
- @staticmethod
- def get_data_parameters_for_flat_start(
- data_loader: torch.utils.data.DataLoader, out_channels: int, states_per_phone: int
- ):
- """Generates data parameters for flat starting the HMM.
-
- Args:
- data_loader (torch.utils.data.Dataloader): _description_
- out_channels (int): mel spectrogram channels
- states_per_phone (_type_): HMM states per phone
- """
-
- # State related information for transition_p
- total_state_len = 0
- total_mel_len = 0
-
- # Useful for data mean and std
- total_mel_sum = 0
- total_mel_sq_sum = 0
-
- for batch in tqdm(data_loader, leave=False):
- text_lengths = batch["token_id_lengths"]
- mels = batch["mel"]
- mel_lengths = batch["mel_lengths"]
-
- total_state_len += torch.sum(text_lengths)
- total_mel_len += torch.sum(mel_lengths)
- total_mel_sum += torch.sum(mels)
- total_mel_sq_sum += torch.sum(torch.pow(mels, 2))
-
- data_mean = total_mel_sum / (total_mel_len * out_channels)
- data_std = torch.sqrt((total_mel_sq_sum / (total_mel_len * out_channels)) - torch.pow(data_mean, 2))
- average_num_states = total_state_len / len(data_loader.dataset)
- average_mel_len = total_mel_len / len(data_loader.dataset)
- average_duration_each_state = average_mel_len / average_num_states
- init_transition_prob = 1 / average_duration_each_state
-
- return data_mean, data_std, (init_transition_prob * states_per_phone)
-
- @staticmethod
- @torch.no_grad()
- def update_flat_start_transition(model, transition_p):
- model.neural_hmm.output_net.parametermodel.flat_start_output_layer(0.0, 1.0, transition_p)
-
- @staticmethod
- def log_clamped(x, eps=1e-04):
- """
- Avoids the log(0) problem
-
- Args:
- x (torch.tensor): input tensor
- eps (float, optional): lower bound. Defaults to 1e-04.
-
- Returns:
- torch.tensor: :math:`log(x)`
- """
- clamped_x = torch.clamp(x, min=eps)
- return torch.log(clamped_x)
-
- @staticmethod
- def inverse_sigmod(x):
- r"""
- Inverse of the sigmoid function
- """
- if not torch.is_tensor(x):
- x = torch.tensor(x)
- return OverflowUtils.log_clamped(x / (1.0 - x))
-
- @staticmethod
- def inverse_softplus(x):
- r"""
- Inverse of the softplus function
- """
- if not torch.is_tensor(x):
- x = torch.tensor(x)
- return OverflowUtils.log_clamped(torch.exp(x) - 1.0)
-
- @staticmethod
- def logsumexp(x, dim):
- r"""
- Differentiable LogSumExp: Does not creates nan gradients
- when all the inputs are -inf yeilds 0 gradients.
- Args:
- x : torch.Tensor - The input tensor
- dim: int - The dimension on which the log sum exp has to be applied
- """
-
- m, _ = x.max(dim=dim)
- mask = m == -float("inf")
- s = (x - m.masked_fill_(mask, 0).unsqueeze(dim=dim)).exp().sum(dim=dim)
- return s.masked_fill_(mask, 1).log() + m.masked_fill_(mask, -float("inf"))
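- # The masking handles the all -inf case: the max is temporarily replaced by 0 so
- # the exponentials stay finite, and the result is restored to -inf afterwards,
- # avoiding NaN gradients.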
-
- @staticmethod
- def double_pad(list_of_different_shape_tensors):
- r"""
- Pads the list of tensors in 2 dimensions
- """
- second_dim_lens = [len(a) for a in [i[0] for i in list_of_different_shape_tensors]]
- second_dim_max = max(second_dim_lens)
- padded_x = [F.pad(x, (0, second_dim_max - len(x[0]))) for x in list_of_different_shape_tensors]
- return nn.utils.rnn.pad_sequence(padded_x, batch_first=True)
diff --git a/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/__init__.py b/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/__init__.py
deleted file mode 100644
index 896d5e3364156efbcf74d8c6961448ef46f1ea1c..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .basic import BasicTextNormalizer as BasicTextNormalizer
-from .english import EnglishTextNormalizer as EnglishTextNormalizer
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Counter.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Counter.py
deleted file mode 100644
index c67bc95d5517f6c64a0192c23d05561f7b335979..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/Counter.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# -*- coding: ascii -*-
-#
-# Util/Counter.py : Fast counter for use with CTR-mode ciphers
-#
-# Written in 2008 by Dwayne C. Litzenberger
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-def new(nbits, prefix=b"", suffix=b"", initial_value=1, little_endian=False, allow_wraparound=False):
- """Create a stateful counter block function suitable for CTR encryption modes.
-
- Each call to the function returns the next counter block.
- Each counter block is made up by three parts:
-
- +------+--------------+-------+
- |prefix| counter value|postfix|
- +------+--------------+-------+
-
- The counter value is incremented by 1 at each call.
-
- Args:
- nbits (integer):
- Length of the desired counter value, in bits. It must be a multiple of 8.
- prefix (byte string):
- The constant prefix of the counter block. By default, no prefix is
- used.
- suffix (byte string):
- The constant postfix of the counter block. By default, no suffix is
- used.
- initial_value (integer):
- The initial value of the counter. Default value is 1.
- Its length in bits must not exceed the argument ``nbits``.
- little_endian (boolean):
- If ``True``, the counter number will be encoded in little endian format.
- If ``False`` (default), in big endian format.
- allow_wraparound (boolean):
- This parameter is ignored.
- Returns:
- An object that can be passed with the :data:`counter` parameter to a CTR mode
- cipher.
-
- It must hold that *len(prefix) + nbits//8 + len(suffix)* matches the
- block size of the underlying block cipher.
- """
-
- if (nbits % 8) != 0:
- raise ValueError("'nbits' must be a multiple of 8")
-
- iv_bl = initial_value.bit_length()
- if iv_bl > nbits:
- raise ValueError("Initial value takes %d bits but it is longer than "
- "the counter (%d bits)" %
- (iv_bl, nbits))
-
- # Ignore wraparound
- return {"counter_len": nbits // 8,
- "prefix": prefix,
- "suffix": suffix,
- "initial_value": initial_value,
- "little_endian": little_endian
- }
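-
-# Typical usage sketch (assumes a suitable AES key is bound to `key`):
-#   from Crypto.Cipher import AES
-#   from Crypto.Util import Counter
-#   ctr = Counter.new(128)
-#   cipher = AES.new(key, AES.MODE_CTR, counter=ctr)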
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/select_mark_area.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/select_mark_area.py
deleted file mode 100644
index c869f5cd78cdb73ade11f0d5b9ccd4a6e375ac9d..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/select_mark_area.py
+++ /dev/null
@@ -1,30 +0,0 @@
-"""
-Using Selection Interval with mark_area
-=========================================
-
-Because area is considered one object, just using the plain
-selector will select the entire area instead of just one part of it.
-
-This example shows how to use two areas, one on top of the other, and a
-`transform_filter` to fake out this effect.
-
-"""
-# category: interactive charts
-import altair as alt
-from vega_datasets import data
-
-source = data.unemployment_across_industries.url
-
-base = alt.Chart(source).mark_area(
- color='goldenrod',
- opacity=0.3
-).encode(
- x='yearmonth(date):T',
- y='sum(count):Q',
-)
-
-brush = alt.selection_interval(encodings=['x'],empty='all')
-background = base.add_selection(brush)
-selected = base.transform_filter(brush).mark_area(color='goldenrod')
-
-background + selected
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_plugin_registry.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_plugin_registry.py
deleted file mode 100644
index cbfb62ac2a2c0fc482bff57b9f1ea05295df21de..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_plugin_registry.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from ..plugin_registry import PluginRegistry
-from typing import Callable
-
-
-class TypedCallableRegistry(PluginRegistry[Callable[[int], int]]):
- pass
-
-
-class GeneralCallableRegistry(PluginRegistry):
- _global_settings = {"global_setting": None}
-
- @property
- def global_setting(self):
- return self._global_settings["global_setting"]
-
- @global_setting.setter
- def global_setting(self, val):
- self._global_settings["global_setting"] = val
-
-
-def test_plugin_registry():
- plugins = TypedCallableRegistry()
-
- assert plugins.names() == []
- assert plugins.active == ""
- assert plugins.get() is None
- assert repr(plugins) == "TypedCallableRegistry(active='', registered=[])"
-
- plugins.register("new_plugin", lambda x: x ** 2)
- assert plugins.names() == ["new_plugin"]
- assert plugins.active == ""
- assert plugins.get() is None
- assert repr(plugins) == (
- "TypedCallableRegistry(active='', " "registered=['new_plugin'])"
- )
-
- plugins.enable("new_plugin")
- assert plugins.names() == ["new_plugin"]
- assert plugins.active == "new_plugin"
- assert plugins.get()(3) == 9
- assert repr(plugins) == (
- "TypedCallableRegistry(active='new_plugin', " "registered=['new_plugin'])"
- )
-
-
-def test_plugin_registry_extra_options():
- plugins = GeneralCallableRegistry()
-
- plugins.register("metadata_plugin", lambda x, p=2: x ** p)
- plugins.enable("metadata_plugin")
- assert plugins.get()(3) == 9
-
- plugins.enable("metadata_plugin", p=3)
- assert plugins.active == "metadata_plugin"
- assert plugins.get()(3) == 27
-
- # enabling without changing name
- plugins.enable(p=2)
- assert plugins.active == "metadata_plugin"
- assert plugins.get()(3) == 9
-
-
-def test_plugin_registry_global_settings():
- plugins = GeneralCallableRegistry()
-
- # we need some default plugin, but we won't do anything with it
- plugins.register("default", lambda x: x)
- plugins.enable("default")
-
- # default value of the global flag
- assert plugins.global_setting is None
-
- # enabling changes the global state, not the options
- plugins.enable(global_setting=True)
- assert plugins.global_setting is True
- assert plugins._options == {}
-
- # context manager changes global state temporarily
- with plugins.enable(global_setting="temp"):
- assert plugins.global_setting == "temp"
- assert plugins._options == {}
- assert plugins.global_setting is True
- assert plugins._options == {}
-
-
-def test_plugin_registry_context():
- plugins = GeneralCallableRegistry()
-
- plugins.register("default", lambda x, p=2: x ** p)
-
- # At first there is no plugin enabled
- assert plugins.active == ""
- assert plugins.options == {}
-
- # Make sure the context is set and reset correctly
- with plugins.enable("default", p=6):
- assert plugins.active == "default"
- assert plugins.options == {"p": 6}
-
- assert plugins.active == ""
- assert plugins.options == {}
-
- # Make sure the context is reset even if there is an error
- try:
- with plugins.enable("default", p=6):
- assert plugins.active == "default"
- assert plugins.options == {"p": 6}
- raise ValueError()
- except ValueError:
- pass
-
- assert plugins.active == ""
- assert plugins.options == {}
-
- # Enabling without specifying name uses current name
- plugins.enable("default", p=2)
-
- with plugins.enable(p=6):
- assert plugins.active == "default"
- assert plugins.options == {"p": 6}
-
- assert plugins.active == "default"
- assert plugins.options == {"p": 2}
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/bitarray/util.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/bitarray/util.py
deleted file mode 100644
index 3e9388e6607857efb7a86ca47b99fca1c8d8d058..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/bitarray/util.py
+++ /dev/null
@@ -1,468 +0,0 @@
-# Copyright (c) 2019 - 2022, Ilan Schnell; All Rights Reserved
-# bitarray is published under the PSF license.
-#
-# Author: Ilan Schnell
-"""
-Useful utilities for working with bitarrays.
-"""
-from __future__ import absolute_import
-
-import os
-import sys
-
-from bitarray import bitarray, bits2bytes, get_default_endian
-
-from bitarray._util import (
- count_n, rindex, parity, count_and, count_or, count_xor, subset,
- serialize, ba2hex, _hex2ba, ba2base, _base2ba, vl_encode, _vl_decode,
- canonical_decode, _set_bato,
-)
-
-__all__ = [
- 'zeros', 'urandom', 'pprint', 'make_endian', 'rindex', 'strip', 'count_n',
- 'parity', 'count_and', 'count_or', 'count_xor', 'subset',
- 'ba2hex', 'hex2ba', 'ba2base', 'base2ba', 'ba2int', 'int2ba',
- 'serialize', 'deserialize', 'vl_encode', 'vl_decode',
- 'huffman_code', 'canonical_huffman', 'canonical_decode',
-]
-
-
-# tell the _util extension what the bitarray type object is, such that it
-# can check for instances thereof
-_set_bato(bitarray)
-
-_is_py2 = bool(sys.version_info[0] == 2)
-
-
-def zeros(__length, endian=None):
- """zeros(length, /, endian=None) -> bitarray
-
-Create a bitarray of length, with all values 0, and optional
-endianness, which may be 'big' or 'little'.
-"""
- if not isinstance(__length, (int, long) if _is_py2 else int):
- raise TypeError("int expected, got '%s'" % type(__length).__name__)
-
- a = bitarray(__length, get_default_endian() if endian is None else endian)
- a.setall(0)
- return a
-
-
-def urandom(__length, endian=None):
- """urandom(length, /, endian=None) -> bitarray
-
-Return a bitarray of `length` random bits (uses `os.urandom`).
-"""
- a = bitarray(0, get_default_endian() if endian is None else endian)
- a.frombytes(os.urandom(bits2bytes(__length)))
- del a[__length:]
- return a
-
-
-def pprint(__a, stream=None, group=8, indent=4, width=80):
- """pprint(bitarray, /, stream=None, group=8, indent=4, width=80)
-
-Prints the formatted representation of object on `stream` (which defaults
-to `sys.stdout`). By default, elements are grouped in bytes (8 elements),
-and 8 bytes (64 elements) per line.
-Non-bitarray objects are printed by the standard library
-function `pprint.pprint()`.
-"""
- if stream is None:
- stream = sys.stdout
-
- if not isinstance(__a, bitarray):
- import pprint as _pprint
- _pprint.pprint(__a, stream=stream, indent=indent, width=width)
- return
-
- group = int(group)
- if group < 1:
- raise ValueError('group must be >= 1')
- indent = int(indent)
- if indent < 0:
- raise ValueError('indent must be >= 0')
- width = int(width)
- if width <= indent:
- raise ValueError('width must be > %d (indent)' % indent)
-
- gpl = (width - indent) // (group + 1) # groups per line
- epl = group * gpl # elements per line
- if epl == 0:
- epl = width - indent - 2
- type_name = type(__a).__name__
- # here 4 is len("'()'")
- multiline = len(type_name) + 4 + len(__a) + len(__a) // group >= width
- if multiline:
- quotes = "'''"
- elif __a:
- quotes = "'"
- else:
- quotes = ""
-
- stream.write("%s(%s" % (type_name, quotes))
- for i, b in enumerate(__a):
- if multiline and i % epl == 0:
- stream.write('\n%s' % (indent * ' '))
- if i % group == 0 and i % epl != 0:
- stream.write(' ')
- stream.write(str(b))
-
- if multiline:
- stream.write('\n')
-
- stream.write("%s)\n" % quotes)
- stream.flush()
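-# Example: pprint(bitarray('0111001011')) writes "bitarray('01110010 11')",
-# grouping the bits into bytes of 8 by default.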
-
-
-def make_endian(__a, endian):
- """make_endian(bitarray, /, endian) -> bitarray
-
-When the endianness of the given bitarray is different from `endian`,
-return a new bitarray, with endianness `endian` and the same elements
-as the original bitarray.
-Otherwise (endianness is already `endian`) the original bitarray is returned
-unchanged.
-"""
- if not isinstance(__a, bitarray):
- raise TypeError("bitarray expected, got '%s'" % type(__a).__name__)
-
- if __a.endian() == endian:
- return __a
-
- return bitarray(__a, endian)
-
-
-def strip(__a, mode='right'):
- """strip(bitarray, /, mode='right') -> bitarray
-
-Return a new bitarray with zeros stripped from left, right or both ends.
-Allowed values for mode are the strings: `left`, `right`, `both`
-"""
- if not isinstance(__a, bitarray):
- raise TypeError("bitarray expected, got '%s'" % type(__a).__name__)
- if not isinstance(mode, str):
- raise TypeError("str expected for mode, got '%s'" % type(__a).__name__)
- if mode not in ('left', 'right', 'both'):
- raise ValueError("mode must be 'left', 'right' or 'both', got %r" %
- mode)
- first = 0
- if mode in ('left', 'both'):
- try:
- first = __a.index(1)
- except ValueError:
- return __a[:0]
-
- last = len(__a) - 1
- if mode in ('right', 'both'):
- try:
- last = rindex(__a)
- except ValueError:
- return __a[:0]
-
- return __a[first:last + 1]
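-# Examples: strip(bitarray('00101100')) -> bitarray('001011') (right strip is the
-# default); strip(bitarray('00101100'), 'both') -> bitarray('1011').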
-
-
-def hex2ba(__s, endian=None):
- """hex2ba(hexstr, /, endian=None) -> bitarray
-
-Bitarray of hexadecimal representation. hexstr may contain any number
-(including odd numbers) of hex digits (upper or lower case).
-"""
- if isinstance(__s, unicode if _is_py2 else str):
- __s = __s.encode('ascii')
- if not isinstance(__s, bytes):
- raise TypeError("str expected, got '%s'" % type(__s).__name__)
-
- a = bitarray(4 * len(__s),
- get_default_endian() if endian is None else endian)
- _hex2ba(a, __s)
- return a
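-# Example: hex2ba('a1') -> bitarray('10100001') (with the default big bit-endianness).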
-
-
-def base2ba(__n, __s, endian=None):
- """base2ba(n, asciistr, /, endian=None) -> bitarray
-
-Bitarray of the base `n` ASCII representation.
-Allowed values for `n` are 2, 4, 8, 16, 32 and 64.
-For `n=16` (hexadecimal), `hex2ba()` will be much faster, as `base2ba()`
-does not take advantage of byte level operations.
-For `n=32` the RFC 4648 Base32 alphabet is used, and for `n=64` the
-standard base 64 alphabet is used.
-"""
- if isinstance(__s, unicode if _is_py2 else str):
- __s = __s.encode('ascii')
- if not isinstance(__s, bytes):
- raise TypeError("str expected, got '%s'" % type(__s).__name__)
-
- a = bitarray(_base2ba(__n) * len(__s),
- get_default_endian() if endian is None else endian)
- _base2ba(__n, a, __s)
- return a
-
-
-def ba2int(__a, signed=False):
- """ba2int(bitarray, /, signed=False) -> int
-
-Convert the given bitarray to an integer.
-The bit-endianness of the bitarray is respected.
-`signed` indicates whether two's complement is used to represent the integer.
-"""
- if not isinstance(__a, bitarray):
- raise TypeError("bitarray expected, got '%s'" % type(__a).__name__)
- length = len(__a)
- if length == 0:
- raise ValueError("non-empty bitarray expected")
-
- le = bool(__a.endian() == 'little')
- if length % 8:
- pad = zeros(8 - length % 8, __a.endian())
- __a = __a + pad if le else pad + __a
-
- if _is_py2:
- a = bitarray(__a, 'big')
- if le:
- a.reverse()
- res = int(ba2hex(a), 16)
- else: # py3
- res = int.from_bytes(__a.tobytes(), byteorder=__a.endian())
-
- if signed and res >= 1 << (length - 1):
- res -= 1 << length
- return res
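-# Examples (default big bit-endianness): ba2int(bitarray('1001')) == 9 and
-# ba2int(bitarray('1001'), signed=True) == -7 (two's complement over 4 bits).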
-
-
-def int2ba(__i, length=None, endian=None, signed=False):
- """int2ba(int, /, length=None, endian=None, signed=False) -> bitarray
-
-Convert the given integer to a bitarray (with given endianness,
-and no leading (big-endian) / trailing (little-endian) zeros), unless
-the `length` of the bitarray is provided. An `OverflowError` is raised
-if the integer is not representable with the given number of bits.
-`signed` determines whether two's complement is used to represent the integer,
-and requires `length` to be provided.
-"""
- if not isinstance(__i, (int, long) if _is_py2 else int):
- raise TypeError("int expected, got '%s'" % type(__i).__name__)
- if length is not None:
- if not isinstance(length, int):
- raise TypeError("int expected for length")
- if length <= 0:
- raise ValueError("length must be > 0")
- if signed and length is None:
- raise TypeError("signed requires length")
-
- if __i == 0:
- # there are special cases for 0 which we'd rather not deal with below
- return zeros(length or 1, endian)
-
- if signed:
- m = 1 << (length - 1)
- if not (-m <= __i < m):
- raise OverflowError("signed integer not in range(%d, %d), "
- "got %d" % (-m, m, __i))
- if __i < 0:
- __i += 1 << length
- else: # unsigned
- if __i < 0:
- raise OverflowError("unsigned integer not positive, got %d" % __i)
- if length and __i >= (1 << length):
- raise OverflowError("unsigned integer not in range(0, %d), "
- "got %d" % (1 << length, __i))
-
- a = bitarray(0, get_default_endian() if endian is None else endian)
- le = bool(a.endian() == 'little')
- if _is_py2:
- s = hex(__i)[2:].rstrip('L')
- a.extend(hex2ba(s, 'big'))
- if le:
- a.reverse()
- else: # py3
- b = __i.to_bytes(bits2bytes(__i.bit_length()), byteorder=a.endian())
- a.frombytes(b)
-
- if length is None:
- return strip(a, 'right' if le else 'left')
-
- la = len(a)
- if la > length:
- a = a[:length] if le else a[-length:]
- if la < length:
- pad = zeros(length - la, endian)
- a = a + pad if le else pad + a
- assert len(a) == length
- return a
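-# Examples (default big bit-endianness): int2ba(6) -> bitarray('110') and
-# int2ba(6, length=8) -> bitarray('00000110'); leading zeros are only kept when
-# a length is given.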
-
-
-def deserialize(__b):
- """deserialize(bytes, /) -> bitarray
-
-Return a bitarray given a bytes-like representation such as returned
-by `serialize()`.
-"""
- if isinstance(__b, int): # as bytes(n) will return n NUL bytes
- raise TypeError("cannot convert 'int' object to bytes")
- if not isinstance(__b, bytes):
- __b = bytes(__b)
- if len(__b) == 0:
- raise ValueError("non-empty bytes expected")
-
- if _is_py2:
- head = ord(__b[0])
- if head >= 32 or head % 16 >= 8:
- raise ValueError('invalid header byte: 0x%02x' % head)
- try:
- return bitarray(__b)
- except TypeError:
- raise ValueError('invalid header byte: 0x%02x' % __b[0])
-
-
-def vl_decode(__stream, endian=None):
- """vl_decode(stream, /, endian=None) -> bitarray
-
-Decode binary stream (an integer iterator, or bytes-like object), and return
-the decoded bitarray. This function consumes only one bitarray and leaves
-the remaining stream untouched. `StopIteration` is raised when no
-terminating byte is found.
-Use `vl_encode()` for encoding.
-"""
- a = bitarray(32, get_default_endian() if endian is None else endian)
- _vl_decode(iter(__stream), a)
- return a
-
-# ------------------------------ Huffman coding -----------------------------
-
-def _huffman_tree(__freq_map):
- """_huffman_tree(dict, /) -> Node
-
-Given a dict mapping symbols to their frequency, construct a Huffman tree
-and return its root node.
-"""
- from heapq import heappush, heappop
-
- class Node(object):
- """
- A Node object will either have .symbol (leaf node) or
- both .child_0 and .child_1 (internal node) attributes.
- The .freq attribute will always be present.
- """
- def __lt__(self, other):
- # heapq needs to be able to compare the nodes
- return self.freq < other.freq
-
- minheap = []
- # create all leaf nodes and push them onto the queue
- for sym, f in __freq_map.items():
- nd = Node()
- nd.symbol = sym
- nd.freq = f
- heappush(minheap, nd)
-
- # repeat the process until only one node remains
- while len(minheap) > 1:
- # take the two nodes with smallest frequencies from the queue
- child_0 = heappop(minheap)
- child_1 = heappop(minheap)
- # construct a new (internal) node and push it onto the queue
- parent = Node()
- parent.child_0 = child_0
- parent.child_1 = child_1
- parent.freq = child_0.freq + child_1.freq
- heappush(minheap, parent)
-
- # the single remaining node is the root of the Huffman tree
- return minheap[0]
-
-
-def huffman_code(__freq_map, endian=None):
- """huffman_code(dict, /, endian=None) -> dict
-
-Given a frequency map, a dictionary mapping symbols to their frequency,
-calculate the Huffman code, i.e. a dict mapping those symbols to
-bitarrays (with given endianness). Note that the symbols are not limited
-to being strings. Symbols may be any hashable object (such as `None`).
-"""
- if not isinstance(__freq_map, dict):
- raise TypeError("dict expected, got '%s'" % type(__freq_map).__name__)
- if endian is None:
- endian = get_default_endian()
-
- b0 = bitarray('0', endian)
- b1 = bitarray('1', endian)
-
- if len(__freq_map) < 2:
- if len(__freq_map) == 0:
- raise ValueError("cannot create Huffman code with no symbols")
- # Only one symbol: Normally if only one symbol is given, the code
- # could be represented with zero bits. However here, the code should
- # be at least one bit for the .encode() and .decode() methods to work.
- # So we represent the symbol by a single code of length one, in
- # particular one 0 bit. This is an incomplete code, since if a 1 bit
- # is received, it has no meaning and will result in an error.
- return {list(__freq_map)[0]: b0}
-
- result = {}
-
- def traverse(nd, prefix=bitarray(0, endian)):
- try: # leaf
- result[nd.symbol] = prefix
- except AttributeError: # parent, so traverse each of the children
- traverse(nd.child_0, prefix + b0)
- traverse(nd.child_1, prefix + b1)
-
- traverse(_huffman_tree(__freq_map))
- return result
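-# Example sketch: huffman_code({'a': 10, 'b': 2, 'c': 1}) assigns 'a' a one-bit code
-# and 'b' and 'c' two-bit codes; the exact bit patterns depend on heap tie-breaking,
-# but a more frequent symbol never receives a longer code.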
-
-
-def canonical_huffman(__freq_map):
- """canonical_huffman(dict, /) -> tuple
-
-Given a frequency map, a dictionary mapping symbols to their frequency,
-calculate the canonical Huffman code. Returns a tuple containing:
-
-0. the canonical Huffman code as a dict mapping symbols to bitarrays
-1. a list containing the number of symbols of each code length
-2. a list of symbols in canonical order
-
-Note: the two lists may be used as input for `canonical_decode()`.
-"""
- if not isinstance(__freq_map, dict):
- raise TypeError("dict expected, got '%s'" % type(__freq_map).__name__)
-
- if len(__freq_map) < 2:
- if len(__freq_map) == 0:
- raise ValueError("cannot create Huffman code with no symbols")
- # Only one symbol: see note above in huffman_code()
- sym = list(__freq_map)[0]
- return {sym: bitarray('0', 'big')}, [0, 1], [sym]
-
- code_length = {} # map symbols to their code length
-
- def traverse(nd, length=0):
- # traverse the Huffman tree, but (unlike in huffman_code() above) we
- # now just simply record the length for reaching each symbol
- try: # leaf
- code_length[nd.symbol] = length
- except AttributeError: # parent, so traverse each of the children
- traverse(nd.child_0, length + 1)
- traverse(nd.child_1, length + 1)
-
- traverse(_huffman_tree(__freq_map))
-
- # we now have a mapping of symbols to their code length,
- # which is all we need
-
- table = sorted(code_length.items(), key=lambda item: (item[1], item[0]))
-
- maxbits = max(item[1] for item in table)
- codedict = {}
- count = (maxbits + 1) * [0]
-
- code = 0
- for i, (sym, length) in enumerate(table):
- codedict[sym] = int2ba(code, length, 'big')
- count[length] += 1
- if i + 1 < len(table):
- code = (code + 1) << (table[i + 1][1] - length)
-
- return codedict, count, [item[0] for item in table]
diff --git a/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/models.py b/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/models.py
deleted file mode 100644
index 0f0ae7e84da69f9805bc3a20463e9db488cac889..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco/models.py
+++ /dev/null
@@ -1,530 +0,0 @@
-import numpy as np
-import cv2
-import os
-import json
-from tqdm import tqdm
-from glob import glob
-import matplotlib.pyplot as plt
-import tensorflow as tf
-from tensorflow.keras import layers, models, optimizers
-
-from custom_layers import yolov4_neck, yolov4_head, nms
-from utils import load_weights, get_detection_data, draw_bbox, voc_ap, draw_plot_func, read_txt_to_list
-from config import yolo_config
-from loss import yolo_loss
-
-
-class Yolov4(object):
- def __init__(self,
- weight_path=None,
- class_name_path='coco_classes.txt',
- config=yolo_config,
- ):
- assert config['img_size'][0] == config['img_size'][1], 'not supported yet'
- assert config['img_size'][0] % config['strides'][-1] == 0, 'must be a multiple of last stride'
- self.class_names = [line.strip() for line in open(class_name_path).readlines()]
- self.img_size = yolo_config['img_size']
- self.num_classes = len(self.class_names)
- self.weight_path = weight_path
- self.anchors = np.array(yolo_config['anchors']).reshape((3, 3, 2))
- self.xyscale = yolo_config['xyscale']
- self.strides = yolo_config['strides']
- self.output_sizes = [self.img_size[0] // s for s in self.strides]
- self.class_color = {name: list(np.random.random(size=3)*255) for name in self.class_names}
- # Training
- self.max_boxes = yolo_config['max_boxes']
- self.iou_loss_thresh = yolo_config['iou_loss_thresh']
- self.config = yolo_config
- assert self.num_classes > 0, 'no classes detected!'
-
- tf.keras.backend.clear_session()
- if yolo_config['num_gpu'] > 1:
- mirrored_strategy = tf.distribute.MirroredStrategy()
- with mirrored_strategy.scope():
- self.build_model(load_pretrained=True if self.weight_path else False)
- else:
- self.build_model(load_pretrained=True if self.weight_path else False)
-
- def build_model(self, load_pretrained=True):
- # core yolo model
- input_layer = layers.Input(self.img_size)
- yolov4_output = yolov4_neck(input_layer, self.num_classes)
- self.yolo_model = models.Model(input_layer, yolov4_output)
-
- # Build training model
- y_true = [
- layers.Input(name='input_2', shape=(52, 52, 3, (self.num_classes + 5))), # label small boxes
- layers.Input(name='input_3', shape=(26, 26, 3, (self.num_classes + 5))), # label medium boxes
- layers.Input(name='input_4', shape=(13, 13, 3, (self.num_classes + 5))), # label large boxes
- layers.Input(name='input_5', shape=(self.max_boxes, 4)), # true bboxes
- ]
- loss_list = tf.keras.layers.Lambda(yolo_loss, name='yolo_loss',
- arguments={'num_classes': self.num_classes,
- 'iou_loss_thresh': self.iou_loss_thresh,
- 'anchors': self.anchors})([*self.yolo_model.output, *y_true])
- self.training_model = models.Model([self.yolo_model.input, *y_true], loss_list)
-
- # Build inference model
- yolov4_output = yolov4_head(yolov4_output, self.num_classes, self.anchors, self.xyscale)
- # output: [boxes, scores, classes, valid_detections]
- self.inference_model = models.Model(input_layer,
- nms(yolov4_output, self.img_size, self.num_classes,
- iou_threshold=self.config['iou_threshold'],
- score_threshold=self.config['score_threshold']))
-
- # Load pretrained weights from either a Darknet .weights file or a Keras .h5 checkpoint.
- if load_pretrained and self.weight_path:
- if self.weight_path.endswith('.weights'):
- load_weights(self.yolo_model, self.weight_path)
- print(f'load from {self.weight_path}')
- elif self.weight_path.endswith('.h5'):
- self.training_model.load_weights(self.weight_path)
- print(f'load from {self.weight_path}')
-
- self.training_model.compile(optimizer=optimizers.Adam(lr=1e-3),
- loss={'yolo_loss': lambda y_true, y_pred: y_pred})
-
- def load_model(self, path):
- self.yolo_model = models.load_model(path, compile=False)
- yolov4_output = yolov4_head(self.yolo_model.output, self.num_classes, self.anchors, self.xyscale)
- self.inference_model = models.Model(self.yolo_model.input,
- nms(yolov4_output, self.img_size, self.num_classes)) # [boxes, scores, classes, valid_detections]
-
- def save_model(self, path):
- self.yolo_model.save(path)
-
- def preprocess_img(self, img):
- img = cv2.resize(img, self.img_size[:2])
- img = img / 255.
- return img
-
- def fit(self, train_data_gen, epochs, val_data_gen=None, initial_epoch=0, callbacks=None):
- self.training_model.fit(train_data_gen,
- steps_per_epoch=len(train_data_gen),
- validation_data=val_data_gen,
- validation_steps=len(val_data_gen),
- epochs=epochs,
- callbacks=callbacks,
- initial_epoch=initial_epoch)
- # raw_img: RGB
- def predict_img(self, raw_img, random_color=True, plot_img=True, figsize=(10, 10), show_text=True, return_output=True):
- print('img shape: ', raw_img.shape)
- img = self.preprocess_img(raw_img)
- imgs = np.expand_dims(img, axis=0)
- pred_output = self.inference_model.predict(imgs)
- detections = get_detection_data(img=raw_img,
- model_outputs=pred_output,
- class_names=self.class_names)
-
- output_img = draw_bbox(raw_img, detections, cmap=self.class_color, random_color=random_color, figsize=figsize,
- show_text=show_text, show_img=False)
- if return_output:
- return output_img, detections
- else:
- return detections
-
- def predict(self, img_path, random_color=True, plot_img=True, figsize=(10, 10), show_text=True):
- raw_img = img_path
- return self.predict_img(raw_img, random_color, plot_img, figsize, show_text)
-
- def export_gt(self, annotation_path, gt_folder_path):
- with open(annotation_path) as file:
- for line in file:
- line = line.split(' ')
- filename = line[0].split(os.sep)[-1].split('.')[0]
- objs = line[1:]
- # export txt file
- with open(os.path.join(gt_folder_path, filename + '.txt'), 'w') as output_file:
- for obj in objs:
- x_min, y_min, x_max, y_max, class_id = [float(o) for o in obj.strip().split(',')]
- output_file.write(f'{self.class_names[int(class_id)]} {x_min} {y_min} {x_max} {y_max}\n')
-
- def export_prediction(self, annotation_path, pred_folder_path, img_folder_path, bs=2):
- with open(annotation_path) as file:
- img_paths = [os.path.join(img_folder_path, line.split(' ')[0].split(os.sep)[-1]) for line in file]
- # print(img_paths[:20])
- for batch_idx in tqdm(range(0, len(img_paths), bs)):
- # print(len(img_paths), batch_idx, batch_idx*bs, (batch_idx+1)*bs)
- paths = img_paths[batch_idx:batch_idx+bs]
- # print(paths)
- # read and process img
- imgs = np.zeros((len(paths), *self.img_size))
- raw_img_shapes = []
- for j, path in enumerate(paths):
- img = cv2.imread(path)
- raw_img_shapes.append(img.shape)
- img = self.preprocess_img(img)
- imgs[j] = img
-
- # process batch output
- b_boxes, b_scores, b_classes, b_valid_detections = self.inference_model.predict(imgs)
- for k in range(len(paths)):
- num_boxes = b_valid_detections[k]
- raw_img_shape = raw_img_shapes[k]
- boxes = b_boxes[k, :num_boxes]
- classes = b_classes[k, :num_boxes]
- scores = b_scores[k, :num_boxes]
- # print(raw_img_shape)
- boxes[:, [0, 2]] = (boxes[:, [0, 2]] * raw_img_shape[1]) # w
- boxes[:, [1, 3]] = (boxes[:, [1, 3]] * raw_img_shape[0]) # h
- cls_names = [self.class_names[int(c)] for c in classes]
- # print(raw_img_shape, boxes.astype(int), cls_names, scores)
-
- img_path = paths[k]
- filename = img_path.split(os.sep)[-1].split('.')[0]
- # print(filename)
- output_path = os.path.join(pred_folder_path, filename+'.txt')
- with open(output_path, 'w') as pred_file:
- for box_idx in range(num_boxes):
- b = boxes[box_idx]
- pred_file.write(f'{cls_names[box_idx]} {scores[box_idx]} {b[0]} {b[1]} {b[2]} {b[3]}\n')
-
-
- def eval_map(self, gt_folder_path, pred_folder_path, temp_json_folder_path, output_files_path):
- """Process Gt"""
- ground_truth_files_list = glob(gt_folder_path + '/*.txt')
- assert len(ground_truth_files_list) > 0, 'no ground truth file'
- ground_truth_files_list.sort()
- # dictionary with counter per class
- gt_counter_per_class = {}
- counter_images_per_class = {}
-
- gt_files = []
- for txt_file in ground_truth_files_list:
- file_id = txt_file.split(".txt", 1)[0]
- file_id = os.path.basename(os.path.normpath(file_id))
- # check if there is a correspondent detection-results file
- temp_path = os.path.join(pred_folder_path, (file_id + ".txt"))
- assert os.path.exists(temp_path), "Error. File not found: {}\n".format(temp_path)
- lines_list = read_txt_to_list(txt_file)
- # create ground-truth dictionary
- bounding_boxes = []
- is_difficult = False
- already_seen_classes = []
- for line in lines_list:
- class_name, left, top, right, bottom = line.split()
- # check if class is in the ignore list, if yes skip
- bbox = left + " " + top + " " + right + " " + bottom
- bounding_boxes.append({"class_name": class_name, "bbox": bbox, "used": False})
- # count that object
- if class_name in gt_counter_per_class:
- gt_counter_per_class[class_name] += 1
- else:
- # if class didn't exist yet
- gt_counter_per_class[class_name] = 1
-
- if class_name not in already_seen_classes:
- if class_name in counter_images_per_class:
- counter_images_per_class[class_name] += 1
- else:
- # if class didn't exist yet
- counter_images_per_class[class_name] = 1
- already_seen_classes.append(class_name)
-
- # dump bounding_boxes into a ".json" file
- new_temp_file = os.path.join(temp_json_folder_path, file_id+"_ground_truth.json") #TEMP_FILES_PATH + "/" + file_id + "_ground_truth.json"
- gt_files.append(new_temp_file)
- with open(new_temp_file, 'w') as outfile:
- json.dump(bounding_boxes, outfile)
-
- gt_classes = list(gt_counter_per_class.keys())
- # let's sort the classes alphabetically
- gt_classes = sorted(gt_classes)
- n_classes = len(gt_classes)
- print(gt_classes, gt_counter_per_class)
-
- """Process prediction"""
-
- dr_files_list = sorted(glob(os.path.join(pred_folder_path, '*.txt')))
-
- for class_index, class_name in enumerate(gt_classes):
- bounding_boxes = []
- for txt_file in dr_files_list:
- # the first time it checks if all the corresponding ground-truth files exist
- file_id = txt_file.split(".txt", 1)[0]
- file_id = os.path.basename(os.path.normpath(file_id))
- temp_path = os.path.join(gt_folder_path, (file_id + ".txt"))
- if class_index == 0:
- if not os.path.exists(temp_path):
- error_msg = f"Error. File not found: {temp_path}\n"
- print(error_msg)
- lines = read_txt_to_list(txt_file)
- for line in lines:
- try:
- tmp_class_name, confidence, left, top, right, bottom = line.split()
- except ValueError:
- error_msg = f"""Error: File {txt_file} in the wrong format.\n
- Expected: \n
- Received: {line} \n"""
- print(error_msg)
- if tmp_class_name == class_name:
- # print("match")
- bbox = left + " " + top + " " + right + " " + bottom
- bounding_boxes.append({"confidence": confidence, "file_id": file_id, "bbox": bbox})
- # sort detection-results by decreasing confidence
- bounding_boxes.sort(key=lambda x: float(x['confidence']), reverse=True)
- with open(temp_json_folder_path + "/" + class_name + "_dr.json", 'w') as outfile:
- json.dump(bounding_boxes, outfile)
-
- """
- Calculate the AP for each class
- """
- sum_AP = 0.0
- ap_dictionary = {}
- # open file to store the output
- with open(output_files_path + "/output.txt", 'w') as output_file:
- output_file.write("# AP and precision/recall per class\n")
- count_true_positives = {}
- for class_index, class_name in enumerate(gt_classes):
- count_true_positives[class_name] = 0
- """
- Load detection-results of that class
- """
- dr_file = temp_json_folder_path + "/" + class_name + "_dr.json"
- dr_data = json.load(open(dr_file))
-
- """
- Assign detection-results to ground-truth objects
- """
- nd = len(dr_data)
- tp = [0] * nd # creates an array of zeros of size nd
- fp = [0] * nd
- for idx, detection in enumerate(dr_data):
- file_id = detection["file_id"]
- gt_file = temp_json_folder_path + "/" + file_id + "_ground_truth.json"
- ground_truth_data = json.load(open(gt_file))
- ovmax = -1
- gt_match = -1
- # load detected object bounding-box
- bb = [float(x) for x in detection["bbox"].split()]
- for obj in ground_truth_data:
- # look for a class_name match
- if obj["class_name"] == class_name:
- bbgt = [float(x) for x in obj["bbox"].split()]
- bi = [max(bb[0], bbgt[0]), max(bb[1], bbgt[1]), min(bb[2], bbgt[2]), min(bb[3], bbgt[3])]
- iw = bi[2] - bi[0] + 1
- ih = bi[3] - bi[1] + 1
- if iw > 0 and ih > 0:
- # compute overlap (IoU) = area of intersection / area of union
- ua = (bb[2] - bb[0] + 1) * (bb[3] - bb[1] + 1) + \
- (bbgt[2] - bbgt[0]+ 1) * (bbgt[3] - bbgt[1] + 1) - iw * ih
- ov = iw * ih / ua
- if ov > ovmax:
- ovmax = ov
- gt_match = obj
-
- min_overlap = 0.5
- if ovmax >= min_overlap:
- # if "difficult" not in gt_match:
- if not bool(gt_match["used"]):
- # true positive
- tp[idx] = 1
- gt_match["used"] = True
- count_true_positives[class_name] += 1
- # update the ".json" file
- with open(gt_file, 'w') as f:
- f.write(json.dumps(ground_truth_data))
- else:
- # false positive (multiple detection)
- fp[idx] = 1
- else:
- fp[idx] = 1
-
-
- # compute precision/recall
- cumsum = 0
- for idx, val in enumerate(fp):
- fp[idx] += cumsum
- cumsum += val
- print('fp ', cumsum)
- cumsum = 0
- for idx, val in enumerate(tp):
- tp[idx] += cumsum
- cumsum += val
- print('tp ', cumsum)
- rec = tp[:]
- for idx, val in enumerate(tp):
- rec[idx] = float(tp[idx]) / gt_counter_per_class[class_name]
- print('recall ', cumsum)
- prec = tp[:]
- for idx, val in enumerate(tp):
- prec[idx] = float(tp[idx]) / (fp[idx] + tp[idx])
- print('prec ', cumsum)
-
- ap, mrec, mprec = voc_ap(rec[:], prec[:])
- sum_AP += ap
- text = "{0:.2f}%".format(
- ap * 100) + " = " + class_name + " AP " # class_name + " AP = {0:.2f}%".format(ap*100)
-
- print(text)
- ap_dictionary[class_name] = ap
-
- n_images = counter_images_per_class[class_name]
- # lamr, mr, fppi = log_average_miss_rate(np.array(prec), np.array(rec), n_images)
- # lamr_dictionary[class_name] = lamr
-
- """
- Draw plot
- """
- if True:
- plt.plot(rec, prec, '-o')
- # add a new penultimate point to the list (mrec[-2], 0.0)
- # since the last line segment (and respective area) do not affect the AP value
- area_under_curve_x = mrec[:-1] + [mrec[-2]] + [mrec[-1]]
- area_under_curve_y = mprec[:-1] + [0.0] + [mprec[-1]]
- plt.fill_between(area_under_curve_x, 0, area_under_curve_y, alpha=0.2, edgecolor='r')
- # set window title
- fig = plt.gcf() # gcf - get current figure
- fig.canvas.set_window_title('AP ' + class_name)
- # set plot title
- plt.title('class: ' + text)
- # plt.suptitle('This is a somewhat long figure title', fontsize=16)
- # set axis titles
- plt.xlabel('Recall')
- plt.ylabel('Precision')
- # optional - set axes
- axes = plt.gca() # gca - get current axes
- axes.set_xlim([0.0, 1.0])
- axes.set_ylim([0.0, 1.05]) # .05 to give some extra space
- # Alternative option -> wait for button to be pressed
- # while not plt.waitforbuttonpress(): pass # wait for key display
- # Alternative option -> normal display
- plt.show()
- # save the plot
- # fig.savefig(output_files_path + "/classes/" + class_name + ".png")
- # plt.cla() # clear axes for next plot
-
- # if show_animation:
- # cv2.destroyAllWindows()
-
- output_file.write("\n# mAP of all classes\n")
- mAP = sum_AP / n_classes
- text = "mAP = {0:.2f}%".format(mAP * 100)
- output_file.write(text + "\n")
- print(text)
-
- """
- Count total of detection-results
- """
- # iterate through all the files
- det_counter_per_class = {}
- for txt_file in dr_files_list:
- # get lines to list
- lines_list = read_txt_to_list(txt_file)
- for line in lines_list:
- class_name = line.split()[0]
- # check if class is in the ignore list, if yes skip
- # if class_name in args.ignore:
- # continue
- # count that object
- if class_name in det_counter_per_class:
- det_counter_per_class[class_name] += 1
- else:
- # if class didn't exist yet
- det_counter_per_class[class_name] = 1
- # print(det_counter_per_class)
- dr_classes = list(det_counter_per_class.keys())
-
- """
- Plot the total number of occurrences of each class in the ground-truth
- """
- if True:
- window_title = "ground-truth-info"
- plot_title = "ground-truth\n"
- plot_title += "(" + str(len(ground_truth_files_list)) + " files and " + str(n_classes) + " classes)"
- x_label = "Number of objects per class"
- output_path = output_files_path + "/ground-truth-info.png"
- to_show = False
- plot_color = 'forestgreen'
- draw_plot_func(
- gt_counter_per_class,
- n_classes,
- window_title,
- plot_title,
- x_label,
- output_path,
- to_show,
- plot_color,
- '',
- )
-
- """
- Finish counting true positives
- """
- for class_name in dr_classes:
- # if class exists in detection-result but not in ground-truth then there are no true positives in that class
- if class_name not in gt_classes:
- count_true_positives[class_name] = 0
- # print(count_true_positives)
-
- """
- Plot the total number of occurences of each class in the "detection-results" folder
- """
- if True:
- window_title = "detection-results-info"
- # Plot title
- plot_title = "detection-results\n"
- plot_title += "(" + str(len(dr_files_list)) + " files and "
- count_non_zero_values_in_dictionary = sum(int(x) > 0 for x in list(det_counter_per_class.values()))
- plot_title += str(count_non_zero_values_in_dictionary) + " detected classes)"
- # end Plot title
- x_label = "Number of objects per class"
- output_path = output_files_path + "/detection-results-info.png"
- to_show = False
- plot_color = 'forestgreen'
- true_p_bar = count_true_positives
- draw_plot_func(
- det_counter_per_class,
- len(det_counter_per_class),
- window_title,
- plot_title,
- x_label,
- output_path,
- to_show,
- plot_color,
- true_p_bar
- )
-
- """
- Draw mAP plot (Show AP's of all classes in decreasing order)
- """
- if True:
- window_title = "mAP"
- plot_title = "mAP = {0:.2f}%".format(mAP * 100)
- x_label = "Average Precision"
- output_path = output_files_path + "/mAP.png"
- to_show = True
- plot_color = 'royalblue'
- draw_plot_func(
- ap_dictionary,
- n_classes,
- window_title,
- plot_title,
- x_label,
- output_path,
- to_show,
- plot_color,
- ""
- )
-
- def predict_raw(self, img_path):
- raw_img = cv2.imread(img_path)
- print('img shape: ', raw_img.shape)
- img = self.preprocess_img(raw_img)
- imgs = np.expand_dims(img, axis=0)
- return self.yolo_model.predict(imgs)
-
- def predict_nonms(self, img_path, iou_threshold=0.413, score_threshold=0.1):
- raw_img = cv2.imread(img_path)
- print('img shape: ', raw_img.shape)
- img = self.preprocess_img(raw_img)
- imgs = np.expand_dims(img, axis=0)
- yolov4_output = self.yolo_model.predict(imgs)
- output = yolov4_head(yolov4_output, self.num_classes, self.anchors, self.xyscale)
- pred_output = nms(output, self.img_size, self.num_classes, iou_threshold, score_threshold)
- pred_output = [p.numpy() for p in pred_output]
- detections = get_detection_data(img=raw_img,
- model_outputs=pred_output,
- class_names=self.class_names)
- draw_bbox(raw_img, detections, cmap=self.class_color, random_color=True)
- return detections
-
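-# Minimal usage sketch (assumes a Darknet weight file 'yolov4.weights' and an RGB
-# image array `img`):
-#   model = Yolov4(weight_path='yolov4.weights', class_name_path='coco_classes.txt')
-#   output_img, detections = model.predict_img(img)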
diff --git a/spaces/badayvedat/LLaVA/llava/eval/generate_webpage_data_from_table.py b/spaces/badayvedat/LLaVA/llava/eval/generate_webpage_data_from_table.py
deleted file mode 100644
index 92602258ccd953a1d7137056aaf15c8de8166e21..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/LLaVA/llava/eval/generate_webpage_data_from_table.py
+++ /dev/null
@@ -1,111 +0,0 @@
-"""Generate json file for webpage."""
-import json
-import os
-import re
-
-# models = ['llama', 'alpaca', 'gpt35', 'bard']
-models = ['vicuna']
-
-
-def read_jsonl(path: str, key: str=None):
- data = []
- with open(os.path.expanduser(path)) as f:
- for line in f:
- if not line:
- continue
- data.append(json.loads(line))
- if key is not None:
- data.sort(key=lambda x: x[key])
- data = {item[key]: item for item in data}
- return data
-
-
-def trim_hanging_lines(s: str, n: int) -> str:
- s = s.strip()
- for _ in range(n):
- s = s.split('\n', 1)[1].strip()
- return s
-
-
-if __name__ == '__main__':
- questions = read_jsonl('table/question.jsonl', key='question_id')
-
- # alpaca_answers = read_jsonl('table/answer/answer_alpaca-13b.jsonl', key='question_id')
- # bard_answers = read_jsonl('table/answer/answer_bard.jsonl', key='question_id')
- # gpt35_answers = read_jsonl('table/answer/answer_gpt35.jsonl', key='question_id')
- # llama_answers = read_jsonl('table/answer/answer_llama-13b.jsonl', key='question_id')
- vicuna_answers = read_jsonl('table/answer/answer_vicuna-13b.jsonl', key='question_id')
- ours_answers = read_jsonl('table/results/llama-13b-hf-alpaca.jsonl', key='question_id')
-
- review_vicuna = read_jsonl('table/review/review_vicuna-13b_llama-13b-hf-alpaca.jsonl', key='question_id')
- # review_alpaca = read_jsonl('table/review/review_alpaca-13b_vicuna-13b.jsonl', key='question_id')
- # review_bard = read_jsonl('table/review/review_bard_vicuna-13b.jsonl', key='question_id')
- # review_gpt35 = read_jsonl('table/review/review_gpt35_vicuna-13b.jsonl', key='question_id')
- # review_llama = read_jsonl('table/review/review_llama-13b_vicuna-13b.jsonl', key='question_id')
-
- records = []
- for qid in questions.keys():
- r = {
- 'id': qid,
- 'category': questions[qid]['category'],
- 'question': questions[qid]['text'],
- 'answers': {
- # 'alpaca': alpaca_answers[qid]['text'],
- # 'llama': llama_answers[qid]['text'],
- # 'bard': bard_answers[qid]['text'],
- # 'gpt35': gpt35_answers[qid]['text'],
- 'vicuna': vicuna_answers[qid]['text'],
- 'ours': ours_answers[qid]['text'],
- },
- 'evaluations': {
- # 'alpaca': review_alpaca[qid]['text'],
- # 'llama': review_llama[qid]['text'],
- # 'bard': review_bard[qid]['text'],
- 'vicuna': review_vicuna[qid]['content'],
- # 'gpt35': review_gpt35[qid]['text'],
- },
- 'scores': {
- 'vicuna': review_vicuna[qid]['tuple'],
- # 'alpaca': review_alpaca[qid]['score'],
- # 'llama': review_llama[qid]['score'],
- # 'bard': review_bard[qid]['score'],
- # 'gpt35': review_gpt35[qid]['score'],
- },
- }
-
- # cleanup data
- cleaned_evals = {}
- for k, v in r['evaluations'].items():
- v = v.strip()
- lines = v.split('\n')
- # trim the first line if it's a pair of numbers
- if re.match(r'\d+[, ]+\d+', lines[0]):
- lines = lines[1:]
- v = '\n'.join(lines)
- cleaned_evals[k] = v.replace('Assistant 1', "**Assistant 1**").replace('Assistant 2', '**Assistant 2**')
-
- r['evaluations'] = cleaned_evals
- records.append(r)
-
- # Reorder the records, this is optional
- for r in records:
- if r['id'] <= 20:
- r['id'] += 60
- else:
- r['id'] -= 20
- for r in records:
- if r['id'] <= 50:
- r['id'] += 10
- elif 50 < r['id'] <= 60:
- r['id'] -= 50
- for r in records:
- if r['id'] == 7:
- r['id'] = 1
- elif r['id'] < 7:
- r['id'] += 1
-
- records.sort(key=lambda x: x['id'])
-
- # Write to file
- with open('webpage/data.json', 'w') as f:
- json.dump({'questions': records, 'models': models}, f, indent=2)
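-
-# --- Editor's hedged sketch (not part of the original script) ---
-# Reading back the file written above; the key names follow the json.dump call.
-def load_webpage_data(path: str = 'webpage/data.json') -> dict:
-    with open(path) as f:
-        payload = json.load(f)
-    # payload['models'] -> ['vicuna']; payload['questions'] -> the reordered records
-    return payload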
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/stats.min.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/stats.min.js
deleted file mode 100644
index ef000cf4a9ab3b21369da0fe2be3881650127acb..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/stats.min.js
+++ /dev/null
@@ -1,5 +0,0 @@
-// stats.js - http://github.com/mrdoob/stats.js
-var Stats=function(){function h(a){c.appendChild(a.dom);return a}function k(a){for(var d=0;de+1E3&&(r.update(1E3*a/(c-e),100),e=c,a=0,t)){var d=performance.memory;t.update(d.usedJSHeapSize/1048576,d.jsHeapSizeLimit/1048576)}return c},update:function(){g=this.end()},domElement:c,setMode:k}};
-Stats.Panel=function(h,k,l){var c=Infinity,g=0,e=Math.round,a=e(window.devicePixelRatio||1),r=80*a,f=48*a,t=3*a,u=2*a,d=3*a,m=15*a,n=74*a,p=30*a,q=document.createElement("canvas");q.width=r;q.height=f;q.style.cssText="width:80px;height:48px";var b=q.getContext("2d");b.font="bold "+9*a+"px Helvetica,Arial,sans-serif";b.textBaseline="top";b.fillStyle=l;b.fillRect(0,0,r,f);b.fillStyle=k;b.fillText(h,t,u);b.fillRect(d,m,n,p);b.fillStyle=l;b.globalAlpha=.9;b.fillRect(d,m,n,p);return{dom:q,update:function(f,
-v){c=Math.min(c,f);g=Math.max(g,f);b.fillStyle=l;b.globalAlpha=1;b.fillRect(0,0,r,m);b.fillStyle=k;b.fillText(e(f)+" "+h+" ("+e(c)+"-"+e(g)+")",t,u);b.drawImage(q,d+a,m,n-a,p,d,m,n-a,p);b.fillRect(d+n-a,m,a,p);b.fillStyle=l;b.globalAlpha=.9;b.fillRect(d+n-a,m,a,e((1-f/v)*p))}}};"object"===typeof module&&(module.exports=Stats);
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmap_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmap_vertex.glsl.js
deleted file mode 100644
index b8549a03b0f084b90d47a265f783ea8b9ecd762c..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmap_vertex.glsl.js
+++ /dev/null
@@ -1,46 +0,0 @@
-export default /* glsl */`
-#ifdef USE_SHADOWMAP
-
- #if NUM_DIR_LIGHTS > 0
-
- #pragma unroll_loop
- for ( int i = 0; i < NUM_DIR_LIGHTS; i ++ ) {
-
- vDirectionalShadowCoord[ i ] = directionalShadowMatrix[ i ] * worldPosition;
-
- }
-
- #endif
-
- #if NUM_SPOT_LIGHTS > 0
-
- #pragma unroll_loop
- for ( int i = 0; i < NUM_SPOT_LIGHTS; i ++ ) {
-
- vSpotShadowCoord[ i ] = spotShadowMatrix[ i ] * worldPosition;
-
- }
-
- #endif
-
- #if NUM_POINT_LIGHTS > 0
-
- #pragma unroll_loop
- for ( int i = 0; i < NUM_POINT_LIGHTS; i ++ ) {
-
- vPointShadowCoord[ i ] = pointShadowMatrix[ i ] * worldPosition;
-
- }
-
- #endif
-
- /*
- #if NUM_RECT_AREA_LIGHTS > 0
-
- // TODO (abelnation): update vAreaShadowCoord with area light info
-
- #endif
- */
-
-#endif
-`;
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture2DArray.js b/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture2DArray.js
deleted file mode 100644
index 2a90d4cc4f9972b36fa04379d9347795d0f5c54e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/textures/DataTexture2DArray.js
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
- * @author Takahiro https://github.com/takahirox
- */
-
-import { Texture } from './Texture.js';
-import { ClampToEdgeWrapping, NearestFilter } from '../constants.js';
-
-function DataTexture2DArray( data, width, height, depth ) {
-
- Texture.call( this, null );
-
- this.image = { data: data, width: width, height: height, depth: depth };
-
- this.magFilter = NearestFilter;
- this.minFilter = NearestFilter;
-
- this.wrapR = ClampToEdgeWrapping;
-
- this.generateMipmaps = false;
- this.flipY = false;
-
-}
-
-DataTexture2DArray.prototype = Object.create( Texture.prototype );
-DataTexture2DArray.prototype.constructor = DataTexture2DArray;
-DataTexture2DArray.prototype.isDataTexture2DArray = true;
-
-export { DataTexture2DArray };
diff --git a/spaces/billusanda007/MNIST/app.py b/spaces/billusanda007/MNIST/app.py
deleted file mode 100644
index 225e16d0f2b59ac0665aa5383298137074877788..0000000000000000000000000000000000000000
--- a/spaces/billusanda007/MNIST/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy as np
-import gradio as gr
-from PIL import Image
-
-def ReLU(Z):
- return np.maximum(Z, 0)
-
-def softmax(Z):
- A = np.exp(Z) / np.sum(np.exp(Z), axis=0)
- return A
-
-def init_params():
- W1 = np.random.rand(10, 784) - 0.5
- b1 = np.random.rand(10, 1) - 0.5
- W2 = np.random.rand(10, 10) - 0.5
- b2 = np.random.rand(10, 1) - 0.5
- return W1, b1, W2, b2
-
-def forward_prop(W1, b1, W2, b2, X):
- Z1 = W1.dot(X) + b1
- A1 = ReLU(Z1)
- Z2 = W2.dot(A1) + b2
- A2 = softmax(Z2)
- return Z1, A1, Z2, A2
-
-def get_predictions(A2):
- return np.argmax(A2, axis=0)
-
-def make_predictions(X, W1, b1, W2, b2):
- _, _, _, A2 = forward_prop(W1, b1, W2, b2, X)
- predictions = get_predictions(A2)
- return predictions
-
-def predict_digit(img):
- # Load the trained parameters
- params = np.load("trained_params.npz", allow_pickle=True)
- W1, b1, W2, b2 = params["W1"], params["b1"], params["W2"], params["b2"]
-
- # Convert the sketchpad drawing to grayscale and resize it to (28, 28)
- img_pil = Image.fromarray(np.uint8(img * 255)).convert("L")
-
- res = img_pil.resize((28, 28))
-
- # Convert the image to a NumPy array and normalize it
- X = np.array(res).reshape((784, 1)) / 255.
-
- # Get the prediction
- prediction = make_predictions(X, W1, b1, W2, b2)
-
- return int(prediction)
-
-iface = gr.Interface(
- fn=predict_digit,
- inputs="sketchpad",
- outputs=gr.outputs.Label(num_top_classes=3),
- live=True,
- capture_session=True,
- title="Handwritten Digit Recognizer",
- description="Draw a digit using your mouse, and the model will try to recognize it.",
-)
-
-if __name__ == "__main__":
- iface.launch()
\ No newline at end of file
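-
-# --- Editor's hedged sketch (not part of the original app) ---
-# "trained_params.npz" is loaded above but never created in this repo, so the
-# training code is unknown; any loop that ends with arrays of the shapes from
-# init_params() can produce a compatible file:
-def save_trained_params(W1, b1, W2, b2, path="trained_params.npz"):
-    # expected shapes: W1 (10, 784), b1 (10, 1), W2 (10, 10), b2 (10, 1)
-    np.savez(path, W1=W1, b1=b1, W2=W2, b2=b2)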
diff --git a/spaces/bioriAsaeru/text-to-voice/Caspernirvananb156wirelessdriver64bit EXCLUSIVE.md b/spaces/bioriAsaeru/text-to-voice/Caspernirvananb156wirelessdriver64bit EXCLUSIVE.md
deleted file mode 100644
index 5affb6811ef17121b5a3147081d2d00640662531..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Caspernirvananb156wirelessdriver64bit EXCLUSIVE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-caspernirvananb156wirelessdriver64bit
Download File ⇔ https://urloso.com/2uyPHV
-
-I'm currently a Replacement Pilot at VFA-106 flying the F/A-18E/F at Oceana. 95643a41ab. caspernirvananb156wirelessdriver64bit 1fdad05405
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/DJ Sbu ft Zahara Lengoma MP3 12 The Song That Took the World by Storm.md b/spaces/bioriAsaeru/text-to-voice/DJ Sbu ft Zahara Lengoma MP3 12 The Song That Took the World by Storm.md
deleted file mode 100644
index 058616723fea08ba7e3cada0853e5e1fa989609b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/DJ Sbu ft Zahara Lengoma MP3 12 The Song That Took the World by Storm.md
+++ /dev/null
@@ -1,6 +0,0 @@
-dj sbu ft zahara lengoma mp3 12
DOWNLOAD ✔ https://urloso.com/2uyOtx
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Download Driver Modem Huawei E1550 Untuk Windows 8 Best Sources and Reviews.md b/spaces/bioriAsaeru/text-to-voice/Download Driver Modem Huawei E1550 Untuk Windows 8 Best Sources and Reviews.md
deleted file mode 100644
index adaf1d83dc7c731c6ed8dcd0c4174847b566ed40..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download Driver Modem Huawei E1550 Untuk Windows 8 Best Sources and Reviews.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-2) On your Mac, install the driver (MobileWifi_Driver.app) and move the program (MobileWifi.app) to your Applications folder and/or add it to your Dock (both files are in "The driver tool of Device (for Mac 10.12)" on -tkb.huawei.com/tkbapp/downloadWebsiteService?websiteId=665136).
-To install the Huawei E1550 driver on a mobile device or on a desktop computer, follow a simple step-by-step procedure. Huawei provides automatic driver downloads from links. How does it work?
-Download Driver Modem Huawei E1550 Untuk Windows 8
Download File ★ https://urloso.com/2uyRL5
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bla/tranny/App/Worker.py b/spaces/bla/tranny/App/Worker.py
deleted file mode 100644
index a4296616e81428b3e47334811f25d43309de052d..0000000000000000000000000000000000000000
--- a/spaces/bla/tranny/App/Worker.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from celery import Celery, chain
-import os
-import time
-from App import celery_config
-import yt_dlp
-import tempfile
-from App.Transcription.Utils.audio_transcription import transcribe_file
-from App.Embedding.utils.Initialize import encode, generateChunks
-
-celery = Celery()
-celery.config_from_object(celery_config)
-celery.conf.update(
- # Other Celery configuration settings
- CELERYD_LOG_LEVEL='DEBUG', # Set log level to DEBUG for the worker
-)
-
-@celery.task(name="embbeding", bind=True)
-def generate_store(self, data, task_id):
- chunks = generateChunks(data, task_id)
- encode(chunks)
- print("hellooo")
-
-
-@celery.task(name="transcription", bind=True)
-def transcription_task(self, file_path, model_size="tiny"):
- data = transcribe_file(state=self, file_path=file_path, model_size=model_size)
- generate_store.delay(data["content"], self.request.id)
- return data
-
-
-@celery.task(name="download", bind=True)
-def downloadfile(self, url, ydl_opts, model_size="base"):
- # report progress: mark the task as downloading so clients polling the task id can see it
- self.update_state(state="Downloading File..", meta={})
-
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([url])
-
- # report progress: download finished; transcription runs next
- self.update_state(state="Downloading complete", meta={})
- audio_file = ydl_opts["outtmpl"]
- print(model_size, "worker after")
- # print(audio_file["default"])
- data = transcribe_file(
- state=self, file_path=audio_file["default"], model_size=model_size
- )
- generate_store.delay(data["content"], self.request.id)
- return data
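-
-# --- Editor's hedged usage sketch (not part of the original module) ---
-# Enqueueing the pipeline defined above. downloadfile() only reads
-# ydl_opts["outtmpl"]["default"], so the other yt-dlp options are illustrative.
-def enqueue_transcription(url: str, out_path: str = "/tmp/audio.m4a", model_size: str = "base"):
-    ydl_opts = {"format": "bestaudio/best", "outtmpl": {"default": out_path}}
-    result = downloadfile.delay(url, ydl_opts, model_size=model_size)
-    return result.id  # same id that generateChunks() later receives for the embedding step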
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/music_dataset.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/music_dataset.py
deleted file mode 100644
index 4e28796939f9cde2b23a2c4bf43fd7ba5fa26b2d..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/music_dataset.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Dataset of music tracks with rich metadata.
-"""
-from dataclasses import dataclass, field, fields, replace
-import gzip
-import json
-import logging
-from pathlib import Path
-import random
-import typing as tp
-
-import torch
-
-from .info_audio_dataset import (
- InfoAudioDataset,
- AudioInfo,
- get_keyword_list,
- get_keyword,
- get_string
-)
-from ..modules.conditioners import (
- ConditioningAttributes,
- JointEmbedCondition,
- WavCondition,
-)
-from ..utils.utils import warn_once
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class MusicInfo(AudioInfo):
- """Segment info augmented with music metadata.
- """
- # music-specific metadata
- title: tp.Optional[str] = None
- artist: tp.Optional[str] = None # anonymized artist id, used to ensure no overlap between splits
- key: tp.Optional[str] = None
- bpm: tp.Optional[float] = None
- genre: tp.Optional[str] = None
- moods: tp.Optional[list] = None
- keywords: tp.Optional[list] = None
- description: tp.Optional[str] = None
- name: tp.Optional[str] = None
- instrument: tp.Optional[str] = None
- # original wav accompanying the metadata
- self_wav: tp.Optional[WavCondition] = None
- # dict mapping attributes names to tuple of wav, text and metadata
- joint_embed: tp.Dict[str, JointEmbedCondition] = field(default_factory=dict)
-
- @property
- def has_music_meta(self) -> bool:
- return self.name is not None
-
- def to_condition_attributes(self) -> ConditioningAttributes:
- out = ConditioningAttributes()
- for _field in fields(self):
- key, value = _field.name, getattr(self, _field.name)
- if key == 'self_wav':
- out.wav[key] = value
- elif key == 'joint_embed':
- for embed_attribute, embed_cond in value.items():
- out.joint_embed[embed_attribute] = embed_cond
- else:
- if isinstance(value, list):
- value = ' '.join(value)
- out.text[key] = value
- return out
-
- @staticmethod
- def attribute_getter(attribute):
- if attribute == 'bpm':
- preprocess_func = get_bpm
- elif attribute == 'key':
- preprocess_func = get_musical_key
- elif attribute in ['moods', 'keywords']:
- preprocess_func = get_keyword_list
- elif attribute in ['genre', 'name', 'instrument']:
- preprocess_func = get_keyword
- elif attribute in ['title', 'artist', 'description']:
- preprocess_func = get_string
- else:
- preprocess_func = None
- return preprocess_func
-
- @classmethod
- def from_dict(cls, dictionary: dict, fields_required: bool = False):
- _dictionary: tp.Dict[str, tp.Any] = {}
-
- # allow a subset of attributes to not be loaded from the dictionary
- # these attributes may be populated later
- post_init_attributes = ['self_wav', 'joint_embed']
- optional_fields = ['keywords']
-
- for _field in fields(cls):
- if _field.name in post_init_attributes:
- continue
- elif _field.name not in dictionary:
- if fields_required and _field.name not in optional_fields:
- raise KeyError(f"Unexpected missing key: {_field.name}")
- else:
- preprocess_func: tp.Optional[tp.Callable] = cls.attribute_getter(_field.name)
- value = dictionary[_field.name]
- if preprocess_func:
- value = preprocess_func(value)
- _dictionary[_field.name] = value
- return cls(**_dictionary)
-
-
-def augment_music_info_description(music_info: MusicInfo, merge_text_p: float = 0.,
- drop_desc_p: float = 0., drop_other_p: float = 0.) -> MusicInfo:
- """Augment MusicInfo description with additional metadata fields and potential dropout.
- Additional textual attributes are added given probability 'merge_text_p' and
- the original textual description is dropped from the augmented description given probability drop_desc_p.
-
- Args:
- music_info (MusicInfo): The music metadata to augment.
- merge_text_p (float): Probability of merging additional metadata to the description.
- If provided value is 0, then no merging is performed.
- drop_desc_p (float): Probability of dropping the original description on text merge.
- If provided value is 0, then no dropout is performed.
- drop_other_p (float): Probability of dropping the other fields used for text augmentation.
- Returns:
- MusicInfo: The MusicInfo with augmented textual description.
- """
- def is_valid_field(field_name: str, field_value: tp.Any) -> bool:
- valid_field_name = field_name in ['key', 'bpm', 'genre', 'moods', 'instrument', 'keywords']
- valid_field_value = field_value is not None and isinstance(field_value, (int, float, str, list))
- keep_field = random.uniform(0, 1) < drop_other_p
- return valid_field_name and valid_field_value and keep_field
-
- def process_value(v: tp.Any) -> str:
- if isinstance(v, (int, float, str)):
- return str(v)
- if isinstance(v, list):
- return ", ".join(v)
- else:
- raise ValueError(f"Unknown type for text value! ({type(v), v})")
-
- description = music_info.description
-
- metadata_text = ""
- if random.uniform(0, 1) < merge_text_p:
- meta_pairs = [f'{_field.name}: {process_value(getattr(music_info, _field.name))}'
- for _field in fields(music_info) if is_valid_field(_field.name, getattr(music_info, _field.name))]
- random.shuffle(meta_pairs)
- metadata_text = ". ".join(meta_pairs)
- description = description if not random.uniform(0, 1) < drop_desc_p else None
- logger.debug(f"Applying text augmentation on MMI info. description: {description}, metadata: {metadata_text}")
-
- if description is None:
- description = metadata_text if len(metadata_text) > 1 else None
- else:
- description = ". ".join([description.rstrip('.'), metadata_text])
- description = description.strip() if description else None
-
- music_info = replace(music_info)
- music_info.description = description
- return music_info
-
-
-class Paraphraser:
- def __init__(self, paraphrase_source: tp.Union[str, Path], paraphrase_p: float = 0.):
- self.paraphrase_p = paraphrase_p
- open_fn = gzip.open if str(paraphrase_source).lower().endswith('.gz') else open
- with open_fn(paraphrase_source, 'rb') as f: # type: ignore
- self.paraphrase_source = json.loads(f.read())
- logger.info(f"loaded paraphrasing source from: {paraphrase_source}")
-
- def sample_paraphrase(self, audio_path: str, description: str):
- if random.random() >= self.paraphrase_p:
- return description
- info_path = Path(audio_path).with_suffix('.json')
- if info_path not in self.paraphrase_source:
- warn_once(logger, f"{info_path} not in paraphrase source!")
- return description
- new_desc = random.choice(self.paraphrase_source[info_path])
- logger.debug(f"{description} -> {new_desc}")
- return new_desc
-
-
-class MusicDataset(InfoAudioDataset):
- """Music dataset is an AudioDataset with music-related metadata.
-
- Args:
- info_fields_required (bool): Whether to enforce having required fields.
- merge_text_p (float): Probability of merging additional metadata to the description.
- drop_desc_p (float): Probability of dropping the original description on text merge.
- drop_other_p (float): Probability of dropping the other fields used for text augmentation.
- joint_embed_attributes (list[str]): A list of attributes for which joint embedding metadata is returned.
- paraphrase_source (str, optional): Path to the .json or .json.gz file containing the
- paraphrases for the description. The json should be a dict whose keys are the
- original info paths (e.g. track_path.json) and whose values are lists of possible
- paraphrases.
- paraphrase_p (float): probability of taking a paraphrase.
-
- See `audiocraft.data.info_audio_dataset.InfoAudioDataset` for full initialization arguments.
- """
- def __init__(self, *args, info_fields_required: bool = True,
- merge_text_p: float = 0., drop_desc_p: float = 0., drop_other_p: float = 0.,
- joint_embed_attributes: tp.List[str] = [],
- paraphrase_source: tp.Optional[str] = None, paraphrase_p: float = 0,
- **kwargs):
- kwargs['return_info'] = True # We require the info for each song of the dataset.
- super().__init__(*args, **kwargs)
- self.info_fields_required = info_fields_required
- self.merge_text_p = merge_text_p
- self.drop_desc_p = drop_desc_p
- self.drop_other_p = drop_other_p
- self.joint_embed_attributes = joint_embed_attributes
- self.paraphraser = None
- if paraphrase_source is not None:
- self.paraphraser = Paraphraser(paraphrase_source, paraphrase_p)
-
- def __getitem__(self, index):
- wav, info = super().__getitem__(index)
- info_data = info.to_dict()
- music_info_path = Path(info.meta.path).with_suffix('.json')
-
- if Path(music_info_path).exists():
- with open(music_info_path, 'r') as json_file:
- music_data = json.load(json_file)
- music_data.update(info_data)
- music_info = MusicInfo.from_dict(music_data, fields_required=self.info_fields_required)
- if self.paraphraser is not None:
- music_info.description = self.paraphraser.sample_paraphrase(music_info.meta.path, music_info.description)
- if self.merge_text_p:
- music_info = augment_music_info_description(
- music_info, self.merge_text_p, self.drop_desc_p, self.drop_other_p)
- else:
- music_info = MusicInfo.from_dict(info_data, fields_required=False)
-
- music_info.self_wav = WavCondition(
- wav=wav[None], length=torch.tensor([info.n_frames]),
- sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time])
-
- for att in self.joint_embed_attributes:
- att_value = getattr(music_info, att)
- joint_embed_cond = JointEmbedCondition(
- wav[None], [att_value], torch.tensor([info.n_frames]),
- sample_rate=[info.sample_rate], path=[info.meta.path], seek_time=[info.seek_time])
- music_info.joint_embed[att] = joint_embed_cond
-
- return wav, music_info
-
-
-def get_musical_key(value: tp.Optional[str]) -> tp.Optional[str]:
- """Preprocess key keywords, discarding them if there are multiple key defined."""
- if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None':
- return None
- elif ',' in value:
- # For now, we discard when multiple keys are defined separated with comas
- return None
- else:
- return value.strip().lower()
-
-
-def get_bpm(value: tp.Optional[str]) -> tp.Optional[float]:
- """Preprocess to a float."""
- if value is None:
- return None
- try:
- return float(value)
- except ValueError:
- return None
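-
-# --- Editor's hedged sketch (not part of the original module) ---
-# The two helpers above are pure functions, so a quick sanity check is easy;
-# the example values are illustrative.
-if __name__ == "__main__":
-    assert get_musical_key(" C Minor ") == "c minor"
-    assert get_musical_key("C, G") is None      # multiple keys are discarded
-    assert get_bpm("127.5") == 127.5
-    assert get_bpm("not a number") is None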
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/__init__.py
deleted file mode 100644
index 761a3d1c7afa049e9779ee9fc4d299e9aae38cad..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList
-from .deform_conv import DeformConv, ModulatedDeformConv
-from .mask_ops import paste_masks_in_image
-from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated
-from .roi_align import ROIAlign, roi_align
-from .roi_align_rotated import ROIAlignRotated, roi_align_rotated
-from .shape_spec import ShapeSpec
-from .wrappers import (
- BatchNorm2d,
- Conv2d,
- ConvTranspose2d,
- cat,
- interpolate,
- Linear,
- nonzero_tuple,
- cross_entropy,
- empty_input_loss_func_wrapper,
- shapes_to_tensor,
- move_device_like,
-)
-from .blocks import CNNBlockBase, DepthwiseSeparableConv2d
-from .aspp import ASPP
-from .losses import ciou_loss, diou_loss
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/caffeinum/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py b/spaces/caffeinum/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py
deleted file mode 100644
index c0207ff285ffac4c8146c79d154f12416dbef48c..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/model/raft/alt_cuda_corr/setup.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from setuptools import setup
-from torch.utils.cpp_extension import BuildExtension, CUDAExtension
-
-
-setup(
- name='correlation',
- ext_modules=[
- CUDAExtension('alt_cuda_corr',
- sources=['correlation.cpp', 'correlation_kernel.cu'],
- extra_compile_args={'cxx': [], 'nvcc': ['-O3']}),
- ],
- cmdclass={
- 'build_ext': BuildExtension
- })
-
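-# --- Editor's hedged note (not part of the original setup script) ---
-# The extension is typically built in place, e.g. `python setup.py build_ext --inplace`
-# or `pip install .`, after which it can be imported as `alt_cuda_corr`
-# (the name passed to CUDAExtension above). Building requires a CUDA toolkit
-# matching the installed PyTorch; the authors' exact invocation is not shown here.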
diff --git a/spaces/camillevanhoffelen/langchain-HuggingGPT/hugginggpt/huggingface_api.py b/spaces/camillevanhoffelen/langchain-HuggingGPT/hugginggpt/huggingface_api.py
deleted file mode 100644
index 7596abbf9220866cafebf65b582ffec89283ea05..0000000000000000000000000000000000000000
--- a/spaces/camillevanhoffelen/langchain-HuggingGPT/hugginggpt/huggingface_api.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import os
-
-from dotenv import load_dotenv
-
-load_dotenv()
-
-HUGGINGFACE_INFERENCE_API_URL = "https://api-inference.huggingface.co/models/"
-HUGGINGFACE_INFERENCE_API_STATUS_URL = "https://api-inference.huggingface.co/status/"
-
-
-def get_hf_headers():
- HUGGINGFACEHUB_API_TOKEN = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
- return {"Authorization": f"Bearer {HUGGINGFACEHUB_API_TOKEN}"}
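-
-
-# --- Editor's hedged usage sketch (not part of the original module) ---
-# Querying a model through the Inference API with the helpers above; the model
-# id and payload are illustrative assumptions.
-def query_model(model_id: str, payload: dict) -> dict:
-    import requests
-
-    response = requests.post(
-        HUGGINGFACE_INFERENCE_API_URL + model_id,
-        headers=get_hf_headers(),
-        json=payload,
-    )
-    response.raise_for_status()
-    return response.json()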
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/MODEL_ZOO.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/MODEL_ZOO.md
deleted file mode 100644
index 69db2728563c680e89a0d5d3e6ba272b8d78bdbd..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/MODEL_ZOO.md
+++ /dev/null
@@ -1,1052 +0,0 @@
-# Detectron2 Model Zoo and Baselines
-
-## Introduction
-
-This file documents a large collection of baselines trained
-with detectron2 in Sep-Oct, 2019.
-All numbers were obtained on [Big Basin](https://engineering.fb.com/data-center-engineering/introducing-big-basin-our-next-generation-ai-hardware/)
-servers with 8 NVIDIA V100 GPUs & NVLink. The speed numbers are periodically updated with latest PyTorch/CUDA/cuDNN versions.
-You can access these models from code using [detectron2.model_zoo](https://detectron2.readthedocs.io/modules/model_zoo.html) APIs.
-
-In addition to these official baseline models, you can find more models in [projects/](projects/).
-
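-As a minimal sketch (the config path below is one of the Faster R-CNN baselines listed further down), a model from this page can be loaded directly through the `model_zoo` API:
-
-```python
-from detectron2 import model_zoo
-
-# build the R50-FPN 3x Faster R-CNN baseline and load its pretrained COCO weights
-model = model_zoo.get("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml", trained=True)
-weights_url = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
-```
-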
-#### How to Read the Tables
-* The "Name" column contains a link to the config file. Models can be reproduced using `tools/train_net.py` with the corresponding yaml config file,
- or `tools/lazyconfig_train_net.py` for python config files.
-* Training speed is averaged across the entire training.
- We keep updating the speed with latest version of detectron2/pytorch/etc.,
- so they might be different from the `metrics` file.
- Training speed for multi-machine jobs is not provided.
-* Inference speed is measured by `tools/train_net.py --eval-only`, or [inference_on_dataset()](https://detectron2.readthedocs.io/modules/evaluation.html#detectron2.evaluation.inference_on_dataset),
- with batch size 1 in detectron2 directly.
- Measuring it with custom code may introduce other overhead.
- Actual deployment in production should in general be faster than the given inference
- speed due to more optimizations.
-* The *model id* column is provided for ease of reference.
- To check downloaded file integrity, any model on this page contains its md5 prefix in its file name.
-* Training curves and other statistics can be found in `metrics` for each model.
-
-#### Common Settings for COCO Models
-* All COCO models were trained on `train2017` and evaluated on `val2017`.
-* The default settings are __not directly comparable__ with Detectron's standard settings.
- For example, our default training data augmentation uses scale jittering in addition to horizontal flipping.
-
- To make fair comparisons with Detectron's settings, see
- [Detectron1-Comparisons](configs/Detectron1-Comparisons/) for accuracy comparison,
- and [benchmarks](https://detectron2.readthedocs.io/notes/benchmarks.html)
- for speed comparison.
-* For Faster/Mask R-CNN, we provide baselines based on __3 different backbone combinations__:
- * __FPN__: Use a ResNet+FPN backbone with standard conv and FC heads for mask and box prediction,
- respectively. It obtains the best
- speed/accuracy tradeoff, but the other two are still useful for research.
- * __C4__: Use a ResNet conv4 backbone with conv5 head. The original baseline in the Faster R-CNN paper.
- * __DC5__ (Dilated-C5): Use a ResNet conv5 backbone with dilations in conv5, and standard conv and FC heads
- for mask and box prediction, respectively.
- This is used by the Deformable ConvNet paper.
-* Most models are trained with the 3x schedule (~37 COCO epochs).
- Although 1x models are heavily under-trained, we provide some ResNet-50 models with the 1x (~12 COCO epochs)
- training schedule for comparison when doing quick research iteration.
-
-#### ImageNet Pretrained Models
-
-It's common to initialize from backbone models pre-trained on ImageNet classification tasks. The following backbone models are available:
-
-* [R-50.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl): converted copy of [MSRA's original ResNet-50](https://github.com/KaimingHe/deep-residual-networks) model.
-* [R-101.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-101.pkl): converted copy of [MSRA's original ResNet-101](https://github.com/KaimingHe/deep-residual-networks) model.
-* [X-101-32x8d.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/FAIR/X-101-32x8d.pkl): ResNeXt-101-32x8d model trained with Caffe2 at FB.
-* [R-50.pkl (torchvision)](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/torchvision/R-50.pkl): converted copy of [torchvision's ResNet-50](https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet50) model.
- More details can be found in [the conversion script](tools/convert-torchvision-to-d2.py).
-
-Note that the above models have __different__ format from those provided in Detectron: we do not fuse BatchNorm into an affine layer.
-Pretrained models in Detectron's format can still be used. For example:
-* [X-152-32x8d-IN5k.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl):
- ResNeXt-152-32x8d model trained on ImageNet-5k with Caffe2 at FB (see ResNeXt paper for details on ImageNet-5k).
-* [R-50-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47261647/R-50-GN.pkl):
- ResNet-50 with Group Normalization.
-* [R-101-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47592356/R-101-GN.pkl):
- ResNet-101 with Group Normalization.
-
-These models require slightly different settings regarding normalization and architecture. See the model zoo configs for reference.
-
-#### License
-
-All models available for download through this document are licensed under the
-[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
-
-### COCO Object Detection Baselines
-
-#### Faster R-CNN:
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | model id | download |
-|---|---|---|---|---|---|---|---|
-| R50-C4 | 1x | 0.551 | 0.102 | 4.8 | 35.7 | 137257644 | model \| metrics |
-| R50-DC5 | 1x | 0.380 | 0.068 | 5.0 | 37.3 | 137847829 | model \| metrics |
-| R50-FPN | 1x | 0.210 | 0.038 | 3.0 | 37.9 | 137257794 | model \| metrics |
-| R50-C4 | 3x | 0.543 | 0.104 | 4.8 | 38.4 | 137849393 | model \| metrics |
-| R50-DC5 | 3x | 0.378 | 0.070 | 5.0 | 39.0 | 137849425 | model \| metrics |
-| R50-FPN | 3x | 0.209 | 0.038 | 3.0 | 40.2 | 137849458 | model \| metrics |
-| R101-C4 | 3x | 0.619 | 0.139 | 5.9 | 41.1 | 138204752 | model \| metrics |
-| R101-DC5 | 3x | 0.452 | 0.086 | 6.1 | 40.6 | 138204841 | model \| metrics |
-| R101-FPN | 3x | 0.286 | 0.051 | 4.1 | 42.0 | 137851257 | model \| metrics |
-| X101-FPN | 3x | 0.638 | 0.098 | 6.7 | 43.0 | 139173657 | model \| metrics |
-
-
-
-#### RetinaNet:
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | model id | download |
-|---|---|---|---|---|---|---|---|
-| R50 | 1x | 0.205 | 0.041 | 4.1 | 37.4 | 190397773 | model \| metrics |
-| R50 | 3x | 0.205 | 0.041 | 4.1 | 38.7 | 190397829 | model \| metrics |
-| R101 | 3x | 0.291 | 0.054 | 5.2 | 40.4 | 190397697 | model \| metrics |
-
-
-
-
-#### RPN & Fast R-CNN:
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | prop. AR | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| RPN R50-C4 | 1x | 0.130 | 0.034 | 1.5 |  | 51.6 | 137258005 | model \| metrics |
-| RPN R50-FPN | 1x | 0.186 | 0.032 | 2.7 |  | 58.0 | 137258492 | model \| metrics |
-| Fast R-CNN R50-FPN | 1x | 0.140 | 0.029 | 2.6 | 37.8 |  | 137635226 | model \| metrics |
-
-
-
-### COCO Instance Segmentation Baselines with Mask R-CNN
-
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-C4 | 1x | 0.584 | 0.110 | 5.2 | 36.8 | 32.2 | 137259246 | model \| metrics |
-| R50-DC5 | 1x | 0.471 | 0.076 | 6.5 | 38.3 | 34.2 | 137260150 | model \| metrics |
-| R50-FPN | 1x | 0.261 | 0.043 | 3.4 | 38.6 | 35.2 | 137260431 | model \| metrics |
-| R50-C4 | 3x | 0.575 | 0.111 | 5.2 | 39.8 | 34.4 | 137849525 | model \| metrics |
-| R50-DC5 | 3x | 0.470 | 0.076 | 6.5 | 40.0 | 35.9 | 137849551 | model \| metrics |
-| R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| R101-C4 | 3x | 0.652 | 0.145 | 6.3 | 42.6 | 36.7 | 138363239 | model \| metrics |
-| R101-DC5 | 3x | 0.545 | 0.092 | 7.6 | 41.9 | 37.3 | 138363294 | model \| metrics |
-| R101-FPN | 3x | 0.340 | 0.056 | 4.6 | 42.9 | 38.6 | 138205316 | model \| metrics |
-| X101-FPN | 3x | 0.690 | 0.103 | 7.2 | 44.3 | 39.5 | 139653917 | model \| metrics |
-
-
-
-
-
-#### New baselines using Large-Scale Jitter and Longer Training Schedule
-
-The following baselines of COCO Instance Segmentation with Mask R-CNN are generated
-using a longer training schedule and large-scale jitter as described in Google's
-[Simple Copy-Paste Data Augmentation](https://arxiv.org/pdf/2012.07177.pdf) paper. These
-models are trained from scratch using random initialization. These baselines exceed the
-previous Mask R-CNN baselines.
-
-In the following table, one epoch consists of training on 118000 COCO images.
-
-
-
-
-| Name | epochs | train time (s/im) | inference time (s/im) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|
-| R50-FPN | 100 | 0.376 | 0.069 | 44.6 | 40.3 | 42047764 | model \| metrics |
-| R50-FPN | 200 | 0.376 | 0.069 | 46.3 | 41.7 | 42047638 | model \| metrics |
-| R50-FPN | 400 | 0.376 | 0.069 | 47.4 | 42.5 | 42019571 | model \| metrics |
-| R101-FPN | 100 | 0.518 | 0.073 | 46.4 | 41.6 | 42025812 | model \| metrics |
-| R101-FPN | 200 | 0.518 | 0.073 | 48.0 | 43.1 | 42131867 | model \| metrics |
-| R101-FPN | 400 | 0.518 | 0.073 | 48.9 | 43.7 | 42073830 | model \| metrics |
-| regnetx_4gf_dds_FPN | 100 | 0.474 | 0.071 | 46.0 | 41.3 | 42047771 | model \| metrics |
-| regnetx_4gf_dds_FPN | 200 | 0.474 | 0.071 | 48.1 | 43.1 | 42132721 | model \| metrics |
-| regnetx_4gf_dds_FPN | 400 | 0.474 | 0.071 | 48.6 | 43.5 | 42025447 | model \| metrics |
-| regnety_4gf_dds_FPN | 100 | 0.487 | 0.073 | 46.1 | 41.6 | 42047784 | model \| metrics |
-| regnety_4gf_dds_FPN | 200 | 0.487 | 0.072 | 47.8 | 43.0 | 42047642 | model \| metrics |
-| regnety_4gf_dds_FPN | 400 | 0.487 | 0.072 | 48.2 | 43.3 | 42045954 | model \| metrics |
-
-
-
-### COCO Person Keypoint Detection Baselines with Keypoint R-CNN
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | kp. AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.315 | 0.072 | 5.0 | 53.6 | 64.0 | 137261548 | model \| metrics |
-| R50-FPN | 3x | 0.316 | 0.066 | 5.0 | 55.4 | 65.5 | 137849621 | model \| metrics |
-| R101-FPN | 3x | 0.390 | 0.076 | 6.1 | 56.4 | 66.1 | 138363331 | model \| metrics |
-| X101-FPN | 3x | 0.738 | 0.121 | 8.7 | 57.3 | 66.0 | 139686956 | model \| metrics |
-
-
-
-### COCO Panoptic Segmentation Baselines with Panoptic FPN
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | PQ | model id | download |
-|---|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.304 | 0.053 | 4.8 | 37.6 | 34.7 | 39.4 | 139514544 | model \| metrics |
-| R50-FPN | 3x | 0.302 | 0.053 | 4.8 | 40.0 | 36.5 | 41.5 | 139514569 | model \| metrics |
-| R101-FPN | 3x | 0.392 | 0.066 | 6.0 | 42.4 | 38.5 | 43.0 | 139514519 | model \| metrics |
-
-
-
-
-### LVIS Instance Segmentation Baselines with Mask R-CNN
-
-Mask R-CNN baselines on the [LVIS dataset](https://lvisdataset.org), v0.5.
-These baselines are described in Table 3(c) of the [LVIS paper](https://arxiv.org/abs/1908.03195).
-
-NOTE: the 1x schedule here has the same number of __iterations__ as the COCO 1x baselines.
-They are roughly 24 epochs of LVISv0.5 data.
-The final results of these configs have large variance across different runs.
-
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.292 | 0.107 | 7.1 | 23.6 | 24.4 | 144219072 | model \| metrics |
-| R101-FPN | 1x | 0.371 | 0.114 | 7.8 | 25.6 | 25.9 | 144219035 | model \| metrics |
-| X101-FPN | 1x | 0.712 | 0.151 | 10.2 | 26.7 | 27.1 | 144219108 | model \| metrics |
-
-
-
-
-
-### Cityscapes & Pascal VOC Baselines
-
-Simple baselines for
-* Mask R-CNN on Cityscapes instance segmentation (initialized from COCO pre-training, then trained on Cityscapes fine annotations only)
-* Faster R-CNN on PASCAL VOC object detection (trained on VOC 2007 train+val + VOC 2012 train+val, tested on VOC 2007 using 11-point interpolated AP)
-
-
-
-
-
-
-
-| Name | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | box AP50 | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN, Cityscapes | 0.240 | 0.078 | 4.4 |  |  | 36.5 | 142423278 | model \| metrics |
-| R50-C4, VOC | 0.537 | 0.081 | 4.8 | 51.9 | 80.3 |  | 142202221 | model \| metrics |
-
-
-
-
-
-### Other Settings
-
-Ablations for Deformable Conv and Cascade R-CNN:
-
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| Baseline R50-FPN | 1x | 0.261 | 0.043 | 3.4 | 38.6 | 35.2 | 137260431 | model \| metrics |
-| Deformable Conv | 1x | 0.342 | 0.048 | 3.5 | 41.5 | 37.5 | 138602867 | model \| metrics |
-| Cascade R-CNN | 1x | 0.317 | 0.052 | 4.0 | 42.1 | 36.4 | 138602847 | model \| metrics |
-| Baseline R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| Deformable Conv | 3x | 0.349 | 0.047 | 3.5 | 42.7 | 38.5 | 144998336 | model \| metrics |
-| Cascade R-CNN | 3x | 0.328 | 0.053 | 4.0 | 44.3 | 38.5 | 144998488 | model \| metrics |
-
-
-
-
-Ablations for normalization methods, and a few models trained from scratch following [Rethinking ImageNet Pre-training](https://arxiv.org/abs/1811.08883).
-(Note: The baseline uses `2fc` head while the others use [`4conv1fc` head](https://arxiv.org/abs/1803.08494))
-
-
-
-
-
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| Baseline R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| GN | 3x | 0.309 | 0.060 | 5.6 | 42.6 | 38.6 | 138602888 | model \| metrics |
-| SyncBN | 3x | 0.345 | 0.053 | 5.5 | 41.9 | 37.8 | 169527823 | model \| metrics |
-| GN (from scratch) | 3x | 0.338 | 0.061 | 7.2 | 39.9 | 36.6 | 138602908 | model \| metrics |
-| GN (from scratch) | 9x | N/A | 0.061 | 7.2 | 43.7 | 39.6 | 183808979 | model \| metrics |
-| SyncBN (from scratch) | 9x | N/A | 0.055 | 7.2 | 43.6 | 39.3 | 184226666 | model \| metrics |
-
-
-
-
-A few very large models trained for a long time, for demo purposes. They are trained using multiple machines:
-
-
-
-
-
-
-
-| Name | inference time (s/im) | train mem (GB) | box AP | mask AP | PQ | model id | download |
-|---|---|---|---|---|---|---|---|
-| Panoptic FPN R101 | 0.098 | 11.4 | 47.4 | 41.3 | 46.1 | 139797668 | model \| metrics |
-| Mask R-CNN X152 | 0.234 | 15.1 | 50.2 | 44.0 |  | 18131413 | model \| metrics |
-| above + test-time aug. |  |  | 51.9 | 45.9 |  |  |  |
-
-
-
-
-
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/Makefile b/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/Makefile
deleted file mode 100644
index 718eddce170fe13b67216baf9d4d25b20e860506..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/Makefile
+++ /dev/null
@@ -1,19 +0,0 @@
-# Minimal makefile for Sphinx documentation
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-SOURCEDIR = .
-BUILDDIR = _build
-
-# Put it first so that "make" without argument is like "make help".
-help:
- @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/spaces/ceckenrode/AI-Dashboard-03142023/style.css b/spaces/ceckenrode/AI-Dashboard-03142023/style.css
deleted file mode 100644
index de5f3dee78e9172b86a93c41ab1a08035c7118a5..0000000000000000000000000000000000000000
--- a/spaces/ceckenrode/AI-Dashboard-03142023/style.css
+++ /dev/null
@@ -1,91 +0,0 @@
-/* Apply a background color to the body */
-body {
- background-color: #f2f2f2;
- margin: 0;
- padding: 0;
-}
-
-/* Center the main content */
-main {
- max-width: 800px;
- margin: 0 auto;
- padding: 20px;
-}
-
-/* Add a border and some padding to the iframes */
-iframe {
- border: 1px solid #ccc;
- padding: 10px;
- margin-bottom: 20px;
- overflow: hidden;
-}
-
-/* Set the height of iframes to fit on one page */
-iframe[src="https://ceckenrode-biomed-nlp-ai-clinical-terminolo-1e44ba4.hf.space"],
-iframe[src="https://ceckenrode-chatgpt-streamlit-fhir.hf.space"],
-iframe[src="https://ceckenrode-cognitive-ai-episodic-semantic-m-732f0f6.hf.space"],
-iframe[src="https://ceckenrode-dockerimagerecognitiontotext.hf.space"] {
- height: calc(100vh - 20px);
- margin-bottom: 20px;
-}
-
-iframe[src="https://awacke1-image-to-multilingual-ocr.hf.space"],
-iframe[src="https://awacke1-ai-quantum.hf.space"] {
- height: calc(100vh - 40px);
- margin-bottom: 20px;
-}
-
-/* Style the mermaid diagrams */
-.mermaid {
- background-color: white;
- padding: 20px;
- margin-bottom: 20px;
- border-radius: 5px;
- box-shadow: 0 2px 4px rgba(0,0,0,0.2);
-}
-
-/* Style the card */
-.card {
- background-color: white;
- padding: 20px;
- margin-bottom: 20px;
- border-radius: 5px;
- box-shadow: 0 2px 4px rgba(0,0,0,0.2);
-}
-
-/* Style the heading */
-h1 {
- font-size: 32px;
- font-weight: bold;
- margin-top: 0;
- margin-bottom: 20px;
-}
-
-/* Style the paragraphs */
-p {
- font-size: 16px;
- line-height: 1.5;
- margin-bottom: 20px;
-}
-
-/* Style the link */
-a {
- color: #0077cc;
- text-decoration: none;
-}
-
-a:hover {
- text-decoration: underline;
-}
-
-/* Style the code blocks */
-pre {
- background-color: #f5f5f5;
- padding: 10px;
- overflow: auto;
-}
-
-code {
- font-family: "Courier New", Courier, monospace;
- font-size: 14px;
-}
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/demo.py b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/demo.py
deleted file mode 100644
index 6542853a1a0eb1f8882892fcf55fff8838bd1468..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/demo.py
+++ /dev/null
@@ -1,237 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import argparse
-import os
-import time
-
-import cv2
-import megengine as mge
-import megengine.functional as F
-from loguru import logger
-
-from yolox.data.datasets import COCO_CLASSES
-from yolox.utils import vis
-from yolox.data.data_augment import preproc as preprocess
-
-from build import build_and_load
-
-IMAGE_EXT = [".jpg", ".jpeg", ".webp", ".bmp", ".png"]
-
-
-def make_parser():
- parser = argparse.ArgumentParser("YOLOX Demo!")
- parser.add_argument(
- "demo", default="image", help="demo type, eg. image, video and webcam"
- )
- parser.add_argument("-n", "--name", type=str, default="yolox-s", help="model name")
- parser.add_argument("--path", default="./test.png", help="path to images or video")
- parser.add_argument("--camid", type=int, default=0, help="webcam demo camera id")
- parser.add_argument(
- "--save_result",
- action="store_true",
- help="whether to save the inference result of image/video",
- )
-
- parser.add_argument("-c", "--ckpt", default=None, type=str, help="ckpt for eval")
- parser.add_argument("--conf", default=None, type=float, help="test conf")
- parser.add_argument("--nms", default=None, type=float, help="test nms threshold")
- parser.add_argument("--tsize", default=None, type=int, help="test img size")
- return parser
-
-
-def get_image_list(path):
- image_names = []
- for maindir, subdir, file_name_list in os.walk(path):
- for filename in file_name_list:
- apath = os.path.join(maindir, filename)
- ext = os.path.splitext(apath)[1]
- if ext in IMAGE_EXT:
- image_names.append(apath)
- return image_names
-
-
-def postprocess(prediction, num_classes, conf_thre=0.7, nms_thre=0.45):
- box_corner = F.zeros_like(prediction)
- box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2
- box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2
- box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2
- box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2
- prediction[:, :, :4] = box_corner[:, :, :4]
-
- output = [None for _ in range(len(prediction))]
- for i, image_pred in enumerate(prediction):
-
- # If none are remaining => process next image
- if not image_pred.shape[0]:
- continue
- # Get score and class with highest confidence
- class_conf = F.max(image_pred[:, 5: 5 + num_classes], 1, keepdims=True)
- class_pred = F.argmax(image_pred[:, 5: 5 + num_classes], 1, keepdims=True)
-
- class_conf_squeeze = F.squeeze(class_conf)
- conf_mask = image_pred[:, 4] * class_conf_squeeze >= conf_thre
- detections = F.concat((image_pred[:, :5], class_conf, class_pred), 1)
- detections = detections[conf_mask]
- if not detections.shape[0]:
- continue
-
- nms_out_index = F.vision.nms(
- detections[:, :4], detections[:, 4] * detections[:, 5], nms_thre,
- )
- detections = detections[nms_out_index]
- if output[i] is None:
- output[i] = detections
- else:
- output[i] = F.concat((output[i], detections))
-
- return output
-
-
-class Predictor(object):
- def __init__(
- self,
- model,
- confthre=0.01,
- nmsthre=0.65,
- test_size=(640, 640),
- cls_names=COCO_CLASSES,
- trt_file=None,
- decoder=None,
- ):
- self.model = model
- self.cls_names = cls_names
- self.decoder = decoder
- self.num_classes = 80
- self.confthre = confthre
- self.nmsthre = nmsthre
- self.test_size = test_size
-
- def inference(self, img):
- img_info = {"id": 0}
- if isinstance(img, str):
- img_info["file_name"] = os.path.basename(img)
- img = cv2.imread(img)
- if img is None:
- raise ValueError("test image path is invalid!")
- else:
- img_info["file_name"] = None
-
- height, width = img.shape[:2]
- img_info["height"] = height
- img_info["width"] = width
- img_info["raw_img"] = img
-
- img, ratio = preprocess(img, self.test_size)
- img_info["ratio"] = ratio
- img = F.expand_dims(mge.tensor(img), 0)
-
- t0 = time.time()
- outputs = self.model(img)
- outputs = postprocess(outputs, self.num_classes, self.confthre, self.nmsthre)
- logger.info("Infer time: {:.4f}s".format(time.time() - t0))
- return outputs, img_info
-
- def visual(self, output, img_info, cls_conf=0.35):
- ratio = img_info["ratio"]
- img = img_info["raw_img"]
- if output is None:
- return img
- output = output.numpy()
-
- # undo the preprocessing resize: map boxes back to the original image scale
- bboxes = output[:, 0:4] / ratio
-
- cls = output[:, 6]
- scores = output[:, 4] * output[:, 5]
-
- vis_res = vis(img, bboxes, scores, cls, cls_conf, self.cls_names)
- return vis_res
-
-
-def image_demo(predictor, vis_folder, path, current_time, save_result):
- if os.path.isdir(path):
- files = get_image_list(path)
- else:
- files = [path]
- files.sort()
- for image_name in files:
- outputs, img_info = predictor.inference(image_name)
- result_image = predictor.visual(outputs[0], img_info)
- if save_result:
- save_folder = os.path.join(
- vis_folder, time.strftime("%Y_%m_%d_%H_%M_%S", current_time)
- )
- os.makedirs(save_folder, exist_ok=True)
- save_file_name = os.path.join(save_folder, os.path.basename(image_name))
- logger.info("Saving detection result in {}".format(save_file_name))
- cv2.imwrite(save_file_name, result_image)
- ch = cv2.waitKey(0)
- if ch == 27 or ch == ord("q") or ch == ord("Q"):
- break
-
-
-def imageflow_demo(predictor, vis_folder, current_time, args):
- cap = cv2.VideoCapture(args.path if args.demo == "video" else args.camid)
- width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) # float
- height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) # float
- fps = cap.get(cv2.CAP_PROP_FPS)
- save_folder = os.path.join(
- vis_folder, time.strftime("%Y_%m_%d_%H_%M_%S", current_time)
- )
- os.makedirs(save_folder, exist_ok=True)
- if args.demo == "video":
- save_path = os.path.join(save_folder, os.path.basename(args.path))
- else:
- save_path = os.path.join(save_folder, "camera.mp4")
- logger.info(f"video save_path is {save_path}")
- vid_writer = cv2.VideoWriter(
- save_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (int(width), int(height))
- )
- while True:
- ret_val, frame = cap.read()
- if ret_val:
- outputs, img_info = predictor.inference(frame)
- result_frame = predictor.visual(outputs[0], img_info)
- if args.save_result:
- vid_writer.write(result_frame)
- ch = cv2.waitKey(1)
- if ch == 27 or ch == ord("q") or ch == ord("Q"):
- break
- else:
- break
-
-
-def main(args):
- file_name = os.path.join("./yolox_outputs", args.name)
- os.makedirs(file_name, exist_ok=True)
-
- if args.save_result:
- vis_folder = os.path.join(file_name, "vis_res")
- os.makedirs(vis_folder, exist_ok=True)
-
- confthre = 0.01
- nmsthre = 0.65
- test_size = (640, 640)
- if args.conf is not None:
- confthre = args.conf
- if args.nms is not None:
- nmsthre = args.nms
- if args.tsize is not None:
- test_size = (args.tsize, args.tsize)
-
- model = build_and_load(args.ckpt, name=args.name)
- model.eval()
-
- predictor = Predictor(model, confthre, nmsthre, test_size, COCO_CLASSES, None, None)
- current_time = time.localtime()
- if args.demo == "image":
- image_demo(predictor, vis_folder, args.path, current_time, args.save_result)
- elif args.demo == "video" or args.demo == "webcam":
- imageflow_demo(predictor, vis_folder, current_time, args)
-
-
-if __name__ == "__main__":
- args = make_parser().parse_args()
- main(args)
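-
-# --- Editor's hedged usage sketch (not part of the original demo) ---
-# The parser above can also be driven programmatically; the checkpoint
-# filename is an assumption.
-#
-#   args = make_parser().parse_args(
-#       ["image", "-n", "yolox-s", "-c", "yolox_s.pkl",
-#        "--path", "./test.png", "--conf", "0.25", "--save_result"]
-#   )
-#   main(args)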
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/layoutlmv3/run_funsd_cord.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/layoutlmv3/run_funsd_cord.py
deleted file mode 100644
index e826fd997424ed129c18ddfb69be5c91a9e78fb4..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/layoutlmv3/run_funsd_cord.py
+++ /dev/null
@@ -1,533 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2022 The HuggingFace Team All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning LayoutLMv3 for token classification on FUNSD or CORD.
-"""
-# You can also adapt this script on your own token classification task and datasets. Pointers for this are left as
-# comments.
-
-import logging
-import os
-import sys
-from dataclasses import dataclass, field
-from typing import Optional
-
-import datasets
-import numpy as np
-from datasets import ClassLabel, load_dataset, load_metric
-
-import transformers
-from transformers import (
- AutoConfig,
- AutoModelForTokenClassification,
- AutoProcessor,
- HfArgumentParser,
- Trainer,
- TrainingArguments,
- set_seed,
-)
-from transformers.data.data_collator import default_data_collator
-from transformers.trainer_utils import get_last_checkpoint
-from transformers.utils import check_min_version
-from transformers.utils.versions import require_version
-
-
-# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.19.0.dev0")
-
-require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/token-classification/requirements.txt")
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- default="microsoft/layoutlmv3-base",
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"},
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- processor_name: Optional[str] = field(
- default=None, metadata={"help": "Name or path to the processor files if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
- model_revision: str = field(
- default="main",
- metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
- )
- use_auth_token: bool = field(
- default=False,
- metadata={
- "help": (
- "Will use the token generated when running `huggingface-cli login` (necessary to use this script "
- "with private models)."
- )
- },
- )
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
-
- task_name: Optional[str] = field(default="ner", metadata={"help": "The name of the task (ner, pos...)."})
- dataset_name: Optional[str] = field(
- default="nielsr/funsd-layoutlmv3",
- metadata={"help": "The name of the dataset to use (via the datasets library)."},
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- train_file: Optional[str] = field(
- default=None, metadata={"help": "The input training data file (a csv or JSON file)."}
- )
- validation_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input evaluation data file to evaluate on (a csv or JSON file)."},
- )
- test_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input test data file to predict on (a csv or JSON file)."},
- )
- text_column_name: Optional[str] = field(
- default=None, metadata={"help": "The column name of text to input in the file (a csv or JSON file)."}
- )
- label_column_name: Optional[str] = field(
- default=None, metadata={"help": "The column name of label to input in the file (a csv or JSON file)."}
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
- )
- preprocessing_num_workers: Optional[int] = field(
- default=None,
- metadata={"help": "The number of processes to use for the preprocessing."},
- )
- max_seq_length: int = field(
- default=512,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization. If set, sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- max_train_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- )
- },
- )
- max_eval_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
- "value if set."
- )
- },
- )
- max_predict_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of prediction examples to this "
- "value if set."
- )
- },
- )
- label_all_tokens: bool = field(
- default=False,
- metadata={
- "help": (
- "Whether to put the label for one word on all tokens of generated by that word or just on the "
- "one (in which case the other tokens will have a padding index)."
- )
- },
- )
- return_entity_level_metrics: bool = field(
- default=False,
- metadata={"help": "Whether to return all the entity levels during evaluation or just the overall ones."},
- )
-
- def __post_init__(self):
- if self.dataset_name is None and self.train_file is None and self.validation_file is None:
- raise ValueError("Need either a dataset name or a training/validation file.")
- else:
- if self.train_file is not None:
- extension = self.train_file.split(".")[-1]
- assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
- if self.validation_file is not None:
- extension = self.validation_file.split(".")[-1]
- assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
- self.task_name = self.task_name.lower()
-
-
-def main():
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- # Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
-
- log_level = training_args.get_process_log_level()
- logger.setLevel(log_level)
- datasets.utils.logging.set_verbosity(log_level)
- transformers.utils.logging.set_verbosity(log_level)
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
-
- # Log on each process the small summary:
- logger.warning(
- f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
- + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
- )
- logger.info(f"Training/evaluation parameters {training_args}")
-
- # Detecting last checkpoint.
- last_checkpoint = None
- if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
- last_checkpoint = get_last_checkpoint(training_args.output_dir)
- if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty. "
- "Use --overwrite_output_dir to overcome."
- )
- elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
- logger.info(
- f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
- "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
- )
-
- # Set seed before initializing model.
- set_seed(training_args.seed)
-
- # Get the datasets
-    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if data_args.dataset_name == "funsd":
- # Downloading and loading a dataset from the hub.
- dataset = load_dataset(
- "nielsr/funsd-layoutlmv3",
- data_args.dataset_config_name,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- elif data_args.dataset_name == "cord":
- # Downloading and loading a dataset from the hub.
- dataset = load_dataset(
- "nielsr/cord-layoutlmv3",
- data_args.dataset_config_name,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- raise ValueError("This script only supports either FUNSD or CORD out-of-the-box.")
-
- if training_args.do_train:
- column_names = dataset["train"].column_names
- features = dataset["train"].features
- else:
- column_names = dataset["test"].column_names
- features = dataset["test"].features
-
- image_column_name = "image"
- text_column_name = "words" if "words" in column_names else "tokens"
- boxes_column_name = "bboxes"
- label_column_name = (
- f"{data_args.task_name}_tags" if f"{data_args.task_name}_tags" in column_names else column_names[1]
- )
-
- remove_columns = column_names
-
- # In the event the labels are not a `Sequence[ClassLabel]`, we will need to go through the dataset to get the
- # unique labels.
- def get_label_list(labels):
- unique_labels = set()
- for label in labels:
- unique_labels = unique_labels | set(label)
- label_list = list(unique_labels)
- label_list.sort()
- return label_list
-
- # If the labels are of type ClassLabel, they are already integers and we have the map stored somewhere.
- # Otherwise, we have to get the list of labels manually.
- if isinstance(features[label_column_name].feature, ClassLabel):
- label_list = features[label_column_name].feature.names
- # No need to convert the labels since they are already ints.
- id2label = dict(enumerate(label_list))
- label2id = {v: k for k, v in enumerate(label_list)}
- else:
-        label_list = get_label_list(dataset["train"][label_column_name])
- id2label = dict(enumerate(label_list))
- label2id = {v: k for k, v in enumerate(label_list)}
- num_labels = len(label_list)
-
- # Load pretrained model and processor
- #
- # Distributed training:
- # The .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- config = AutoConfig.from_pretrained(
- model_args.config_name if model_args.config_name else model_args.model_name_or_path,
- num_labels=num_labels,
- finetuning_task=data_args.task_name,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
-
- processor = AutoProcessor.from_pretrained(
- model_args.processor_name if model_args.processor_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- use_fast=True,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- add_prefix_space=True,
- apply_ocr=False,
- )
-
- model = AutoModelForTokenClassification.from_pretrained(
- model_args.model_name_or_path,
- from_tf=bool(".ckpt" in model_args.model_name_or_path),
- config=config,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
-
- # Set the correspondences label/ID inside the model config
- model.config.label2id = label2id
- model.config.id2label = id2label
-
- # Preprocessing the dataset
- # The processor does everything for us (prepare the image using LayoutLMv3ImageProcessor
- # and prepare the words, boxes and word-level labels using LayoutLMv3TokenizerFast)
- def prepare_examples(examples):
- images = examples[image_column_name]
- words = examples[text_column_name]
- boxes = examples[boxes_column_name]
- word_labels = examples[label_column_name]
-
- encoding = processor(
- images,
- words,
- boxes=boxes,
- word_labels=word_labels,
- truncation=True,
- padding="max_length",
- max_length=data_args.max_seq_length,
- )
-
- return encoding
-
- if training_args.do_train:
- if "train" not in dataset:
- raise ValueError("--do_train requires a train dataset")
- train_dataset = dataset["train"]
- if data_args.max_train_samples is not None:
- train_dataset = train_dataset.select(range(data_args.max_train_samples))
- with training_args.main_process_first(desc="train dataset map pre-processing"):
- train_dataset = train_dataset.map(
- prepare_examples,
- batched=True,
- remove_columns=remove_columns,
- num_proc=data_args.preprocessing_num_workers,
- load_from_cache_file=not data_args.overwrite_cache,
- )
-
- if training_args.do_eval:
- validation_name = "test"
- if validation_name not in dataset:
- raise ValueError("--do_eval requires a validation dataset")
- eval_dataset = dataset[validation_name]
- if data_args.max_eval_samples is not None:
- eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
- with training_args.main_process_first(desc="validation dataset map pre-processing"):
- eval_dataset = eval_dataset.map(
- prepare_examples,
- batched=True,
- remove_columns=remove_columns,
- num_proc=data_args.preprocessing_num_workers,
- load_from_cache_file=not data_args.overwrite_cache,
- )
-
- if training_args.do_predict:
- if "test" not in datasets:
- raise ValueError("--do_predict requires a test dataset")
- predict_dataset = datasets["test"]
- if data_args.max_predict_samples is not None:
- max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)
- predict_dataset = predict_dataset.select(range(max_predict_samples))
- with training_args.main_process_first(desc="prediction dataset map pre-processing"):
- predict_dataset = predict_dataset.map(
- prepare_examples,
- batched=True,
- remove_columns=remove_columns,
- num_proc=data_args.preprocessing_num_workers,
- load_from_cache_file=not data_args.overwrite_cache,
- )
-
- # Metrics
- metric = load_metric("seqeval")
-
- def compute_metrics(p):
- predictions, labels = p
- predictions = np.argmax(predictions, axis=2)
-
- # Remove ignored index (special tokens)
- true_predictions = [
- [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
- for prediction, label in zip(predictions, labels)
- ]
- true_labels = [
- [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
- for prediction, label in zip(predictions, labels)
- ]
-
- results = metric.compute(predictions=true_predictions, references=true_labels)
- if data_args.return_entity_level_metrics:
- # Unpack nested dictionaries
- final_results = {}
- for key, value in results.items():
- if isinstance(value, dict):
- for n, v in value.items():
- final_results[f"{key}_{n}"] = v
- else:
- final_results[key] = value
- return final_results
- else:
- return {
- "precision": results["overall_precision"],
- "recall": results["overall_recall"],
- "f1": results["overall_f1"],
- "accuracy": results["overall_accuracy"],
- }
-
- # Initialize our Trainer
- trainer = Trainer(
- model=model,
- args=training_args,
- train_dataset=train_dataset if training_args.do_train else None,
- eval_dataset=eval_dataset if training_args.do_eval else None,
- tokenizer=processor,
- data_collator=default_data_collator,
- compute_metrics=compute_metrics,
- )
-
- # Training
- if training_args.do_train:
- checkpoint = None
- if training_args.resume_from_checkpoint is not None:
- checkpoint = training_args.resume_from_checkpoint
- elif last_checkpoint is not None:
- checkpoint = last_checkpoint
- train_result = trainer.train(resume_from_checkpoint=checkpoint)
- metrics = train_result.metrics
- trainer.save_model() # Saves the tokenizer too for easy upload
-
- max_train_samples = (
- data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
- )
- metrics["train_samples"] = min(max_train_samples, len(train_dataset))
-
- trainer.log_metrics("train", metrics)
- trainer.save_metrics("train", metrics)
- trainer.save_state()
-
- # Evaluation
- if training_args.do_eval:
- logger.info("*** Evaluate ***")
-
- metrics = trainer.evaluate()
-
- max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
- metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
-
- trainer.log_metrics("eval", metrics)
- trainer.save_metrics("eval", metrics)
-
- # Predict
- if training_args.do_predict:
- logger.info("*** Predict ***")
-
- predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix="predict")
- predictions = np.argmax(predictions, axis=2)
-
- # Remove ignored index (special tokens)
- true_predictions = [
- [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
- for prediction, label in zip(predictions, labels)
- ]
-
- trainer.log_metrics("predict", metrics)
- trainer.save_metrics("predict", metrics)
-
- # Save predictions
- output_predictions_file = os.path.join(training_args.output_dir, "predictions.txt")
- if trainer.is_world_process_zero():
- with open(output_predictions_file, "w") as writer:
- for prediction in true_predictions:
- writer.write(" ".join(prediction) + "\n")
-
- kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "token-classification"}
- if data_args.dataset_name is not None:
- kwargs["dataset_tags"] = data_args.dataset_name
- if data_args.dataset_config_name is not None:
- kwargs["dataset_args"] = data_args.dataset_config_name
- kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
- else:
- kwargs["dataset"] = data_args.dataset_name
-
- if training_args.push_to_hub:
- trainer.push_to_hub(**kwargs)
- else:
- trainer.create_model_card(**kwargs)
-
-
-def _mp_fn(index):
- # For xla_spawn (TPUs)
- main()
-
-
-if __name__ == "__main__":
- main()
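
The script above funnels every example through `AutoProcessor`, which wraps LayoutLMv3's image processor and tokenizer. Below is a minimal sketch of that `prepare_examples()` encoding step on a single example; the image path, words, boxes and label ids are placeholders, and only processor arguments the script itself uses are assumed.

```python
# Sketch of the prepare_examples() step, applied to one hypothetical example.
# "form.png", the words, boxes and label ids are placeholders.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False, add_prefix_space=True
)

image = Image.open("form.png").convert("RGB")
words = ["Invoice", "Total:", "42.00"]
boxes = [[48, 84, 156, 108], [48, 120, 110, 144], [120, 120, 180, 144]]  # 0-1000 scale
word_labels = [0, 1, 2]  # integer ids consistent with label2id

encoding = processor(
    image,
    words,
    boxes=boxes,
    word_labels=word_labels,
    truncation=True,
    padding="max_length",
    max_length=512,
    return_tensors="pt",
)
print(sorted(encoding.keys()))  # attention_mask, bbox, input_ids, labels, pixel_values
```

Batched and without `return_tensors`, this is essentially what `dataset.map(prepare_examples, batched=True)` hands to the Trainer.
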
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/plugin_registry.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/plugin_registry.py
deleted file mode 100644
index 37d3db222ef2c7920628971a92e863d9915514c6..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/plugin_registry.py
+++ /dev/null
@@ -1,228 +0,0 @@
-import sys
-from typing import Any, Dict, List, Optional, Generic, TypeVar, cast
-from types import TracebackType
-
-if sys.version_info >= (3, 8):
- from importlib.metadata import entry_points
-else:
- from importlib_metadata import entry_points
-
-from toolz import curry
-
-
-PluginType = TypeVar("PluginType")
-
-
-class NoSuchEntryPoint(Exception):
- def __init__(self, group, name):
- self.group = group
- self.name = name
-
- def __str__(self):
- return f"No {self.name!r} entry point found in group {self.group!r}"
-
-
-class PluginEnabler:
- """Context manager for enabling plugins
-
- This object lets you use enable() as a context manager to
- temporarily enable a given plugin::
-
- with plugins.enable('name'):
- do_something() # 'name' plugin temporarily enabled
- # plugins back to original state
- """
-
- def __init__(self, registry: "PluginRegistry", name: str, **options):
- self.registry = registry # type: PluginRegistry
- self.name = name # type: str
- self.options = options # type: Dict[str, Any]
- self.original_state = registry._get_state() # type: Dict[str, Any]
- self.registry._enable(name, **options)
-
- def __enter__(self) -> "PluginEnabler":
- return self
-
- def __exit__(self, typ: type, value: Exception, traceback: TracebackType) -> None:
- self.registry._set_state(self.original_state)
-
- def __repr__(self) -> str:
- return "{}.enable({!r})".format(self.registry.__class__.__name__, self.name)
-
-
-class PluginRegistry(Generic[PluginType]):
- """A registry for plugins.
-
- This is a plugin registry that allows plugins to be loaded/registered
- in two ways:
-
- 1. Through an explicit call to ``.register(name, value)``.
- 2. By looking for other Python packages that are installed and provide
- a setuptools entry point group.
-
- When you create an instance of this class, provide the name of the
- entry point group to use::
-
-        reg = PluginRegistry('my_entrypoint_group')
-
- """
-
- # this is a mapping of name to error message to allow custom error messages
- # in case an entrypoint is not found
- entrypoint_err_messages = {} # type: Dict[str, str]
-
- # global settings is a key-value mapping of settings that are stored globally
- # in the registry rather than passed to the plugins
- _global_settings = {} # type: Dict[str, Any]
-
- def __init__(self, entry_point_group: str = "", plugin_type: type = object):
- """Create a PluginRegistry for a named entry point group.
-
- Parameters
- ==========
- entry_point_group: str
- The name of the entry point group.
- plugin_type: object
- A type that will optionally be used for runtime type checking of
- loaded plugins using isinstance.
- """
- self.entry_point_group = entry_point_group # type: str
- self.plugin_type = plugin_type # type: Optional[type]
- self._active = None # type: Optional[PluginType]
- self._active_name = "" # type: str
- self._plugins = {} # type: Dict[str, PluginType]
- self._options = {} # type: Dict[str, Any]
- self._global_settings = self.__class__._global_settings.copy() # type: dict
-
- def register(self, name: str, value: Optional[PluginType]) -> Optional[PluginType]:
- """Register a plugin by name and value.
-
- This method is used for explicit registration of a plugin and shouldn't be
- used to manage entry point managed plugins, which are auto-loaded.
-
- Parameters
- ==========
- name: str
- The name of the plugin.
- value: PluginType or None
- The actual plugin object to register or None to unregister that plugin.
-
- Returns
- =======
- plugin: PluginType or None
- The plugin that was registered or unregistered.
- """
- if value is None:
- return self._plugins.pop(name, None)
- else:
- assert isinstance(value, self.plugin_type) # type: ignore[arg-type] # Should ideally be fixed by better annotating plugin_type
- self._plugins[name] = value
- return value
-
- def names(self) -> List[str]:
- """List the names of the registered and entry points plugins."""
- exts = list(self._plugins.keys())
- e_points = importlib_metadata_get(self.entry_point_group)
- more_exts = [ep.name for ep in e_points]
- exts.extend(more_exts)
- return sorted(set(exts))
-
- def _get_state(self) -> Dict[str, Any]:
- """Return a dictionary representing the current state of the registry"""
- return {
- "_active": self._active,
- "_active_name": self._active_name,
- "_plugins": self._plugins.copy(),
- "_options": self._options.copy(),
- "_global_settings": self._global_settings.copy(),
- }
-
- def _set_state(self, state: Dict[str, Any]) -> None:
- """Reset the state of the registry"""
- assert set(state.keys()) == {
- "_active",
- "_active_name",
- "_plugins",
- "_options",
- "_global_settings",
- }
- for key, val in state.items():
- setattr(self, key, val)
-
- def _enable(self, name: str, **options) -> None:
- if name not in self._plugins:
- try:
- (ep,) = [
- ep
- for ep in importlib_metadata_get(self.entry_point_group)
- if ep.name == name
- ]
- except ValueError as err:
- if name in self.entrypoint_err_messages:
- raise ValueError(self.entrypoint_err_messages[name]) from err
- else:
- raise NoSuchEntryPoint(self.entry_point_group, name) from err
- value = cast(PluginType, ep.load())
- self.register(name, value)
- self._active_name = name
- self._active = self._plugins[name]
- for key in set(options.keys()) & set(self._global_settings.keys()):
- self._global_settings[key] = options.pop(key)
- self._options = options
-
- def enable(self, name: Optional[str] = None, **options) -> PluginEnabler:
- """Enable a plugin by name.
-
- This can be either called directly, or used as a context manager.
-
- Parameters
- ----------
- name : string (optional)
- The name of the plugin to enable. If not specified, then use the
- current active name.
- **options :
- Any additional parameters will be passed to the plugin as keyword
- arguments
-
- Returns
- -------
- PluginEnabler:
- An object that allows enable() to be used as a context manager
- """
- if name is None:
- name = self.active
- return PluginEnabler(self, name, **options)
-
- @property
- def active(self) -> str:
- """Return the name of the currently active plugin"""
- return self._active_name
-
- @property
- def options(self) -> Dict[str, Any]:
- """Return the current options dictionary"""
- return self._options
-
- def get(self) -> Optional[PluginType]:
- """Return the currently active plugin."""
- if self._options:
- return curry(self._active, **self._options)
- else:
- return self._active
-
- def __repr__(self) -> str:
- return "{}(active={!r}, registered={!r})" "".format(
- self.__class__.__name__, self._active_name, list(self.names())
- )
-
-
-def importlib_metadata_get(group):
- ep = entry_points()
- # 'select' was introduced in Python 3.10 and 'get' got deprecated
- # We don't check for Python version here as by checking with hasattr we
- # also get compatibility with the importlib_metadata package which had a different
- # deprecation cycle for 'get'
- if hasattr(ep, "select"):
- return ep.select(group=group)
- else:
- return ep.get(group, [])
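
The registry above supports both explicit registration and entry-point discovery. A small sketch of the explicit path, assuming the module is importable from its usual Altair location; the registry name and the lambda plugins are made up for illustration:

```python
# Illustrative only: "table_renderers" and the lambda plugins are invented names.
from altair.utils.plugin_registry import PluginRegistry

table_renderers = PluginRegistry(entry_point_group="", plugin_type=object)
table_renderers.register("json", lambda spec: {"format": "json", "spec": spec})
table_renderers.register("text", lambda spec: str(spec))

table_renderers.enable("json")
print(table_renderers.active)            # 'json'
print(table_renderers.get()({"x": 1}))   # {'format': 'json', 'spec': {'x': 1}}

# enable() doubles as a context manager that restores the previous state on exit.
with table_renderers.enable("text"):
    assert table_renderers.active == "text"
assert table_renderers.active == "json"
```
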
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/parser/isoparser.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/parser/isoparser.py
deleted file mode 100644
index 5d7bee38006d4e510b841d84df0322dee024b77c..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/parser/isoparser.py
+++ /dev/null
@@ -1,416 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module offers a parser for ISO-8601 strings
-
-It is intended to support all valid date, time and datetime formats per the
-ISO-8601 specification.
-
-.. versionadded:: 2.7.0
-"""
-from datetime import datetime, timedelta, time, date
-import calendar
-from dateutil import tz
-
-from functools import wraps
-
-import re
-import six
-
-__all__ = ["isoparse", "isoparser"]
-
-
-def _takes_ascii(f):
- @wraps(f)
- def func(self, str_in, *args, **kwargs):
- # If it's a stream, read the whole thing
- str_in = getattr(str_in, 'read', lambda: str_in)()
-
- # If it's unicode, turn it into bytes, since ISO-8601 only covers ASCII
- if isinstance(str_in, six.text_type):
- # ASCII is the same in UTF-8
- try:
- str_in = str_in.encode('ascii')
- except UnicodeEncodeError as e:
- msg = 'ISO-8601 strings should contain only ASCII characters'
- six.raise_from(ValueError(msg), e)
-
- return f(self, str_in, *args, **kwargs)
-
- return func
-
-
-class isoparser(object):
- def __init__(self, sep=None):
- """
- :param sep:
- A single character that separates date and time portions. If
- ``None``, the parser will accept any single character.
- For strict ISO-8601 adherence, pass ``'T'``.
- """
- if sep is not None:
- if (len(sep) != 1 or ord(sep) >= 128 or sep in '0123456789'):
- raise ValueError('Separator must be a single, non-numeric ' +
- 'ASCII character')
-
- sep = sep.encode('ascii')
-
- self._sep = sep
-
- @_takes_ascii
- def isoparse(self, dt_str):
- """
- Parse an ISO-8601 datetime string into a :class:`datetime.datetime`.
-
- An ISO-8601 datetime string consists of a date portion, followed
- optionally by a time portion - the date and time portions are separated
- by a single character separator, which is ``T`` in the official
- standard. Incomplete date formats (such as ``YYYY-MM``) may *not* be
- combined with a time portion.
-
- Supported date formats are:
-
- Common:
-
- - ``YYYY``
- - ``YYYY-MM`` or ``YYYYMM``
- - ``YYYY-MM-DD`` or ``YYYYMMDD``
-
- Uncommon:
-
-        - ``YYYY-Www`` or ``YYYYWww`` - ISO week (day defaults to 1, i.e. Monday)
- - ``YYYY-Www-D`` or ``YYYYWwwD`` - ISO week and day
-
- The ISO week and day numbering follows the same logic as
- :func:`datetime.date.isocalendar`.
-
- Supported time formats are:
-
- - ``hh``
- - ``hh:mm`` or ``hhmm``
- - ``hh:mm:ss`` or ``hhmmss``
- - ``hh:mm:ss.ssssss`` (Up to 6 sub-second digits)
-
- Midnight is a special case for `hh`, as the standard supports both
- 00:00 and 24:00 as a representation. The decimal separator can be
- either a dot or a comma.
-
-
- .. caution::
-
- Support for fractional components other than seconds is part of the
- ISO-8601 standard, but is not currently implemented in this parser.
-
- Supported time zone offset formats are:
-
- - `Z` (UTC)
- - `±HH:MM`
- - `±HHMM`
- - `±HH`
-
- Offsets will be represented as :class:`dateutil.tz.tzoffset` objects,
- with the exception of UTC, which will be represented as
- :class:`dateutil.tz.tzutc`. Time zone offsets equivalent to UTC (such
- as `+00:00`) will also be represented as :class:`dateutil.tz.tzutc`.
-
- :param dt_str:
- A string or stream containing only an ISO-8601 datetime string
-
- :return:
- Returns a :class:`datetime.datetime` representing the string.
- Unspecified components default to their lowest value.
-
- .. warning::
-
- As of version 2.7.0, the strictness of the parser should not be
- considered a stable part of the contract. Any valid ISO-8601 string
- that parses correctly with the default settings will continue to
- parse correctly in future versions, but invalid strings that
- currently fail (e.g. ``2017-01-01T00:00+00:00:00``) are not
- guaranteed to continue failing in future versions if they encode
- a valid date.
-
- .. versionadded:: 2.7.0
- """
- components, pos = self._parse_isodate(dt_str)
-
- if len(dt_str) > pos:
- if self._sep is None or dt_str[pos:pos + 1] == self._sep:
- components += self._parse_isotime(dt_str[pos + 1:])
- else:
- raise ValueError('String contains unknown ISO components')
-
- if len(components) > 3 and components[3] == 24:
- components[3] = 0
- return datetime(*components) + timedelta(days=1)
-
- return datetime(*components)
-
- @_takes_ascii
- def parse_isodate(self, datestr):
- """
- Parse the date portion of an ISO string.
-
- :param datestr:
- The string portion of an ISO string, without a separator
-
- :return:
- Returns a :class:`datetime.date` object
- """
- components, pos = self._parse_isodate(datestr)
- if pos < len(datestr):
- raise ValueError('String contains unknown ISO ' +
- 'components: {!r}'.format(datestr.decode('ascii')))
- return date(*components)
-
- @_takes_ascii
- def parse_isotime(self, timestr):
- """
- Parse the time portion of an ISO string.
-
- :param timestr:
- The time portion of an ISO string, without a separator
-
- :return:
- Returns a :class:`datetime.time` object
- """
- components = self._parse_isotime(timestr)
- if components[0] == 24:
- components[0] = 0
- return time(*components)
-
- @_takes_ascii
- def parse_tzstr(self, tzstr, zero_as_utc=True):
- """
- Parse a valid ISO time zone string.
-
- See :func:`isoparser.isoparse` for details on supported formats.
-
- :param tzstr:
- A string representing an ISO time zone offset
-
- :param zero_as_utc:
- Whether to return :class:`dateutil.tz.tzutc` for zero-offset zones
-
- :return:
- Returns :class:`dateutil.tz.tzoffset` for offsets and
- :class:`dateutil.tz.tzutc` for ``Z`` and (if ``zero_as_utc`` is
- specified) offsets equivalent to UTC.
- """
- return self._parse_tzstr(tzstr, zero_as_utc=zero_as_utc)
-
- # Constants
- _DATE_SEP = b'-'
- _TIME_SEP = b':'
- _FRACTION_REGEX = re.compile(b'[\\.,]([0-9]+)')
-
- def _parse_isodate(self, dt_str):
- try:
- return self._parse_isodate_common(dt_str)
- except ValueError:
- return self._parse_isodate_uncommon(dt_str)
-
- def _parse_isodate_common(self, dt_str):
- len_str = len(dt_str)
- components = [1, 1, 1]
-
- if len_str < 4:
- raise ValueError('ISO string too short')
-
- # Year
- components[0] = int(dt_str[0:4])
- pos = 4
- if pos >= len_str:
- return components, pos
-
- has_sep = dt_str[pos:pos + 1] == self._DATE_SEP
- if has_sep:
- pos += 1
-
- # Month
- if len_str - pos < 2:
- raise ValueError('Invalid common month')
-
- components[1] = int(dt_str[pos:pos + 2])
- pos += 2
-
- if pos >= len_str:
- if has_sep:
- return components, pos
- else:
- raise ValueError('Invalid ISO format')
-
- if has_sep:
- if dt_str[pos:pos + 1] != self._DATE_SEP:
- raise ValueError('Invalid separator in ISO string')
- pos += 1
-
- # Day
- if len_str - pos < 2:
- raise ValueError('Invalid common day')
- components[2] = int(dt_str[pos:pos + 2])
- return components, pos + 2
-
- def _parse_isodate_uncommon(self, dt_str):
- if len(dt_str) < 4:
- raise ValueError('ISO string too short')
-
- # All ISO formats start with the year
- year = int(dt_str[0:4])
-
- has_sep = dt_str[4:5] == self._DATE_SEP
-
- pos = 4 + has_sep # Skip '-' if it's there
- if dt_str[pos:pos + 1] == b'W':
- # YYYY-?Www-?D?
- pos += 1
- weekno = int(dt_str[pos:pos + 2])
- pos += 2
-
- dayno = 1
- if len(dt_str) > pos:
- if (dt_str[pos:pos + 1] == self._DATE_SEP) != has_sep:
- raise ValueError('Inconsistent use of dash separator')
-
- pos += has_sep
-
- dayno = int(dt_str[pos:pos + 1])
- pos += 1
-
- base_date = self._calculate_weekdate(year, weekno, dayno)
- else:
- # YYYYDDD or YYYY-DDD
- if len(dt_str) - pos < 3:
- raise ValueError('Invalid ordinal day')
-
- ordinal_day = int(dt_str[pos:pos + 3])
- pos += 3
-
- if ordinal_day < 1 or ordinal_day > (365 + calendar.isleap(year)):
- raise ValueError('Invalid ordinal day' +
- ' {} for year {}'.format(ordinal_day, year))
-
- base_date = date(year, 1, 1) + timedelta(days=ordinal_day - 1)
-
- components = [base_date.year, base_date.month, base_date.day]
- return components, pos
-
- def _calculate_weekdate(self, year, week, day):
- """
-        Calculate the day corresponding to the ISO year-week-day calendar.
-
- This function is effectively the inverse of
- :func:`datetime.date.isocalendar`.
-
- :param year:
- The year in the ISO calendar
-
- :param week:
- The week in the ISO calendar - range is [1, 53]
-
- :param day:
- The day in the ISO calendar - range is [1 (MON), 7 (SUN)]
-
- :return:
- Returns a :class:`datetime.date`
- """
- if not 0 < week < 54:
- raise ValueError('Invalid week: {}'.format(week))
-
- if not 0 < day < 8: # Range is 1-7
- raise ValueError('Invalid weekday: {}'.format(day))
-
- # Get week 1 for the specific year:
- jan_4 = date(year, 1, 4) # Week 1 always has January 4th in it
- week_1 = jan_4 - timedelta(days=jan_4.isocalendar()[2] - 1)
-
- # Now add the specific number of weeks and days to get what we want
- week_offset = (week - 1) * 7 + (day - 1)
- return week_1 + timedelta(days=week_offset)
-
- def _parse_isotime(self, timestr):
- len_str = len(timestr)
- components = [0, 0, 0, 0, None]
- pos = 0
- comp = -1
-
- if len_str < 2:
- raise ValueError('ISO time too short')
-
- has_sep = False
-
- while pos < len_str and comp < 5:
- comp += 1
-
- if timestr[pos:pos + 1] in b'-+Zz':
- # Detect time zone boundary
- components[-1] = self._parse_tzstr(timestr[pos:])
- pos = len_str
- break
-
- if comp == 1 and timestr[pos:pos+1] == self._TIME_SEP:
- has_sep = True
- pos += 1
- elif comp == 2 and has_sep:
- if timestr[pos:pos+1] != self._TIME_SEP:
- raise ValueError('Inconsistent use of colon separator')
- pos += 1
-
- if comp < 3:
- # Hour, minute, second
- components[comp] = int(timestr[pos:pos + 2])
- pos += 2
-
- if comp == 3:
- # Fraction of a second
- frac = self._FRACTION_REGEX.match(timestr[pos:])
- if not frac:
- continue
-
- us_str = frac.group(1)[:6] # Truncate to microseconds
- components[comp] = int(us_str) * 10**(6 - len(us_str))
- pos += len(frac.group())
-
- if pos < len_str:
- raise ValueError('Unused components in ISO string')
-
- if components[0] == 24:
- # Standard supports 00:00 and 24:00 as representations of midnight
- if any(component != 0 for component in components[1:4]):
- raise ValueError('Hour may only be 24 at 24:00:00.000')
-
- return components
-
- def _parse_tzstr(self, tzstr, zero_as_utc=True):
- if tzstr == b'Z' or tzstr == b'z':
- return tz.UTC
-
- if len(tzstr) not in {3, 5, 6}:
- raise ValueError('Time zone offset must be 1, 3, 5 or 6 characters')
-
- if tzstr[0:1] == b'-':
- mult = -1
- elif tzstr[0:1] == b'+':
- mult = 1
- else:
- raise ValueError('Time zone offset requires sign')
-
- hours = int(tzstr[1:3])
- if len(tzstr) == 3:
- minutes = 0
- else:
- minutes = int(tzstr[(4 if tzstr[3:4] == self._TIME_SEP else 3):])
-
- if zero_as_utc and hours == 0 and minutes == 0:
- return tz.UTC
- else:
- if minutes > 59:
- raise ValueError('Invalid minutes in time zone offset')
-
- if hours > 23:
- raise ValueError('Invalid hours in time zone offset')
-
- return tz.tzoffset(None, mult * (hours * 60 + minutes) * 60)
-
-
-DEFAULT_ISOPARSER = isoparser()
-isoparse = DEFAULT_ISOPARSER.isoparse
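
A few examples of what the parser above accepts, following the formats listed in the `isoparse` docstring (the date values themselves are arbitrary):

```python
from datetime import datetime
from dateutil.parser import isoparse, isoparser

print(isoparse("2018-09-14"))                     # 2018-09-14 00:00:00
print(isoparse("2018-09-14T15:32:10.123+02:00"))  # timezone-aware datetime
print(isoparse("2018-W37-5"))                     # ISO week date -> 2018-09-14 00:00:00

# 24:00 is accepted as midnight and rolls over to the next day.
assert isoparse("2018-09-14T24:00") == datetime(2018, 9, 15)

# A stricter parser that only accepts 'T' between the date and time parts.
strict = isoparser(sep="T")
print(strict.isoparse("2018-09-14T15:32"))
```
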
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/parser.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/parser.py
deleted file mode 100644
index 735f14a3b425a3ee7d81e6e4aa5a2b04202925c4..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/parser.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import codecs
-import re
-from typing import (IO, Iterator, Match, NamedTuple, Optional, # noqa:F401
- Pattern, Sequence, Tuple)
-
-
-def make_regex(string: str, extra_flags: int = 0) -> Pattern[str]:
- return re.compile(string, re.UNICODE | extra_flags)
-
-
-_newline = make_regex(r"(\r\n|\n|\r)")
-_multiline_whitespace = make_regex(r"\s*", extra_flags=re.MULTILINE)
-_whitespace = make_regex(r"[^\S\r\n]*")
-_export = make_regex(r"(?:export[^\S\r\n]+)?")
-_single_quoted_key = make_regex(r"'([^']+)'")
-_unquoted_key = make_regex(r"([^=\#\s]+)")
-_equal_sign = make_regex(r"(=[^\S\r\n]*)")
-_single_quoted_value = make_regex(r"'((?:\\'|[^'])*)'")
-_double_quoted_value = make_regex(r'"((?:\\"|[^"])*)"')
-_unquoted_value = make_regex(r"([^\r\n]*)")
-_comment = make_regex(r"(?:[^\S\r\n]*#[^\r\n]*)?")
-_end_of_line = make_regex(r"[^\S\r\n]*(?:\r\n|\n|\r|$)")
-_rest_of_line = make_regex(r"[^\r\n]*(?:\r|\n|\r\n)?")
-_double_quote_escapes = make_regex(r"\\[\\'\"abfnrtv]")
-_single_quote_escapes = make_regex(r"\\[\\']")
-
-
-class Original(NamedTuple):
- string: str
- line: int
-
-
-class Binding(NamedTuple):
- key: Optional[str]
- value: Optional[str]
- original: Original
- error: bool
-
-
-class Position:
- def __init__(self, chars: int, line: int) -> None:
- self.chars = chars
- self.line = line
-
- @classmethod
- def start(cls) -> "Position":
- return cls(chars=0, line=1)
-
- def set(self, other: "Position") -> None:
- self.chars = other.chars
- self.line = other.line
-
- def advance(self, string: str) -> None:
- self.chars += len(string)
- self.line += len(re.findall(_newline, string))
-
-
-class Error(Exception):
- pass
-
-
-class Reader:
- def __init__(self, stream: IO[str]) -> None:
- self.string = stream.read()
- self.position = Position.start()
- self.mark = Position.start()
-
- def has_next(self) -> bool:
- return self.position.chars < len(self.string)
-
- def set_mark(self) -> None:
- self.mark.set(self.position)
-
- def get_marked(self) -> Original:
- return Original(
- string=self.string[self.mark.chars:self.position.chars],
- line=self.mark.line,
- )
-
- def peek(self, count: int) -> str:
- return self.string[self.position.chars:self.position.chars + count]
-
- def read(self, count: int) -> str:
- result = self.string[self.position.chars:self.position.chars + count]
- if len(result) < count:
- raise Error("read: End of string")
- self.position.advance(result)
- return result
-
- def read_regex(self, regex: Pattern[str]) -> Sequence[str]:
- match = regex.match(self.string, self.position.chars)
- if match is None:
- raise Error("read_regex: Pattern not found")
- self.position.advance(self.string[match.start():match.end()])
- return match.groups()
-
-
-def decode_escapes(regex: Pattern[str], string: str) -> str:
- def decode_match(match: Match[str]) -> str:
- return codecs.decode(match.group(0), 'unicode-escape') # type: ignore
-
- return regex.sub(decode_match, string)
-
-
-def parse_key(reader: Reader) -> Optional[str]:
- char = reader.peek(1)
- if char == "#":
- return None
- elif char == "'":
- (key,) = reader.read_regex(_single_quoted_key)
- else:
- (key,) = reader.read_regex(_unquoted_key)
- return key
-
-
-def parse_unquoted_value(reader: Reader) -> str:
- (part,) = reader.read_regex(_unquoted_value)
- return re.sub(r"\s+#.*", "", part).rstrip()
-
-
-def parse_value(reader: Reader) -> str:
- char = reader.peek(1)
- if char == u"'":
- (value,) = reader.read_regex(_single_quoted_value)
- return decode_escapes(_single_quote_escapes, value)
- elif char == u'"':
- (value,) = reader.read_regex(_double_quoted_value)
- return decode_escapes(_double_quote_escapes, value)
- elif char in (u"", u"\n", u"\r"):
- return u""
- else:
- return parse_unquoted_value(reader)
-
-
-def parse_binding(reader: Reader) -> Binding:
- reader.set_mark()
- try:
- reader.read_regex(_multiline_whitespace)
- if not reader.has_next():
- return Binding(
- key=None,
- value=None,
- original=reader.get_marked(),
- error=False,
- )
- reader.read_regex(_export)
- key = parse_key(reader)
- reader.read_regex(_whitespace)
- if reader.peek(1) == "=":
- reader.read_regex(_equal_sign)
- value: Optional[str] = parse_value(reader)
- else:
- value = None
- reader.read_regex(_comment)
- reader.read_regex(_end_of_line)
- return Binding(
- key=key,
- value=value,
- original=reader.get_marked(),
- error=False,
- )
- except Error:
- reader.read_regex(_rest_of_line)
- return Binding(
- key=None,
- value=None,
- original=reader.get_marked(),
- error=True,
- )
-
-
-def parse_stream(stream: IO[str]) -> Iterator[Binding]:
- reader = Reader(stream)
- while reader.has_next():
- yield parse_binding(reader)
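
The parser above yields one `Binding` per logical line, including error bindings for lines it cannot parse. A small sketch, assuming the module is importable from its usual python-dotenv location; the variable names and values are invented:

```python
import io

from dotenv.parser import parse_stream

stream = io.StringIO(
    "# comment only\n"
    "export DB_HOST=localhost\n"
    "DB_PASS='s3cret # not a comment'\n"
    "THIS LINE IS MALFORMED\n"
)

for b in parse_stream(stream):
    print(repr(b.key), repr(b.value), b.error)
# None None False                         (comment-only line)
# 'DB_HOST' 'localhost' False
# 'DB_PASS' 's3cret # not a comment' False
# None None True                          (malformed line surfaces as an error binding)
```
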
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/config/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/config/__init__.py
deleted file mode 100644
index c106fe51fc0b8a6926fa67928d4de7af1b9ffe5e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/config/__init__.py
+++ /dev/null
@@ -1,74 +0,0 @@
-"""
-Define all configuration options that can affect the working of fontTools
-modules. E.g. optimization levels of varLib IUP, otlLib GPOS compression level,
-etc. If this file gets too big, split it into smaller files per-module.
-
-An instance of the Config class can be attached to a TTFont object, so that
-the various modules can access their configuration options from it.
-"""
-from textwrap import dedent
-
-from fontTools.misc.configTools import *
-
-
-class Config(AbstractConfig):
- options = Options()
-
-
-OPTIONS = Config.options
-
-
-Config.register_option(
- name="fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL",
- help=dedent(
- """\
- GPOS Lookup type 2 (PairPos) compression level:
- 0 = do not attempt to compact PairPos lookups;
- 1 to 8 = create at most 1 to 8 new subtables for each existing
- subtable, provided that it would yield a 50%% file size saving;
- 9 = create as many new subtables as needed to yield a file size saving.
- Default: 0.
-
- This compaction aims to save file size, by splitting large class
- kerning subtables (Format 2) that contain many zero values into
- smaller and denser subtables. It's a trade-off between the overhead
- of several subtables versus the sparseness of one big subtable.
-
- See the pull request: https://github.com/fonttools/fonttools/pull/2326
- """
- ),
- default=0,
- parse=int,
- validate=lambda v: v in range(10),
-)
-
-Config.register_option(
- name="fontTools.ttLib.tables.otBase:USE_HARFBUZZ_REPACKER",
- help=dedent(
- """\
- FontTools tries to use the HarfBuzz Repacker to serialize GPOS/GSUB tables
- if the uharfbuzz python bindings are importable, otherwise falls back to its
- slower, less efficient serializer. Set to False to always use the latter.
- Set to True to explicitly request the HarfBuzz Repacker (will raise an
- error if uharfbuzz cannot be imported).
- """
- ),
- default=None,
- parse=Option.parse_optional_bool,
- validate=Option.validate_optional_bool,
-)
-
-Config.register_option(
- name="fontTools.otlLib.builder:WRITE_GPOS7",
- help=dedent(
- """\
- macOS before 13.2 didn’t support GPOS LookupType 7 (non-chaining
- ContextPos lookups), so FontTools.otlLib.builder disables a file size
- optimisation that would use LookupType 7 instead of 8 when there is no
- chaining (no prefix or suffix). Set to True to enable the optimization.
- """
- ),
- default=False,
- parse=Option.parse_optional_bool,
- validate=Option.validate_optional_bool,
-)
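
Third-party code can register its own options through the same `Config.register_option` call used above. The option name below is hypothetical, and the `get`/`set` accessors are assumed from the `AbstractConfig` mapping interface in `fontTools.misc.configTools`:

```python
from textwrap import dedent

from fontTools.config import Config
from fontTools.misc.configTools import Option

# Hypothetical option, registered exactly like the built-in ones above.
Config.register_option(
    name="myTool.subset:KEEP_HINTS",
    help=dedent(
        """\
        Keep TrueType hinting instructions when subsetting. Default: False.
        """
    ),
    default=False,
    parse=Option.parse_optional_bool,
    validate=Option.validate_optional_bool,
)

cfg = Config()
print(cfg.get("myTool.subset:KEEP_HINTS"))   # False (the registered default)
cfg.set("myTool.subset:KEEP_HINTS", True)
print(cfg.get("myTool.subset:KEEP_HINTS"))   # True
```
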
diff --git a/spaces/cihyFjudo/fairness-paper-search/Full [EXCLUSIVE] Hd Toilet - Ek Prem Katha M.md b/spaces/cihyFjudo/fairness-paper-search/Full [EXCLUSIVE] Hd Toilet - Ek Prem Katha M.md
deleted file mode 100644
index 03cbf60ff4e03252b98f80f961c953be98bded1f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Full [EXCLUSIVE] Hd Toilet - Ek Prem Katha M.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Full Hd Toilet - Ek Prem Katha M
DOWNLOAD ⚹ https://tinurli.com/2uwi8D
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/How to Install and Use Sampleson Melox Pro v1.0.0 WiN on Your Windows or Mac Computer.md b/spaces/cihyFjudo/fairness-paper-search/How to Install and Use Sampleson Melox Pro v1.0.0 WiN on Your Windows or Mac Computer.md
deleted file mode 100644
index b8b5fd3cd06ce8d18eb5a46ab260f0bc1a02f31a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/How to Install and Use Sampleson Melox Pro v1.0.0 WiN on Your Windows or Mac Computer.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Year / Release Date: 2020
Developer: Sampleson
Developer site: sampleson
Format: STANDALONE, VSTi, AU
Bit depth: 32bit, 64bit
Tabletka: not required | RETAiL
System requirements: WIN, macOS, Host with support for VST, AU plugins
-A set of 12 virtual instruments from Sampleson:
Sampleson 1959 v1.0.0
Sampleson Akoustic Spectral Synthesizer v1.1.0
Sampleson Brushify Pro v1.0.1
Sampleson Electrix v1.5.0
Sampleson ElectroNylon v1.0.0
Sampleson Glassy v1.6.0
Sampleson Klavee v1.1.7
Sampleson Markus 88 v1.5.2
Sampleson Melox Pro v1.5.1
Sampleson Model T v1.1.0
Sampleson Reed200 v1.0.0
Sampleson Suit73 v1.4.1
-Sampleson Melox Pro v1.0.0 WiN
Download ❤❤❤ https://tinurli.com/2uwhzO
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Lac Havasu Az Massage Erotique [UPD].md b/spaces/cihyFjudo/fairness-paper-search/Lac Havasu Az Massage Erotique [UPD].md
deleted file mode 100644
index 4ea612c2f2be0e97e06f14b0313773e7cbeb9f86..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Lac Havasu Az Massage Erotique [UPD].md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-We strive to make every visit to Massage Green SPA work within your schedule and budget, and we guarantee that every service will deliver a soothing, relaxing result that helps you find the commitment to incorporate massage, SPA facials and infrared sauna as an important part of your wellness routine.
-Lac Havasu Az Massage Erotique
DOWNLOAD 🔗 https://tinurli.com/2uwi3l
-Finding a couples retreat near me isn't always easy, but we can help. Are you looking for romantic AZ getaways and romantic getaways 'near me' this year? Our romantic glamping getaways in Arizona are the perfect way to spend some quality time with the one who stole your heart. Spend some well-deserved time together in the luxury spas and indulge in luxurious couples treatments in an intimate setting. There is nothing better than relieving stress during a back massage or enjoying an uplifting, refreshing facial. Perhaps an isolated cabin is what you desire on your romantic getaway? Arizona is the adventure you need. Simply step outside of your glamping accommodations and see the rich natural beauty. Explore the art and culture in all the corners of the state, from high-class fine arts to traditional crafts which you can get involved in, there is something for everyone. By night, dine in a gourmet restaurant and taste all the different culinary delights, from Native American to Mexican. Then retreat to your Arizona accommodation and simply enjoy spending time with each other. These romantic weekend getaways near me, Arizona vacation spots, and romantic getaways in Arizona cabins are not to be missed! The best cabin vacations 'near me' are only a click away. Stop searching for a couples getaway 'near me' and start planning an unforgettable vacation. Couple getaways in AZ don't come any better than the ones you'll find right here, so don't miss out.Book the most 'romantic getaway near me' with Glamping Hub!
-Sensual Massage in Lake Havasu City AZ may appear eccentric to many individuals but in fact it is in line with some of the earliest teachings in Far east praise. And the unquestionable aspect remains that by practising sacred orgasm massage our energy levels are increased and the sexual encounter extends to a psychic stage. Sacred Orgasm Massage is a major premature ejaculation control therapy by itself.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlReader.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlReader.py
deleted file mode 100644
index d8e502f141e9cb5df6ea11352b565c9a9cd4aa3d..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/xmlReader.py
+++ /dev/null
@@ -1,188 +0,0 @@
-from fontTools import ttLib
-from fontTools.misc.textTools import safeEval
-from fontTools.ttLib.tables.DefaultTable import DefaultTable
-import sys
-import os
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-class TTXParseError(Exception):
- pass
-
-
-BUFSIZE = 0x4000
-
-
-class XMLReader(object):
- def __init__(
- self, fileOrPath, ttFont, progress=None, quiet=None, contentOnly=False
- ):
- if fileOrPath == "-":
- fileOrPath = sys.stdin
- if not hasattr(fileOrPath, "read"):
- self.file = open(fileOrPath, "rb")
- self._closeStream = True
- else:
- # assume readable file object
- self.file = fileOrPath
- self._closeStream = False
- self.ttFont = ttFont
- self.progress = progress
- if quiet is not None:
- from fontTools.misc.loggingTools import deprecateArgument
-
- deprecateArgument("quiet", "configure logging instead")
- self.quiet = quiet
- self.root = None
- self.contentStack = []
- self.contentOnly = contentOnly
- self.stackSize = 0
-
- def read(self, rootless=False):
- if rootless:
- self.stackSize += 1
- if self.progress:
- self.file.seek(0, 2)
- fileSize = self.file.tell()
- self.progress.set(0, fileSize // 100 or 1)
- self.file.seek(0)
- self._parseFile(self.file)
- if self._closeStream:
- self.close()
- if rootless:
- self.stackSize -= 1
-
- def close(self):
- self.file.close()
-
- def _parseFile(self, file):
- from xml.parsers.expat import ParserCreate
-
- parser = ParserCreate()
- parser.StartElementHandler = self._startElementHandler
- parser.EndElementHandler = self._endElementHandler
- parser.CharacterDataHandler = self._characterDataHandler
-
- pos = 0
- while True:
- chunk = file.read(BUFSIZE)
- if not chunk:
- parser.Parse(chunk, 1)
- break
- pos = pos + len(chunk)
- if self.progress:
- self.progress.set(pos // 100)
- parser.Parse(chunk, 0)
-
- def _startElementHandler(self, name, attrs):
- if self.stackSize == 1 and self.contentOnly:
- # We already know the table we're parsing, skip
- # parsing the table tag and continue to
- # stack '2' which begins parsing content
- self.contentStack.append([])
- self.stackSize = 2
- return
- stackSize = self.stackSize
- self.stackSize = stackSize + 1
- subFile = attrs.get("src")
- if subFile is not None:
- if hasattr(self.file, "name"):
- # if file has a name, get its parent directory
- dirname = os.path.dirname(self.file.name)
- else:
- # else fall back to using the current working directory
- dirname = os.getcwd()
- subFile = os.path.join(dirname, subFile)
- if not stackSize:
- if name != "ttFont":
- raise TTXParseError("illegal root tag: %s" % name)
- if self.ttFont.reader is None and not self.ttFont.tables:
- sfntVersion = attrs.get("sfntVersion")
- if sfntVersion is not None:
- if len(sfntVersion) != 4:
- sfntVersion = safeEval('"' + sfntVersion + '"')
- self.ttFont.sfntVersion = sfntVersion
- self.contentStack.append([])
- elif stackSize == 1:
- if subFile is not None:
- subReader = XMLReader(subFile, self.ttFont, self.progress)
- subReader.read()
- self.contentStack.append([])
- return
- tag = ttLib.xmlToTag(name)
- msg = "Parsing '%s' table..." % tag
- if self.progress:
- self.progress.setLabel(msg)
- log.info(msg)
- if tag == "GlyphOrder":
- tableClass = ttLib.GlyphOrder
- elif "ERROR" in attrs or ("raw" in attrs and safeEval(attrs["raw"])):
- tableClass = DefaultTable
- else:
- tableClass = ttLib.getTableClass(tag)
- if tableClass is None:
- tableClass = DefaultTable
- if tag == "loca" and tag in self.ttFont:
- # Special-case the 'loca' table as we need the
- # original if the 'glyf' table isn't recompiled.
- self.currentTable = self.ttFont[tag]
- else:
- self.currentTable = tableClass(tag)
- self.ttFont[tag] = self.currentTable
- self.contentStack.append([])
- elif stackSize == 2 and subFile is not None:
- subReader = XMLReader(subFile, self.ttFont, self.progress, contentOnly=True)
- subReader.read()
- self.contentStack.append([])
- self.root = subReader.root
- elif stackSize == 2:
- self.contentStack.append([])
- self.root = (name, attrs, self.contentStack[-1])
- else:
- l = []
- self.contentStack[-1].append((name, attrs, l))
- self.contentStack.append(l)
-
- def _characterDataHandler(self, data):
- if self.stackSize > 1:
- # parser parses in chunks, so we may get multiple calls
- # for the same text node; thus we need to append the data
- # to the last item in the content stack:
- # https://github.com/fonttools/fonttools/issues/2614
- if (
- data != "\n"
- and self.contentStack[-1]
- and isinstance(self.contentStack[-1][-1], str)
- and self.contentStack[-1][-1] != "\n"
- ):
- self.contentStack[-1][-1] += data
- else:
- self.contentStack[-1].append(data)
-
- def _endElementHandler(self, name):
- self.stackSize = self.stackSize - 1
- del self.contentStack[-1]
- if not self.contentOnly:
- if self.stackSize == 1:
- self.root = None
- elif self.stackSize == 2:
- name, attrs, content = self.root
- self.currentTable.fromXML(name, attrs, content, self.ttFont)
- self.root = None
-
-
-class ProgressPrinter(object):
- def __init__(self, title, maxval=100):
- print(title)
-
- def set(self, val, maxval=None):
- pass
-
- def increment(self, val=1):
- pass
-
- def setLabel(self, text):
- print(text)
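
Round-tripping a TTX dump back into a font object goes through this reader; in practice `TTFont.importXML()` is the usual entry point and wraps it. A minimal sketch with a placeholder file name:

```python
from fontTools import ttLib
from fontTools.misc.xmlReader import XMLReader

# "MyFont.ttx" is a placeholder for a dump previously produced by ttx / saveXML().
font = ttLib.TTFont()
XMLReader("MyFont.ttx", font).read()

print(font.sfntVersion)
print(sorted(font.keys()))  # the table tags reconstructed from the TTX file
```
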
diff --git a/spaces/colakin/video-generater/public/ffmpeg/LICENSE.md b/spaces/colakin/video-generater/public/ffmpeg/LICENSE.md
deleted file mode 100644
index 613070e1b631a919a74c862f2a75a0f249a631ff..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/LICENSE.md
+++ /dev/null
@@ -1,129 +0,0 @@
-# License
-
-Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
-or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
-files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
-FFmpeg.
-
-Some optional parts of FFmpeg are licensed under the GNU General Public License
-version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
-these parts are used by default, you have to explicitly pass `--enable-gpl` to
-configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
-
-Specifically, the GPL parts of FFmpeg are:
-
-- libpostproc
-- optional x86 optimization in the files
- - `libavcodec/x86/flac_dsp_gpl.asm`
- - `libavcodec/x86/idct_mmx.c`
- - `libavfilter/x86/vf_removegrain.asm`
-- the following building and testing tools
- - `compat/solaris/make_sunver.pl`
- - `doc/t2h.pm`
- - `doc/texi2pod.pl`
- - `libswresample/tests/swresample.c`
- - `tests/checkasm/*`
- - `tests/tiny_ssim.c`
-- the following filters in libavfilter:
- - `signature_lookup.c`
- - `vf_blackframe.c`
- - `vf_boxblur.c`
- - `vf_colormatrix.c`
- - `vf_cover_rect.c`
- - `vf_cropdetect.c`
- - `vf_delogo.c`
- - `vf_eq.c`
- - `vf_find_rect.c`
- - `vf_fspp.c`
- - `vf_histeq.c`
- - `vf_hqdn3d.c`
- - `vf_kerndeint.c`
- - `vf_lensfun.c` (GPL version 3 or later)
- - `vf_mcdeint.c`
- - `vf_mpdecimate.c`
- - `vf_nnedi.c`
- - `vf_owdenoise.c`
- - `vf_perspective.c`
- - `vf_phase.c`
- - `vf_pp.c`
- - `vf_pp7.c`
- - `vf_pullup.c`
- - `vf_repeatfields.c`
- - `vf_sab.c`
- - `vf_signature.c`
- - `vf_smartblur.c`
- - `vf_spp.c`
- - `vf_stereo3d.c`
- - `vf_super2xsai.c`
- - `vf_tinterlace.c`
- - `vf_uspp.c`
- - `vf_vaguedenoiser.c`
- - `vsrc_mptestsrc.c`
-
-Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
-the configure parameter `--enable-version3` will activate this licensing option
-for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
-`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
-
-There are a handful of files under other licensing terms, namely:
-
-* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
- `libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
- licensing details. Specifically note that you must credit the IJG in the
- documentation accompanying your program if you only distribute executables.
- You must also indicate any changes including additions and deletions to
- those three files in the documentation.
-* `tests/reference.pnm` is under the expat license.
-
-
-## External libraries
-
-FFmpeg can be combined with a number of external libraries, which sometimes
-affect the licensing of binaries resulting from the combination.
-
-### Compatible libraries
-
-The following libraries are under GPL version 2:
-- avisynth
-- frei0r
-- libcdio
-- libdavs2
-- librubberband
-- libvidstab
-- libx264
-- libx265
-- libxavs
-- libxavs2
-- libxvid
-
-When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
-passing `--enable-gpl` to configure.
-
-The following libraries are under LGPL version 3:
-- gmp
-- libaribb24
-- liblensfun
-
-When combining them with FFmpeg, use the configure option `--enable-version3` to
-upgrade FFmpeg to the LGPL v3.
-
-The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License
-2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with
-version 3 of those licenses. So to combine these libraries with FFmpeg, the
-license version needs to be upgraded by passing `--enable-version3` to configure.
-
-The smbclient library is under the GPL v3, to combine it with FFmpeg,
-the options `--enable-gpl` and `--enable-version3` have to be passed to
-configure to upgrade FFmpeg to the GPL v3.
-
-### Incompatible libraries
-
-There are certain libraries you can combine with FFmpeg whose licenses are not
-compatible with the GPL and/or the LGPL. If you wish to enable these
-libraries, even in circumstances that their license may be incompatible, pass
-`--enable-nonfree` to configure. This will cause the resulting binary to be
-unredistributable.
-
-The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are
-incompatible with the GPLv2 and v3. To the best of our knowledge, they are
-compatible with the LGPL.
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/anm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/anm.c
deleted file mode 100644
index 4aabe853b43a070a3d95395642573a864743aa17..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/anm.c
+++ /dev/null
@@ -1,202 +0,0 @@
-/*
- * Deluxe Paint Animation decoder
- * Copyright (c) 2009 Peter Ross
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * Deluxe Paint Animation decoder
- */
-
-#include "avcodec.h"
-#include "bytestream.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-typedef struct AnmContext {
- AVFrame *frame;
- int palette[AVPALETTE_COUNT];
-} AnmContext;
-
-static av_cold int decode_init(AVCodecContext *avctx)
-{
- AnmContext *s = avctx->priv_data;
- GetByteContext gb;
- int i;
-
- if (avctx->extradata_size < 16 * 8 + 4 * 256)
- return AVERROR_INVALIDDATA;
-
- avctx->pix_fmt = AV_PIX_FMT_PAL8;
-
- s->frame = av_frame_alloc();
- if (!s->frame)
- return AVERROR(ENOMEM);
-
- bytestream2_init(&gb, avctx->extradata, avctx->extradata_size);
- bytestream2_skipu(&gb, 16 * 8);
- for (i = 0; i < 256; i++)
- s->palette[i] = (0xFFU << 24) | bytestream2_get_le32u(&gb);
-
- return 0;
-}
-
-/**
- * Perform decode operation
- * @param dst pointer to destination image buffer
- * @param dst_end pointer to end of destination image buffer
- * @param gb GetByteContext (optional, see below)
- * @param pixel Fill color (optional, see below)
- * @param count Pixel count
- * @param x Pointer to x-axis counter
- * @param width Image width
- * @param linesize Destination image buffer linesize
- * @return non-zero if destination buffer is exhausted
- *
- * a copy operation is achieved when 'gb' is set
- * a fill operation is achieved when 'gb' is null and pixel is >= 0
- * a skip operation is achieved when 'gb' is null and pixel is < 0
- */
-static inline int op(uint8_t **dst, const uint8_t *dst_end,
- GetByteContext *gb,
- int pixel, int count,
- int *x, int width, int linesize)
-{
- int remaining = width - *x;
- while(count > 0) {
- int striplen = FFMIN(count, remaining);
- if (gb) {
- if (bytestream2_get_bytes_left(gb) < striplen)
- goto exhausted;
- bytestream2_get_bufferu(gb, *dst, striplen);
- } else if (pixel >= 0)
- memset(*dst, pixel, striplen);
- *dst += striplen;
- remaining -= striplen;
- count -= striplen;
- if (remaining <= 0) {
- *dst += linesize - width;
- remaining = width;
- }
- if (linesize > 0) {
- if (*dst >= dst_end) goto exhausted;
- } else {
- if (*dst <= dst_end) goto exhausted;
- }
- }
- *x = width - remaining;
- return 0;
-
-exhausted:
- *x = width - remaining;
- return 1;
-}
-
-static int decode_frame(AVCodecContext *avctx, AVFrame *rframe,
- int *got_frame, AVPacket *avpkt)
-{
- AnmContext *s = avctx->priv_data;
- const int buf_size = avpkt->size;
- uint8_t *dst, *dst_end;
- GetByteContext gb;
- int count, ret, x = 0;
-
- if (buf_size < 7)
- return AVERROR_INVALIDDATA;
-
- if ((ret = ff_reget_buffer(avctx, s->frame, 0)) < 0)
- return ret;
- dst = s->frame->data[0];
- dst_end = s->frame->data[0] + s->frame->linesize[0]*avctx->height;
-
- bytestream2_init(&gb, avpkt->data, buf_size);
-
- if (bytestream2_get_byte(&gb) != 0x42) {
- avpriv_request_sample(avctx, "Unknown record type");
- return AVERROR_INVALIDDATA;
- }
- if (bytestream2_get_byte(&gb)) {
- avpriv_request_sample(avctx, "Padding bytes");
- return AVERROR_PATCHWELCOME;
- }
- bytestream2_skip(&gb, 2);
-
- do {
- /* if statements are ordered by probability */
-#define OP(gb, pixel, count) \
- op(&dst, dst_end, (gb), (pixel), (count), &x, avctx->width, s->frame->linesize[0])
-
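- /* Record format, as handled below: the first byte packs a 7-bit count in
-  * its low bits and an operation flag in bit 7. A non-zero count copies
-  * literal pixels when bit 7 is clear and skips them when it is set.
-  * A zero count escapes to a longer form: with bit 7 clear, the next two
-  * bytes give a fill count and a fill pixel; with bit 7 set, a 16-bit LE
-  * word follows whose low 14 bits are the count and whose top two bits
-  * select skip (0), skip of count + 0x4000 (1), copy (2) or fill (3);
-  * a zero 14-bit count there means stop (0), unknown opcode (2), or is
-  * otherwise ignored. */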
- int type = bytestream2_get_byte(&gb);
- count = type & 0x7F;
- type >>= 7;
- if (count) {
- if (OP(type ? NULL : &gb, -1, count)) break;
- } else if (!type) {
- int pixel;
- count = bytestream2_get_byte(&gb); /* count==0 gives nop */
- pixel = bytestream2_get_byte(&gb);
- if (OP(NULL, pixel, count)) break;
- } else {
- int pixel;
- type = bytestream2_get_le16(&gb);
- count = type & 0x3FFF;
- type >>= 14;
- if (!count) {
- if (type == 0)
- break; // stop
- if (type == 2) {
- avpriv_request_sample(avctx, "Unknown opcode");
- return AVERROR_PATCHWELCOME;
- }
- continue;
- }
- pixel = type == 3 ? bytestream2_get_byte(&gb) : -1;
- if (type == 1) count += 0x4000;
- if (OP(type == 2 ? &gb : NULL, pixel, count)) break;
- }
- } while (bytestream2_get_bytes_left(&gb) > 0);
-
- memcpy(s->frame->data[1], s->palette, AVPALETTE_SIZE);
-
- *got_frame = 1;
- if ((ret = av_frame_ref(rframe, s->frame)) < 0)
- return ret;
-
- return buf_size;
-}
-
-static av_cold int decode_end(AVCodecContext *avctx)
-{
- AnmContext *s = avctx->priv_data;
-
- av_frame_free(&s->frame);
- return 0;
-}
-
-const FFCodec ff_anm_decoder = {
- .p.name = "anm",
- CODEC_LONG_NAME("Deluxe Paint Animation"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_ANM,
- .priv_data_size = sizeof(AnmContext),
- .init = decode_init,
- .close = decode_end,
- FF_CODEC_DECODE_CB(decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bed Wars APK Mod Unlimited Money and More for Epic Battles.md b/spaces/congsaPfin/Manga-OCR/logs/Bed Wars APK Mod Unlimited Money and More for Epic Battles.md
deleted file mode 100644
index 636b7094a3ca7d77ae1c28731f84db21a75422d9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Bed Wars APK Mod Unlimited Money and More for Epic Battles.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-Bed Wars APK Download Hack: Everything You Need to Know
- Bed Wars is a popular team-based PVP game that has attracted millions of players around the world. The game is available on various platforms, such as Minecraft, Roblox, and Garena Blockman GO. The game's objective is to protect your bed from other players while trying to destroy their beds and eliminate them. The game requires strategy, teamwork, and skills to win.
-bed wars apk download hack
DOWNLOAD ⇔ https://urlca.com/2uOaMs
- However, some players want to have an edge over their opponents and use an apk download hack to get unlimited resources, items, and abilities in the game. An apk download hack is a modified version of the original game file that can be downloaded and installed on your device. It can give you access to features that are not available in the official game, such as unlimited money, diamonds, golds, irons, blocks, weapons, tools, firebombs, traps, kits, skins, and more.
- How to Download and Install Bed Wars APK Hack
- If you want to try out bed wars apk hack, you need to follow these steps:
-
-- Find a reliable source that offers bed wars apk hack for your device. You can search online for one.
-- Download the bed wars apk hack file from the source. Make sure it is compatible with your device and has the latest version of the game.
-- Enable unknown sources on your device settings. This will allow you to install apps from sources other than the official app store.
-- Locate the downloaded bed wars apk hack file on your device and tap on it to install it.
-- Wait for the installation process to finish and launch the game.
-
- What are the Features and Benefits of Bed Wars APK Hack
- By using bed wars apk hack, you can enjoy these features and benefits:
-
-- You can get unlimited resources such as money, diamonds, golds, irons, blocks, etc. that you can use to buy items from merchants on the island or upgrade your team perks.
-- You can get unlimited items such as weapons, tools, firebombs, traps, etc. that you can use to attack your enemies or defend your base.
-- You can get unlimited abilities such as speed boost, jump boost, invisibility, etc. that you can use to gain an advantage over your opponents.
-- You can get access to all the kits that have different skills and effects that you can use in the game.
-- You can get access to all the skins that have different appearances and styles that you can use to customize your avatar.
-- You can get access to all the maps that have different layouts and themes that you can play on.
-- You can get access to all the modes that have different rules and objectives that you can choose from.
-
- What are the Risks and Drawbacks of Bed Wars APK Hack
- However, using bed wars apk hack also comes with some risks and drawbacks:
-
-- You may encounter bugs or glitches that may affect your gameplay or cause crashes.
-- You may face compatibility issues with your device or the official game updates.
-- You may lose your progress or data if you uninstall or reinstall the game.
-- You may get banned or reported by other players or the game developers if they detect your use of bed wars apk hack.
-- You may expose your device or data to viruses or malware that may be hidden in the bed wars apk hack file.
-- You may lose the fun and challenge of playing the game as it was intended.
-
- Conclusion
- Bed Wars is a fun and exciting game that can be enjoyed by players of all ages and backgrounds. However, some players may want to use bed wars apk hack to get unlimited resources, items, and abilities in the game. While this may seem tempting, it also comes with some risks and drawbacks that may ruin your gaming experience or even harm your device or data. Therefore, we recommend that you play the game without using bed wars apk hack and enjoy the game as it was designed.
- FAQs
- What is the difference between bed wars mod apk and bed wars apk hack?
- A bed wars mod apk is a modified version of the original game file that can be downloaded and installed on your device. It can add new features or change existing ones in the game. A bed wars apk hack is a type of bed wars mod apk that can give you unlimited resources, items, and abilities in the game.
- Is bed wars apk hack safe and legal to use?
- No, bed wars apk hack is not safe and legal to use. It may contain viruses or malware that can harm your device or data. It may also violate the terms and conditions of the game developers and get you banned or reported by other players. It is better to avoid using bed wars apk hack and play the game fairly.
- How to update bed wars apk hack to the latest version?
- If you want to update bed wars apk hack to the latest version, you need to find a reliable source that offers the updated version of the file. You can search online for one. Then, you need to download and install the updated file on your device. However, be careful as some sources may not be trustworthy or compatible with your device.
- How to uninstall bed wars apk hack from your device?
- If you want to uninstall bed wars apk hack from your device, you need to follow these steps:
-
-- Go to your device settings and find the apps section.
-- Find and select bed wars apk hack from the list of apps.
-- Tap on uninstall and confirm your action.
-- Wait for the uninstallation process to finish and restart your device.
-
- Where to find more tips and tricks for bed wars game?
- If you want to find more tips and tricks for bed wars game, you can check out these sources:
-
-- [Bed Wars Wiki]: A comprehensive wiki that contains information about the game, such as gameplay, items, maps, modes, kits, skins, etc.
-- [Bed Wars YouTube Channel]: The official YouTube channel of the game that features videos about the game updates, events, tutorials, etc.
-- [Bed Wars Reddit Community]: A subreddit where you can interact with other players, share your experiences, ask questions, give feedback, etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Klee Genshin Impact Ringtone - Boom Boom Bakudan and More.md b/spaces/congsaPfin/Manga-OCR/logs/Download Klee Genshin Impact Ringtone - Boom Boom Bakudan and More.md
deleted file mode 100644
index ac1a9405fffcce456b7e8cb839b83662992ab5db..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Klee Genshin Impact Ringtone - Boom Boom Bakudan and More.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-How to Download Ringtone Klee Genshin Impact
-If you are a fan of Genshin Impact, you might want to have your favorite character's voice or sound as your ringtone. One of the most popular characters in the game is Klee, a cute and explosive Pyro user who loves to make things go boom. In this article, we will show you how to download ringtone Klee Genshin Impact for free and set it as your phone's ringtone.
-download ringtone klee genshin impact
Download Zip — https://urlca.com/2uOfyE
- What is Klee Genshin Impact?
-A brief introduction to the character and the game
-Genshin Impact is a free-to-play open-world action RPG game that was released in September 2020. The game features a vast fantasy world called Teyvat, where players can explore, fight, collect, and interact with various characters. The game also has a gacha system, where players can obtain new characters and weapons by spending in-game currency or real money.
-Klee is one of the playable characters in Genshin Impact. She is a five-star Pyro Catalyst user who belongs to the Knights of Favonius, an organization that protects the city of Mondstadt. She is also the daughter of Alice, a famous adventurer who travels around the world. Klee is a cheerful and curious girl who loves to create bombs and explosives. She often gets into trouble for causing explosions and damaging property.
- Why Klee is a popular choice for ringtones
-Klee is a popular choice for ringtones because she has a unique and adorable voice that can brighten up your day. Her voice actor, Kuno Misaki, did a great job of capturing her personality and emotions. Some of her most memorable lines include "Klee's specialty!", "Dodonko!", "Spark Knight, here to illuminate you!", and "Let's go on an adventure!". She also makes cute sounds when she attacks, jumps, runs, or gets hit.
-Another reason why Klee is a popular choice for ringtones is that she has some catchy and upbeat soundtracks that accompany her in the game. For example, her character theme song, "Fleeing Sunlight", is a lively and cheerful tune that reflects her adventurous spirit. Her story quest theme song, "Trifolium", is a soothing and relaxing melody that showcases her innocence and wonder. Her combat theme song, "Spark Knight", is a fast-paced and energetic track that matches her explosive style.
- How to find and download Klee ringtones
-The best websites and apps for free ringtones
-There are many websites and apps that offer free ringtones for various devices and platforms. However, not all of them are safe, reliable, or easy to use. Some of them may contain viruses, malware, or annoying ads. Some of them may have low-quality or outdated ringtones. Some of them may require registration or payment before downloading.
-To help you find and download ringtone Klee Genshin Impact without any hassle, we have selected some of the best websites and apps for free ringtones. These are:
-
-- ZEDGE: This is one of the most popular and trusted websites and apps for free ringtones, wallpapers, notifications, and more. You can browse through thousands of high-quality ringtones by categories, genres, artists , or keywords. You can also create your own ringtones by uploading or recording audio files. You can download the ringtones directly to your phone or computer, or scan a QR code to access them. ZEDGE is compatible with Android, iOS, and Windows devices.
-- Ringtone Maker: This is a simple and easy-to-use app that lets you create your own ringtones from any audio file on your phone. You can cut, trim, merge, mix, or edit the audio files as you like. You can also apply effects, filters, or fade in/out options. You can save the ringtones to your phone or share them with others. Ringtone Maker is available for Android devices.
-- Audiko: This is another website and app that offers free ringtones, wallpapers, and notifications. You can search for ringtones by categories, genres, artists, or keywords. You can also upload or record your own audio files and customize them with various tools. You can download the ringtones to your phone or computer, or sync them with iTunes. Audiko is compatible with Android and iOS devices.
-
- How to customize and set Klee ringtones on your phone
-Once you have found and downloaded ringtone Klee Genshin Impact from one of the websites or apps mentioned above, you can customize and set it as your phone's ringtone. The steps may vary depending on your device and platform, but here are some general guidelines:
-
-- Open the Settings app on your phone and tap on Sound or Sound & Notification.
-- Tap on Phone Ringtone or Default Ringtone and select the Klee ringtone from the list of available ringtones.
-- If you want to set a different Klee ringtone for a specific contact, open the Contacts app on your phone and tap on the contact you want to customize.
-- Tap on Edit or More Options and select Ringtone. Choose the Klee ringtone from the list of available ringtones.
-- Tap on Save or Done to confirm your changes.
-
- Conclusion
-A summary of the main points and a call to action
-In this article, we have shown you how to download ringtone Klee Genshin Impact for free and set it as your phone's ringtone. We have also explained what Klee Genshin Impact is and why Klee is a popular choice for ringtones. We have also recommended some of the best websites and apps for free ringtones that you can use to find and download Klee ringtones.
-If you are a fan of Genshin Impact and Klee, you should definitely try out some of the Klee ringtones we have suggested. They will make your phone more fun and personalized. You can also share them with your friends who love Genshin Impact and Klee as well. You can also explore other Genshin Impact ringtones that feature other characters and soundtracks from the game.
-We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-What are some other Genshin Impact ringtones?
-Some other Genshin Impact ringtones that you might like are:
-
-- Diluc: "I am Diluc, owner of the Dawn Winery."
-- Venti: "Wouldn't gliding be faster?"
-- Paimon: "Hey! Listen!"
-- Lisa: "You're not bad with that wind glider."
-- Zhongli: "This is order!"
-
- How to make your own Genshin Impact ringtones?
-If you want to make your own Genshin Impact ringtones, you can use one of the apps we mentioned above, such as Ringtone Maker or Audiko. You can also use an online audio converter or editor, such as Online Audio Converter or Audio Trimmer. You will need to have the audio files of the Genshin Impact characters or soundtracks that you want to use as ringtones. You can either record them from the game using a screen recorder app, such as AZ Screen Recorder, or download them from online sources, such as YouTube or SoundCloud. Then, you can upload or import the audio files to the app or website of your choice and edit them as you wish. You can cut, trim, merge, mix, or add effects to the audio files. You can also adjust the volume, pitch, speed, or quality of the audio files. Once you are satisfied with your custom Genshin Impact ringtones, you can save them to your phone or computer, or share them with others.
- How to change the ringtone volume and vibration settings?
-If you want to change the ringtone volume and vibration settings on your phone, you can follow these steps:
-
-- Open the Settings app on your phone and tap on Sound or Sound & Notification.
-- Tap on Volume or Sound Mode and adjust the slider for Ringtone or Ring Volume as you like. You can also enable or disable Do Not Disturb mode or Silent mode if you want to mute your phone.
-- Tap on Vibrate or Vibration Pattern and choose the option that suits your preference. You can also enable or disable Vibrate on Ring or Vibrate on Silent if you want your phone to vibrate when it rings or when it is muted.
-
- How to assign different ringtones to different contacts?
-If you want to assign different ringtones to different contacts on your phone, you can follow these steps:
-
-- Open the Contacts app on your phone and tap on the contact you want to customize.
-- Tap on Edit or More Options and select Ringtone. Choose the ringtone from the list of available ringtones.
-- If you want to assign a custom ringtone that is not in the list, tap on Add or Browse and locate the ringtone file on your phone or computer. Select the ringtone file and confirm your choice.
-- Tap on Save or Done to confirm your changes.
-
- How to delete unwanted ringtones from your phone?
-If you want to delete unwanted ringtones from your phone, you can follow these steps:
-
-- Open the File Manager app on your phone and navigate to the folder where your ringtones are stored. Usually, this folder is called Ringtones, Media, Audio, or Sounds.
-- Select the ringtone files that you want to delete and tap on Delete or Trash. Confirm your action by tapping on OK or Yes.
-- If you want to delete ringtones that are pre-installed on your phone, you may need to root your phone or use a third-party app that can access system files. However, this may void your warranty or cause damage to your phone, so proceed with caution.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Mdundo and Listen to the Latest Hits from Tanzania Kenya Nigeria and More.md b/spaces/congsaPfin/Manga-OCR/logs/Download Mdundo and Listen to the Latest Hits from Tanzania Kenya Nigeria and More.md
deleted file mode 100644
index f92fa6996bceb5d4ae4fe2be408e23b30cd98b51..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Mdundo and Listen to the Latest Hits from Tanzania Kenya Nigeria and More.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-Download Mdundo: How to Enjoy Free Music on Your Phone
-Do you love listening to music on your phone? Do you want to discover new songs from different genres and artists? Do you want to save data and listen offline? If you answered yes to any of these questions, then you should download Mdundo, the best music app for African music.
-download mdundo
DOWNLOAD –––––>>> https://urlca.com/2uO8Ab
-What is Mdundo?
-A brief introduction to Mdundo and its features
-Mdundo is an online music platform that allows you to listen to and download your favourite songs from a variety of African artists. Mdundo was launched in 2013 and has since grown to become one of the most popular music apps in Tanzania, Uganda, Kenya, Nigeria, Zambia, and Ghana. Mdundo has over 5 million downloads on the Google Play Store and offers you the following features:
-
-- High quality MP3 downloads of your favourite African artists
-- Hundreds of playlists created by experts
-- Trending charts for Kenya, Uganda, and Tanzania
-- New releases from thousands of artists in East Africa
-- Save your favourite songs and artists to listen offline
-- Create your own custom playlist based on your mood
-- Gospel, Hip Hop, Genge, Afropop, Dancehall, DJ Mixes, Rhumba, Zilizopendwa, and many more genres
-
-Why download Mdundo?
-The benefits of using Mdundo for music lovers
-If you are a music lover, you will find many reasons to download Mdundo and enjoy its features. Here are some of the benefits of using Mdundo:
-Access to a large library of African music
-Mdundo gives you access to the largest library of music in Africa, with over 40 million songs from over 50,000 artists. You can find songs from various countries, languages, cultures, and styles. You can also discover new artists and songs that you may not find elsewhere. Whether you are looking for bongo, gospel, afropop, dancehall, hip hop, or any other genre, you will find it on Mdundo.
-High quality MP3 downloads and streaming
-Mdundo allows you to download or stream your favourite songs in high quality MP3 format. You can choose the quality that suits your preference and device. You can also download songs for free or pay a small fee to support the artists. You can also stream songs online without any interruptions or ads.
-Customized playlists and charts
-Mdundo has hundreds of playlists created by experts that cater to different moods, occasions, genres, and themes. You can browse through the playlists and find the one that matches your taste. You can also create your own playlist by adding songs that you like. Mdundo also has trending charts for Kenya, Uganda, and Tanzania that show you the most popular songs in each country. You can follow the charts and stay updated with the latest hits.
-Offline listening and data saving
-Mdundo allows you to save your favourite songs and artists to listen offline whenever you want. You can also save data by downloading songs when you have a Wi-Fi connection and listening offline later. This way, you can enjoy music without worrying about data costs or network availability. Mdundo also has a data saving mode that reduces the size of the downloads and streams.
-How to download Mdundo?
-The steps to install Mdundo on your Android device
-Downloading Mdundo is easy and fast. You can install Mdundo on your Android device by following these simple steps:
-Visit the Google Play Store or the official website
-You can download Mdundo from the Google Play Store by searching for "Mdundo" or by clicking on this link: Mdundo - Free Music. You can also visit the official website of Mdundo at www.mdundo.com and click on the download button.
-Tap on the install button and accept the permissions
-Once you have found the Mdundo app, tap on the install button and wait for the app to download. You may need to accept some permissions to allow the app to access your device's storage, contacts, and phone. These permissions are necessary for the app to function properly and to provide you with the best service.
-Open the app and sign up or log in
-After the app is installed, open it and sign up or log in with your email, Facebook, or Google account. Signing up or logging in will allow you to access all the features of Mdundo and to personalize your experience. You can also skip this step and use the app as a guest, but you will miss out on some benefits such as saving songs offline and creating playlists.
-Browse, search, and play your favourite songs
-Now you are ready to enjoy free music on your phone. You can browse through the different categories of music, such as genres, playlists, charts, new releases, and artists. You can also use the search bar to find any song or artist you want. To play a song, just tap on it and it will start playing. You can also download it by tapping on the download icon next to it. You can also add it to your favourites or to a playlist by tapping on the heart or plus icons respectively.
-Conclusion
-A summary of the main points and a call to action
-Mdundo is an amazing music app that allows you to listen to and download free music from a variety of African artists. Mdundo has many features that make it stand out from other music apps, such as a large library of music, high quality MP3 downloads and streaming, customized playlists and charts, offline listening and data saving, and many more. Downloading Mdundo is easy and fast. You just need to visit the Google Play Store or the official website, install the app, sign up or log in, and start enjoying free music on your phone. If you are a music lover, you should not miss this opportunity to discover new songs and artists from Africa. Download Mdundo today and enjoy free music on your phone.
- FAQs
- Q: Is Mdundo free?
- A: Yes, Mdundo is free to download and use. You can listen to and download unlimited songs without paying anything. However, you can also choose to support the artists by paying a small fee for some songs.
- Q: Is Mdundo legal?
- A: Yes, Mdundo is legal and licensed by the artists and labels. Mdundo respects the rights of the artists and pays them royalties for every download.
- Q: Is Mdundo safe?
- A: Yes, Mdundo is safe and secure. The app does not contain any viruses or malware that could harm your device. The app also does not collect any personal information that could compromise your privacy.
- Q: Is Mdundo available for iOS devices?
- A: No, Mdundo is currently only available for Android devices. However, you can still access Mdundo's website from any browser on any device.
- Q: How can I contact Mdundo?
- A: You can contact Mdundo by sending an email to info@mdundo.com or by visiting their social media pages on Facebook, Twitter, Instagram, or YouTube.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Euro Truck Simulator 2 Bus Mod APK A Must-Have for All Bus Lovers.md b/spaces/congsaPfin/Manga-OCR/logs/Euro Truck Simulator 2 Bus Mod APK A Must-Have for All Bus Lovers.md
deleted file mode 100644
index 66a9ee37c331ba17970ea844acc8b77699eb3b5b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Euro Truck Simulator 2 Bus Mod APK A Must-Have for All Bus Lovers.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-Euro Truck Simulator 2 with Bus Mod APK Download: How to Enjoy Driving Buses Across Europe
-Do you love driving trucks and exploring different countries in Europe? Do you want to try something new and drive buses instead of trucks? If yes, then you should download and install the bus mod apk for Euro Truck Simulator 2, one of the most popular and realistic truck driving simulator games on the market. In this article, we will tell you what Euro Truck Simulator 2 is, what a bus mod apk is, how to download and install it, and how to use it to have fun driving buses across Europe.
- What is Euro Truck Simulator 2 and why is it popular?
-Euro Truck Simulator 2 is a game that lets you become a truck driver and deliver cargo across various European countries. You can choose from different licensed trucks by MAN, Scania, Iveco, Renault, DAF, and others, and customize them with various options. You can also run your own business, hire drivers, buy garages, and manage your company for maximum profits. The game features realistic road networks, landmarks, cities, weather, traffic, physics, and sounds that make you feel like you are driving a real truck.
-euro truck simulator 2 with bus mod apk download
Download Zip ✶✶✶ https://urlca.com/2uO7P8
- Features of Euro Truck Simulator 2
-
-- Transport a vast variety of cargo across more than 60 European cities.
-- Build your own fleet of trucks, buy garages, hire drivers, manage your company for maximum profits.
-- A varied amount of truck tuning that range from performance to cosmetic changes.
-- Travel through realistic road networks connecting up to 60 different European cities.
-- Use countless customization options to customize your truck.
-- Learn fast and safe driving, obey traffic road rules and meet delivery deadlines.
-
- Benefits of playing Euro Truck Simulator 2
-
-- Improve your driving skills and knowledge of European roads and traffic rules.
-- Enjoy the scenic views and landmarks of different European countries.
-- Relax and unwind from stress and boredom by driving a truck.
-- Challenge yourself with different cargo types, routes, weather conditions, and deadlines.
-- Have fun with your friends by playing online multiplayer mode or joining a virtual trucking company.
-
- What is a bus mod apk and how does it enhance the game?
-A bus mod apk is a modified version of the game that allows you to drive buses instead of trucks. It adds new buses to the game that you can buy, customize, and drive. It also changes the gameplay by adding passengers, tickets, destinations, schedules, and other features that make you feel like you are running a bus service. A bus mod apk enhances the game by giving you more variety, challenge, realism, and fun.
- Types of bus mod apk for Euro Truck Simulator 2
-
-- Bus Mods: These are mods that add new buses to the game that you can buy from dealers or online. Some examples are Man Lion's Coach, Mercedes-Benz New Travego, Setra s517 HDH-2021 Topclass, Irizar i8 Integral, etc.
-- Bus Skins: These are mods that change the appearance of the buses by adding different colors, logos, designs, etc. Some examples are TN SETC MARCOPOLO VIAGGIO 800, Al Salem, AutooCar, etc.
-- Bus Maps: These are mods that add new routes, cities, countries, and landmarks to the game that you can drive your buses to. Some examples are Bus Terminal Mod, Project Balkans, ProMods, etc.
-- Bus Sounds: These are mods that change the sounds of the buses by adding realistic engine, horn, brake, air, etc. sounds. Some examples are Bus Horn Pack, Bus Traffic Pack Sound Addon, Realistic Bus Sound Mod, etc.
-
- Features of bus mod apk for Euro Truck Simulator 2
-
-- Drive realistic and detailed buses from different brands and models.
-- Customize your buses with various options such as paint, accessories, lights, etc.
-- Pick up and drop off passengers at bus stations, terminals, or stops.
-- Sell tickets, check IDs, and manage your income and expenses.
-- Follow realistic schedules, timetables, and routes.
-- Experience different weather conditions, traffic situations, and road events.
-
- How to download and install bus mod apk for Euro Truck Simulator 2?
-If you want to download and install bus mod apk for Euro Truck Simulator 2, you need to follow these steps:
- Steps to download and install bus mod apk for Euro Truck Simulator 2
-
-- Find a reliable and safe website that offers bus mod apk for Euro Truck Simulator 2. Some examples are ETS2 Mods, ModsBase, ModLand, etc.
-- Select the bus mod apk that you want to download and click on the download button. You may need to complete some verification steps or surveys before the download starts.
-- Once the download is complete, locate the file on your device and extract it using a file manager or a zip extractor app.
-- Copy or move the extracted file to the mod folder of your Euro Truck Simulator 2 game directory. The default location is C:\Users\YourName\Documents\Euro Truck Simulator 2\mod.
-- Launch the game and go to the mod manager. Enable the bus mod apk that you have installed and confirm the changes.
-- Enjoy driving buses across Europe with your bus mod apk for Euro Truck Simulator 2.
-
- Tips and tricks to use bus mod apk for Euro Truck Simulator 2
-
-- Make sure that your device meets the minimum system requirements for running Euro Truck Simulator 2 and the bus mod apk. You may need to adjust the graphics settings or disable some mods if you experience lag or crashes.
-- Read the description and instructions of the bus mod apk before downloading and installing it. Some mods may require additional mods or DLCs to work properly.
-- Check the compatibility and updates of the bus mod apk with your game version. Some mods may not work with newer or older versions of Euro Truck Simulator 2.
-- Backup your game files and save data before installing any mod. This way, you can restore them if something goes wrong or you want to uninstall the mod.
-- Be careful when driving buses as they are larger, heavier, and slower than trucks. You need to pay more attention to the road signs, traffic rules, speed limits, turns, parking, etc.
-
- Conclusion
-Euro Truck Simulator 2 with bus mod apk is a great way to enjoy driving buses across Europe. You can choose from different types of bus mod apk that add new buses, skins, maps, sounds, and features to the game. You can also download and install them easily by following some simple steps. However, you need to be careful about the source, compatibility, and performance of the bus mod apk. You also need to improve your driving skills and knowledge of European roads and traffic rules. If you follow these tips and tricks, you will have a lot of fun with your bus mod apk for Euro Truck Simulator 2.
- FAQs
-
-- Q: How do I get more money in Euro Truck Simulator 2 with bus mod apk?
-- A: You can get more money by completing more deliveries, selling tickets, managing your company, or using cheats or trainers.
- - Q: How do I change the camera view in Euro Truck Simulator 2 with bus mod apk?
-- A: You can change the camera view by pressing the number keys from 1 to 9 on your keyboard or using the mouse wheel.
- - Q: How do I use the GPS in Euro Truck Simulator 2 with bus mod apk?
-- A: You can use the GPS by pressing the F3 key on your keyboard or clicking on the GPS icon on the bottom right corner of the screen. You can also set your destination, zoom in or out, or switch between 2D and 3D modes.
- - Q: How do I turn on the lights, wipers, indicators, etc. in Euro Truck Simulator 2 with bus mod apk?
-- A: You can turn on the lights, wipers, indicators, etc. by pressing the corresponding keys on your keyboard or using the mouse buttons. You can also customize the controls in the options menu.
- - Q: How do I update Euro Truck Simulator 2 with bus mod apk?
-- A: You can update Euro Truck Simulator 2 with bus mod apk by downloading and installing the latest version of the game and the mod from their official websites or other sources. You may need to delete or replace some files or folders in your game directory.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Explore a Vast Magical World of Genshin Impact with TapTap APK.md b/spaces/congsaPfin/Manga-OCR/logs/Explore a Vast Magical World of Genshin Impact with TapTap APK.md
deleted file mode 100644
index e93ae74a4865371e10d6f5653003d170d16226a3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Explore a Vast Magical World of Genshin Impact with TapTap APK.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-Genshin Impact APK TapTap: A Better Way to Play the Game on Your Android Device
-Genshin Impact is one of the most popular open-world action RPG games in the world. It has millions of players who explore the vast and beautiful world of Teyvat, where they can meet various characters, fight enemies, collect items, and complete quests. The game is available for free on multiple platforms, including PC, PlayStation, iOS, and Android.
-However, if you are an Android user, you might have noticed that the game does not run very smoothly on your device. You might experience lag, crashes, or low graphics quality. This is because the Google Play Store version of Genshin Impact is not optimized for all Android devices, especially low-end ones.
-genshin impact apk taptap
Download Zip ✒ https://urlca.com/2uO63l
-Fortunately, there is a solution for this problem. You can download and install Genshin Impact APK TapTap, which is a version of the game that is distributed through a third-party app store called TapTap. TapTap is a platform that offers games that are not available or restricted on Google Play Store, as well as exclusive content and updates for some games.
-In this article, we will show you how to download and install Genshin Impact APK TapTap on your Android device, as well as the benefits and drawbacks of using it. Read on to find out more.
- How to Download and Install Genshin Impact APK TapTap on Your Android Device
-Downloading and installing Genshin Impact APK TapTap on your Android device is not very difficult, but it does require some extra steps. Here is what you need to do:
-
-- Download and install the TapTap app from its official website here. You will need to enable the installation of apps from unknown sources in your device settings.
-- Open the TapTap app and search for Genshin Impact. You will see two versions of the game: one with a blue icon and one with a yellow icon. The blue icon is the global version, while the yellow icon is the Chinese version. Choose the one that suits your preference. Note that the TapTap version may not be fully compatible with your device or may cause some glitches or errors. Therefore, you should always be careful and cautious when downloading and installing the game from TapTap.
-- Possible violation of the game's terms of service and privacy policy: The TapTap version of Genshin Impact might not comply with the game's terms of service and privacy policy, which are set by the developer miHoYo. This means that you might be violating some rules or regulations by using the game from TapTap, which could result in some consequences, such as account suspension, ban, or legal action. Therefore, you should always read and understand the game's terms of service and privacy policy before using the game from TapTap.
-- Need to update the game manually through TapTap app: The TapTap version of Genshin Impact does not update automatically through your device's system, unlike the Google Play Store version. This means that you need to update the game manually through the TapTap app whenever there is a new version available. This could be inconvenient and time-consuming, especially if you have a slow internet connection or limited data plan. Therefore, you should always check for updates regularly and download them as soon as possible.
-
-These are some of the drawbacks of using Genshin Impact APK TapTap on your Android device. You should weigh the pros and cons carefully before deciding whether to use it or not.
- Conclusion: Is Genshin Impact APK TapTap Worth It?
-Genshin Impact APK TapTap is a version of the game that is distributed through a third-party app store called TapTap. It offers some benefits, such as faster and smoother performance, access to exclusive content and updates, and more control over your game settings and data usage. However, it also has some drawbacks, such as potential security risks and compatibility issues, possible violation of the game's terms of service and privacy policy, and need to update the game manually through TapTap app.
-Therefore, whether Genshin Impact APK TapTap is worth it or not depends on your personal preference and situation. If you are looking for a better way to play the game on your Android device, and you are willing to take some risks and inconveniences, then you might want to give it a try. However, if you are satisfied with the Google Play Store version, or you are concerned about the safety and legality of using the game from TapTap, then you might want to stick with it.
-The choice is yours. We hope this article has helped you understand more about Genshin Impact APK TapTap and how to use it on your Android device. Happy gaming!
- FAQs About Genshin Impact APK TapTap
-Here are some frequently asked questions about Genshin Impact APK TapTap that you might find useful:
-
-- Is Genshin Impact APK TapTap safe to use?
-There is no definitive answer to this question, as there is always a risk involved when using apps from unofficial sources. However, TapTap is a reputable platform that has been around for a long time and has millions of users worldwide. It claims to have strict security measures and quality standards for its apps, and it does not require any permissions or access to your device's data. Therefore, Genshin Impact APK TapTap might be relatively safe to use, but you should still be careful and cautious when downloading and installing it.
-- Can I play Genshin Impact APK TapTap with my friends who use Google Play Store version?
-Yes, you can play Genshin Impact APK TapTap with your friends who use Google Play Store version, as long as you are playing on the same server region. The game supports cross-play across different platforms and devices, so you can join your friends in co-op mode or chat with them in-game. However, you might experience some lag or delay if you have different versions of the game or different internet connections.
-- How can I switch from Google Play Store version to TapTap version or vice versa?
-If you want to switch from Google Play Store version to TapTap version or vice versa, you need to uninstall the current version of the game from your device first. Then, you need to download and install the new version of the game from either Google Play Store or TapTap app. You also need to log in with your miHoYo account or create a new one if you don't have one. However, be aware that switching versions might cause some data loss or corruption, so make sure you back up your progress before doing so.
-- How can I contact the customer service of Genshin Impact or TapTap if I have any problems?
-If you have any problems with Genshin Impact or TapTap, you can contact the customer service of Genshin Impact or TapTap through the following ways:
-
-- For Genshin Impact, you can visit the official website here and click on the "Feedback" button on the top right corner. You can also access the feedback option in the game's settings menu. You can submit your questions, suggestions, or complaints, and the customer service will reply to you as soon as possible.
-- For TapTap, you can visit the official website here and click on the "Help Center" button on the bottom right corner. You can also access the help center option in the TapTap app's settings menu. You can browse through the FAQs, tutorials, or forums, or contact the customer service directly through email or phone.
-
-We hope that you can solve your problems with Genshin Impact or TapTap easily and quickly.
- - What are some tips and tricks to enjoy Genshin Impact more on my Android device?
-Genshin Impact is a fun and immersive game that you can enjoy on your Android device. However, there are some tips and tricks that can help you enhance your gaming experience and make it more enjoyable. Here are some of them:
-
-- Use a controller: If you find it hard to control the game with your touchscreen, you can use a controller instead. Genshin Impact supports various controllers that are compatible with Android devices, such as Xbox, PlayStation, or Bluetooth controllers. You can connect your controller to your device and customize your control settings in the game's options menu.
-- Optimize your battery: Genshin Impact is a battery-draining game that can consume a lot of power from your device. To optimize your battery life and prevent overheating, you can lower your brightness, turn off background apps, enable battery saver mode, or use a power bank.
-- Join a community: Genshin Impact has a large and active community of players who share their tips, guides, fan art, memes, and more. You can join a community of Genshin Impact fans on various platforms, such as Reddit, Discord, Facebook, Twitter, or YouTube. You can learn from other players, make new friends, or have fun with them.
-
-These are some of the tips and tricks to enjoy Genshin Impact more on your Android device. We hope that you find them useful and helpful.
-
-This is the end of the article. Thank you for reading it. We hope that you have learned something new and interesting about Genshin Impact APK TapTap and how to use it on your Android device. If you have any questions or feedback, please feel free to leave a comment below. Have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pride Edition Sport Band and Watch Face A Review.md b/spaces/congsaPfin/Manga-OCR/logs/Pride Edition Sport Band and Watch Face A Review.md
deleted file mode 100644
index f30237b63b4aaf3d5c97f0061c0891cd0ede1c29..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Pride Edition Sport Band and Watch Face A Review.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-How to Download and Customize the Pride Watch Face for Your Apple Watch
- If you are looking for a way to celebrate and support the LGBTQ+ community with your Apple Watch, you might want to try the Pride watch face. This watch face is a colorful and dynamic display that showcases the diversity and strength of the LGBTQ+ movement. In this article, we will explain what the Pride watch face is, what it means, how to download it, and how to customize it for your Apple Watch.
- What is the Pride Watch Face and What Does It Mean?
- The Pride watch face is a feature that shows the colors of the LGBTQ+ movement flag on the Apple Watch. It is available on Apple Watch Series 4 or later, is included in watchOS 7.5 and later, and can be installed from the Watch app on an iPhone running iOS 14.6 or later.
-pride watch face download
Download Zip ✒ ✒ ✒ https://urlca.com/2uOat4
- The History and Symbolism of the Pride Watch Face
- The Pride watch face was first introduced by Apple in 2017, as a way to stand with, support, and celebrate the LGBTQ+ community. Since then, Apple has released new versions of the Pride watch face every year, with different designs and colors inspired by various Pride flags. The Pride flags represent the diverse identities and experiences of LGBTQ+ people, such as transgender, bisexual, nonbinary, intersex, asexual, pansexual, genderqueer, and more.
- The colors of the Pride flags also have different meanings, such as life, healing, sunlight, nature, harmony, spirit, peace, love, magic, art, serenity, pride, community, solidarity, respect, dignity, courage, hope, joy, freedom, diversity, inclusion, equity, justice, and liberation. The Pride watch face aims to reflect these values and celebrate the individuality and uniqueness of all LGBTQ+ people.
- The Design and Features of the Pride Watch Face
- The latest version of the Pride watch face for 2023 is called "Pride Celebration". It features a white background with colorful shapes that appear to circulate from the band into the display. The shapes are based on geometric patterns that are compression-molded into the new Pride Edition Sport Band. The shapes respond to the movement of the wrist or the touch of the display. The watch face also has a digital clock that shows the time in white.
- The previous versions of the Pride watch face include "Pride Threads", which shows colorful threads that move as the Digital Crown is rotated or tapped; "Pride Analog", which shows a rainbow-colored analog clock with hands that move in sync with each other; "Pride Digital", which shows a rainbow-colored digital clock with numbers that change colors as time passes; "Pride Spectrum", which shows a spectrum of colors that shift as time passes; and "Pride Woven", which shows a woven pattern of colors that change as time passes.
- How to Download the Pride Watch Face for Your Apple Watch
- The Requirements and Compatibility of the Pride Watch Face
- To download and use the Pride watch face for your Apple Watch, you need to have an Apple Watch Series 4 or later that is running watchOS 7.5 or later. You also need to have an iPhone that is running iOS 14.6 or later. You can check your software versions by going to Settings > General > About on your devices.
- If you want to match your watch face with a special edition band, you can choose from two options: the Pride Edition Sport Band or the Pride Edition Nike Sport Loop. The Sport Band is compatible with Apple Watch Series 4 or later, and the Sport Loop is compatible with Apple Watch Series 3 or later. Both bands are available in 40mm and 44mm sizes. You can purchase them from the Apple Store or online.
- The Steps to Download the Pride Watch Face from Apple or Other Sources
- There are two ways to download the Pride watch face for your Apple Watch: from the Watch app on your iPhone, or from other sources such as websites or social media. Here are the steps for each method:
- From the Watch app on your iPhone:
-How to get the new Apple Watch Pride Edition watch face
-Apple Watch Pride Threads watch face and iOS wallpaper
-Nike Pride watch face for Apple Watch Series 4 and later
-Best places to buy Apple Watch Pride Edition Sport Band
-How to customize the Pride Celebration watch face on Apple Watch
-Apple Watch Pride Edition celebrates the LGBTQ+ community
-How to support LGBTQ+ advocacy organizations with Apple Watch
-What do the colors on the Apple Watch Pride Edition Sport Band mean?
-How to use App Clips to download the Pride Celebration watch face
-How to match your iPhone wallpaper with your Apple Watch Pride face
-How to change the background color of the Pride Threads watch face
-How to send vibrations across the threads on the Pride Threads watch face
-How to get exclusive pride bands for Nike edition of the Apple Watch
-How to pair your Apple Watch with other pride accessories
-How to share your Apple Watch Pride face with your friends
-How to create your own pride watch face using photos or emoji
-How to add rainbow complications to your Apple Watch Pride face
-How to make your Apple Watch Pride face respond to your wrist movement
-How to find more pride-themed watch faces on the App Store
-How to switch between different pride watch faces on your Apple Watch
-How to get notified when new pride watch faces are available from Apple
-How to remove the pride watch face from your Apple Watch
-How to restore the pride watch face if you accidentally deleted it
-How to sync your pride watch face across multiple Apple Watches
-How to adjust the brightness and contrast of the pride watch face
-How to set a timer or alarm with the pride watch face
-How to use Siri with the pride watch face on your Apple Watch
-How to track your activity and health with the pride watch face
-How to use the Digital Crown to control the pride watch face
-How to enable or disable sound effects on the pride watch face
-How to use Force Touch or Haptic Touch to access more options on the pride watch face
-How to use the Always On feature with the pride watch face on your Apple Watch Series 5 or later
-How to save battery life with the pride watch face on your Apple Watch
-How to update your Apple Watch software to get the latest pride watch face features
-How to troubleshoot common issues with the pride watch face on your Apple Watch
-How to contact Apple support if you have any problems with the pride watch face
-How to give feedback or suggestions for improving the pride watch face
-How to learn more about the history and meaning of the pride flag and its colors
-How to join or organize a pride event in your area using your Apple Watch
-How to celebrate Pride Month with your Apple Watch and other devices
-How to show off your pride watch face on social media platforms like Instagram, Twitter, or TikTok
-How to find other LGBTQ+ friendly apps and games for your Apple Watch
-How to discover and listen to LGBTQ+ podcasts and music on your Apple Watch
-How to read or watch LGBTQ+ stories and documentaries on your Apple Watch
-How to access LGBTQ+ resources and information on your Apple Watch
-
-- Open the Watch app on your iPhone and tap on the Face Gallery tab at the bottom.
-- Scroll down to find the Pride watch face section and tap on it.
-- Select the version of the Pride watch face that you want to download, such as Pride Celebration, Pride Threads, Pride Analog, Pride Digital, Pride Spectrum, or Pride Woven.
-- Tap on Add and wait for the watch face to sync with your Apple Watch.
-- On your Apple Watch, press the Digital Crown to go to the Home screen and swipe left or right to find the Pride watch face.
-- Tap on the watch face to activate it.
-
- From other sources:
-
-- Find a source that offers the Pride watch face for download, such as a website or a social media post. For example, you can visit this website that has a collection of Pride watch faces that you can download for free.
-- On your iPhone, tap on the link or the image of the Pride watch face that you want to download. This will open a preview of the watch face on your iPhone.
-- Tap on Add to My Faces and wait for the watch face to sync with your Apple Watch.
-- On your Apple Watch, press the Digital Crown to go to the Home screen and swipe left or right to find the Pride watch face.
-- Tap on the watch face to activate it.
-
- How to Customize the Pride Watch Face for Your Apple Watch
- The Options and Settings of the Pride Watch Face
- The Pride watch face has some options and settings that you can customize according to your preferences. Here are some of them:
-
-- You can change the style of the digital clock by tapping on it and choosing from different fonts and sizes.
-- You can add up to three complications to the watch face by tapping on an empty space and choosing from various options such as date, battery, activity, heart rate, weather, and more.
-- You can change the color scheme of the watch face by pressing firmly on the display and swiping left or right to choose from different variations of the rainbow colors.
-
- The Tips and Tricks to Make the Most of the Pride Watch Face
- The Pride watch face is not only a beautiful and meaningful display, but also a fun and interactive one. Here are some tips and tricks to make the most of it:
-
-- You can make the shapes move faster or slower by rotating or tapping on the Digital Crown.
-- You can make the shapes change colors by tapping on them or shaking your wrist.
-- You can make the shapes disappear by covering your Apple Watch with your palm.
-
- Conclusion
- The Pride watch face is a great way to show your support and appreciation for the LGBTQ+ community with your Apple Watch. It is a colorful and dynamic display that represents the diversity and strength of LGBTQ+ people. It is also easy to download and customize according to your preferences. We hope this article has helped you learn more about the Pride watch face and how to use it. Happy Pride!
- FAQs
- Here are some frequently asked questions about the Pride watch face:
-
-- How do I delete a Pride watch face from my Apple Watch?
-To delete a Pride watch face from your Apple Watch, press firmly on the display and swipe up on the Pride watch face that you want to delete. Tap on the delete icon (a red circle with a white minus sign) and confirm your choice.
- - How do I share a Pride watch face with someone else?
-To share a Pride watch face with someone else, press firmly on the display and tap on the share icon (a blue circle with a white arrow) at the bottom. Choose the contact that you want to share the watch face with and tap on Send. The recipient will receive a notification on their iPhone or Apple Watch and can choose to add the watch face to their collection.
- - How do I get more Pride watch faces for my Apple Watch?
-To get more Pride watch faces for your Apple Watch, you can either wait for Apple to release new versions of the Pride watch face every year, or you can look for other sources that offer Pride watch faces for download, such as websites or social media. You can also create your own Pride watch face by using the Photos app on your iPhone or Apple Watch and choosing an image that has the colors of the LGBTQ+ movement flag.
- - How do I change the band of my Apple Watch to match the Pride watch face?
-To change the band of your Apple Watch to match the Pride watch face, you need to have a compatible band that has the colors of the LGBTQ+ movement flag. You can either buy a special edition band from Apple, such as the Pride Edition Sport Band or the Pride Edition Nike Sport Loop, or you can find other bands from third-party sellers that have similar designs. To change the band of your Apple Watch, follow these steps:
-
-- Turn off your Apple Watch and place it face down on a clean surface.
-- Press and hold the band release button on the back of your Apple Watch and slide the band across to remove it.
-- Take your new band and slide it into place until you hear a click.
-- Turn on your Apple Watch and enjoy your new look.
-
- - How do I support the LGBTQ+ community with my Apple Watch?
-Besides using the Pride watch face and band for your Apple Watch, there are other ways to support the LGBTQ+ community with your device. Here are some of them:
-
-- You can donate to LGBTQ+ organizations and causes through apps such as GiveTide, Charity Miles, or Donate a Photo.
-- You can learn more about LGBTQ+ history and culture through apps such as Quist, LGBTQ+ History, or LGBT Life.
-- You can connect with other LGBTQ+ people and allies through apps such as HER, Grindr, Taimi, or Meetup.
-- You can express yourself and show your pride through apps such as GIPHY, Bitmoji, or Emoji Me.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pro Series Drag Racing Mod APK The Best Racing Game with Unlimited Money and Gold.md b/spaces/congsaPfin/Manga-OCR/logs/Pro Series Drag Racing Mod APK The Best Racing Game with Unlimited Money and Gold.md
deleted file mode 100644
index a499de732f10d396f31fe7851b22f3b2c143df24..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Pro Series Drag Racing Mod APK The Best Racing Game with Unlimited Money and Gold.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-Pro Series Drag Racing Mod APK Unlimited Money: A Complete Guide
-If you are a fan of racing games, you might have heard of Pro Series Drag Racing, a fun and exciting simulation game developed by Battle Creek Games. In this game, you can play as a professional racer and experience many supercars with incredible speed. You can also customize your cars and tune them to your liking, as well as compete with other players online in four different racing modes. However, if you want to enjoy the game to the fullest, you might need a lot of money and gold to buy new cars and upgrade them. That's why we are here to introduce you to Pro Series Drag Racing Mod APK Unlimited Money, a modified version of the game that gives you unlimited resources and more. In this article, we will tell you everything you need to know about this mod, including its features, benefits, and how to download and install it on your Android device. Let's get started!
- What is Pro Series Drag Racing?
-Pro Series Drag Racing is a realistic and immersive racing game that lets you experience the thrill of drag racing on your mobile device. You can choose from over 40 different cars, ranging from classic muscle cars to exotic supercars, and customize them with hundreds of parts and options. You can also tune your car's performance, such as engine, turbo, nitrous, transmission, tires, and more, to suit your driving style and preferences. You can then test your skills in four different racing modes: Heads Up Racing, Grudge Racing, Bracket Racing, and Test & Tune. You can also challenge other players from around the world in online multiplayer mode, where you can race against real opponents and climb the leaderboards. Pro Series Drag Racing is a game that will keep you hooked for hours with its realistic graphics, physics, sound effects, and gameplay.
-pro series drag racing mod apk unlimited money
Download →→→ https://urlca.com/2uOfLX
- Features of Pro Series Drag Racing
-- Realistic graphics and physics
-Pro Series Drag Racing boasts stunning 3D graphics that will make you feel like you are on a real race track. You can see every detail of your car, from the paint job to the engine parts, as well as the environment around you, such as the trees, buildings, crowds, and weather effects. The game also uses realistic physics to simulate the behavior of your car on the road, such as acceleration, braking, steering, traction, drag, and more. You can feel the difference between different car models, parts, and settings, as well as the impact of factors such as temperature, humidity, altitude, and wind.
- - Customizable cars and tuning
-Pro Series Drag Racing gives you complete control over your car's appearance and performance. You can choose from over 40 different cars in four categories: Street Cars, Pro Mods, Funny Cars, and Top Fuel Dragsters. Each car has its own characteristics and specifications that affect its speed, power, handling, and durability. You can also customize your car's color, decals, wheels, tires, hood scoops, spoilers, exhausts, roll cages, parachutes, and more. You can also tune your car's performance by adjusting its engine parameters such as compression ratio, cam timing, spark advance, fuel injection, turbo boost, nitrous oxide, transmission ratio, tire pressure, and more. You can fine-tune your car to match your driving style and preferences, as well as the track conditions.
- - Four racing modes and online multiplayer
-Pro Series Drag Racing offers four different racing modes that will test your skills and abilities. You can play in Heads Up Racing mode, where you race against a single opponent in a straight line. You can also play in Grudge Racing mode, where you can challenge other players online and bet money on the outcome. You can also play in Bracket Racing mode, where you have to predict your finish time and try to match it as closely as possible. You can also play in Test & Tune mode, where you can practice your driving and tuning skills without any pressure. Pro Series Drag Racing also features online multiplayer mode, where you can race against real players from around the world in real-time. You can join or create a lobby, chat with other racers, and compete for glory and rewards. You can also check your stats and rankings on the global leaderboards.
- What is Pro Series Drag Racing Mod APK Unlimited Money?
-Pro Series Drag Racing Mod APK Unlimited Money is a modified version of the original game that gives you unlimited money and gold, as well as other benefits. With this mod, you can enjoy the game without any limitations or restrictions. You can buy any car you want, upgrade it to the max, and dominate the races. You can also unlock all the cars and parts that are otherwise locked or require real money to purchase. You can also get rid of annoying ads that interrupt your gameplay. Moreover, you don't need to root your device to use this mod, as it is compatible with most Android devices.
- Benefits of using Pro Series Drag Racing Mod APK Unlimited Money
-- Unlimited money and gold
-With Pro Series Drag Racing Mod APK Unlimited Money, you will never run out of money and gold, which are the main currencies in the game. You can use them to buy new cars, upgrade them, and customize them. You can also use them to bet on races and earn more rewards. You don't have to worry about saving or earning money and gold, as you will always have enough to spend.
-pro series drag racing hack apk unlimited gold and money
-pro series drag racing mod apk download latest version
-pro series drag racing cheats unlimited money and gold
-pro series drag racing mod apk free download for android
-pro series drag racing unlimited money and gold apk
-pro series drag racing hack mod apk 2021
-pro series drag racing mod apk unlimited everything
-pro series drag racing mod apk no root
-pro series drag racing hack apk download for android
-pro series drag racing mod apk offline
-pro series drag racing unlimited money and gold ios
-pro series drag racing mod apk revdl
-pro series drag racing hack online generator
-pro series drag racing mod apk android 1
-pro series drag racing unlimited money and gold glitch
-pro series drag racing mod apk obb
-pro series drag racing hack tool no survey
-pro series drag racing mod apk rexdl
-pro series drag racing hack apk ios
-pro series drag racing mod apk latest update
-pro series drag racing unlimited money and gold mod menu
-pro series drag racing mod apk data
-pro series drag racing hack without human verification
-pro series drag racing mod apk happymod
-pro series drag racing hack apk 2020
- - Unlocked all cars and parts
-With Pro Series Drag Racing Mod APK Unlimited Money, you will have access to all the cars and parts that are available in the game. You don't have to wait for them to unlock or pay real money to get them. You can choose from over 40 different cars in four categories: Street Cars, Pro Mods, Funny Cars, and Top Fuel Dragsters. You can also customize your car with hundreds of parts and options, such as engine, turbo, nitrous, transmission, tires, hood scoops, spoilers, exhausts, roll cages, parachutes, and more. You can create your dream car and make it stand out from the crowd.
- - No ads and root required
-With Pro Series Drag Racing Mod APK Unlimited Money, you will not see any ads that might ruin your gaming experience. You can play the game without any interruptions or distractions. You can also enjoy the game without rooting your device, which might void your warranty or expose your device to security risks. The mod is compatible with most Android devices and does not require any special permissions or access.
- How to download and install Pro Series Drag Racing Mod APK Unlimited Money?
-If you are interested in downloading and installing Pro Series Drag Racing Mod APK Unlimited Money on your Android device, you can follow these simple steps:
- Step-by-step instructions for Android devices
-- Enable unknown sources
-Before you can install the mod APK file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
- - Download the mod APK file from a trusted source
-Next, you need to download the mod APK file from a trusted source. There are many websites that offer mod APK files for various games and apps, but not all of them are safe and reliable. Some of them might contain viruses or malware that could harm your device or steal your personal information. Therefore, you should be careful when choosing a source to download the mod APK file from. One of the best sources that we recommend is [text], where you can find many mod APK files for popular games and apps.
- - Install the mod APK file and enjoy the game
-Finally, you need to install the mod APK file on your device and enjoy the game. To do this, locate the mod APK file in your device's storage, usually in the Downloads folder. Tap on the file and follow the instructions on the screen to install it. You might need to grant some permissions to the app, such as storage, network, and others. Once the installation is complete, you can launch the game from your app drawer or home screen. You will see that you have unlimited money and gold, as well as all the cars and parts unlocked. You can now enjoy Pro Series Drag Racing Mod APK Unlimited Money and have fun racing with your friends.
- Conclusion
-Pro Series Drag Racing is a great game for racing enthusiasts who want to experience the thrill of drag racing on their mobile devices. The game offers realistic graphics, physics, sound effects, and gameplay, as well as customizable cars, tuning options, and four racing modes. However, if you want to enjoy the game without any limitations or restrictions, you should try Pro Series Drag Racing Mod APK Unlimited Money, a modified version of the game that gives you unlimited money and gold, as well as other benefits. With this mod, you can buy any car you want, upgrade it to the max, and dominate the races. You can also unlock all the cars and parts that are otherwise locked or require real money to purchase. You can also get rid of annoying ads that interrupt your gameplay. Moreover, you don't need to root your device to use this mod, as it is compatible with most Android devices. To download and install Pro Series Drag Racing Mod APK Unlimited Money on your device, you just need to follow the simple steps that we have explained in this article. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-Here are some frequently asked questions about Pro Series Drag Racing Mod APK Unlimited Money:
-
-- Is Pro Series Drag Racing Mod APK Unlimited Money safe to use?
-Yes, Pro Series Drag Racing Mod APK Unlimited Money is safe to use, as long as you download it from a trusted source like [text]. The mod does not contain any viruses or malware that could harm your device or steal your personal information. However, you should always be careful when downloading and installing any mod APK file from unknown sources, as they might pose some risks to your device or privacy.
- - Is Pro Series Drag Racing Mod APK Unlimited Money legal to use?
-Pro Series Drag Racing Mod APK Unlimited Money is not legal to use, as it violates the terms and conditions of the original game developer. By using this mod, you are modifying the game's code and data, which is considered as piracy and cheating. This might result in some consequences, such as banning your account or losing your progress. Therefore, we do not encourage or endorse the use of this mod, and we are not responsible for any problems or issues that might arise from using it.
- - Can I play online multiplayer mode with Pro Series Drag Racing Mod APK Unlimited Money?
-Yes, you can play online multiplayer mode with Pro Series Drag Racing Mod APK Unlimited Money, but you might face some difficulties or disadvantages. For example, you might not be able to join some lobbies or races that have anti-cheat measures or restrictions. You might also encounter some players who might report you for using a mod or cheating. You might also face some lag or connection issues due to the mod's compatibility or performance. Therefore, we suggest that you play online multiplayer mode with caution and respect for other players.
- - Can I update Pro Series Drag Racing Mod APK Unlimited Money?
-No, you cannot update Pro Series Drag Racing Mod APK Unlimited Money directly from the Google Play Store or the original game developer. If you try to do so, you might lose your mod features and benefits, as well as your progress and data. To update Pro Series Drag Racing Mod APK Unlimited Money, you need to wait for a new version of the mod to be released by the mod developer. You can then download and install it on your device following the same steps that we have explained in this article.
- - Can I use Pro Series Drag Racing Mod APK Unlimited Money on iOS devices?
-No, you cannot use Pro Series Drag Racing Mod APK Unlimited Money on iOS devices, such as iPhones or iPads. The mod is only compatible with Android devices and does not work on iOS devices. To play Pro Series Drag Racing on iOS devices, you need to download and install the original version of the game from the App Store or the official website of the game developer. However, you might not be able to enjoy the same features and benefits as Pro Series Drag Racing Mod APK Unlimited Money, such as unlimited money and gold, unlocked cars and parts, and no ads.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/What is College Brawl? Everything You Need to Know About the Game.md b/spaces/congsaPfin/Manga-OCR/logs/What is College Brawl? Everything You Need to Know About the Game.md
deleted file mode 100644
index 07c6a68e5648d34894e5cd4b5c149de6d193b788..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/What is College Brawl? Everything You Need to Know About the Game.md
+++ /dev/null
@@ -1,160 +0,0 @@
-
-What is a college brawl and why does it happen?
-A college brawl is a violent physical confrontation between two or more groups of students on a college campus. College brawls can range from minor scuffles to large-scale riots that involve weapons, injuries, property damage, and police intervention. Some examples of college brawls are:
-college brawl
DOWNLOAD ✵✵✵ https://urlca.com/2uO9QS
-
-- The Djarragun College brawl in Australia, where students allegedly armed with metal bars fought each other, leaving one hospitalized with head injuries.
-- The TikTok challenge-inspired vandalism and assault incidents in various US schools, where students destroyed bathrooms, slapped teachers, and fought each other.
-- The bar fight between University of Toronto students and locals, where one student was stabbed and another was hit with a bottle.
-
-College brawls can happen for various reasons, such as:
-
-- Peer pressure, group identity, or loyalty to friends or partners.
-- Alcohol or drug use, which can impair judgment, lower inhibitions, and increase aggression.
-- Stress, frustration, or anger related to academic, personal, or social issues.
-- Conflict, rivalry, or competition between different groups based on ethnicity, religion, gender, sexuality, class, or other factors.
-- Provocation, retaliation, or self-defense in response to perceived threats or insults.
-
-College brawls can have serious consequences for the students involved and the campus community as a whole. Some of the possible consequences are:
-
-- Physical injuries, such as bruises, cuts, fractures, concussions, or even death.
-- Mental health problems, such as anxiety, depression, post-traumatic stress disorder (PTSD), or substance abuse.
-- Legal troubles, such as arrests, charges, fines, lawsuits, or imprisonment.
-- Academic difficulties, such as suspension, expulsion, poor grades, or loss of scholarships.
-- Social isolation, stigma, or harassment from peers, faculty, staff, or authorities.
-- Damage to campus reputation, safety, morale, or culture.
-
- How common are college brawls and who is involved?
-It is hard to estimate the exact prevalence of college brawls due to the lack of consistent definitions and data collection methods. However, existing data suggest that a significant number of college students experience some form of violence or harassment on campus. According to one national survey of US college students:
-
-- Approximately 17 percent of students reported experiencing some form of violence or harassment in the previous year.
-- Nearly 5 percent of female students reported experiencing a completed or attempted rape in the previous year.
-- 13 percent of female students reported being stalked in the previous year.
-- 20 to 50 percent of students reported experiencing dating violence by the end of college.
-- 13.2 percent of students reported being in a physical fight in the past year.
-- 8.5 percent of students reported carrying a weapon in the past month.
-- 4.3 percent of students reported having a working firearm with them at college.
-
- The risk factors and characteristics of college brawlers may vary depending on the context and motivation of the violence. However, some common factors that may increase the likelihood of being involved in a college brawl are:
-
-- Being male, young, single, or living on campus.
-- Having a history of violence, aggression, or delinquency in childhood or adolescence.
-- Having low self-esteem, poor impulse control, or antisocial personality traits.
-- Having a history of trauma, abuse, or victimization.
-- Having mental health problems, such as depression, anxiety, or psychosis.
-- Having substance use problems, such as alcohol or drug abuse or dependence.
-- Belonging to a fraternity, sorority, gang, or other group that promotes violence or intolerance.
-- Being exposed to violence or harassment in the media, online, or in real life.
-- Being influenced by peers, partners, or family members who condone or encourage violence.
-- Facing academic, financial, or personal stressors that trigger anger or frustration.
-
- How can college brawls be prevented and addressed?
-The prevention and intervention of college brawls require a comprehensive and collaborative approach that involves multiple stakeholders and strategies. Some of the possible prevention and intervention methods are:
-
-- Educating students, faculty, staff, and parents about the causes, consequences, and signs of college brawls and how to prevent or respond to them.
-- Promoting a positive and respectful campus culture that values diversity, inclusion, and nonviolence.
-- Implementing policies and procedures that prohibit and sanction violence and harassment on campus and ensure accountability and justice for the perpetrators and victims.
-- Providing support and resources for students who are at risk of being involved in or affected by college brawls, such as counseling, mentoring, mediation, or referral services.
-- Developing and evaluating programs that target specific risk factors or populations related to college brawls, such as bystander intervention, anger management, conflict resolution, or substance abuse prevention programs.
-- Collaborating with community partners, such as law enforcement, health care providers, social service agencies, or advocacy groups to coordinate efforts and share information and resources.
-- Monitoring and reporting incidents of college brawls and evaluating the effectiveness of prevention and intervention strategies.
-
- Conclusion
-College brawls are a serious problem that can affect the physical, mental, legal, academic, and social well-being of students and the campus community. College brawls can be caused by various factors such as peer pressure, substance use, stress, conflict, or provocation. They can be prevented and addressed through educating, promoting, implementing, providing, developing, collaborating, and monitoring strategies that tackle the multiple aspects of the problem. By working together, we can create a safer and more peaceful campus environment for everyone.
- FAQs
-Here are some frequently asked questions about college brawls and their answers:
-college brawl game download
-college brawl apk mod
-college brawl beat em up
-college brawl adult game
-college brawl walkthrough guide
-college brawl cheats and tips
-college brawl review and rating
-college brawl gameplay video
-college brawl characters and skills
-college brawl latest version update
-college brawl free online play
-college brawl pc requirements
-college brawl android compatibility
-college brawl steam release date
-college brawl best weapons and items
-college brawl how to unlock girls
-college brawl hot scenes gallery
-college brawl fan art and memes
-college brawl developer and publisher
-college brawl support and feedback
-college brawl wiki and faq
-college brawl discord server and community
-college brawl patreon and donations
-college brawl mods and customizations
-college brawl bugs and issues report
-college brawl soundtrack and music
-college brawl achievements and trophies
-college brawl endings and secrets
-college brawl easter eggs and references
-college brawl news and updates
-college brawl similar games and alternatives
-college brawl system requirements lab test
-college brawl controller support and settings
-college brawl save file location and backup
-college brawl level editor and workshop
-college brawl multiplayer mode and co-op
-college brawl voice actors and cast
-college brawl trailer and teaser
-college brawl merchandise and products
-college brawl comics and novels
-college brawl crossover and collaboration
-college brawl memes and jokes
-college brawl tips and tricks reddit
-college brawl speedrun and challenge
-college brawl patch notes and changelog
-college brawl mac os x download
-college brawl linux version availability
-college brawl vr compatibility and experience
-college brawl ios app store link
-
-- What are the legal implications of college brawls?
-Depending on the severity and circumstances of the college brawl, the legal implications may vary. Some of the possible legal outcomes are:
-
-- Arrest or detention by the police or campus security.
-- Criminal charges such as assault, battery, disorderly conduct, vandalism, or rioting.
-- Civil lawsuits for personal injury or property damage.
-- Disciplinary actions by the college administration such as warning, probation, suspension, or expulsion.
-- Loss of privileges or opportunities such as scholarships, financial aid, housing, or extracurricular activities.
-
- - How can I report a college brawl or get help if I witness one?
-If you witness a college brawl or are involved in one, you should take the following steps:
-
-- Call 911 or the campus emergency number if someone is injured, threatened, or in danger.
-- Stay calm and try to avoid getting involved in the violence unless it is necessary to protect yourself or someone else.
-- Move to a safe location and wait for the authorities to arrive.
-- Cooperate with the police or campus security and provide them with any information or evidence you have.
-- Seek medical attention if you are injured or traumatized by the incident.
-- Contact your friends, family, or support network for emotional support.
-
- - How can I cope with the trauma or stress of being involved in a college brawl?
-Being involved in a college brawl can be a traumatic and stressful experience that can affect your physical and mental health. Some of the ways you can cope with the aftermath are:
-
-- Acknowledge your feelings and express them in healthy ways such as talking, writing, or drawing.
-- Avoid using alcohol or drugs to cope as they can worsen your symptoms and create more problems.
-- Maintain a regular routine of sleeping, eating, exercising, and studying to help you feel more stable and in control.
-- Seek professional help from a counselor, therapist, or psychiatrist if you have persistent or severe symptoms such as nightmares, flashbacks, anxiety, depression, anger, guilt, or suicidal thoughts.
-
- - How can I support a friend or family member who has been affected by a college brawl?
-If you know someone who has been affected by a college brawl, you can support them by:
-
-- Listening to them and validating their feelings without judging or blaming them.
-- Offering them practical help such as accompanying them to appointments, helping them with chores, or providing them with resources.
-- Encouraging them to seek professional help if they need it and respecting their choices and boundaries.
-- Being patient and understanding that their recovery may take time and may not be linear.
-
- - How can I get involved in promoting a peaceful and safe campus culture?
-If you want to get involved in promoting a peaceful and safe campus culture, you can:
-
-- Educate yourself and others about the causes and consequences of college brawls and how to prevent or respond to them.
-- Model and promote nonviolent and respectful behaviors and attitudes among your peers and groups.
-- Speak up and intervene when you witness violence or harassment on campus or online.
-- Join or support campus organizations or initiatives that advocate for peace and safety on campus such as peer educators, mediators, mentors, or activists.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop CC 2019 V20.0.1 Pre-Activated [CracksMind] ((INSTALL)) Download.md b/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop CC 2019 V20.0.1 Pre-Activated [CracksMind] ((INSTALL)) Download.md
deleted file mode 100644
index fcbe906204c741978258b2af50906a0186bc9fcb..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop CC 2019 V20.0.1 Pre-Activated [CracksMind] ((INSTALL)) Download.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-Adobe CC 2019 Serial Key below is the latest version available from the website. We also find a crack for you. Also an alternative which you will get at a certain time.]]>Crack of the Day2017-01-21T12:26:36+01:00http://www.filehippo.com/file/a3c1z/Adobe_Photoshop_Lightroom_Classic_CC_2019_v8_2_1_10_cracked/http://www.filehippo.com/file/1vy8/Adobe_Photoshop_Lightroom_Classic_CC_2019_v8_2_1_10_cracked.rarhttp://www.filehippo.com/file/xrtlw/Adobe_Photoshop_Lightroom_Classic_CC_2019_v8_2_1_10_cracked.rar Adobe Photoshop Lightroom CC 2019 v8.2.1.10 (x64) (Pre-Activated) B4tman2014-12-08T12:07:07+01:00Adobe Photoshop CC 2019 v20.0.1 Pre-Activated [CracksMind] downloadhttp://www.filehippo.com/file/atfjf/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind/http://www.filehippo.com/file/alblw/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind.rarhttp://www.filehippo.com/file/cfffr/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind.rar Adobe Photoshop CC 2019 v20.0.1 Pre-Activated {CracksMind] 2014-10-19T13:13:39+01:00Adobe Photoshop CC 2019 v20.0.1 Pre-Activated [CracksMind] downloadhttp://www.filehippo.com/file/hkmfs/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind/http://www.filehippo.com/file/kvz5t/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind.rarhttp://www.filehippo.com/file/jtvce/Adobe_Photoshop_CC_2019_v20_0_1_pre_activated_cracksmind.rarGamma-hydroxybutyric acid ("GHB"), a naturally occurring compound found in the brain, has recently gained a lot of attention due to its potential as an efficient and relatively non-toxic anesthetic agent. GHB is structurally related to GABA, a neurotransmitter in the central nervous system. Synthesis of GHB involves hydration of glyoxylate through the GABA shunt pathway. Glyoxylate is the first product of the glyoxylate cycle and is converted to glycine and subsequently to succinate. GHB synthase is a non-specific enzyme which splits the molecule glyoxylate with the release of one molecule of carbon dioxide and one molecule of pyruvate. The carbon dioxide is released to the atmosphere, where it represents an atmospheric pollutant. Several patents have been issued for methods and compositions useful for synthesizing GHB. For example, U.
-Adobe Photoshop CC 2019 v20.0.1 Pre-Activated [CracksMind] download
Download Zip ✒ https://ssurll.com/2uzyCf
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Arcsoft Totalmedia Theatre 5.3.1.172 Serial Number TOP.md b/spaces/contluForse/HuggingGPT/assets/Arcsoft Totalmedia Theatre 5.3.1.172 Serial Number TOP.md
deleted file mode 100644
index b44a8d296c66a4eac7fd46e7ed6a9d4c8cc84ed2..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Arcsoft Totalmedia Theatre 5.3.1.172 Serial Number TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-arcsoft totalmedia theatre 5.3.1.172 serial number
Download File ✫ https://ssurll.com/2uzvNB
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/C3312 Flashtool E2 Xmm2150 Zip Flash Samsung Phones with EM_XMM2150 Chipset Using Flashtool E2.md b/spaces/contluForse/HuggingGPT/assets/C3312 Flashtool E2 Xmm2150 Zip Flash Samsung Phones with EM_XMM2150 Chipset Using Flashtool E2.md
deleted file mode 100644
index 13a9165827dce712f045b0cd4b0e45858a674305..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/C3312 Flashtool E2 Xmm2150 Zip Flash Samsung Phones with EM_XMM2150 Chipset Using Flashtool E2.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download Film Mars Taiwan Subtitle Indonesia Brilliant
Download Zip ····· https://ssurll.com/2uzyBE
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Descargar School Days Sin Censura Mf 720p.md b/spaces/contluForse/HuggingGPT/assets/Descargar School Days Sin Censura Mf 720p.md
deleted file mode 100644
index c841ff6a69e45634fe04fc122a5bec9a9ba1f0f3..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Descargar School Days Sin Censura Mf 720p.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Descargar School Days Sin Censura Mf 720p
Download Zip ⭐ https://ssurll.com/2uzvSB
-
-descargar school days sin censura mf 720p
-Lyrics:
-In the middle of the night And on the street without a light I was standing There was a feeling in my head It was a feeling that my heart was playing Who would've thought we'd end up here?
-The feeling came unexpected And I wonder who was the one Who'd bring this down on me?
-I was so high You will never see me sitting in the street 8a78ff9644
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Download !!EXCLUSIVE!! Special 26 Movie In Hindi 720p.md b/spaces/contluForse/HuggingGPT/assets/Download !!EXCLUSIVE!! Special 26 Movie In Hindi 720p.md
deleted file mode 100644
index d0876e65db8943c3a36d5b4d2be12f4172fdca13..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Download !!EXCLUSIVE!! Special 26 Movie In Hindi 720p.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-Download Special 26 Movie in Hindi 720p: A Thrilling Heist Film Based on a True Story
-Special 26 is a 2013 Indian Hindi-language period heist thriller film written and directed by Neeraj Pandey. Based on the 1987 Opera House heist, the plot follows a team of criminals (Akshay Kumar with Anupam Kher), who pose as CBI officers and conduct raids, robbing politicians and businessmen of their black money. With the real CBI (Manoj Bajpayee) on their trail, they decide to pull off their biggest heist yet. The film also stars Jimmy Sheirgill, Kajal Aggarwal, Rajesh Sharma, Divya Dutta and Kishor Kadam.
-The film was released on 8 February 2013 and received widespread acclaim from the critics. Several critics called it one of the best films of the year. It became a commercial success, grossing ₹103 crore (equivalent to ₹149 crore or US$19 million in 2020).
-download Special 26 movie in hindi 720p
Download File ✒ ✒ ✒ https://ssurll.com/2uzwxz
-If you are looking for a gripping and entertaining film that will keep you on the edge of your seat, you can download Special 26 movie in Hindi 720p from various online platforms. However, before you do that, make sure you are aware of the legal and ethical implications of downloading pirated content. Piracy is a crime and can land you in trouble with the law. It also harms the film industry and the artists who work hard to create quality content for the audience.
-Therefore, we recommend you to watch Special 26 movie in Hindi 720p legally and ethically from authorized sources. You can stream or download the film from platforms like JioCinema[^2^], IMDb[^3^], or YouTube (paid). You can also buy or rent the DVD or Blu-ray from Amazon or Flipkart. By doing so, you will not only enjoy the film in high quality but also support the filmmakers and actors who deserve your appreciation.
-So, what are you waiting for? Download Special 26 movie in Hindi 720p today and witness a thrilling heist film based on a true story.
-
-Special 26 is not just a heist film, but also a cat-and-mouse game between the fake CBI team and the real CBI officer, who is determined to catch them red-handed. The film keeps you hooked with its smart twists and turns, witty dialogues, and thrilling sequences. The film also has a touch of humour and romance, which adds to its charm.
-The film boasts of a stellar cast, who deliver top-notch performances. Akshay Kumar sheds his action hero image and plays a suave and cunning conman with aplomb. He is ably supported by Anupam Kher, who is brilliant as his loyal partner-in-crime. Manoj Bajpayee is impressive as the no-nonsense CBI officer, who matches wits with Akshay. Jimmy Sheirgill, Rajesh Sharma, Kishor Kadam, Divya Dutta and Kajal Aggarwal also play their roles with conviction.
-The film has received rave reviews from the critics and the audience alike. It has been praised for its tight script, crisp editing, authentic production design, and engaging direction. The film has also been appreciated for its realistic portrayal of the 1980s era and its social relevance. The film has been hailed as one of the best films of 2013 and one of the finest heist films of Bollywood.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/mixed_conv2d.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/mixed_conv2d.py
deleted file mode 100644
index fa0ce565c0a9d348d4e68165960fa77fcf7f70d7..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/mixed_conv2d.py
+++ /dev/null
@@ -1,51 +0,0 @@
-""" PyTorch Mixed Convolution
-
-Paper: MixConv: Mixed Depthwise Convolutional Kernels (https://arxiv.org/abs/1907.09595)
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-
-import torch
-from torch import nn as nn
-
-from .conv2d_same import create_conv2d_pad
-
-
-def _split_channels(num_chan, num_groups):
- split = [num_chan // num_groups for _ in range(num_groups)]
- split[0] += num_chan - sum(split)
- return split
-
-
-class MixedConv2d(nn.ModuleDict):
- """ Mixed Grouped Convolution
-
- Based on MDConv and GroupedConv in MixNet impl:
- https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mixnet/custom_layers.py
- """
- def __init__(self, in_channels, out_channels, kernel_size=3,
- stride=1, padding='', dilation=1, depthwise=False, **kwargs):
- super(MixedConv2d, self).__init__()
-
- kernel_size = kernel_size if isinstance(kernel_size, list) else [kernel_size]
- num_groups = len(kernel_size)
- in_splits = _split_channels(in_channels, num_groups)
- out_splits = _split_channels(out_channels, num_groups)
- self.in_channels = sum(in_splits)
- self.out_channels = sum(out_splits)
- for idx, (k, in_ch, out_ch) in enumerate(zip(kernel_size, in_splits, out_splits)):
- conv_groups = in_ch if depthwise else 1
- # use add_module to keep key space clean
- self.add_module(
- str(idx),
- create_conv2d_pad(
- in_ch, out_ch, k, stride=stride,
- padding=padding, dilation=dilation, groups=conv_groups, **kwargs)
- )
- self.splits = in_splits
-
- def forward(self, x):
- x_split = torch.split(x, self.splits, 1)
- x_out = [c(x_split[i]) for i, c in enumerate(self.values())]
- x = torch.cat(x_out, 1)
- return x
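For reference, here is a minimal sketch of how a mixed convolution like the one above is typically exercised. The channel counts, kernel list, and input shape are illustrative assumptions, and the import assumes the vendored timm package is importable under the path shown in the file header:

import torch
from timm.models.layers.mixed_conv2d import MixedConv2d

# 32 input channels are split across three kernel sizes (3, 5, 7); the integer
# remainder of the split goes to the first group, so the splits are [12, 10, 10].
layer = MixedConv2d(in_channels=32, out_channels=64, kernel_size=[3, 5, 7], stride=1)

x = torch.randn(2, 32, 56, 56)   # (batch, channels, height, width)
y = layer(x)                     # split -> per-group conv -> concat along channels
print(y.shape)                   # expected: torch.Size([2, 64, 56, 56])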
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/weight_init.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/weight_init.py
deleted file mode 100644
index 38141ba3d61f64ddfc0a31574b4648cbad96d7dd..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/weight_init.py
+++ /dev/null
@@ -1,62 +0,0 @@
-"""Modified from https://github.com/rwightman/pytorch-image-
-models/blob/master/timm/models/layers/drop.py."""
-
-import math
-import warnings
-
-import torch
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- """Reference: https://people.sc.fsu.edu/~jburkardt/presentations
- /truncated_normal.pdf"""
-
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn(
- 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. '
- 'The distribution of values may be incorrect.',
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- lower_bound = norm_cdf((a - mean) / std)
- upper_bound = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * lower_bound - 1, 2 * upper_bound - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`
- mean (float): the mean of the normal distribution
- std (float): the standard deviation of the normal distribution
- a (float): the minimum cutoff value
- b (float): the maximum cutoff value
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
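A short usage sketch for the trunc_normal_ helper above, for example to initialize a linear layer; the layer sizes and std value are arbitrary, and the call assumes trunc_normal_ is in scope (imported from this module):

import torch.nn as nn

linear = nn.Linear(128, 64)

# Fill the weight with values drawn from N(0, 0.02^2) truncated to [-2, 2].
trunc_normal_(linear.weight, mean=0., std=0.02, a=-2., b=2.)
nn.init.zeros_(linear.bias)

# Every entry now lies inside the truncation bounds.
assert float(linear.weight.min()) >= -2. and float(linear.weight.max()) <= 2.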
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py
deleted file mode 100644
index e9b40f8a9c269029e220d5dfa8df1e8372d05007..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2004-present Facebook. All Rights Reserved.
-
-import numpy as np
-from typing import List
-
-from annotator.oneformer.detectron2.config import CfgNode as CfgNode_
-from annotator.oneformer.detectron2.config import configurable
-
-from .base_tracker import TRACKER_HEADS_REGISTRY
-from .vanilla_hungarian_bbox_iou_tracker import VanillaHungarianBBoxIOUTracker
-
-
-@TRACKER_HEADS_REGISTRY.register()
-class IOUWeightedHungarianBBoxIOUTracker(VanillaHungarianBBoxIOUTracker):
- """
- A tracker using IoU as weight in Hungarian algorithm, also known
- as Munkres or Kuhn-Munkres algorithm
- """
-
- @configurable
- def __init__(
- self,
- *,
- video_height: int,
- video_width: int,
- max_num_instances: int = 200,
- max_lost_frame_count: int = 0,
- min_box_rel_dim: float = 0.02,
- min_instance_period: int = 1,
- track_iou_threshold: float = 0.5,
- **kwargs,
- ):
- """
- Args:
- video_height: height the video frame
- video_width: width of the video frame
- max_num_instances: maximum number of id allowed to be tracked
- max_lost_frame_count: maximum number of frame an id can lost tracking
- exceed this number, an id is considered as lost
- forever
- min_box_rel_dim: a percentage, smaller than this dimension, a bbox is
- removed from tracking
- min_instance_period: an instance will be shown after this number of period
- since its first showing up in the video
- track_iou_threshold: iou threshold, below this number a bbox pair is removed
- from tracking
- """
- super().__init__(
- video_height=video_height,
- video_width=video_width,
- max_num_instances=max_num_instances,
- max_lost_frame_count=max_lost_frame_count,
- min_box_rel_dim=min_box_rel_dim,
- min_instance_period=min_instance_period,
- track_iou_threshold=track_iou_threshold,
- )
-
- @classmethod
- def from_config(cls, cfg: CfgNode_):
- """
- Old style initialization using CfgNode
-
- Args:
- cfg: D2 CfgNode, config file
- Return:
- dictionary storing arguments for __init__ method
- """
- assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
- assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
- video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
- video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
- max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
- max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
- min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
- min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
- track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
- return {
- "_target_": "detectron2.tracking.iou_weighted_hungarian_bbox_iou_tracker.IOUWeightedHungarianBBoxIOUTracker", # noqa
- "video_height": video_height,
- "video_width": video_width,
- "max_num_instances": max_num_instances,
- "max_lost_frame_count": max_lost_frame_count,
- "min_box_rel_dim": min_box_rel_dim,
- "min_instance_period": min_instance_period,
- "track_iou_threshold": track_iou_threshold,
- }
-
- def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray:
- """
- Based on IoU for each pair of bbox, assign the associated value in cost matrix
-
- Args:
- cost_matrix: np.ndarray, initialized 2D array with target dimensions
- bbox_pairs: list of bbox pair, in each pair, iou value is stored
- Return:
- np.ndarray, cost_matrix with assigned values
- """
- for pair in bbox_pairs:
-            # assign (-1 * IoU) for above-threshold pairs; the algorithm will minimize the total cost
- cost_matrix[pair["idx"]][pair["prev_idx"]] = -1 * pair["IoU"]
- return cost_matrix
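
The matching step this tracker feeds into works by minimizing the summed cost, so storing `-IoU` makes the solver maximize total overlap. A minimal sketch of that idea, assuming SciPy's `linear_sum_assignment` as the Hungarian solver; the toy box pairs and IoU values below are made up, and detectron2's own matching utilities are what the tracker actually uses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian / Kuhn-Munkres solver

# Toy cost matrix: 3 current boxes x 3 previous boxes, 0 meaning "no candidate pair".
cost_matrix = np.zeros((3, 3))
bbox_pairs = [
    {"idx": 0, "prev_idx": 1, "IoU": 0.8},
    {"idx": 1, "prev_idx": 0, "IoU": 0.6},
    {"idx": 2, "prev_idx": 2, "IoU": 0.9},
]
for pair in bbox_pairs:
    # Same convention as assign_cost_matrix_values: store -IoU for candidate pairs.
    cost_matrix[pair["idx"]][pair["prev_idx"]] = -1 * pair["IoU"]

# Minimizing the summed cost maximizes the total IoU of the matching.
row_ind, col_ind = linear_sum_assignment(cost_matrix)
print(list(zip(row_ind, col_ind)))  # matched pairs: (0, 1), (1, 0), (2, 2)
```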
diff --git a/spaces/crylake/img2poem/ulti.py b/spaces/crylake/img2poem/ulti.py
deleted file mode 100644
index 4610733ca79368a06efe229d8041aaaab31f7f38..0000000000000000000000000000000000000000
--- a/spaces/crylake/img2poem/ulti.py
+++ /dev/null
@@ -1,51 +0,0 @@
-'''
-import streamlit as st
-import cv2
-# import tensorflow as tf
-import torch
-import torchvision
-import io
-from PIL import Image
-# import tensorflow_addons as tfa
-import numpy as np
-# from autocrop import Cropper
-
-def get_bbox():
- pass
-
-def adjust_gamma(image, gamma=1.0):
- invGamma = 1.0 / gamma
- table = np.array([((i / 255.0) ** invGamma) * 255
- for i in np.arange(0, 256)]).astype("uint8")
-
- return cv2.LUT(image, table)
-
-def loadtest(image,cropornot=False):
- # if cropornot:
- # Percent = st.sidebar.slider('Zoom adjust', min_value=50, max_value=100,value=50,step=5)
- # cropper = Cropper(face_percent=Percent)
- #
- # # Get a Numpy array of the cropped image
- # image_crop = cropper.crop(image)
- # if image_crop is not None:
- # image_crop = cv2.cvtColor(image_crop, cv2.COLOR_BGR2RGB)
- # image = tf.convert_to_tensor(image_crop, dtype=tf.float32)
- # else:
- # st.write('Cannot find your face to crop')
-
- image = (tf.cast(image, tf.float32) /255.0 *2) -1
- image = tf.image.resize(image,
- [256, 256],
- method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
- image = tf.expand_dims(image, 0)
- return image
-
-def loadframe(image):
- image = tf.convert_to_tensor(image, dtype=tf.float32)
- image = (tf.cast(image, tf.float32) /255.0 *2) -1
- image = tf.image.resize(image,
- [256, 256],
- method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
- image = tf.expand_dims(image, 0)
- return image
-'''
\ No newline at end of file
diff --git a/spaces/danterivers/music-generation-samples/tests/common_utils/temp_utils.py b/spaces/danterivers/music-generation-samples/tests/common_utils/temp_utils.py
deleted file mode 100644
index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/tests/common_utils/temp_utils.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import tempfile
-
-
-class TempDirMixin:
- """Mixin to provide easy access to temp dir.
- """
-
- temp_dir_ = None
-
- @classmethod
- def get_base_temp_dir(cls):
-        # If AUDIOCRAFT_TEST_DIR is set, use it instead of a temporary directory.
-        # This is handy for debugging.
- key = "AUDIOCRAFT_TEST_DIR"
- if key in os.environ:
- return os.environ[key]
- if cls.temp_dir_ is None:
- cls.temp_dir_ = tempfile.TemporaryDirectory()
- return cls.temp_dir_.name
-
- @classmethod
- def tearDownClass(cls):
- if cls.temp_dir_ is not None:
- try:
- cls.temp_dir_.cleanup()
- cls.temp_dir_ = None
- except PermissionError:
-                # On Windows there is a known issue with `shutil.rmtree`,
-                # which fails intermittently.
- # https://github.com/python/cpython/issues/74168
- # Following the above thread, we ignore it.
- pass
- super().tearDownClass()
-
- @property
- def id(self):
- return self.__class__.__name__
-
- def get_temp_path(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
-
- def get_temp_dir(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(path, exist_ok=True)
- return path
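
`TempDirMixin` gives each test class its own scratch directory (or a fixed one when `AUDIOCRAFT_TEST_DIR` is set) and cleans it up in `tearDownClass`. A minimal usage sketch, assuming the mixin above is in scope; the test class and file names are hypothetical:

```python
import unittest


class DummyAudioTest(TempDirMixin, unittest.TestCase):
    def test_writes_under_per_class_temp_dir(self):
        # get_temp_path() creates parent directories and scopes the path to this class.
        path = self.get_temp_path("sub", "out.txt")
        with open(path, "w") as f:
            f.write("hello")
        self.assertTrue(path.endswith("out.txt"))


if __name__ == "__main__":
    unittest.main()
```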
diff --git a/spaces/darkCat/Anime-image-classification/src/prediction.py b/spaces/darkCat/Anime-image-classification/src/prediction.py
deleted file mode 100644
index 12acfce2715b93c1547060938ca1eca56045950b..0000000000000000000000000000000000000000
--- a/spaces/darkCat/Anime-image-classification/src/prediction.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from abc import abstractmethod, ABCMeta
-import copy
-from .preprocess import perprocess_infer, prep_matting, prep_seg
-from .load_model import load_model_Xception, load_model_Matting
-import cv2
-import numpy as np
-import tensorflow as tf
-# from tensorflow.keras.utils import custom_object_scope
-
-def iou(y_true, y_pred):
- def f(y_true, y_pred):
- intersection = (y_true * y_pred).sum()
- union = y_true.sum() + y_pred.sum() - intersection
- x = (intersection + 1e-15) / (union + 1e-15)
- x = x.astype(np.float32)
- return x
- return tf.numpy_function(f, [y_true, y_pred], tf.float32)
-
-smooth = 1e-15
-def dice_coef(y_true, y_pred):
- y_true = tf.keras.layers.Flatten()(y_true)
- y_pred = tf.keras.layers.Flatten()(y_pred)
- intersection = tf.reduce_sum(y_true * y_pred)
- return (2. * intersection + smooth) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
-
-def dice_loss(y_true, y_pred):
- return 1.0 - dice_coef(y_true, y_pred)
-
-class Predictor(metaclass=ABCMeta):
- @abstractmethod
- def predict(self):
- pass
-
-class XceptionPredictor(Predictor):
- WIDTH = 299
- HEIGHT = 299
- def __init__(self, model, classes):
- self.model = model
- self.classes = classes
- def predict(self, inp) -> list:
- cls_count = len(self.classes)
- inp = perprocess_infer(inp, self.WIDTH, self.HEIGHT)
- prediction = self.model.predict(inp).flatten()
- confidences = {self.classes[i]: float(prediction[i]) for i in range(cls_count)}
- return confidences
-
-class MatPredictor(Predictor):
- def __init__(self, sess):
- self.sess = sess
- def predict(self, src):
- src, bgr = prep_matting(src)
- pha, fgr = self.sess.run(['pha', 'fgr'], {'src': src, 'bgr': bgr})
- com = (pha * fgr + (1 - pha) * bgr)*255
- com = com.astype(np.uint8)
- fgr = (fgr*255).astype(np.uint8)
- cv2.imshow('fgr', fgr.transpose((0,2,3,1)).squeeze())
- cv2.imshow('com', com.transpose((0,2,3,1)).squeeze())
- cv2.waitKey()
- return pha, fgr
-
-class Segmentor(Predictor):
- def __init__(self, model):
- self.model = model
- self.models = dict()
- def predict(self, x):
- h, w, _ = x.shape
- x = prep_seg(x)
- y = self.model.predict(x)[0]
- y = cv2.resize(y, (w, h))
- _, y = cv2.threshold(y, 0.5, 1, cv2.THRESH_BINARY)
- y = y.astype(np.uint8)
- # closed ops
- y = cv2.dilate(src=y, kernel=cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(3, 3)), iterations=4)
- y = cv2.erode(src=y, kernel=cv2.getStructuringElement(shape=cv2.MORPH_RECT, ksize=(3, 3)), iterations=2)
-
- # fill big hole
- y = FillHole(y)
- return y
-
-def FillHole(mask):
- contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
- len_contour = len(contours)
- contour_list = []
- if len_contour == 0:
- return mask
- for i in range(len_contour):
- drawing = np.zeros_like(mask, np.uint8) # create a black image
- img_contour = cv2.drawContours(drawing, contours, i, (255, 255, 255), -1)
- contour_list.append(img_contour)
-
- out = sum(contour_list)
- return out
-
-class PredictorFatory():
- def __init__(self, classes, model_path):
- self.model_path = model_path
- self.classes = classes
- self.models = dict()
-
- def get_predictor(self, mode='xception'):
-
- if mode == 'xception':
- if 'xception' in self.models.keys():
- return self.models['xception']
-            else:
-                model = load_model_Xception(self.model_path)
-                # cache the predictor so repeated calls reuse it, as done in get()
-                self.models['xception'] = XceptionPredictor(model, copy.deepcopy(self.classes))
-                return self.models['xception']
-
- def get_matting(self, matting_weight):
- return self.get('matting', lambda: MatPredictor(load_model_Matting(matting_weight)))
-
- def get_seg(self, segmodel):
- if 'segmentation' not in self.models.keys():
- with tf.keras.utils.custom_object_scope({'iou': iou, 'dice_coef': dice_coef, 'dice_loss': dice_loss}):
- self.models['segmentation'] = Segmentor(tf.keras.models.load_model(segmodel))
- return self.models['segmentation']
-
- def get(self, name, func):
- if name in self.models.keys():
- return self.models[name]
- else:
- self.models[name] = func()
- return self.models[name]
\ No newline at end of file
diff --git a/spaces/dbmdz/detectron2-model-demo/app.py b/spaces/dbmdz/detectron2-model-demo/app.py
deleted file mode 100644
index eb770296c112dd23c5e9cef8a759f8fa1e6cd643..0000000000000000000000000000000000000000
--- a/spaces/dbmdz/detectron2-model-demo/app.py
+++ /dev/null
@@ -1,127 +0,0 @@
-try:
- import detectron2
-except ImportError:
- import os
- os.system('pip install git+https://github.com/facebookresearch/detectron2.git')
-
-import cv2
-
-from matplotlib.pyplot import axis
-import gradio as gr
-import requests
-import numpy as np
-from torch import nn
-
-import torch
-
-from detectron2 import model_zoo
-from detectron2.engine import DefaultPredictor
-from detectron2.config import get_cfg
-from detectron2.utils.visualizer import Visualizer
-from detectron2.data import MetadataCatalog
-
-
-models = [
-{
- "name": "Version 1 (2-class)",
- "model_path": "https://huggingface.co/dbmdz/detectron2-model/resolve/main/model_final.pth",
- "classes": ["Illumination", "Illustration"],
- "cfg": None,
- "metadata": None
-},
-{
- "name": "Version 2 (4-class)",
- "model_path": "https://huggingface.co/dbmdz/detectron2-v2-model/resolve/main/model_final.pth",
- "classes": ["ILLUSTRATION", "OTHER", "STAMP", "INITIAL"],
- "cfg": None,
- "metadata": None
-},
-]
-
-model_name_to_id = {model["name"] : id_ for id_, model in enumerate(models)}
-
-for model in models:
-
- model["cfg"] = get_cfg()
- model["cfg"].merge_from_file("./configs/detectron2/faster_rcnn_R_50_FPN_3x.yaml")
- model["cfg"].MODEL.ROI_HEADS.NUM_CLASSES = len(model["classes"])
- model["cfg"].MODEL.WEIGHTS = model["model_path"]
-
- model["metadata"] = MetadataCatalog.get(model["name"])
- model["metadata"].thing_classes = model["classes"]
-
- if not torch.cuda.is_available():
- model["cfg"].MODEL.DEVICE = "cpu"
-
-
-def inference(image_url, image, min_score, model_name):
-    im = None
-    if image_url:
-        r = requests.get(image_url)
-        if r:
-            im = np.frombuffer(r.content, dtype="uint8")
-            im = cv2.imdecode(im, cv2.IMREAD_COLOR)
-    if im is None:
-        # The model expects BGR input!
-        im = image[:,:,::-1]
-
- model_id = model_name_to_id[model_name]
-
- models[model_id]["cfg"].MODEL.ROI_HEADS.SCORE_THRESH_TEST = min_score
- predictor = DefaultPredictor(models[model_id]["cfg"])
-
- outputs = predictor(im)
-
- v = Visualizer(im, models[model_id]["metadata"], scale=1.2)
- out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
-
- return out.get_image()
-
-
-title = "# DBMDZ Detectron2 Model Demo"
-description = """
-This demo introduces an interactive playground for our trained Detectron2 model.
-
-Currently, two models are supported that were trained on manually annotated segments from digitized books:
-
-* [Version 1 (2-class)](https://huggingface.co/dbmdz/detectron2-model): This model can detect *Illustration* or *Illumination* segments on a given page.
-* [Version 2 (4-class)](https://huggingface.co/dbmdz/detectron2-v2-model): This model is more powerful and can detect *Illustration*, *Stamp*, *Initial* or *Other* segments on a given page.
-"""
-footer = "Made in Munich with ❤️ and 🥨."
-
-with gr.Blocks() as demo:
- gr.Markdown(title)
- gr.Markdown(description)
-
- with gr.Tab("From URL"):
- url_input = gr.Textbox(label="Image URL", placeholder="https://api.digitale-sammlungen.de/iiif/image/v2/bsb10483966_00008/full/500,/0/default.jpg")
-
- with gr.Tab("From Image"):
- image_input = gr.Image(type="numpy", label="Input Image")
-
- min_score = gr.Slider(minimum=0.0, maximum=1.0, value=0.5, label="Minimum score")
-
- model_name = gr.Radio(choices=[model["name"] for model in models], value=models[0]["name"], label="Select Detectron2 model")
-
- output_image = gr.Image(type="pil", label="Output")
-
- inference_button = gr.Button("Submit")
-
- inference_button.click(fn=inference, inputs=[url_input, image_input, min_score, model_name], outputs=output_image)
-
- gr.Markdown(footer)
-
-demo.launch()
-
-#gr.Interface(
-# inference,
-# [gr.inputs.Textbox(label="Image URL", placeholder="https://api.digitale-sammlungen.de/iiif/image/v2/bsb10483966_00008/full/500,/0/default.jpg"),
-# gr.inputs.Image(type="numpy", label="Input Image"),
-# gr.Slider(minimum=0.0, maximum=1.0, value=0.5, label="Minimum score"),
-# gr.Radio(choices=[model["name"] for model in models], value=models[0]["name"], label="Select Detectron2 model"),
-# ],
-# gr.outputs.Image(type="pil", label="Output"),
-# title=title,
-# description=description,
-# article=article,
-# examples=[]).launch()
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-8e3485c0.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-8e3485c0.js
deleted file mode 100644
index e2d1b095fcffb8ccfc8ca759e26d820254f8fcde..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-8e3485c0.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as b,s as k,a9 as z,m as f,g,Y as u,h as v,j as C,ab as E,ac as B,ad as R,w as S,u as j,k as q,N as w}from"./index-9e76ffee.js";import"./Button-30a08c0b.js";function N(s){let e,o,n;const _=s[5].default,a=z(_,s,s[4],null);return{c(){e=f("div"),o=f("div"),a&&a.c(),g(o,"class","icon svelte-lk9eg8"),g(e,"class","empty svelte-lk9eg8"),u(e,"small",s[0]==="small"),u(e,"large",s[0]==="large"),u(e,"unpadded_box",s[1]),u(e,"small_parent",s[3])},m(t,i){v(t,e,i),C(e,o),a&&a.m(o,null),s[6](e),n=!0},p(t,[i]){a&&a.p&&(!n||i&16)&&E(a,_,t,t[4],n?R(_,t[4],i,null):B(t[4]),null),(!n||i&1)&&u(e,"small",t[0]==="small"),(!n||i&1)&&u(e,"large",t[0]==="large"),(!n||i&2)&&u(e,"unpadded_box",t[1]),(!n||i&8)&&u(e,"small_parent",t[3])},i(t){n||(S(a,t),n=!0)},o(t){j(a,t),n=!1},d(t){t&&q(e),a&&a.d(t),s[6](null)}}}function Y(s,e,o){let n,{$$slots:_={},$$scope:a}=e,{size:t="small"}=e,{unpadded_box:i=!1}=e,d;function m(l){if(!l)return!1;const{height:r}=l.getBoundingClientRect(),{height:c}=l.parentElement?.getBoundingClientRect()||{height:r};return r>c+2}function p(l){w[l?"unshift":"push"](()=>{d=l,o(2,d)})}return s.$$set=l=>{"size"in l&&o(0,t=l.size),"unpadded_box"in l&&o(1,i=l.unpadded_box),"$$scope"in l&&o(4,a=l.$$scope)},s.$$.update=()=>{s.$$.dirty&4&&o(3,n=m(d))},[t,i,d,n,a,_,p]}class F extends h{constructor(e){super(),b(this,e,Y,N,k,{size:0,unpadded_box:1})}}export{F as E};
-//# sourceMappingURL=Empty-8e3485c0.js.map
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/builder_app.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/builder_app.py
deleted file mode 100644
index 54defcf0d5c6620d282480693791c69dde0833da..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/builder_app.py
+++ /dev/null
@@ -1,1003 +0,0 @@
-import inspect
-import time
-from typing import Iterable
-
-from gradio_client.documentation import document_fn
-
-import gradio as gr
-
-themes = [
- gr.themes.Base,
- gr.themes.Default,
- gr.themes.Soft,
- gr.themes.Monochrome,
- gr.themes.Glass,
-]
-colors = gr.themes.Color.all
-sizes = gr.themes.Size.all
-
-palette_range = [50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 950]
-size_range = ["xxs", "xs", "sm", "md", "lg", "xl", "xxl"]
-docs_theme_core = document_fn(gr.themes.Base.__init__, gr.themes.Base)[1]
-docs_theme_vars = document_fn(gr.themes.Base.set, gr.themes.Base)[1]
-
-
-def get_docstr(var):
- for parameters in docs_theme_core + docs_theme_vars:
- if parameters["name"] == var:
- return parameters["doc"]
- raise ValueError(f"Variable {var} not found in theme documentation.")
-
-
-def get_doc_theme_var_groups():
- source = inspect.getsource(gr.themes.Base.set)
- groups = []
- group, desc, variables, flat_variables = None, None, [], []
- for line in source.splitlines():
- line = line.strip()
- if line.startswith(")"):
- break
- elif line.startswith("# "):
- if group is not None:
- groups.append((group, desc, variables))
- group, desc = line[2:].split(": ")
- variables = []
- elif "=" in line:
- var = line.split("=")[0]
- variables.append(var)
- flat_variables.append(var)
- groups.append((group, desc, variables))
- return groups, flat_variables
-
-
-variable_groups, flat_variables = get_doc_theme_var_groups()
-
-css = """
-.gradio-container {
- overflow: visible !important;
- max-width: none !important;
-}
-#controls {
- max-height: 100vh;
- flex-wrap: unset;
- overflow-y: scroll;
- position: sticky;
- top: 0;
-}
-#controls::-webkit-scrollbar {
- -webkit-appearance: none;
- width: 7px;
-}
-
-#controls::-webkit-scrollbar-thumb {
- border-radius: 4px;
- background-color: rgba(0, 0, 0, .5);
- box-shadow: 0 0 1px rgba(255, 255, 255, .5);
-}
-"""
-
-with gr.Blocks( # noqa: SIM117
- theme=gr.themes.Base(),
- css=css,
- title="Gradio Theme Builder",
-) as demo:
- with gr.Row():
- with gr.Column(scale=1, elem_id="controls", min_width=400):
- with gr.Row():
- undo_btn = gr.Button("Undo", size="sm")
- dark_mode_btn = gr.Button("Dark Mode", variant="primary", size="sm")
- with gr.Tabs():
- with gr.TabItem("Source Theme"):
- gr.Markdown(
- """
- Select a base theme below you would like to build off of. Note: when you click 'Load Theme', all variable values in other tabs will be overwritten!
- """
- )
- base_theme_dropdown = gr.Dropdown(
- [theme.__name__ for theme in themes],
- value="Base",
- show_label=False,
- )
- load_theme_btn = gr.Button("Load Theme", elem_id="load_theme")
- with gr.TabItem("Core Colors"):
- gr.Markdown(
- """Set the three hues of the theme: `primary_hue`, `secondary_hue`, and `neutral_hue`.
- Each of these is a palette ranging from 50 to 950 in brightness. Pick a preset palette - optionally, open the accordion to overwrite specific values.
- Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*primary_200` or `*neutral_950`."""
- )
- primary_hue = gr.Dropdown(
- [color.name for color in colors], label="Primary Hue"
- )
- with gr.Accordion(label="Primary Hue Palette", open=False):
- primary_hues = []
- for i in palette_range:
- primary_hues.append(
- gr.ColorPicker(
- label=f"primary_{i}",
- )
- )
-
- secondary_hue = gr.Dropdown(
- [color.name for color in colors], label="Secondary Hue"
- )
- with gr.Accordion(label="Secondary Hue Palette", open=False):
- secondary_hues = []
- for i in palette_range:
- secondary_hues.append(
- gr.ColorPicker(
- label=f"secondary_{i}",
- )
- )
-
- neutral_hue = gr.Dropdown(
- [color.name for color in colors], label="Neutral hue"
- )
- with gr.Accordion(label="Neutral Hue Palette", open=False):
- neutral_hues = []
- for i in palette_range:
- neutral_hues.append(
- gr.ColorPicker(
- label=f"neutral_{i}",
- )
- )
-
- with gr.TabItem("Core Sizing"):
- gr.Markdown(
- """Set the sizing of the theme via: `text_size`, `spacing_size`, and `radius_size`.
- Each of these is set to a collection of sizes ranging from `xxs` to `xxl`. Pick a preset size collection - optionally, open the accordion to overwrite specific values.
- Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*spacing_xl` or `*text_sm`.
- """
- )
- text_size = gr.Dropdown(
- [size.name for size in sizes if size.name.startswith("text_")],
- label="Text Size",
- )
- with gr.Accordion(label="Text Size Range", open=False):
- text_sizes = []
- for i in size_range:
- text_sizes.append(
- gr.Textbox(
- label=f"text_{i}",
- )
- )
-
- spacing_size = gr.Dropdown(
- [
- size.name
- for size in sizes
- if size.name.startswith("spacing_")
- ],
- label="Spacing Size",
- )
- with gr.Accordion(label="Spacing Size Range", open=False):
- spacing_sizes = []
- for i in size_range:
- spacing_sizes.append(
- gr.Textbox(
- label=f"spacing_{i}",
- )
- )
-
- radius_size = gr.Dropdown(
- [
- size.name
- for size in sizes
- if size.name.startswith("radius_")
- ],
- label="Radius Size",
- )
- with gr.Accordion(label="Radius Size Range", open=False):
- radius_sizes = []
- for i in size_range:
- radius_sizes.append(
- gr.Textbox(
- label=f"radius_{i}",
- )
- )
-
- with gr.TabItem("Core Fonts"):
- gr.Markdown(
- """Set the main `font` and the monospace `font_mono` here.
- Set up to 4 values for each (fallbacks in case a font is not available).
- Check "Google Font" if font should be loaded from Google Fonts.
- """
- )
- gr.Markdown("### Main Font")
- main_fonts, main_is_google = [], []
- for i in range(4):
- with gr.Row():
- font = gr.Textbox(label=f"Font {i + 1}")
- font_is_google = gr.Checkbox(label="Google Font")
- main_fonts.append(font)
- main_is_google.append(font_is_google)
-
- mono_fonts, mono_is_google = [], []
- gr.Markdown("### Monospace Font")
- for i in range(4):
- with gr.Row():
- font = gr.Textbox(label=f"Font {i + 1}")
- font_is_google = gr.Checkbox(label="Google Font")
- mono_fonts.append(font)
- mono_is_google.append(font_is_google)
-
- theme_var_input = []
-
- core_color_suggestions = (
- [f"*primary_{i}" for i in palette_range]
- + [f"*secondary_{i}" for i in palette_range]
- + [f"*neutral_{i}" for i in palette_range]
- )
-
- variable_suggestions = {
- "fill": core_color_suggestions[:],
- "color": core_color_suggestions[:],
- "text_size": [f"*text_{i}" for i in size_range],
- "radius": [f"*radius_{i}" for i in size_range],
- "padding": [f"*spacing_{i}" for i in size_range],
- "gap": [f"*spacing_{i}" for i in size_range],
- "weight": [
- "100",
- "200",
- "300",
- "400",
- "500",
- "600",
- "700",
- "800",
- ],
- "shadow": ["none"],
- "border_width": [],
- }
- for variable in flat_variables:
- if variable.endswith("_dark"):
- continue
- for style_type in variable_suggestions:
- if style_type in variable:
- variable_suggestions[style_type].append("*" + variable)
- break
-
- variable_suggestions["fill"], variable_suggestions["color"] = (
- variable_suggestions["fill"]
- + variable_suggestions["color"][len(core_color_suggestions) :],
- variable_suggestions["color"]
- + variable_suggestions["fill"][len(core_color_suggestions) :],
- )
-
- for group, desc, variables in variable_groups:
- with gr.TabItem(group):
- gr.Markdown(
- desc
- + "\nYou can set these to one of the dropdown values, or clear the dropdown to set a custom value."
- )
- for variable in variables:
- suggestions = []
- for style_type in variable_suggestions:
- if style_type in variable:
- suggestions = variable_suggestions[style_type][:]
- if "*" + variable in suggestions:
- suggestions.remove("*" + variable)
- break
- dropdown = gr.Dropdown(
- label=variable,
- info=get_docstr(variable),
- choices=suggestions,
- allow_custom_value=True,
- )
- theme_var_input.append(dropdown)
-
- # App
-
- with gr.Column(scale=6, elem_id="app"):
- with gr.Column(variant="panel"):
- gr.Markdown(
- """
- # Theme Builder
- Welcome to the theme builder. The left panel is where you create the theme. The different aspects of the theme are broken down into different tabs. Here's how to navigate them:
- 1. First, set the "Source Theme". This will set the default values that you can override.
- 2. Set the "Core Colors", "Core Sizing" and "Core Fonts". These are the core variables that are used to build the rest of the theme.
- 3. The rest of the tabs set specific CSS theme variables. These control finer aspects of the UI. Within these theme variables, you can reference the core variables and other theme variables using the variable name preceded by an asterisk, e.g. `*primary_50` or `*body_text_color`. Clear the dropdown to set a custom value.
- 4. Once you have finished your theme, click on "View Code" below to see how you can integrate the theme into your app. You can also click on "Upload to Hub" to upload your theme to the Hugging Face Hub, where others can download and use your theme.
- """
- )
- with gr.Accordion("View Code", open=False):
- output_code = gr.Code(language="python")
- with gr.Accordion("Upload to Hub", open=False):
- gr.Markdown(
- "You can save your theme on the Hugging Face Hub. HF API write token can be found [here](https://huggingface.co/settings/tokens)."
- )
- with gr.Row():
- theme_name = gr.Textbox(label="Theme Name")
- theme_hf_token = gr.Textbox(label="Hugging Face Write Token")
- theme_version = gr.Textbox(
- label="Version",
- placeholder="Leave blank to automatically update version.",
- )
- upload_to_hub_btn = gr.Button("Upload to Hub")
- theme_upload_status = gr.Markdown(visible=False)
-
- gr.Markdown("Below this panel is a dummy app to demo your theme.")
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(
- ["Option 1", "Option 2", "Option 3"], show_label=False
- )
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg",
- label="Image",
- height=320,
- )
- with gr.Row():
- go_btn = gr.Button(
- "Go", label="Primary Button", variant="primary"
- )
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg"
-
- go_btn.click(
- go,
- [radio, drop, drop_2, check, name],
- img,
- api_name=False,
- )
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1", size="sm")
- btn2 = gr.UploadButton(size="sm")
- stop_btn = gr.Button(
- "Stop", label="Stop Button", variant="stop", size="sm"
- )
-
- gr.Examples(
- examples=[
- [
- "A",
- "Option 1",
- ["Option B"],
- True,
- ],
- [
- "B",
- "Option 2",
- ["Option B", "Option C"],
- False,
- ],
- ],
- inputs=[radio, drop, drop_2, check],
- label="Examples",
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}},
- label="JSON",
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video(
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4"
- )
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ],
- height="200px",
- columns=2,
- )
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- api_name=False,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
- # Event Listeners
-
- secret_css = gr.Textbox(visible=False)
- secret_font = gr.JSON(visible=False)
-
- demo.load( # doing this via python was not working for some reason, so using this hacky method for now
- None,
- None,
- None,
- _js="""() => {
-            document.head.innerHTML += "<style id='theme_css'></style>";
- let evt_listener = window.setTimeout(
- () => {
- load_theme_btn = document.querySelector('#load_theme');
- if (load_theme_btn) {
- load_theme_btn.click();
- window.clearTimeout(evt_listener);
- }
- },
- 100
- );
- }""",
- api_name=False,
- )
-
- theme_inputs = (
- [primary_hue, secondary_hue, neutral_hue]
- + primary_hues
- + secondary_hues
- + neutral_hues
- + [text_size, spacing_size, radius_size]
- + text_sizes
- + spacing_sizes
- + radius_sizes
- + main_fonts
- + main_is_google
- + mono_fonts
- + mono_is_google
- + theme_var_input
- )
-
- def load_theme(theme_name):
- theme = [theme for theme in themes if theme.__name__ == theme_name][0]
-
- parameters = inspect.signature(theme.__init__).parameters
- primary_hue = parameters["primary_hue"].default
- secondary_hue = parameters["secondary_hue"].default
- neutral_hue = parameters["neutral_hue"].default
- text_size = parameters["text_size"].default
- spacing_size = parameters["spacing_size"].default
- radius_size = parameters["radius_size"].default
-
- theme = theme()
-
- font = theme._font[:4]
- font_mono = theme._font_mono[:4]
- font_is_google = [isinstance(f, gr.themes.GoogleFont) for f in font]
- font_mono_is_google = [
- isinstance(f, gr.themes.GoogleFont) for f in font_mono
- ]
-
- def pad_to_4(x):
- return x + [None] * (4 - len(x))
-
- var_output = []
- for variable in flat_variables:
- theme_val = getattr(theme, variable)
- if theme_val is None and variable.endswith("_dark"):
- theme_val = getattr(theme, variable[:-5])
- var_output.append(theme_val)
-
- return (
- [primary_hue.name, secondary_hue.name, neutral_hue.name]
- + primary_hue.expand()
- + secondary_hue.expand()
- + neutral_hue.expand()
- + [text_size.name, spacing_size.name, radius_size.name]
- + text_size.expand()
- + spacing_size.expand()
- + radius_size.expand()
- + pad_to_4([f.name for f in font])
- + pad_to_4(font_is_google)
- + pad_to_4([f.name for f in font_mono])
- + pad_to_4(font_mono_is_google)
- + var_output
- )
-
- def generate_theme_code(
- base_theme, final_theme, core_variables, final_main_fonts, final_mono_fonts
- ):
- base_theme_name = base_theme
- base_theme = [theme for theme in themes if theme.__name__ == base_theme][
- 0
- ]()
-
- parameters = inspect.signature(base_theme.__init__).parameters
- primary_hue = parameters["primary_hue"].default
- secondary_hue = parameters["secondary_hue"].default
- neutral_hue = parameters["neutral_hue"].default
- text_size = parameters["text_size"].default
- spacing_size = parameters["spacing_size"].default
- radius_size = parameters["radius_size"].default
- font = parameters["font"].default
- font = [font] if not isinstance(font, Iterable) else font
- font = [
- gr.themes.Font(f) if not isinstance(f, gr.themes.Font) else f
- for f in font
- ]
- font_mono = parameters["font_mono"].default
- font_mono = (
- [font_mono] if not isinstance(font_mono, Iterable) else font_mono
- )
- font_mono = [
- gr.themes.Font(f) if not isinstance(f, gr.themes.Font) else f
- for f in font_mono
- ]
-
- core_diffs = {}
- specific_core_diffs = {}
- core_var_names = [
- "primary_hue",
- "secondary_hue",
- "neutral_hue",
- "text_size",
- "spacing_size",
- "radius_size",
- ]
- for value_name, base_value, source_class, final_value in zip(
- core_var_names,
- [
- primary_hue,
- secondary_hue,
- neutral_hue,
- text_size,
- spacing_size,
- radius_size,
- ],
- [
- gr.themes.Color,
- gr.themes.Color,
- gr.themes.Color,
- gr.themes.Size,
- gr.themes.Size,
- gr.themes.Size,
- ],
- core_variables,
- ):
- if base_value.name != final_value:
- core_diffs[value_name] = final_value
- source_obj = [
- obj for obj in source_class.all if obj.name == final_value
- ][0]
- final_attr_values = {}
- diff = False
- for attr in dir(source_obj):
- if attr in ["all", "name", "expand"] or attr.startswith("_"):
- continue
- final_theme_attr = (
- value_name.split("_")[0]
- + "_"
- + (attr[1:] if source_class == gr.themes.Color else attr)
- )
- final_attr_values[final_theme_attr] = getattr(
- final_theme, final_theme_attr
- )
- if getattr(source_obj, attr) != final_attr_values[final_theme_attr]:
- diff = True
- if diff:
- specific_core_diffs[value_name] = (source_class, final_attr_values)
-
- font_diffs = {}
-
- final_main_fonts = [font for font in final_main_fonts if font[0]]
- final_mono_fonts = [font for font in final_mono_fonts if font[0]]
- font = font[:4]
- font_mono = font_mono[:4]
- for base_font_set, theme_font_set, font_set_name in [
- (font, final_main_fonts, "font"),
- (font_mono, final_mono_fonts, "font_mono"),
- ]:
- if len(base_font_set) != len(theme_font_set) or any(
- base_font.name != theme_font[0]
- or isinstance(base_font, gr.themes.GoogleFont) != theme_font[1]
- for base_font, theme_font in zip(base_font_set, theme_font_set)
- ):
- font_diffs[font_set_name] = [
- f"gr.themes.GoogleFont('{font_name}')"
- if is_google_font
- else f"'{font_name}'"
- for font_name, is_google_font in theme_font_set
- ]
-
- newline = "\n"
-
- core_diffs_code = ""
- if len(core_diffs) + len(specific_core_diffs) > 0:
- for var_name in core_var_names:
- if var_name in specific_core_diffs:
- cls, vals = specific_core_diffs[var_name]
- core_diffs_code += f""" {var_name}=gr.themes.{cls.__name__}({', '.join(f'''{k}="{v}"''' for k, v in vals.items())}),\n"""
- elif var_name in core_diffs:
- core_diffs_code += (
- f""" {var_name}="{core_diffs[var_name]}",\n"""
- )
-
- font_diffs_code = ""
-
- if len(font_diffs) > 0:
- font_diffs_code = "".join(
- [
- f""" {font_set_name}=[{", ".join(fonts)}],\n"""
- for font_set_name, fonts in font_diffs.items()
- ]
- )
- var_diffs = {}
- for variable in flat_variables:
- base_theme_val = getattr(base_theme, variable)
- final_theme_val = getattr(final_theme, variable)
- if base_theme_val is None and variable.endswith("_dark"):
- base_theme_val = getattr(base_theme, variable[:-5])
- if base_theme_val != final_theme_val:
- var_diffs[variable] = getattr(final_theme, variable)
-
- newline = "\n"
-
- vars_diff_code = ""
- if len(var_diffs) > 0:
- vars_diff_code = f""".set(
- {(',' + newline + " ").join([f"{k}='{v}'" for k, v in var_diffs.items()])}
-)"""
-
- output = f"""
-import gradio as gr
-
-theme = gr.themes.{base_theme_name}({newline if core_diffs_code or font_diffs_code else ""}{core_diffs_code}{font_diffs_code}){vars_diff_code}
-
-with gr.Blocks(theme=theme) as demo:
- ..."""
- return output
-
- history = gr.State([])
- current_theme = gr.State(None)
-
- def render_variables(history, base_theme, *args):
- primary_hue, secondary_hue, neutral_hue = args[0:3]
- primary_hues = args[3 : 3 + len(palette_range)]
- secondary_hues = args[3 + len(palette_range) : 3 + 2 * len(palette_range)]
- neutral_hues = args[3 + 2 * len(palette_range) : 3 + 3 * len(palette_range)]
- text_size, spacing_size, radius_size = args[
- 3 + 3 * len(palette_range) : 6 + 3 * len(palette_range)
- ]
- text_sizes = args[
- 6
- + 3 * len(palette_range) : 6
- + 3 * len(palette_range)
- + len(size_range)
- ]
- spacing_sizes = args[
- 6
- + 3 * len(palette_range)
- + len(size_range) : 6
- + 3 * len(palette_range)
- + 2 * len(size_range)
- ]
- radius_sizes = args[
- 6
- + 3 * len(palette_range)
- + 2 * len(size_range) : 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- ]
- main_fonts = args[
- 6
- + 3 * len(palette_range)
- + 3 * len(size_range) : 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 4
- ]
- main_is_google = args[
- 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 4 : 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 8
- ]
- mono_fonts = args[
- 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 8 : 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 12
- ]
- mono_is_google = args[
- 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 12 : 6
- + 3 * len(palette_range)
- + 3 * len(size_range)
- + 16
- ]
- remaining_args = args[
- 6 + 3 * len(palette_range) + 3 * len(size_range) + 16 :
- ]
-
- final_primary_color = gr.themes.Color(*primary_hues)
- final_secondary_color = gr.themes.Color(*secondary_hues)
- final_neutral_color = gr.themes.Color(*neutral_hues)
- final_text_size = gr.themes.Size(*text_sizes)
- final_spacing_size = gr.themes.Size(*spacing_sizes)
- final_radius_size = gr.themes.Size(*radius_sizes)
-
- final_main_fonts = []
- font_weights = set()
- for attr, val in zip(flat_variables, remaining_args):
- if "weight" in attr:
- font_weights.add(val)
- font_weights = sorted(font_weights)
-
- for main_font, is_google in zip(main_fonts, main_is_google):
- if not main_font:
- continue
- if is_google:
- main_font = gr.themes.GoogleFont(main_font, weights=font_weights)
- final_main_fonts.append(main_font)
- final_mono_fonts = []
- for mono_font, is_google in zip(mono_fonts, mono_is_google):
- if not mono_font:
- continue
- if is_google:
- mono_font = gr.themes.GoogleFont(mono_font, weights=font_weights)
- final_mono_fonts.append(mono_font)
-
- theme = gr.themes.Base(
- primary_hue=final_primary_color,
- secondary_hue=final_secondary_color,
- neutral_hue=final_neutral_color,
- text_size=final_text_size,
- spacing_size=final_spacing_size,
- radius_size=final_radius_size,
- font=final_main_fonts,
- font_mono=final_mono_fonts,
- )
-
- theme.set(**dict(zip(flat_variables, remaining_args)))
- new_step = (base_theme, args)
- if len(history) == 0 or str(history[-1]) != str(new_step):
- history.append(new_step)
-
- return (
- history,
- theme._get_theme_css(),
- theme._stylesheets,
- generate_theme_code(
- base_theme,
- theme,
- (
- primary_hue,
- secondary_hue,
- neutral_hue,
- text_size,
- spacing_size,
- radius_size,
- ),
- list(zip(main_fonts, main_is_google)),
- list(zip(mono_fonts, mono_is_google)),
- ),
- theme,
- )
-
- def attach_rerender(evt_listener):
- return evt_listener(
- render_variables,
- [history, base_theme_dropdown] + theme_inputs,
- [history, secret_css, secret_font, output_code, current_theme],
- api_name=False,
- ).then(
- None,
- [secret_css, secret_font],
- None,
- _js="""(css, fonts) => {
- document.getElementById('theme_css').innerHTML = css;
- let existing_font_links = document.querySelectorAll('link[rel="stylesheet"][href^="https://fonts.googleapis.com/css"]');
- existing_font_links.forEach(link => {
- if (fonts.includes(link.href)) {
- fonts = fonts.filter(font => font != link.href);
- } else {
- link.remove();
- }
- });
- fonts.forEach(font => {
- let link = document.createElement('link');
- link.rel = 'stylesheet';
- link.href = font;
- document.head.appendChild(link);
- });
- }""",
- api_name=False,
- )
-
- def load_color(color_name):
- color = [color for color in colors if color.name == color_name][0]
- return [getattr(color, f"c{i}") for i in palette_range]
-
- attach_rerender(
- primary_hue.select(
- load_color, primary_hue, primary_hues, api_name=False
- ).then
- )
- attach_rerender(
- secondary_hue.select(
-            load_color, secondary_hue, secondary_hues, api_name=False
- ).then
- )
- attach_rerender(
- neutral_hue.select(
- load_color, neutral_hue, neutral_hues, api_name=False
- ).then
- )
- for hue_set in (primary_hues, secondary_hues, neutral_hues):
- for hue in hue_set:
- attach_rerender(hue.blur)
-
- def load_size(size_name):
- size = [size for size in sizes if size.name == size_name][0]
- return [getattr(size, i) for i in size_range]
-
- attach_rerender(
- text_size.change(load_size, text_size, text_sizes, api_name=False).then
- )
- attach_rerender(
- spacing_size.change(
- load_size, spacing_size, spacing_sizes, api_name=False
- ).then
- )
- attach_rerender(
- radius_size.change(
- load_size, radius_size, radius_sizes, api_name=False
- ).then
- )
-
- attach_rerender(
- load_theme_btn.click(
- load_theme, base_theme_dropdown, theme_inputs, api_name=False
- ).then
- )
-
- for theme_box in (
- text_sizes + spacing_sizes + radius_sizes + main_fonts + mono_fonts
- ):
- attach_rerender(theme_box.blur)
- attach_rerender(theme_box.submit)
- for theme_box in theme_var_input:
- attach_rerender(theme_box.blur)
- attach_rerender(theme_box.select)
- for checkbox in main_is_google + mono_is_google:
- attach_rerender(checkbox.select)
-
- dark_mode_btn.click(
- None,
- None,
- None,
- _js="""() => {
- if (document.querySelectorAll('.dark').length) {
- document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
- } else {
- document.querySelector('body').classList.add('dark');
- }
- }""",
- api_name=False,
- )
-
- def undo(history_var):
- if len(history_var) <= 1:
- return {history: gr.skip()}
- else:
- history_var.pop()
- old = history_var.pop()
- return [history_var, old[0]] + list(old[1])
-
- attach_rerender(
- undo_btn.click(
- undo,
- [history],
- [history, base_theme_dropdown] + theme_inputs,
- api_name=False,
- ).then
- )
-
- def upload_to_hub(data):
- try:
- theme_url = data[current_theme].push_to_hub(
- repo_name=data[theme_name],
- version=data[theme_version] or None,
- hf_token=data[theme_hf_token],
- theme_name=data[theme_name],
- )
- space_name = "/".join(theme_url.split("/")[-2:])
- return (
- gr.Markdown.update(
- value=f"Theme uploaded [here!]({theme_url})! Load it as `gr.Blocks(theme='{space_name}')`",
- visible=True,
- ),
- "Upload to Hub",
- )
- except Exception as e:
- return (
- gr.Markdown.update(
- value=f"Error: {e}",
- visible=True,
- ),
- "Upload to Hub",
- )
-
- upload_to_hub_btn.click(
- lambda: "Uploading...",
- None,
- upload_to_hub_btn,
- api_name=False,
- ).then(
- upload_to_hub,
- {
- current_theme,
- theme_name,
- theme_hf_token,
- theme_version,
- },
- [theme_upload_status, upload_to_hub_btn],
- api_name=False,
- )
-
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Ezcht.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Ezcht.py
deleted file mode 100644
index baec214f7e0e936ea06bffa357e1bd2b77cd4089..0000000000000000000000000000000000000000
--- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Ezcht.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt4.ezchat.top'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
-        'temperature': temperature,
- 'presence_penalty': 0,
- 'messages': messages,
- }
-    response = requests.post(url + '/api/openai/v1/chat/completions',
-                             headers=headers, json=data, stream=True)
-
- if stream:
- for chunk in response.iter_content(chunk_size=None):
- chunk = chunk.decode('utf-8')
- if chunk.strip():
- message = json.loads(chunk)['choices'][0]['message']['content']
- yield message
- else:
- message = response.json()['choices'][0]['message']['content']
- yield message
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
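
`_create_completion` is a generator that yields message chunks from the chat-completions endpoint. A minimal consumption sketch, assuming the module is importable as `g4f.Provider.Providers.Ezcht` and that the upstream service still answers in the OpenAI-style shape the code expects; neither is guaranteed:

```python
from g4f.Provider.Providers import Ezcht  # assumed import path

messages = [{"role": "user", "content": "Say hello in one sentence."}]
for chunk in Ezcht._create_completion("gpt-3.5-turbo", messages, stream=True):
    print(chunk, end="", flush=True)
```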
diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2pose_models/audio_encoder.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2pose_models/audio_encoder.py
deleted file mode 100644
index 6279d2014a2e786a6c549f084339e18d00e50331..0000000000000000000000000000000000000000
--- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/audio2pose_models/audio_encoder.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-class Conv2d(nn.Module):
- def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.conv_block = nn.Sequential(
- nn.Conv2d(cin, cout, kernel_size, stride, padding),
- nn.BatchNorm2d(cout)
- )
- self.act = nn.ReLU()
- self.residual = residual
-
- def forward(self, x):
- out = self.conv_block(x)
- if self.residual:
- out += x
- return self.act(out)
-
-class AudioEncoder(nn.Module):
- def __init__(self, wav2lip_checkpoint, device):
- super(AudioEncoder, self).__init__()
-
- self.audio_encoder = nn.Sequential(
- Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(256, 512, kernel_size=3, stride=1, padding=0),
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
-
- #### load the pre-trained audio_encoder, we do not need to load wav2lip model here.
- # wav2lip_state_dict = torch.load(wav2lip_checkpoint, map_location=torch.device(device))['state_dict']
- # state_dict = self.audio_encoder.state_dict()
-
- # for k,v in wav2lip_state_dict.items():
- # if 'audio_encoder' in k:
- # state_dict[k.replace('module.audio_encoder.', '')] = v
- # self.audio_encoder.load_state_dict(state_dict)
-
-
- def forward(self, audio_sequences):
- # audio_sequences = (B, T, 1, 80, 16)
- B = audio_sequences.size(0)
-
- audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)
-
- audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1
- dim = audio_embedding.shape[1]
- audio_embedding = audio_embedding.reshape((B, -1, dim, 1, 1))
-
- return audio_embedding.squeeze(-1).squeeze(-1) #B seq_len+1 512
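
Each (1, 80, 16) mel chunk is reduced by the convolution stack to a single 512-dim vector, and the batch/time axes are folded back before returning. A small shape sanity check, assuming the `AudioEncoder` class above is in scope (this stripped-down version never reads the checkpoint argument, so `None` is fine here):

```python
import torch

encoder = AudioEncoder(wav2lip_checkpoint=None, device="cpu")
encoder.eval()  # use running BatchNorm stats rather than batch statistics

# Batch of 2 sequences, each with 5 mel-spectrogram chunks of shape (1, 80, 16).
audio_sequences = torch.randn(2, 5, 1, 80, 16)
with torch.no_grad():
    emb = encoder(audio_sequences)
print(emb.shape)  # torch.Size([2, 5, 512])
```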
diff --git a/spaces/dentadelta123/GuardrailDetection/localfile.py b/spaces/dentadelta123/GuardrailDetection/localfile.py
deleted file mode 100644
index 48a231cb49e1c4756f18b80a6ce4b582e24dfd31..0000000000000000000000000000000000000000
--- a/spaces/dentadelta123/GuardrailDetection/localfile.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import clip
-import torch
-import gradio as gr
-import torchvision.transforms as T
-from PIL import Image
-try:
- from torchvision.transforms import InterpolationMode
- BICUBIC = InterpolationMode.BICUBIC
-except ImportError:
- BICUBIC = Image.BICUBIC
-import warnings
-warnings.filterwarnings("ignore")
-
-#MODEL_PATH = '/media/delta/S/clipmodel.pth' #CHANGE THIS IF YOU WANT TO CHANGE THE MODEL PATH
-MODEL_PATH ='/media/delta/S/clipmodel_large.pth' #CHANGE THIS IF YOU WANT TO CHANGE THE MODEL PATH
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = clip.model.build_model(torch.load(MODEL_PATH)).to(device)
-preprocess = clip.clip._transform(model.visual.input_resolution)
-
-def zeroshot_detection(Press_Clear_Dont_Stack_Image):
- inp = Press_Clear_Dont_Stack_Image
-
- captions = "photo of a guardrail, no guardrail in the photo" #CHANGE THIS IF YOU WANT TO CHANGE THE PREDICTION: separate by commas
-
-    captions = [c.strip() for c in captions.split(',')]
- caption = clip.tokenize(captions).to(device)
- image = preprocess(inp).unsqueeze(0).to(device)
- with torch.no_grad():
- image_features = model.encode_image(image)
- text_features = model.encode_text(caption)
- image_features /= image_features.norm(dim=-1, keepdim=True)
- text_features /= text_features.norm(dim=-1, keepdim=True)
- similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
- values, indices = similarity[0].topk(len(captions))
- return {captions[indices[i].item()]: float(values[i].item()) for i in range(len(values))}
-
-gr.Interface(fn=zeroshot_detection,
- inputs=[gr.Image(type="pil")],
- outputs=gr.Label(num_top_classes=1)).launch()
diff --git a/spaces/diacanFperku/AutoGPT/Condor Plume L1 Flash File MT6737M 7.0 Hang Logo Dead Fix Firmware.md b/spaces/diacanFperku/AutoGPT/Condor Plume L1 Flash File MT6737M 7.0 Hang Logo Dead Fix Firmware.md
deleted file mode 100644
index 05aad8d6513f4bfb8fb2ac9dd5b16aaa63d7a641..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Condor Plume L1 Flash File MT6737M 7.0 Hang Logo Dead Fix Firmware.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Condor Plume L1 Flash File MT6737M 7.0 Hang Logo Dead Fix Firmware
Download ☆☆☆☆☆ https://gohhs.com/2uFUkR
-
-Flash Iris Vox 4 flashtool بطريقة مجربة بدون بوكس IRIS VOX 4 تÙليش إريس ... vox pop frp iris vox energy frp reset bypass google account 7.0 lava iris 88 frp remove file ... Bypass FRP condor plume L1==) http://bit.ly/2WCCQCz Bypass FRP condor ... Firmware Condor Plume H1 (SP-620) Fix Hang On Logo By CM2 Dongle ... 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Download Teowin Torrent [PATCHED].md b/spaces/diacanFperku/AutoGPT/Download Teowin Torrent [PATCHED].md
deleted file mode 100644
index d2d6fd85ed4e7982464503031445f4fc124c7cec..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Download Teowin Torrent [PATCHED].md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-7 results. Zawgyi Myanmar Keyboard, Zawgyi Font & Keyboard Download for PC Windows. Many downloads like Teowin Torrent may also include a serial number,. Teowin 3D software for design and cabinet manufacturing Profesional tool HD. Click here to read comments while watching the video.
-Create a free account or login to download free, high quality 3D files in different file types. 30 Days Free Trial. teowin software download. teowin software download free. . The Free version of the app is free to use to search for and download high quality video, Music, audiobooks and TV content. Download and install Media Go APK 1.6.1 from apk download. Another good option is an official app from Sony called the PlayStation Mobile app.
-download teowin torrent
DOWNLOAD ❤ https://gohhs.com/2uFUvB
-Join us in our quest to create the best media player for Android. The player is available for both Android and iOS. The player allows you to enjoy millions of movies and TV shows. . Streaming Apps For Android. Watch Best Movies And TV Shows Without A Premium. . with this android app you can download mp3 songs and music videos. . TViO Player Latest Version 4.1.3 Beta Apk Download Full Version For Android. Veja Estrutura Dos Módulos Final Teowin 4.1.3 e seus Poderes. Obrigado por usar a Teowin. Download TViO Player APK 4.0.20 Poy. Also like to download and install the TViO Player for android? Media Go Download apk 1.7.2 free from our android apps and apk. Teowin is a free android app, similar to netflix (for windows) and vodafone app (for. . Screenshot Of “Teowin” Android App. Free Download and installation of apps are now supported on the Play Store version of Android 5.0 and above. SmartDownload app Download latest version of SmartDownload app for android free from Apkpk.app. Free Download, Teowin..
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mai Drum Kit __TOP__ Freel.md b/spaces/diacanFperku/AutoGPT/Mai Drum Kit __TOP__ Freel.md
deleted file mode 100644
index 3b37ea520ec35721e260621a1a5b678e9ba735e0..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mai Drum Kit __TOP__ Freel.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-the drum kit for the original djembe are actually in the sampler drumkit digital drum kit sampler free. this are some samples that will help you to learn how to take drum.
the elements of a drum kit are the following:
kicks - a kick is made by the musician with a mallet to hit the drum head. the two main places you will look for the drum elements are on the mic stand, and in the second rack.
snares - snares are also commonly referred to as “stick beats” and are commonly found in the rack above the kick.
hats - hats are typically placed on top of the snare, over or next to the speaker. the rack above the speaker will usually have a few hats. a hat is commonly referred to as a top hat or a china hat.
hi-hats - a hi-hat is used to separate the snare and kick/snares from each other. normally there will be one hi-hat on each side of the rack to the left and right side.
percussion - percussion can be broken up in to dry and wet. dry percussion is usually made up of various types of tambourines. dried wood blocks are not commonly used, but perhaps once in a while. wet percussion is mainly made up of bongos and congas.
shells - shells are typically used to remove various levels of frequencies from the drums to create a clean sound. these are typically placed on the rack just above the snare.
congas - congas are typically used to accent the drums to either fill the space or to slightly change the timbre of the drum element. a common use for congas would be to add a little bit of percussion at the end of a song to finish the song.
toms - toms are typically used to create rhythmic tension, such as during an intro. toms are often very close to the microphone to get a tight placement.
sidetones - sidetones are used to create a slightly different pitch that is used to fill in the gaps in the song. it is important to make sure that the sidetone does not interrupt the drums.
sequencer - the metronome is not used for the drums, but rather for the rhythm of the song. typically the metronome can be found in the rack to the far left or right of the drums.
the drums, and the various rack elements will be described in more detail in the section below.
-Mai Drum Kit Freel
Download ✺✺✺ https://gohhs.com/2uFUzI
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mentalism-Tricks-Pdf-Free-Downloadl-FREE.md b/spaces/diacanFperku/AutoGPT/Mentalism-Tricks-Pdf-Free-Downloadl-FREE.md
deleted file mode 100644
index 8e7d71dd4f5ee8adbf8c443edc6fe1ced2421585..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mentalism-Tricks-Pdf-Free-Downloadl-FREE.md
+++ /dev/null
@@ -1,87 +0,0 @@
-## Mentalism Tricks Pdf Free Downloadl
-
-
-
-**Download File --->>> [https://conttooperting.blogspot.com/?l=2twNTm](https://conttooperting.blogspot.com/?l=2twNTm)**
-
-
-
- I can help you with that. Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Mentalism Tricks Pdf Free Downloadl":
-
-# How to Learn Mentalism Tricks for Free with These PDF Downloads
-
-
-
-Mentalism is the art of using psychological techniques and subtle cues to influence people's thoughts, feelings and actions. Mentalists can perform amazing feats such as mind reading, hypnosis, prediction, telepathy and more.
-
-
-
-If you want to learn mentalism tricks for free, you don't need to spend a fortune on books, courses or DVDs. You can find plenty of resources online that offer free PDF downloads of mentalism tricks that you can practice and master.
-
-
-
-In this article, we will share some of the best sources of free PDF downloads of mentalism tricks that you can use to impress your friends, family and even strangers. Whether you are a beginner or an advanced learner, you will find something useful and interesting in these PDFs.
-
-
-
-## 1. Mentalism Zone
-
-
-
-Mentalism Zone is a website that offers a collection of free mentalism books and guides in PDF format. You can download them instantly and start learning from some of the most famous mentalists in history, such as Derren Brown, David Blaine, Uri Geller and more.
-
-
-
-Some of the PDFs you can find on Mentalism Zone are:
-
-
-
-- [13 Steps to Mentalism by Tony Corinda](https://mentalismzone.com/13-steps-to-mentalism-pdf/): This is a classic book that covers all the basics of mentalism, from cold reading to telepathy. It has 13 chapters that teach you 13 essential skills of mentalism.
-
-- [Mental Magic by Howard Thurston](https://mentalismzone.com/mental-magic-pdf/): This is a book that teaches you how to perform various mental magic tricks using cards, coins, dice and other props. It also explains the psychology behind each trick and how to present it effectively.
-
-- [Psychological Subtleties by Banachek](https://mentalismzone.com/psychological-subtleties-pdf/): This is a book that reveals the secrets of using subtle psychological cues to influence people's thoughts and actions. It covers topics such as body language, eye contact, verbal suggestions and more.
-
-
-
-## 2. Free Mentalism Tricks
-
-
-
-Free Mentalism Tricks is a blog that offers a variety of free mentalism tricks in PDF format. You can download them easily and learn how to perform them step by step. The blog also provides tips and advice on how to improve your mentalism skills and presentation.
-
-
-
-Some of the PDFs you can find on Free Mentalism Tricks are:
-
-
-
-- [Free Mentalism Tricks PDF](https://freementalismtricks.com/free-mentalism-tricks-pdf/): This is a PDF that contains 10 simple but effective mentalism tricks that you can do with everyday objects. You will learn how to read minds, predict outcomes, influence choices and more.
-
-- [Free Mentalism Ebook](https://freementalismtricks.com/free-mentalism-ebook/): This is a PDF that contains 50 amazing mentalism tricks that you can perform with cards, coins, paper, pencils and other common items. You will learn how to do mind-blowing effects such as mind control, telekinesis, remote viewing and more.
-
-- [Free Mentalism Book](https://freementalismtricks.com/free-mentalism-book/): This is a PDF that contains 100 incredible mentalism tricks that you can perform with or without props. You will learn how to do astonishing feats such as levitation, invisibility, clairvoyance and more.
-
-
-
-## 3. Learn Mentalism Tricks
-
-
-
-Learn Mentalism Tricks is a website that offers a free PDF download of a book called "The Art of Mentalism". This book teaches you how to become a master mentalist in 30 days by following a simple and practical plan.
-
-
-
-The book covers topics such as:
-
-
-
-- The history and principles of mentalism
-
-- The essential skills and techniques of mentalism
-
-- The best mentalism routines and effects for any occasion
-
-- The secrets of creating your own mentalism tricks
-
-- The tips and tricks dfd1c89656
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Morcheeba - Fragments Of Freedom [ChattChitto RG].md b/spaces/diacanFperku/AutoGPT/Morcheeba - Fragments Of Freedom [ChattChitto RG].md
deleted file mode 100644
index 48016cbd0903e4843b3377cc90df8994e46f44fb..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Morcheeba - Fragments Of Freedom [ChattChitto RG].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Morcheeba - Fragments Of Freedom [ChattChitto RG]
Download ✒ https://gohhs.com/2uFTq5
-
- d5da3c52bf
-
-
-
diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/preprocess_text.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/preprocess_text.py
deleted file mode 100644
index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/preprocess_text.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-import shutil
-stage = [1,2,3]
-
-transcription_path = 'filelists/short_character_anno.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- #language = "ZH"
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except:
- print("err!", utt)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- file_path = transcription_path+'.cleaned'
- shutil.copy(file_path,'./filelists/train.list')
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path))
- config['data']["n_speakers"] = current_sid #
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
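
For reference, the heart of stage 2 in the preprocess_text.py deleted above is a per-speaker hold-out split: utterances are grouped by speaker, a few from each speaker go to validation, and the validation set is capped at a global maximum. Below is a minimal, self-contained sketch of that logic; the function name `split_by_speaker` and the demo data are illustrative, not part of the original script.

```python
from collections import defaultdict
from random import shuffle

def split_by_speaker(lines, val_per_spk=4, max_val_total=8):
    """Per-speaker hold-out: a few utterances per speaker go to validation."""
    spk_utt_map = defaultdict(list)
    for line in lines:
        # Each cleaned line looks like 'utt|spk|language|norm_text|phones|tones|word2ph'.
        spk = line.split('|')[1]
        spk_utt_map[spk].append(line)

    train_list, val_list = [], []
    for spk, utts in spk_utt_map.items():
        shuffle(utts)
        val_list += utts[:val_per_spk]
        train_list += utts[val_per_spk:]

    # Cap the total validation size; the overflow goes back into training.
    if len(val_list) > max_val_total:
        train_list += val_list[max_val_total:]
        val_list = val_list[:max_val_total]
    return train_list, val_list

if __name__ == "__main__":
    demo = [f"utt{i}|spk{i % 2}|ZH|text|p|t|w" for i in range(20)]
    train, val = split_by_speaker(demo)
    print(len(train), len(val))  # 12 8
```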
diff --git a/spaces/doevent/background-remover/README.md b/spaces/doevent/background-remover/README.md
deleted file mode 100644
index 99118289c20bc8756badf1ab9fe5810de85b075d..0000000000000000000000000000000000000000
--- a/spaces/doevent/background-remover/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Background Remover
-emoji: 🦀
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/dolceschokolade/chatbot-mini/hooks/useCreateReducer.ts b/spaces/dolceschokolade/chatbot-mini/hooks/useCreateReducer.ts
deleted file mode 100644
index e42ba680ee1d15d7746811df55ae0e4a5c234178..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/hooks/useCreateReducer.ts
+++ /dev/null
@@ -1,30 +0,0 @@
-import { useMemo, useReducer } from 'react';
-
-// Extracts property names from initial state of reducer to allow typesafe dispatch objects
-export type FieldNames<T> = {
- [K in keyof T]: T[K] extends string ? K : K;
-}[keyof T];
-
-// Returns the Action Type for the dispatch object to be used for typing in things like context
-export type ActionType<T> =
- | { type: 'reset' }
- | { type?: 'change'; field: FieldNames<T>; value: any };
-
-// Returns a typed dispatch and state
-export const useCreateReducer = <T,>({ initialState }: { initialState: T }) => {
- type Action =
- | { type: 'reset' }
- | { type?: 'change'; field: FieldNames<T>; value: any };
-
- const reducer = (state: T, action: Action) => {
- if (!action.type) return { ...state, [action.field]: action.value };
-
- if (action.type === 'reset') return initialState;
-
- throw new Error();
- };
-
- const [state, dispatch] = useReducer(reducer, initialState);
-
- return useMemo(() => ({ state, dispatch }), [state, dispatch]);
-};
diff --git a/spaces/dolceschokolade/chatbot-mini/pages/_app.tsx b/spaces/dolceschokolade/chatbot-mini/pages/_app.tsx
deleted file mode 100644
index ad28961180ef5983ef2d9d8d70ba3d40f2c8324e..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/pages/_app.tsx
+++ /dev/null
@@ -1,25 +0,0 @@
-import { Toaster } from 'react-hot-toast';
-import { QueryClient, QueryClientProvider } from 'react-query';
-
-import { appWithTranslation } from 'next-i18next';
-import type { AppProps } from 'next/app';
-import { Inter } from 'next/font/google';
-
-import '@/styles/globals.css';
-
-const inter = Inter({ subsets: ['latin'] });
-
-function App({ Component, pageProps }: AppProps<{}>) {
- const queryClient = new QueryClient();
-
- return (
-    <div className={inter.className}>
-      <Toaster />
-      <QueryClientProvider client={queryClient}>
-        <Component {...pageProps} />
-      </QueryClientProvider>
-    </div>
- );
-}
-
-export default appWithTranslation(App);
diff --git a/spaces/drift-ai/recruiter-assistant/Makefile b/spaces/drift-ai/recruiter-assistant/Makefile
deleted file mode 100644
index fd81989af14d4bdce99ee0c81755d6bcfc46852c..0000000000000000000000000000000000000000
--- a/spaces/drift-ai/recruiter-assistant/Makefile
+++ /dev/null
@@ -1,2 +0,0 @@
-deps:
- pipenv requirements > requirements.txt
diff --git a/spaces/dylanebert/gaussian-viewer/README.md b/spaces/dylanebert/gaussian-viewer/README.md
deleted file mode 100644
index 3d52146f55d23a200015fb5f6720874b9a86ad79..0000000000000000000000000000000000000000
--- a/spaces/dylanebert/gaussian-viewer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: gaussian-viewer
-emoji: 📺
-colorFrom: gray
-colorTo: gray
-sdk: static
-app_file: public/index.html
----
-
-This repository contains the frontend submodule of the [gaussian-viewer](https://github.com/dylanebert/gaussian-viewer) repository.
-
-Based on [3D Gaussian Splatting for Real-time Radiance Field Rendering](https://arxiv.org/abs/2308.04079)
\ No newline at end of file
diff --git a/spaces/epexVfeibi/Imagedeblurr/3 Tamil Movie 720p Download [PATCHED].md b/spaces/epexVfeibi/Imagedeblurr/3 Tamil Movie 720p Download [PATCHED].md
deleted file mode 100644
index 8ae499feeaeb37617cf57c8e00974f06a89280c0..0000000000000000000000000000000000000000
--- a/spaces/epexVfeibi/Imagedeblurr/3 Tamil Movie 720p Download [PATCHED].md
+++ /dev/null
@@ -1,17 +0,0 @@
-3 tamil movie 720p download
Download Zip ->>> https://jinyurl.com/2uErC6
-
-hello friends, welcome to our channel #MrIntellect #tamilmovies #tamil720pmovies #tamilhdmovies in this video you... Chapangya - Tamil Finger Puppet Story.
-Family Story for Kids.
-children's story.
-Tamil Finger Puppets
-Vor 4 years 1
-Chapangya - Tamil Finger Puppet Story.
-Tamil Finger Puppets Chapangya - Nilavu...
-Lohan / Lohan Story / Tamil Finger Puppet Story
-Vor 4 years 3
-Lohan / Lohan Story / Tamil Finger Puppet Story - Nilavu ​​: Tamil Finger Puppets Chapangya - Sasikumar
-Vor 4 years 7
-Family Story for Kids. 8a78ff9644
-
-
-
diff --git a/spaces/eson/tokenizer-arena/vocab/baichuan/Baichuan-7B/configuration_baichuan.py b/spaces/eson/tokenizer-arena/vocab/baichuan/Baichuan-7B/configuration_baichuan.py
deleted file mode 100644
index 5057100d9545f7f190ac14ff8a226e5b7a2a3dd1..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/vocab/baichuan/Baichuan-7B/configuration_baichuan.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# coding=utf-8
-# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
-#
-# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
-# and OPT implementations in this library. It has been modified from its
-# original forms to accommodate minor architectural differences compared
-# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from transformers.configuration_utils import PretrainedConfig
-from transformers.utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-
-class BaiChuanConfig(PretrainedConfig):
- model_type = "baichuan"
- keys_to_ignore_at_inference = ["past_key_values"]
-
- def __init__(
- self,
- vocab_size=64000,
- hidden_size=4096,
- intermediate_size=11008,
- num_hidden_layers=32,
- num_attention_heads=32,
- hidden_act="silu",
- max_position_embeddings=4096,
- initializer_range=0.02,
- rms_norm_eps=1e-6,
- use_cache=True,
- pad_token_id=0,
- bos_token_id=1,
- eos_token_id=2,
- tie_word_embeddings=False,
- **kwargs,
- ):
- self.vocab_size = vocab_size
- self.max_position_embeddings = max_position_embeddings
- self.hidden_size = hidden_size
- self.intermediate_size = intermediate_size
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.hidden_act = hidden_act
- self.initializer_range = initializer_range
- self.rms_norm_eps = rms_norm_eps
- self.use_cache = use_cache
- super().__init__(
- pad_token_id=pad_token_id,
- bos_token_id=bos_token_id,
- eos_token_id=eos_token_id,
- tie_word_embeddings=tie_word_embeddings,
- **kwargs,
- )
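
For reference, the configuration class deleted above follows the standard `PretrainedConfig` pattern from `transformers`. A minimal usage sketch, assuming `transformers` is installed and the file is importable as `configuration_baichuan`; the save directory is illustrative.

```python
from configuration_baichuan import BaiChuanConfig

# Defaults mirror the 7B-scale settings in the __init__ signature above.
config = BaiChuanConfig(
    vocab_size=64000,
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
)
print(config.model_type)                     # "baichuan"
config.save_pretrained("./baichuan-config")  # writes config.json to that folder
```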
diff --git a/spaces/eson/tokenizer-arena/vocab/moss/moss-moon-003-sft/tokenization_moss.py b/spaces/eson/tokenizer-arena/vocab/moss/moss-moon-003-sft/tokenization_moss.py
deleted file mode 100644
index 7a61098bd0e2cc91cfc16309cb654c1c471ff851..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/vocab/moss/moss-moon-003-sft/tokenization_moss.py
+++ /dev/null
@@ -1,368 +0,0 @@
-"""Tokenization classes for Moss"""
-
-import json
-import os
-import numpy as np
-import regex as re
-
-from functools import lru_cache
-from typing import TYPE_CHECKING, List, Optional, Tuple, Union
-
-from transformers.utils import is_tf_available, is_torch_available, logging
-from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
-
-
-if TYPE_CHECKING:
- if is_torch_available():
- import torch
- if is_tf_available():
- import tensorflow as tf
-
-
-logger = logging.get_logger(__name__)
-
-VOCAB_FILES_NAMES = {
- "vocab_file": "vocab.json",
- "merges_file": "merges.txt",
-}
-
-PRETRAINED_VOCAB_FILES_MAP = {
- "vocab_file": {
- "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/vocab.json",
- "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/vocab.json",
- "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/vocab.json",
- },
- "merges_file": {
- "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/merge.txt",
- "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/merge.txt",
- "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/merge.txt",
- },
-}
-
-PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
- "fnlp/moss-moon-003-base": 2048,
- "fnlp/moss-moon-003-sft": 2048,
- "fnlp/moss-moon-003-sft-plugin": 2048,
-}
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control
- characters the bpe code barfs on.
-
- The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
- if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
- decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
- tables between utf-8 bytes and unicode strings.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """
- Return set of symbol pairs in a word.
-
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-class MossTokenizer(PreTrainedTokenizer):
- """
- Construct a Moss tokenizer. Based on byte-level Byte-Pair-Encoding.
-
- This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
- be encoded differently whether it is at the beginning of the sentence (without space) or not:
-
- You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
- call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
-
-
-
- When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
-
-
-
- This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
- this superclass for more information regarding those methods.
-
- Args:
- vocab_file (`str`):
- Path to the vocabulary file.
- merges_file (`str`):
- Path to the merges file.
- errors (`str`, *optional*, defaults to `"replace"`):
- Paradigm to follow when decoding bytes to UTF-8. See
- [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
- unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
- token instead.
- bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The beginning of sequence token.
- eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The end of sequence token.
- add_prefix_space (`bool`, *optional*, defaults to `False`):
- Whether or not to add an initial space to the input. This allows to treat the leading word just as any
- other word. (Moss tokenizer detect beginning of words by the preceding space).
- """
-
- vocab_files_names = VOCAB_FILES_NAMES
- pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
- max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
- model_input_names = ["input_ids", "attention_mask"]
-
- def __init__(
- self,
- vocab_file,
- merges_file,
- errors="replace",
- unk_token="<|endoftext|>",
- bos_token="<|endoftext|>",
- eos_token="<eom>",
- pad_token=None,
- add_prefix_space=False,
- add_bos_token=False,
- **kwargs,
- ):
- bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
- eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
- unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
- pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
- super().__init__(
- errors=errors,
- unk_token=unk_token,
- bos_token=bos_token,
- eos_token=eos_token,
- pad_token=pad_token,
- add_prefix_space=add_prefix_space,
- add_bos_token=add_bos_token,
- **kwargs,
- )
- self.add_bos_token = add_bos_token
-
- with open(vocab_file, encoding="utf-8") as vocab_handle:
- self.encoder = json.load(vocab_handle)
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- with open(merges_file, encoding="utf-8") as merges_handle:
- bpe_merges = merges_handle.read().split("\n")[1:-1]
- bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
- self.add_prefix_space = add_prefix_space
-
- # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")
-
- @property
- def vocab_size(self):
- return len(self.encoder)
-
- def get_vocab(self):
- return dict(self.encoder, **self.added_tokens_encoder)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- except ValueError:
- new_word.extend(word[i:])
- break
- else:
- new_word.extend(word[i:j])
- i = j
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
- if self.add_bos_token:
- bos_token_ids = [self.bos_token_id]
- else:
- bos_token_ids = []
-
- output = bos_token_ids + token_ids_0
-
- if token_ids_1 is None:
- return output
-
- return output + bos_token_ids + token_ids_1
-
- def _tokenize(self, text):
- """Tokenize a string."""
- bpe_tokens = []
- for token in re.findall(self.pat, text):
- token = "".join(
- self.byte_encoder[b] for b in token.encode("utf-8")
- ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
- bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def _convert_token_to_id(self, token):
- """Converts a token (str) in an id using the vocab."""
- return self.encoder.get(token, self.encoder.get(self.unk_token))
-
- def _convert_id_to_token(self, index):
- """Converts an index (integer) in a token (str) using the vocab."""
- return self.decoder.get(index)
-
- def convert_tokens_to_string(self, tokens):
- """Converts a sequence of tokens (string) in a single string."""
- text = "".join(tokens)
- text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
- return text
-
- def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
- if not os.path.isdir(save_directory):
- logger.error(f"Vocabulary path ({save_directory}) should be a directory")
- return
- vocab_file = os.path.join(
- save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
- )
- merge_file = os.path.join(
- save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
- )
-
- with open(vocab_file, "w", encoding="utf-8") as f:
- f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
-
- index = 0
- with open(merge_file, "w", encoding="utf-8") as writer:
- writer.write("#version: 0.2\n")
- for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
- if index != token_index:
- logger.warning(
- f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
- " Please check that the tokenizer is not corrupted!"
- )
- index = token_index
- writer.write(" ".join(bpe_tokens) + "\n")
- index += 1
-
- return vocab_file, merge_file
-
- def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
- add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
- if is_split_into_words or add_prefix_space:
- text = " " + text
- return (text, kwargs)
-
- def decode(
- self,
- token_ids: Union[int, List[int], "np.ndarray", "torch.Tensor", "tf.Tensor"],
- skip_special_tokens: bool = False,
- clean_up_tokenization_spaces: bool = None,
- truncate_before_pattern: Optional[List[str]] = None,
- **kwargs,
- ) -> str:
- """
- Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special
- tokens and clean up tokenization spaces.
-
- Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
-
- Args:
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
- List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, *optional*, defaults to `False`):
- Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, *optional*):
- Whether or not to clean up the tokenization spaces. If `None`, will default to
- `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`).
- truncate_before_pattern (`List[str]`, *optional*, defaults to `None`):
- A list of regular expression strings that will be used to truncate the returned string. This can be
- used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning
- of a new line). An example pattern could be `["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]`.
- kwargs (additional keyword arguments, *optional*):
- Will be passed to the underlying model specific decode method.
-
- Returns:
- `str`: The decoded sentence.
- """
- decoded_text = super()._decode(
- token_ids=token_ids,
- skip_special_tokens=skip_special_tokens,
- clean_up_tokenization_spaces=clean_up_tokenization_spaces,
- **kwargs,
- )
-
- if truncate_before_pattern is not None and len(truncate_before_pattern) > 0:
- decoded_text = self.truncate(decoded_text, truncate_before_pattern)
-
- return decoded_text
-
- def truncate(self, completion, truncate_before_pattern):
- def find_re(string, pattern, start_pos):
- m = pattern.search(string, start_pos)
- return m.start() if m else -1
-
- terminals = [re.compile(pattern, re.MULTILINE) for pattern in truncate_before_pattern]
-
- prints = list(re.finditer("^print", completion, re.MULTILINE))
-
- if len(prints) > 1:
- completion = completion[: prints[1].start()]
-
- defs = list(re.finditer("^def", completion, re.MULTILINE))
-
- if len(defs) > 1:
- completion = completion[: defs[1].start()]
-
- start_pos = 0
-
- terminals_pos = [
- pos for pos in [find_re(completion, terminal, start_pos) for terminal in terminals] if pos != -1
- ]
-
- if len(terminals_pos) > 0:
- return completion[: min(terminals_pos)]
- else:
- return completion
\ No newline at end of file
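
The tokenizer deleted above is a byte-level BPE: `bytes_to_unicode` maps every byte to a printable character so raw text can be handled without unknown tokens, and `get_pairs` extracts the adjacent symbol pairs that the BPE merge loop ranks. Below is a small, self-contained sketch of those two helpers on a toy input; the demo string is illustrative.

```python
from functools import lru_cache

@lru_cache()
def bytes_to_unicode():
    # Printable bytes map to themselves; the remaining bytes are shifted above 255
    # so every byte has a visible, non-whitespace stand-in character.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(2 ** 8):
        if b not in bs:
            bs.append(b)
            cs.append(2 ** 8 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

def get_pairs(word):
    # Adjacent symbol pairs, e.g. ('h', 'e'), ('e', 'l'), ('l', 'l'), ('l', 'o').
    return {(a, b) for a, b in zip(word, word[1:])}

mapping = bytes_to_unicode()
token = "".join(mapping[b] for b in "hello".encode("utf-8"))
print(token)                            # 'hello': ASCII bytes map to themselves
print(sorted(get_pairs(tuple(token))))
```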
diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/mesh_util.py b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/mesh_util.py
deleted file mode 100644
index 39934219011401e194c61cc00034b12dad4072d3..0000000000000000000000000000000000000000
--- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/mesh_util.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from skimage import measure
-import numpy as np
-import torch
-from .sdf import create_grid, eval_grid_octree, eval_grid
-from skimage import measure
-
-
-def reconstruction(net, cuda, calib_tensor,
- resolution, b_min, b_max,
- use_octree=False, num_samples=10000, transform=None):
- '''
- Reconstruct meshes from sdf predicted by the network.
- :param net: a BasePixImpNet object. call image filter beforehead.
- :param cuda: cuda device
- :param calib_tensor: calibration tensor
- :param resolution: resolution of the grid cell
- :param b_min: bounding box corner [x_min, y_min, z_min]
- :param b_max: bounding box corner [x_max, y_max, z_max]
- :param use_octree: whether to use octree acceleration
- :param num_samples: how many points to query each gpu iteration
- :return: marching cubes results.
- '''
- # First we create a grid by resolution
- # and transforming matrix for grid coordinates to real world xyz
- coords, mat = create_grid(resolution, resolution, resolution,
- b_min, b_max, transform=transform)
-
- # Then we define the lambda function for cell evaluation
- def eval_func(points):
- points = np.expand_dims(points, axis=0)
- points = np.repeat(points, net.num_views, axis=0)
- samples = torch.from_numpy(points).to(device=cuda).float()
- net.query(samples, calib_tensor)
- pred = net.get_preds()[0][0]
- return pred.detach().cpu().numpy()
-
- # Then we evaluate the grid
- if use_octree:
- sdf = eval_grid_octree(coords, eval_func, num_samples=num_samples)
- else:
- sdf = eval_grid(coords, eval_func, num_samples=num_samples)
-
- # Finally we do marching cubes
- try:
- verts, faces, normals, values = measure.marching_cubes_lewiner(sdf, 0.5)
- # transform verts into world coordinate system
- verts = np.matmul(mat[:3, :3], verts.T) + mat[:3, 3:4]
- verts = verts.T
- return verts, faces, normals, values
- except:
- print('error cannot marching cubes')
- return -1
-
-
-def save_obj_mesh(mesh_path, verts, faces):
- file = open(mesh_path, 'w')
-
- for v in verts:
- file.write('v %.4f %.4f %.4f\n' % (v[0], v[1], v[2]))
- for f in faces:
- f_plus = f + 1
- file.write('f %d %d %d\n' % (f_plus[0], f_plus[2], f_plus[1]))
- file.close()
-
-
-def save_obj_mesh_with_color(mesh_path, verts, faces, colors):
- file = open(mesh_path, 'w')
-
- for idx, v in enumerate(verts):
- c = colors[idx]
- file.write('v %.4f %.4f %.4f %.4f %.4f %.4f\n' % (v[0], v[1], v[2], c[0], c[1], c[2]))
- for f in faces:
- f_plus = f + 1
- file.write('f %d %d %d\n' % (f_plus[0], f_plus[2], f_plus[1]))
- file.close()
-
-
-def save_obj_mesh_with_uv(mesh_path, verts, faces, uvs):
- file = open(mesh_path, 'w')
-
- for idx, v in enumerate(verts):
- vt = uvs[idx]
- file.write('v %.4f %.4f %.4f\n' % (v[0], v[1], v[2]))
- file.write('vt %.4f %.4f\n' % (vt[0], vt[1]))
-
- for f in faces:
- f_plus = f + 1
- file.write('f %d/%d %d/%d %d/%d\n' % (f_plus[0], f_plus[0],
- f_plus[2], f_plus[2],
- f_plus[1], f_plus[1]))
- file.close()
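
The mesh utilities deleted above write Wavefront OBJ files by hand: one `v x y z` record per vertex and one `f i j k` record per face, with indices shifted to OBJ's 1-based convention and the winding order flipped. A tiny runnable sketch of that writer on a single triangle; the output path is illustrative.

```python
import numpy as np

def save_obj_mesh(mesh_path, verts, faces):
    """Write vertices and triangle faces in Wavefront OBJ format."""
    with open(mesh_path, 'w') as f:
        for v in verts:
            f.write('v %.4f %.4f %.4f\n' % (v[0], v[1], v[2]))
        for face in faces:
            fp = face + 1  # OBJ face indices are 1-based
            f.write('f %d %d %d\n' % (fp[0], fp[2], fp[1]))  # flipped winding, as above

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
save_obj_mesh('triangle.obj', verts, faces)
```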
diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/scripts/download_trained_model.sh b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/scripts/download_trained_model.sh
deleted file mode 100644
index c652f2c666dc48ff1e2e7a94d559e925ac058dec..0000000000000000000000000000000000000000
--- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/scripts/download_trained_model.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-set -ex
-
-mkdir -p checkpoints
-cd checkpoints
-wget "https://drive.google.com/uc?export=download&id=1zEmVXG2VHy0MMzngcRshB4D8Sr_oLHsm" -O net_G
-wget "https://drive.google.com/uc?export=download&id=1V83B6GDIjYMfHdpg-KcCSAPgHxpafHgd" -O net_C
-cd ..
\ No newline at end of file
diff --git a/spaces/fabiochiu/text-to-kb/networks/network.html b/spaces/fabiochiu/text-to-kb/networks/network.html
deleted file mode 100644
index d5e315eac2d3a3c88cb8598e0f9aea12d145eea3..0000000000000000000000000000000000000000
--- a/spaces/fabiochiu/text-to-kb/networks/network.html
+++ /dev/null
@@ -1,116 +0,0 @@
diff --git a/spaces/falcondai/code-as-policies/lmp.py b/spaces/falcondai/code-as-policies/lmp.py
deleted file mode 100644
index 721775bad04c84fe3b348aff51eeff3b88c88722..0000000000000000000000000000000000000000
--- a/spaces/falcondai/code-as-policies/lmp.py
+++ /dev/null
@@ -1,256 +0,0 @@
-from time import sleep
-import ast
-import astunparse
-import openai
-from openai.error import RateLimitError, APIConnectionError
-from pygments import highlight
-from pygments.lexers import PythonLexer
-from pygments.formatters import TerminalFormatter
-
-
-class LMP:
-
- def __init__(self, name, cfg, lmp_fgen, fixed_vars, variable_vars, md_logger):
- self._name = name
- self._cfg = cfg
- self._md_logger = md_logger
-
- with open(self._cfg['prompt_path'], 'r') as f:
- self._base_prompt = f.read()
-
- self._stop_tokens = list(self._cfg['stop'])
-
- self._lmp_fgen = lmp_fgen
-
- self._fixed_vars = fixed_vars
- self._variable_vars = variable_vars
- self.exec_hist = ''
-
- def clear_exec_hist(self):
- self.exec_hist = ''
-
- def build_prompt(self, query, context=''):
- if len(self._variable_vars) > 0:
- variable_vars_imports_str = f"from utils import {', '.join(self._variable_vars.keys())}"
- else:
- variable_vars_imports_str = ''
- prompt = self._base_prompt.replace('{variable_vars_imports}', variable_vars_imports_str)
-
- if self._cfg['maintain_session']:
- prompt += f'\n{self.exec_hist}'
-
- if context != '':
- prompt += f'\n{context}'
-
- use_query = f'{self._cfg["query_prefix"]}{query}{self._cfg["query_suffix"]}'
- prompt += f'\n{use_query}'
-
- return prompt, use_query
-
- def __call__(self, query, context='', **kwargs):
- prompt, use_query = self.build_prompt(query, context=context)
-
- while True:
- try:
- code_str = openai.Completion.create(
- prompt=prompt,
- stop=self._stop_tokens,
- temperature=self._cfg['temperature'],
- engine=self._cfg['engine'],
- max_tokens=self._cfg['max_tokens']
- )['choices'][0]['text'].strip()
- break
- except (RateLimitError, APIConnectionError) as e:
- print(f'OpenAI API got err {e}')
- print('Retrying after 2s.')
- sleep(2)
-
- if self._cfg['include_context'] and context != '':
- to_exec = f'{context}\n{code_str}'
- to_log = f'{context}\n{use_query}\n{code_str}'
- else:
- to_exec = code_str
- to_log = f'{use_query}\n{to_exec}'
-
- to_log_pretty = highlight(to_log, PythonLexer(), TerminalFormatter())
- print(f'LMP {self._name} generated code:\n{to_log_pretty}')
- self._md_logger.log_text(f'LMP {self._name} Generated Code:')
- self._md_logger.log_code(to_log)
-
- new_fs = self._lmp_fgen.create_new_fs_from_code(code_str)
- self._variable_vars.update(new_fs)
-
- gvars = merge_dicts([self._fixed_vars, self._variable_vars])
- lvars = kwargs
-
- if not self._cfg['debug_mode']:
- exec_safe(to_exec, gvars, lvars)
-
- self.exec_hist += f'\n{to_exec}'
-
- if self._cfg['maintain_session']:
- self._variable_vars.update(lvars)
-
- if self._cfg['has_return']:
- return lvars[self._cfg['return_val_name']]
-
-
-class LMPFGen:
-
- def __init__(self, cfg, fixed_vars, variable_vars, md_logger):
- self._cfg = cfg
-
- self._stop_tokens = list(self._cfg['stop'])
- self._fixed_vars = fixed_vars
- self._variable_vars = variable_vars
- self._md_logger = md_logger
-
- with open(self._cfg['prompt_path'], 'r') as f:
- self._base_prompt = f.read()
-
- def create_f_from_sig(self, f_name, f_sig, other_vars=None, fix_bugs=False, return_src=False):
- print(f'Creating function: {f_sig}')
-
- use_query = f'{self._cfg["query_prefix"]}{f_sig}{self._cfg["query_suffix"]}'
- prompt = f'{self._base_prompt}\n{use_query}'
-
- while True:
- try:
- f_src = openai.Completion.create(
- prompt=prompt,
- stop=self._stop_tokens,
- temperature=self._cfg['temperature'],
- engine=self._cfg['engine'],
- max_tokens=self._cfg['max_tokens']
- )['choices'][0]['text'].strip()
- break
- except (RateLimitError, APIConnectionError) as e:
- print(f'OpenAI API got err {e}')
- print('Retrying after 2s.')
- sleep(2)
-
- if fix_bugs:
- f_src = openai.Edit.create(
- model='code-davinci-edit-001',
- input='# ' + f_src,
- temperature=0,
- instruction='Fix the bug if there is one. Improve readability. Keep same inputs and outputs. Only small changes. No comments.',
- )['choices'][0]['text'].strip()
-
- if other_vars is None:
- other_vars = {}
- gvars = merge_dicts([self._fixed_vars, self._variable_vars, other_vars])
- lvars = {}
- exec_safe(f_src, gvars, lvars)
-
- f = lvars[f_name]
-
- to_print = f'{use_query}\n{f_src}'
- to_print_pretty = highlight(to_print, PythonLexer(), TerminalFormatter())
- print(f'LMPFGen generated code:\n{to_print_pretty}')
- self._md_logger.log_text('Generated Function:')
- self._md_logger.log_code(to_print)
-
- if return_src:
- return f, f_src
- return f
-
- def create_new_fs_from_code(self, code_str, other_vars=None, fix_bugs=False, return_src=False):
- fs, f_assigns = {}, {}
- f_parser = FunctionParser(fs, f_assigns)
- f_parser.visit(ast.parse(code_str))
- for f_name, f_assign in f_assigns.items():
- if f_name in fs:
- fs[f_name] = f_assign
-
- if other_vars is None:
- other_vars = {}
-
- new_fs = {}
- srcs = {}
- for f_name, f_sig in fs.items():
- all_vars = merge_dicts([self._fixed_vars, self._variable_vars, new_fs, other_vars])
- if not var_exists(f_name, all_vars):
- f, f_src = self.create_f_from_sig(f_name, f_sig, new_fs, fix_bugs=fix_bugs, return_src=True)
-
- # recursively define child_fs in the function body if needed
- f_def_body = astunparse.unparse(ast.parse(f_src).body[0].body)
- child_fs, child_f_srcs = self.create_new_fs_from_code(
- f_def_body, other_vars=all_vars, fix_bugs=fix_bugs, return_src=True
- )
-
- if len(child_fs) > 0:
- new_fs.update(child_fs)
- srcs.update(child_f_srcs)
-
- # redefine parent f so newly created child_fs are in scope
- gvars = merge_dicts([self._fixed_vars, self._variable_vars, new_fs, other_vars])
- lvars = {}
-
- exec_safe(f_src, gvars, lvars)
-
- f = lvars[f_name]
-
- new_fs[f_name], srcs[f_name] = f, f_src
-
- if return_src:
- return new_fs, srcs
- return new_fs
-
-
-class FunctionParser(ast.NodeTransformer):
-
- def __init__(self, fs, f_assigns):
- super().__init__()
- self._fs = fs
- self._f_assigns = f_assigns
-
- def visit_Call(self, node):
- self.generic_visit(node)
- if isinstance(node.func, ast.Name):
- f_sig = astunparse.unparse(node).strip()
- f_name = astunparse.unparse(node.func).strip()
- self._fs[f_name] = f_sig
- return node
-
- def visit_Assign(self, node):
- self.generic_visit(node)
- if isinstance(node.value, ast.Call):
- assign_str = astunparse.unparse(node).strip()
- f_name = astunparse.unparse(node.value.func).strip()
- self._f_assigns[f_name] = assign_str
- return node
-
-
-def var_exists(name, all_vars):
- try:
- eval(name, all_vars)
- except:
- exists = False
- else:
- exists = True
- return exists
-
-
-def merge_dicts(dicts):
- return {
- k : v
- for d in dicts
- for k, v in d.items()
- }
-
-
-def exec_safe(code_str, gvars=None, lvars=None):
- banned_phrases = ['import', '__']
- for phrase in banned_phrases:
- assert phrase not in code_str
- if gvars is None:
- gvars = {}
- if lvars is None:
- lvars = {}
- empty_fn = lambda *args, **kwargs: None
- custom_gvars = merge_dicts([
- gvars,
- {'exec': empty_fn, 'eval': empty_fn}
- ])
- exec(code_str, custom_gvars, lvars)
\ No newline at end of file
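
In the lmp.py module deleted above, `FunctionParser` walks the AST of LLM-generated code to collect the call signatures of functions that may still need to be defined. A standalone sketch of that idea, using a plain `ast.NodeVisitor` and the same `astunparse` dependency; the class name `CallCollector` and the demo snippet are illustrative, not part of the original module.

```python
import ast
import astunparse  # same third-party dependency as the module above

class CallCollector(ast.NodeVisitor):
    """Collect the source text of every call to a plain name, keyed by that name."""

    def __init__(self):
        self.calls = {}

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name):
            self.calls[node.func.id] = astunparse.unparse(node).strip()

code = "pos = get_obj_pos('red block')\nmove_to(pos)"
collector = CallCollector()
collector.visit(ast.parse(code))
print(collector.calls)
# {'get_obj_pos': "get_obj_pos('red block')", 'move_to': 'move_to(pos)'}
```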
diff --git a/spaces/falterWliame/Face_Mask_Detection/Dead Wishes Download 5kapks.md b/spaces/falterWliame/Face_Mask_Detection/Dead Wishes Download 5kapks.md
deleted file mode 100644
index 8bd2bee7ae0ba7862a2e3e22273d6460fd54179a..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Dead Wishes Download 5kapks.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Dead Wishes download 5kapks
Download 🗸 https://urlca.com/2uDcvx
-
-is an American soap opera created by Clint Beverly, that aired for three years on NBC, from September 1, 1991, to May 26, 1994. It was produced by Universal Television Productions. Dead Wishes Download 5kapks is the story of a mother and daughter whose lives collide when a mentally ill woman is released from prison and moves in with the troubled mother, and a woman enters her life after the mother of her son is killed in a car accident. It was filmed in Beverly Hills, California and featured a large ensemble cast. However, several actors appeared in only one episode of the series, including Donald Sutherland, Jodi Lyn O'Keefe, Lisa Vidal, Colleen Riley, Kim Delaney, Brian Dennehy, and Jaimee Foxworth.
-
- 4fefd39f24
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Inus Rapidform Xor3 X64 Torrent !EXCLUSIVE!.md b/spaces/falterWliame/Face_Mask_Detection/Inus Rapidform Xor3 X64 Torrent !EXCLUSIVE!.md
deleted file mode 100644
index 14db802660680efaeecd7a99acead4649e791a71..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Inus Rapidform Xor3 X64 Torrent !EXCLUSIVE!.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Inus Rapidform Xor3 X64 Torrent
Download ►►►►► https://urlca.com/2uDdAY
-
-Hey guys, I'm new here. I'm doing my first steps with C++ and this wonderful language.
-
-I'm trying to get the skeleton of an extension for Vscode, but I have some problems.
-
-On the "main.cpp" file I have the function declaration and inside I have:
-
-int main(int argc, char* argv[])
-{
-    //my code goes here
-    return 0;
-}
-
-
-
-And on the "xor3.cpp" file I have:
-
-#include "rapidform.h"
-
-void init()
-{
-    rapidform::init();
-}
-
-But when I try to compile it, i have the following error:
-
- error LNK2019: unresolved external symbol "public: __thiscall rapidform::RapidForm::RapidForm(void)" (??0RapidForm@@QAE@XZ) referenced in function _main
-
-I have no idea what to do or what is the problem. I'm pretty sure that I've included all the necessary headers.
-
-A:
-
-From rapidform.cpp:
-
-So either
-
-xor3.cpp needs to #include "rapidform.h"
-
-RapidForm needs to have an implementation in a separate source file, such as rapidform.cpp.
-
-Q:
-
-Can Neo4j handle queries that are ambiguous?
-
-I have a simple scenario. I want to store web log entries and show related links.
-
-The only problem is that the item in one entry may have more than one possible match. So if I say
-
-MATCH (e:item)<-[:`some-property`]-(l:link) RETURN e, l
-
-It will only return a small portion of the links.
-
-I understand this can be fixed by creating another relationship type that has a single value, which can be set through an operation.
-
-Now I'm just curious if Neo4j can handle this or not. I mean, is it possible to have one relationship with an ambiguous attribute (in this case "some-property") or will I have to create a relationship type that has a string value?
-
-Cycles in the graph model are bad. There is no way to express relationships between nodes such 4fefd39f24
-
-
-
diff --git a/spaces/fatiXbelha/sd/CarX Highway Racing Mod APK IOS Enjoy the Thrill of Realistic Racing.md b/spaces/fatiXbelha/sd/CarX Highway Racing Mod APK IOS Enjoy the Thrill of Realistic Racing.md
deleted file mode 100644
index ab9741f01133bdd1d3985ac76e64623e55020c28..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/CarX Highway Racing Mod APK IOS Enjoy the Thrill of Realistic Racing.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-CarX Highway Racing Mod APK iOS: How to Download and Play
-If you are a fan of racing games, you might have heard of CarX Highway Racing, a popular game that offers realistic driving experience and thrilling gameplay. But did you know that you can also enjoy this game on your iOS device with some extra features and benefits? In this article, we will tell you everything you need to know about CarX Highway Racing Mod APK iOS, how to download and install it, and how to play it.
-carx highway racing mod apk ios
Download Zip 🗸 https://urllie.com/2uNEqc
- What is CarX Highway Racing?
-CarX Highway Racing is a racing game developed by CarX Technologies, the same company that created the famous CarX Drift Racing game. In this game, you can drive various cars on different highways, compete with other racers, escape from the police, and complete various missions and challenges. You can also customize your cars with different parts and upgrades, and choose from different game modes such as campaign, survival, time attack, and online multiplayer.
- Features of CarX Highway Racing
-CarX Highway Racing has many features that make it one of the best racing games on the market. Here are some of them:
- Realistic physics and graphics
-CarX Highway Racing uses the advanced CarX Engine, which provides realistic car physics and behavior. You can feel the difference between different car models, such as their speed, acceleration, handling, braking, and drifting. The game also has stunning graphics and effects, such as dynamic lighting, shadows, reflections, smoke, dust, and weather conditions. You can enjoy the beautiful scenery of the highways, as well as the detailed car models and damage.
-carx highway racing mod apk ios download
-carx highway racing mod apk ios unlimited money
-carx highway racing mod apk ios free
-carx highway racing mod apk ios latest version
-carx highway racing mod apk ios no jailbreak
-carx highway racing mod apk ios platinmods
-carx highway racing mod apk ios hack
-carx highway racing mod apk ios 2023
-carx highway racing mod apk ios offline
-carx highway racing mod apk ios gameplay
-carx highway racing mod apk ios cheats
-carx highway racing mod apk ios review
-carx highway racing mod apk ios update
-carx highway racing mod apk ios install
-carx highway racing mod apk ios features
-carx highway racing mod apk ios online
-carx highway racing mod apk ios multiplayer
-carx highway racing mod apk ios best cars
-carx highway racing mod apk ios tips and tricks
-carx highway racing mod apk ios tutorial
-carx highway racing mod apk ios reddit
-carx highway racing mod apk ios requirements
-carx highway racing mod apk ios obb
-carx highway racing mod apk ios data
-carx highway racing mod apk ios file size
-carx highway racing mod apk ios graphics settings
-carx highway racing mod apk ios controller support
-carx highway racing mod apk ios soundtracks
-carx highway racing mod apk ios missions
-carx highway racing mod apk ios events
-carx highway racing mod apk ios customization
-carx highway racing mod apk ios maps
-carx highway racing mod apk ios tracks
-carx highway racing mod apk ios weather effects
-carx highway racing mod apk ios realistic physics
-carx highway racing mod apk ios drift mode
-carx highway racing mod apk ios police chase mode
-carx highway racing mod apk ios traffic mode
-carx highway racing mod apk ios night mode
-carx highway racing mod apk ios day mode
- Diverse game modes and challenges
-CarX Highway Racing has a lot of content to keep you entertained for hours. You can play the campaign mode, where you have to complete various missions and races against different opponents. You can also play the survival mode, where you have to drive as long as possible without crashing or running out of fuel. You can also play the time attack mode, where you have to beat your own or other players' records on different tracks. And if you want to challenge other players online, you can join the multiplayer mode, where you can race against up to 6 players in real time.
- Customizable cars and upgrades
-CarX Highway Racing has a large collection of cars that you can unlock and drive. You can choose from different categories, such as sports cars, muscle cars, supercars, trucks, and more. You can also customize your cars with different parts and upgrades, such as engines, transmissions, tires, brakes, suspensions, turbochargers, nitrous oxide, spoilers, paint jobs, decals, and more. You can make your car faster, more powerful, more stable, and more stylish.
- What is CarX Highway Racing Mod APK iOS?
-CarX Highway Racing Mod APK iOS is a modified version of the original game that allows you to play it on your iOS device with some extra features and benefits. It is not available on the official App Store, but you can download it from third-party sources.
- Benefits of CarX Highway Racing Mod APK iOS
-CarX Highway Racing Mod APK iOS has many benefits that make it better than the original game. Here are some of them:
- Unlimited gold and cash
-Gold and cash are the main currencies in CarX Highway Racing. You need them to buy new cars, parts, upgrades, fuel and other items in the game. However, earning them can be time-consuming and difficult. With CarX Highway Racing Mod APK iOS, you can get unlimited gold and cash for free. You can buy anything you want without worrying about the cost. You can also skip the ads that pop up when you earn gold and cash.
- Unlimited fuel
-Fuel is another important resource in CarX Highway Racing. You need it to drive your car and participate in races and missions. However, fuel is limited and you have to wait for it to refill or buy it with gold or cash. With CarX Highway Racing Mod APK iOS, you can get unlimited fuel for free. You can drive as much as you want without waiting or spending money. You can also enjoy the game without interruptions from the low fuel warnings.
- No ads
-Ads are annoying and distracting in any game. They can ruin your gaming experience and mood. They can also consume your data and battery. With CarX Highway Racing Mod APK iOS, you can get rid of all the ads in the game. You can play the game smoothly and peacefully without any ads popping up on your screen.
- How to Download and Install CarX Highway Racing Mod APK iOS?
-Now that you know the benefits of CarX Highway Racing Mod APK iOS, you might be wondering how to download and install it on your iOS device. Well, it is not very difficult, but you need to follow some steps carefully. Here are the requirements and steps for downloading and installing CarX Highway Racing Mod APK iOS:
- Requirements for CarX Highway Racing Mod APK iOS
-Before you download and install CarX Highway Racing Mod APK iOS, you need to make sure that your iOS device meets the following requirements:
-
-- Your iOS device should be running on iOS 9.0 or later.
-- Your iOS device should have at least 1 GB of free storage space.
-- Your iOS device should be jailbroken. Jailbreaking is a process that allows you to access and modify the system files of your iOS device. It also allows you to install apps that are not available on the official App Store. However, jailbreaking also voids your warranty and exposes your device to security risks. Therefore, you should only jailbreak your device at your own risk and responsibility.
-- You should have a file manager app on your iOS device, such as iFile or Filza.
-- You should have a web browser app on your iOS device, such as Safari or Chrome.
-
- Steps to Download and Install CarX Highway Racing Mod APK iOS
-Once you have met the requirements, you can follow these steps to download and install CarX Highway Racing Mod APK iOS on your iOS device:
-
-- Open your web browser app on your iOS device and go to this link: . This is the download page for CarX Highway Racing Mod APK iOS.
-- Tap on the download button and wait for the file to be downloaded on your device. The file size is about 300 MB, so it might take some time depending on your internet speed.
-- Once the file is downloaded, go to your file manager app and locate the file. It should be in the Downloads folder or the folder where you usually save your downloaded files.
-- Tap on the file and select Install. You might see a warning message that says "Untrusted Enterprise Developer". This is because the file is not from the official App Store, but from a third-party source. To proceed, tap on Cancel and go to Settings > General > Profiles & Device Management.
-- Find and tap on the profile that has the name of the developer of CarX Highway Racing Mod APK iOS. It should be something like "CarX Technologies Co., Ltd." or similar.
-- Tap on Trust and confirm your choice. This will allow you to install apps from this developer on your device.
-- Go back to your file manager app and tap on the file again. This time, it should install without any problem.
-- Once the installation is complete, you can find the CarX Highway Racing Mod APK iOS icon on your home screen. Tap on it and enjoy the game!
-
- Conclusion
-CarX Highway Racing is a great racing game that offers realistic physics and graphics, diverse game modes and challenges, customizable cars and upgrades, and online multiplayer mode. However, if you want to enjoy this game even more, you can try CarX Highway Racing Mod APK iOS, which gives you unlimited gold and cash, unlimited fuel, and no ads. You can download and install CarX Highway Racing Mod APK iOS on your jailbroken iOS device by following the steps we have shown you in this article. However, you should be aware of the risks and consequences of jailbreaking your device and installing apps from third-party sources. You should also respect the original developers of the game and support them if you like their work. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.
- FAQs
-Here are some frequently asked questions about CarX Highway Racing Mod APK iOS:
- Q: Is CarX Highway Racing Mod APK iOS safe to use?
-A: CarX Highway Racing Mod APK iOS is safe to use as long as you download it from a trusted source and scan it for viruses before installing it. However, you should also be careful about the permissions and data that the app might access on your device. You should also backup your device before installing the app, in case something goes wrong.
- Q: Is CarX Highway Racing Mod APK iOS legal to use?
-A: CarX Highway Racing Mod APK iOS is not legal to use, as it violates the terms and conditions of the original game and the App Store. It also infringes the intellectual property rights of the original developers of the game. Using CarX Highway Racing Mod APK iOS might result in legal actions or penalties from the authorities or the developers. Therefore, you should use CarX Highway Racing Mod APK iOS at your own risk and responsibility.
- Q: Will CarX Highway Racing Mod APK iOS work on my device?
-A: CarX Highway Racing Mod APK iOS will work on any iOS device that meets the requirements mentioned above. However, some devices might experience compatibility issues or performance problems due to different hardware specifications or software versions. Therefore, you should try CarX Highway Racing Mod APK iOS on your device at your own discretion and see if it works well for you.
- Q: How can I update CarX Highway Racing Mod APK iOS?
-A: CarX Highway Racing Mod APK iOS is not an official version of the game, so it will not receive any updates from the original developers. However, you might find some updates from the third-party sources that provide CarX Highway Racing Mod APK iOS. You can check their websites or social media pages for any news or announcements about CarX Highway Racing Mod APK iOS updates. You can also bookmark this article and visit it regularly for any updates we might post.
- Q: How can I uninstall CarX Highway Racing Mod APK iOS?
-A: If you want to uninstall CarX Highway Racing Mod APK iOS from your device, you can follow these steps:
-
-- Go to Settings > General > Profiles & Device Management.
-- Find and tap on the profile that has the name of the developer of CarX Highway Racing Mod APK iOS.
-- Tap on Delete Profile and confirm your choice.
-- Go to your file manager app and delete the file of CarX Highway Racing Mod APK iOS.
-- Restart your device and check if CarX Highway Racing Mod APK iOS is gone from your home screen.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Classic solitaire card game for Windows 7 how to download and enjoy it for free.md b/spaces/fatiXbelha/sd/Classic solitaire card game for Windows 7 how to download and enjoy it for free.md
deleted file mode 100644
index 32d97154aba73532189054ec0ddeb33ff0e00c7c..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Classic solitaire card game for Windows 7 how to download and enjoy it for free.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-Classic Solitaire Card Game Free Download for Windows 7
-If you are looking for a fun and relaxing way to pass the time, you might want to try playing classic solitaire on your Windows 7 computer. Solitaire is one of the most popular card games in the world, and it has been enjoyed by millions of people for decades. In this article, we will show you how to download and install classic solitaire games for Windows 7, as well as some of the features and benefits of playing them.
-Introduction
-What is classic solitaire?
-Classic solitaire, also known as Klondike solitaire, is a single-player card game that involves arranging cards in a specific order. The goal is to move all the cards from the tableau (the seven columns of cards) to the foundation (the four piles of cards in the upper right corner), starting with the aces and ending with the kings. You can move cards from one column to another if they are in descending order and alternating colors. You can also draw cards from the stock (the pile of face-down cards in the upper left corner) and place them on the waste (the pile of face-up cards next to the stock). You can move cards from the waste to the tableau or the foundation as well. The game is over when you have moved all the cards to the foundation or when there are no more moves possible.
-classic solitaire card game free download for windows 7
Download ►►►►► https://urllie.com/2uNvKl
-Why play classic solitaire on Windows 7?
-Classic solitaire is a great game to play on Windows 7 because it is simple, addictive, and relaxing. It does not require any special skills or equipment, just a standard deck of 52 cards and a computer. You can play it anytime, anywhere, and at your own pace. You can also challenge yourself by changing the game mode or difficulty level, or by completing daily challenges and events. Playing classic solitaire can help you improve your concentration, memory, logic, and problem-solving skills. It can also reduce stress, boredom, and anxiety.
-How to download and install classic solitaire for Windows 7
-Option 1: Get the classic free solitaire games from Microsoft Store
-If you loved the free solitaire games from earlier versions of Windows, you can get them all in one convenient app, the Microsoft Solitaire Collection. This app includes five different solitaire games: Klondike, Spider, FreeCell, TriPeaks, and Pyramid. Here are the steps to download and install this app on your Windows 7 computer:
-Step 1: Go to the Microsoft Store page for Microsoft Solitaire Collection
-You can use this link to go directly to the Microsoft Store page for Microsoft Solitaire Collection. Alternatively, you can open your web browser and search for "Microsoft Solitaire Collection" on Bing or Google.
-Step 2: Select Install and launch the game
-On the Microsoft Store page, select Install and wait for the app to download and install on your computer. You might need to sign in with your Microsoft account or create one if you don't have one. After the installation is complete, the app will launch automatically. You can also find it in your Start menu or pin it to your taskbar for easy access.
-Step 3: Pin the game to your taskbar or Start menu for easy access
-If you want to play classic solitaire anytime, you can pin the app to your taskbar or Start menu for easy access. To do this, right-click on the app icon and select Pin to taskbar or Pin to Start. You can also drag and drop the app icon to your desired location.
-Option 2: Get Classic Solitaire (Free) from Microsoft Store
-If you prefer a more minimalist and retro version of classic solitaire, you can try Classic Solitaire (Free) from Microsoft Store. This app is a faithful reproduction of the old good solitaire implementation that came with Windows 95, 98, 2000, and XP. It has the same look and feel, sound effects, and scoring system as the original game. Here are the steps to download and install this app on your Windows 7 computer:
-Step 1: Go to the Microsoft Store page for Classic Solitaire (Free)
-You can use this link to go directly to the Microsoft Store page for Classic Solitaire (Free). Alternatively, you can open your web browser and search for "Classic Solitaire (Free)" on Bing or Google.
-Step 2: Select Get and launch the game
-On the Microsoft Store page, select Get and wait for the app to download and install on your computer. You might need to sign in with your Microsoft account or create one if you don't have one. After the installation is complete, the app will launch automatically. You can also find it in your Start menu or pin it to your taskbar for easy access.
-Step 3: Enjoy the old good solitaire implementation for retro lovers
-Once you launch the app, you will see the familiar solitaire interface with green background, blue cards, and yellow score. You can play the game by using your mouse or keyboard shortcuts. You can also customize some settings such as card back, card size, draw mode, scoring mode, and sound effects by clicking on the Options button.
-Features and benefits of classic solitaire games for Windows 7
-Different game modes and difficulty levels to suit your preferences and skills
-One of the advantages of playing classic solitaire games on Windows 7 is that you can choose from different game modes and difficulty levels to suit your preferences and skills. For example, in Microsoft Solitaire Collection, you can play Klondike solitaire in either single-draw or three-draw mode, depending on how challenging you want the game to be. You can also choose from four difficulty levels: Easy, Medium, Hard, and Expert. In Classic Solitaire (Free), you can play either standard or Vegas scoring mode, depending on how competitive you want the game to be.
-Daily challenges and events to keep you engaged and rewarded
-Another benefit of playing classic solitaire games on Windows 7 is that you can participate in daily challenges and events that keep you engaged and rewarded. For example, in Microsoft Solitaire Collection, you can complete daily challenges that test your solitaire skills in different ways. You can earn badges and coins for completing them, as well as unlock new themes and card backs. You can also join events that run for a limited time and compete with other players around the world. In Classic Solitaire (Free), you can track your statistics and achievements, such as games played, games won, best score, best time, longest streak, etc.
-Themes and card backs to customize your card game experience
-A third benefit of playing classic solitaire games on Windows 7 is that you can customize your card game experience by choosing from different themes and card backs. For example, in Microsoft Solitaire Collection, you can change the background color, pattern, or image of your game board by selecting from various themes. You can also change the appearance of your cards by selecting from different card backs. Some of them are unlocked by completing daily challenges or events, while others are available for purchase with coins. In Classic Solitaire (Free), you can change the card back by selecting from a few options.
-Save your progress and achievements with a Microsoft account or Xbox Game Pass account
-A fourth benefit of playing classic solitaire games on Windows 7 is that you can save your progress and achievements with a Microsoft account or Xbox Game Pass account. This way, you can sync your game data across different devices and platforms, such as Windows 10, Android, iOS, and Xbox. You can also access your game history, statistics, achievements, and leaderboards. To do this, you need to sign in with your Microsoft account or Xbox Game Pass account in the app settings. If you don't have one, you can create one for free.
-Conclusion
-Summary of the main points
-In conclusion, classic solitaire is a fun and relaxing card game that you can play on your Windows 7 computer. You can download and install classic solitaire games for Windows 7 from Microsoft Store, such as Microsoft Solitaire Collection and Classic Solitaire (Free). These games offer different game modes and difficulty levels, daily challenges and events, themes and card backs, and cloud saving features. Playing classic solitaire can help you improve your mental skills and reduce stress.
-Call to action
-So what are you waiting for? Download and install classic solitaire games for Windows 7 today and enjoy the timeless card game experience. You can also share your feedback and suggestions with the developers by leaving a review or rating on the Microsoft Store page. Have fun playing classic solitaire!
-FAQs
-Here are some frequently asked questions about classic solitaire games for Windows 7:
-
-- Q: How do I uninstall classic solitaire games from my Windows 7 computer?
-- A: To uninstall classic solitaire games from your Windows 7 computer, follow these steps:
-- Go to Control Panel and select Programs and Features.
-- Find the app you want to uninstall and click on it.
-- Select Uninstall and follow the instructions.
-
-
-- Q: How do I play classic solitaire with a keyboard?
-- A: To play classic solitaire with a keyboard, you can use these keyboard shortcuts:
-- F2: Start a new game.
-- F5: Restart the current game.
-- Ctrl+Z: Undo the last move.
-- H: Show a hint.
-- M: Mute or unmute the sound effects.
-- Tab: Move the focus to the next element.
-- Shift+Tab: Move the focus to the previous element.
-- Spacebar: Select or deselect a card or pile.
-- Enter: Move a card or pile to a valid location.
-
-
-- Q: How do I win classic solitaire?
-- A: To win classic solitaire, you need to move all the cards from the tableau to the foundation in ascending order by suit. You can use the stock and waste piles to help you find the cards you need. You can also use hints or undo moves if you get stuck. The game is over when you have moved all the cards to the foundation or when there are no more moves possible.
-- Q: How do I change the difficulty level of classic solitaire?
-- A: To change the difficulty level of classic solitaire, you need to select a different game mode or scoring mode. For example, in Microsoft Solitaire Collection, you can choose from four difficulty levels for Klondike solitaire: Easy, Medium, Hard, and Expert. In Classic Solitaire (Free), you can choose between standard or Vegas scoring mode. The higher the difficulty level or scoring mode, the harder the game will be.
-- Q: How do I earn coins and badges in Microsoft Solitaire Collection?
-- A: To earn coins and badges in Microsoft Solitaire Collection, you need to complete daily challenges and events. Daily challenges are tasks that test your solitaire skills in different ways. You can earn bronze, silver, gold, or platinum badges depending on how many challenges you complete in a month. Events are competitions that run for a limited time and require you to play specific solitaire games. You can earn coins based on your performance and rank in each event. You can use coins to unlock new themes and card backs in the app.
-
- : https://www.microsoft.com/en-us/p/microsoft-solitaire-collection/9wzdncrfhwd2?activetab=pivot:overviewtab
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/base_options.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/base_options.py
deleted file mode 100644
index d8f921d5a43434ae802a55a0fa3889c4b7ab9f6d..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/options/base_options.py
+++ /dev/null
@@ -1,169 +0,0 @@
-"""This script contains base options for Deep3DFaceRecon_pytorch
-"""
-
-import argparse
-import os
-from util import util
-import numpy as np
-import torch
-import face3d.models as models
-import face3d.data as data
-
-
-class BaseOptions():
- """This class defines options used during both training and test time.
-
- It also implements several helper functions such as parsing, printing, and saving the options.
-    It also gathers additional options defined in the modify_commandline_options functions in both dataset class and model class.
- """
-
- def __init__(self, cmd_line=None):
-        """Reset the class; indicates the class hasn't been initialized"""
- self.initialized = False
- self.cmd_line = None
- if cmd_line is not None:
- self.cmd_line = cmd_line.split()
-
- def initialize(self, parser):
- """Define the common options that are used in both training and test."""
- # basic parameters
- parser.add_argument('--name', type=str, default='face_recon', help='name of the experiment. It decides where to store samples and models')
- parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU')
- parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')
-        parser.add_argument('--vis_batch_nums', type=float, default=1, help='batch nums of images for visualization')
- parser.add_argument('--eval_batch_nums', type=float, default=float('inf'), help='batch nums of images for evaluation')
- parser.add_argument('--use_ddp', type=util.str2bool, nargs='?', const=True, default=True, help='whether use distributed data parallel')
- parser.add_argument('--ddp_port', type=str, default='12355', help='ddp port')
- parser.add_argument('--display_per_batch', type=util.str2bool, nargs='?', const=True, default=True, help='whether use batch to show losses')
- parser.add_argument('--add_image', type=util.str2bool, nargs='?', const=True, default=True, help='whether add image to tensorboard')
-        parser.add_argument('--world_size', type=int, default=1, help='world size for distributed training')
-
- # model parameters
- parser.add_argument('--model', type=str, default='facerecon', help='chooses which model to use.')
-
- # additional parameters
- parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')
- parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information')
- parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}')
-
- self.initialized = True
- return parser
-
- def gather_options(self):
-        """Initialize our parser with basic options (only once).
-        Add additional model-specific and dataset-specific options.
-        These options are defined in the modify_commandline_options function
-        in model and dataset classes.
- """
- if not self.initialized: # check if it has been initialized
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser = self.initialize(parser)
-
- # get the basic options
- if self.cmd_line is None:
- opt, _ = parser.parse_known_args()
- else:
- opt, _ = parser.parse_known_args(self.cmd_line)
-
- # set cuda visible devices
- os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_ids
-
- # modify model-related parser options
- model_name = opt.model
- model_option_setter = models.get_option_setter(model_name)
- parser = model_option_setter(parser, self.isTrain)
- if self.cmd_line is None:
- opt, _ = parser.parse_known_args() # parse again with new defaults
- else:
- opt, _ = parser.parse_known_args(self.cmd_line) # parse again with new defaults
-
- # modify dataset-related parser options
- if opt.dataset_mode:
- dataset_name = opt.dataset_mode
- dataset_option_setter = data.get_option_setter(dataset_name)
- parser = dataset_option_setter(parser, self.isTrain)
-
- # save and return the parser
- self.parser = parser
- if self.cmd_line is None:
- return parser.parse_args()
- else:
- return parser.parse_args(self.cmd_line)
-
- def print_options(self, opt):
- """Print and save options
-
- It will print both current options and default values(if different).
- It will save options into a text file / [checkpoints_dir] / opt.txt
- """
- message = ''
- message += '----------------- Options ---------------\n'
- for k, v in sorted(vars(opt).items()):
- comment = ''
- default = self.parser.get_default(k)
- if v != default:
- comment = '\t[default: %s]' % str(default)
- message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment)
- message += '----------------- End -------------------'
- print(message)
-
- # save to the disk
- expr_dir = os.path.join(opt.checkpoints_dir, opt.name)
- util.mkdirs(expr_dir)
- file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase))
- try:
- with open(file_name, 'wt') as opt_file:
- opt_file.write(message)
- opt_file.write('\n')
- except PermissionError as error:
- print("permission error {}".format(error))
- pass
-
- def parse(self):
- """Parse our options, create checkpoints directory suffix, and set up gpu device."""
- opt = self.gather_options()
- opt.isTrain = self.isTrain # train or test
-
- # process opt.suffix
- if opt.suffix:
- suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else ''
- opt.name = opt.name + suffix
-
-
- # set gpu ids
- str_ids = opt.gpu_ids.split(',')
- gpu_ids = []
- for str_id in str_ids:
- id = int(str_id)
- if id >= 0:
- gpu_ids.append(id)
- opt.world_size = len(gpu_ids)
- # if len(opt.gpu_ids) > 0:
- # torch.cuda.set_device(gpu_ids[0])
- if opt.world_size == 1:
- opt.use_ddp = False
-
- if opt.phase != 'test':
- # set continue_train automatically
- if opt.pretrained_name is None:
- model_dir = os.path.join(opt.checkpoints_dir, opt.name)
- else:
- model_dir = os.path.join(opt.checkpoints_dir, opt.pretrained_name)
- if os.path.isdir(model_dir):
- model_pths = [i for i in os.listdir(model_dir) if i.endswith('pth')]
- if os.path.isdir(model_dir) and len(model_pths) != 0:
- opt.continue_train= True
-
- # update the latest epoch count
- if opt.continue_train:
- if opt.epoch == 'latest':
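-                    # recover the newest epoch number from checkpoint filenames (the digits after the last underscore), ignoring 'latest' checkpoints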
- epoch_counts = [int(i.split('.')[0].split('_')[-1]) for i in model_pths if 'latest' not in i]
- if len(epoch_counts) != 0:
- opt.epoch_count = max(epoch_counts) + 1
- else:
- opt.epoch_count = int(opt.epoch) + 1
-
-
- self.print_options(opt)
- self.opt = opt
- return self.opt
diff --git a/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/c3_submit.py b/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/c3_submit.py
deleted file mode 100644
index 5aae88745ba1783b09dc6a0b40108b0adde3bf2d..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/c3_submit.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import json
-from tqdm import tqdm
-import argparse
-
-
-def save_data(data, file_path):
- with open(file_path, 'w', encoding='utf8') as f:
- for line in data:
- json_data=json.dumps(line,ensure_ascii=False)
- f.write(json_data+'\n')
-
-
-def submit(file_path):
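-    # Read the jsonl prediction file and keep only the id/label fields required by the CLUE submission format.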
- with open(file_path, 'r', encoding='utf8') as f:
- lines = f.readlines()
- result=[]
- for line in tqdm(lines):
- data = json.loads(line)
- result.append({'id':data['id'],'label':data['label']})
- return result
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description="convert c3 predictions into a CLUE submission file")
- parser.add_argument("--data_path", type=str,default="")
- parser.add_argument("--save_path", type=str,default="")
-
- args = parser.parse_args()
-
- save_data(submit(args.data_path), args.save_path)
-
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Barcodes with Code 39 Font - Free Download and Tutorial.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Barcodes with Code 39 Font - Free Download and Tutorial.md
deleted file mode 100644
index 20343d4ab6d84581a6de5fb348c521369e644d41..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Create Barcodes with Code 39 Font - Free Download and Tutorial.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-Code 39 Font Free Download: How to Create Barcodes Easily
-If you need to create barcodes for your products, labels, documents, or any other purposes, you might want to consider using Code 39 Font. Code 39 Font is a popular and widely used barcode font that can encode alphanumeric data in a simple and efficient way. In this article, you will learn what Code 39 Font is, how to download it for free, and how to use it to generate barcodes in various applications.
- What is Code 39 Font?
-Code 39 Font is a type of barcode font that can encode uppercase letters (A-Z), numbers (0-9), and some special characters (-, ., $, /, +, %, and space). It is also known as Alpha39, Code 3 of 9, Code 3/9, Type 39, USS Code 39, or USD-3. It was developed by Dr. David Allais and Ray Stevens of Intermec in 1974.
- The features and benefits of Code 39 Font
-Some of the features and benefits of using Code 39 Font are:
-
-- It is a variable length barcode font, which means you can encode any amount of data within the limits of the barcode size.
-- It is a discrete barcode font, which means each character is encoded separately and there is no inter-character gap.
-- It is a self-checking barcode font, which means it does not require a check digit or checksum to verify the accuracy of the barcode.
-- It is easy to use and integrate into any printing system or software by adding a barcode font to the system or printer and then printing the raw data with asterisks (*) as the start and stop characters.
-- It is compatible with virtually any barcode reader or scanner.
-
- The applications and uses of Code 39 Font
-Code 39 Font is widely used in various industries and sectors for different purposes. Some of the common applications and uses of Code 39 Font are:
-
-- Inventory management and tracking
-- Product identification and labeling
-- Document control and security
-- Medical equipment and supplies
-- Military standards and specifications
-- Postal services and delivery
-
- How to Download Code 39 Font for Free?
-If you want to use Code 39 Font for your barcode needs, you can download it for free from various sources and websites. Here are some of the options you can choose from:
- The sources and websites that offer free Code 39 Font
-Some of the sources and websites that offer free Code 39 Font are:
-
-- dafont.com: This website offers a free version of Code 39 Font that you can download as a TrueType font file (.ttf). It also provides a preview of the font and some basic information about it.
-- fontspace.com: This website offers a free version of Code 39 Font that you can download as a TrueType font file (.ttf) or an OpenType font file (.otf). It also provides a preview of the font and some user reviews and ratings.
-- barcodesinc.com: This website offers a free version of Code 39 Font that you can download as a TrueType font file (.ttf) or a PostScript font file (.pfb). It also provides a barcode generator tool that you can use to create and print barcodes with Code 39 Font.
-- idautomation.com: This website offers a free version of Code 39 Font that you can download as a TrueType font file (.ttf) or an OpenType font file (.otf). It also provides a barcode generator tool that you can use to create and print barcodes with Code 39 Font. However, this version of Code 39 Font has some limitations and restrictions, such as the maximum data length of 48 characters and the requirement of a license agreement.
-
- The steps and instructions to download and install Code 39 Font
-The steps and instructions to download and install Code 39 Font may vary depending on the source and website you choose, but here are some general guidelines you can follow:
-
-- Go to the source or website that offers free Code 39 Font and find the download link or button.
-- Click on the download link or button and save the font file to your computer.
-- Locate the font file on your computer and unzip it if it is compressed.
-- Double-click on the font file to open it and click on the install button or option.
-- Alternatively, you can copy and paste the font file to the fonts folder in your system directory (e.g., C:\Windows\Fonts for Windows).
-- Restart your computer or application to activate the font.
-
- How to Use Code 39 Font to Generate Barcodes?
-Once you have downloaded and installed Code 39 Font on your computer, you can use it to generate barcodes in various applications. Here are some of the software and tools that support Code 39 Font and the formulas and methods to create barcodes with it:
- The software and tools that support Code 39 Font
-Some of the software and tools that support Code 39 Font are:
-
-- Microsoft Word: You can use Code 39 Font to create barcodes in Microsoft Word by selecting the font from the font menu and typing the data with asterisks (*) as the start and stop characters. For example, if you want to create a barcode for "ABC123", you would type "*ABC123*". You can also adjust the font size, color, alignment, and other settings as needed.
-- Microsoft Excel: You can use Code 39 Font to create barcodes in Microsoft Excel by selecting the font from the font menu and typing the data with asterisks (*) as the start and stop characters. For example, if you want to create a barcode for "ABC123", you would type "*ABC123*". You can also adjust the font size, color, alignment, and other settings as needed. You can also use formulas to generate barcodes from other data in your spreadsheet. For example, if you have a column of product codes in column A, you can use this formula in column B to create barcodes for them: ="*"&A1&"*"
-- Adobe Photoshop: You can use Code 39 Font to create barcodes in Adobe Photoshop by selecting the font from the font menu and typing the data with asterisks (*) as the start and stop characters. For example, if you want to create a barcode for "ABC123", you would type "*ABC123*". You can also adjust the font size, color, alignment, and other settings as needed. You can also use layers, filters, effects, and other tools to enhance or modify the appearance of your barcode.
-- Online Barcode Generator: You can use an online barcode generator tool to create barcodes with Code 39 Font without downloading or installing anything. You just need to enter your data, select Code 39 Font as the barcode type, and click on generate. You can then download or print your barcode as an image file. Some of the online barcode generator tools that support Code 39 Font are barcode.tec-it.com, barcode-generator.org, online-barcode-generator.net, etc.
-
- The formulas and methods to create barcodes with Code 39 Font
-There are some formulas and methods that you can use to create barcodes with Code 39 Font more easily and efficiently. Here are some of them, followed by a short script sketch below the list:
-
-- Using a table: You can use a table to create barcodes with Code 39 Font by dividing the data into columns and rows and applying the font to each cell. For example, if you want to create a barcode for "ABC123", you would create a table with three columns and two rows and type "A", "B", and "C" in the first row and "1", "2", and "3" in the second row. Then, you would select the entire table and apply Code 39 Font to it. You would also add asterisks (*) before and after the table to indicate the start and stop characters.
-- Using a formula: You can use a formula to create barcodes with Code 39 Font by concatenating the data with asterisks (*) as the start and stop characters. For example, if you want to create a barcode for "ABC123", you would use this formula: ="*"&"ABC123"&"*". You can also use other functions or operators to manipulate or format the data before applying Code 39 Font to it.
-- Using a macro: You can use a macro to create barcodes with Code 39 Font by writing a code or script that automates the process of applying the font to the data. For example, if you want to create a barcode for "ABC123", you would write a macro that selects the cell or range that contains the data, adds asterisks (*) before and after the data, and applies Code 39 Font to it. You can also write a macro that loops through multiple cells or ranges and creates barcodes for them.
-
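-If you prefer to prepare the barcode text programmatically rather than typing the asterisks by hand, the same idea can be sketched in a few lines of Python. This is only an illustration; the helper function below is invented for this article and is not part of any font package or barcode library.
-CODE39_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-. $/+%")
-
-def to_code39_text(data):
-    """Return the string to render with a Code 39 font: uppercased data wrapped in asterisks."""
-    data = data.upper()
-    invalid = [c for c in data if c not in CODE39_CHARS]
-    if invalid:
-        raise ValueError("Characters not supported by Code 39: {}".format(invalid))
-    return "*{}*".format(data)
-
-print(to_code39_text("ABC123"))  # prints *ABC123*
-You can then apply Code 39 Font to the returned string in your document or label template, exactly as described in the methods above.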
- Conclusion
-Code 39 Font is a versatile and easy-to-use barcode font that can help you create barcodes for various purposes. You can download it for free from different sources and websites and use it in various applications and software. You can also use different formulas and methods to generate barcodes with Code 39 Font more efficiently. However, you should also be aware of the limitations and drawbacks of Code 39 Font, such as the limited character set, the large barcode size, and the low error correction capability. Therefore, you should choose Code 39 Font according to your needs and preferences.
-If you want to learn more about Code 39 Font or other barcode fonts, you can visit barcodefonts.net, barcodewiz.com, morovia.com, etc. You can also contact us if you have any questions or feedback about this article or our services. We hope you found this article helpful and informative. Thank you for reading!
- FAQs
-What are the advantages of using Code 39 Font over other barcode fonts?
-Some of the advantages of using Code 39 Font over other barcode fonts are:
-
-- It is one of the most common and widely used barcode fonts in the world.
-- It is compatible with virtually any barcode reader or scanner.
-- It does not require a check digit or checksum to verify the accuracy of the barcode.
-- It can encode alphanumeric data in a simple and efficient way.
-- It is easy to use and integrate into any printing system or software.
-
- What are the limitations and drawbacks of Code 39 Font?
-Some of the limitations and drawbacks of using Code 39 Font are:
-
-- It has a limited character set that only includes uppercase letters (A-Z), numbers (0-9), and some special characters (-, ., $, /, +, %, and space).
-- It produces large barcodes that require more space and ink than other barcode fonts.
-- It has a low error correction capability that makes it vulnerable to damage or distortion.
-- It may not be suitable for high-density or high-volume barcode applications.
-- It may not be compatible with some international standards or regulations.
-
- How can I scan and read barcodes created with Code 39 Font?
-You can scan and read barcodes created with Code 39 Font by using any barcode reader or scanner that supports Code 39 Font. You just need to point the device at the barcode and wait for it to decode the data. Some of the barcode readers or scanners that support Code 39 Font are Wasp Barcode Scanner, Zebra Barcode Scanner, and similar handheld or camera-based scanners.
- What should I do if barcodes created with Code 39 Font do not print or scan correctly?
-If your barcodes do not print or scan correctly, you can work through the following checks:
-- Check your font: Make sure your font is Code 39 Font and not another barcode font. It should also be installed and activated on your computer or printer. If your font is not Code 39 Font or not installed or activated, you should download or install or activate it before using it to create barcodes.
-- Check your settings and options: Make sure your settings and options are appropriate and suitable for Code 39 Font. They should not be too small or too large, too light or too dark, too skewed or rotated, too narrow or wide, etc. They should also not be incompatible or conflicting with Code 39 Font. If your settings and options are inappropriate or unsuitable, you should adjust or change them before using Code 39 Font to create barcodes.
-- Check your printer and scanner: Make sure your printer and scanner are working properly and are compatible with Code 39 Font. They should have enough ink and paper, be clean and clear, be calibrated and aligned, etc. They should also support Code 39 Font and have the correct drivers and software installed. If your printer and scanner are not working properly or are not compatible, you should fix or replace them before using Code 39 Font to create barcodes.
-- Check your environment: Make sure your environment is conducive and favorable for Code 39 Font. It should have enough light and contrast, be free of noise and interference, be stable and secure, etc. It should also not have any factors or elements that may damage or distort the barcode, such as dirt, dust, moisture, heat, cold, etc. If your environment is not conducive or favorable, you should improve or change it before using Code 39 Font to create barcodes.
-
- If you follow these steps and still encounter errors or issues with Code 39 Font, you can contact the source or website that provided the font or the manufacturer or supplier of the printer or scanner for further assistance and support.
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Super Bear Adventure MOD APK and Explore Six Amazing Kingdoms.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Super Bear Adventure MOD APK and Explore Six Amazing Kingdoms.md
deleted file mode 100644
index 255e7447ea21477ea41db9a29afe9c613eb7b07b..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Super Bear Adventure MOD APK and Explore Six Amazing Kingdoms.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-Super Adventure Bear Mod APK: A Fun and Exciting Game for Android Users
-If you are looking for a new and exciting game to play on your Android device, you should check out Super Adventure Bear Mod APK. This is a modified version of the original Super Adventure Bear game, which is a platformer game where you control a cute and brave bear who goes on an epic adventure across different kingdoms. You can explore various environments, collect coins, stars, and keys, unlock new costumes and items, complete quests and challenges, and have a lot of fun along the way.
-Super Adventure Bear Mod APK is not available on the Google Play Store, but you can download it from a trusted source for free. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, how to play it, and its pros and cons. Read on to find out more!
- Features of Super Adventure Bear Mod APK
-Super Adventure Bear Mod APK has many features that make it more enjoyable and rewarding than the original game. Here are some of them:
- Unlocked All
-One of the best features of Super Adventure Bear Mod APK is that it unlocks all the content in the game. This means that you can access all the kingdoms, costumes, and items in the game without having to spend any real money or complete any requirements. You can explore the Forest Kingdom, the Ice Kingdom, the Desert Kingdom, the Candy Kingdom, and more. You can also dress up your bear with different hats, glasses, masks, shirts, pants, shoes, and accessories. You can even change the color of your bear's fur and eyes. With so many options to choose from, you can create your own unique style and personality for your bear.
- Unlimited Coins
-Another feature of Super Adventure Bear Mod APK is that it gives you unlimited coins in the game. Coins are the main currency in the game that you can use to buy costumes and items for your bear. You can also use coins to unlock new kingdoms and levels. Normally, you would have to collect coins by playing the game or watching ads. But with Super Adventure Bear Mod APK, you don't have to worry about running out of coins. You can collect as many coins as you want and buy anything you need.
- No Ads
-The last feature of Super Adventure Bear Mod APK that we will mention is that it removes all the ads from the game. Ads are annoying and distracting, especially when they pop up in the middle of your gameplay or when you are trying to enjoy the game's graphics and sound effects. They can also slow down your device or consume your data. But with Super Adventure Bear Mod APK, you don't have to deal with any ads at all. You can enjoy the game without any interruptions or distractions.
- How to Download and Install Super Adventure Bear Mod APK
-If you want to download and install Super Adventure Bear Mod APK on your Android device, you need to follow these simple steps:
- Step 1: Download the mod apk file from a trusted source
-The first thing you need to do is to find a reliable and safe source that provides the mod apk file for Super Adventure Bear. You can search online for some websites that offer this file, but make sure to check their reviews and ratings before downloading anything. You can also ask your friends or other gamers for recommendations. Once you find a good source, click on the download button and save the file on your device.
- Step 2: Enable unknown sources on your device settings
-The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, this is just a precautionary measure. You can trust Super Adventure Bear Mod APK as long as you download it from a trusted source.
- Step 3: Install the mod apk file and launch the game
-The last thing you need to do is to install the mod apk file and launch the game. To do this, go to your file manager and locate the mod apk file that you downloaded earlier. Tap on it and follow the instructions on the screen to install it. Once the installation is done, you can open the game and enjoy playing Super Adventure Bear Mod APK.
- How to Play Super Adventure Bear Mod APK
-Playing Super Adventure Bear Mod APK is easy and fun. Here are some tips on how to play it:
- Explore different kingdoms and collect coins, stars, and keys
-The main goal of Super Adventure Bear Mod APK is to explore different kingdoms and collect coins, stars, and keys. Each kingdom has its own theme, enemies, obstacles, and secrets. You can use the virtual joystick on the left side of the screen to move your bear around, and the jump button on the right side of the screen to make your bear jump over gaps or enemies. You can also swipe up or down on the screen to make your bear climb or slide. As you explore each kingdom, you will find coins, stars, and keys scattered around. Coins are used to buy costumes and items, stars are used to unlock new kingdoms and levels, and keys are used to open chests that contain rewards.
- Unlock new costumes and items for your bear
-Another fun aspect of Super Adventure Bear Mod APK is that you can unlock new costumes and items for your bear. You can access the shop by tapping on the shopping cart icon on the top right corner of the screen. There, you will see different categories of costumes and items that you can buy with coins. You can also see how many coins you have on the top left corner of the screen. You can buy hats, glasses, masks, shirts, pants, shoes, and accessories for your bear. You can also buy items such as balloons, rockets, magnets, shields, and more that can help you in your adventure. You can mix and match different costumes and items to create your own unique look and style for your bear.
- Complete quests and challenges to earn rewards and achievements
-The last thing we will mention about Super Adventure Bear Mod APK is that you can complete quests and challenges to earn rewards and achievements. You can access the quests by tapping on the clipboard icon on the top right corner of the screen. There, you will see different quests that you can complete by playing the game. For example, some quests may ask you to collect a certain number of coins or stars, or to defeat a certain number of enemies. Completing quests will give you extra coins or items as rewards. You can also access the achievements by tapping on the trophy icon on the top right corner of the screen. There, you will see different achievements that you can unlock by playing the game. For example, some achievements may ask you to explore all the kingdoms or unlock all the costumes. Unlocking achievements will give you extra stars or keys as rewards.
- Pros and Cons of Super Adventure Bear Mod APK
-Super Adventure Bear Mod APK is a great game that has many pros and cons. Here are some of them:
- Pros
-
-- Fun, colorful, and engaging gameplay: Super Adventure Bear Mod APK has a fun, colorful, and engaging gameplay that will keep you entertained for hours. The game has smooth controls, beautiful graphics, lively sound effects, and catchy music. The game also has a lot of variety and content, with different kingdoms, levels, enemies, obstacles, costumes, items, quests, and achievements to explore and enjoy.
-- Lots of content and customization options: Super Adventure Bear Mod APK has a lot of content and customization options that make it more fun and rewarding. You can unlock all the kingdoms, costumes, and items in the game without having to spend any real money or complete any requirements. You can also collect unlimited coins and use them to buy anything you want. You can also mix and match different costumes and items to create your own unique look and style for your bear.
-- No ads or in-app purchases: Super Adventure Bear Mod APK has no ads or in-app purchases that can ruin your gaming experience. You don't have to worry about any annoying interruptions or distractions while playing the game. You also don't have to worry about spending any real money or being tempted by any offers or deals. You can enjoy the game without any limitations or restrictions.
-
- Cons
-
-- Requires internet connection: Super Adventure Bear Mod APK requires an internet connection to play. This means that you cannot play the game offline or without a stable network. This can be a problem if you have a limited data plan or if you are in an area with poor signal or wifi. You may also experience some lag or delay while playing the game online.
-- May not be compatible with some devices: Super Adventure Bear Mod APK may not be compatible with some devices or operating systems. This means that you may not be able to download, install, or run the game on your device. This can be a problem if you have an older or lower-end device or if you have a different operating system than Android. You may also encounter some errors or crashes while playing the game on your device.
-- May have some bugs or glitches: Super Adventure Bear Mod APK may have some bugs or glitches that can affect your gaming experience. This means that you may encounter some problems or issues while playing the game, such as freezing, crashing, lagging, missing graphics, sound issues, etc. These bugs or glitches may be minor or major, depending on your device and the source of the mod apk file. You may also lose your progress or data if the game malfunctions.
-
- Conclusion
-Super Adventure Bear Mod APK is a fun and exciting game for Android users who love platformer games. It is a modified version of the original Super Adventure Bear game, which is a game where you control a cute and brave bear who goes on an epic adventure across different kingdoms. You can explore various environments, collect coins, stars, and keys, unlock new costumes and items, complete quests and challenges, and have a lot of fun along the way.
-Super Adventure Bear Mod APK has many features that make it more enjoyable and rewarding than the original game, such as unlocked all, unlimited coins, and no ads. It also has some pros and cons that you should consider before downloading and installing it, such as requiring internet connection, may not be compatible with some devices, and may have some bugs or glitches.
-If you want to try Super Adventure Bear Mod APK for yourself, you can download it from a trusted source for free. Just follow the steps we provided above on how to download and install it on your device. Then, launch the game and enjoy playing Super Adventure Bear Mod APK.
- FAQs
-Here are some frequently asked questions about Super Adventure Bear Mod APK:
- Q: What is Super Adventure Bear?
-A: Super Adventure Bear is an original platformer game where you control a cute and brave bear who goes on an epic adventure across different kingdoms. You can explore various environments, collect coins, stars, and keys, unlock new costumes and items, complete quests and challenges, and have a lot of fun along the way.
- Q: What is Super Adventure Bear Mod APK?
-A: Super Adventure Bear Mod APK is a modified version of the original Super Adventure Bear game that has many features that make it more enjoyable and rewarding than the original game. Some of these features are unlocked all, unlimited coins, and no ads.
- Q: How do I download and install Super Adventure Bear Mod APK?
-A: To download and install Super Adventure Bear Mod APK on your Android device, you need to follow these simple steps:
-
-- Download the mod apk file from a trusted source.
-- Enable unknown sources on your device settings.
-- Install the mod apk file and launch the game.
-
- Q: How do I play Super Adventure Bear Mod APK?
-A: To play Super Adventure Bear Mod APK, use the virtual joystick and jump button to explore the different kingdoms, collect coins, stars, and keys, unlock new costumes and items for your bear, and complete quests and challenges, as described in the sections above.
- Q: What are the main drawbacks of Super Adventure Bear Mod APK?
-A: The main drawbacks to keep in mind are:
-
-- Requires internet connection.
-- May not be compatible with some devices.
-- May have some bugs or glitches.
-
- I hope you enjoyed reading this article and learned something new about Super Adventure Bear Mod APK. If you have any questions or feedback, feel free to leave a comment below. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/quantization/__init__.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/quantization/__init__.py
deleted file mode 100644
index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/index.js
deleted file mode 100644
index bb24d739d9c5fade336cdf76b08b784ae2594d77..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/body-parser/index.js
+++ /dev/null
@@ -1,156 +0,0 @@
-/*!
- * body-parser
- * Copyright(c) 2014-2015 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-'use strict'
-
-/**
- * Module dependencies.
- * @private
- */
-
-var deprecate = require('depd')('body-parser')
-
-/**
- * Cache of loaded parsers.
- * @private
- */
-
-var parsers = Object.create(null)
-
-/**
- * @typedef Parsers
- * @type {function}
- * @property {function} json
- * @property {function} raw
- * @property {function} text
- * @property {function} urlencoded
- */
-
-/**
- * Module exports.
- * @type {Parsers}
- */
-
-exports = module.exports = deprecate.function(bodyParser,
- 'bodyParser: use individual json/urlencoded middlewares')
-
-/**
- * JSON parser.
- * @public
- */
-
-Object.defineProperty(exports, 'json', {
- configurable: true,
- enumerable: true,
- get: createParserGetter('json')
-})
-
-/**
- * Raw parser.
- * @public
- */
-
-Object.defineProperty(exports, 'raw', {
- configurable: true,
- enumerable: true,
- get: createParserGetter('raw')
-})
-
-/**
- * Text parser.
- * @public
- */
-
-Object.defineProperty(exports, 'text', {
- configurable: true,
- enumerable: true,
- get: createParserGetter('text')
-})
-
-/**
- * URL-encoded parser.
- * @public
- */
-
-Object.defineProperty(exports, 'urlencoded', {
- configurable: true,
- enumerable: true,
- get: createParserGetter('urlencoded')
-})
-
-/**
- * Create a middleware to parse json and urlencoded bodies.
- *
- * @param {object} [options]
- * @return {function}
- * @deprecated
- * @public
- */
-
-function bodyParser (options) {
- // use default type for parsers
- var opts = Object.create(options || null, {
- type: {
- configurable: true,
- enumerable: true,
- value: undefined,
- writable: true
- }
- })
-
- var _urlencoded = exports.urlencoded(opts)
- var _json = exports.json(opts)
-
- return function bodyParser (req, res, next) {
- _json(req, res, function (err) {
- if (err) return next(err)
- _urlencoded(req, res, next)
- })
- }
-}
-
-/**
- * Create a getter for loading a parser.
- * @private
- */
-
-function createParserGetter (name) {
- return function get () {
- return loadParser(name)
- }
-}
-
-/**
- * Load a parser module.
- * @private
- */
-
-function loadParser (parserName) {
- var parser = parsers[parserName]
-
- if (parser !== undefined) {
- return parser
- }
-
- // this uses a switch for static require analysis
- switch (parserName) {
- case 'json':
- parser = require('./lib/types/json')
- break
- case 'raw':
- parser = require('./lib/types/raw')
- break
- case 'text':
- parser = require('./lib/types/text')
- break
- case 'urlencoded':
- parser = require('./lib/types/urlencoded')
- break
- }
-
- // store to prevent invoking require()
- return (parsers[parserName] = parser)
-}
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_form/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_form/run.py
deleted file mode 100644
index 8a3a9dde53a9a172836d5423bc987306631cbea5..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_form/run.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import gradio as gr
-
-with gr.Blocks() as demo:
- error_box = gr.Textbox(label="Error", visible=False)
-
- name_box = gr.Textbox(label="Name")
- age_box = gr.Number(label="Age")
- symptoms_box = gr.CheckboxGroup(["Cough", "Fever", "Runny Nose"])
- submit_btn = gr.Button("Submit")
-
- with gr.Column(visible=False) as output_col:
- diagnosis_box = gr.Textbox(label="Diagnosis")
- patient_summary_box = gr.Textbox(label="Patient Summary")
-
- def submit(name, age, symptoms):
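-        # Returning a dict maps output components to new values; gr.update() changes properties such as visibility.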
- if len(name) == 0:
- return {error_box: gr.update(value="Enter name", visible=True)}
- if age < 0 or age > 200:
- return {error_box: gr.update(value="Enter valid age", visible=True)}
- return {
- output_col: gr.update(visible=True),
- diagnosis_box: "covid" if "Cough" in symptoms else "flu",
- patient_summary_box: f"{name}, {age} y/o"
- }
-
- submit_btn.click(
- submit,
- [name_box, age_box, symptoms_box],
- [error_box, diagnosis_box, patient_summary_box, output_col],
- )
-
-if __name__ == "__main__":
- demo.launch()
\ No newline at end of file
diff --git a/spaces/fun-research/FC-CLIP/fcclip/evaluation/__init__.py b/spaces/fun-research/FC-CLIP/fcclip/evaluation/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/giswqs/Streamlit/apps/timelapse.py b/spaces/giswqs/Streamlit/apps/timelapse.py
deleted file mode 100644
index 05edc7c21e7d0e68ada4d7db989d21210c37b61e..0000000000000000000000000000000000000000
--- a/spaces/giswqs/Streamlit/apps/timelapse.py
+++ /dev/null
@@ -1,1314 +0,0 @@
-import ee
-import os
-import datetime
-import fiona
-import geopandas as gpd
-import folium
-import streamlit as st
-import geemap.colormaps as cm
-import geemap.foliumap as geemap
-from datetime import date
-from .rois import *
-
-
-@st.cache(allow_output_mutation=True)
-def uploaded_file_to_gdf(data):
- import tempfile
- import os
- import uuid
-
- _, file_extension = os.path.splitext(data.name)
- file_id = str(uuid.uuid4())
- file_path = os.path.join(tempfile.gettempdir(), f"{file_id}{file_extension}")
-
- with open(file_path, "wb") as file:
- file.write(data.getbuffer())
-
- if file_path.lower().endswith(".kml"):
- fiona.drvsupport.supported_drivers["KML"] = "rw"
- gdf = gpd.read_file(file_path, driver="KML")
- else:
- gdf = gpd.read_file(file_path)
-
- return gdf
-
-
-def app():
-
- today = date.today()
-
- st.title("Create Timelapse")
-
- st.markdown(
- """
- An interactive web app for creating [Landsat](https://developers.google.com/earth-engine/datasets/catalog/landsat)/[GOES](https://jstnbraaten.medium.com/goes-in-earth-engine-53fbc8783c16) timelapse for any location around the globe.
- The app was built using [streamlit](https://streamlit.io), [geemap](https://geemap.org), and [Google Earth Engine](https://earthengine.google.com). For more info, check out my streamlit [blog post](https://blog.streamlit.io/creating-satellite-timelapse-with-streamlit-and-earth-engine).
- """
- )
-
- row1_col1, row1_col2 = st.columns([2, 1])
-
- if st.session_state.get("zoom_level") is None:
- st.session_state["zoom_level"] = 4
-
- st.session_state["ee_asset_id"] = None
- st.session_state["bands"] = None
- st.session_state["palette"] = None
- st.session_state["vis_params"] = None
-
- with row1_col1:
- m = geemap.Map(
- basemap="HYBRID",
- plugin_Draw=True,
- Draw_export=True,
- locate_control=True,
- plugin_LatLngPopup=False,
- )
- m.add_basemap("ROADMAP")
-
- with row1_col2:
-
- keyword = st.text_input("Search for a location:", "")
- if keyword:
- locations = geemap.geocode(keyword)
- if locations is not None and len(locations) > 0:
- str_locations = [str(g)[1:-1] for g in locations]
- location = st.selectbox("Select a location:", str_locations)
- loc_index = str_locations.index(location)
- selected_loc = locations[loc_index]
- lat, lng = selected_loc.lat, selected_loc.lng
- folium.Marker(location=[lat, lng], popup=location).add_to(m)
- m.set_center(lng, lat, 12)
- st.session_state["zoom_level"] = 12
-
- collection = st.selectbox(
- "Select a satellite image collection: ",
- [
- "Any Earth Engine ImageCollection",
- "Landsat TM-ETM-OLI Surface Reflectance",
- "Sentinel-2 MSI Surface Reflectance",
- "Geostationary Operational Environmental Satellites (GOES)",
- "MODIS Vegetation Indices (NDVI/EVI) 16-Day Global 1km",
- "MODIS Gap filled Land Surface Temperature Daily",
- "MODIS Ocean Color SMI",
- "USDA National Agriculture Imagery Program (NAIP)",
- ],
- index=1,
- )
-
- if collection in [
- "Landsat TM-ETM-OLI Surface Reflectance",
- "Sentinel-2 MSI Surface Reflectance",
- ]:
- roi_options = ["Uploaded GeoJSON"] + list(landsat_rois.keys())
-
- elif collection == "Geostationary Operational Environmental Satellites (GOES)":
- roi_options = ["Uploaded GeoJSON"] + list(goes_rois.keys())
-
- elif collection in [
- "MODIS Vegetation Indices (NDVI/EVI) 16-Day Global 1km",
- "MODIS Gap filled Land Surface Temperature Daily",
- ]:
- roi_options = ["Uploaded GeoJSON"] + list(modis_rois.keys())
- elif collection == "MODIS Ocean Color SMI":
- roi_options = ["Uploaded GeoJSON"] + list(ocean_rois.keys())
- else:
- roi_options = ["Uploaded GeoJSON"]
-
- if collection == "Any Earth Engine ImageCollection":
- keyword = st.text_input("Enter a keyword to search (e.g., MODIS):", "")
- if keyword:
-
- assets = geemap.search_ee_data(keyword)
- ee_assets = []
- for asset in assets:
- if asset["ee_id_snippet"].startswith("ee.ImageCollection"):
- ee_assets.append(asset)
-
- asset_titles = [x["title"] for x in ee_assets]
- dataset = st.selectbox("Select a dataset:", asset_titles)
- if len(ee_assets) > 0:
- st.session_state["ee_assets"] = ee_assets
- st.session_state["asset_titles"] = asset_titles
- index = asset_titles.index(dataset)
- ee_id = ee_assets[index]["id"]
- else:
- ee_id = ""
-
- if dataset is not None:
- with st.expander("Show dataset details", False):
- index = asset_titles.index(dataset)
- html = geemap.ee_data_html(st.session_state["ee_assets"][index])
- st.markdown(html, True)
- # elif collection == "MODIS Gap filled Land Surface Temperature Daily":
- # ee_id = ""
- else:
- ee_id = ""
-
- asset_id = st.text_input("Enter an ee.ImageCollection asset ID:", ee_id)
-
- if asset_id:
- with st.expander("Customize band combination and color palette", True):
- try:
- col = ee.ImageCollection.load(asset_id)
- st.session_state["ee_asset_id"] = asset_id
- except:
- st.error("Invalid Earth Engine asset ID.")
- st.session_state["ee_asset_id"] = None
- return
-
- img_bands = col.first().bandNames().getInfo()
- if len(img_bands) >= 3:
- default_bands = img_bands[:3][::-1]
- else:
- default_bands = img_bands[:]
- bands = st.multiselect(
- "Select one or three bands (RGB):", img_bands, default_bands
- )
- st.session_state["bands"] = bands
-
- if len(bands) == 1:
- palette_options = st.selectbox(
- "Color palette",
- cm.list_colormaps(),
- index=2,
- )
- palette_values = cm.get_palette(palette_options, 15)
- palette = st.text_area(
- "Enter a custom palette:",
- palette_values,
- )
- st.write(
- cm.plot_colormap(cmap=palette_options, return_fig=True)
- )
- st.session_state["palette"] = eval(palette)
-
- if bands:
- vis_params = st.text_area(
- "Enter visualization parameters",
- "{'bands': ["
- + ", ".join([f"'{band}'" for band in bands])
- + "]}",
- )
- else:
- vis_params = st.text_area(
- "Enter visualization parameters",
- "{}",
- )
- try:
- st.session_state["vis_params"] = eval(vis_params)
- st.session_state["vis_params"]["palette"] = st.session_state[
- "palette"
- ]
- except Exception as e:
- st.session_state["vis_params"] = None
- st.error(
- f"Invalid visualization parameters. It must be a dictionary."
- )
-
- elif collection == "MODIS Gap filled Land Surface Temperature Daily":
- with st.expander("Show dataset details", False):
- st.markdown(
- """
- See the [Awesome GEE Community Datasets](https://samapriya.github.io/awesome-gee-community-datasets/projects/daily_lst/).
- """
- )
-
- MODIS_options = ["Daytime (1:30 pm)", "Nighttime (1:30 am)"]
- MODIS_option = st.selectbox("Select a MODIS dataset:", MODIS_options)
- if MODIS_option == "Daytime (1:30 pm)":
- st.session_state[
- "ee_asset_id"
- ] = "projects/sat-io/open-datasets/gap-filled-lst/gf_day_1km"
- else:
- st.session_state[
- "ee_asset_id"
- ] = "projects/sat-io/open-datasets/gap-filled-lst/gf_night_1km"
-
- palette_options = st.selectbox(
- "Color palette",
- cm.list_colormaps(),
- index=90,
- )
- palette_values = cm.get_palette(palette_options, 15)
- palette = st.text_area(
- "Enter a custom palette:",
- palette_values,
- )
- st.write(cm.plot_colormap(cmap=palette_options, return_fig=True))
- st.session_state["palette"] = eval(palette)
- elif collection == "MODIS Ocean Color SMI":
- with st.expander("Show dataset details", False):
- st.markdown(
- """
- See the [Earth Engine Data Catalog](https://developers.google.com/earth-engine/datasets/catalog/NASA_OCEANDATA_MODIS-Aqua_L3SMI).
- """
- )
-
- MODIS_options = ["Aqua", "Terra"]
- MODIS_option = st.selectbox("Select a satellite:", MODIS_options)
- st.session_state["ee_asset_id"] = MODIS_option
- # if MODIS_option == "Daytime (1:30 pm)":
- # st.session_state[
- # "ee_asset_id"
- # ] = "projects/sat-io/open-datasets/gap-filled-lst/gf_day_1km"
- # else:
- # st.session_state[
- # "ee_asset_id"
- # ] = "projects/sat-io/open-datasets/gap-filled-lst/gf_night_1km"
-
- band_dict = {
- "Chlorophyll a concentration": "chlor_a",
- "Normalized fluorescence line height": "nflh",
- "Particulate organic carbon": "poc",
- "Sea surface temperature": "sst",
- "Remote sensing reflectance at band 412nm": "Rrs_412",
- "Remote sensing reflectance at band 443nm": "Rrs_443",
- "Remote sensing reflectance at band 469nm": "Rrs_469",
- "Remote sensing reflectance at band 488nm": "Rrs_488",
- "Remote sensing reflectance at band 531nm": "Rrs_531",
- "Remote sensing reflectance at band 547nm": "Rrs_547",
- "Remote sensing reflectance at band 555nm": "Rrs_555",
- "Remote sensing reflectance at band 645nm": "Rrs_645",
- "Remote sensing reflectance at band 667nm": "Rrs_667",
- "Remote sensing reflectance at band 678nm": "Rrs_678",
- }
-
- band_options = list(band_dict.keys())
- band = st.selectbox(
- "Select a band",
- band_options,
- band_options.index("Sea surface temperature"),
- )
- st.session_state["band"] = band_dict[band]
-
- colors = cm.list_colormaps()
- palette_options = st.selectbox(
- "Color palette",
- colors,
- index=colors.index("coolwarm"),
- )
- palette_values = cm.get_palette(palette_options, 15)
- palette = st.text_area(
- "Enter a custom palette:",
- palette_values,
- )
- st.write(cm.plot_colormap(cmap=palette_options, return_fig=True))
- st.session_state["palette"] = eval(palette)
-
- sample_roi = st.selectbox(
- "Select a sample ROI or upload a GeoJSON file:",
- roi_options,
- index=0,
- )
-
- add_outline = st.checkbox(
- "Overlay an administrative boundary on timelapse", False
- )
-
- if add_outline:
-
- with st.expander("Customize administrative boundary", True):
-
- overlay_options = {
- "User-defined": None,
- "Continents": "continents",
- "Countries": "countries",
- "US States": "us_states",
- "China": "china",
- }
-
- overlay = st.selectbox(
- "Select an administrative boundary:",
- list(overlay_options.keys()),
- index=2,
- )
-
- overlay_data = overlay_options[overlay]
-
- if overlay_data is None:
- overlay_data = st.text_input(
- "Enter an HTTP URL to a GeoJSON file or an ee.FeatureCollection asset id:",
- "https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/countries.geojson",
- )
-
- overlay_color = st.color_picker(
- "Select a color for the administrative boundary:", "#000000"
- )
- overlay_width = st.slider(
- "Select a line width for the administrative boundary:", 1, 20, 1
- )
- overlay_opacity = st.slider(
- "Select an opacity for the administrative boundary:",
- 0.0,
- 1.0,
- 1.0,
- 0.05,
- )
- else:
- overlay_data = None
- overlay_color = "black"
- overlay_width = 1
- overlay_opacity = 1
-
- with row1_col1:
-
- with st.expander(
- "Steps: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Expand this tab to see a demo 👉"
- ):
- video_empty = st.empty()
-
- data = st.file_uploader(
- "Upload a GeoJSON file to use as an ROI. Customize timelapse parameters and then click the Submit button 😇👇",
- type=["geojson", "kml", "zip"],
- )
-
- crs = "epsg:4326"
- if sample_roi == "Uploaded GeoJSON":
- if data is None:
- # st.info(
- # "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click Submit button"
- # )
- if collection in [
- "Geostationary Operational Environmental Satellites (GOES)",
- "USDA National Agriculture Imagery Program (NAIP)",
- ] and (not keyword):
- m.set_center(-100, 40, 3)
- # else:
- # m.set_center(4.20, 18.63, zoom=2)
- else:
- if collection in [
- "Landsat TM-ETM-OLI Surface Reflectance",
- "Sentinel-2 MSI Surface Reflectance",
- ]:
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[landsat_rois[sample_roi]]
- )
- elif (
- collection
- == "Geostationary Operational Environmental Satellites (GOES)"
- ):
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[goes_rois[sample_roi]["region"]]
- )
- elif collection == "MODIS Vegetation Indices (NDVI/EVI) 16-Day Global 1km":
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[modis_rois[sample_roi]]
- )
-
- if sample_roi != "Uploaded GeoJSON":
-
- if collection in [
- "Landsat TM-ETM-OLI Surface Reflectance",
- "Sentinel-2 MSI Surface Reflectance",
- ]:
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[landsat_rois[sample_roi]]
- )
- elif (
- collection
- == "Geostationary Operational Environmental Satellites (GOES)"
- ):
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[goes_rois[sample_roi]["region"]]
- )
- elif collection in [
- "MODIS Vegetation Indices (NDVI/EVI) 16-Day Global 1km",
- "MODIS Gap filled Land Surface Temperature Daily",
- ]:
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[modis_rois[sample_roi]]
- )
- elif collection == "MODIS Ocean Color SMI":
- gdf = gpd.GeoDataFrame(
- index=[0], crs=crs, geometry=[ocean_rois[sample_roi]]
- )
- st.session_state["roi"] = geemap.gdf_to_ee(gdf, geodesic=False)
- m.add_gdf(gdf, "ROI")
-
- elif data:
- gdf = uploaded_file_to_gdf(data)
- st.session_state["roi"] = geemap.gdf_to_ee(gdf, geodesic=False)
- m.add_gdf(gdf, "ROI")
-
- m.to_streamlit(height=600)
-
- with row1_col2:
-
- if collection in [
- "Landsat TM-ETM-OLI Surface Reflectance",
- "Sentinel-2 MSI Surface Reflectance",
- ]:
-
- if collection == "Landsat TM-ETM-OLI Surface Reflectance":
- sensor_start_year = 1984
- timelapse_title = "Landsat Timelapse"
- timelapse_speed = 5
- elif collection == "Sentinel-2 MSI Surface Reflectance":
- sensor_start_year = 2015
- timelapse_title = "Sentinel-2 Timelapse"
- timelapse_speed = 5
- video_empty.video("https://youtu.be/VVRK_-dEjR4")
-
- with st.form("submit_landsat_form"):
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- title = st.text_input(
- "Enter a title to show on the timelapse: ", timelapse_title
- )
- RGB = st.selectbox(
- "Select an RGB band combination:",
- [
- "Red/Green/Blue",
- "NIR/Red/Green",
- "SWIR2/SWIR1/NIR",
- "NIR/SWIR1/Red",
- "SWIR2/NIR/Red",
- "SWIR2/SWIR1/Red",
- "SWIR1/NIR/Blue",
- "NIR/SWIR1/Blue",
- "SWIR2/NIR/Green",
- "SWIR1/NIR/Red",
- "SWIR2/NIR/SWIR1",
- "SWIR1/NIR/SWIR2",
- ],
- index=9,
- )
-
- frequency = st.selectbox(
- "Select a temporal frequency:",
- ["year", "quarter", "month"],
- index=0,
- )
-
- with st.expander("Customize timelapse"):
-
- speed = st.slider("Frames per second:", 1, 30, timelapse_speed)
- dimensions = st.slider(
- "Maximum dimensions (Width*Height) in pixels", 768, 2000, 768
- )
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- years = st.slider(
- "Start and end year:",
- sensor_start_year,
- today.year,
- (sensor_start_year, today.year),
- )
- months = st.slider("Start and end month:", 1, 12, (1, 12))
- font_size = st.slider("Font size:", 10, 50, 30)
- font_color = st.color_picker("Font color:", "#ffffff")
- apply_fmask = st.checkbox(
- "Apply fmask (remove clouds, shadows, snow)", True
- )
- font_type = st.selectbox(
- "Select the font type for the title:",
- ["arial.ttf", "alibaba.otf"],
- index=0,
- )
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_fire_image = st.empty()
- empty_video = st.container()
- submitted = st.form_submit_button("Submit")
- if submitted:
-
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
-
- empty_text.text("Computing... Please wait...")
-
- start_year = years[0]
- end_year = years[1]
- start_date = str(months[0]).zfill(2) + "-01"
- end_date = str(months[1]).zfill(2) + "-30"
- bands = RGB.split("/")
-
- try:
- if collection == "Landsat TM-ETM-OLI Surface Reflectance":
- out_gif = geemap.landsat_timelapse(
- roi=roi,
- out_gif=out_gif,
- start_year=start_year,
- end_year=end_year,
- start_date=start_date,
- end_date=end_date,
- bands=bands,
- apply_fmask=apply_fmask,
- frames_per_second=speed,
- dimensions=dimensions,
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- frequency=frequency,
- date_format=None,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=True,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- elif collection == "Sentinel-2 MSI Surface Reflectance":
- out_gif = geemap.sentinel2_timelapse(
- roi=roi,
- out_gif=out_gif,
- start_year=start_year,
- end_year=end_year,
- start_date=start_date,
- end_date=end_date,
- bands=bands,
- apply_fmask=apply_fmask,
- frames_per_second=speed,
- dimensions=dimensions,
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- frequency=frequency,
- date_format=None,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=True,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- except:
- empty_text.error(
-                                "An error occurred while computing the timelapse. You probably requested too much data. Try reducing the ROI or timespan."
- )
- st.stop()
-
- if out_gif is not None and os.path.exists(out_gif):
-
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- else:
- empty_text.error(
- "Something went wrong. You probably requested too much data. Try reducing the ROI or timespan."
- )
-
- elif collection == "Geostationary Operational Environmental Satellites (GOES)":
-
- video_empty.video("https://youtu.be/16fA2QORG4A")
-
- with st.form("submit_goes_form"):
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- satellite = st.selectbox("Select a satellite:", ["GOES-17", "GOES-16"])
- earliest_date = datetime.date(2017, 7, 10)
- latest_date = datetime.date.today()
-
- if sample_roi == "Uploaded GeoJSON":
- roi_start_date = today - datetime.timedelta(days=2)
- roi_end_date = today - datetime.timedelta(days=1)
- roi_start_time = datetime.time(14, 00)
- roi_end_time = datetime.time(1, 00)
- else:
- roi_start = goes_rois[sample_roi]["start_time"]
- roi_end = goes_rois[sample_roi]["end_time"]
- roi_start_date = datetime.datetime.strptime(
- roi_start[:10], "%Y-%m-%d"
- )
- roi_end_date = datetime.datetime.strptime(roi_end[:10], "%Y-%m-%d")
- roi_start_time = datetime.time(
- int(roi_start[11:13]), int(roi_start[14:16])
- )
- roi_end_time = datetime.time(
- int(roi_end[11:13]), int(roi_end[14:16])
- )
-
- start_date = st.date_input("Select the start date:", roi_start_date)
- end_date = st.date_input("Select the end date:", roi_end_date)
-
- with st.expander("Customize timelapse"):
-
- add_fire = st.checkbox("Add Fire/Hotspot Characterization", False)
-
- scan_type = st.selectbox(
- "Select a scan type:", ["Full Disk", "CONUS", "Mesoscale"]
- )
-
- start_time = st.time_input(
- "Select the start time of the start date:", roi_start_time
- )
-
- end_time = st.time_input(
- "Select the end time of the end date:", roi_end_time
- )
-
- start = (
- start_date.strftime("%Y-%m-%d")
- + "T"
- + start_time.strftime("%H:%M:%S")
- )
- end = (
- end_date.strftime("%Y-%m-%d")
- + "T"
- + end_time.strftime("%H:%M:%S")
- )
-
- speed = st.slider("Frames per second:", 1, 30, 5)
- add_progress_bar = st.checkbox("Add a progress bar", True)
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- font_size = st.slider("Font size:", 10, 50, 20)
- font_color = st.color_picker("Font color:", "#ffffff")
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_video = st.container()
- empty_fire_text = st.empty()
- empty_fire_image = st.empty()
-
- submitted = st.form_submit_button("Submit")
- if submitted:
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
- empty_text.text("Computing... Please wait...")
-
- geemap.goes_timelapse(
- out_gif,
- start_date=start,
- end_date=end,
- data=satellite,
- scan=scan_type.replace(" ", "_").lower(),
- region=roi,
- dimensions=768,
- framesPerSecond=speed,
- date_format="YYYY-MM-dd HH:mm",
- xy=("3%", "3%"),
- text_sequence=None,
- font_type="arial.ttf",
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- mp4=mp4,
- fading=fading,
- )
-
- if out_gif is not None and os.path.exists(out_gif):
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- if add_fire:
- out_fire_gif = geemap.temp_file_path(".gif")
- empty_fire_text.text(
- "Delineating Fire Hotspot... Please wait..."
- )
- geemap.goes_fire_timelapse(
- out_fire_gif,
- start_date=start,
- end_date=end,
- data=satellite,
- scan=scan_type.replace(" ", "_").lower(),
- region=roi,
- dimensions=768,
- framesPerSecond=speed,
- date_format="YYYY-MM-dd HH:mm",
- xy=("3%", "3%"),
- text_sequence=None,
- font_type="arial.ttf",
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- )
- if os.path.exists(out_fire_gif):
- empty_fire_image.image(out_fire_gif)
- else:
- empty_text.text(
- "Something went wrong, either the ROI is too big or there are no data available for the specified date range. Please try a smaller ROI or different date range."
- )
-
- elif collection == "MODIS Vegetation Indices (NDVI/EVI) 16-Day Global 1km":
-
- video_empty.video("https://youtu.be/16fA2QORG4A")
-
- satellite = st.selectbox("Select a satellite:", ["Terra", "Aqua"])
- band = st.selectbox("Select a band:", ["NDVI", "EVI"])
-
- with st.form("submit_modis_form"):
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- with st.expander("Customize timelapse"):
-
- start = st.date_input(
- "Select a start date:", datetime.date(2000, 2, 8)
- )
- end = st.date_input("Select an end date:", datetime.date.today())
-
- start_date = start.strftime("%Y-%m-%d")
- end_date = end.strftime("%Y-%m-%d")
-
- speed = st.slider("Frames per second:", 1, 30, 5)
- add_progress_bar = st.checkbox("Add a progress bar", True)
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- font_size = st.slider("Font size:", 10, 50, 20)
- font_color = st.color_picker("Font color:", "#ffffff")
-
- font_type = st.selectbox(
- "Select the font type for the title:",
- ["arial.ttf", "alibaba.otf"],
- index=0,
- )
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_video = st.container()
-
- submitted = st.form_submit_button("Submit")
- if submitted:
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
-
- empty_text.text("Computing... Please wait...")
-
- geemap.modis_ndvi_timelapse(
- out_gif,
- satellite,
- band,
- start_date,
- end_date,
- roi,
- 768,
- speed,
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- mp4=mp4,
- fading=fading,
- )
-
- geemap.reduce_gif_size(out_gif)
-
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- elif collection == "Any Earth Engine ImageCollection":
-
- with st.form("submit_ts_form"):
- with st.expander("Customize timelapse"):
-
- title = st.text_input(
- "Enter a title to show on the timelapse: ", "Timelapse"
- )
- start_date = st.date_input(
- "Select the start date:", datetime.date(2020, 1, 1)
- )
- end_date = st.date_input(
- "Select the end date:", datetime.date.today()
- )
- frequency = st.selectbox(
- "Select a temporal frequency:",
- ["year", "quarter", "month", "day", "hour", "minute", "second"],
- index=0,
- )
- reducer = st.selectbox(
- "Select a reducer for aggregating data:",
- ["median", "mean", "min", "max", "sum", "variance", "stdDev"],
- index=0,
- )
- data_format = st.selectbox(
- "Select a date format to show on the timelapse:",
- [
- "YYYY-MM-dd",
- "YYYY",
-                            "YYYY-MM",
- "YYYY-MM-dd HH:mm",
- "YYYY-MM-dd HH:mm:ss",
- "HH:mm",
- "HH:mm:ss",
- "w",
- "M",
- "d",
- "D",
- ],
- index=0,
- )
-
- speed = st.slider("Frames per second:", 1, 30, 5)
- add_progress_bar = st.checkbox("Add a progress bar", True)
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- font_size = st.slider("Font size:", 10, 50, 30)
- font_color = st.color_picker("Font color:", "#ffffff")
- font_type = st.selectbox(
- "Select the font type for the title:",
- ["arial.ttf", "alibaba.otf"],
- index=0,
- )
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_video = st.container()
- empty_fire_image = st.empty()
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- submitted = st.form_submit_button("Submit")
- if submitted:
-
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
-
- empty_text.text("Computing... Please wait...")
- try:
- geemap.create_timelapse(
- st.session_state.get("ee_asset_id"),
- start_date=start_date.strftime("%Y-%m-%d"),
- end_date=end_date.strftime("%Y-%m-%d"),
- region=roi,
- frequency=frequency,
- reducer=reducer,
- date_format=data_format,
- out_gif=out_gif,
- bands=st.session_state.get("bands"),
- palette=st.session_state.get("palette"),
- vis_params=st.session_state.get("vis_params"),
- dimensions=768,
- frames_per_second=speed,
- crs="EPSG:3857",
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- except:
- empty_text.error(
- "An error occurred while computing the timelapse. You probably requested too much data. Try reducing the ROI or timespan."
- )
-
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- elif collection in [
- "MODIS Gap filled Land Surface Temperature Daily",
- "MODIS Ocean Color SMI",
- ]:
-
- with st.form("submit_ts_form"):
- with st.expander("Customize timelapse"):
-
- title = st.text_input(
- "Enter a title to show on the timelapse: ",
- "Surface Temperature",
- )
- start_date = st.date_input(
- "Select the start date:", datetime.date(2018, 1, 1)
- )
- end_date = st.date_input(
- "Select the end date:", datetime.date(2020, 12, 31)
- )
- frequency = st.selectbox(
- "Select a temporal frequency:",
- ["year", "quarter", "month", "week", "day"],
- index=2,
- )
- reducer = st.selectbox(
- "Select a reducer for aggregating data:",
- ["median", "mean", "min", "max", "sum", "variance", "stdDev"],
- index=0,
- )
-
- vis_params = st.text_area(
- "Enter visualization parameters",
- "",
-                        help="Enter a string in the format of a dictionary, such as {'min': 23, 'max': 32}",
- )
-
- speed = st.slider("Frames per second:", 1, 30, 5)
- add_progress_bar = st.checkbox("Add a progress bar", True)
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- font_size = st.slider("Font size:", 10, 50, 30)
- font_color = st.color_picker("Font color:", "#ffffff")
- font_type = st.selectbox(
- "Select the font type for the title:",
- ["arial.ttf", "alibaba.otf"],
- index=0,
- )
- add_colorbar = st.checkbox("Add a colorbar", True)
- colorbar_label = st.text_input(
- "Enter the colorbar label:", "Surface Temperature (°C)"
- )
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_video = st.container()
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- submitted = st.form_submit_button("Submit")
- if submitted:
-
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
-
- empty_text.text("Computing... Please wait...")
- try:
- if (
- collection
- == "MODIS Gap filled Land Surface Temperature Daily"
- ):
- out_gif = geemap.create_timelapse(
- st.session_state.get("ee_asset_id"),
- start_date=start_date.strftime("%Y-%m-%d"),
- end_date=end_date.strftime("%Y-%m-%d"),
- region=roi,
- bands=None,
- frequency=frequency,
- reducer=reducer,
- date_format=None,
- out_gif=out_gif,
- palette=st.session_state.get("palette"),
- vis_params=None,
- dimensions=768,
- frames_per_second=speed,
- crs="EPSG:3857",
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- add_colorbar=add_colorbar,
- colorbar_label=colorbar_label,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- elif collection == "MODIS Ocean Color SMI":
- if vis_params.startswith("{") and vis_params.endswith(
- "}"
- ):
- vis_params = eval(vis_params)
- else:
- vis_params = None
- out_gif = geemap.modis_ocean_color_timelapse(
- st.session_state.get("ee_asset_id"),
- start_date=start_date.strftime("%Y-%m-%d"),
- end_date=end_date.strftime("%Y-%m-%d"),
- region=roi,
- bands=st.session_state["band"],
- frequency=frequency,
- reducer=reducer,
- date_format=None,
- out_gif=out_gif,
- palette=st.session_state.get("palette"),
- vis_params=vis_params,
- dimensions=768,
- frames_per_second=speed,
- crs="EPSG:3857",
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- add_colorbar=add_colorbar,
- colorbar_label=colorbar_label,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- except:
- empty_text.error(
- "Something went wrong. You probably requested too much data. Try reducing the ROI or timespan."
- )
-
- if out_gif is not None and os.path.exists(out_gif):
-
- geemap.reduce_gif_size(out_gif)
-
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- else:
- st.error(
- "Something went wrong. You probably requested too much data. Try reducing the ROI or timespan."
- )
-
- elif collection == "USDA National Agriculture Imagery Program (NAIP)":
-
- with st.form("submit_naip_form"):
- with st.expander("Customize timelapse"):
-
- title = st.text_input(
- "Enter a title to show on the timelapse: ", "NAIP Timelapse"
- )
-
- years = st.slider(
- "Start and end year:",
- 2003,
- today.year,
- (2003, today.year),
- )
-
- bands = st.selectbox(
- "Select a band combination:", ["N/R/G", "R/G/B"], index=0
- )
-
- speed = st.slider("Frames per second:", 1, 30, 3)
- add_progress_bar = st.checkbox("Add a progress bar", True)
- progress_bar_color = st.color_picker(
- "Progress bar color:", "#0000ff"
- )
- font_size = st.slider("Font size:", 10, 50, 30)
- font_color = st.color_picker("Font color:", "#ffffff")
- font_type = st.selectbox(
- "Select the font type for the title:",
- ["arial.ttf", "alibaba.otf"],
- index=0,
- )
- fading = st.slider(
- "Fading duration (seconds) for each frame:", 0.0, 3.0, 0.0
- )
- mp4 = st.checkbox("Save timelapse as MP4", True)
-
- empty_text = st.empty()
- empty_image = st.empty()
- empty_video = st.container()
- empty_fire_image = st.empty()
-
- roi = None
- if st.session_state.get("roi") is not None:
- roi = st.session_state.get("roi")
- out_gif = geemap.temp_file_path(".gif")
-
- submitted = st.form_submit_button("Submit")
- if submitted:
-
- if sample_roi == "Uploaded GeoJSON" and data is None:
- empty_text.warning(
- "Steps to create a timelapse: Draw a rectangle on the map -> Export it as a GeoJSON -> Upload it back to the app -> Click the Submit button. Alternatively, you can select a sample ROI from the dropdown list."
- )
- else:
-
- empty_text.text("Computing... Please wait...")
- try:
- geemap.naip_timelapse(
- roi,
- years[0],
- years[1],
- out_gif,
- bands=bands.split("/"),
- palette=st.session_state.get("palette"),
- vis_params=None,
- dimensions=768,
- frames_per_second=speed,
- crs="EPSG:3857",
- overlay_data=overlay_data,
- overlay_color=overlay_color,
- overlay_width=overlay_width,
- overlay_opacity=overlay_opacity,
- title=title,
- title_xy=("2%", "90%"),
- add_text=True,
- text_xy=("2%", "2%"),
- text_sequence=None,
- font_type=font_type,
- font_size=font_size,
- font_color=font_color,
- add_progress_bar=add_progress_bar,
- progress_bar_color=progress_bar_color,
- progress_bar_height=5,
- loop=0,
- mp4=mp4,
- fading=fading,
- )
- except:
- empty_text.error(
- "Something went wrong. You either requested too much data or the ROI is outside the U.S."
- )
-
- if out_gif is not None and os.path.exists(out_gif):
-
- empty_text.text(
- "Right click the GIF to save it to your computer👇"
- )
- empty_image.image(out_gif)
-
- out_mp4 = out_gif.replace(".gif", ".mp4")
- if mp4 and os.path.exists(out_mp4):
- with empty_video:
- st.text(
- "Right click the MP4 to save it to your computer👇"
- )
- st.video(out_gif.replace(".gif", ".mp4"))
-
- else:
- st.error(
- "Something went wrong. You either requested too much data or the ROI is outside the U.S."
- )
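Most of the Streamlit form above just collects parameters and hands them to geemap's timelapse helpers. A stripped-down sketch of the core Landsat call, using only arguments that appear in the app (the ROI and year range here are hypothetical, and Earth Engine must already be authenticated):

```python
import ee
import geemap

ee.Initialize()  # assumes Earth Engine credentials are already configured

# Hypothetical ROI; the app builds this from an uploaded GeoJSON via geemap.gdf_to_ee().
roi = ee.Geometry.BBox(-115.47, 35.95, -114.06, 36.42)

out_gif = geemap.temp_file_path(".gif")
geemap.landsat_timelapse(
    roi=roi,
    out_gif=out_gif,
    start_year=1984,
    end_year=2021,
    start_date="01-01",
    end_date="12-30",
    bands=["SWIR1", "NIR", "Red"],   # default "SWIR1/NIR/Red" combination from the form
    frequency="year",
    frames_per_second=5,
    apply_fmask=True,
    title="Landsat Timelapse",
)
print(out_gif)
```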
diff --git a/spaces/glyszt/vt/vtoonify/model/vgg.py b/spaces/glyszt/vt/vtoonify/model/vgg.py
deleted file mode 100644
index a1043d5bd8bdd0d1484d2270ae0d33c29495856c..0000000000000000000000000000000000000000
--- a/spaces/glyszt/vt/vtoonify/model/vgg.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision
-
-# VGG architecture, used for the perceptual loss with a pretrained VGG network
-class VGG19(torch.nn.Module):
- def __init__(self, requires_grad=False):
- super().__init__()
- vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- for x in range(2):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(2, 7):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(7, 12):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(12, 21):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(21, 32):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- for x in range(32, 36):
- self.slice6.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- self.pool = nn.AdaptiveAvgPool2d(output_size=1)
-
- self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1,-1, 1, 1).cuda() * 2 - 1
- self.std = torch.tensor([0.229, 0.224, 0.225]).view(1,-1, 1, 1).cuda() * 2
-
-    def forward(self, X): # relu1_1
- X = (X-self.mean)/self.std
- h_relu1 = self.slice1(X)
- h_relu2 = self.slice2(h_relu1)
- h_relu3 = self.slice3(h_relu2)
- h_relu4 = self.slice4(h_relu3)
- h_relu5 = self.slice5[:-2](h_relu4)
- out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
- return out
-
-# Perceptual loss that uses a pretrained VGG network
-class VGGLoss(nn.Module):
- def __init__(self):
- super(VGGLoss, self).__init__()
- self.vgg = VGG19().cuda()
- self.criterion = nn.L1Loss()
- self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
-
- def forward(self, x, y):
- x_vgg, y_vgg = self.vgg(x), self.vgg(y)
- loss = 0
- for i in range(len(x_vgg)):
- loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())
- return loss
\ No newline at end of file
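A minimal usage sketch of the `VGGLoss` above (the import path is hypothetical; a CUDA device is required because the module hard-codes `.cuda()`, and the pretrained VGG-19 weights are downloaded on first use):

```python
import torch
from model.vgg import VGGLoss  # hypothetical import path for the module above

criterion = VGGLoss()  # internally builds VGG19().cuda() with frozen weights

# Two batches of RGB images in [-1, 1], matching the normalization in forward().
pred = (torch.rand(2, 3, 256, 256, device="cuda") * 2 - 1).requires_grad_()
target = torch.rand(2, 3, 256, 256, device="cuda") * 2 - 1

loss = criterion(pred, target)  # weighted L1 over the five ReLU feature maps
loss.backward()                 # gradients flow back to `pred` only
print(loss.item())
```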
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Bluesoleil 8 Keygen With Activation Tool REPACK.md b/spaces/gotiQspiryo/whisper-ui/examples/Bluesoleil 8 Keygen With Activation Tool REPACK.md
deleted file mode 100644
index 283596fee948ad3a391a2406f0c48530305e5256..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Bluesoleil 8 Keygen With Activation Tool REPACK.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Bluesoleil 8 Keygen With Activation Tool
DOWNLOAD ✓ https://urlgoal.com/2uyN3m
-
-WAV, MP3, OGG, and FLAC files for free.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack helps you to convert .WAV, MP3, OGG, and FLAC files into .M4A, MP3, FLAC, OGG, OGA, WAV, WMA, WAV, OGG, WMA, and M4A files.
-
-You can also get the best quality and high resolution files of all your music files that you have downloaded or got.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack provides you with very easy to use and friendly interface.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack is free and available with a trial version.
-
-After you have got the version, you need to purchase the license key for it.
-
-You can use it on all types of operating systems such as: Mac, PC, iPad, iPhone, iPod, Android and more.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack has been designed with the aim of providing the best quality and the highest resolution.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack is the best tool in the world that provides you with various music conversion tools.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack not only allows you to convert music files but also allows you to easily create audio CDs, audiobooks, and podcasts.
-
-You can get your music free of charge from the internet and even download from it.
-
-You can keep your music for as long as you want, and then convert it again to .MP3, .M4A, .OGG, .WAV, and .WMA.
-
-You can get a keygen that helps you to convert .MP3 files to .OGG, .M4A, .MP3, .OGG, .WAV, .WMA, .OGA, .FLAC, and .WAV files.
-
-BlueSoleil 10 Serial Key Crack + Keygen Full Version Crack also lets you convert .OGG, .WAV, .OGA, .FLAC, and .WMA files into .M4A 4fefd39f24
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Cod MW3 Full Game (Single Player And MultiPlayer) v2 Key Generator Download and Play Now.md b/spaces/gotiQspiryo/whisper-ui/examples/Cod MW3 Full Game (Single Player And MultiPlayer) v2 Key Generator Download and Play Now.md
deleted file mode 100644
index 1564fef6efb3bfe22f6f8169caf13dd8907814f4..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Cod MW3 Full Game (Single Player And MultiPlayer) v2 Key Generator Download and Play Now.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-After successfully downing an enemy player in a squad-based mode, you can interrogate them, briefly pinging the location of their allies both via the mini-map and outlines (like a Snapshot Grenade). This effect dissipates once the interrogator is more than 10 meters away from the interrogated enemy.
-Cod MW3 Full Game (Single Player And MultiPlayer) v2 Key Generator
Download File → https://urlgoal.com/2uyNza
-Even though the main attraction of the Call of Duty franchise is the Multiplayer and Zombie experience, that doesn't mean that the Campaigns are something hardcore fans should ignore. In fact, players can learn more about their favorite Operators like Soap and Ghost by playing the Campaign. But the truth is most COD fans miss out on these excellent story elements in favor of the more exciting multiplayer experience.
-That's why Infinity Ward decided to switch things up by letting players play the Call of Duty Modern Warfare 2 campaign a week before the full game launch so that players can experience it to the fullest. One gameplay feature that stands out is the ability to loot powerful weapons found in Safes scattered around different missions. So, players that want to take their gunfights to a whole new level will want to know the location and codes of these three Safes (as a bonus, finding and unlocking all three Safes unlocks the Gentleman Thief Achievement/Trophy).
-Strongholds are a brand new, multifaceted feature that are going to fundamentally change the way some players choose to spend their time. They represent a high-risk opportunity for teams to get a full Loadout early and a localized UAV for strategic positioning. Teams that are the first to clear a Stronghold will be granted a single Black Site key - an invitation to challenge fortified enemy combatants for the most valuable loot Al Mazrah has to offer. Black Sites are only found in Battle Royale.
-The Call of Duty Modern Warfare 2 release date is very nearly here, and Activision's servers are sure to be in for a challenging weekend. But when exactly can we start enjoying the MW2 multiplayer content?
-Of course, if you placed a Call of Duty pre-order this year, you're probably already enjoying the MW2 campaign early access, working through all that solo content and hunting for safe codes as you go. But multiplayer, as ever, is a huge part of the equation.
-
-On that date, players can expect to jump into the new paid-for Call of Duty game in all its multiplayer glory. However, in terms of free-to-play content, it's worth knowing that the Warzone 2 release date will not occur until the following month.
-And that's not all, as there will also be changes made to the weapon customisation, or Gunsmith as it's known here, while Ricochet, the anti-cheat system, will be in the game from launch, which is a move we fully support. Cheating in a solo game is one thing, but any true gamer will know cheating in multiplayer is one of the biggest gaming sins going.
-There are also two new multiplayer modes: Knockout, a life-based mode where teams attempt to capture a package, and Prisoner Rescue, in which an attacking team attempts to extract a hostage. Co-op mode Special Ops will also be returning.
-There's no trailer yet for the 2022 CoD game (unless you count the artwork reveal we embedded earlier), but we'll be sure to update you when the first proper footage of the new Modern Warfare sequel does appear online. The latest rumblings point to June for a trailer reveal, so not too long to wait hopefully.
-The Call of Duty series has successfully produced a line of outstanding shooter games. One of the things that players love about the franchise is its consistency. Players know what to expect from the series, and are rarely disappointed, so they go out and buy release after release. The series gains more avid players and repeat customers by the minute, reaping enormous profits, which they utilize to create the next best game in the series and perpetuating its legendary legacy as one of the best gaming franchises in history.
-Activision confirmed the title was in pre-production on April 9, 2010[1] and was released on November 8, 2011.[2] Sledgehammer Games, originally planning to work on the canceled action-adventure Call of Duty, worked with Infinity Ward to complete all aspects of Call of Duty: Modern Warfare 3. Raven Software assisted with the user interface and downloadable content, as they had with previous Call of Duty games. Treyarch also assisted with the multiplayer, designing the weapon artworks, and developing the survival mode. Treyarch also developed the Wii version of Modern Warfare 3. Neversoft assisted Infinity Ward with the campaign and worked on the Special Ops Chaos mode. The downloadable content was released once per month over a period of nine months. It was free to users of Call of Duty ELITE if they had the Premium version of the service.[3] Sledgehammer was aiming for a bug-free Call of Duty title and ratings above 95%.[4]
-The Multiplayer was revealed to the general public during the Call of Duty XP Event on September 2 and 3, 2011. Attendees were able to experience multiplayer gameplay two months before the game's release.
-The waves are randomized, yet increasingly difficult, so that in the later stages of the game, players can expect to face multiple Juggernauts, Dogs, Suicide Bombers strapped with C4, and enemy air support. There is no means of winning in this game mode, as the waves will continue until the player dies. Like other Special Ops missions, co-op is available for this mode, both online or split-screen.[7]
-Modern Warfare II brings several massive changes to the Call of Duty gameplay across the board. Several new changes have been touted, such as advanced AI system in the campaign and co-op modes, water physics and swimming mechanics, as well as an overhauled vehicle system. The Gunsmith system is also given a revamp, allowing players to fine-tune specific attachments to suit their playstyles.
Infinity Ward brings fans state-of-the-art gameplay, with all-new gun handling, advanced AI system, a new Gunsmith and a suite of other gameplay and graphical innovations that elevate the franchise to new heights.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download [BETTER] Film The Last Airbender 2 Sub Indonesia.md b/spaces/gotiQspiryo/whisper-ui/examples/Download [BETTER] Film The Last Airbender 2 Sub Indonesia.md
deleted file mode 100644
index a98abfa4350541c84c1ca171fb97422a77100e92..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Download [BETTER] Film The Last Airbender 2 Sub Indonesia.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-GuiltyMoviePleasures #TheLastAirbender #AvatarOn the 10 year anniversary of it's release, Jesse McIntosh (who has not seen any Last. avatar the last airbender siblings. 7.3M views. Discover short videos related to avatar the last airbender siblings on TikTok.
-avatar aang indonesia, dubber avatar aang indonesia, nonton avatar aang. Avatar: The Last Airbender - Book 3: Fire Chapter 14 English Subtitle Indonesia. Amazon.com: The Last Airbender 2 Movie.. The Last Airbender (Two-Disc Blu-ray/DVD Combo). Avatar The Last Airbender - Book 2 Earth, Vol. 3.
-download film the last airbender 2 sub indonesia
Download File ✺✺✺ https://urlgoal.com/2uyN9I
-download film Avatar the last airbender subtitle indonesia ( book 1 water ) episode 06 full movie di link di bawah ini:. Download Lagu Avatar The Last Airbender Full Movie MP3 dapat kamu temukan dengan cara klik tombol Download di bawah secara GRATIS, CEPAT dan MUDAH, untuk link.
-Download Lagu MP3 & Video: Avatar The Last Airbender Full Movie. last airbender full movie subtitle indonesia part 2; avatar the last airbender season 2. Nonton Film The Last Airbender (2010) Streaming Movie Dunia21 Bioskop Cinema 21 Box Office Movies 21 Subtitle Indonesia Gratis Online.
-Avatar aang indonesia, dubber avatar aang indonesia, nonton avatar aang. Avatar: The Last Airbender - Book 3: Fire Chapter 14 English Subtitle Indonesia. Amazon.com: The Last Airbender 2 Movie.. The Last Airbender (Two-Disc Blu-ray/DVD Combo). Avatar The Last Airbender - Book 2 Earth, Vol. 3.
-guiltymoviepleasures #thelastairbender #avataron the 10 year anniversary of it's release, jesse mcintosh (who has not seen any last. avatar the last airbender siblings. 7.3m views. discover short videos related to avatar the last airbender siblings on tiktok.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/FULL MakeMusic Finale 25.5.0.297 Keygen For Windows [CrackzSoft NEW.md b/spaces/gotiQspiryo/whisper-ui/examples/FULL MakeMusic Finale 25.5.0.297 Keygen For Windows [CrackzSoft NEW.md
deleted file mode 100644
index 161361cd284d219bb153a28dd4663a1425ee4596..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/FULL MakeMusic Finale 25.5.0.297 Keygen For Windows [CrackzSoft NEW.md
+++ /dev/null
@@ -1,6 +0,0 @@
-FULL MakeMusic Finale 25.5.0.297 Keygen For Windows [CrackzSoft
Download ····· https://urlgoal.com/2uyNEW
-
-A pdf of this report is available for download at: http://eprints.soton.ac.uk ... U.K. Exploration and Production ... Samsung GT-e1272 Driver.epub 4d29de3e1b
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Falcon 4 BMS 4.32 Realism Mod.rar Download !!INSTALL!!.md b/spaces/gotiQspiryo/whisper-ui/examples/Falcon 4 BMS 4.32 Realism Mod.rar Download !!INSTALL!!.md
deleted file mode 100644
index 167a653464ade34c29043fa59b0622f59cb612ea..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Falcon 4 BMS 4.32 Realism Mod.rar Download !!INSTALL!!.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-For all those that miss the unique experience that is the land based training mission in the original Back To The Future film, your prayers have been answered. This is truly the best looking BTTF land based training mission around. Download this and let the good times roll.
-Falcon 4 BMS 4.32 Realism Mod.rar Download
DOWNLOAD ✦✦✦ https://urlgoal.com/2uyNz4
-You still dont want to play the campaign on a slow PC, though, so your best bet is to download and install one of the many available mission pack add-ons which contains everything youll need to have the most authentic flight experience around. We recommend some of the top-rated mission packs available for the Falcon 4 BMS, such as the excellent 'Highway to Russia'.
-Another high quality mission pack for the Falcon 4 BMS, 'Battlefield: Takeoff' contains the most highly realistic military aircraft available anywhere, and no aircraft is better represented than the outstanding Supermarine Spitfire Mk XIV (sadly no Mk 15, but thats a minor gripe!).
-The Falcon 4 BMS is a great addition to the DCS: F/A-18C Hornet mods and should provide you with hours and hours of extremely realistic gameplay. However, you will need to have a compatible DCS: F/A-18C Hornet version installed. The Falcon 4 BMS itself is a free download from the DCS: BMS add-on store. An additional 17 aircraft will be available for purchase later this year.
-
-The Falcon 4 BMS is currently my number one P3D flight simulator. The Hornet is a superb fighter and despite the kits limitations, its an easy aircraft to fly, and after the 3rd hour of soloing, I was actually getting more comfortable in the aircraft. Its a very challenging aircraft to fly and the airspeed is very unforgiving. After 30 mins of flying I was quite relieved to see that the Hornet had ceased to drop away from my wings. However, once I got into the VATS position, things really started to get interesting. And by the time I had bagged my 30th kill, it was starting to get a bit embarrassing as I had never managed to bag a single one in practice! It was a wonderful learning experience and I honestly believe its one of the best fighter aircraft in the entire DCS series. However, be warned - its no cake walk. If you are thinking of taking the plunge, you have to be prepared to put your heart and soul into learning to fly the most challenging fighter around.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/gptbase/GPTBase/README.md b/spaces/gptbase/GPTBase/README.md
deleted file mode 100644
index 463d8c1643d490b9e16f19a73aee367c28bc896e..0000000000000000000000000000000000000000
--- a/spaces/gptbase/GPTBase/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GPTBase
-emoji: 🔥
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gradio/HuBERT/examples/megatron_11b/README.md b/spaces/gradio/HuBERT/examples/megatron_11b/README.md
deleted file mode 100644
index 945c96c91e2e2d93466abc28d90bc25a1e7dd471..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/megatron_11b/README.md
+++ /dev/null
@@ -1,161 +0,0 @@
-# Megatron-11b
-
-Megatron-11b is a unidirectional language model with `11B` parameters based on [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf). Following the original Megatron work, we trained the model using intra-layer model parallelism with each layer's parameters split across 8 GPUs.
-
-Megatron-11b is trained on the same data and uses the same byte-pair encoding (BPE) as [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf).
-
-## Pre-trained models
-
-Model | Description | # params | # filesize | Download
----|---|---|---|---
-`megatron_11b` | megatron_11b unidirectional language model | 11B | 19Gb | [megatron_11b.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz)
-
-#### Architecture:
-
-Param | Value
----|---
-embed_dim | 3072
-ffn_dim | 3072 * 6
-layers | 72
-attention heads | 32
-
-#### Training details:
-
-Param | value
----|---
-bsz | 512
-num_updates | 300,000
-peak_lr | 1.5e-04
-lr scheduler | inverse_sqrt
-clip norm | 0.0
-
-
-## Example training command (model parallel)
-
-Megatron-11b contains too many parameters to train on a single GPU. Following
-the original Megatron work, we adopt an intra-layer model parallel training
-approach in which each layer's parameters are split across multiple GPUs and
-activations and gradients are communicated during the forward/backward pass,
-respectively. We similarly split the loss computation using the
-`vocab_parallel_cross_entropy` criterion.
-
-The following training command illustrates how to do model parallel training in
-fairseq. We assume that each machine (node) has 8 GPUs among which to split the
-model parameters (`--model-parallel-size 8`). If you have access to multiple
-nodes, you may combine this with data parallel training by increasing
-`--distributed-world-size`.
-
-To train Megatron-11b on a single node:
-
-
-```bash
-fairseq-train \
- --distributed-world-size 8 \
- --memory-efficient-fp16 \
- --num-workers 2 \
- --model-parallel-size 8 \
- --criterion vocab_parallel_cross_entropy \
- --task language_modeling \
- --sample-break-mode none \
- --tokens-per-sample 1024 \
- --arch transformer_lm_megatron_11b \
- --share-decoder-input-output-embed \
- --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 --clip-norm 0.0 \
- --lr-scheduler inverse_sqrt --lr 0.00015 \
- --warmup-updates 3000 --weight-decay 0.01 \
- --dropout 0.1 --attention-dropout 0.1 \
- --batch-size 2 \
- --max-update 300000;
-```
-
-Note: the above was tested on a `DGX-1` box with `8xV100-32Gb` GPUs.
-
-## Results
-
-**[Wikitext103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)**
-
-Model | Valid perplexity | Test perplexity
----|---|---
-`megatron_11b` | 10.64 | 10.54
-
-
-## Evaluating `megatron_11b` on Wikitext-103
-
-#### 1. Downloading Megatron-11b
-```bash
-# WARNING: this file is 19GB
-wget https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz
-tar -xzvf megatron_11b.tar.gz
-```
-
-#### 2. Download Wikitext-103
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-#### 3. Detokenize test tokens
-Megatron-11b uses a byte-level BPE that expects raw (untokenized) input. Since
-the wikitext-103 dataset comes tokenized, we apply a simple detokenization
-process to restore the untokenized test set:
-
-```bash
-python -m examples.megatron_11b.detok wikitext-103-raw/wiki.test.raw > wikitext-103-raw/wiki.test.detok
-```
-
-#### 4. BPE encoding
-```bash
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-
-python -m examples.roberta.multiprocessing_bpe_encoder \
- --encoder-json encoder.json \
- --vocab-bpe vocab.bpe \
- --inputs "wikitext-103-raw/wiki.test.detok" \
- --outputs "wikitext-103-raw/wiki.test.bpe" \
- --workers 60;
-```
-
-#### 5. Fairseq binarize
-```bash
-fairseq-preprocess \
- --only-source \
- --testpref wikitext-103-raw/wiki.test.bpe \
- --srcdict megatron_11b/dict.txt \
- --destdir wikitext103-bin;
-```
-
-#### 6. Evaluating perplexity.
-We can now evaluate perplexity on the test set. Note that because we've modified
-the test set (via detokenization and BPE), the perplexity reported by
-`fairseq-eval-lm` needs to be renormalized.
-
-Compute unnormalized perplexity:
-
-```bash
-DATA_PATH=wikitext103-bin/
-fairseq-eval-lm \
- $DATA_PATH \
- --path megatron_11b/model.pt \
- --task language_modeling \
- --gen-subset test \
- --batch-size 8 \
- --criterion cross_entropy \
- --context-window 992 \
- --distributed-world-size 8 \
- --model-parallel-size 8;
-# Expected PPL (unnormalized_ppl): [8.46]
-# Note: the eval command needs to run on 8 GPUs for the released model
-```
-Renormalizing formula: `2 ^ ( log_2(unnormalized_PPL) * (270847 / 245566))`.
-PPL After normalization: `10.54`
-
-To renormalize the perplexity, we must account for the change in token count
-after detokenizing and applying BPE. The formula for this is:
-`2 ^ ( log_2(unnormalized_PPL) * (new_token_cnt / orig_token_cnt))`
-
-For the wikitext-103 test set, the original token count is `245566` and the
-token count after detokenization and applying BPE is `270847`.
-
-The perplexity after renormalization is:
-`2 ^ ( log_2(8.46) * (270847 / 245566)) = 10.54`
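-
-As a quick sanity check, this arithmetic can be reproduced with a few lines of Python (a minimal sketch using only the token counts quoted above):
-
-```python
-import math
-
-unnormalized_ppl = 8.46   # reported by fairseq-eval-lm on the detokenized + BPE test set
-orig_token_cnt = 245566   # original wikitext-103 test token count
-new_token_cnt = 270847    # token count after detokenization and BPE
-
-renormalized_ppl = 2 ** (math.log2(unnormalized_ppl) * (new_token_cnt / orig_token_cnt))
-print(round(renormalized_ppl, 2))  # -> 10.54
-```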
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/common.h b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/common.h
deleted file mode 100644
index 8df48ed73cd330c45250ee02a113e03357504055..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/common.h
+++ /dev/null
@@ -1,253 +0,0 @@
-// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#pragma once
-#include <cuda.h>
-#include <stdint.h>
-
-//------------------------------------------------------------------------
-// C++ helper function prototypes.
-
-dim3 getLaunchBlockSize(int maxWidth, int maxHeight, int width, int height);
-dim3 getLaunchGridSize(dim3 blockSize, int width, int height, int depth);
-
-//------------------------------------------------------------------------
-// The rest is CUDA device code specific stuff.
-
-#ifdef __CUDACC__
-
-//------------------------------------------------------------------------
-// Helpers for CUDA vector types.
-
-static __device__ __forceinline__ float2& operator*= (float2& a, const float2& b) { a.x *= b.x; a.y *= b.y; return a; }
-static __device__ __forceinline__ float2& operator+= (float2& a, const float2& b) { a.x += b.x; a.y += b.y; return a; }
-static __device__ __forceinline__ float2& operator-= (float2& a, const float2& b) { a.x -= b.x; a.y -= b.y; return a; }
-static __device__ __forceinline__ float2& operator*= (float2& a, float b) { a.x *= b; a.y *= b; return a; }
-static __device__ __forceinline__ float2& operator+= (float2& a, float b) { a.x += b; a.y += b; return a; }
-static __device__ __forceinline__ float2& operator-= (float2& a, float b) { a.x -= b; a.y -= b; return a; }
-static __device__ __forceinline__ float2 operator* (const float2& a, const float2& b) { return make_float2(a.x * b.x, a.y * b.y); }
-static __device__ __forceinline__ float2 operator+ (const float2& a, const float2& b) { return make_float2(a.x + b.x, a.y + b.y); }
-static __device__ __forceinline__ float2 operator- (const float2& a, const float2& b) { return make_float2(a.x - b.x, a.y - b.y); }
-static __device__ __forceinline__ float2 operator* (const float2& a, float b) { return make_float2(a.x * b, a.y * b); }
-static __device__ __forceinline__ float2 operator+ (const float2& a, float b) { return make_float2(a.x + b, a.y + b); }
-static __device__ __forceinline__ float2 operator- (const float2& a, float b) { return make_float2(a.x - b, a.y - b); }
-static __device__ __forceinline__ float2 operator* (float a, const float2& b) { return make_float2(a * b.x, a * b.y); }
-static __device__ __forceinline__ float2 operator+ (float a, const float2& b) { return make_float2(a + b.x, a + b.y); }
-static __device__ __forceinline__ float2 operator- (float a, const float2& b) { return make_float2(a - b.x, a - b.y); }
-static __device__ __forceinline__ float2 operator- (const float2& a) { return make_float2(-a.x, -a.y); }
-static __device__ __forceinline__ float3& operator*= (float3& a, const float3& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; return a; }
-static __device__ __forceinline__ float3& operator+= (float3& a, const float3& b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
-static __device__ __forceinline__ float3& operator-= (float3& a, const float3& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; return a; }
-static __device__ __forceinline__ float3& operator*= (float3& a, float b) { a.x *= b; a.y *= b; a.z *= b; return a; }
-static __device__ __forceinline__ float3& operator+= (float3& a, float b) { a.x += b; a.y += b; a.z += b; return a; }
-static __device__ __forceinline__ float3& operator-= (float3& a, float b) { a.x -= b; a.y -= b; a.z -= b; return a; }
-static __device__ __forceinline__ float3 operator* (const float3& a, const float3& b) { return make_float3(a.x * b.x, a.y * b.y, a.z * b.z); }
-static __device__ __forceinline__ float3 operator+ (const float3& a, const float3& b) { return make_float3(a.x + b.x, a.y + b.y, a.z + b.z); }
-static __device__ __forceinline__ float3 operator- (const float3& a, const float3& b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
-static __device__ __forceinline__ float3 operator* (const float3& a, float b) { return make_float3(a.x * b, a.y * b, a.z * b); }
-static __device__ __forceinline__ float3 operator+ (const float3& a, float b) { return make_float3(a.x + b, a.y + b, a.z + b); }
-static __device__ __forceinline__ float3 operator- (const float3& a, float b) { return make_float3(a.x - b, a.y - b, a.z - b); }
-static __device__ __forceinline__ float3 operator* (float a, const float3& b) { return make_float3(a * b.x, a * b.y, a * b.z); }
-static __device__ __forceinline__ float3 operator+ (float a, const float3& b) { return make_float3(a + b.x, a + b.y, a + b.z); }
-static __device__ __forceinline__ float3 operator- (float a, const float3& b) { return make_float3(a - b.x, a - b.y, a - b.z); }
-static __device__ __forceinline__ float3 operator- (const float3& a) { return make_float3(-a.x, -a.y, -a.z); }
-static __device__ __forceinline__ float4& operator*= (float4& a, const float4& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; a.w *= b.w; return a; }
-static __device__ __forceinline__ float4& operator+= (float4& a, const float4& b) { a.x += b.x; a.y += b.y; a.z += b.z; a.w += b.w; return a; }
-static __device__ __forceinline__ float4& operator-= (float4& a, const float4& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; a.w -= b.w; return a; }
-static __device__ __forceinline__ float4& operator*= (float4& a, float b) { a.x *= b; a.y *= b; a.z *= b; a.w *= b; return a; }
-static __device__ __forceinline__ float4& operator+= (float4& a, float b) { a.x += b; a.y += b; a.z += b; a.w += b; return a; }
-static __device__ __forceinline__ float4& operator-= (float4& a, float b) { a.x -= b; a.y -= b; a.z -= b; a.w -= b; return a; }
-static __device__ __forceinline__ float4 operator* (const float4& a, const float4& b) { return make_float4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w); }
-static __device__ __forceinline__ float4 operator+ (const float4& a, const float4& b) { return make_float4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
-static __device__ __forceinline__ float4 operator- (const float4& a, const float4& b) { return make_float4(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); }
-static __device__ __forceinline__ float4 operator* (const float4& a, float b) { return make_float4(a.x * b, a.y * b, a.z * b, a.w * b); }
-static __device__ __forceinline__ float4 operator+ (const float4& a, float b) { return make_float4(a.x + b, a.y + b, a.z + b, a.w + b); }
-static __device__ __forceinline__ float4 operator- (const float4& a, float b) { return make_float4(a.x - b, a.y - b, a.z - b, a.w - b); }
-static __device__ __forceinline__ float4 operator* (float a, const float4& b) { return make_float4(a * b.x, a * b.y, a * b.z, a * b.w); }
-static __device__ __forceinline__ float4 operator+ (float a, const float4& b) { return make_float4(a + b.x, a + b.y, a + b.z, a + b.w); }
-static __device__ __forceinline__ float4 operator- (float a, const float4& b) { return make_float4(a - b.x, a - b.y, a - b.z, a - b.w); }
-static __device__ __forceinline__ float4 operator- (const float4& a) { return make_float4(-a.x, -a.y, -a.z, -a.w); }
-static __device__ __forceinline__ int2& operator*= (int2& a, const int2& b) { a.x *= b.x; a.y *= b.y; return a; }
-static __device__ __forceinline__ int2& operator+= (int2& a, const int2& b) { a.x += b.x; a.y += b.y; return a; }
-static __device__ __forceinline__ int2& operator-= (int2& a, const int2& b) { a.x -= b.x; a.y -= b.y; return a; }
-static __device__ __forceinline__ int2& operator*= (int2& a, int b) { a.x *= b; a.y *= b; return a; }
-static __device__ __forceinline__ int2& operator+= (int2& a, int b) { a.x += b; a.y += b; return a; }
-static __device__ __forceinline__ int2& operator-= (int2& a, int b) { a.x -= b; a.y -= b; return a; }
-static __device__ __forceinline__ int2 operator* (const int2& a, const int2& b) { return make_int2(a.x * b.x, a.y * b.y); }
-static __device__ __forceinline__ int2 operator+ (const int2& a, const int2& b) { return make_int2(a.x + b.x, a.y + b.y); }
-static __device__ __forceinline__ int2 operator- (const int2& a, const int2& b) { return make_int2(a.x - b.x, a.y - b.y); }
-static __device__ __forceinline__ int2 operator* (const int2& a, int b) { return make_int2(a.x * b, a.y * b); }
-static __device__ __forceinline__ int2 operator+ (const int2& a, int b) { return make_int2(a.x + b, a.y + b); }
-static __device__ __forceinline__ int2 operator- (const int2& a, int b) { return make_int2(a.x - b, a.y - b); }
-static __device__ __forceinline__ int2 operator* (int a, const int2& b) { return make_int2(a * b.x, a * b.y); }
-static __device__ __forceinline__ int2 operator+ (int a, const int2& b) { return make_int2(a + b.x, a + b.y); }
-static __device__ __forceinline__ int2 operator- (int a, const int2& b) { return make_int2(a - b.x, a - b.y); }
-static __device__ __forceinline__ int2 operator- (const int2& a) { return make_int2(-a.x, -a.y); }
-static __device__ __forceinline__ int3& operator*= (int3& a, const int3& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; return a; }
-static __device__ __forceinline__ int3& operator+= (int3& a, const int3& b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
-static __device__ __forceinline__ int3& operator-= (int3& a, const int3& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; return a; }
-static __device__ __forceinline__ int3& operator*= (int3& a, int b) { a.x *= b; a.y *= b; a.z *= b; return a; }
-static __device__ __forceinline__ int3& operator+= (int3& a, int b) { a.x += b; a.y += b; a.z += b; return a; }
-static __device__ __forceinline__ int3& operator-= (int3& a, int b) { a.x -= b; a.y -= b; a.z -= b; return a; }
-static __device__ __forceinline__ int3 operator* (const int3& a, const int3& b) { return make_int3(a.x * b.x, a.y * b.y, a.z * b.z); }
-static __device__ __forceinline__ int3 operator+ (const int3& a, const int3& b) { return make_int3(a.x + b.x, a.y + b.y, a.z + b.z); }
-static __device__ __forceinline__ int3 operator- (const int3& a, const int3& b) { return make_int3(a.x - b.x, a.y - b.y, a.z - b.z); }
-static __device__ __forceinline__ int3 operator* (const int3& a, int b) { return make_int3(a.x * b, a.y * b, a.z * b); }
-static __device__ __forceinline__ int3 operator+ (const int3& a, int b) { return make_int3(a.x + b, a.y + b, a.z + b); }
-static __device__ __forceinline__ int3 operator- (const int3& a, int b) { return make_int3(a.x - b, a.y - b, a.z - b); }
-static __device__ __forceinline__ int3 operator* (int a, const int3& b) { return make_int3(a * b.x, a * b.y, a * b.z); }
-static __device__ __forceinline__ int3 operator+ (int a, const int3& b) { return make_int3(a + b.x, a + b.y, a + b.z); }
-static __device__ __forceinline__ int3 operator- (int a, const int3& b) { return make_int3(a - b.x, a - b.y, a - b.z); }
-static __device__ __forceinline__ int3 operator- (const int3& a) { return make_int3(-a.x, -a.y, -a.z); }
-static __device__ __forceinline__ int4& operator*= (int4& a, const int4& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; a.w *= b.w; return a; }
-static __device__ __forceinline__ int4& operator+= (int4& a, const int4& b) { a.x += b.x; a.y += b.y; a.z += b.z; a.w += b.w; return a; }
-static __device__ __forceinline__ int4& operator-= (int4& a, const int4& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; a.w -= b.w; return a; }
-static __device__ __forceinline__ int4& operator*= (int4& a, int b) { a.x *= b; a.y *= b; a.z *= b; a.w *= b; return a; }
-static __device__ __forceinline__ int4& operator+= (int4& a, int b) { a.x += b; a.y += b; a.z += b; a.w += b; return a; }
-static __device__ __forceinline__ int4& operator-= (int4& a, int b) { a.x -= b; a.y -= b; a.z -= b; a.w -= b; return a; }
-static __device__ __forceinline__ int4 operator* (const int4& a, const int4& b) { return make_int4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w); }
-static __device__ __forceinline__ int4 operator+ (const int4& a, const int4& b) { return make_int4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
-static __device__ __forceinline__ int4 operator- (const int4& a, const int4& b) { return make_int4(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); }
-static __device__ __forceinline__ int4 operator* (const int4& a, int b) { return make_int4(a.x * b, a.y * b, a.z * b, a.w * b); }
-static __device__ __forceinline__ int4 operator+ (const int4& a, int b) { return make_int4(a.x + b, a.y + b, a.z + b, a.w + b); }
-static __device__ __forceinline__ int4 operator- (const int4& a, int b) { return make_int4(a.x - b, a.y - b, a.z - b, a.w - b); }
-static __device__ __forceinline__ int4 operator* (int a, const int4& b) { return make_int4(a * b.x, a * b.y, a * b.z, a * b.w); }
-static __device__ __forceinline__ int4 operator+ (int a, const int4& b) { return make_int4(a + b.x, a + b.y, a + b.z, a + b.w); }
-static __device__ __forceinline__ int4 operator- (int a, const int4& b) { return make_int4(a - b.x, a - b.y, a - b.z, a - b.w); }
-static __device__ __forceinline__ int4 operator- (const int4& a) { return make_int4(-a.x, -a.y, -a.z, -a.w); }
-static __device__ __forceinline__ uint2& operator*= (uint2& a, const uint2& b) { a.x *= b.x; a.y *= b.y; return a; }
-static __device__ __forceinline__ uint2& operator+= (uint2& a, const uint2& b) { a.x += b.x; a.y += b.y; return a; }
-static __device__ __forceinline__ uint2& operator-= (uint2& a, const uint2& b) { a.x -= b.x; a.y -= b.y; return a; }
-static __device__ __forceinline__ uint2& operator*= (uint2& a, unsigned int b) { a.x *= b; a.y *= b; return a; }
-static __device__ __forceinline__ uint2& operator+= (uint2& a, unsigned int b) { a.x += b; a.y += b; return a; }
-static __device__ __forceinline__ uint2& operator-= (uint2& a, unsigned int b) { a.x -= b; a.y -= b; return a; }
-static __device__ __forceinline__ uint2 operator* (const uint2& a, const uint2& b) { return make_uint2(a.x * b.x, a.y * b.y); }
-static __device__ __forceinline__ uint2 operator+ (const uint2& a, const uint2& b) { return make_uint2(a.x + b.x, a.y + b.y); }
-static __device__ __forceinline__ uint2 operator- (const uint2& a, const uint2& b) { return make_uint2(a.x - b.x, a.y - b.y); }
-static __device__ __forceinline__ uint2 operator* (const uint2& a, unsigned int b) { return make_uint2(a.x * b, a.y * b); }
-static __device__ __forceinline__ uint2 operator+ (const uint2& a, unsigned int b) { return make_uint2(a.x + b, a.y + b); }
-static __device__ __forceinline__ uint2 operator- (const uint2& a, unsigned int b) { return make_uint2(a.x - b, a.y - b); }
-static __device__ __forceinline__ uint2 operator* (unsigned int a, const uint2& b) { return make_uint2(a * b.x, a * b.y); }
-static __device__ __forceinline__ uint2 operator+ (unsigned int a, const uint2& b) { return make_uint2(a + b.x, a + b.y); }
-static __device__ __forceinline__ uint2 operator- (unsigned int a, const uint2& b) { return make_uint2(a - b.x, a - b.y); }
-static __device__ __forceinline__ uint3& operator*= (uint3& a, const uint3& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; return a; }
-static __device__ __forceinline__ uint3& operator+= (uint3& a, const uint3& b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
-static __device__ __forceinline__ uint3& operator-= (uint3& a, const uint3& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; return a; }
-static __device__ __forceinline__ uint3& operator*= (uint3& a, unsigned int b) { a.x *= b; a.y *= b; a.z *= b; return a; }
-static __device__ __forceinline__ uint3& operator+= (uint3& a, unsigned int b) { a.x += b; a.y += b; a.z += b; return a; }
-static __device__ __forceinline__ uint3& operator-= (uint3& a, unsigned int b) { a.x -= b; a.y -= b; a.z -= b; return a; }
-static __device__ __forceinline__ uint3 operator* (const uint3& a, const uint3& b) { return make_uint3(a.x * b.x, a.y * b.y, a.z * b.z); }
-static __device__ __forceinline__ uint3 operator+ (const uint3& a, const uint3& b) { return make_uint3(a.x + b.x, a.y + b.y, a.z + b.z); }
-static __device__ __forceinline__ uint3 operator- (const uint3& a, const uint3& b) { return make_uint3(a.x - b.x, a.y - b.y, a.z - b.z); }
-static __device__ __forceinline__ uint3 operator* (const uint3& a, unsigned int b) { return make_uint3(a.x * b, a.y * b, a.z * b); }
-static __device__ __forceinline__ uint3 operator+ (const uint3& a, unsigned int b) { return make_uint3(a.x + b, a.y + b, a.z + b); }
-static __device__ __forceinline__ uint3 operator- (const uint3& a, unsigned int b) { return make_uint3(a.x - b, a.y - b, a.z - b); }
-static __device__ __forceinline__ uint3 operator* (unsigned int a, const uint3& b) { return make_uint3(a * b.x, a * b.y, a * b.z); }
-static __device__ __forceinline__ uint3 operator+ (unsigned int a, const uint3& b) { return make_uint3(a + b.x, a + b.y, a + b.z); }
-static __device__ __forceinline__ uint3 operator- (unsigned int a, const uint3& b) { return make_uint3(a - b.x, a - b.y, a - b.z); }
-static __device__ __forceinline__ uint4& operator*= (uint4& a, const uint4& b) { a.x *= b.x; a.y *= b.y; a.z *= b.z; a.w *= b.w; return a; }
-static __device__ __forceinline__ uint4& operator+= (uint4& a, const uint4& b) { a.x += b.x; a.y += b.y; a.z += b.z; a.w += b.w; return a; }
-static __device__ __forceinline__ uint4& operator-= (uint4& a, const uint4& b) { a.x -= b.x; a.y -= b.y; a.z -= b.z; a.w -= b.w; return a; }
-static __device__ __forceinline__ uint4& operator*= (uint4& a, unsigned int b) { a.x *= b; a.y *= b; a.z *= b; a.w *= b; return a; }
-static __device__ __forceinline__ uint4& operator+= (uint4& a, unsigned int b) { a.x += b; a.y += b; a.z += b; a.w += b; return a; }
-static __device__ __forceinline__ uint4& operator-= (uint4& a, unsigned int b) { a.x -= b; a.y -= b; a.z -= b; a.w -= b; return a; }
-static __device__ __forceinline__ uint4 operator* (const uint4& a, const uint4& b) { return make_uint4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w); }
-static __device__ __forceinline__ uint4 operator+ (const uint4& a, const uint4& b) { return make_uint4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
-static __device__ __forceinline__ uint4 operator- (const uint4& a, const uint4& b) { return make_uint4(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); }
-static __device__ __forceinline__ uint4 operator* (const uint4& a, unsigned int b) { return make_uint4(a.x * b, a.y * b, a.z * b, a.w * b); }
-static __device__ __forceinline__ uint4 operator+ (const uint4& a, unsigned int b) { return make_uint4(a.x + b, a.y + b, a.z + b, a.w + b); }
-static __device__ __forceinline__ uint4 operator- (const uint4& a, unsigned int b) { return make_uint4(a.x - b, a.y - b, a.z - b, a.w - b); }
-static __device__ __forceinline__ uint4 operator* (unsigned int a, const uint4& b) { return make_uint4(a * b.x, a * b.y, a * b.z, a * b.w); }
-static __device__ __forceinline__ uint4 operator+ (unsigned int a, const uint4& b) { return make_uint4(a + b.x, a + b.y, a + b.z, a + b.w); }
-static __device__ __forceinline__ uint4 operator- (unsigned int a, const uint4& b) { return make_uint4(a - b.x, a - b.y, a - b.z, a - b.w); }
-
-template<class T> static __device__ __forceinline__ T zero_value(void);
-template<> __device__ __forceinline__ float zero_value (void) { return 0.f; }
-template<> __device__ __forceinline__ float2 zero_value(void) { return make_float2(0.f, 0.f); }
-template<> __device__ __forceinline__ float4 zero_value(void) { return make_float4(0.f, 0.f, 0.f, 0.f); }
-static __device__ __forceinline__ float3 make_float3(const float2& a, float b) { return make_float3(a.x, a.y, b); }
-static __device__ __forceinline__ float4 make_float4(const float3& a, float b) { return make_float4(a.x, a.y, a.z, b); }
-static __device__ __forceinline__ float4 make_float4(const float2& a, const float2& b) { return make_float4(a.x, a.y, b.x, b.y); }
-static __device__ __forceinline__ int3 make_int3(const int2& a, int b) { return make_int3(a.x, a.y, b); }
-static __device__ __forceinline__ int4 make_int4(const int3& a, int b) { return make_int4(a.x, a.y, a.z, b); }
-static __device__ __forceinline__ int4 make_int4(const int2& a, const int2& b) { return make_int4(a.x, a.y, b.x, b.y); }
-static __device__ __forceinline__ uint3 make_uint3(const uint2& a, unsigned int b) { return make_uint3(a.x, a.y, b); }
-static __device__ __forceinline__ uint4 make_uint4(const uint3& a, unsigned int b) { return make_uint4(a.x, a.y, a.z, b); }
-static __device__ __forceinline__ uint4 make_uint4(const uint2& a, const uint2& b) { return make_uint4(a.x, a.y, b.x, b.y); }
-
-template<class T> static __device__ __forceinline__ void swap(T& a, T& b) { T temp = a; a = b; b = temp; }
-
-//------------------------------------------------------------------------
-// Coalesced atomics. These are all done via macros.
-
-#if __CUDA_ARCH__ >= 700 // Warp match instruction __match_any_sync() is only available on compute capability 7.x and higher
-
-#define CA_TEMP _ca_temp
-#define CA_TEMP_PARAM float* CA_TEMP
-#define CA_DECLARE_TEMP(threads_per_block) \
- __shared__ float CA_TEMP[(threads_per_block)]
-
-#define CA_SET_GROUP_MASK(group, thread_mask) \
- bool _ca_leader; \
- float* _ca_ptr; \
- do { \
- int tidx = threadIdx.x + blockDim.x * threadIdx.y; \
- int lane = tidx & 31; \
- int warp = tidx >> 5; \
- int tmask = __match_any_sync((thread_mask), (group)); \
- int leader = __ffs(tmask) - 1; \
- _ca_leader = (leader == lane); \
- _ca_ptr = &_ca_temp[((warp << 5) + leader)]; \
- } while(0)
-
-#define CA_SET_GROUP(group) \
- CA_SET_GROUP_MASK((group), 0xffffffffu)
-
-#define caAtomicAdd(ptr, value) \
- do { \
- if (_ca_leader) \
- *_ca_ptr = 0.f; \
- atomicAdd(_ca_ptr, (value)); \
- if (_ca_leader) \
- atomicAdd((ptr), *_ca_ptr); \
- } while(0)
-
-#define caAtomicAdd3_xyw(ptr, x, y, w) \
- do { \
- caAtomicAdd((ptr), (x)); \
- caAtomicAdd((ptr)+1, (y)); \
- caAtomicAdd((ptr)+3, (w)); \
- } while(0)
-
-#define caAtomicAddTexture(ptr, level, idx, value) \
- do { \
- CA_SET_GROUP((idx) ^ ((level) << 27)); \
- caAtomicAdd((ptr)+(idx), (value)); \
- } while(0)
-
-//------------------------------------------------------------------------
-// Disable atomic coalescing for compute capability lower than 7.x
-
-#else // __CUDA_ARCH__ >= 700
-#define CA_TEMP _ca_temp
-#define CA_TEMP_PARAM float CA_TEMP
-#define CA_DECLARE_TEMP(threads_per_block) CA_TEMP_PARAM
-#define CA_SET_GROUP_MASK(group, thread_mask)
-#define CA_SET_GROUP(group)
-#define caAtomicAdd(ptr, value) atomicAdd((ptr), (value))
-#define caAtomicAdd3_xyw(ptr, x, y, w) \
- do { \
- atomicAdd((ptr), (x)); \
- atomicAdd((ptr)+1, (y)); \
- atomicAdd((ptr)+3, (w)); \
- } while(0)
-#define caAtomicAddTexture(ptr, level, idx, value) atomicAdd((ptr)+(idx), (value))
-#endif // __CUDA_ARCH__ >= 700
-
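-//------------------------------------------------------------------------
-// Illustrative usage sketch (added for documentation; not part of the original
-// header). A kernel first declares the per-block staging buffer with
-// CA_DECLARE_TEMP(threads_per_block) and passes CA_TEMP to device helpers such
-// as the one below; CA_SET_GROUP then groups warp lanes that target the same
-// output slot so only the group leader issues a global atomicAdd. Assumes all
-// 32 lanes of the warp reach the call (no divergent early exit).
-
-static __device__ __forceinline__ void caExampleScatterAdd(float* dst, int slot, float value, CA_TEMP_PARAM)
-{
-    CA_SET_GROUP(slot);              // group lanes writing to the same destination slot
-    caAtomicAdd(dst + slot, value);  // one global atomic per group instead of per lane
-}
-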
-//------------------------------------------------------------------------
-#endif // __CUDACC__
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py
deleted file mode 100644
index 12553b8c4450dc8bb605b0eab0f55d90ba2d051f..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Project given image to the latent space of pretrained network pickle."""
-
-import copy
-import wandb
-import numpy as np
-import torch
-import torch.nn.functional as F
-from tqdm import tqdm
-from pti.pti_configs import global_config, hyperparameters
-from utils import log_utils
-import dnnlib
-
-
-def project(
- G,
- # [C,H,W] and dynamic range [0,255], W & H must match G output resolution
- target: torch.Tensor,
- *,
- num_steps=1000,
- w_avg_samples=10000,
- initial_learning_rate=0.01,
- initial_noise_factor=0.05,
- lr_rampdown_length=0.25,
- lr_rampup_length=0.05,
- noise_ramp_length=0.75,
- regularize_noise_weight=1e5,
- verbose=False,
- device: torch.device,
- use_wandb=False,
- initial_w=None,
- image_log_step=global_config.image_rec_result_log_snapshot,
- w_name: str
-):
- print(target.shape, G.img_channels, G.img_resolution, G.img_resolution//2)
- assert target.shape == (
- G.img_channels, G.img_resolution, G.img_resolution // 2)
-
- def logprint(*args):
- if verbose:
- print(*args)
-
- G = copy.deepcopy(G).eval().requires_grad_(
- False).to(device).float() # type: ignore
-
- # Compute w stats.
- logprint(
- f'Computing W midpoint and stddev using {w_avg_samples} samples...')
- z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim)
- w_samples = G.mapping(torch.from_numpy(
- z_samples).to(device), None) # [N, L, C]
- w_samples = w_samples[:, :1, :].cpu(
- ).numpy().astype(np.float32) # [N, 1, C]
- w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C]
- w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device)
- w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5
-
- start_w = initial_w if initial_w is not None else w_avg
-
- # Setup noise inputs.
- noise_bufs = {name: buf for (
- name, buf) in G.synthesis.named_buffers() if 'noise_const' in name}
-
- # Load VGG16 feature detector.
- url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt'
- with dnnlib.util.open_url(url) as f:
- vgg16 = torch.jit.load(f).eval().to(device)
-
- # Features for target image.
- target_images = target.unsqueeze(0).to(device).to(torch.float32)
- if target_images.shape[2] > 256:
- target_images = F.interpolate(
- target_images, size=(256, 256), mode='area')
- target_features = vgg16(
- target_images, resize_images=False, return_lpips=True)
-
- w_opt = torch.tensor(start_w, dtype=torch.float32, device=device,
- requires_grad=True) # pylint: disable=not-callable
- optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999),
- lr=hyperparameters.first_inv_lr)
-
- # Init noise.
- for buf in noise_bufs.values():
- buf[:] = torch.randn_like(buf)
- buf.requires_grad = True
-
- for step in range(num_steps):
-
- # Learning rate schedule.
- t = step / num_steps
- w_noise_scale = w_std * initial_noise_factor * \
- max(0.0, 1.0 - t / noise_ramp_length) ** 2
- lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length)
- lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi)
- lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length)
- lr = initial_learning_rate * lr_ramp
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
- # Synth images from opt_w.
- w_noise = torch.randn_like(w_opt) * w_noise_scale
- ws = (w_opt + w_noise).repeat([1, G.mapping.num_ws, 1])
- synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True)
-
- # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images.
- synth_images = (synth_images + 1) * (255 / 2)
- if synth_images.shape[2] > 256:
- synth_images = F.interpolate(
- synth_images, size=(256, 256), mode='area')
-
- # Features for synth images.
- synth_features = vgg16(
- synth_images, resize_images=False, return_lpips=True)
- dist = (target_features - synth_features).square().sum()
-
- # Noise regularization.
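-        # (This penalizes spatial autocorrelation in the per-layer noise maps at
-        # every pooled scale, pushing them to stay close to white noise.)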
- reg_loss = 0.0
- for v in noise_bufs.values():
- noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d()
- while True:
- reg_loss += (noise * torch.roll(noise,
- shifts=1, dims=3)).mean() ** 2
- reg_loss += (noise * torch.roll(noise,
- shifts=1, dims=2)).mean() ** 2
- if noise.shape[2] <= 8:
- break
- noise = F.avg_pool2d(noise, kernel_size=2)
- loss = dist + reg_loss * regularize_noise_weight
- if step % 10 == 0:
- print("project loss", step, loss.data)
- if step % image_log_step == 0:
- with torch.no_grad():
- if use_wandb:
- global_config.training_step += 1
- wandb.log({f'first projection _{w_name}': loss.detach(
- ).cpu()}, step=global_config.training_step)
- log_utils.log_image_from_w(w_opt.repeat(
- [1, G.mapping.num_ws, 1]), G, w_name)
-
- # Step
- optimizer.zero_grad(set_to_none=True)
- loss.backward()
- optimizer.step()
- logprint(
- f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}')
-
- # Normalize noise.
- with torch.no_grad():
- for buf in noise_bufs.values():
- buf -= buf.mean()
- buf *= buf.square().mean().rsqrt()
-
- del G
- return w_opt.repeat([1, 18, 1])
diff --git a/spaces/h2oai/wave-tour/examples/plot_interaction_zoom.py b/spaces/h2oai/wave-tour/examples/plot_interaction_zoom.py
deleted file mode 100644
index ea38c863c4b2f0757e5962152ad7d1d4d0fdebb2..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/plot_interaction_zoom.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Plot / Interaction / Zoom
-# Make a scatterplot with zoom enabled. #plot
-# ---
-
-from h2o_wave import main, app, Q, ui, data
-
-
-@app('/demo')
-async def serve(q: Q):
- q.page['example'] = ui.plot_card(
- box='1 1 4 5',
- title='Point plot zoom',
- data=data('height weight', 10, rows=[
- (170, 59),
- (159.1, 47.6),
- (166, 69.8),
- (176.2, 66.8),
- (160.2, 75.2),
- (180.3, 76.4),
- (164.5, 63.2),
- (173, 60.9),
- (183.5, 74.8),
- (175.5, 70),
- ]),
- interactions=['scale_zoom'],
- plot=ui.plot([ui.mark(type='point', x='=weight', y='=height')])
- )
- await q.page.save()
diff --git a/spaces/h2oai/wave-tour/examples/synth.py b/spaces/h2oai/wave-tour/examples/synth.py
deleted file mode 100644
index dad29b6e289ecfeb33033a0860c501b90172a85c..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/synth.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import datetime
-import random
-
-
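-# Bounded random-walk generators used to fake live data for the examples.
-# FakeSeries.next() returns the new value and its percentage change; the
-# wrapper classes add timestamps, category labels, or multiple groups.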
-class FakeSeries:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None):
- self.min = min
- self.max = max
- self.variation = variation
-        self.start = random.uniform(min, max) if start is None else start
- self.x = self.start
-
- def next(self):
- x0 = self.x
- x = x0 + (random.random() - 0.5) * self.variation
- if not self.min <= x <= self.max:
- x = self.start
- self.x = x
- dx = 0 if x0 == 0 else 100.0 * (x - x0) / x0
- return x, dx
-
-
-class FakeTimeSeries:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None, delta_days=1):
- self.series = FakeSeries(min, max, variation, start)
- self.delta_days = delta_days
- self.date = datetime.datetime.utcnow() - datetime.timedelta(days=10 * 365)
-
- def next(self):
- x, dx = self.series.next()
- self.date = self.date + datetime.timedelta(days=self.delta_days)
- return self.date.isoformat() + 'Z', x, dx
-
-
-class FakeMultiTimeSeries:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None, delta_days=1, groups=5):
- self.series = [(f'G{c + 1}', FakeTimeSeries(min, max, variation, start, delta_days)) for c in range(groups)]
-
- def next(self):
- data = []
- for g, series in self.series:
- t, x, dx = series.next()
- data.append((g, t, x, dx))
- return data
-
-
-class FakeCategoricalSeries:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None):
- self.series = FakeSeries(min, max, variation, start)
- self.i = 0
-
- def next(self):
- x, dx = self.series.next()
- self.i += 1
- return f'C{self.i}', x, dx
-
-
-class FakeMultiCategoricalSeries:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None, groups=5):
- self.series = [(f'G{c + 1}', FakeCategoricalSeries(min, max, variation, start)) for c in range(groups)]
-
- def next(self):
- data = []
- for g, series in self.series:
- c, x, dx = series.next()
- data.append((g, c, x, dx))
- return data
-
-
-class FakeScatter:
- def __init__(self, min=0.0, max=100.0, variation=10.0, start: int = None):
- self.x = FakeSeries(min, max, variation, start)
- self.y = FakeSeries(min, max, variation, start)
-
- def next(self):
- x, dx = self.x.next()
- y, dy = self.y.next()
- return x, y
-
-
-class FakePercent:
- def __init__(self, min=5.0, max=35.0, variation=4.0):
- self.min = min
- self.max = max
- self.variation = variation
-        self.x = random.uniform(min, max)
-
- def next(self):
- self.x += random.random() * self.variation
- if self.x >= self.max:
- self.x = self.min
- return self.x, (self.x - self.min) / (self.max - self.min)
diff --git a/spaces/hamelcubsfan/AutoGPT/tests/test_image_gen.py b/spaces/hamelcubsfan/AutoGPT/tests/test_image_gen.py
deleted file mode 100644
index 19c57e427d5c1b84aa7f72925733d0056ddf5268..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/tests/test_image_gen.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import hashlib
-import os
-import unittest
-
-from PIL import Image
-
-from autogpt.commands.image_gen import generate_image, generate_image_with_sd_webui
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-
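-# Helper: pull the value after the first ':' out of a result string
-# (e.g. a "Saved to disk:<path>" message returned by generate_image).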
-def lst(txt):
- return txt.split(":")[1].strip()
-
-
-@unittest.skipIf(os.getenv("CI"), "Skipping image generation tests")
-class TestImageGen(unittest.TestCase):
- def setUp(self):
- self.config = Config()
-
- def test_dalle(self):
- self.config.image_provider = "dalle"
-
- # Test using size 256
- result = lst(generate_image("astronaut riding a horse", 256))
- image_path = path_in_workspace(result)
- self.assertTrue(image_path.exists())
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (256, 256))
- image_path.unlink()
-
- # Test using size 512
- result = lst(generate_image("astronaut riding a horse", 512))
- image_path = path_in_workspace(result)
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (512, 512))
- image_path.unlink()
-
- def test_huggingface(self):
- self.config.image_provider = "huggingface"
-
-        # Test using SD 1.4 model and size 512
- self.config.huggingface_image_model = "CompVis/stable-diffusion-v1-4"
- result = lst(generate_image("astronaut riding a horse", 512))
- image_path = path_in_workspace(result)
- self.assertTrue(image_path.exists())
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (512, 512))
- image_path.unlink()
-
- # Test using SD 2.1 768 model and size 768
- self.config.huggingface_image_model = "stabilityai/stable-diffusion-2-1"
- result = lst(generate_image("astronaut riding a horse", 768))
- image_path = path_in_workspace(result)
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (768, 768))
- image_path.unlink()
-
- def test_sd_webui(self):
- self.config.image_provider = "sd_webui"
- return
-
- # Test using size 128
- result = lst(generate_image_with_sd_webui("astronaut riding a horse", 128))
- image_path = path_in_workspace(result)
- self.assertTrue(image_path.exists())
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (128, 128))
- image_path.unlink()
-
- # Test using size 64 and negative prompt
- result = lst(
- generate_image_with_sd_webui(
- "astronaut riding a horse",
- negative_prompt="horse",
- size=64,
- extra={"seed": 123},
- )
- )
- image_path = path_in_workspace(result)
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (64, 64))
- neg_image_hash = hashlib.md5(img.tobytes()).hexdigest()
- image_path.unlink()
-
- # Same test as above but without the negative prompt
- result = lst(
- generate_image_with_sd_webui(
-                "astronaut riding a horse", size=64, extra={"seed": 123}
- )
- )
- image_path = path_in_workspace(result)
- with Image.open(image_path) as img:
- self.assertEqual(img.size, (64, 64))
- image_hash = hashlib.md5(img.tobytes()).hexdigest()
- image_path.unlink()
-
- self.assertNotEqual(image_hash, neg_image_hash)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/harveysamson/wav2vec2-speech-emotion-recognition/src/__init__.py b/spaces/harveysamson/wav2vec2-speech-emotion-recognition/src/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/packaging/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/packaging/README.md
deleted file mode 100644
index 095684fcc1c5593805158c81aa0168263eb57ced..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/dev/packaging/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-## To build a cu101 wheel for release:
-
-```
-$ nvidia-docker run -it --storage-opt "size=20GB" --name pt pytorch/manylinux-cuda101
-# inside the container:
-# git clone https://github.com/facebookresearch/detectron2/
-# cd detectron2
-# export CU_VERSION=cu101 D2_VERSION_SUFFIX= PYTHON_VERSION=3.7 PYTORCH_VERSION=1.4
-# ./dev/packaging/build_wheel.sh
-```
-
-## To build all wheels for `CUDA {9.2,10.0,10.1}` x `Python {3.6,3.7,3.8}`:
-```
-./dev/packaging/build_all_wheels.sh
-./dev/packaging/gen_wheel_index.sh /path/to/wheels
-```
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py
deleted file mode 100644
index 65deb90977c525f9e42ea9b2581944832a9af47e..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from detectron2.layers import batched_nms
-from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads
-from detectron2.modeling.roi_heads.roi_heads import Res5ROIHeads
-from detectron2.structures import Instances
-
-
-def merge_branch_instances(instances, num_branch, nms_thresh, topk_per_image):
- """
- Merge detection results from different branches of TridentNet.
- Return detection results by applying non-maximum suppression (NMS) on bounding boxes
- and keep the unsuppressed boxes and other instances (e.g mask) if any.
-
- Args:
- instances (list[Instances]): A list of N * num_branch instances that store detection
- results. Contain N images and each image has num_branch instances.
- num_branch (int): Number of branches used for merging detection results for each image.
- nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1].
- topk_per_image (int): The number of top scoring detections to return. Set < 0 to return
- all detections.
-
- Returns:
- results: (list[Instances]): A list of N instances, one for each image in the batch,
- that stores the topk most confidence detections after merging results from multiple
- branches.
- """
- if num_branch == 1:
- return instances
-
- batch_size = len(instances) // num_branch
- results = []
- for i in range(batch_size):
- instance = Instances.cat([instances[i + batch_size * j] for j in range(num_branch)])
-
- # Apply per-class NMS
- keep = batched_nms(
- instance.pred_boxes.tensor, instance.scores, instance.pred_classes, nms_thresh
- )
- keep = keep[:topk_per_image]
- result = instance[keep]
-
- results.append(result)
-
- return results
-
-
-@ROI_HEADS_REGISTRY.register()
-class TridentRes5ROIHeads(Res5ROIHeads):
- """
- The TridentNet ROIHeads in a typical "C4" R-CNN model.
- See :class:`Res5ROIHeads`.
- """
-
- def __init__(self, cfg, input_shape):
- super().__init__(cfg, input_shape)
-
- self.num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH
- self.trident_fast = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX != -1
-
- def forward(self, images, features, proposals, targets=None):
- """
- See :class:`Res5ROIHeads.forward`.
- """
- num_branch = self.num_branch if self.training or not self.trident_fast else 1
- all_targets = targets * num_branch if targets is not None else None
- pred_instances, losses = super().forward(images, features, proposals, all_targets)
- del images, all_targets, targets
-
- if self.training:
- return pred_instances, losses
- else:
- pred_instances = merge_branch_instances(
- pred_instances,
- num_branch,
- self.box_predictor.test_nms_thresh,
- self.box_predictor.test_topk_per_image,
- )
-
- return pred_instances, {}
-
-
-@ROI_HEADS_REGISTRY.register()
-class TridentStandardROIHeads(StandardROIHeads):
- """
- The `StandardROIHeads` for TridentNet.
- See :class:`StandardROIHeads`.
- """
-
- def __init__(self, cfg, input_shape):
- super(TridentStandardROIHeads, self).__init__(cfg, input_shape)
-
- self.num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH
- self.trident_fast = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX != -1
-
- def forward(self, images, features, proposals, targets=None):
- """
- See :class:`Res5ROIHeads.forward`.
- """
- # Use 1 branch if using trident_fast during inference.
- num_branch = self.num_branch if self.training or not self.trident_fast else 1
- # Duplicate targets for all branches in TridentNet.
- all_targets = targets * num_branch if targets is not None else None
- pred_instances, losses = super().forward(images, features, proposals, all_targets)
- del images, all_targets, targets
-
- if self.training:
- return pred_instances, losses
- else:
- pred_instances = merge_branch_instances(
- pred_instances,
- num_branch,
- self.box_predictor.test_nms_thresh,
- self.box_predictor.test_topk_per_image,
- )
-
- return pred_instances, {}
diff --git a/spaces/hekbobo/bingo/src/pages/api/blob.ts b/spaces/hekbobo/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/hekbobo/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
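-// Proxies GET /api/blob?bcid=<id> to Bing's image blob endpoint and streams the
-// upstream response back to the client with its content type and length preserved.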
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/hkayabilisim/hdmr/app.py b/spaces/hkayabilisim/hdmr/app.py
deleted file mode 100644
index 9ebf9262b058e7c9f3078024adde4225dff2bd87..0000000000000000000000000000000000000000
--- a/spaces/hkayabilisim/hdmr/app.py
+++ /dev/null
@@ -1,206 +0,0 @@
-import numpy as np
-import gradio as gd
-import pandas as pd
-from sklearn.metrics import r2_score
-
-global df_trn, df_hat_trn, df_tst, df_hat_tst
-# Standard Legendre polynomials via the Bonnet recurrence (orthogonal on [-1, 1])
-def Pn(m, x):
- if m == 0:
- return np.ones_like(x)
- elif m == 1:
- return x
- else:
- return (2*m-1)*x*Pn(m-1, x)/m - (m-1)*Pn(m-2, x)/m
-
-# Orthonormal Legendre basis rescaled to [a, b]
-def L(a,b,m,x):
- return np.sqrt((2*m+1)/(b-a))*Pn(m, 2*(x-b)/(b-a)+1)
-
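-# First-order Sobol indices via a truncated HDMR expansion: project (y - f0)
-# onto the first m orthonormal Legendre bases per input to estimate the
-# coefficients alpha[r, i]; each input's share of the total variance is its index.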
-def sobol(x, y, m, range_min, range_max):
- print(x.shape, y.shape)
- N, n = x.shape
- f0 = np.mean(y)
- print(f'f0:{f0}')
- alpha = np.zeros((m, n))
-
- for r in range(m):
- for i in range(n):
- alpha[r, i] = (range_max-range_min) * np.mean((y-f0) * L(range_min, range_max, r+1, np.array(x[:, [i]])))
-
- global_D = (range_max-range_min)*np.mean(y ** 2) - ((range_max-range_min)*np.mean(y)) ** 2
- D_first_order = np.zeros((n))
- S_first_order = np.zeros((n))
- for k in range(n):
- D_first_order[k] = sum(alpha[r,k] ** 2 for r in range(m))
- S_first_order[k] = D_first_order[k]/global_D
-
- return S_first_order, f0, alpha
-
-def evalute_hdmr(x, f0, alpha, range_min, range_max):
- N, n = x.shape
- m, _ = alpha.shape
- y = f0 * np.ones((N,1))
- for r in range(m):
- for i in range(n):
- y = y + alpha[r, i] * L(range_min, range_max, r+1, np.array(x[:, [i]]))
- return y
-
-def f(x):
- return np.sum((x-0.5)**2,axis=1,keepdims=True)
-
-def optimize(N,n,m,func_code,trn_ratio,range_min,range_max):
- global df_trn, df_hat_trn, df_tst, df_hat_tst
- print(func_code)
- print(N)
- N_trn = round(N*trn_ratio)
- N_tst = N - N_trn
- print(f'N:{N} N_trn:{N_trn} N_tst:{N_tst}')
- x_trn = np.random.uniform(range_min,range_max,size=(N_trn,n))
- y_trn = eval(func_code, {'x': x_trn,'np':np})
- x_tst = np.random.uniform(range_min,range_max,size=(N_tst,n))
- y_tst = eval(func_code, {'x': x_tst,'np':np})
-
- si, f0, alpha = sobol(x_trn, y_trn, m, range_min, range_max)
- yhat_trn = evalute_hdmr(x_trn, f0, alpha, range_min, range_max)
- yhat_tst = evalute_hdmr(x_tst, f0, alpha, range_min, range_max)
- out = ''
- out += f'trn x:{x_trn.shape} y:{y_trn.shape}'
- out += f'tst x:{x_tst.shape} y:{y_tst.shape}'
- out += f'{si}'
-
- # Input/Output data concatenated
- xy_trn = np.concatenate([x_trn,y_trn],axis=1)
- xyhat_trn = np.concatenate([x_trn,yhat_trn],axis=1)
- y_vs_yhat_trn = np.concatenate([y_trn,yhat_trn],axis=1)
- r2_trn = r2_score(y_trn, yhat_trn)
-
- xy_tst = np.concatenate([x_tst,y_tst],axis=1)
- xyhat_tst = np.concatenate([x_tst,yhat_tst],axis=1)
- y_vs_yhat_tst = np.concatenate([y_tst,yhat_tst],axis=1)
- r2_tst = r2_score(y_tst, yhat_tst)
-
-
- # x1, x2, ... , xn, y
- columns = [f'x{id+1}' for id in range(n)] + ['y']
- print(columns)
- df_trn = pd.DataFrame(xy_trn, columns=columns)
- df_hat_trn = pd.DataFrame(xyhat_trn, columns=columns)
- df_corr_trn = pd.DataFrame(y_vs_yhat_trn, columns=['y','yhat'])
-
- df_tst = pd.DataFrame(xy_tst, columns=columns)
- df_hat_tst = pd.DataFrame(xyhat_tst, columns=columns)
- df_corr_tst = pd.DataFrame(y_vs_yhat_tst, columns=['y','yhat'])
-
- return (out,
- gd.Dropdown.update(choices=[f'x{id+1}' for id in range(n)]),
- gd.ScatterPlot.update(value=df_trn,x='x1',y='y'),
- gd.ScatterPlot.update(value=df_hat_trn,x='x1',y='y'),
- gd.ScatterPlot.update(value=df_corr_trn,x='y',y='yhat'),
- gd.Markdown.update(value=f'R2:{r2_trn}'),
- gd.ScatterPlot.update(value=df_tst,x='x1',y='y'),
- gd.ScatterPlot.update(value=df_hat_tst,x='x1',y='y'),
- gd.ScatterPlot.update(value=df_corr_tst,x='y',y='yhat'),
- gd.Markdown.update(value=f'R2:{r2_tst}'))
-
-
-def change_pdp(pdp_x):
- print(pdp_x, type(pdp_x))
- global df_trn, df_hat_trn, df_tst, df_hat_tst
- return (gd.ScatterPlot.update(value=df_trn, x=pdp_x, y='y'),
- gd.ScatterPlot.update(value=df_hat_trn, x=pdp_x, y='y'),
- gd.ScatterPlot.update(value=df_tst, x=pdp_x, y='y'),
- gd.ScatterPlot.update(value=df_hat_tst, x=pdp_x, y='y'))
-
-
-with gd.Blocks() as demo:
- with gd.Row():
- with gd.Column():
- N = gd.Slider(10, 5000, value=1000, step=100, label="Number of samples",
- info="""
- Please specify how many sample points you want to
-                                generate to build the meta-model.
- """)
- n = gd.Slider(2, 10, step=1, label="Number of variables",
- info="""
- Please specify how many variables you have
- in the optimized function.
- """)
- m = gd.Slider(1, 20, step=1, value=3, label="Number of bases",
- info="""
- Please specify how many base functions you use
- to construct the meta-model.
- """)
- trn_ratio = gd.Slider(0.1, 0.9, value=0.2, step=0.1, label="Training ratio")
- range_min = gd.Slider(-10.0, 10.0, step=0.01, value=-5, label="Minimum value of the range",
- info="""
- Please choose the lowest value of the range. Make sure the range_min is less than
- range_max.
- """)
- range_max = gd.Slider(-10.0, 10.0, step=0.01, value=5, label="Maximum value of the range",
- info="""
-                               Please choose the highest value of the range. Make sure the range_max is greater than
- range_min.
- """)
-
- with gd.Column():
- func_disp = gd.Markdown()
- func_code = gd.Code(
- value="np.sum((10*x-5)**2 - 10*np.cos(2*3.14*(10*x-5)),axis=1,keepdims=True) +10*x.shape[1]",
- language="python",
- lines=1,
- label="Model function: input is x with shape (N,n) and output is y with shape (N,1)",
- interactive=True
- )
- btn = gd.Button('Create FEOM')
- gd.Examples([["np.sum((10*x-5)**2 - 10*np.cos(2*3.14*(10*x-5)),axis=1,keepdims=True) +10*x.shape[1]",
- "$$\mathbf{y} = 10n+\sum_{i=1}^n[\mathbf{x}_i^2-10 \cos(2 \pi x_i)] \;\; \mathbf{x}_i \in R^{N}$$",
- 0,1],
- ["np.sum((x-0.5)**2,axis=1,keepdims=True)","$$\mathbf{y} = \sum_i^n(\mathbf{x}_i-0.5)^2,\;\; \mathbf{x}_i \in R^{N}$$",0,1],
- ["np.sum(x,axis=1,keepdims=True)","$$\mathbf{y} = \sum_i^n \mathbf{x}_i \;\; \mathbf{x}_i \in R^{N}$$",0,1],
- ["np.sum((x-0.5)**3,axis=1,keepdims=True)","$$\mathbf{y} = \sum_i^n (\mathbf{x}_i-0.5)^3 \;\; \mathbf{x}_i \in R^{N}$$",0,1]],
- inputs=[func_code,func_disp,range_min,range_max])
- with gd.Row():
- gd.Markdown('Original Training Data')
- gd.Markdown('HDMR Approximation applied to Training Data')
- r2_out_trn = gd.Markdown('R2 Score on Training')
- with gd.Row():
- pdp_trn = gd.ScatterPlot(label='Scatter plot of input data').style(container=True)
- feom_trn = gd.ScatterPlot(label='Scatter plot of FEOM').style(container=True)
- corr_trn = gd.ScatterPlot(label='y versus feom')
- with gd.Row():
- gd.Markdown('Original Testing Data')
- gd.Markdown('HDMR Approximation applied to Testing Data')
- r2_out_tst = gd.Markdown('R2 Score on Testing')
- with gd.Row():
- pdp_tst = gd.ScatterPlot(label='Scatter plot of input data').style(container=True)
- feom_tst = gd.ScatterPlot(label='Scatter plot of FEOM').style(container=True)
- corr_tst = gd.ScatterPlot(label='y versus feom')
-
- pdp_x = gd.Dropdown(['x1','x2'], label="Choose input variable to show in scatter plots")
- out = gd.TextArea()
- btn.click(optimize, inputs=[N,n,m,func_code, trn_ratio,range_min,range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- n.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min,range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- N.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min,range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- m.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min, range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- trn_ratio.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min, range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- range_min.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min, range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- range_max.change(optimize, inputs=[N,n,m,func_code,trn_ratio,range_min, range_max],
- outputs=[out, pdp_x, pdp_trn, feom_trn,corr_trn,r2_out_trn,
- pdp_tst, feom_tst,corr_tst,r2_out_tst])
- pdp_x.change(change_pdp, inputs=pdp_x, outputs=[pdp_trn, feom_trn, pdp_tst, feom_tst])
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/components/error.svelte-c15ad458.js b/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/components/error.svelte-c15ad458.js
deleted file mode 100644
index f03843819ecde87c29eeb88862e9f709a330cf32..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/diffuse-the-rest/build/_app/immutable/components/error.svelte-c15ad458.js
+++ /dev/null
@@ -1 +0,0 @@
-import{S as A,i as C,s as F,k as v,q as k,a as h,e as q,l as g,m as E,r as $,h as p,c as R,b as u,F as P,u as S,A as w,G}from"../chunks/index-032ac624.js";import{s as H}from"../chunks/singletons-d6c43dab.js";const O=()=>{const t=H,s={page:{subscribe:t.page.subscribe},navigating:{subscribe:t.navigating.subscribe},updated:t.updated};return Object.defineProperties(s,{preloading:{get(){return console.error("stores.preloading is deprecated; use stores.navigating instead"),{subscribe:t.navigating.subscribe}},enumerable:!1},session:{get(){return B(),{}},enumerable:!1}}),s},z={subscribe(t){return O().page.subscribe(t)}};function B(){throw new Error("stores.session is no longer available. See https://github.com/sveltejs/kit/discussions/5883")}function N(t){let s,i=t[0].error.frame+"",o;return{c(){s=v("pre"),o=k(i)},l(r){s=g(r,"PRE",{});var a=E(s);o=$(a,i),a.forEach(p)},m(r,a){u(r,s,a),P(s,o)},p(r,a){a&1&&i!==(i=r[0].error.frame+"")&&S(o,i)},d(r){r&&p(s)}}}function y(t){let s,i=t[0].error.stack+"",o;return{c(){s=v("pre"),o=k(i)},l(r){s=g(r,"PRE",{});var a=E(s);o=$(a,i),a.forEach(p)},m(r,a){u(r,s,a),P(s,o)},p(r,a){a&1&&i!==(i=r[0].error.stack+"")&&S(o,i)},d(r){r&&p(s)}}}function D(t){let s,i=t[0].status+"",o,r,a,b=t[0].error.message+"",_,d,c,m,l=t[0].error.frame&&N(t),n=t[0].error.stack&&y(t);return{c(){s=v("h1"),o=k(i),r=h(),a=v("pre"),_=k(b),d=h(),l&&l.c(),c=h(),n&&n.c(),m=q()},l(e){s=g(e,"H1",{});var f=E(s);o=$(f,i),f.forEach(p),r=R(e),a=g(e,"PRE",{});var j=E(a);_=$(j,b),j.forEach(p),d=R(e),l&&l.l(e),c=R(e),n&&n.l(e),m=q()},m(e,f){u(e,s,f),P(s,o),u(e,r,f),u(e,a,f),P(a,_),u(e,d,f),l&&l.m(e,f),u(e,c,f),n&&n.m(e,f),u(e,m,f)},p(e,[f]){f&1&&i!==(i=e[0].status+"")&&S(o,i),f&1&&b!==(b=e[0].error.message+"")&&S(_,b),e[0].error.frame?l?l.p(e,f):(l=N(e),l.c(),l.m(c.parentNode,c)):l&&(l.d(1),l=null),e[0].error.stack?n?n.p(e,f):(n=y(e),n.c(),n.m(m.parentNode,m)):n&&(n.d(1),n=null)},i:w,o:w,d(e){e&&p(s),e&&p(r),e&&p(a),e&&p(d),l&&l.d(e),e&&p(c),n&&n.d(e),e&&p(m)}}}function I(t,s,i){let o;return G(t,z,r=>i(0,o=r)),[o]}class L extends A{constructor(s){super(),C(this,s,I,D,F,{})}}export{L as default};
diff --git a/spaces/huytx267/function_retrieval/app.py b/spaces/huytx267/function_retrieval/app.py
deleted file mode 100644
index 6e47ca33e3a30878b709244dddfcb3dc1e9a065d..0000000000000000000000000000000000000000
--- a/spaces/huytx267/function_retrieval/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import pandas as pd
-import numpy as np
-from tqdm import tqdm
-from copy import deepcopy
-import torch
-import json
-from numpy.linalg import norm
-import gradio as gr
-from sentence_transformers import SentenceTransformer
-
-# necessary function
-def cosinesimilarity(vector1, vector2):
- cosine = np.dot(vector1, vector2)/(norm(vector1)*norm(vector2))
- return cosine
-
-def encode_input_and_return_top_n(input_in, db_dff, top_k, new2oldmatching):
- embed1 = model.encode(input_in)
- scores = []
- db_df_in = deepcopy(db_dff)
- db_in = list(set(db_df_in['Câu lệnh có sẵn'].tolist()))
- for i, func in enumerate(db_in):
- embed2 = db_df_in['Embedding'].loc[i]
- scores.append(round(cosinesimilarity(embed1, embed2), 3))
- db_df_in["Điểm"] = scores
- db_df_in.sort_values(by=['Điểm'], inplace=True, ascending=False)
- ids = db_df_in[:top_k].index.tolist()
- output = {new2oldmatching[db_df_in['Câu lệnh có sẵn'][i].strip()]: round(db_df_in['Điểm'][i].item(), 2) for i in ids}
- return output
-
-def image_classifier(Input):
- inputt = Input.lower()
- result = encode_input_and_return_top_n(inputt, db_df, 3, new2oldmatch)
- return result
-
-def encode_database(db_in):
- df = pd.DataFrame(list(zip(db_in, [[]]*len(db_in))), columns=["Câu lệnh có sẵn", "Embedding"])
- for i, func in tqdm(enumerate(db_in)):
- embedding2 = model.encode(func)
- df['Embedding'].loc[i] = embedding2
- else:
- print()
- print("Encode database successfully")
- return df
-
-model = SentenceTransformer("Huy1432884/function_retrieval")
-model.eval()
-
-with open('new2oldmatch.json', 'r') as openfile:
- new2oldmatch = json.load(openfile)
- new2oldmatch = {u.strip().lower(): v.strip() for u, v in new2oldmatch.items()}
-
-database = [cmd.lower() for cmd in new2oldmatch.keys()]
-db_df = encode_database(database)
-
-demo = gr.Interface(fn=image_classifier, inputs="text", outputs="label")
-demo.launch()
\ No newline at end of file
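The retrieval app above boils down to one step: embed the user query, score it against every stored command embedding with cosine similarity, and return the top-k matches. A minimal, self-contained sketch of that ranking step (plain NumPy vectors stand in for the SentenceTransformer embeddings, and the command list is made up for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Same formula as cosinesimilarity() in the app: dot product over the product of the norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_commands(query_vec, command_vecs, commands, top_k=3):
    # Score every stored command against the query and keep the best top_k.
    scores = [cosine_similarity(query_vec, v) for v in command_vecs]
    order = np.argsort(scores)[::-1][:top_k]
    return [(commands[i], round(scores[i], 3)) for i in order]

# Toy vectors standing in for model.encode(...) outputs.
commands = ["turn on the light", "play music", "open the door"]
command_vecs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]
query_vec = np.array([0.9, 0.1])
print(rank_commands(query_vec, command_vecs, commands))
```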
diff --git a/spaces/hysts/insightface-SCRFD/app.py b/spaces/hysts/insightface-SCRFD/app.py
deleted file mode 100644
index 0a407d302e27e9a40ff338273dc797f66e2132c6..0000000000000000000000000000000000000000
--- a/spaces/hysts/insightface-SCRFD/app.py
+++ /dev/null
@@ -1,154 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import functools
-import os
-import pathlib
-import shlex
-import subprocess
-import sys
-import urllib.request
-
-if os.environ.get('SYSTEM') == 'spaces':
- import mim
- mim.install('mmcv-full==1.4', is_yes=True)
-
- subprocess.call(shlex.split('pip uninstall -y opencv-python'))
- subprocess.call(shlex.split('pip uninstall -y opencv-python-headless'))
- subprocess.call(
- shlex.split('pip install opencv-python-headless==4.5.5.64'))
- subprocess.call(shlex.split('pip install terminaltables==3.1.0'))
- subprocess.call(shlex.split('pip install mmpycocotools==12.0.3'))
-
- subprocess.call(shlex.split('pip install insightface==0.6.2'))
- subprocess.call(shlex.split('sed -i 23,26d __init__.py'),
- cwd='insightface/detection/scrfd/mmdet')
-
-import cv2
-import gradio as gr
-import huggingface_hub
-import numpy as np
-import torch
-import torch.nn as nn
-
-sys.path.insert(0, 'insightface/detection/scrfd')
-
-from mmdet.apis import inference_detector, init_detector, show_result_pyplot
-
-TITLE = 'insightface Face Detection (SCRFD)'
-DESCRIPTION = 'This is an unofficial demo for https://github.com/deepinsight/insightface/tree/master/detection/scrfd.'
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def load_model(model_size: str, device) -> nn.Module:
- ckpt_path = huggingface_hub.hf_hub_download(
- 'hysts/insightface',
- f'models/scrfd_{model_size}/model.pth',
- use_auth_token=HF_TOKEN)
- scrfd_dir = 'insightface/detection/scrfd'
- config_path = f'{scrfd_dir}/configs/scrfd/scrfd_{model_size}.py'
- model = init_detector(config_path, ckpt_path, device.type)
- return model
-
-
-def update_test_pipeline(model: nn.Module, mode: int):
- cfg = model.cfg
- pipelines = cfg.data.test.pipeline
- for pipeline in pipelines:
- if pipeline.type == 'MultiScaleFlipAug':
- if mode == 0: # 640 scale
- pipeline.img_scale = (640, 640)
- if hasattr(pipeline, 'scale_factor'):
- del pipeline.scale_factor
- elif mode == 1: # for single scale in other pages
- pipeline.img_scale = (1100, 1650)
- if hasattr(pipeline, 'scale_factor'):
- del pipeline.scale_factor
- elif mode == 2: # original scale
- pipeline.img_scale = None
- pipeline.scale_factor = 1.0
- transforms = pipeline.transforms
- for transform in transforms:
- if transform.type == 'Pad':
- if mode != 2:
- transform.size = pipeline.img_scale
- if hasattr(transform, 'size_divisor'):
- del transform.size_divisor
- else:
- transform.size = None
- transform.size_divisor = 32
-
-
-def detect(image: np.ndarray, model_size: str, mode: int,
- face_score_threshold: float,
- detectors: dict[str, nn.Module]) -> np.ndarray:
- model = detectors[model_size]
- update_test_pipeline(model, mode)
-
- # RGB -> BGR
- image = image[:, :, ::-1]
- preds = inference_detector(model, image)
- boxes = preds[0]
-
- res = image.copy()
- for box in boxes:
- box, score = box[:4], box[4]
- if score < face_score_threshold:
- continue
- box = np.round(box).astype(int)
-
- line_width = max(2, int(3 * (box[2:] - box[:2]).max() / 256))
- cv2.rectangle(res, tuple(box[:2]), tuple(box[2:]), (0, 255, 0),
- line_width)
-
- res = cv2.cvtColor(res, cv2.COLOR_BGR2RGB)
- return res
-
-
-device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
-
-model_sizes = [
- '500m',
- '1g',
- '2.5g',
- '10g',
- '34g',
-]
-detectors = {
- model_size: load_model(model_size, device=device)
- for model_size in model_sizes
-}
-modes = [
- '(640, 640)',
- '(1100, 1650)',
- 'original',
-]
-
-func = functools.partial(detect, detectors=detectors)
-
-image_path = pathlib.Path('selfie.jpg')
-if not image_path.exists():
- url = 'https://raw.githubusercontent.com/peiyunh/tiny/master/data/demo/selfie.jpg'
- urllib.request.urlretrieve(url, image_path)
-examples = [[image_path.as_posix(), '10g', modes[0], 0.3]]
-
-gr.Interface(
- fn=func,
- inputs=[
- gr.Image(label='Input', type='numpy'),
- gr.Radio(label='Model', choices=model_sizes, type='value',
- value='10g'),
- gr.Radio(label='Mode', choices=modes, type='index', value=modes[0]),
- gr.Slider(label='Face Score Threshold',
- minimum=0,
- maximum=1,
- step=0.05,
- default=0.3),
- ],
- outputs=gr.Image(label='Output', type='numpy'),
- examples=examples,
- title=TITLE,
- description=DESCRIPTION,
-).queue().launch(show_api=False)
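The core of detect() above is straightforward post-processing: drop boxes below the score threshold, scale the rectangle thickness with the face size, and draw. A small sketch of just that step on dummy detections (only NumPy and OpenCV are assumed; the fake_boxes rows imitate the [x1, y1, x2, y2, score] arrays the detector returns):

```python
import cv2
import numpy as np

def draw_faces(image_bgr: np.ndarray, boxes: np.ndarray, score_threshold: float = 0.3) -> np.ndarray:
    """Draw a green rectangle for every detection whose score passes the threshold."""
    out = image_bgr.copy()
    for x1, y1, x2, y2, score in boxes:
        if score < score_threshold:
            continue
        box = np.round([x1, y1, x2, y2]).astype(int)
        # Thicker lines for larger faces, but never thinner than 2 px.
        line_width = max(2, int(3 * (box[2:] - box[:2]).max() / 256))
        pt1, pt2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3]))
        cv2.rectangle(out, pt1, pt2, (0, 255, 0), line_width)
    return out

# Dummy black canvas plus two fake detections, one of which is below the threshold.
canvas = np.zeros((480, 640, 3), dtype=np.uint8)
fake_boxes = np.array([[100, 100, 200, 220, 0.92], [300, 50, 340, 100, 0.10]])
result = draw_faces(canvas, fake_boxes)
print(result.shape, bool((result != 0).any()))  # the surviving box leaves green pixels
```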
diff --git a/spaces/impira/flan-playground/README.md b/spaces/impira/flan-playground/README.md
deleted file mode 100644
index 4c1f3eed9ca7f12ec5a7e73e152710c8dd9f9bf9..0000000000000000000000000000000000000000
--- a/spaces/impira/flan-playground/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Flan Playground
-emoji: 🍮
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: true
----
diff --git a/spaces/inamXcontru/PoeticTTS/Backtrack 5 Wifi Drivers.md b/spaces/inamXcontru/PoeticTTS/Backtrack 5 Wifi Drivers.md
deleted file mode 100644
index db823a2d1b832084178f80bffd036d969ec5ea0a..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Backtrack 5 Wifi Drivers.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-For later versions of the 2.6 kernel (as well as recent kernels), the smp_lock.h was removed. What used to be contained therein was moved into . To successfully install the ALFA drivers, however, it is necessary to take a couple of extra steps. This is taken from the most recent release of BackTrack R5:
-I think it would be more trouble than it is worth. Plus BT is a live CD, so why on earth would you run it virtualized? There are certain features that need special drivers for direct hardware access. This would be unreasonable to accommodate through an extra virtualized hardware abstraction layer.
-backtrack 5 wifi drivers
DOWNLOAD ⇔ https://gohhs.com/2uz5Bp
-- Download BackTrack 5R1
- Instructions:
- Navigate to one of the following URLs
- -images/backtrack-5/backtrack-5r1-gnome-i386/BT5R1-GNOME-32.iso
- Click the Save File Radio Button
- Click the OK Button
- Download Location
- Instructions:
- Navigate to a desired destination location of your choosing.
- In my case, I will save BT5R1-GNOME-32.iso toE:\ISOs, which is my external usb drive.
- Click the OK Button
-Compat-wireless is a package which contains the development and stable versions (pulled from the kernel's git repository) of the in-kernel wireless drivers and the mac80211 wireless stack. New packages are released pretty much every day.
-This package is mentioned quite often on the forums and the IRC channel because using it is very convenient. You can think of it (in fact, you should) as a sized-down version of the kernel tree, a one that contains only the sources of the wireless drivers and the wireless stack. Therefore, you can apply any wireless-related patches to it and recompile them without having to recompile the whole kernel.
-Additionally, the development versions of the drivers have one particular advantage : they're being constantly improved. It is not uncommon to see that they support more hardware, have better performance, or simply contain fixes to various bugs - in contrast to their stable versions. However, you should always remember that development versions may be incomplete or contain even more bugs than the stable ones.
-We distinguish two versions of compat-wireless, development (bleeding edge) and stable. The development version may include more drivers, supported hardware and bug-fixes but may not be as stable as the 'stable' version which implies its nature. With minor exceptions, both versions will suffice in your bid to enable aircrack-ng support for your card/driver. For the majority of the users the stable version is the preferred one to use
-
-compat-wireless offers one very useful script, called driver-select, which allows you to save time when compiling your preferred driver. Currently not all drivers are available via driver-select but the most common ones are. Its useful if you install compat-wireless frequently or if you have one card on your computer. Enter the compat-wireless directory and issue:
-DO NOT use the steps referenced in this article to install or uninstall Windows DCH drivers, as doing so bypasses the Intel installer designed to install these new drivers, thereby possibly resulting in minor to major system instability.
-Just installed RedHat 8. Noticed that I'm unable to connect to wifi. Going into "Settings->Wi-Fi" shows the message "No Wi-Fi Adapter Found...make sure you have a Wi-Fi adapter plugged and turned on". In the status bar up at the top there is also no wifi icon indicator either. My friend and I both have the same laptops (E480 Thinkpads) and he happened to see the same thing when he installed Centos7. So it appears to not be specific to my setup. Prior to installing RedHat I had Ubuntu installed and the wifi was working fine. So I'm stuck on why it's not working with RedHat and need some help. I included some more details below.
-I've also gone in and took a look at /lib/firmware/rtlwifi to see if the firmware for my wifi adapter was there and it is. This was recommended from a similar discussion. I can see the rtl8822befw.bin file there. However, I'm not too sure how to proceed knowing that it's there. A colleague referred me to a ubuntu post suggesting sudo rmmod ideapad_laptop. However, running that command leads to...
-Since then I have restarted my system to ensure the wifi still works properly and it is. Since you added to the elrepo for Centos8/RHEL8 how can we port it over to the elrepo for Centos7/RHEL7? I hate to even ask since you've went above and beyond already but wanted to kindly ask.
-hi everyone, i have RHEL 8.4 but the wifi driver is not available. PFA the lspci output: lspci -nn | grep 8821
0000:02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8821CE 802.11ac PCIe Wireless Network Adapter [10ec:c821]
-Please note that Wifi drivers often break and need rebuilding against new point releases, so we may need to rebuild the driver for el7.7 and el8.1 once they are released. If the driver stops working after an update, please feel free to file a bug request at elrepo.org/bugs and either Akemi or I will fix it.
-Hi Philip, I am hitting the exact same issue: What I have done is: Install Redhat release is rhel7.6, download kmod-r8822be-0.0-1.el7_6.elrepo.x86_64.rpm and install it sucessfully. However, the modprobe command failed : modprobe:error: could not insert r88wwbe: Required key not available. The wifi adapter is rtl8822BE.
-Hi, thanks for the reply. already bought a wifi adapter and its working fine. But many will be facing issue with this adapter and mine is a brand new one. In windows there is no issue at all, so any other solutions will be appreciated. thanks in advance
-I have similar situation to Sangram's, I am on a Lenovo T420 and wifi was working well on RH 7.8, I installed yesterday 8.1 and wifi is out... I can't start it at all and nmcli device wifi rescan says No Wi-Fi device found. I am guessing this could be because of the driver as I have tried every other solution I could find to start the device with no success.
-I can confirm I'm using iwlwifi driver, I can't send you the lspci at the moment as I have no internet at all, tried an ethernet cable but I still can't seem to get to the outside world... I will try to write down the most important stuff and write it again here, in the mean time I will see why I don't get internet with the cable as well.
-Thank you! This works great. The wireless network on my laptop became unreliable after upgrading to Ubuntu 16.04 LTS. I too experienced no wifi after waking up from sleep. Also the wifi would die at random for no apparent reason, and the only thing that worked was a full reboot of the laptop.
-One trend I've noticed with Ubuntu Software Center is that often it installs 90% of a package and never "finishes" installing the package - and this is what happened again. Regardless, what has worked in the past is to close Ubuntu Software Center (which I did). The package was installed as far as I could tell, but in terminal, I tried "sudo modprobe -r wl" and got the response "wl is busy" so I restarted my pc. Lo and behold, after restart and third attempt at this line "sudo modprobe -r wl" my wifi indicator immediately lit up and it seems to be working fine. Thank you kindly for your help. I just wanted to let fellow linux users know that this worked for me, and also how I had to get it to work. Cheers!
-PS I am running on 2gb of ddr2 ram, with a dual-core processor @ approx 1.85ghz... Which means I still prefer backtrack 5 / ubuntu 10.04 for my daily computing needs/desires. I feel ubuntu 12 and 14 (I did not try 13) are too bloated, and while there are some nice ideas I see coming to life, they do not to my knowledge follow the KISS method. Thanks anyways :) I will still give ubuntu 14 a shot on this machine. Much love ~
-I might keep ubuntu 14, due to the fact that I am unable to get flash player working in either chromium or firefox with backtrack 5 / ubuntu 10.04; if anyone out there has found a workaround, please send me a link @ beautiful.insanity.420@gmail.com... Thank you!
-Apart from that, there are some issues with BT5 and wireless drivers atm - i would ask you to stick to the official BackTrack Forums for more help with that. There are several posts that might bring you forward.
-This talk will introduce techniques to detect Wi-Fi attacks such as Honeypots, Evil Twins, Mis-association , Hosted Network based backdoors etc. on a Windows client without the need for custom hardware or drivers. Our attack detection techniques will work for both Encrypted (WPA/WPA2 PSK and Enterprise) and Unencrypted networks.
-Dear sir,
I installed BackTrack 5 R1 but I want to recover my Wi-Fi password. I am facing some problems:
1 - Which Wi-Fi network adapter do I need?
2 - Where can I buy one in India?
3 - How much does it cost in Indian rupees?
Sir, please help me.
-I had the same Atheros AR5007EG as the internal wireless on my old Toshiba Satellite. I could never get mine to work. Unless they have new drivers for it than more than likely you will have to buy a compatible USB adapter.
-An adapter that has a bigger dBi antenna will have better range but many use boosters in combination with bigger antenna to get real distance. Boosters and bigger antennas can always be added later if needed. Here is a post about this. -range-wifi-antenna-and-boosters.html
-Ok guys, I am an arch user for a good while now (1 month, ehehe), but I am not even close to a tech savy for now. I wanted to try today the aircrack injection in arch. It works flawlessly in backtrack 5. Now ... I get an error something about mon0 channel and access point channel. A bit of research, gave me some results from google.
-I have a 64 bit system and I tried to boot up with Security Onion 12.04 64 bit iso in order to install it on my hdd. Grub menu appears to work fine; however, when the Live Desktop environment tries to load, it freezes with the following characteristics:
- the mouse pointer is able to move around.
- no desktop icons are present.
- "no wireless connection" notification is present on the top right in a frozen state not responsive to mouse clicks.
- no other objects/icons respond to mouse clicks.
I have also tried the direct installer option in the grub menu and it also freezes with no wizards appearing on screen.
Previously, on this system I have used/installed the following 64 bit systems:
- Backtrack 5 R3
- windows pro
- ubuntu
- centos
- fedora
None of them presented a problem. Moreover, on the same system, using the same iso dvd, I tried installing the 12.04 64bit inside a vm in virtual box (on windows host) and everything works fine and the installation completes with no problems at all, so I'm assuming the dvd is fine. So I'm wondering what is causing the problem? Could it be the graphic drivers or wifi nic drivers?
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fifa Street 4 Download Pc Torrent 19.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fifa Street 4 Download Pc Torrent 19.md
deleted file mode 100644
index 3bfb859c2d77db5687177602f3b0eea6faeff660..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fifa Street 4 Download Pc Torrent 19.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Fifa Street 4 Download Pc Torrent 19
Download File ✪✪✪ https://urlin.us/2uEwoA
-
-A true street soccer game that replicates the way street players around the world play, this new installment of the FIFA Street series ... The game is the official sequel to the FIFA Street series, which has a huge number of fans all around the world.
-It has been several years since the series first appeared, and the game is still very popular and well regarded.
-Over the years the series has gained a large number of fans, which is not surprising: it is a very exciting game and at the same time very easy to control. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/3gp Avventurieri Ai Confini Del Mondo Movie.md b/spaces/inreVtussa/clothingai/Examples/3gp Avventurieri Ai Confini Del Mondo Movie.md
deleted file mode 100644
index 18d3cc484c1fc044ef25277d192e476657fb5e6c..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/3gp Avventurieri Ai Confini Del Mondo Movie.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-Avventurieri ai confini del mondo: A Classic Adventure Movie
-Avventurieri ai confini del mondo (High Road to China in English) is a 1983 adventure movie directed by Brian G. Hutton and starring Tom Selleck, Bess Armstrong and Jack Weston. The movie is set in the 1920s and follows the exploits of Evie Tozer, a spoiled heiress who hires Patrick O'Malley, a former WWI pilot, to find her missing father in Asia before he loses his fortune. Along the way, they encounter various dangers and adventures, such as bandits, warlords and a rival pilot.
-The movie is based on a novel by Jon Cleary and was originally intended to star Steve McQueen and Jacqueline Bisset, but McQueen died before the production began. The movie was filmed in Yugoslavia, England and China, and features stunning aerial scenes and exotic locations. The movie also has a memorable soundtrack composed by John Barry, who also scored many James Bond movies.
-3gp Avventurieri ai confini del mondo movie
Download File ☆☆☆☆☆ https://tiurll.com/2uClaE
-Avventurieri ai confini del mondo is a fun and entertaining movie that pays homage to the classic adventure movies of the past. It has humor, romance, action and adventure, and showcases the charisma and chemistry of Tom Selleck and Bess Armstrong. The movie is available to watch on Prime Video[^1^] or on DVD from New Gold Film[^2^]. If you are a fan of adventure movies, you should definitely check out Avventurieri ai confini del mondo.
Avventurieri ai confini del mondo also has some interesting trivia behind its production and reception. Here are some of the facts that you may not know about this movie:
-
-- The movie was based on a novel by Jon Cleary, who was disappointed by the film adaptation and felt that it could have been one of the great adventure films of all time[^1^].
-- The movie was originally intended to star Steve McQueen and Jacqueline Bisset, but McQueen died before the production began[^1^].
-- The movie was frequently perceived by the film-going public as being a knock-off of Raiders of the Lost Ark (1981), when in fact the film had been in development since the late 1970s[^1^]. Interestingly, Tom Selleck auditioned for the part of Indiana Jones and one can see his screen test on the DVD for that movie. Selleck was locked into a tight shooting schedule for Magnum, P.I. (1980) which is why he was unable to star in Raiders of the Lost Ark (1981). As that movie was a blockbuster hit, Avventurieri ai confini del mondo is considered to have been offered to Selleck for compensatory reasons and also because he had been the original choice to play the Indiana Jones character[^1^].
-- The names of Patrick O'Malley's two bi-planes were Dorothy and Lillian. These were named after two silent screen actresses, the sisters Dorothy Gish and Lillian Gish[^1^].
-- The make and models of the three bi-planes in the film were all vintage Belgian-designed French-built Stampe SV4C aircraft that were built after World War II though in the film the planes portray World War I era bi-planes[^1^].
-- Actors Tom Selleck and Bess Armstrong had to overcome a significant height difference between them whilst blocking their scenes together. Armstrong is quoted in the January 1984 edition of Photoplay (UK) magazine as saying: "I spent five months in four-inch heels and standing on soap boxes just to reach him"[^1^].
-- Actress Bess Armstrong once said of her co-star Tom Selleck whilst doing publicity for the picture: "He is real gorgeous, and he has some real power now, but he doesn't use that, or his charm, to exploit women. He genuinely seems to like women. For an actor, that's rare"[^1^].
-- The movie features a song called Charleston, written by Cecil Mack and James P. Johnson, which was a popular dance tune in the 1920s[^2^].
-
-As you can see, Avventurieri ai confini del mondo is not only a great adventure movie, but also a fascinating piece of cinema history. If you want to learn more about this movie, you can visit its IMDb page[^2^] or its Wikipedia page[^4^]. You can also watch some clips and trailers on YouTube or other video platforms. Avventurieri ai confini del mondo is a movie that deserves more recognition and appreciation from fans of adventure movies.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/AccuRC 2 Torrent Download [License]golkes.md b/spaces/inreVtussa/clothingai/Examples/AccuRC 2 Torrent Download [License]golkes.md
deleted file mode 100644
index bd8a58eb2beb6c052fa7c075b366b1bea52c58cb..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/AccuRC 2 Torrent Download [License]golkes.md
+++ /dev/null
@@ -1,16 +0,0 @@
-AccuRC 2 Torrent Download [License]golkes
DOWNLOAD — https://tiurll.com/2uCkHX
-
-Bepantel is licensed to Good Medical Store,. HMG-CoA Reductase Inhibitors: Beyond Lipid-Lowering Effect. Gout, hyperuricemia and hyperuricosuria may be treated with allopurinol, a xanthine oxidase inhibitor.
-
-accuracy; accuracy vs precision; accuracy of rapid covid test . Gout, hyperuricemia and hyperuricosuria may be treated with allopurinol, a xanthine oxidase inhibitor. Gout is the most common form of crystal-induced arthritis. Accurate and rapid assessment of crystals in the joint is important to enable timely diagnosis and treatment of this debilitating disease. Herein, we describe a novel imaging technique for crystal detection and localization in gouty joints. It is based on the measurement of autofluorescence from free.
-
-A New Clinical Tool for Gouty Arthritis Diagnosis
-
-The most common cause of acute gouty arthritis is deposition of monosodium urate (MSU) crystals in the joint.1 MSU crystals are arachidonate-containing negatively charged glycosylated bilayered particles.2 In the joint, they are released by macrophages and neutrophils stimulated by MSU crystals.3-5 Subsequently, they are phagocytosed by fibroblasts and/or chondrocytes,1,6,7 leading to the development of the characteristic inflamed joint in which MSU crystals are found.8,9 A spectrum of clinical manifestations can occur ranging from acute gouty arthritis to recurrent and more chronic subclinical arthritis, which is seen in 1.7% to 4.9% of patients with gout.2,10
-
-The diagnosis of gout is currently based on demonstration of a monosodium urate crystal within the joint, which is obtained by aspiration of a joint fluid sample. A sample obtained from a joint of a patient with gout is typically positive for urate and often for MSU crystals on microscopic examination.11-14 In the past decade, however, a noninvasive test for rapid and accurate diagnosis of gout has been developed, based on detection of MSU crystals in the urine.14-18 This test, known as the test for pseudogout or gouty arthritis, is available in the US and Europe.
-
-The test has a high sensitivity (>90%) and specificity (>95%),11 4fefd39f24
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop CS6 V. 13.0 Keygen PASSWORD.txt.rar ((EXCLUSIVE)).md b/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop CS6 V. 13.0 Keygen PASSWORD.txt.rar ((EXCLUSIVE)).md
deleted file mode 100644
index 52aa0302df7526e8d68aaafc71c398a77b267f07..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop CS6 V. 13.0 Keygen PASSWORD.txt.rar ((EXCLUSIVE)).md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-adobe dreamweaver cracked code, license key, free password, serial & license key, serial key, license key generator, crack code, patch code, serial number, pass code, licenser key, serial key and patch code. adobe graphics suite crack code for all (windows, macintosh, linux) platform.
-ecourier e-mail encryption. adobe dreamweaver cracked code, license key, free password, serial & license key, serial key, license key generator, crack code, patch code, serial number, pass code, licenser key, serial key and patch code.
-Adobe Photoshop CS6 v. 13.0 Keygen PASSWORD.txt.rar
Download File ✺ https://tiurll.com/2uCitR
-rating: 2.5 + i’ve used it before, but i’m not going to be purchasing it anymore. the hardest thing for me to do is find where i’m supposed to go. the things i want the most are the basics, and it doesnt even know where its tools are located. i’m not even sure if it has crack codes.
-7480 records. the complete instructions are given below step by step on how to install and use the latest version of adobe dreamweaver cc or cs6 for windows. adobe dreamweaver cc - dreamweaver cs6 - cs6 html5. create dreamweaver cs6 file for your project. intel core mac operating system: version: cs 6 service pack 1 language: english
-6009 records. adobe dreamweaver cs6 crack free download free adobe dreamweaver cs6 crack free download. when working on the computer page number of documents, dreamweaver cs6 can't differentiate to which of these pages. adobe dreamweaver pro 9.0.2 crack/patch error code: 745. dreamweaver cs6 keygen crack across the universe feat dreamweaver cc 6 serial with. adobe dreamweaver cs6 crack free download cs6 installation. cs6 installation a different way and remove the previous version.https://coub.com/stories/3355580-adobe-photoshop-cs6-crack-incompatibilities.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Advanced Rar Repair 1.2 Full Version Free Download ((INSTALL)).md b/spaces/inreVtussa/clothingai/Examples/Advanced Rar Repair 1.2 Full Version Free Download ((INSTALL)).md
deleted file mode 100644
index 93fddaa977fc33bbbbedd199d5898a1bcc0027b6..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Advanced Rar Repair 1.2 Full Version Free Download ((INSTALL)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-advanced rar repair 1.2 full version free download
DOWNLOAD ✔ https://tiurll.com/2uCktk
-
-DataNumen RAR Repair (formerly Advanced RAR Repair) is a ... To get them, you need to buy the full version which is available for 49.95USD. 4d29de3e1b
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Begum Jaan Movie Download In 720p Torrent FREE.md b/spaces/inreVtussa/clothingai/Examples/Begum Jaan Movie Download In 720p Torrent FREE.md
deleted file mode 100644
index ae6e079e05337c20aa7b76456c4b974452335eea..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Begum Jaan Movie Download In 720p Torrent FREE.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-Begum Jaan Movie Download in 720p Torrent: A Complete Guide
-If you are looking for a historical drama that depicts the partition of India and Pakistan in 1947, you might want to check out Begum Jaan movie download in 720p torrent. This movie tells the story of Begum Jaan, a brothel owner who refuses to leave her home that lies on the border between the two countries. She and her courtesans fight against the authorities who want to evict them and demolish their house.
-Begum Jaan movie download in 720p torrent is a remake of the Bengali film Rajkahini (2015), directed by Srijit Mukherji. The movie stars Vidya Balan as Begum Jaan, along with Naseeruddin Shah, Gauhar Khan, Flora Saini, and others. The movie was released in 2017 and received mixed reviews from critics and audiences.
-Begum Jaan movie download in 720p torrent
Download Zip ⇒ https://tiurll.com/2uCiBX
-How to Get Begum Jaan Movie Download in 720p Torrent
-There are two ways to get Begum Jaan movie download in 720p torrent: downloading it from the official website or from a trusted source; or downloading it from other websites or platforms that offer it for free or at a lower price. The first option is the most recommended one, as it ensures that you get a safe and secure download that does not contain any malware or viruses. Downloading Begum Jaan movie download in 720p torrent from the official website or from a trusted source also gives you access to subtitles and updates.
-The second option is to download Begum Jaan movie download in 720p torrent from other websites or platforms that offer it for free or at a lower price. This option is risky and not advisable, as you might end up with a corrupted or infected download that does not work or worse, damages your computer. Downloading Begum Jaan movie download in 720p torrent from unauthorized sources also violates the terms and conditions of the movie and might result in legal consequences.
-
-How to Use Begum Jaan Movie Download in 720p Torrent
-Once you have obtained Begum Jaan movie download in 720p torrent, you can use it to watch the movie on your computer or any compatible device. Here are the steps to follow:
-
-- Download and install a torrent client software, such as uTorrent, BitTorrent, or qBittorrent.
-- Run the torrent client software and click on "File" > "Add Torrent" or "Open Torrent".
-- Browse your computer and select the Begum Jaan movie download in 720p torrent file that you downloaded.
-- Click on "OK" and wait for the torrent client software to download the movie file.
-- Once the download is complete, you can find the movie file in the folder that you specified.
-- Double-click on the movie file to play it with your preferred media player.
-
-What are the Benefits of Begum Jaan Movie Download in 720p Torrent
-Begum Jaan movie download in 720p torrent has many benefits that make it a worthwhile choice for anyone who wants to watch a historical drama that portrays the partition of India and Pakistan. Some of the benefits are:
-
-- It offers high-quality video and audio that enhance your viewing experience. You can enjoy the movie in 720p resolution, which is clear and sharp. You can also enjoy the movie in Hindi language with English subtitles.
-- It saves you time and money that you would otherwise spend on buying or renting a physical disc or streaming online. You can download the movie file once and watch it anytime and anywhere you want. You can also share it with your friends or family members who want to watch it too.
-- It gives you more control and flexibility over your viewing options. You can pause, resume, rewind, fast-forward, or skip any part of the movie whenever you like.
-
What are the Drawbacks of Begum Jaan Movie Download in 720p Torrent
-Begum Jaan movie download in 720p torrent is not a perfect choice and it has some drawbacks that you should be aware of before using it. Some of the drawbacks are:
-
-- It is not legal and you have to pay for it to get the full version. The movie is protected by copyright and you have to buy or rent it from the official website or from a trusted source. Downloading Begum Jaan movie download in 720p torrent from unauthorized sources is illegal and might result in legal consequences.
-- It may not support some devices or media players that are not compatible with the torrent format. You may need to convert the movie file to a compatible format before playing it on your device or media player.
-- It may not work well with some antivirus or firewall software that may block or interfere with its functions. You may need to disable or whitelist Begum Jaan movie download in 720p torrent in your antivirus or firewall settings to avoid any conflicts or errors.
-- It may cause some quality loss or distortion due to the compression or conversion of the movie file. You may not get the same quality as the original movie or as the streaming online.
-
-What are the Alternatives to Begum Jaan Movie Download in 720p Torrent
-If you are looking for alternatives to Begum Jaan movie download in 720p torrent, you have many options to choose from. There are many other ways to watch the movie legally and safely. Some of the alternatives are:
-
-- Streaming online: You can stream the movie online from the official website or from a trusted source that offers it for a fee or for free. You can enjoy the movie without downloading any file or software. You just need a stable internet connection and a compatible device or browser.
-- Buying or renting a physical disc: You can buy or rent a DVD or Blu-ray disc of the movie from the official website or from a trusted source that sells it. You can enjoy the movie with high-quality video and audio and with subtitles and extras. You just need a compatible device or player and a disc drive.
-- Downloading from a legal source: You can download the movie file from the official website or from a trusted source that offers it for a fee or for free. You can enjoy the movie without any risk of malware or viruses. You just need enough storage space and a compatible device or player.
-
-Conclusion
-Begum Jaan movie download in 720p torrent is a historical drama that portrays the plight of a brothel owner and her courtesans during the partition of India and Pakistan. It is a remake of the Bengali film Rajkahini (2015), directed by Srijit Mukherji. The movie features Vidya Balan as Begum Jaan, along with Naseeruddin Shah, Gauhar Khan, Flora Saini, and others.
-The best way to get Begum Jaan movie download in 720p torrent is to download it from the official website or from a trusted source. This way, you can ensure that you get a safe and secure download that works well and does not cause any trouble. You should avoid downloading Begum Jaan movie download in 720p torrent from untrusted sources, as they might be fake or harmful.
-We hope this article has helped you understand more about Begum Jaan movie download in 720p torrent and how to use it properly. If you have any questions or feedback, feel free to leave a comment below.
-If you liked this article, please share it with your friends and family who might be interested in watching Begum Jaan movie download in 720p torrent. You can also subscribe to our newsletter to get more articles like this delivered to your inbox. And if you haven't watched the movie yet, what are you waiting for? Go ahead and download Begum Jaan movie download in 720p torrent from the official website or from a trusted source and enjoy this historical drama that depicts the partition of India and Pakistan.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Cd Lucky Dube Discografia Torrentl [VERIFIED].md b/spaces/inreVtussa/clothingai/Examples/Cd Lucky Dube Discografia Torrentl [VERIFIED].md
deleted file mode 100644
index e4c4421428673fc8bbc683ad3a2a6f0b154df5f0..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Cd Lucky Dube Discografia Torrentl [VERIFIED].md
+++ /dev/null
@@ -1,11 +0,0 @@
-Cd Lucky Dube Discografia Torrentl
Download File > https://tiurll.com/2uCil2
-
-cd lucky dube discografia l ucy jones torrent. Porn video lucy jones download and watch online free mp4. Download new games for pc, movies in good quality, all series of TV shows, porn, music. Watch porn lucy jones online for free.
-Lucy porn pics.
-Cindy and lucy.
-Cindy and lucy.Lucy jones pornstar free sex video.
-Com: the best free porn videos on internet, 100 free.
-Lucy jones pornstar bio & video (aka lucy jones, lucy love, lucy dube, lubricia jones, lucy jonsson). 8a78ff9644
-
-
-
diff --git a/spaces/it-at-m/image-anonymizer/README.md b/spaces/it-at-m/image-anonymizer/README.md
deleted file mode 100644
index 5b7312770812d1e18eb590bf6ff1737f7ad8cff9..0000000000000000000000000000000000000000
--- a/spaces/it-at-m/image-anonymizer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Image Anonymizer
-emoji: 🕶
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: agpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/tweet_map.py b/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/tweet_map.py
deleted file mode 100644
index f15c66af3670394794a31ade776b8c4ee6f132e3..0000000000000000000000000000000000000000
--- a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/graphs/tweet_map.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import holoviews as hv
-import hvplot.pandas # noqa
-from bokeh.models import HoverTool
-
-
-def get_tweet_points(in_data):
- """
- Returns a rasterized graph of tweet locations.
- """
-
- # Define a custom Hover tool for the points
- points_hover = HoverTool(
- tooltips=[("tweets", "@image")],
- )
-
- # Plot the tweet locations, apply rasterization and dynspread
- out_points = in_data.hvplot.points(
- x="x",
- y="y",
- xaxis=None,
- yaxis=None,
- rasterize=True,
- dynspread=True,
- cmap="viridis",
- cnorm="eq_hist",
- colorbar=False,
- tools=[points_hover],
- frame_width=600,
- frame_height=400,
- )
-
- return out_points
-
-
-def get_tweet_map(tweet_points):
- """
- This function combines the rasterized
- tweet locations with CartoDark tiles.
- """
-
- # Get the CartoDark tiles
- carto_tiles = hv.element.tiles.CartoDark()
-
- # Combine the tweet locations with CartoDark tiles
- tweet_map = carto_tiles * tweet_points
- return tweet_map
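A hedged usage sketch for the two helpers above. It assumes tweet locations are already projected to Web Mercator metres in columns named x and y (which is what the hvplot call expects); the coordinates below are random stand-ins rather than real dashboard data, and the import path simply mirrors this module's location in the dashboard package.

```python
import numpy as np
import pandas as pd
import holoviews as hv

# Assumed import path, following the deleted module's location in the dashboard package.
from graphs.tweet_map import get_tweet_points, get_tweet_map

hv.extension("bokeh")

# Fake Web Mercator coordinates standing in for real tweet locations.
rng = np.random.default_rng(0)
tweets = pd.DataFrame({
    "x": rng.normal(loc=1.39e6, scale=5e4, size=1_000),  # Web Mercator x, in metres
    "y": rng.normal(loc=5.15e6, scale=5e4, size=1_000),  # Web Mercator y, in metres
})

points = get_tweet_points(tweets)   # rasterized point layer with the custom hover tool
tweet_map = get_tweet_map(points)   # CartoDark tiles * rasterized points
hv.save(tweet_map, "tweet_map.html")
```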
diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/gen_images.py b/spaces/james-oldfield/PandA/networks/stylegan3/gen_images.py
deleted file mode 100644
index 66fdef44fd163d5207911a2b4fd7853aff73855c..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/stylegan3/gen_images.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Generate images using pretrained network pickle."""
-
-import os
-import re
-from typing import List, Optional, Tuple, Union
-
-import click
-import dnnlib
-import numpy as np
-import PIL.Image
-import torch
-
-import legacy
-
-#----------------------------------------------------------------------------
-
-def parse_range(s: Union[str, List]) -> List[int]:
- '''Parse a comma separated list of numbers or ranges and return a list of ints.
-
- Example: '1,2,5-10' returns [1, 2, 5, 6, 7, 8, 9, 10]
- '''
- if isinstance(s, list): return s
- ranges = []
- range_re = re.compile(r'^(\d+)-(\d+)$')
- for p in s.split(','):
- m = range_re.match(p)
- if m:
- ranges.extend(range(int(m.group(1)), int(m.group(2))+1))
- else:
- ranges.append(int(p))
- return ranges
-
-#----------------------------------------------------------------------------
-
-def parse_vec2(s: Union[str, Tuple[float, float]]) -> Tuple[float, float]:
- '''Parse a floating point 2-vector of syntax 'a,b'.
-
- Example:
- '0,1' returns (0,1)
- '''
- if isinstance(s, tuple): return s
- parts = s.split(',')
- if len(parts) == 2:
- return (float(parts[0]), float(parts[1]))
- raise ValueError(f'cannot parse 2-vector {s}')
-
-#----------------------------------------------------------------------------
-
-def make_transform(translate: Tuple[float,float], angle: float):
- m = np.eye(3)
- s = np.sin(angle/360.0*np.pi*2)
- c = np.cos(angle/360.0*np.pi*2)
- m[0][0] = c
- m[0][1] = s
- m[0][2] = translate[0]
- m[1][0] = -s
- m[1][1] = c
- m[1][2] = translate[1]
- return m
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.option('--network', 'network_pkl', help='Network pickle filename', required=True)
-@click.option('--seeds', type=parse_range, help='List of random seeds (e.g., \'0,1,4-6\')', required=True)
-@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=1, show_default=True)
-@click.option('--class', 'class_idx', type=int, help='Class label (unconditional if not specified)')
-@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True)
-@click.option('--translate', help='Translate XY-coordinate (e.g. \'0.3,1\')', type=parse_vec2, default='0,0', show_default=True, metavar='VEC2')
-@click.option('--rotate', help='Rotation angle in degrees', type=float, default=0, show_default=True, metavar='ANGLE')
-@click.option('--outdir', help='Where to save the output images', type=str, required=True, metavar='DIR')
-def generate_images(
- network_pkl: str,
- seeds: List[int],
- truncation_psi: float,
- noise_mode: str,
- outdir: str,
- translate: Tuple[float,float],
- rotate: float,
- class_idx: Optional[int]
-):
- """Generate images using pretrained network pickle.
-
- Examples:
-
- \b
- # Generate an image using pre-trained AFHQv2 model ("Ours" in Figure 1, left).
- python gen_images.py --outdir=out --trunc=1 --seeds=2 \\
- --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-afhqv2-512x512.pkl
-
- \b
- # Generate uncurated images with truncation using the MetFaces-U dataset
- python gen_images.py --outdir=out --trunc=0.7 --seeds=600-605 \\
- --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-metfacesu-1024x1024.pkl
- """
-
- print('Loading networks from "%s"...' % network_pkl)
- device = torch.device('cuda')
- with dnnlib.util.open_url(network_pkl) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore
-
- os.makedirs(outdir, exist_ok=True)
-
- # Labels.
- label = torch.zeros([1, G.c_dim], device=device)
- if G.c_dim != 0:
- if class_idx is None:
- raise click.ClickException('Must specify class label with --class when using a conditional network')
- label[:, class_idx] = 1
- else:
- if class_idx is not None:
- print ('warn: --class=lbl ignored when running on an unconditional network')
-
- # Generate images.
- for seed_idx, seed in enumerate(seeds):
- print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))
- z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)
-
- # Construct an inverse rotation/translation matrix and pass to the generator. The
- # generator expects this matrix as an inverse to avoid potentially failing numerical
- # operations in the network.
- if hasattr(G.synthesis, 'input'):
- m = make_transform(translate, rotate)
- m = np.linalg.inv(m)
- G.synthesis.input.transform.copy_(torch.from_numpy(m))
-
- img = G(z, label, truncation_psi=truncation_psi, noise_mode=noise_mode)
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'{outdir}/seed{seed:04d}.png')
-
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- generate_images() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
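The two pure helpers in this script can be exercised without a GPU or a network pickle. A quick sketch, assuming parse_range and make_transform from the file above are in scope: parse_range expands a seed specification into a list, and make_transform builds the 3x3 rotation/translation matrix whose inverse the script copies into the synthesis network's input transform.

```python
import numpy as np

# parse_range / make_transform as defined in gen_images.py above.
seeds = parse_range("0,2,5-8")
print(seeds)                            # [0, 2, 5, 6, 7, 8]

m = make_transform(translate=(0.3, 0.0), angle=45.0)
m_inv = np.linalg.inv(m)                # the generator consumes the inverse transform
print(np.round(m @ m_inv, 6))           # ~3x3 identity, a sanity check on the inversion
```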
diff --git a/spaces/jbilcke-hf/Panoremix/src/app/interface/progress/index.tsx b/spaces/jbilcke-hf/Panoremix/src/app/interface/progress/index.tsx
deleted file mode 100644
index ce24276a4b241d185fce5bd306a0c3e339835626..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/Panoremix/src/app/interface/progress/index.tsx
+++ /dev/null
@@ -1,56 +0,0 @@
-import { useEffect, useRef, useState } from "react"
-
-import { ProgressBar } from "./progress-bar"
-import { cn } from "@/lib/utils"
-
-export function Progress({
- isLoading,
- resetKey = "", // when this key change, this will re-spawn the progress bar
- className = "",
-}: {
- isLoading: boolean
- resetKey?: string
- className?: string
-}) {
- const timeoutRef = useRef<any>()
- const [progressPercent, setProcessPercent] = useState(0)
- const progressRef = useRef(0)
- const isLoadingRef = useRef(isLoading)
-
- const updateProgressBar = () => {
- const duration = 1000 // 1 sec
- const frequency = 200 // 200ms
- const nbUpdatesPerSec = duration / frequency // 5x per second
-
- // normally it takes 45, and we will try to go below,
- // but to be safe let's set the counter to 80 seconds
- const nbSeconds = 80 // 80 sec
- const amountInPercent = 100 / (nbUpdatesPerSec * nbSeconds) // 0.25
-
- progressRef.current = Math.min(100, progressRef.current + amountInPercent)
- setProcessPercent(progressRef.current)
- }
-
- useEffect(() => {
- clearInterval(timeoutRef.current)
- isLoadingRef.current = isLoading
- progressRef.current = 0
- setProcessPercent(0)
- if (isLoading) {
- timeoutRef.current = setInterval(updateProgressBar, 200)
- }
- }, [isLoading, resetKey])
-
- return (
-   <div className={cn(className)}>
-     <ProgressBar progressPercent={progressPercent} />
-   </div>
- )
-}
\ No newline at end of file
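For reference, the timing arithmetic inside updateProgressBar above, worked through with the component's constants. This is only a back-of-the-envelope check, not part of the component itself:

```python
tick_ms = 200                                   # setInterval period used by the component
ticks_per_second = 1000 / tick_ms               # 5 updates per second
nb_seconds = 80                                 # safety budget before the bar saturates
step_percent = 100 / (ticks_per_second * nb_seconds)

print(step_percent)                             # 0.25 -> each tick adds 0.25 %
print(step_percent * ticks_per_second * 45)     # ~56 % shown after the typical 45 s wait
```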
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/DES.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/DES.py
deleted file mode 100644
index 5cc286aee78a997631413f5981ad94638954c394..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/DES.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Cipher/DES.py : DES
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-"""
-Module's constants for the modes of operation supported with Single DES:
-
-:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>`
-:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>`
-:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>`
-:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>`
-:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>`
-:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>`
-:var MODE_EAX: :ref:`EAX Mode <eax_mode>`
-"""
-
-import sys
-
-from Crypto.Cipher import _create_cipher
-from Crypto.Util.py3compat import byte_string
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- c_size_t, c_uint8_ptr)
-
-_raw_des_lib = load_pycryptodome_raw_lib(
- "Crypto.Cipher._raw_des",
- """
- int DES_start_operation(const uint8_t key[],
- size_t key_len,
- void **pResult);
- int DES_encrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int DES_decrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int DES_stop_operation(void *state);
- """)
-
-
-def _create_base_cipher(dict_parameters):
- """This method instantiates and returns a handle to a low-level
- base cipher. It will absorb named parameters in the process."""
-
- try:
- key = dict_parameters.pop("key")
- except KeyError:
- raise TypeError("Missing 'key' parameter")
-
- if len(key) != key_size:
- raise ValueError("Incorrect DES key length (%d bytes)" % len(key))
-
- start_operation = _raw_des_lib.DES_start_operation
- stop_operation = _raw_des_lib.DES_stop_operation
-
- cipher = VoidPointer()
- result = start_operation(c_uint8_ptr(key),
- c_size_t(len(key)),
- cipher.address_of())
- if result:
- raise ValueError("Error %X while instantiating the DES cipher"
- % result)
- return SmartPointer(cipher.get(), stop_operation)
-
-
-def new(key, mode, *args, **kwargs):
- """Create a new DES cipher.
-
- :param key:
- The secret key to use in the symmetric cipher.
- It must be 8 byte long. The parity bits will be ignored.
- :type key: bytes/bytearray/memoryview
-
- :param mode:
- The chaining mode to use for encryption or decryption.
- :type mode: One of the supported ``MODE_*`` constants
-
- :Keyword Arguments:
- * **iv** (*byte string*) --
- (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``,
- and ``MODE_OPENPGP`` modes).
-
- The initialization vector to use for encryption or decryption.
-
- For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long.
-
- For ``MODE_OPENPGP`` mode only,
- it must be 8 bytes long for encryption
- and 10 bytes for decryption (in the latter case, it is
- actually the *encrypted* IV which was prefixed to the ciphertext).
-
- If not provided, a random byte string is generated (you must then
- read its value with the :attr:`iv` attribute).
-
- * **nonce** (*byte string*) --
- (Only applicable for ``MODE_EAX`` and ``MODE_CTR``).
-
- A value that must never be reused for any other encryption done
- with this key.
-
- For ``MODE_EAX`` there are no
- restrictions on its length (recommended: **16** bytes).
-
- For ``MODE_CTR``, its length must be in the range **[0..7]**.
-
- If not provided for ``MODE_EAX``, a random byte string is generated (you
- can read it back via the ``nonce`` attribute).
-
- * **segment_size** (*integer*) --
- (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext
- are segmented in. It must be a multiple of 8.
- If not specified, it will be assumed to be 8.
-
- * **mac_len** : (*integer*) --
- (Only ``MODE_EAX``)
- Length of the authentication tag, in bytes.
- It must be no longer than 8 (default).
-
- * **initial_value** : (*integer*) --
- (Only ``MODE_CTR``). The initial value for the counter within
- the counter block. By default it is **0**.
-
- :Return: a DES object, of the applicable mode.
- """
-
- return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs)
-
-MODE_ECB = 1
-MODE_CBC = 2
-MODE_CFB = 3
-MODE_OFB = 5
-MODE_CTR = 6
-MODE_OPENPGP = 7
-MODE_EAX = 9
-
-# Size of a data block (in bytes)
-block_size = 8
-# Size of a key (in bytes)
-key_size = 8
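A minimal usage sketch for this module, assuming the standard PyCryptodome package layout (Crypto.Cipher.DES and Crypto.Random). It follows the authenticated-encryption pattern the docstring describes for MODE_EAX; the key and message are placeholders:

```python
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

key = get_random_bytes(8)                  # single DES keys are 8 bytes (parity bits are ignored)

# Encrypt and authenticate.
cipher = DES.new(key, DES.MODE_EAX)
nonce = cipher.nonce                       # generated automatically; send it along with the ciphertext
ciphertext, tag = cipher.encrypt_and_digest(b"attack at dawn")

# Decrypt and verify with the same key and nonce.
cipher = DES.new(key, DES.MODE_EAX, nonce=nonce)
plaintext = cipher.decrypt_and_verify(ciphertext, tag)
assert plaintext == b"attack at dawn"
```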
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/__init__.py
deleted file mode 100644
index fe33fcef1e18d2a4b92287e434cf6b1257e4274f..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from __future__ import annotations
-
-from contourpy.util._build_config import build_config
-
-__all__ = ["build_config"]
diff --git a/spaces/johnowhitaker/whistlegen_v2/model.py b/spaces/johnowhitaker/whistlegen_v2/model.py
deleted file mode 100644
index f4cc6612a00fc78fa937f6c5b633f71bbd00d4ef..0000000000000000000000000000000000000000
--- a/spaces/johnowhitaker/whistlegen_v2/model.py
+++ /dev/null
@@ -1,199 +0,0 @@
-"""
-GPT model:
-- the initial stem consists of a combination of token encoding and a positional encoding
-- the meat of it is a uniform sequence of Transformer blocks
- - each Transformer is a sequential combination of a 1-hidden-layer MLP block and a self-attention block
- - all blocks feed into a central residual pathway similar to resnets
-- the final decoder is a linear projection into a vanilla Softmax classifier
-"""
-
-import math
-import logging
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-logger = logging.getLogger(__name__)
-
-class GPTConfig:
- """ base GPT config, params common to all GPT versions """
- embd_pdrop = 0.1
- resid_pdrop = 0.1
- attn_pdrop = 0.1
-
- def __init__(self, vocab_size, block_size, **kwargs):
- self.vocab_size = vocab_size
- self.block_size = block_size
- for k,v in kwargs.items():
- setattr(self, k, v)
-
-class GPT1Config(GPTConfig):
- """ GPT-1 like network roughly 125M params """
- n_layer = 12
- n_head = 12
- n_embd = 768
-
-class CausalSelfAttention(nn.Module):
- """
- A vanilla multi-head masked self-attention layer with a projection at the end.
- It is possible to use torch.nn.MultiheadAttention here but I am including an
- explicit implementation here to show that there is nothing too scary here.
- """
-
- def __init__(self, config):
- super().__init__()
- assert config.n_embd % config.n_head == 0
- # key, query, value projections for all heads
- self.key = nn.Linear(config.n_embd, config.n_embd)
- self.query = nn.Linear(config.n_embd, config.n_embd)
- self.value = nn.Linear(config.n_embd, config.n_embd)
- # regularization
- self.attn_drop = nn.Dropout(config.attn_pdrop)
- self.resid_drop = nn.Dropout(config.resid_pdrop)
- # output projection
- self.proj = nn.Linear(config.n_embd, config.n_embd)
- # causal mask to ensure that attention is only applied to the left in the input sequence
- self.register_buffer("mask", torch.tril(torch.ones(config.block_size, config.block_size))
- .view(1, 1, config.block_size, config.block_size))
- self.n_head = config.n_head
-
- def forward(self, x):
- B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
-
- # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
- att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
- att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
- y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- return y
-
-class Block(nn.Module):
- """ an unassuming Transformer block """
-
- def __init__(self, config):
- super().__init__()
- self.ln1 = nn.LayerNorm(config.n_embd)
- self.ln2 = nn.LayerNorm(config.n_embd)
- self.attn = CausalSelfAttention(config)
- self.mlp = nn.Sequential(
- nn.Linear(config.n_embd, 4 * config.n_embd),
- nn.GELU(),
- nn.Linear(4 * config.n_embd, config.n_embd),
- nn.Dropout(config.resid_pdrop),
- )
-
- def forward(self, x):
- x = x + self.attn(self.ln1(x))
- x = x + self.mlp(self.ln2(x))
- return x
-
-class GPT(nn.Module):
- """ the full GPT language model, with a context size of block_size """
-
- def __init__(self, config):
- super().__init__()
-
- # input embedding stem
- self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd)
- self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
- self.drop = nn.Dropout(config.embd_pdrop)
- # transformer
- self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
- # decoder head
- self.ln_f = nn.LayerNorm(config.n_embd)
- self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
-
- self.block_size = config.block_size
- self.apply(self._init_weights)
-
- logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- torch.nn.init.zeros_(module.bias)
- elif isinstance(module, nn.LayerNorm):
- torch.nn.init.zeros_(module.bias)
- torch.nn.init.ones_(module.weight)
- elif isinstance(module, GPT):
- torch.nn.init.normal_(module.pos_emb, mean=0.0, std=0.02)
-
- def configure_optimizers(self, train_config):
- """
- This long function is unfortunately doing something very simple and is being very defensive:
- We are separating out all parameters of the model into two buckets: those that will experience
- weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
- We are then returning the PyTorch optimizer object.
- """
-
- # separate out all parameters to those that will and won't experience regularizing weight decay
- decay = set()
- no_decay = set()
- whitelist_weight_modules = (torch.nn.Linear, )
- blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
- for mn, m in self.named_modules():
- for pn, p in m.named_parameters():
- fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
-
- if pn.endswith('bias'):
- # all biases will not be decayed
- no_decay.add(fpn)
- elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
- # weights of whitelist modules will be weight decayed
- decay.add(fpn)
- elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
- # weights of blacklist modules will NOT be weight decayed
- no_decay.add(fpn)
-
- # special case the position embedding parameter in the root GPT module as not decayed
- no_decay.add('pos_emb')
-
- # validate that we considered every parameter
- param_dict = {pn: p for pn, p in self.named_parameters()}
- inter_params = decay & no_decay
- union_params = decay | no_decay
- assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
- assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
- % (str(param_dict.keys() - union_params), )
-
- # create the pytorch optimizer object
- optim_groups = [
- {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": train_config.weight_decay},
- {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
- ]
- optimizer = torch.optim.AdamW(optim_groups, lr=train_config.learning_rate, betas=train_config.betas)
- return optimizer
-
- def forward(self, idx, targets=None):
- b, t = idx.size()
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
-
- # forward the GPT model
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
- x = self.drop(token_embeddings + position_embeddings)
- x = self.blocks(x)
- x = self.ln_f(x)
- logits = self.head(x)
-
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss
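For orientation, a minimal sketch showing how the classes above fit together. The sizes are hypothetical, and the `SimpleNamespace` merely stands in for whatever training-config object a caller would normally hand to `configure_optimizers`.

```python
# Minimal sketch with hypothetical sizes; GPTConfig and GPT are the classes defined above.
import torch
from types import SimpleNamespace

config = GPTConfig(vocab_size=100, block_size=32, n_layer=2, n_head=4, n_embd=64)
model = GPT(config)

idx = torch.randint(0, 100, (2, 32))        # (batch, sequence) of token ids
targets = torch.randint(0, 100, (2, 32))
logits, loss = model(idx, targets)          # logits: (2, 32, 100)

# Hypothetical training config; configure_optimizers only reads these three fields.
train_config = SimpleNamespace(weight_decay=0.1, learning_rate=3e-4, betas=(0.9, 0.95))
optimizer = model.configure_optimizers(train_config)

loss.backward()
optimizer.step()
```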
diff --git a/spaces/jrichez/digit_recognizer/README.md b/spaces/jrichez/digit_recognizer/README.md
deleted file mode 100644
index 92e24e184c500eec41af9f234ac59e96a808ec79..0000000000000000000000000000000000000000
--- a/spaces/jrichez/digit_recognizer/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Digit_recognizer
-emoji: 🐨
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/jshu/baeroml-hackathon2021/README.md b/spaces/jshu/baeroml-hackathon2021/README.md
deleted file mode 100644
index 68d5a659134a4c435ad78f63ce5717e0351b8782..0000000000000000000000000000000000000000
--- a/spaces/jshu/baeroml-hackathon2021/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Baeroml Hackathon2021
-emoji: 🚀
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/jtpotato/firetrace/README.md b/spaces/jtpotato/firetrace/README.md
deleted file mode 100644
index fa88dd0185f73508bcbad4ce6e320eff6893ef40..0000000000000000000000000000000000000000
--- a/spaces/jtpotato/firetrace/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "Firetrace"
-emoji: "🔥"
-colorFrom: "yellow"
-colorTo: "gray"
-sdk: "gradio"
-python_version: "3.11"
-sdk_version: "3.40.1"
-app_file: "app.py"
-pinned: true
-fullWidth: true
----
-
-
-# About
-Firetrace is an AI model and web interface that predicts the severity of bushfire events at a nationwide scale using projected weather data. It uses a deep neural network trained on data from BOM weather observatories around the country, NASA’s MODIS satellite, and the Southern Oscillation Index, together with time features that capture seasonality and climate change trends.
-
-Read the [Report](./Firetrace%20-%20Paper.pdf)
-
-# Roadmap
-- [x] Gradio UI
-- [x] Web API
-- [x] Take Southern Oscillation Index into consideration
-- [ ] Take CO2 levels into consideration
-- [ ] Take last year's rainfall into consideration (affects vegetation)
\ No newline at end of file
diff --git a/spaces/juuxn/SimpleRVC/infer_pack/transforms.py b/spaces/juuxn/SimpleRVC/infer_pack/transforms.py
deleted file mode 100644
index c78e655e44b61bb3eaabeb736fdf4a46a165071c..0000000000000000000000000000000000000000
--- a/spaces/juuxn/SimpleRVC/infer_pack/transforms.py
+++ /dev/null
@@ -1,208 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
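For reference, a minimal sketch of calling the transform above with randomly initialized spline parameters. The shapes follow the code: widths and heights carry `num_bins` entries per element, while only `num_bins - 1` derivatives are passed because the `"linear"`-tails path pads them internally.

```python
# Minimal sketch with random spline parameters; shapes are inferred from the code above.
import torch

B, T, num_bins = 2, 6, 8
inputs = torch.empty(B, T).uniform_(-2.0, 2.0)   # some values fall outside the tail bound
uw = torch.randn(B, T, num_bins)                 # unnormalized widths
uh = torch.randn(B, T, num_bins)                 # unnormalized heights
ud = torch.randn(B, T, num_bins - 1)             # padded internally for "linear" tails

y, logabsdet = piecewise_rational_quadratic_transform(
    inputs, uw, uh, ud, inverse=False, tails="linear", tail_bound=1.0)
x, inv_logabsdet = piecewise_rational_quadratic_transform(
    y, uw, uh, ud, inverse=True, tails="linear", tail_bound=1.0)
# x should recover inputs and inv_logabsdet should be close to -logabsdet,
# since the spline is an invertible map (identity outside the tail bound).
```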
diff --git a/spaces/jyseo/3DFuse/run_3DFuse.py b/spaces/jyseo/3DFuse/run_3DFuse.py
deleted file mode 100644
index 3dfabd4671152381d0dfeb32b4c19f4245a98df7..0000000000000000000000000000000000000000
--- a/spaces/jyseo/3DFuse/run_3DFuse.py
+++ /dev/null
@@ -1,321 +0,0 @@
-import os
-import numpy as np
-import torch
-from einops import rearrange
-from imageio import imwrite
-from pydantic import validator
-import imageio
-import tempfile
-import gradio as gr
-
-from PIL import Image
-
-from my.utils import (
- tqdm, EventStorage, HeartBeat, EarlyLoopBreak,
- get_event_storage, get_heartbeat, read_stats
-)
-from my.config import BaseConf, dispatch, optional_load_config
-from my.utils.seed import seed_everything
-
-from adapt import ScoreAdapter
-from run_img_sampling import SD
-from misc import torch_samps_to_imgs
-from pose import PoseConfig
-
-from run_nerf import VoxConfig
-from voxnerf.utils import every
-from voxnerf.render import (
- as_torch_tsrs, rays_from_img, ray_box_intersect, render_ray_bundle
-)
-from voxnerf.vis import stitch_vis, bad_vis as nerf_vis
-
-from pytorch3d.renderer import PointsRasterizationSettings
-
-from semantic_coding import semantic_coding, semantic_karlo, semantic_sd
-from pc_project import point_e, render_depth_from_cloud
-device_glb = torch.device("cuda")
-
-def tsr_stats(tsr):
- return {
- "mean": tsr.mean().item(),
- "std": tsr.std().item(),
- "max": tsr.max().item(),
- }
-
-class SJC_3DFuse(BaseConf):
- family: str = "sd"
- sd: SD = SD(
- variant="v1",
- prompt="a comfortable bed",
- scale=100.0,
- dir="./results",
- alpha=0.3
- )
- lr: float = 0.05
- n_steps: int = 10000
- vox: VoxConfig = VoxConfig(
- model_type="V_SD", grid_size=100, density_shift=-1.0, c=3,
- blend_bg_texture=False , bg_texture_hw=4,
- bbox_len=1.0
- )
- pose: PoseConfig = PoseConfig(rend_hw=64, FoV=60.0, R=1.5)
-
- emptiness_scale: int = 10
- emptiness_weight: int = 1e4
- emptiness_step: float = 0.5
- emptiness_multiplier: float = 20.0
-
- depth_weight: int = 0
-
- var_red: bool = True
- exp_dir: str = "./results"
- ti_step: int = 800
- pt_step: int = 800
- initial: str = ""
- random_seed: int = 0
- semantic_model: str = "Karlo"
- bg_preprocess: bool = True
- num_initial_image: int = 4
- @validator("vox")
- def check_vox(cls, vox_cfg, values):
- family = values['family']
- if family == "sd":
- vox_cfg.c = 4
- return vox_cfg
-
- def run(self):
- raise Exception("This version is for huggingface demo, which doesn't support CLI. Please visit https://github.com/KU-CVLAB/3DFuse")
-
- def run_gradio(self, points, images):
- cfgs = self.dict()
- initial = cfgs.pop('initial')
- exp_dir=os.path.join(cfgs.pop('exp_dir'),initial)
-
- # Optimization and pivotal tuning for LoRA
- yield gr.update(value=None), "Tuning for the LoRA layer is starting now. It will take approximately ~10 mins.", gr.update(value=None)
- state=semantic_coding(images, cfgs,self.sd,initial)
- self.sd.dir=state
-
- # Load SD with Consistency Injection Module
- family = cfgs.pop("family")
- model = getattr(self, family).make()
- print(model.prompt)
- cfgs.pop("vox")
- vox = self.vox.make()
-
- cfgs.pop("pose")
- poser = self.pose.make()
-
- # Score distillation
- yield from fuse_3d(**cfgs, poser=poser,model=model,vox=vox,exp_dir=exp_dir, points=points, is_gradio=True)
-
-
-def fuse_3d(
- poser, vox, model: ScoreAdapter,
- lr, n_steps, emptiness_scale, emptiness_weight, emptiness_step, emptiness_multiplier,
- depth_weight, var_red, exp_dir, points, is_gradio, **kwargs
-):
- del kwargs
-
- if is_gradio:
- yield gr.update(visible=True), "LoRA layers tuning has just finished. \nScore distillation has started.", gr.update(visible=True)
- assert model.samps_centered()
- _, target_H, target_W = model.data_shape()
- bs = 1
- aabb = vox.aabb.T.cpu().numpy()
- vox = vox.to(device_glb)
- opt = torch.optim.Adamax(vox.opt_params(), lr=lr)
-
- H, W = poser.H, poser.W
- Ks_, poses_, prompt_prefixes_, angles_list = poser.sample_train(n_steps,device_glb)
-
- ts = model.us[30:-10]
-
- fuse = EarlyLoopBreak(5)
-
- raster_settings = PointsRasterizationSettings(
- image_size= 800,
- radius = 0.02,
- points_per_pixel = 10
- )
-
- ts = model.us[30:-10]
- calibration_value=0.0
-
-
-
- with tqdm(total=n_steps) as pbar:
- # HeartBeat(pbar) as hbeat, \
- # EventStorage(output_dir=os.path.join(exp_dir,'3d')) as metric:
-
- for i in range(len(poses_)):
- if fuse.on_break():
- break
-
- depth_map = render_depth_from_cloud(points, angles_list[i], raster_settings, device_glb,calibration_value)
-
- y, depth, ws = render_one_view(vox, aabb, H, W, Ks_[i], poses_[i], return_w=True)
-
-
- p = f"{prompt_prefixes_[i]} {model.prompt}"
- score_conds = model.prompts_emb([p])
-
- score_conds['c']=score_conds['c'].repeat(bs,1,1)
- score_conds['uc']=score_conds['uc'].repeat(bs,1,1)
-
- opt.zero_grad()
-
- with torch.no_grad():
- chosen_σs = np.random.choice(ts, bs, replace=False)
- chosen_σs = chosen_σs.reshape(-1, 1, 1, 1)
- chosen_σs = torch.as_tensor(chosen_σs, device=model.device, dtype=torch.float32)
-
-
- noise = torch.randn(bs, *y.shape[1:], device=model.device)
-
- zs = y + chosen_σs * noise
-
- Ds = model.denoise(zs, chosen_σs,depth_map.unsqueeze(dim=0),**score_conds)
-
- if var_red:
- grad = (Ds - y) / chosen_σs
- else:
- grad = (Ds - zs) / chosen_σs
-
- grad = grad.mean(0, keepdim=True)
-
- y.backward(-grad, retain_graph=True)
-
- if depth_weight > 0:
- center_depth = depth[7:-7, 7:-7]
- border_depth_mean = (depth.sum() - center_depth.sum()) / (64*64-50*50)
- center_depth_mean = center_depth.mean()
- depth_diff = center_depth_mean - border_depth_mean
- depth_loss = - torch.log(depth_diff + 1e-12)
- depth_loss = depth_weight * depth_loss
- depth_loss.backward(retain_graph=True)
-
- emptiness_loss = torch.log(1 + emptiness_scale * ws).mean()
- emptiness_loss = emptiness_weight * emptiness_loss
- if emptiness_step * n_steps <= i:
- emptiness_loss *= emptiness_multiplier
- emptiness_loss.backward()
-
- opt.step()
-
- # metric.put_scalars(**tsr_stats(y))
-
- if every(pbar, percent=2):
- with torch.no_grad():
- y = model.decode(y)
- # vis_routine(metric, y, depth,p,depth_map[0])
-
- if is_gradio :
- yield torch_samps_to_imgs(y)[0], f"Progress: {pbar.n}/{pbar.total} \nAfter the generation is complete, the video results will be displayed below.", gr.update(value=None)
-
-
-
-
- # metric.step()
- pbar.update()
-
- pbar.set_description(p)
- # hbeat.beat()
-
- # metric.put_artifact(
- # "ckpt", ".pt","", lambda fn: torch.save(vox.state_dict(), fn)
- # )
-
- # with EventStorage("result"):
- out=evaluate(model, vox, poser)
-
- if is_gradio:
- yield gr.update(visible=False), f"Generation complete. Please check the video below.", gr.update(value=out)
- else :
- yield None
-
- # metric.step()
-
- # hbeat.done()
-
-@torch.no_grad()
-def evaluate(score_model, vox, poser):
- H, W = poser.H, poser.W
- vox.eval()
- K, poses = poser.sample_test(100)
-
- fuse = EarlyLoopBreak(5)
- # metric = get_event_storage()
- # hbeat = get_heartbeat()
-
- aabb = vox.aabb.T.cpu().numpy()
- vox = vox.to(device_glb)
-
- num_imgs = len(poses)
- frames=[]
- for i in (pbar := tqdm(range(num_imgs))):
- if fuse.on_break():
- break
-
- pose = poses[i]
- y, depth = render_one_view(vox, aabb, H, W, K, pose)
- y = score_model.decode(y)
- # vis_routine(metric, y, depth,"",None)
- y=torch_samps_to_imgs(y)[0]
- frames.append(y)
- # metric.step()
- # hbeat.beat()
-
- # metric.flush_history()
-
- # metric.put_artifact(
- # "video", ".mp4","",
- # lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "img")[1])
- # )
- out_file = tempfile.NamedTemporaryFile(suffix='.mp4', delete=False)
- writer = imageio.get_writer(out_file.name, fps=10)
- for img in frames:
- writer.append_data(img)
- writer.close()
- # metric.step()
- return out_file.name
-
-def render_one_view(vox, aabb, H, W, K, pose, return_w=False):
- N = H * W
- ro, rd = rays_from_img(H, W, K, pose)
-
- ro, rd, t_min, t_max = scene_box_filter_(ro, rd, aabb)
-
- assert len(ro) == N, "for now all pixels must be in"
- ro, rd, t_min, t_max = as_torch_tsrs(vox.device, ro, rd, t_min, t_max)
- rgbs, depth, weights = render_ray_bundle(vox, ro, rd, t_min, t_max)
-
- rgbs = rearrange(rgbs, "(h w) c -> 1 c h w", h=H, w=W)
- depth = rearrange(depth, "(h w) 1 -> h w", h=H, w=W)
- if return_w:
- return rgbs, depth, weights
- else:
- return rgbs, depth
-
-
-def scene_box_filter_(ro, rd, aabb):
- _, t_min, t_max = ray_box_intersect(ro, rd, aabb)
- # do not render what's behind the ray origin
- t_min, t_max = np.maximum(t_min, 0), np.maximum(t_max, 0)
- return ro, rd, t_min, t_max
-
-
-def vis_routine(metric, y, depth,prompt,depth_map):
- pane = nerf_vis(y, depth, final_H=256)
- im = torch_samps_to_imgs(y)[0]
-
- depth = depth.cpu().numpy()
- metric.put_artifact("view", ".png","",lambda fn: imwrite(fn, pane))
- metric.put_artifact("img", ".png",prompt, lambda fn: imwrite(fn, im))
-    if depth_map is not None:
- metric.put_artifact("PC_depth", ".png",prompt, lambda fn: imwrite(fn, depth_map.cpu().squeeze()))
- metric.put_artifact("depth", ".npy","",lambda fn: np.save(fn, depth))
-
-
-if __name__ == "__main__":
- dispatch(SJC_3DFuse)
\ No newline at end of file
diff --git a/spaces/kepl/g/gen.py b/spaces/kepl/g/gen.py
deleted file mode 100644
index be7bd5f41a5f4ce812e23dc54ad856a9fde78371..0000000000000000000000000000000000000000
--- a/spaces/kepl/g/gen.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import gradio as gr
-
-title = "🤖 KeplBot"
-
-description = \
-"""
-
-
-
-
-
- Add bot to Discord server
-
-
-✨️ We will try to realize any of your fantasies!
-
-🎨 We generate images on powerful servers using the latest neural networks and models!
-
-🆓️ We generate images ALWAYS FREE for EVERYONE!
-
-▶️ We do not compete with such bots as Midjourney and BlueWillow, these bots use more advanced technologies, we are just a non-commercial project of enthusiasts!
-
-"""
-
-article = \
-"""
-©️ Kepl Tech.
-"""
-
-gr.Interface.load("models/runwayml/stable-diffusion-v1-5", title=title, description=description, article=article, theme = "Soft").launch()
\ No newline at end of file
diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Bing.py b/spaces/kepl/gpt/g4f/Provider/Providers/Bing.py
deleted file mode 100644
index 87e04ac82293c7e22068af431ac407bdee435a1b..0000000000000000000000000000000000000000
--- a/spaces/kepl/gpt/g4f/Provider/Providers/Bing.py
+++ /dev/null
@@ -1,349 +0,0 @@
-import os
-import json
-import random
-import json
-import os
-import uuid
-import ssl
-import certifi
-import aiohttp
-import asyncio
-
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://bing.com/chat'
-model = ['gpt-4']
-supports_stream = True
-needs_auth = False
-
-ssl_context = ssl.create_default_context()
-ssl_context.load_verify_locations(certifi.where())
-
-
-class optionsSets:
- optionSet: dict = {
- 'tone': str,
- 'optionsSets': list
- }
-
- jailbreak: dict = {
- "optionsSets": [
- 'saharasugg',
- 'enablenewsfc',
- 'clgalileo',
- 'gencontentv3',
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
-            "h3precise",
- # "harmonyv3",
- "dtappid",
- "cricinfo",
- "cricinfov2",
- "dv3sugg",
- "nojbfedge"
- ]
- }
-
-
-class Defaults:
- delimiter = '\x1e'
- ip_address = f'13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}'
-
- allowedMessageTypes = [
- 'Chat',
- 'Disengaged',
- 'AdsQuery',
- 'SemanticSerp',
- 'GenerateContentQuery',
- 'SearchQuery',
- 'ActionRequest',
- 'Context',
- 'Progress',
- 'AdsQuery',
- 'SemanticSerp'
- ]
-
- sliceIds = [
-
- # "222dtappid",
- # "225cricinfo",
- # "224locals0"
-
- 'winmuid3tf',
- 'osbsdusgreccf',
- 'ttstmout',
- 'crchatrev',
- 'winlongmsgtf',
- 'ctrlworkpay',
- 'norespwtf',
- 'tempcacheread',
- 'temptacache',
- '505scss0',
- '508jbcars0',
- '515enbotdets0',
- '5082tsports',
- '515vaoprvs',
- '424dagslnv1s0',
- 'kcimgattcf',
- '427startpms0'
- ]
-
- location = {
- 'locale': 'en-US',
- 'market': 'en-US',
- 'region': 'US',
- 'locationHints': [
- {
- 'country': 'United States',
- 'state': 'California',
- 'city': 'Los Angeles',
- 'timezoneoffset': 8,
- 'countryConfidence': 8,
- 'Center': {
- 'Latitude': 34.0536909,
- 'Longitude': -118.242766
- },
- 'RegionType': 2,
- 'SourceType': 1
- }
- ],
- }
-
-
-def _format(msg: dict) -> str:
- return json.dumps(msg, ensure_ascii=False) + Defaults.delimiter
-
-
-async def create_conversation():
- for _ in range(5):
- create = requests.get('https://www.bing.com/turing/conversation/create',
- headers={
- 'authority': 'edgeservices.bing.com',
- 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
- 'accept-language': 'en-US,en;q=0.9',
- 'cache-control': 'max-age=0',
- 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
- 'sec-ch-ua-arch': '"x86"',
- 'sec-ch-ua-bitness': '"64"',
- 'sec-ch-ua-full-version': '"110.0.1587.69"',
- 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-model': '""',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-ch-ua-platform-version': '"15.0.0"',
- 'sec-fetch-dest': 'document',
- 'sec-fetch-mode': 'navigate',
- 'sec-fetch-site': 'none',
- 'sec-fetch-user': '?1',
- 'upgrade-insecure-requests': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69',
- 'x-edge-shopping-flag': '1',
- 'x-forwarded-for': Defaults.ip_address
- })
-
- conversationId = create.json().get('conversationId')
- clientId = create.json().get('clientId')
- conversationSignature = create.json().get('conversationSignature')
-
- if not conversationId or not clientId or not conversationSignature and _ == 4:
- raise Exception('Failed to create conversation.')
-
- return conversationId, clientId, conversationSignature
-
-
-async def stream_generate(prompt: str, mode: optionsSets.optionSet = optionsSets.jailbreak, context: bool or str = False):
- timeout = aiohttp.ClientTimeout(total=900)
- session = aiohttp.ClientSession(timeout=timeout)
-
- conversationId, clientId, conversationSignature = await create_conversation()
-
- wss = await session.ws_connect('wss://sydney.bing.com/sydney/ChatHub', ssl=ssl_context, autoping=False,
- headers={
- 'accept': 'application/json',
- 'accept-language': 'en-US,en;q=0.9',
- 'content-type': 'application/json',
- 'sec-ch-ua': '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
- 'sec-ch-ua-arch': '"x86"',
- 'sec-ch-ua-bitness': '"64"',
- 'sec-ch-ua-full-version': '"109.0.1518.78"',
- 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-model': '',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-ch-ua-platform-version': '"15.0.0"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'x-ms-client-request-id': str(uuid.uuid4()),
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- 'Referer': 'https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx',
- 'Referrer-Policy': 'origin-when-cross-origin',
- 'x-forwarded-for': Defaults.ip_address
- })
-
- await wss.send_str(_format({'protocol': 'json', 'version': 1}))
- await wss.receive(timeout=900)
-
- struct = {
- 'arguments': [
- {
- **mode,
- 'source': 'cib',
- 'allowedMessageTypes': Defaults.allowedMessageTypes,
- 'sliceIds': Defaults.sliceIds,
- 'traceId': os.urandom(16).hex(),
- 'isStartOfSession': True,
- 'message': Defaults.location | {
- 'author': 'user',
- 'inputMethod': 'Keyboard',
- 'text': prompt,
- 'messageType': 'Chat'
- },
- 'conversationSignature': conversationSignature,
- 'participant': {
- 'id': clientId
- },
- 'conversationId': conversationId
- }
- ],
- 'invocationId': '0',
- 'target': 'chat',
- 'type': 4
- }
-
- if context:
- struct['arguments'][0]['previousMessages'] = [
- {
- "author": "user",
- "description": context,
- "contextType": "WebPage",
- "messageType": "Context",
- "messageId": "discover-web--page-ping-mriduna-----"
- }
- ]
-
- await wss.send_str(_format(struct))
-
- final = False
- draw = False
- resp_txt = ''
- result_text = ''
- resp_txt_no_link = ''
- cache_text = ''
-
- while not final:
- msg = await wss.receive(timeout=900)
- objects = msg.data.split(Defaults.delimiter)
-
- for obj in objects:
- if obj is None or not obj:
- continue
-
- response = json.loads(obj)
- if response.get('type') == 1 and response['arguments'][0].get('messages',):
- if not draw:
- if (response['arguments'][0]['messages'][0]['contentOrigin'] != 'Apology') and not draw:
- resp_txt = result_text + \
- response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0].get(
- 'text', '')
- resp_txt_no_link = result_text + \
- response['arguments'][0]['messages'][0].get(
- 'text', '')
-
- if response['arguments'][0]['messages'][0].get('messageType',):
- resp_txt = (
- resp_txt
- + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text')
- + '\n'
- )
- result_text = (
- result_text
- + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text')
- + '\n'
- )
-
- if cache_text.endswith(' '):
- final = True
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
- yield (resp_txt.replace(cache_text, ''))
- cache_text = resp_txt
-
- elif response.get('type') == 2:
- if response['item']['result'].get('error'):
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
- raise Exception(
- f"{response['item']['result']['value']}: {response['item']['result']['message']}")
-
- if draw:
- cache = response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text']
- response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] = (
- cache + resp_txt)
-
- if (response['item']['messages'][-1]['contentOrigin'] == 'Apology' and resp_txt):
- response['item']['messages'][-1]['text'] = resp_txt_no_link
- response['item']['messages'][-1]['adaptiveCards'][0]['body'][0]['text'] = resp_txt
-
- # print('Preserved the message from being deleted', file=sys.stderr)
-
- final = True
- if wss and not wss.closed:
- await wss.close()
- if session and not session.closed:
- await session.close()
-
-
-def run(generator):
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
- gen = generator.__aiter__()
-
- while True:
- try:
- next_val = loop.run_until_complete(gen.__anext__())
- yield next_val
-
- except StopAsyncIteration:
- break
- #print('Done')
-
-def convert(messages):
- context = ""
-
- for message in messages:
- context += "[%s](#message)\n%s\n\n" % (message['role'],
- message['content'])
-
- return context
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- if len(messages) < 2:
- prompt = messages[0]['content']
- context = False
-
- else:
- prompt = messages[-1]['content']
- context = convert(messages[:-1])
-
- response = run(stream_generate(prompt, optionsSets.jailbreak, context))
- for token in response:
- yield (token)
-
- #print('Done')
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/keras-dreambooth/piranesi-monument-art/README.md b/spaces/keras-dreambooth/piranesi-monument-art/README.md
deleted file mode 100644
index f9e19925c11774db16c270a76c54d0c0e12dff8d..0000000000000000000000000000000000000000
--- a/spaces/keras-dreambooth/piranesi-monument-art/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Dreambooth Piranesi Space
-emoji: 🏯
-colorFrom: black
-colorTo: white
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-tags:
-- keras-dreambooth
-- consentful
----
-
-Demo space using the [dreambooth-piranesi](https://huggingface.co/keras-dreambooth/dreambooth-piranesi) model -
-Dreambooth fine-tuned on 18th century etching art by [Giambattista Piranesi](https://en.wikipedia.org/wiki/Giovanni_Battista_Piranesi).
\ No newline at end of file
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/pay_less_attention_paper/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/pay_less_attention_paper/README.md
deleted file mode 100644
index 5adab11f4dc3461f9e7126ac391b04e703616e6b..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/pay_less_attention_paper/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
-# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)
-
-This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430).
-
-## Citation:
-```bibtex
-@inproceedings{wu2018pay,
- title = {Pay Less Attention with Lightweight and Dynamic Convolutions},
- author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli},
- booktitle = {International Conference on Learning Representations},
- year = {2019},
- url = {https://arxiv.org/abs/1901.10430},
-}
-```
-
-## Translation
-
-### Pre-trained models
-For some datasets we release models without GLUs which are faster at inference.
-
-Model | Description | Dataset | Download
----|---|---|---
-`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2)
-`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz) <br> newstest2017: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz) <br> newstest2017: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-
-### Memory-Efficient CUDA Kernels
-
-Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent.
-
-To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used.
-
-```sh
-# to install lightconv
-cd fairseq/modules/lightconv_layer
-python cuda_function_gen.py
-python setup.py install
-
-# to install dynamicconv
-cd fairseq/modules/dynamicconv_layer
-python cuda_function_gen.py
-python setup.py install
-```
-
-### Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install sacremoses subword_nmt
-```
-
-Interactive translation via PyTorch Hub:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq') # [..., 'lightconv.glu.wmt17.zh-en', ... ]
-
-# Load a LightConv model trained on WMT'17 Zh-En
-zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt')
-
-# The underlying model is available under the *models* attribute
-assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel)
-
-# Translate a sentence
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-Loading custom models:
-```python
-from fairseq.models.lightconv import LightConvModel
-en2fr = LightConvModel.from_pretrained(
- '/path/to/checkpoints',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='data-bin/wmt14_en_fr',
- bpe='subword_nmt',
- bpe_codes='data-bin/wmt14_en_fr/en.code'
-)
-en2fr.translate('Hello world!')
-# 'Bonjour le monde'
-```
-
-### Preprocessing the training datasets
-
-Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data.
-
-### Training and evaluation options:
-To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`.
-For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv.
-For best BLEU results, lenpen may need to be manually tuned.
-
-To use the CUDA kernels, first install the PyTorch modules using the commands
-above. Once the CUDA modules are installed, they will automatically be used
-instead of the PyTorch modules.
-
-### IWSLT14 De-En
-Training and evaluating DynamicConv (without GLU) on a GPU:
-```sh
-# Training
-SAVE="save/dynamic_conv_iwslt"
-mkdir -p $SAVE
-CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \
- --clip-norm 0 --optimizer adam --lr 0.0005 \
- --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \
- --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --lr-scheduler inverse_sqrt \
- --ddp-backend=legacy_ddp \
- --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \
- --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \
- -a lightconv_iwslt_de_en --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 0 --decoder-glu 0
-python scripts/average_checkpoints.py --inputs $SAVE \
- --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt"
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet
-```
-
-### WMT16 En-De
-Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt16en2de"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
- --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \
- --t-mult 1 --lr-period-updates 20000 \
- --arch lightconv_wmt_en_de_big --save-dir $SAVE \
- --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt
-bash scripts/compound_split_bleu.sh wmt16_gen.txt
-```
-
-### WMT14 En-Fr
-Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs:
-```sh
-# Training
-SAVE="save/dynamic_conv_wmt14en2fr"
-mkdir -p $SAVE
-python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \
- data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \
- --max-update 30000 --share-all-embeddings --optimizer adam \
- --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
- --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \
- --ddp-backend=legacy_ddp --max-tokens 3584 \
- --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \
- --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \
- --t-mult 1 --lr-period-updates 70000 \
- --arch lightconv_wmt_en_fr_big --save-dir $SAVE \
- --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \
- --encoder-glu 1 --decoder-glu 1
-
-# Evaluation
-CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test
-```
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/README.pretraining.md b/spaces/koajoel/PolyFormer/fairseq/examples/roberta/README.pretraining.md
deleted file mode 100644
index a4e7453529111fdd198be637d911d1764cb96c0e..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/README.pretraining.md
+++ /dev/null
@@ -1,84 +0,0 @@
-# Pretraining RoBERTa using your own data
-
-This tutorial will walk you through pretraining RoBERTa over your own data.
-
-### 1) Preprocess the data
-
-Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training.
-
-We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/)
-to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course
-this dataset is quite small, so the resulting pretrained model will perform
-poorly, but it gives the general idea.
-
-First download the dataset:
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-Next encode it with the GPT-2 BPE:
-```bash
-mkdir -p gpt2_bpe
-wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
-wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
-for SPLIT in train valid test; do \
- python -m examples.roberta.multiprocessing_bpe_encoder \
- --encoder-json gpt2_bpe/encoder.json \
- --vocab-bpe gpt2_bpe/vocab.bpe \
- --inputs wikitext-103-raw/wiki.${SPLIT}.raw \
- --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \
- --keep-empty \
- --workers 60; \
-done
-```
-
-Finally preprocess/binarize the data using the GPT-2 fairseq dictionary:
-```bash
-wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
-fairseq-preprocess \
- --only-source \
- --srcdict gpt2_bpe/dict.txt \
- --trainpref wikitext-103-raw/wiki.train.bpe \
- --validpref wikitext-103-raw/wiki.valid.bpe \
- --testpref wikitext-103-raw/wiki.test.bpe \
- --destdir data-bin/wikitext-103 \
- --workers 60
-```
-
-### 2) Train RoBERTa base
-```bash
-DATA_DIR=data-bin/wikitext-103
-
-fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \
---config-name base task.data=$DATA_DIR
-```
-
-**Note:** You can optionally resume training the released RoBERTa base model by
-adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`.
-
-**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses
-a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to
-further increase the batch size by 16x (`optimization.update_freq`), for a total batch size
-of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need
-to reduce `dataset.batch_size` and increase `optimization.update_freq` to compensate.
-Alternatively, if you have more GPUs you can decrease `optimization.update_freq` accordingly
-to increase training speed.
-
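As a quick sanity check of the arithmetic in the note above, the effective batch size is the product of GPU count, per-GPU batch size, and gradient-accumulation factor. A small sketch with the values quoted above:

```python
# Effective batch size = GPUs x per-GPU batch size x gradient accumulation steps.
num_gpus    = 8
batch_size  = 16   # dataset.batch_size
update_freq = 16   # optimization.update_freq

assert num_gpus * batch_size * update_freq == 2048

# e.g. to keep the same effective batch size on only 2 GPUs, accumulate more:
update_freq_2gpu = 2048 // (2 * batch_size)   # -> 64
```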
-**Note:** The learning rate and batch size are tightly connected and need to be
-adjusted together. We generally recommend increasing the learning rate as you
-increase the batch size according to the following table (although it's also
-dataset dependent, so don't rely on the following values too closely):
-
-batch size | peak learning rate
----|---
-256 | 0.0001
-2048 | 0.0005
-8192 | 0.0007
-
-### 3) Load your pretrained model
-```python
-import torch
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data')
-assert isinstance(roberta.model, torch.nn.Module)
-```
diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/__init__.py b/spaces/kukuhtw/VToonify/vtoonify/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py
deleted file mode 100644
index 8e266fa54d0f0fd05bfde372627e1fb948d6f0fd..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6bsln.html
-class table__b_s_l_n(BaseTTXConverter):
- pass
diff --git a/spaces/lc202301/ChuanhuChatGPT/Dockerfile b/spaces/lc202301/ChuanhuChatGPT/Dockerfile
deleted file mode 100644
index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000
--- a/spaces/lc202301/ChuanhuChatGPT/Dockerfile
+++ /dev/null
@@ -1,14 +0,0 @@
-FROM python:3.9 as builder
-RUN apt-get update && apt-get install -y build-essential
-COPY requirements.txt .
-RUN pip install --user -r requirements.txt
-
-FROM python:3.9
-MAINTAINER iskoldt
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV my_api_key empty
-ENV dockerrun yes
-CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"]
diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/monkey_patch_gptq_lora.py b/spaces/leogabraneth/text-generation-webui-main/modules/monkey_patch_gptq_lora.py
deleted file mode 100644
index 3166bd33ceba449cb542861b0238818f68c7b02e..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/modules/monkey_patch_gptq_lora.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copied from https://github.com/johnsmith0031/alpaca_lora_4bit
-
-from pathlib import Path
-
-import alpaca_lora_4bit.autograd_4bit as autograd_4bit
-from alpaca_lora_4bit.amp_wrapper import AMPWrapper
-from alpaca_lora_4bit.autograd_4bit import (
- Autograd4bitQuantLinear,
- load_llama_model_4bit_low_ram
-)
-from alpaca_lora_4bit.models import Linear4bitLt
-from alpaca_lora_4bit.monkeypatch.peft_tuners_lora_monkey_patch import (
- replace_peft_model_with_int4_lora_model
-)
-
-from modules import shared
-from modules.GPTQ_loader import find_quantized_model_file
-
-replace_peft_model_with_int4_lora_model()
-
-
-def load_model_llama(model_name):
- config_path = str(Path(f'{shared.args.model_dir}/{model_name}'))
- model_path = str(find_quantized_model_file(model_name))
- model, tokenizer = load_llama_model_4bit_low_ram(config_path, model_path, groupsize=shared.args.groupsize, is_v1_model=False)
- for _, m in model.named_modules():
- if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
- m.scales = m.scales.half()
- m.bias = m.bias.half()
-
- autograd_4bit.auto_switch = True
-
- model.half()
- wrapper = AMPWrapper(model)
- wrapper.apply_generate()
-
- return model, tokenizer
diff --git a/spaces/leonelhs/deoldify/README.md b/spaces/leonelhs/deoldify/README.md
deleted file mode 100644
index c93e34b440afa3c64e2ed1732382365e4dbbe0a8..0000000000000000000000000000000000000000
--- a/spaces/leonelhs/deoldify/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Deoldify
-emoji: 🐨
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/liliyRehtina/color/utils/dataset_lab.py b/spaces/liliyRehtina/color/utils/dataset_lab.py
deleted file mode 100644
index 735f0d16a2d7a061e9fc205752ab51afeab88c8d..0000000000000000000000000000000000000000
--- a/spaces/liliyRehtina/color/utils/dataset_lab.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from __future__ import print_function, division
-import torch, os, glob
-from torch.utils.data import Dataset, DataLoader
-import numpy as np
-from PIL import Image
-import cv2
-
-
-class LabDataset(Dataset):
-
- def __init__(self, rootdir=None, filelist=None, resize=None):
-
- if filelist:
- self.file_list = filelist
- else:
- assert os.path.exists(rootdir), "@dir:'%s' NOT exist ..."%rootdir
- self.file_list = glob.glob(os.path.join(rootdir, '*.*'))
- self.file_list.sort()
- self.resize = resize
-
- def __len__(self):
- return len(self.file_list)
-
- def __getitem__(self, idx):
- bgr_img = cv2.imread(self.file_list[idx], cv2.IMREAD_COLOR)
- if self.resize:
- bgr_img = cv2.resize(bgr_img, (self.resize,self.resize), interpolation=cv2.INTER_CUBIC)
- bgr_img = np.array(bgr_img / 255., np.float32)
- lab_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2LAB)
- #print('--------L:', np.min(lab_img[:,:,0]), np.max(lab_img[:,:,0]))
- #print('--------ab:', np.min(lab_img[:,:,1:3]), np.max(lab_img[:,:,1:3]))
- lab_img = torch.from_numpy(lab_img.transpose((2, 0, 1)))
- bgr_img = torch.from_numpy(bgr_img.transpose((2, 0, 1)))
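-        # Normalize: L channel (0..100) -> [-1, 1], ab channels (~-110..110) -> [-1, 1], BGR (0..1) -> [-1, 1]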
- gray_img = (lab_img[0:1,:,:]-50.) / 50.
- color_map = lab_img[1:3,:,:] / 110.
- bgr_img = bgr_img*2. - 1.
- return {'gray': gray_img, 'color': color_map, 'BGR': bgr_img}
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/125 Balli Di Gruppo.rar WORK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/125 Balli Di Gruppo.rar WORK.md
deleted file mode 100644
index be7eed84cb4274221d7200c7029f13e410582060..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/125 Balli Di Gruppo.rar WORK.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-the balli di gruppo: 07-07-1985. giovanni guido di neve di la val s. stiena's - il carroccio soldato di castracchio. [url= balli di gruppo.rar dl s h q[/url]..
a little bit of reference here - the world balli on youtube - : i wrote this and it's the first version of the balli di gruppo page.
-125 Balli di gruppo.rar
Download Zip https://bytlly.com/2uGy4J
-a partire dal 23/02/2017 alle ore 22 pomeriggio si terranno le sessioni degli studi sinfonici di gheno. 125 balli di gruppo.rar carlo campanile nei giorni dei festeggiamenti dell'incoronazione. un ringraziamento per tutti coloro che l'anno scorso hanno.
-patched tec-it barcode studio enterprise v14.0.1.17931 including crack [
ssdlife.ultrabook.v2.5.80-te serial key keygen
robert palmer discography 19742003
nys hunter education homework
villeroy-boch-dh-252g-bedienungsanleitung
kumar sanu bangla movie song download
passion of the christ full movie malayalam version
disk drill activation key
125 balli di gruppo.rar
antares autotune v7.08 - mac os x
-. [url= balli di gruppo.rar estrella directshow[/url] support, support, support, in support of the epr expert group. he is also the president of the.percutaneous myocardial biopsy in infants and children. although percutaneous myocardial biopsy (pmb) is an established method for the diagnosis of myocardial disease in children, only limited data are available on the safety and accuracy of the technique in this age group. we analyzed the results of pmb in 26 infants and children (age range, 4 days to 15.5 years) over a 5-year period. samples were obtained from the left ventricular myocardium by using a 21-gauge jamshidi needle. in 18 patients (69%), the diagnosis was made by pmb and confirmed by autopsy; in the remaining 8 patients, the diagnosis was made by pmb and was later confirmed by endomyocardial biopsy. the most common indications for biopsy were fever and myocarditis (54% each). the overall complication rate was 15.4%. death occurred in one patient (3.8%). no complications occurred in the youngest child. pmb is safe and accurate in children. the technique should be performed by experienced operators in children with normal cardiac function and the least severe disease.q: set a folder permissions for a python file i am trying to write an application which asks for a location to save a file. the user has to be able to write to that location. right now i am saving the file to the same directory as the source code. but i want it to be possible to write to a specific folder. i have tried it with os.mkdir('newlocation') but it is not writing to that directory. how can i do that? a: you can use os.chdir(new_path) before writing to ensure that your working directory is the right one before writing. the present invention relates to an on-vehicle camera system that can be installed within a vehicle and used to automatically and continuously shoot video images of the front side of the vehicle. today's on-vehicle cameras are mainly used to monitor the front side of the vehicle. there are various camera types, such as side view cameras, rear view cameras, and/or front view cameras.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/3310 Nhm 5 V639 37 LINK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/3310 Nhm 5 V639 37 LINK.md
deleted file mode 100644
index 0e028489417c6ad006c67509a5f9c575c27f205f..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/3310 Nhm 5 V639 37 LINK.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-The characters are free in their own environment and have the ability to discuss a variety of topics, including the theories of Sartre and Freud. One can choose another second as a reminder to help the DJ.These are the scenes from the first six episodes of the 2016S (U.S) TV series
50e0806aeb jyssjam
-The programs main flaw is that it does not allow you to use those elements of your operating system that have been upgraded, such as the newer system, settings and other better achievements. In addition, the program displays the message that files need to be converted to ZIP format to use the archive.A new glimpse at the adaptation of Faulkners popular novel The Sound and the Fury has emerged online. The script, written by Academy Award-winning director John Hillcoat, stars
50e0806aeb jyssjam
-3310 Nhm 5 V639 37
Download ☆☆☆☆☆ https://bytlly.com/2uGxgP
-One of the most important aspects of any wedding ceremony is to choose the theme, which gives an impression to guests about the character and taste of the groom and bride. And, therefore, the message to the wedding hall will be far from the old and boring. Be it wedding decorations or wedding theme - it is a good way to send a message to everyone present at the wedding.
There are many modern and simple wedding hall themes that can be found on the internet and you can take a look at them. We have collected a selection of good wedding hall themes that you can use for your wedding.
On the site presented, you will find a collection of 100s best wedding themes which you can use for your wedding. From lavish castle wedding halls to modern minimalist wedding halls, each of these wedding halls is ready to meet the expectations of the modern day bride and groom.
Right now, the best wedding halls you will find on the internet is The English Hotel. The theme is romantic and you can see an illustration of the theme on the website.
Another one of the best wedding halls is Orchid Garden Wedding Halls. The theme of this wedding hall is a garden which is very romantic. You will find that this is the perfect wedding hall for a modern day bride and groom.
Some other wedding hall themes for you to have a look at are: Best Wedding Hall, Family Wedding Hall, Outdoor Wedding Hall and
50e0806aeb jyssjam
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Beachhead2002freedownload !!EXCLUSIVE!!fullversionforpc.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Beachhead2002freedownload !!EXCLUSIVE!!fullversionforpc.md
deleted file mode 100644
index 6548779b5b9f651e6d8187a942597262531a1bc1..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Beachhead2002freedownload !!EXCLUSIVE!!fullversionforpc.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-How to Download Beach Head 2002 for Free on PC
-Beach Head 2002 is a 3D interactive stationary shooter game that was released in 2002 by Infogrames and Digital Fusion. In this game, you play as a gunner who has to defend an outpost from waves of enemies using various weapons. The game features different scenarios, such as day and night fighting, beach landing, and jungle warfare. If you are looking for a classic action game that is easy to play and challenging to master, you might want to try Beach Head 2002.
-However, finding a legitimate and safe download link for this game can be tricky, as it is an old game that is not available on any official platforms. Fortunately, there are some websites that offer free downloads of Beach Head 2002 for PC. Here are some of them:
-beachhead2002freedownloadfullversionforpc
Download Zip › https://bytlly.com/2uGxVG
-
-- Softonic: This website offers a trial version of Beach Head 2002 that you can download for Windows. The file size is about 29 MB and the download process is simple and fast. You just need to click on the green "Download" button and follow the instructions. However, be careful of the additional software that might be bundled with the installer. You can decline them if you don't want them.
-- Free Download Manager: This website offers a shareware version of Beach Head 2002 that you can download for Windows XP/7. The file size is about 20 MB and the download process is similar to Softonic. You just need to click on the blue "Download" button and follow the instructions. Again, be careful of the additional software that might be offered during the installation.
-- Internet Archive: This website offers a free version of Beach Head 2002 that you can download for Windows. The file size is about 19 MB and the download process is straightforward. You just need to click on the "DOWNLOAD OPTIONS" section and choose the "ZIP" file. Then, you can extract the file and run the game.
-
-These are some of the websites that offer free downloads of Beach Head 2002 for PC. However, please note that these websites are not affiliated with or endorsed by the developers or publishers of the game. Therefore, we cannot guarantee the safety or legality of these downloads. Download at your own risk and discretion.
How to Play Beach Head 2002 on PC
-After downloading Beach Head 2002 from one of the websites mentioned above, you can install and run the game on your PC. The game does not require a lot of system resources, so it should run smoothly on most computers. However, you might encounter some compatibility issues with newer versions of Windows. Here are some tips to fix them:
-
-- If the game does not start or crashes, you might need to run it in compatibility mode. To do this, right-click on the game's executable file and select "Properties". Then, go to the "Compatibility" tab and check the box that says "Run this program in compatibility mode for". Choose a Windows version that is compatible with the game, such as Windows XP or Windows 7. Click "Apply" and "OK" and try to run the game again.
-- If the game's graphics or sound are distorted or glitchy, you might need to adjust the game's settings. To do this, open the game and go to the "Options" menu. There, you can change the resolution, color depth, sound quality, and other options to suit your preferences. You can also enable or disable the fullscreen mode by pressing Alt+Enter.
-- If the game's controls are not responsive or comfortable, you might need to change them. To do this, open the game and go to the "Controls" menu. There, you can customize the keyboard and mouse settings to your liking. You can also use a joystick or a gamepad if you have one.
-
-These are some of the tips to play Beach Head 2002 on PC. The game is simple to control and fun to play. You just need to aim and shoot at the enemies using your mouse or keyboard. You can switch between different weapons using the number keys or the mouse wheel. You can also zoom in and out using the right mouse button or the Z and X keys. The game has several levels with different objectives and difficulties. You can choose between easy, medium, and hard modes. You can also play in arcade mode or survival mode for more challenge.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/BhutachabhauREPACK Fullmoviedownload.md b/spaces/lincquiQcaudo/Top-20-Diffusion/BhutachabhauREPACK Fullmoviedownload.md
deleted file mode 100644
index 909ad556ca31cfb914c96a22367bfdd590e12a81..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/BhutachabhauREPACK Fullmoviedownload.md
+++ /dev/null
@@ -1,71 +0,0 @@
-
-Bhutacha Bhau Full Movie Download: How to Watch This Marathi Comedy Online
-Bhutacha Bhau is a Marathi comedy movie that was released in 1989 and became a huge hit among the audience. The movie features Ashok Saraf, Sachin Pilgaonkar, Laxmikant Berde and Varsha Usgaonkar in the main roles. The movie is directed by Sachin Pilgaonkar himself and has a funny story about a ghost who wants to reunite with his family and take revenge on his murderers. If you are looking for a way to watch Bhutacha Bhau full movie download, then you have come to the right place. In this article, we will tell you how to download or stream this movie legally and safely.
-bhutachabhaufullmoviedownload
Download File >>> https://bytlly.com/2uGyIe
-What is the plot of Bhutacha Bhau?
-The movie revolves around Bhau, a simple man who lives in a village with his wife and son. He is killed by some goons who want to take over his land. He becomes a ghost and comes to the city to find his long lost mother and brother. He also wants his brother to help him in avenging his death. However, he faces many challenges and obstacles in his quest. He meets a girl named Champa who falls in love with him. He also discovers that his killers have set a trap for him and his brother. How will he overcome these difficulties and fulfill his mission? You will have to watch Bhutacha Bhau full movie download to find out.
-Why should you watch Bhutacha Bhau full movie download?
-Bhutacha Bhau is a movie that will make you laugh till your stomach hurts. The movie has some brilliant performances by the actors, especially Ashok Saraf and Laxmikant Berde who play the roles of Bhau and his brother respectively. They have a great comic timing and chemistry and they deliver some hilarious dialogues and scenes. The movie also has some melodious songs composed by Anil Mohile and sung by Asha Bhosle and Shailendra Singh. The movie has a good message about family values and loyalty. It also has some emotional moments that will touch your heart.
-How to download or stream Bhutacha Bhau full movie?
-If you want to watch Bhutacha Bhau full movie download, then you have several options available online. You can stream or download the movie from various platforms like YouTube, Hotstar, Zee5, SonyLiv, etc. However, you should be careful about the quality and legality of the sources. Some of them may not have the original or HD version of the movie or may have some malware or viruses attached to them. Therefore, it is advisable to use only trusted and reliable sources for downloading or streaming Bhutacha Bhau full movie.
-
-Conclusion
-Bhutacha Bhau is a Marathi comedy movie that you should not miss if you love comedy movies. It has a great story, wonderful actors, melodious songs and a lot of humor. It is a perfect family entertainer that will make you laugh and cry at the same time. So, what are you waiting for? Download or stream Bhutacha Bhau full movie today and enjoy this classic Marathi comedy.
-Where can you watch Bhutacha Bhau full movie online?
-Bhutacha Bhau full movie is available on YouTube for free. You can watch the movie in two parts on the Rajshri Marathi Movies channel. The movie has good video and audio quality and you can enjoy it without any interruption. You can also watch the movie on other platforms like Hotstar, Zee5, SonyLiv, etc. However, you may need to pay a subscription fee or rent the movie to watch it on these platforms. You can also download the movie from these platforms and watch it offline.
-What are some of the best scenes from Bhutacha Bhau full movie?
-Bhutacha Bhau full movie has many memorable scenes that will make you laugh and cry. Some of the best scenes are:
-
-- The scene where Bhau meets his brother for the first time and tries to convince him that he is his ghost.
-- The scene where Bhau and his brother go to a hotel and create a ruckus with their antics.
-- The scene where Bhau and Champa sing a romantic song in the rain.
-- The scene where Bhau and his brother confront their killers and expose their evil plans.
-- The scene where Bhau reunites with his mother and son and bids farewell to them.
-What are some of the reviews of Bhutacha Bhau full movie?
-Bhutacha Bhau full movie has received positive reviews from critics and audience alike. The movie has been praised for its comedy, story, direction, music and performances. Some of the reviews are:
-
-- "Bhutacha Bhau is a Marathi comedy classic that will make you laugh till your stomach hurts. The movie has a great story, wonderful actors, melodious songs and a lot of humor. It is a perfect family entertainer that will make you laugh and cry at the same time." - IMDB
-- "Bhutacha Bhau is a hilarious movie that will keep you entertained from start to finish. The movie has some amazing performances by Ashok Saraf and Laxmikant Berde who play the roles of Bhau and his brother respectively. Their comic timing and chemistry are superb and they make you laugh with their antics. The movie also has some catchy songs composed by Anil Mohile and sung by Asha Bhosle and Shailendra Singh. The movie has a good message about family values and loyalty. It also has some emotional moments that will touch your heart." - YouTube
-- "Bhutacha Bhau is a comedy movie that you should not miss if you love comedy movies. It has a great story, wonderful actors, melodious songs and a lot of humor. It is a perfect family entertainer that will make you laugh and cry at the same time." - Zee5
-Who are the cast and crew of Bhutacha Bhau full movie?
-Bhutacha Bhau full movie has a talented cast and crew who have worked hard to make this movie a success. The movie is directed by Sachin Pilgaonkar, who is also one of the lead actors. He has directed and acted in many Marathi movies and is known for his versatility and talent. The movie is produced by Rajshri Productions, which is a renowned production house that has made many hit movies in Hindi and Marathi. The movie has a screenplay by Sachin Pilgaonkar and dialogues by Vasant Sabnis. The movie has a cinematography by Debu Deodhar and editing by Ashok Mestry.
-The movie has a star-studded cast that includes Ashok Saraf, Sachin Pilgaonkar, Laxmikant Berde and Varsha Usgaonkar in the main roles. Ashok Saraf plays the role of Bhau, the ghost who wants to reunite with his family and take revenge on his killers. He is one of the most popular and respected actors in Marathi cinema and has done many comedy roles. Sachin Pilgaonkar plays the role of Bhau's brother, who is a timid and innocent man. He is also a versatile actor, director, singer and producer who has worked in many languages. Laxmikant Berde plays the role of Bhau's friend, who helps him in his mission. He is also a legendary actor who has done many comedy roles in Marathi and Hindi cinema. Varsha Usgaonkar plays the role of Champa, the girl who falls in love with Bhau. She is also a beautiful and talented actress who has worked in many Marathi and Hindi movies.
-What are some of the trivia about Bhutacha Bhau full movie?
-Bhutacha Bhau full movie has some interesting trivia that you may not know. Here are some of them:
-
-- The movie is based on a Gujarati play called "Bhoot No 1" written by Pravin Solanki.
-- The movie was remade in Hindi as "Chamatkar" starring Shah Rukh Khan and Urmila Matondkar in 1992.
-- The movie was also remade in Telugu as "Yamaleela" starring Ali and Indraja in 1994.
-- The movie was one of the first Marathi movies to use special effects for the ghost scenes.
-- The movie was shot in Mumbai and Pune and some of the locations include Shaniwar Wada, Saras Baug, Sinhagad Fort, etc.
-What are some of the awards and nominations of Bhutacha Bhau full movie?
-Bhutacha Bhau full movie has received many awards and nominations for its excellence in various aspects. The movie has won the following awards:
-
-- Maharashtra State Film Awards 1989: Best Film on Family Welfare
-- Maharashtra State Film Awards 1989: Best Director - Sachin Pilgaonkar
-- Maharashtra State Film Awards 1989: Best Actor - Ashok Saraf
-- Maharashtra State Film Awards 1989: Best Supporting Actor - Laxmikant Berde
-- Maharashtra State Film Awards 1989: Best Music Director - Anil Mohile
-- Maharashtra State Film Awards 1989: Best Playback Singer (Female) - Asha Bhosle
-- Maharashtra State Film Awards 1989: Best Lyricist - Shantaram Nandgaonkar
-
-The movie has also been nominated for the following awards:
-
-- Filmfare Awards Marathi 1990: Best Film
-- Filmfare Awards Marathi 1990: Best Director - Sachin Pilgaonkar
-- Filmfare Awards Marathi 1990: Best Actor - Ashok Saraf
-- Filmfare Awards Marathi 1990: Best Supporting Actor - Laxmikant Berde
-- Filmfare Awards Marathi 1990: Best Music Director - Anil Mohile
-- Filmfare Awards Marathi 1990: Best Playback Singer (Female) - Asha Bhosle
-What are some of the related movies to Bhutacha Bhau full movie?
-If you liked Bhutacha Bhau full movie, then you may also like some of the related movies that have a similar genre, theme or cast. Some of the related movies are:
-
-- Chamatkar (1992): This is the Hindi remake of Bhutacha Bhau starring Shah Rukh Khan and Urmila Matondkar. The movie has a similar plot of a ghost who helps a young man in his love and revenge.
-- Yamaleela (1994): This is the Telugu remake of Bhutacha Bhau starring Ali and Indraja. The movie has a similar plot of a ghost who helps a young man in his love and revenge.
-- Bhootacha Honeymoon (2013): This is a sequel to Bhutacha Bhau starring Bharat Jadhav and Ruchita Jadhav. The movie has a different plot of a ghost who helps a newly married couple in their honeymoon.
-- Ashi Hi Banwa Banwi (1988): This is another Marathi comedy movie directed by Sachin Pilgaonkar and starring Ashok Saraf, Laxmikant Berde and Sachin Pilgaonkar. The movie has a different plot of four friends who pretend to be married to live in a rented apartment.
-- Gammat Jammat (1987): This is another Marathi comedy movie starring Ashok Saraf, Sachin Pilgaonkar and Varsha Usgaonkar. The movie has a different plot of two friends who fall in love with two sisters but face opposition from their father.
-Conclusion
-Bhutacha Bhau is a Marathi comedy movie that you should not miss if you love comedy movies. It has a great story, wonderful actors, melodious songs and a lot of humor. It is a perfect family entertainer that will make you laugh and cry at the same time. You can watch Bhutacha Bhau full movie download online from various sources or download it and watch it offline. You can also watch some of the related movies that have a similar genre, theme or cast. So, what are you waiting for? Download Bhutacha Bhau full movie today and enjoy this classic Marathi comedy.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Cine Tracer Download Crack With Full Gamel.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Cine Tracer Download Crack With Full Gamel.md
deleted file mode 100644
index d49a32e9adb85e9dd38cba20a9d574bce50105f6..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Cine Tracer Download Crack With Full Gamel.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Cine Tracer Download Crack With Full Gamel
Download Zip ⏩ https://bytlly.com/2uGwzA
-
- 4d29de3e1b
-
-
-
diff --git a/spaces/lixq/bingo61/src/components/turn-counter.tsx b/spaces/lixq/bingo61/src/components/turn-counter.tsx
deleted file mode 100644
index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000
--- a/spaces/lixq/bingo61/src/components/turn-counter.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import React from 'react'
-import { Throttling } from '@/lib/bots/bing/types'
-
-export interface TurnCounterProps {
- throttling?: Throttling
-}
-
-export function TurnCounter({ throttling }: TurnCounterProps) {
- if (!throttling) {
- return null
- }
-
-  return (
-    <div>
-      <span>{throttling.numUserMessagesInConversation}</span>
-      <span>共</span>
-      <span>{throttling.maxNumUserMessagesInConversation}</span>
-    </div>
-  )
-}
diff --git a/spaces/lizhen30/LangChainGo/chatgpt-next-web/service.py b/spaces/lizhen30/LangChainGo/chatgpt-next-web/service.py
deleted file mode 100644
index 02aed7be2a9283013c750275b8e52d543c5827fe..0000000000000000000000000000000000000000
--- a/spaces/lizhen30/LangChainGo/chatgpt-next-web/service.py
+++ /dev/null
@@ -1,775 +0,0 @@
-# coding=utf-8
-import json
-import random
-import string
-import uuid
-import jwt
-import datetime
-import requests
-import os
-import hashlib
-import time
-from flask import Flask, request, jsonify, Request, Response
-from redis import Redis
-from utils import nowtime
-import pay_package
-from ApiResponse import ApiResponse
-from flask_cors import CORS
-from mail import MyEmail
-
-SECERT_KEY = "8U2LL1"
-MY_OPENAI_API_KEY = os.environ.get('MY_OPENAI_API_KEY')
-
-app = Flask(__name__)
-cors = CORS(app, origins=["http://localhost:3000", "http://127.0.0.1:5697"])
-# redis = Redis(host='192.168.3.229', port=6379, password='lizhen-redis')
-# redis = Redis(host='10.254.13.87', port=6379)
-redis = Redis(host='localhost', port=6379)
-
-
-# Generate a verification code
-def generate_verification_code():
- code = ''.join(random.choices(string.digits, k=6))
- return code
-
-
-# Send the verification code to the user's email (this only simulates the sending process)
-def send_verification_code(email, code):
- my = MyEmail()
- my.passwd = "todo"
- my.user = "lingsi@huggingprompt.com"
- my.to_list = [email]
- my.tag = "灵思互动Chat注册验证码"
- my.txt = f"【{code}】Chat邮箱注册验证码,您正在注册灵思互动Chat账号,请勿泄露。"
- my.send()
-
-
-# User requests a verification code
-@app.route('/sendRegisterEmailCode', methods=['POST'])
-def send_verification_code_endpoint():
-    # Get the email address from the request
- email = request.json.get('email')
-
-    # Generate a verification code
- verification_code = generate_verification_code()
-
-    # Send the verification code to the user's email
- send_verification_code(email, verification_code)
-
-    # Save the verification code to Redis with an expiry (e.g. 5 minutes)
- redis.setex(email, 300, verification_code)
-
- return jsonify({'code': 0, 'message': '验证码已发送'})
-
-
-# User registration
-@app.route('/register', methods=['POST'])
-def register():
-    # Get the registration info from the request
- email = request.json.get('email')
- username = request.json.get('username')
- password = request.json.get('password')
- verification_code = request.json.get('verification_code')
-
- if is_email_registered(email):
- return jsonify({'code': 400, 'message': '邮箱已被注册'})
-
-    # Check whether the username is already taken
- if redis.hexists('users', username):
- return jsonify({'code': 400, 'message': '用户名已存在'})
-
-    # Check whether the verification code matches
- stored_code = redis.get(email)
- if stored_code is None or verification_code != stored_code.decode('utf-8'):
- return jsonify({'code': 400, 'message': '验证码无效'})
-
-    # Generate a unique user ID
- user_id = str(uuid.uuid4())
-
-    # Save the user info to Redis
- user_data = {
- 'user_id': user_id,
- 'username': username,
- 'email': email,
- 'password': password
- }
- redis.hset('users', username, json.dumps(user_data))
-
-    # Clear the verification code
- redis.delete(email)
-
-    # Grant 12 free chats on registration
- pick_up_free_chat_count(user_id, 12)
-
- return jsonify({
- 'code': 0,
- 'message': '注册成功'
- })
-
-
-# User login
-@app.route('/login', methods=['POST'])
-def login():
-    # Get the login info from the request
- username = request.json.get('username')
- password = request.json.get('password')
-
-    # Check whether the username and password match
- user_data = redis.hget('users', username)
- if not user_data:
- return jsonify({'code': 400, 'message': '用户名无效'})
-
-    user_data = json.loads(user_data.decode('utf-8'))
-    if password != user_data['password']:
-        return jsonify({'code': 400, 'message': '密码错误'})
-
-    # Generate a token
-    token = generate_token(user_data['user_id'], username)
-    return jsonify({
-        'code': 0,
-        'message': 'Login successful',
-        'data': {
-            'token': token,
-            'email': user_data['email']
- }
- })
-
-
-@app.route('/resetPassword', methods=['POST'])
-def reset_password():
- email = request.json.get('email')
- verification_code = request.json.get('verification_code')
- new_password = request.json.get('new_password')
-
-    # Check whether the verification code matches
- stored_code = redis.get(email)
- if stored_code is None or verification_code != stored_code.decode('utf-8'):
- return jsonify({'code': 400, 'message': '验证码无效'})
-
- if not is_email_registered(email):
- return jsonify({'code': 400, 'message': '邮箱未注册'})
-
- update_password(email, new_password)
- redis.delete(email)
- return jsonify({'code': 0, 'message': '密码已更新'})
-
-
-# User logout
-@app.route('/logout', methods=['POST'])
-def logout():
- token = parse_token(request)
-    # Validate the token; if it is still valid, revoke it by adding it to the Redis blacklist
-    if validate_token(token):
-        redis.set(token, 'revoked')
-    return jsonify({'code': 0, 'message': '已登出'})
-
-
-@app.route('/create_order', methods=['POST'])
-def create_order():
-    # Get the package ID and user token from the request
- package_id = request.json.get('package_id')
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
-    # Look up the package by its ID
- package = pay_package.get_package_by_id(package_id)
- if not package:
- return jsonify({'code': 400, 'message': '套餐不存在'})
-
-    # Get the user ID
- user_id = get_user_id_from_token(token)
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-    # Create the order
- order_id = generate_order_id()
- order_data = {
- 'order_id': order_id,
- 'user_id': user_id,
- 'package_id': package['id'],
- 'title': package['title'],
- 'amount': int(package['price']),
-        'status': 'pending',  # the order starts out pending payment
-        'created_at': nowtime().strftime('%Y-%m-%d %H:%M:%S')  # store the time as a formatted string
- }
-    # Store the order in Redis with a 2-hour expiry
- order_key = f'order:{user_id}:{order_id}'
- redis.hset(order_key, 'order_id', order_data['order_id'])
- redis.hset(order_key, 'user_id', order_data['user_id'])
- redis.hset(order_key, 'package_id', order_data['package_id'])
- redis.hset(order_key, 'title', order_data['title'].encode('utf-8'))
- redis.hset(order_key, 'amount', order_data['amount'])
- redis.hset(order_key, 'status', order_data['status'])
- redis.hset(order_key, 'created_at', order_data['created_at'])
- redis.expire(order_key, datetime.timedelta(hours=2))
-
-    # POST to http://localhost:8080/payPre to obtain the codeUrl
- response = requests.post('http://localhost:5697/payPre', json=order_data)
-
- if response.status_code == 200:
- data = response.json()
- code_url = data.get('data', {}).get('codeUrl')
- if code_url:
- order_data['codeUrl'] = code_url
- else:
- return jsonify({'code': 401, 'message': '支付二维码生成失败,请重试'})
-
- return jsonify({
- 'code': 0,
- 'message': 'Order created',
- 'data': order_data
- })
-
-
-@app.route('/order_center', methods=['POST'])
-def order_center():
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
-    # Get the user ID
- user_id = get_user_id_from_token(token)
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
-    # Get all of the user's orders
- user_orders = get_user_orders(user_id)
-
- print(user_orders)
-
-    # Build the list of this user's valid (non-expired) orders
- valid_orders = [
- order for order in user_orders
- if not is_order_expired(order)
- ]
-
- print(valid_orders)
- return jsonify({
- 'code': 0,
- 'message': 'Order center',
- 'data': {
- 'user_orders': valid_orders
- }
- })
-
-
-@app.route('/query_order', methods=['POST'])
-def query_order():
- data = request.json
- if 'order_id' not in data:
- return jsonify({'code': 400, 'message': '订单无效'})
-
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
-    # Get the user ID
- user_id = get_user_id_from_token(token)
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
- order_id = data['order_id']
-
-    # Fetch the specified order from Redis
- order_key = f'order:{user_id}:{order_id}'
- order_data = redis.hgetall(order_key)
-
- if not order_data:
- return jsonify({'code': 404, 'message': '订单无效'})
-
-    # Convert bytes keys and values to str
- order_data_str = {
- k.decode('utf-8'): int(v.decode('utf-8')) if k.decode('utf-8') == 'amount' else v.decode('utf-8')
- for k, v in order_data.items()
- }
-
- return jsonify({'code': 0, 'message': 'Success', 'data': order_data_str})
-
-
-@app.route('/pay/notify', methods=['POST'])
-def handle_payment_notification():
-    # Get the JSON payload of the POST request; this is the notification data sent by WeChat Pay
- notification_data = request.json
-
- print("/pay/notify", notification_data)
-
-    # Assume the order info, e.g. the order ID and user ID, comes from the notification data
- order_id = notification_data.get('order_id')
- user_id = notification_data.get('user_id')
-
-    # Update the order status to paid
- order_key = f'order:{user_id}:{order_id}'
- redis.hset(order_key, 'status', 'paid')
-
-    # Expire the order 6 months from now
-    redis.expire(order_key, int(datetime.timedelta(days=6*30).total_seconds()))
-
-    # Return a response (a simple string or JSON object) telling WeChat Pay the notification was received
- return jsonify({'code': 0, 'message': 'Notification received'})
-
-
-@app.route('/users/profile', methods=['POST'])
-def get_user_profile():
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
-    # Get the user ID
- user_id = get_user_id_from_token(token)
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
- free_count = get_free_count(user_id)
- package = get_user_package(user_id)
- basic_chat_limit = int(package.get(b'basic_chat_limit', 0))
- advanced_chat_limit = int(package.get(b'advanced_chat_limit', 0))
-
- user_package_key = f'user:{user_id}:package'
- expiration_seconds = redis.ttl(user_package_key)
- formatted_expiration = ""
- if expiration_seconds > 0:
- expiration_datetime = (
- datetime.datetime.now()
- + datetime.timedelta(seconds=expiration_seconds)
- )
- formatted_expiration = expiration_datetime.strftime('%Y-%m-%d')
- else:
- formatted_expiration = '已过期'
-
- if (package.get(b'title') is None):
- return jsonify({
- 'code': 0,
- 'message': 'Success',
- 'data': {
- 'free_count': free_count,
- 'basic_chat_limit': basic_chat_limit,
- 'advanced_chat_limit': advanced_chat_limit
- }
- })
-
- return jsonify({
- 'code': 0,
- 'message': 'Success',
- 'data': {
- 'free_count': free_count,
- 'basic_chat_limit': basic_chat_limit,
- 'advanced_chat_limit': advanced_chat_limit,
- 'expiration': formatted_expiration,
- 'package_title': package.get(b'title', b'unknown').decode('utf-8')
- }
- })
-
-
-def get_user_orders(user_id):
-    # Fetch all of this user's orders from Redis
- user_orders = []
-    for key in redis.scan_iter(f'order:{user_id}:*'):  # changed how the keys are fetched
- order_data = redis.hgetall(key)
-        # Convert bytes keys and values to str
- order_data_str = {
- k.decode('utf-8'): int(v) if k == b'amount' else v.decode('utf-8')
- for k, v in order_data.items()
- }
- user_orders.append(order_data_str)
- return user_orders
-
-
-def get_order_by_id(user_id, order_id):
-    # Fetch the specified order from Redis
- order_key = f'order:{user_id}:{order_id}'
- order_data = redis.hgetall(order_key)
- return order_data
-
-
-def is_order_expired(order_data):
- print("is_order_expired", order_data)
-    # Get the order creation time
- created_at_str = order_data.get('created_at')
- if not created_at_str:
- print("created_at_str is None")
- return True
-
-    # Parse the creation time into a datetime object
- created_at = datetime.datetime.strptime(
- created_at_str,
- '%Y-%m-%d %H:%M:%S'
- )
-
-    # Orders are valid for 2 hours
- expiration_time = datetime.timedelta(hours=2)
-
-    # Get the current time
- current_time = nowtime()
-
- print(current_time, created_at, expiration_time)
-
-    # Check whether the elapsed time exceeds the order validity period
- return (current_time - created_at) >= expiration_time
-
-
-@app.route('/v1/chat/completions', methods=['POST'])
-def proxy_chat_completions():
- token = parse_token(request)
- model = request.json.get('model')
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
- user_id = get_user_id_from_token(token)
-
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
-    # Get the user's package info
- package = get_user_package(user_id)
- if not package and get_free_count(user_id) <= 0:
- return jsonify({'code': 400, 'message': '聊天次数已用完,请购买套餐'})
-
-    # Check whether the user has exceeded the chat limit
- if exceeded_chat_limit(user_id, package, model):
- if model == 'gpt-3.5-turbo':
- return jsonify({'code': 400, 'message': 'model3.5基础访问次数已用完'})
- if model == 'gpt-4':
- return jsonify({'code': 400, 'message': 'model4高级访问次数已用完'})
-
-    # Get the request data
- data = request.get_json()
-
- stream = request.json.get('stream')
-
-    # Set the request headers
- headers = {
- 'Authorization': f'Bearer {MY_OPENAI_API_KEY}',
- 'Content-Type': 'application/json'
- }
-
- if stream:
-        # Forward the request to the OpenAI API
- response = requests.post(
- 'https://api.openai.com/v1/chat/completions', json=data, headers=headers, stream=True, verify=False)
-
-        # Get the response data from the OpenAI API
- result = response.iter_content(chunk_size=8192)
-
- if get_free_count(user_id) > 0 and model == 'gpt-3.5-turbo':
- redis.hincrby(f'user:{user_id}:free', 'basic_chat_count', -1)
- else:
- user_package_key = f'user:{user_id}:package'
- redis.hincrby(user_package_key, 'basic_chat_limit', -1)
-
-        # Return the OpenAI API response to the client
- return Response(result, content_type=response.headers['content-type'])
- else:
-        # Forward the request to the OpenAI API
- response = requests.post(
- 'https://api.openai.com/v1/chat/completions', json=data, headers=headers)
-
-        # Get the response data from the OpenAI API
- result = response.json()
-
- if get_free_count(user_id) > 0 and model == 'gpt-3.5-turbo':
- redis.hincrby(f'user:{user_id}:free', 'basic_chat_count', -1)
- else:
- user_package_key = f'user:{user_id}:package'
- redis.hincrby(user_package_key, 'basic_chat_limit', -1)
-
-        # Return the OpenAI API response to the client
- return result, response.status_code
-
-
-# Claim the daily free chat quota
-@app.route('/pickUpFreeChatCount', methods=['POST'])
-def handle_pick_up_free_chat_count():
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
- user_id = get_user_id_from_token(token)
-
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
-    # Claim the complimentary chat quota
- success = pick_up_free_chat_count(user_id)
-
- if success:
- return {'message': '领取成功'}
- else:
- return {'message': '您今天已经领取了'}
-
-
-@app.route('/package/onSales', methods=['POST'])
-def packageOnSales():
- token = parse_token(request)
-
-    # Validate the token
- if not validate_token(token):
- return jsonify({'code': 401, 'message': '请重新登录'})
-
- user_id = get_user_id_from_token(token)
-
- if not user_id:
- return jsonify({'code': 400, 'message': '用户不存在'})
-
- current_package = get_user_package(user_id)
- onSalesData = {
- 'currentPackage': int(current_package.get(b'id', 0)),
- 'packages': pay_package.packages
- }
- response = ApiResponse(
- code=0,
- message='Login successful',
- data=onSalesData)
- return jsonify(response.to_json())
-
-
-def parse_token(request: Request):
- token_with_bearer = request.headers.get('Authorization')
-
- if token_with_bearer is not None and token_with_bearer.startswith('Bearer '):
- token = token_with_bearer.split(' ')[1]
- else:
-        # Handle the case where the "Bearer" prefix is missing
- token = token_with_bearer
- return token
-
-
-# Generate a token
-def generate_token(user_id, username):
-    # Build the payload containing the user info
-    # To add an expiry, include 'exp': nowtime() + datetime.timedelta(days=30)
- payload = {
- 'user_id': user_id,
- 'username': username,
- }
-
-    # Sign the token with your secret key here
-    # Choose an appropriate signing algorithm, expiry time and other parameters
-    # Only the HS256 algorithm is used here, with a 1-hour expiry
- token = jwt.encode(payload, SECERT_KEY, algorithm='HS256')
- return token
-
-
-# Validate a token
-def validate_token(token):
- try:
-        # Decode using the secret key
- payload = jwt.decode(token, SECERT_KEY, algorithms=['HS256'])
-
-        # Check the token's expiry time
- if 'exp' in payload and nowtime() > datetime.datetime.fromtimestamp(payload['exp']):
- return False
-
- if 'user_id' not in payload or 'username' not in payload:
- return False
-
- return True
- except (jwt.DecodeError, jwt.InvalidTokenError):
- return False
-
-
-def get_user_id_from_token(token):
- try:
- decoded_token = jwt.decode(
- token, SECERT_KEY, algorithms=['HS256'])
- user_id = decoded_token.get('user_id')
- return user_id
- except jwt.ExpiredSignatureError:
-        # Handle an expired token
- return None
- except (jwt.DecodeError, jwt.InvalidTokenError):
-        # Handle a decode error or an invalid token
- return None
-
-
-# Get a user ID by username
-def get_user_id_by_username(username):
- user_data = redis.hget('users', username)
- if user_data:
- user_data = json.loads(user_data.decode('utf-8'))
- user_id = user_data.get('user_id')
- return user_id
- return None
-
-
-# Store the user's package info in Redis
-def store_user_package(user_id, package):
- current_package = get_user_package(user_id)
- basic_chat_limit = package['basic_chat_limit']
- advanced_chat_limit = package['advanced_chat_limit']
- if current_package:
- basic_chat_limit += int(current_package.get(b'basic_chat_limit', 0))
- advanced_chat_limit += int(current_package.get(b'advanced_chat_limit', 0))
-
- user_package_key = f'user:{user_id}:package'
- redis.hset(user_package_key, 'id', package['id'])
- redis.hset(user_package_key, 'title', package['title'].encode('utf-8'))
- redis.hset(user_package_key, 'basic_chat_limit', basic_chat_limit)
- redis.hset(user_package_key, 'advanced_chat_limit', advanced_chat_limit)
-    # Set the package expiry time
- expiration = int(time.time()) + package['expiration']
- redis.expireat(user_package_key, expiration)
-
-
-# Get the user's package info
-def get_user_package(user_id):
- user_package_key = f'user:{user_id}:package'
- package = redis.hgetall(user_package_key)
- return package
-
-
-# Check whether the user has already purchased the given package
-def has_purchased_package(user_id, package_id):
- user_package_key = f'user:{user_id}:package'
- purchased_package_id = redis.hget(user_package_key, 'id')
- return purchased_package_id.decode('utf-8') == str(package_id)
-
-
-# Check whether the user has purchased the advanced package
-def has_purchased_advanced_package(user_id):
- user_package_key = f'user:{user_id}:package'
- purchased_package_id = redis.hget(user_package_key, 'id')
- return purchased_package_id.decode('utf-8') == '2'
-
-
-# Check whether the package has expired
-def is_package_expired(user_id):
- user_package_key = f'user:{user_id}:package'
- expiration = redis.ttl(user_package_key)
- return expiration <= 0
-
-
-# Get the package's remaining validity period
-def get_package_expiration(user_id):
- user_package_key = f'user:{user_id}:package'
- expiration = redis.ttl(user_package_key)
- return expiration
-
-
-# Check whether the user has exceeded the chat limit
-def exceeded_chat_limit(user_id, package, model):
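-    # A chat limit of -1 is treated as unlimited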
- if model == 'gpt-3.5-turbo':
- basic_chat_limit = int(package.get(b'basic_chat_limit', 0))
- print('basic_chat_limit:', basic_chat_limit)
- if get_free_count(user_id) > 0:
- return False
- if basic_chat_limit <= 0 and basic_chat_limit != -1:
- return True
-
- if model == 'gpt-4':
- advanced_chat_limit = int(package.get(b'advanced_chat_limit', 0))
- if advanced_chat_limit <= 0 and advanced_chat_limit != -1:
- return True
-
- return False
-
-
-# Check whether the email is already registered
-def is_email_registered(email):
-    # Iterate over all user records
- for username in redis.hkeys('users'):
- user_data = json.loads(redis.hget('users', username).decode('utf-8'))
- if user_data['email'] == email:
- return True
- return False
-
-
-# Update the user's password
-def update_password(email, new_password):
-    # Iterate over all user records
- for username in redis.hkeys('users'):
- user_data = json.loads(redis.hget('users', username).decode('utf-8'))
- if user_data['email'] == email:
- user_data['password'] = new_password
- redis.hset('users', username, json.dumps(user_data))
- return True
- return False
-
-
-def get_user_free_data(user_id):
- user_free_key = f'user:{user_id}:free'
- free_data = redis.hgetall(user_free_key)
- return free_data
-
-
-def initialize_user_free_data(user_id):
- free_data = {
- b'basic_chat_count': 0,
- b'last_gift_time': str(datetime.date.today()),
-        b'has_pick_up_free': b'false'  # whether the user has claimed the complimentary chats
- }
- user_free_name = f'user:{user_id}:free'
- redis.hset(user_free_name,
- b'basic_chat_count',
- free_data.get(b'basic_chat_count', 0))
- redis.hset(user_free_name,
- b'last_gift_time',
- free_data.get(b'last_gift_time'))
- redis.hset(user_free_name,
- b'has_pick_up_free',
- free_data.get(b'has_pick_up_free', b'false'))
- return free_data
-
-
-# Claim the daily free chat quota
-def pick_up_free_chat_count(user_id, basic_free_count=5):
- free_data = get_user_free_data(user_id)
- if not free_data:
- free_data = initialize_user_free_data(user_id)
-    # Get the user's chat count and the last gift time
- basic_chat_count = free_data.get(b'basic_chat_count', 0)
- last_gift_time = free_data.get(
- b'last_gift_time',
- str(datetime.date.today()))
- has_pick_up_free = (
- free_data.get(b'has_pick_up_free', b'false')
- .decode()
- .lower() == 'true')
-
- print('free_data3', basic_chat_count, last_gift_time, has_pick_up_free)
-
-    # Check whether the user has already claimed today's free chats
- if has_pick_up_free and last_gift_time.decode() == str(datetime.date.today()):
- return False
-
-    # Update the user's chat data and claim status
- user_free_name = f'user:{user_id}:free'
- redis.hset(user_free_name, b'basic_chat_count', basic_free_count)
- redis.hset(user_free_name, b'last_gift_time', str(datetime.date.today()))
- redis.hset(user_free_name, b'has_pick_up_free', 'true')
-
- return True
-
-
-def get_free_count(user_id):
- free_data = get_user_free_data(user_id)
- basic_chat_count = free_data.get(b'basic_chat_count', 0)
- return int(basic_chat_count)
-
-
-def generate_order_id():
-    timestamp = int(time.time() * 1000)  # current timestamp in milliseconds
-    rand_num = random.randint(0, 9999)  # four-digit random number
- raw_order_id = f"{timestamp}-{rand_num}"
-
-    # Hash the order ID with SHA-256 (truncated to 32 characters)
- hashed_order_id = hashlib.sha256(raw_order_id.encode()).hexdigest()[:32]
-
- return hashed_order_id
-
-
-if __name__ == '__main__':
- print("启动了")
- app.run(host="0.0.0.0", port=5000)
diff --git a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/dataset.py b/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/dataset.py
deleted file mode 100644
index ba0e45be1e8878da0b07eb2128e218bbd7de82ef..0000000000000000000000000000000000000000
--- a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/dataset.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from tqdm import tqdm
-
-from uvr5_pack.lib_v5 import spec_utils
-
-
-class VocalRemoverValidationSet(torch.utils.data.Dataset):
- def __init__(self, patch_list):
- self.patch_list = patch_list
-
- def __len__(self):
- return len(self.patch_list)
-
- def __getitem__(self, idx):
- path = self.patch_list[idx]
- data = np.load(path)
-
- X, y = data["X"], data["y"]
-
- X_mag = np.abs(X)
- y_mag = np.abs(y)
-
- return X_mag, y_mag
-
-
-def make_pair(mix_dir, inst_dir):
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
-
- X_list = sorted(
- [
- os.path.join(mix_dir, fname)
- for fname in os.listdir(mix_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
- y_list = sorted(
- [
- os.path.join(inst_dir, fname)
- for fname in os.listdir(inst_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
-
- filelist = list(zip(X_list, y_list))
-
- return filelist
-
-
-def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
- if split_mode == "random":
- filelist = make_pair(
- os.path.join(dataset_dir, "mixtures"),
- os.path.join(dataset_dir, "instruments"),
- )
-
- random.shuffle(filelist)
-
- if len(val_filelist) == 0:
- val_size = int(len(filelist) * val_rate)
- train_filelist = filelist[:-val_size]
- val_filelist = filelist[-val_size:]
- else:
- train_filelist = [
- pair for pair in filelist if list(pair) not in val_filelist
- ]
- elif split_mode == "subdirs":
- if len(val_filelist) != 0:
- raise ValueError(
- "The `val_filelist` option is not available in `subdirs` mode"
- )
-
- train_filelist = make_pair(
- os.path.join(dataset_dir, "training/mixtures"),
- os.path.join(dataset_dir, "training/instruments"),
- )
-
- val_filelist = make_pair(
- os.path.join(dataset_dir, "validation/mixtures"),
- os.path.join(dataset_dir, "validation/instruments"),
- )
-
- return train_filelist, val_filelist
-
-
-def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
- perm = np.random.permutation(len(X))
- for i, idx in enumerate(tqdm(perm)):
- if np.random.uniform() < reduction_rate:
- y[idx] = spec_utils.reduce_vocal_aggressively(
- X[idx], y[idx], reduction_mask
- )
-
- if np.random.uniform() < 0.5:
- # swap channel
- X[idx] = X[idx, ::-1]
- y[idx] = y[idx, ::-1]
- if np.random.uniform() < 0.02:
- # mono
- X[idx] = X[idx].mean(axis=0, keepdims=True)
- y[idx] = y[idx].mean(axis=0, keepdims=True)
- if np.random.uniform() < 0.02:
- # inst
- X[idx] = y[idx]
-
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
- lam = np.random.beta(mixup_alpha, mixup_alpha)
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
-
- return X, y
-
-
-def make_padding(width, cropsize, offset):
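-    # Pad so the spectrogram width divides evenly into ROI-sized windows, keeping offset columns of context on each side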
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
- len_dataset = patches * len(filelist)
-
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
- ends = starts + cropsize
- for j in range(patches):
- idx = i * patches + j
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
-
- return X_dataset, y_dataset
-
-
-def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
- patch_list = []
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
- cropsize, sr, hop_length, n_fft, offset
- )
- os.makedirs(patch_dir, exist_ok=True)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- basename = os.path.splitext(os.path.basename(X_path))[0]
-
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
- for j in range(len_dataset):
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
- start = j * roi_size
- if not os.path.exists(outpath):
- np.savez(
- outpath,
- X=X_pad[:, :, start : start + cropsize],
- y=y_pad[:, :, start : start + cropsize],
- )
- patch_list.append(outpath)
-
- return VocalRemoverValidationSet(patch_list)
diff --git a/spaces/ltgoslo/ssa-perin/data/field/bert_field.py b/spaces/ltgoslo/ssa-perin/data/field/bert_field.py
deleted file mode 100644
index 0e8d0355c8d6bca353978871980c6f69befc1371..0000000000000000000000000000000000000000
--- a/spaces/ltgoslo/ssa-perin/data/field/bert_field.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python3
-# coding=utf-8
-
-import torch
-from data.field.mini_torchtext.field import RawField
-
-
-class BertField(RawField):
- def __init__(self):
- super(BertField, self).__init__()
-
-    def process(self, example, device=None):
-        # Convert the token-id list to a LongTensor and build a matching attention mask of ones
-        example = torch.tensor(example, dtype=torch.long, device=device)
-        attention_mask = torch.ones_like(example)
-
- return example, attention_mask
diff --git a/spaces/lukelike1001/PlaceAnalysis/README.md b/spaces/lukelike1001/PlaceAnalysis/README.md
deleted file mode 100644
index 423b2fb6761c08798b2bcca9046943417e666b34..0000000000000000000000000000000000000000
--- a/spaces/lukelike1001/PlaceAnalysis/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: PlaceAnalysis
-emoji: 🐠
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/lwdragon/token_classfication/README.md b/spaces/lwdragon/token_classfication/README.md
deleted file mode 100644
index 35868b08a0388d24e84898629df85bf051523f5b..0000000000000000000000000000000000000000
--- a/spaces/lwdragon/token_classfication/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Token Classfication
-emoji: 💩
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/m3hrdadfi/zabanshenas/libs/utils.py b/spaces/m3hrdadfi/zabanshenas/libs/utils.py
deleted file mode 100644
index 6d72695a66fee59165b9b5c3d4d29368ff9ea26b..0000000000000000000000000000000000000000
--- a/spaces/m3hrdadfi/zabanshenas/libs/utils.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import streamlit as st
-import numpy as np
-import plotly.express as px
-
-
-def plot_result(top_languages):
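-    # Draw a horizontal bar chart of the top-k language detections, with confidences shown as percentages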
- top_k = len(top_languages)
- languages = [f'{r["language"]} ({r["code"]})' for r in top_languages]
- scores = np.array([r["score"] for r in top_languages])
- scores *= 100
- fig = px.bar(
- x=scores,
- y=languages,
- orientation='h',
- labels={'x': 'Confidence', 'y': 'Language'},
- text=scores,
- range_x=(0, 115),
- title=f'Top {top_k} Detections',
- color=np.linspace(0, 1, len(scores)),
- color_continuous_scale='Viridis'
- )
- fig.update(layout_coloraxis_showscale=False)
- fig.update_traces(texttemplate='%{text:0.1f}%', textposition='outside')
- fig.update_layout(yaxis={'categoryorder': 'total ascending'})
- st.plotly_chart(fig, use_container_width=True)
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/device_ptr.h b/spaces/ma-xu/LIVE/thrust/thrust/device_ptr.h
deleted file mode 100644
index fb3ad1ee021ab35d2ad1ef63744e9832d08a8b12..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/device_ptr.h
+++ /dev/null
@@ -1,192 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file device_ptr.h
- * \brief A pointer to a variable which resides in the "device" system's memory space
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/memory.h>
-
-namespace thrust
-{
-
-/*! \addtogroup memory_management Memory Management
- * \addtogroup memory_management_classes Memory Management Classes
- * \ingroup memory_management
- * \{
- */
-
-// forward declarations
-template<typename T> class device_reference;
-
-/*! \p device_ptr stores a pointer to an object allocated in device memory. This type
- * provides type safety when dispatching standard algorithms on ranges resident in
- * device memory.
- *
- * \p device_ptr has pointer semantics: it may be dereferenced safely from the host and
- * may be manipulated with pointer arithmetic.
- *
- * \p device_ptr can be created with the functions device_malloc, device_new, or
- * device_pointer_cast, or by explicitly calling its constructor with a raw pointer.
- *
- * The raw pointer encapsulated by a \p device_ptr may be obtained by either its get
- * method or the \p raw_pointer_cast free function.
- *
- * \note \p device_ptr is not a smart pointer; it is the programmer's responsibility to
- * deallocate memory pointed to by \p device_ptr.
- *
- * \see device_malloc
- * \see device_new
- * \see device_pointer_cast
- * \see raw_pointer_cast
- */
-template<typename T>
- class device_ptr
- : public thrust::pointer<
- T,
- thrust::device_system_tag,
-        thrust::device_reference<T>,
-        thrust::device_ptr<T>
- >
-{
- private:
- typedef thrust::pointer<
- T,
- thrust::device_system_tag,
-      thrust::device_reference<T>,
-      thrust::device_ptr<T>
- > super_t;
-
- public:
- /*! \p device_ptr's null constructor initializes its raw pointer to \c 0.
- */
- __host__ __device__
- device_ptr() : super_t() {}
-
- #if THRUST_CPP_DIALECT >= 2011
- // NOTE: This is needed so that Thrust smart pointers can be used in
- // `std::unique_ptr`.
- __host__ __device__
- device_ptr(decltype(nullptr)) : super_t(nullptr) {}
- #endif
-
- /*! \p device_ptr's copy constructor is templated to allow copying to a
- * device_ptr from a T *.
- *
- * \param ptr A raw pointer to copy from, presumed to point to a location in
- * device memory.
- */
-    template<typename OtherT>
- __host__ __device__
- explicit device_ptr(OtherT *ptr) : super_t(ptr) {}
-
- /*! \p device_ptr's copy constructor allows copying from another device_ptr with related type.
- * \param other The \p device_ptr to copy from.
- */
-    template<typename OtherT>
- __host__ __device__
-    device_ptr(const device_ptr<OtherT> &other) : super_t(other) {}
-
- /*! \p device_ptr's assignment operator allows assigning from another \p device_ptr with related type.
- * \param other The other \p device_ptr to copy from.
- * \return *this
- */
-    template<typename OtherT>
- __host__ __device__
-    device_ptr &operator=(const device_ptr<OtherT> &other)
- {
- super_t::operator=(other);
- return *this;
- }
-
- #if THRUST_CPP_DIALECT >= 2011
- // NOTE: This is needed so that Thrust smart pointers can be used in
- // `std::unique_ptr`.
- __host__ __device__
- device_ptr& operator=(decltype(nullptr))
- {
- super_t::operator=(nullptr);
- return *this;
- }
- #endif
-
-// declare these members for the purpose of Doxygenating them
-// they actually exist in a derived-from class
-#if 0
- /*! This method returns this \p device_ptr's raw pointer.
- * \return This \p device_ptr's raw pointer.
- */
- __host__ __device__
- T *get(void) const;
-#endif // end doxygen-only members
-}; // end device_ptr
-
-// declare these methods for the purpose of Doxygenating them
-// they actually are defined for a derived-from class
-#if 0
-/*! Writes to an output stream the value of a \p device_ptr's raw pointer.
- *
- * \param os The output stream.
- * \param p The \p device_ptr to output.
- * \return os.
- */
-template<typename T, typename CharT, typename Traits>
-std::basic_ostream<CharT, Traits> &
-operator<<(std::basic_ostream<CharT, Traits> &os, const device_ptr<T> &p);
-#endif
-
-/*! \}
- */
-
-
-/*!
- * \addtogroup memory_management_functions Memory Management Functions
- * \ingroup memory_management
- * \{
- */
-
-/*! \p device_pointer_cast creates a device_ptr from a raw pointer which is presumed to point
- * to a location in device memory.
- *
- * \param ptr A raw pointer, presumed to point to a location in device memory.
- * \return A device_ptr wrapping ptr.
- */
-template<typename T>
-__host__ __device__
-inline device_ptr<T> device_pointer_cast(T *ptr);
-
-/*! This version of \p device_pointer_cast creates a copy of a device_ptr from another device_ptr.
- * This version is included for symmetry with \p raw_pointer_cast.
- *
- * \param ptr A device_ptr.
- * \return A copy of \p ptr.
- */
-template<typename T>
-__host__ __device__
-inline device_ptr<T> device_pointer_cast(const device_ptr<T> &ptr);
-
-/*! \}
- */
-
-} // end thrust
-
-#include
-#include
-
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce.h
deleted file mode 100644
index c3e7af0d28bc6a4a0d8f8c893bd7348f1b5e59b6..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce.h
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename InputIterator>
-__host__ __device__
-  typename thrust::iterator_traits<InputIterator>::value_type
-  reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last);
-
-
-template<typename DerivedPolicy, typename InputIterator, typename T>
-__host__ __device__
-  T reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last, T init);
-
-
-template<typename DerivedPolicy, typename InputIterator, typename T, typename BinaryFunction>
-__host__ __device__
-  T reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last, T init, BinaryFunction binary_op);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include
-
diff --git a/spaces/mahidher/comment_toxicity/README.md b/spaces/mahidher/comment_toxicity/README.md
deleted file mode 100644
index d7245eae0c4a84d1cd060bc62965acf69b3e9e04..0000000000000000000000000000000000000000
--- a/spaces/mahidher/comment_toxicity/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Comment Toxicity
-emoji: 🐢
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/marioboy/neil-breen/vocoder/hparams.py b/spaces/marioboy/neil-breen/vocoder/hparams.py
deleted file mode 100644
index c1de9f7dcc2926735b80a28ed1226ff1b5824753..0000000000000000000000000000000000000000
--- a/spaces/marioboy/neil-breen/vocoder/hparams.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from synthesizer.hparams import hparams as _syn_hp
-
-
-# Audio settings------------------------------------------------------------------------
-# Match the values of the synthesizer
-sample_rate = _syn_hp.sample_rate
-n_fft = _syn_hp.n_fft
-num_mels = _syn_hp.num_mels
-hop_length = _syn_hp.hop_size
-win_length = _syn_hp.win_size
-fmin = _syn_hp.fmin
-min_level_db = _syn_hp.min_level_db
-ref_level_db = _syn_hp.ref_level_db
-mel_max_abs_value = _syn_hp.max_abs_value
-preemphasis = _syn_hp.preemphasis
-apply_preemphasis = _syn_hp.preemphasize
-
-bits = 9 # bit depth of signal
-mu_law = True # Recommended to suppress noise if using raw bits in hp.voc_mode
- # below
-
-
-# WAVERNN / VOCODER --------------------------------------------------------------------------------
-voc_mode = 'RAW' # either 'RAW' (softmax on raw bits) or 'MOL' (sample from
-# mixture of logistics)
-voc_upsample_factors = (5, 5, 8) # NB - this needs to correctly factorise hop_length
-voc_rnn_dims = 512
-voc_fc_dims = 512
-voc_compute_dims = 128
-voc_res_out_dims = 128
-voc_res_blocks = 10
-
-# Training
-voc_batch_size = 100
-voc_lr = 1e-4
-voc_gen_at_checkpoint = 5 # number of samples to generate at each checkpoint
-voc_pad = 2 # this will pad the input so that the resnet can 'see' wider
- # than input length
-voc_seq_len = hop_length * 5 # must be a multiple of hop_length
-
-# Generating / Synthesizing
-voc_gen_batched = True # very fast (realtime+) single utterance batched generation
-voc_target = 8000 # target number of samples to be generated in each batch entry
-voc_overlap = 400 # number of samples for crossfading between batches
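The comments above encode two consistency constraints between synthesizer and vocoder settings: the upsample factors must multiply out to the hop length, and the training sequence length must be a whole number of hops. A minimal sanity-check sketch, assuming the synthesizer's hop_size is 200 (which is what the (5, 5, 8) factors imply):

    import numpy as np

    hop_length = 200                        # assumed value of _syn_hp.hop_size
    voc_upsample_factors = (5, 5, 8)
    voc_seq_len = hop_length * 5

    assert int(np.prod(voc_upsample_factors)) == hop_length  # factors must factorise hop_length
    assert voc_seq_len % hop_length == 0                      # seq length is a multiple of hop_length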
diff --git a/spaces/marrocovin/OPENAI_KEY/src/logger.ts b/spaces/marrocovin/OPENAI_KEY/src/logger.ts
deleted file mode 100644
index ed2642025769b107b97ab08a00e258217d6a451f..0000000000000000000000000000000000000000
--- a/spaces/marrocovin/OPENAI_KEY/src/logger.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-import pino from "pino";
-
-export const logger = pino();
diff --git a/spaces/matthoffner/open-codetree/constants/icon.tsx b/spaces/matthoffner/open-codetree/constants/icon.tsx
deleted file mode 100644
index f21da0b2047f7063ed6f87b8b38dd4c38b200570..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/open-codetree/constants/icon.tsx
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-export const SendIcon = ({ className }: { className?: string }) => {
- return (
-
- );
-};
\ No newline at end of file
diff --git a/spaces/matthoffner/open-codetree/graphql/definitions/upload.definition.ts b/spaces/matthoffner/open-codetree/graphql/definitions/upload.definition.ts
deleted file mode 100644
index 363291d81dab2eab153ccd89a62e3963ed57d82c..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/open-codetree/graphql/definitions/upload.definition.ts
+++ /dev/null
@@ -1,10 +0,0 @@
-import { gql } from "@apollo/client";
-
-export const uploadFileMutation =
- gql(`mutation UploadFile($input: UploadInput!) {
- uploadFile(input: $input) {
- url
- message
- status
- }
-}`);
diff --git a/spaces/mehradans92/decode-elm/test/test.py b/spaces/mehradans92/decode-elm/test/test.py
deleted file mode 100644
index 24f3ef84ad52ced8fa52b059c7af9a1bda7e86b3..0000000000000000000000000000000000000000
--- a/spaces/mehradans92/decode-elm/test/test.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import unittest
-import sys
-sys.path.append('../')
-from utils import *
-import os
-import shutil
-
-class Utils(unittest.TestCase):
- def test_arXiv_API(self):
- search_query = 'Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games'
- pdf_info = "('Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games', 'http://arxiv.org/pdf/2302.08479v1', ['Vanessa Volz', 'Boris Naujoks', 'Pascal Kerschke', 'Tea Tusar'], ['cs.AI'], 'docs', 'Vanessa Volz, Boris Naujoks, Pascal Kerschke, Tea Tusar, Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games. arXiv [cs.AI] (2023), (available at http://arxiv.org/pdf/2302.08479v1).')"
- max_results = 1
- XRxiv_servers = ['rxiv']
- search_engines = XRxivQuery(search_query, max_results, XRxiv_servers=XRxiv_servers)
- test_pdf_info = search_engines.call_API()
- self.assertEqual(pdf_info, str(test_pdf_info[0]))
-
- def test_download_pdf(self):
- search_query = 'Serverless Applications: Why, When, and How?'
- max_results = 1
- XRxiv_servers = ['rxiv']
- search_engines = XRxivQuery(search_query, max_results, XRxiv_servers=XRxiv_servers)
- test_pdf_info = search_engines.call_API()
- search_engines.download_pdf()
- dowloaded_dir = 'docs/Serverless Applications Why When and How .pdf'
- self.assertTrue(os.path.exists(dowloaded_dir))
- shutil.rmtree(f'docs/')
-
- def test_distibute_max_papers(self):
- XRxiv_servers = ['rxiv', 'medrxiv']
- max_results = 10
- max_papers_in_server = distibute_max_papers(max_results, XRxiv_servers)
- self.assertEqual(max_results, np.sum(max_papers_in_server))
- self.assertEqual(max_papers_in_server[2], 0)
- self.assertGreater(max_papers_in_server[0],0)
- self.assertGreater(max_papers_in_server[3],0)
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
diff --git a/spaces/merve/anonymization/source/uncertainty-calibration/util.js b/spaces/merve/anonymization/source/uncertainty-calibration/util.js
deleted file mode 100644
index a0ce5b12a2a642f1186cc4004e90b046a89611f8..0000000000000000000000000000000000000000
--- a/spaces/merve/anonymization/source/uncertainty-calibration/util.js
+++ /dev/null
@@ -1,38 +0,0 @@
-window.initUtil = function(){
- function addAxisLabel(c, xText, yText, xOffset=40, yOffset=-40){
- c.svg.select('.x').append('g')
- .translate([c.width/2, xOffset])
- .append('text.axis-label')
- .text(xText)
- .at({textAnchor: 'middle'})
- .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'})
-
- c.svg.select('.y')
- .append('g')
- .translate([yOffset, c.height/2])
- .append('text.axis-label')
- .text(yText)
- .at({textAnchor: 'middle', transform: 'rotate(-90)'})
- .st({fill: '#000', fontSize: 14, fontFamily: 'sans-serif'})
- }
-
- function ggPlotBg(c, isBlack=true){
- if (isBlack){
- c.svg.append('rect.bg-rect')
- .at({width: c.width, height: c.height, fill: '#eee'})
- .lower()
- }
-
- c.svg.selectAll('.tick').selectAll('line').remove()
- c.svg.selectAll('.y .tick')
- .append('path').at({d: 'M 0 0 H ' + c.width, stroke: '#fff', strokeWidth: 1})
- c.svg.selectAll('.y text').at({x: -3})
- c.svg.selectAll('.x .tick')
- .append('path').at({d: 'M 0 0 V -' + c.height, stroke: '#fff', strokeWidth: 1})
- }
-
-
- return {addAxisLabel, ggPlotBg}
-}
-
-if (window.init) window.init()
\ No newline at end of file
diff --git a/spaces/merve/dataset-worldviews/public/dataset-worldviews/shape-params.js b/spaces/merve/dataset-worldviews/public/dataset-worldviews/shape-params.js
deleted file mode 100644
index b36a500b99b8789ffe044a738c86e1459317974a..0000000000000000000000000000000000000000
--- a/spaces/merve/dataset-worldviews/public/dataset-worldviews/shape-params.js
+++ /dev/null
@@ -1,527 +0,0 @@
-const shapeParams = [
- {
- shape_name: "circle",
- pointiness: "round",
- size: "large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 25.0 0 A 0.5 0.5 0 0 0 -50 0 M -50 0 A 0.5 0.5 0 0 0 25.0 0",
- startX: 47.5,
- startY: 84.21875,
- endX: 474.5,
- endY: 293.828125,
- initialX: 50.5,
- initialY: 85.21875,
- },
- {
- shape_name: "circle",
- pointiness: "round",
- size: "large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 22.5 0 A 0.5 0.5 0 0 0 -45 0 M -45 0 A 0.5 0.5 0 0 0 22.5 0",
- startX: 247,
- startY: 433.828125,
- endX: 641.5,
- endY: 248.828125,
- initialX: 575.5,
- initialY: 157.21875,
- },
- {
- shape_name: "circle",
- pointiness: "round",
- size: "large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 30.0 0 A 0.5 0.5 0 0 0 -60 0 M -60 0 A 0.5 0.5 0 0 0 30.0 0",
- startX: 189.5,
- startY: 170.21875,
- endX: 799.5,
- endY: 325.828125,
- initialX: 511.5,
- initialY: 75.21875,
- },
- {
- shape_name: "circle",
- pointiness: "round",
- size: "large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 25.0 0 A 0.5 0.5 0 0 0 -50 0 M -50 0 A 0.5 0.5 0 0 0 25.0 0",
- startX: 37.5,
- startY: 440.21875,
- endX: 475,
- endY: 425.21875,
- initialX: 715.5,
- initialY: 213.21875,
- },
- {
- shape_name: "circle",
- pointiness: "round",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 17.5 0 A 0.5 0.5 0 0 0 -35 0 M -35 0 A 0.5 0.5 0 0 0 17.5 0",
- startX: 282,
- startY: 207.828125,
- endX: 460.5,
- endY: 217.21875,
- initialX: 280.5,
- initialY: 146.21875,
- },
- {
- shape_name: "circle",
- pointiness: "round",
- size: "rt_small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M 12.5 0 A 0.5 0.5 0 0 0 -25 0 M -25 0 A 0.5 0.5 0 0 0 12.5 0",
- startX: 125.5,
- startY: 418.21875,
- endX: 715.5,
- endY: 76.828125,
- initialX: 680.5,
- initialY: 147.21875,
- },
- {
- shape_name: "rect",
- pointiness: "pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "shaded",
- correctness: "incorrect",
- path: "M -45 -15 L 25.0 -15 L 25.0 5.0 L -45 5.0 L -45 -15",
- startX: 77.5,
- startY: 35.21875,
- endX: 712.5,
- endY: 124.828125,
- initialX: 79.5,
- initialY: 35.21875,
- },
- {
- shape_name: "rect",
- pointiness: "pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M -40 -60 L -20 -70 L 18 3 L -3 12.5 L -40 -60",
- startX: 320,
- startY: 451.828125,
- endX: 707.5,
- endY: 339.828125,
- initialX: 672.5,
- initialY: 104.21875,
- },
- {
- shape_name: "rect",
- pointiness: "pointy",
- size: "rt_small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -30 -15 L 12.5 -15 L 12.5 5.5 L -30 5.5 L -30 -15",
- startX: 29.5,
- startY: 389.21875,
- endX: 774.5,
- endY: 78.828125,
- initialX: 115.5,
- initialY: 234.21875,
- },
- {
- shape_name: "rect",
- pointiness: "pointy",
- size: "rt_small",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M -11 -34 L 4.5 -34 L 4.5 6.0 L -11 6.0 L -11 -34",
- startX: 242,
- startY: 271.828125,
- endX: 574.5,
- endY: 391.828125,
- initialX: 258.5,
- initialY: 230.21875,
- },
- {
- shape_name: "rect",
- pointiness: "pointy",
- size: "rt_small",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M -10 -45 L 4.5 -45 L 4.5 6.0 L -10 6.0 L -10 -45",
- startX: 76.5,
- startY: 177.21875,
- endX: 522.5,
- endY: 327.828125,
- initialX: 89.5,
- initialY: 170.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 25.0 0 M -50 0 L -44 2.0 L -50 3.5 L -44 5.0 L -48 7.5 L -41 8.0 L -45 10.5 L -37 10.5 L -41 14.0 L -34 14.5 L -35 17.5 L -29 16.5 L -28 20.5 L -22 19.5 L -21 22.5 L -14 21.0 L -12 24.0 L -7 22.0 L -4 24.5 L 0 22.5 L 2.0 24.5 L 3.5 21.5 L 5.5 24.0 L 7.5 21.0 L 9.5 22.5 L 9.5 19.5 L 12.5 21.0 L 13.0 17.5 L 16.0 18.5 L 15.5 15.0 L 19.0 15.5 L 17.0 12.5 L 21.0 12.5 L 18.5 10.0 L 22.5 9.5 L 19.5 7.0 L 23.5 6.5 L 20.0 4.5 L 24.0 4.0 L 20.5 2.0 L 25.0 0 L 21.0 -3 L 25.0 -6 L 21.0 -9 L 24.0 -13 L 20.5 -14 L 23.0 -19 L 20.0 -20 L 21.5 -25 L 18.0 -25 L 19.0 -32 L 15.0 -30 L 16.0 -38 L 12.5 -36 L 13.0 -43 L 10.0 -40 L 10.0 -46 L 7.0 -42 L 6.5 -48 L 4.0 -43 L 3.5 -49 L 1.5 -43 L 0 -50 L -3 -43 L -8 -49 L -9 -43 L -15 -48 L -15 -42 L -21 -46 L -21 -40 L -26 -43 L -26 -37 L -31 -39 L -30 -33 L -37 -34 L -35 -28 L -40 -29 L -38 -24 L -44 -25 L -42 -20 L -46 -20 L -44 -15 L -49 -14 L -45 -9 L -50 -6 L -45 -3 L -50 0",
- startX: 319,
- startY: 290.828125,
- endX: 738,
- endY: 410.21875,
- initialX: 605.5,
- initialY: 83.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "round",
- size: "large",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M 26.5 1.0 C 34.0 -75 -43 -70 -36 -34 M -36 -34 C -42 -14 -70 -34 -66 0 V 0 C -66 19.5 -47 26.0 3.5 26.5 C 11.5 28.0 26.0 13.0 26.5 1.0",
- startX: 154.5,
- startY: 89.21875,
- endX: 519.5,
- endY: 128.828125,
- initialX: 151.5,
- initialY: 88.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "round",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 26.5 1.0 C 34.0 -75 -43 -70 -42 -51 M -42 -51 C -42 -14 -82 -12 -38 -4 V -4 C -9 0 -47 26.0 2.0 24.0 C 16.5 22.0 23.5 12.0 26.5 1.0",
- startX: 254,
- startY: 368.828125,
- endX: 749.5,
- endY: 254.828125,
- initialX: 497.5,
- initialY: 192.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "round",
- size: "rt_small",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 17.0 -9 C 9.5 -44 -1 -65 -40 -34 M -40 -34 C -61 -15 -59 0.5 -38 9.5 C -19 19.0 -47 26.0 8.0 15.5 C 16.5 12.5 23.5 12.0 17.0 -9",
- startX: 42.5,
- startY: 185.21875,
- endX: 664,
- endY: 448.21875,
- initialX: 410.5,
- initialY: 148.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "rt_pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 14.0 3.5 L -6 0.5 L 15.0 -5 A 0.5 0.5 0 0 0 -48 0 M -48 0 A 0.5 0.5 0 0 0 14.0 3.5",
- startX: 48.5,
- startY: 252.21875,
- endX: 576,
- endY: 443.21875,
- initialX: 160.5,
- initialY: 155.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "rt_round",
- size: "small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M 6.0 1.5 C 5.5 -3 0 4.5 -3 -1 C -3 -10 2.5 -7 6.0 -4 A 0.5 0.5 0 0 0 -18 0 M -18 0 A 0.5 0.5 0 0 0 6.0 1.5",
- startX: 334,
- startY: 185.828125,
- endX: 652.5,
- endY: 83.828125,
- initialX: 13.5,
- initialY: 232.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "rt_round",
- size: "small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -10 0 A 0.5 0.5 0 0 0 5.0 0 C 5.0 -12 3.5 -17 0 -10 C -7 -17 -10 -12 -10 0",
- startX: 318,
- startY: 355.828125,
- endX: 581,
- endY: 145.21875,
- initialX: 293.5,
- initialY: 190.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "rt_round",
- size: "small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -10 0 A 0.5 0.5 0 0 0 4.5 -3 C 5.5 0 6.5 4.5 7.5 0.5 C 7.5 -11 2.5 -18 -7 -11 C 3.5 -4 -10 -12 -10 0",
- startX: 80,
- startY: 308.828125,
- endX: 731.5,
- endY: 42.828125,
- initialX: 621.5,
- initialY: 132.21875,
- },
- {
- shape_name: "rt_circle",
- pointiness: "rt_round",
- size: "small",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 0 10.0 C -20 7.5 -20 -5 -6 -15 L 2.5 -15 C 10.0 -5 10.0 7.5 0 10.0",
- startX: 199.5,
- startY: 50.21875,
- endX: 719.5,
- endY: 458.828125,
- initialX: 246.5,
- initialY: 59.21875,
- },
- {
- shape_name: "rt_rect",
- pointiness: "rt_pointy",
- size: "rt_large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 0 20.0 C -50 15.0 -10 35.0 -20 -45 L 10.0 -45 C 5.0 35.0 25.0 15.0 0 20.0",
- startX: 93.5,
- startY: 261.21875,
- endX: 807.5,
- endY: 250.828125,
- initialX: 57.5,
- initialY: 189.21875,
- },
- {
- shape_name: "rt_rect",
- pointiness: "rt_pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M 27.5 7.0 C -50 15.0 -39 33.5 -37 9.5 S -76 -1 -45 -21 C 11.0 -51 23.0 -52 27.5 7.0",
- startX: 284.5,
- startY: 152.21875,
- endX: 544.5,
- endY: 230.828125,
- initialX: 411.5,
- initialY: 73.21875,
- },
- {
- shape_name: "rt_rect",
- pointiness: "rt_pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M -25 -30 L 10.0 -30 C 22.5 0 22.5 0 10.0 15.0 L -25 15.0 C 0 0 0 0 -25 -30",
- startX: 219.5,
- startY: 99.21875,
- endX: 525.5,
- endY: 381.828125,
- initialX: 213.5,
- initialY: 96.21875,
- },
- {
- shape_name: "rt_rect",
- pointiness: "rt_pointy",
- size: "rt_large",
- gt: "unshaded",
- label: "unshaded",
- correctness: "correct",
- path: "M -25 -50 L 10.0 -50 C 0 0 22.5 0 10.0 25.0 L -25 25.0 C 0 0 -45 0 -25 -50",
- startX: 79.5,
- startY: 380.21875,
- endX: 565.5,
- endY: 298.828125,
- initialX: 719.5,
- initialY: 87.21875,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_pointy",
- size: "large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M -45 -50 L 22.5 -50 L 0 34.5 C 0 0 -45 0 -45 -50",
- startX: 325.5,
- startY: 94.21875,
- endX: 636.5,
- endY: 360.828125,
- initialX: 324.5,
- initialY: 88.2,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_round",
- size: "large",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M -47 15.0 L -15 -56 C -7 -82 41.5 15.5 28.0 15.5 C 0 15.5 0 15.5 -47 15.0",
- startX: 191,
- startY: 283.828125,
- endX: 796,
- endY: 448.21875,
- initialX: 349.5,
- initialY: 223.21875,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_round",
- size: "large",
- gt: "unshaded",
- label: "shaded",
- correctness: "incorrect",
- path: "M 21.0 17.5 L -43 17.5 C -31 -26 9.5 -44 16.0 -69 C 24.5 -80 15.5 -12 21.0 17.5",
- startX: 163.5,
- startY: 446.21875,
- endX: 794.5,
- endY: 134.828125,
- initialX: 622.5,
- initialY: 210.21875,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_round",
- size: "rt_large",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -20 -35 L -20 10 L 25 10 C 25 5 25 5 20 5 C 20 0 20 0 15 0 C 15 -5 15 -5 10 -5 C 10 -10 10 -10 5 -10 C 5 -15 5 -15 0 -15 C 0 -20 0 -20 -5 -20 C -5 -25 -5 -25 -10 -25 C -10 -30 -10 -30 -15 -30 C -15 -35 -15 -35 -20 -35",
- startX: 132,
- startY: 350.828125,
- endX: 643.5,
- endY: 149.828125,
- initialX: 190.5,
- initialY: 240.21875,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_round",
- size: "small",
- gt: "shaded",
- label: "unshaded",
- correctness: "incorrect",
- path: "M 0 6.5 C 5.0 5.5 8.5 -8 7.5 -10 L -15 -10 C -17 -8 -10 5.5 0 6.5",
- startX: 87.5,
- startY: 461.21875,
- endX: 443.5,
- endY: 370.828125,
- initialX: 416.5,
- initialY: 234.21875,
- },
- {
- shape_name: "rt_triangle",
- pointiness: "rt_round",
- size: "small",
- gt: "unshaded",
- label: "shaded",
- correctness: "incorrect",
- path: "M 22.5 0 C 22.5 -11.25 11.25 -18.75 0 -15 C 0 -3.75 -11.25 11.25 -8.25 7.5 C -3.75 18.75 11.25 0 22.5 0",
- startX: 168,
- startY: 330.828125,
- endX: 522.5,
- endY: 47.828125,
- initialX: 402.5,
- initialY: 193.21875,
- },
- {
- shape_name: "triangle",
- pointiness: "pointy",
- size: "rt_large",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -9 25.0 L 7.5 25.0 L 0 -45 L -9 25.0",
- startX: 126.5,
- startY: 249.21875,
- endX: 433.5,
- endY: 135.828125,
- initialX: 219.5,
- initialY: 183.21875,
- },
- {
- shape_name: "triangle",
- pointiness: "pointy",
- size: "rt_small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M -29 5.0 L 15.0 0 L -29 -16 L -29 5.0",
- startX: 277.5,
- startY: 98.21875,
- endX: 596.5,
- endY: 70.828125,
- initialX: 280.5,
- initialY: 103.21875,
- },
- {
- shape_name: "triangle",
- pointiness: "pointy",
- size: "rt_small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M 3.5 13.5 L 9.5 -20 L -36 0 L 3.5 13.5",
- startX: 257.5,
- startY: 53.21875,
- endX: 593.5,
- endY: 105.828125,
- initialX: 546.5,
- initialY: 235.21875,
- },
- {
- shape_name: "triangle",
- pointiness: "pointy",
- size: "rt_small",
- gt: "unshaded",
- label: "shaded",
- correctness: "incorrect",
- path: "M 12.5 10.0 L 0 -35 L -25 10.0 L 12.5 10.0",
- startX: 15.5,
- startY: 332.8,
- endX: 463,
- endY: 63.21875,
- initialX: 13.5,
- initialY: 164.21875,
- },
- {
- shape_name: "triangle",
- pointiness: "pointy",
- size: "small",
- gt: "shaded",
- label: "shaded",
- correctness: "correct",
- path: "M 4.5 1.5 L 0 -15 L -8 1.5 L 4.5 1.5",
- startX: 111,
- startY: 180.828125,
- endX: 784.5,
- endY: 42.828125,
- initialX: 195.5,
- initialY: 136.21875,
- },
-];
diff --git a/spaces/merve/dataset-worldviews/public/private-and-fair/umap-digit.js b/spaces/merve/dataset-worldviews/public/private-and-fair/umap-digit.js
deleted file mode 100644
index f2fd20ea8d672ab49ca2698135c581605524bb46..0000000000000000000000000000000000000000
--- a/spaces/merve/dataset-worldviews/public/private-and-fair/umap-digit.js
+++ /dev/null
@@ -1,139 +0,0 @@
-
-!(async function(){
- var data = await util.getFile('mnist_train.csv')
- data.forEach(d => {
- delete d['']
- d.i = +d.i
- })
-
- var sel = d3.select('.umap-digit').html('')
- .at({role: 'graphics-document', 'aria-label': `Color coded UMAP of MNIST 1s showing that increasing privacy will misclassify slanted and serif “1” digits first.`})
-
- var umapSel = sel.append('div')
- .append('div.chart-title').text('Sensitivity to higher privacy levels →')
- .parent()
- .st({maxWidth: 600, margin: '0 auto', marginBottom: 10})
- .append('div')
-
-
- var buttonSel = sel.append('div.digit-button-container')
- .appendMany('div.button', d3.range(10))
- .text(d => d)
- .on('click', d => drawDigitUmap(d))
-
-
- drawDigitUmap(1)
-
-
- async function drawDigitUmap(digit){
- buttonSel.classed('active', d => d == digit)
-
- // var umap = await util.getFile(`umap_train_${digit}.npy`)
- var umap = await util.getFile(`cns-cache/umap_train_784_${digit}.npy`)
- util.getFile(`cns-cache/mnist_train_raw_${digit}.npy`)
-
- var digitData = data
- .filter(d => d.y == digit)
- .map((d, i) => ({
- rawPos: [umap.data[i*2 + 0], umap.data[i*2 + 1]],
- priv_order: d.priv_order,
- y: d.y,
- i: d.i
- }))
-
- var c = d3.conventions({
- sel: umapSel.html(''),
- width: 600,
- height: 600,
- layers: 'sdc',
- margin: {top: 45}
- })
-
- var nTicks = 200
- c.svg.appendMany('rect', d3.range(nTicks))
- .at({
- height: 15,
- width: 1,
- fill: i => d3.interpolatePlasma(i/nTicks),
- })
- .translate(i => [c.width/2 - nTicks/2 - 20 + i, -c.margin.top + 5])
-
-
- c.x.domain(d3.extent(digitData, d => d.rawPos[0]))
- c.y.domain(d3.extent(digitData, d => d.rawPos[1]))//.range([0, c.height])
- digitData.forEach(d => d.pos = [c.x(d.rawPos[0]), c.y(d.rawPos[1])])
-
- c.sel.select('canvas').st({pointerEvents: 'none'})
- var divSel = c.layers[1].st({pointerEvents: 'none'})
- var ctx = c.layers[2]
-
- digitData.forEach(d => {
- ctx.beginPath()
- ctx.fillStyle = d3.interpolatePlasma(1 - d.priv_order/60000)
- ctx.rect(d.pos[0], d.pos[1], 2, 2)
- ctx.fill()
- })
-
- var p = 10
- c.svg
- .append('rect').at({width: c.width + p*2, height: c.height + p*2, x: -p, y: -p})
- .parent()
- .call(d3.attachTooltip)
- .on('mousemove', function(){
- var [px, py] = d3.mouse(this)
-
- var minPoint = _.minBy(digitData, d => {
- var dx = d.pos[0] - px
- var dy = d.pos[1] - py
-
- return dx*dx + dy*dy
- })
-
- var s = 4
- var c = d3.conventions({
- sel: ttSel.html('').append('div'),
- width: 4*28,
- height: 4*28,
- layers: 'cs',
- margin: {top: 0, left: 0, right: 0, bottom: 0}
- })
-
- // Label: ${minPoint.y}
- // ttSel.append('div').html(`
- // Privacy Rank ${d3.format(',')(minPoint.priv_order)}
- // `)
-
- ttSel.classed('tooltip-footnote', 0).st({width: 112})
-
- util.drawDigit(c.layers[0], +minPoint.i, s)
- })
-
- if (digit == 1){
- var circleDigits = [
- {r: 40, index: 1188},
- {r: 53, index: 18698},
- {r: 40, index: 1662}
- ]
- circleDigits.forEach(d => {
- d.pos = digitData.filter(e => e.priv_order == d.index)[0].pos
- })
-
- c.svg.append('g')
- .appendMany('g', circleDigits)
- .translate(d => d.pos)
- .append('circle')
- .at({r: d => d.r, fill: 'none', stroke: '#fff', strokeDasharray: '2 3', strokeWidth: 1})
-
- var {r, pos} = circleDigits[0]
-
-
- divSel
- .append('div').translate(pos)
- .append('div').translate([r + 20, -r + 10])
- .st({width: 150, fontWeight: 300, fontSize: 14, color: '#fff', xbackground: 'rgba(255,0,0,.2)', lineHeight: '1.2em'})
- .text('Increasing privacy will misclassify slanted and serif “1” digits first')
- }
- }
-})()
-
-
diff --git a/spaces/merve/measuring-fairness/public/dataset-worldviews/style.css b/spaces/merve/measuring-fairness/public/dataset-worldviews/style.css
deleted file mode 100644
index b8cdd4b074388e961c5dd22322a9e056903f2b2c..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/public/dataset-worldviews/style.css
+++ /dev/null
@@ -1,260 +0,0 @@
-:root {
- --shaded-shape-color: #9e9e9e;
- --not-shaded-shape-color: white;
- --classifier-bg-color: #e6e6e6;
-}
-
-.right {
- float: right;
-}
-.left {
- float: left;
-}
-
-.gt-shaded {
- fill: var(--shaded-shape-color);
- stroke: black;
- stroke-width: 1;
-}
-
-.gt-unshaded {
- fill: var(--not-shaded-shape-color);
- stroke: black;
- stroke-width: 1;
-}
-
-.shape-label-group {
- opacity: 0;
-}
-.shape-label-group.visible {
- opacity: 100;
-}
-
-.incorrect.is-classified {
- stroke-width: 2;
- transition: stroke-width 0.5s;
- transition-timing-function: cubic-bezier(0, 7, 0, 7);
- stroke: #d15830;
-}
-
-.correct.is-classified {
- stroke-width: 1;
- stroke: green;
-}
-
-.shape-label-rect {
- opacity: 50;
- fill: white;
- stroke: none;
-}
-
-.shape-label-text {
- color: black;
-}
-
-.source {
- text-decoration: none;
- font-size: 10px;
-}
-
-.newspaper-image {
- width: 450px;
-}
-
-.interface-image {
- width: 450px;
-}
-.summary-text {
- opacity: 0;
- padding-top: 0px;
- padding-bottom: 20px;
- text-indent: 50px;
-}
-
-.summary-text.is-classified {
- transition: opacity 1000ms;
- transition-delay: 2500ms;
- opacity: 100;
-}
-
-.classifier {
- /* fill:#c2c2c2;
- stroke-width: 0;*/
- opacity: 0;
-}
-
-.classifier.is-classified {
- transition: opacity 1000ms;
- transition-delay: 1500ms;
- opacity: 100;
- fill: #c2c2c2;
- stroke-width: 2;
-}
-
-.classifier-text {
- text-anchor: middle;
- /*alignment-baseline: central;*/
- font-size: 30px;
-}
-
-.classifier-caption {
- width: 800px;
- text-align: center;
- position: relative;
- left: 50%;
- margin-left: -400px;
- font-size: 12px;
- /*right: 50%;*/
-}
-
-.classifier-bg-shaded {
- fill: var(--classifier-bg-color);
- stroke-width: 0;
-}
-
-.classifier-bg-unshaded {
- fill: var(--classifier-bg-color);
-}
-
-.item-text.invisible {
- fill-opacity: 10;
-}
-.item-text {
- fill-opacity: 100;
-}
-
-.explainer-label-text {
- padding-left: 2px;
- padding-right: 2px;
- padding-top: 1px;
- padding-bottom: 1px;
-}
-
-mark {
- padding-left: 2px;
- padding-right: 2px;
- padding-top: 1px;
- padding-bottom: 1px;
- outline: 1px solid #000000;
-}
-
-img.interface {
- padding-top: 20px;
- padding-right: 20px;
- padding-bottom: 20px;
- padding-left: 20px;
-}
-
-.classifier-button {
- padding: 10px 20px;
- text-align: center;
- font-family: "Google Sans", sans-serif;
- margin-left: 20px;
- margin-right: 20px;
-}
-
-.classifer-bg-text {
- font-family: "Consolas", "monaco", "monospace";
-}
-
-.emphasis {
- font-weight: 500;
-}
-
-.dropdown {
- padding: 8px 7px;
- min-width: 200px;
- background-color: #f9f9f9;
- box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2);
- font-family: "Google Sans", sans-serif;
- font-size: 14px;
-}
-
-.fake-dropdown {
- padding-top: 10px;
- padding-bottom: 10px;
- padding-left: 10px;
- padding-right: 10px;
-}
-
-.monospace {
- font-family: "Consolas", "monaco", "monospace";
- font-size: 14px;
- font-weight: 500;
-}
-
-.monospace.shaded {
- background-color: var(--shaded-shape-color);
- outline: 1px solid #000000;
- padding: 1px;
- font-size: 14px;
-}
-
-.monospace.not-shaded {
- background-color: var(--not-shaded-shape-color);
- outline: 1px solid #000000;
- padding: 1px;
- font-size: 14px;
-}
-
-.classifier-info-blurb {
- font-style: italic;
-  font-size: 11px;
-}
-
-.photo-button {
- cursor: pointer;
-}
-
-.photo-button rect {
- fill: #ffffff;
-}
-
-.photo-button.is-active-button rect {
- stroke: #000;
-}
-
-.explainer-button {
- cursor: pointer;
-}
-
-.explainer-button rect {
- fill: #f9f9f9;
- stroke: #000000;
-}
-
-.explainer-button.explainer-active-button rect {
- fill: #fefefe;
- stroke-width: 3;
-}
-
-.tooltip {
- width: 180px;
- text-align: center;
-}
-
-.tooltip .correct-row span {
- outline: 1px solid red;
- padding: 2px;
-}
-
-.tooltip .correct-row.is-correct-tooltip span {
- outline: 1px solid green;
-}
-
-#row.row-highlighted {
- opacity: 0.2;
-}
-
-.shape-row-unhighlighted {
- opacity: 0.2;
-}
-
-.results-table {
- text-align: center;
-}
-
-.results-table tr.active {
- background-color: var(--classifier-bg-color);
- outline: 1px solid;
-}
diff --git a/spaces/mpatel57/WOUAF-Text-to-Image/README.md b/spaces/mpatel57/WOUAF-Text-to-Image/README.md
deleted file mode 100644
index fb011fa594a60a0d44ae9c20bd93ded3769c95c7..0000000000000000000000000000000000000000
--- a/spaces/mpatel57/WOUAF-Text-to-Image/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: WOUAF Text To Image
-emoji: 🔥
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mrneuralnet/P-DFD/data/data_augment.py b/spaces/mrneuralnet/P-DFD/data/data_augment.py
deleted file mode 100644
index c1b52ae19bf8d9ac3fa256b68730ce1b556c6d6e..0000000000000000000000000000000000000000
--- a/spaces/mrneuralnet/P-DFD/data/data_augment.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import cv2
-import numpy as np
-import random
-from utils.box_utils import matrix_iof
-
-
-def _crop(image, boxes, labels, landm, img_dim):
- height, width, _ = image.shape
- pad_image_flag = True
-
- for _ in range(250):
- """
- if random.uniform(0, 1) <= 0.2:
- scale = 1.0
- else:
- scale = random.uniform(0.3, 1.0)
- """
- PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]
- scale = random.choice(PRE_SCALES)
- short_side = min(width, height)
- w = int(scale * short_side)
- h = w
-
- if width == w:
- l = 0
- else:
- l = random.randrange(width - w)
- if height == h:
- t = 0
- else:
- t = random.randrange(height - h)
- roi = np.array((l, t, l + w, t + h))
-
- value = matrix_iof(boxes, roi[np.newaxis])
- flag = (value >= 1)
- if not flag.any():
- continue
-
- centers = (boxes[:, :2] + boxes[:, 2:]) / 2
- mask_a = np.logical_and(roi[:2] < centers, centers < roi[2:]).all(axis=1)
- boxes_t = boxes[mask_a].copy()
- labels_t = labels[mask_a].copy()
- landms_t = landm[mask_a].copy()
- landms_t = landms_t.reshape([-1, 5, 2])
-
- if boxes_t.shape[0] == 0:
- continue
-
- image_t = image[roi[1]:roi[3], roi[0]:roi[2]]
-
- boxes_t[:, :2] = np.maximum(boxes_t[:, :2], roi[:2])
- boxes_t[:, :2] -= roi[:2]
- boxes_t[:, 2:] = np.minimum(boxes_t[:, 2:], roi[2:])
- boxes_t[:, 2:] -= roi[:2]
-
- # landm
- landms_t[:, :, :2] = landms_t[:, :, :2] - roi[:2]
- landms_t[:, :, :2] = np.maximum(landms_t[:, :, :2], np.array([0, 0]))
- landms_t[:, :, :2] = np.minimum(landms_t[:, :, :2], roi[2:] - roi[:2])
- landms_t = landms_t.reshape([-1, 10])
-
-
- # make sure that the cropped image contains at least one face > 16 pixel at training image scale
- b_w_t = (boxes_t[:, 2] - boxes_t[:, 0] + 1) / w * img_dim
- b_h_t = (boxes_t[:, 3] - boxes_t[:, 1] + 1) / h * img_dim
- mask_b = np.minimum(b_w_t, b_h_t) > 0.0
- boxes_t = boxes_t[mask_b]
- labels_t = labels_t[mask_b]
- landms_t = landms_t[mask_b]
-
- if boxes_t.shape[0] == 0:
- continue
-
- pad_image_flag = False
-
- return image_t, boxes_t, labels_t, landms_t, pad_image_flag
- return image, boxes, labels, landm, pad_image_flag
-
-
-def _distort(image):
-
- def _convert(image, alpha=1, beta=0):
- tmp = image.astype(float) * alpha + beta
- tmp[tmp < 0] = 0
- tmp[tmp > 255] = 255
- image[:] = tmp
-
- image = image.copy()
-
- if random.randrange(2):
-
- #brightness distortion
- if random.randrange(2):
- _convert(image, beta=random.uniform(-32, 32))
-
- #contrast distortion
- if random.randrange(2):
- _convert(image, alpha=random.uniform(0.5, 1.5))
-
- image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
-
- #saturation distortion
- if random.randrange(2):
- _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))
-
- #hue distortion
- if random.randrange(2):
- tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)
- tmp %= 180
- image[:, :, 0] = tmp
-
- image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
-
- else:
-
- #brightness distortion
- if random.randrange(2):
- _convert(image, beta=random.uniform(-32, 32))
-
- image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
-
- #saturation distortion
- if random.randrange(2):
- _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))
-
- #hue distortion
- if random.randrange(2):
- tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)
- tmp %= 180
- image[:, :, 0] = tmp
-
- image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
-
- #contrast distortion
- if random.randrange(2):
- _convert(image, alpha=random.uniform(0.5, 1.5))
-
- return image
-
-
-def _expand(image, boxes, fill, p):
- if random.randrange(2):
- return image, boxes
-
- height, width, depth = image.shape
-
- scale = random.uniform(1, p)
- w = int(scale * width)
- h = int(scale * height)
-
- left = random.randint(0, w - width)
- top = random.randint(0, h - height)
-
- boxes_t = boxes.copy()
- boxes_t[:, :2] += (left, top)
- boxes_t[:, 2:] += (left, top)
- expand_image = np.empty(
- (h, w, depth),
- dtype=image.dtype)
- expand_image[:, :] = fill
- expand_image[top:top + height, left:left + width] = image
- image = expand_image
-
- return image, boxes_t
-
-
-def _mirror(image, boxes, landms):
- _, width, _ = image.shape
- if random.randrange(2):
- image = image[:, ::-1]
- boxes = boxes.copy()
- boxes[:, 0::2] = width - boxes[:, 2::-2]
-
- # landm
- landms = landms.copy()
- landms = landms.reshape([-1, 5, 2])
- landms[:, :, 0] = width - landms[:, :, 0]
- tmp = landms[:, 1, :].copy()
- landms[:, 1, :] = landms[:, 0, :]
- landms[:, 0, :] = tmp
- tmp1 = landms[:, 4, :].copy()
- landms[:, 4, :] = landms[:, 3, :]
- landms[:, 3, :] = tmp1
- landms = landms.reshape([-1, 10])
-
- return image, boxes, landms
-
-
-def _pad_to_square(image, rgb_mean, pad_image_flag):
- if not pad_image_flag:
- return image
- height, width, _ = image.shape
- long_side = max(width, height)
- image_t = np.empty((long_side, long_side, 3), dtype=image.dtype)
- image_t[:, :] = rgb_mean
- image_t[0:0 + height, 0:0 + width] = image
- return image_t
-
-
-def _resize_subtract_mean(image, insize, rgb_mean):
- interp_methods = [cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_NEAREST, cv2.INTER_LANCZOS4]
- interp_method = interp_methods[random.randrange(5)]
- image = cv2.resize(image, (insize, insize), interpolation=interp_method)
- image = image.astype(np.float32)
- image -= rgb_mean
- return image.transpose(2, 0, 1)
-
-
-class preproc(object):
-
- def __init__(self, img_dim, rgb_means):
- self.img_dim = img_dim
- self.rgb_means = rgb_means
-
- def __call__(self, image, targets):
- assert targets.shape[0] > 0, "this image does not have gt"
-
- boxes = targets[:, :4].copy()
- labels = targets[:, -1].copy()
- landm = targets[:, 4:-1].copy()
-
- image_t, boxes_t, labels_t, landm_t, pad_image_flag = _crop(image, boxes, labels, landm, self.img_dim)
- image_t = _distort(image_t)
- image_t = _pad_to_square(image_t,self.rgb_means, pad_image_flag)
- image_t, boxes_t, landm_t = _mirror(image_t, boxes_t, landm_t)
- height, width, _ = image_t.shape
- image_t = _resize_subtract_mean(image_t, self.img_dim, self.rgb_means)
- boxes_t[:, 0::2] /= width
- boxes_t[:, 1::2] /= height
-
- landm_t[:, 0::2] /= width
- landm_t[:, 1::2] /= height
-
- labels_t = np.expand_dims(labels_t, 1)
- targets_t = np.hstack((boxes_t, landm_t, labels_t))
-
- return image_t, targets_t
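Taken together, preproc applies a random face-preserving crop, photometric distortion, padding to a square, random horizontal flip, and mean-subtracted resizing, then normalizes boxes and landmarks to [0, 1]. A hedged usage sketch on synthetic data, assuming this file is importable as data_augment and that utils.box_utils is on the path:

    import numpy as np
    from data_augment import preproc   # assumed import path

    img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    # one annotation row: 4 box coords, 10 landmark coords, 1 label
    target = np.concatenate([[100, 120, 220, 260], np.linspace(110, 250, 10), [1.0]])[None, :]

    aug = preproc(img_dim=640, rgb_means=(104, 117, 123))
    img_t, targets_t = aug(img, target)
    print(img_t.shape, targets_t.shape)   # (3, 640, 640) and (N, 15)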
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/__init__.py
deleted file mode 100644
index 9bd5c72b5e9d7f67fb7e4ef10808d7ec08967ff4..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/legacy/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .block_pair_dataset import BlockPairDataset
-from .masked_lm_dataset import MaskedLMDataset
-from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary
-
-
-__all__ = [
- "BertDictionary",
- "BlockPairDataset",
- "MaskedLMDataset",
- "MaskedLMDictionary",
-]
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/cmlm_transformer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/cmlm_transformer.py
deleted file mode 100644
index c876e9453c101c00bd8e93e6e6f1fb48dc26f993..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/cmlm_transformer.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-This file implements:
-Ghazvininejad, Marjan, et al.
-"Constant-time machine translation with conditional masked language models."
-arXiv preprint arXiv:1904.09324 (2019).
-"""
-
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.nat import NATransformerModel
-from fairseq.utils import new_arange
-
-
-def _skeptical_unmasking(output_scores, output_masks, p):
- sorted_index = output_scores.sort(-1)[1]
- boundary_len = (
- (output_masks.sum(1, keepdim=True).type_as(output_scores) - 2) * p
- ).long()
- skeptical_mask = new_arange(output_masks) < boundary_len
- return skeptical_mask.scatter(1, sorted_index, skeptical_mask)
-
-
-@register_model("cmlm_transformer")
-class CMLMNATransformerModel(NATransformerModel):
- @staticmethod
- def add_args(parser):
- NATransformerModel.add_args(parser)
-
- def forward(
- self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs
- ):
- assert not self.decoder.src_embedding_copy, "do not support embedding copy."
-
- # encoding
- encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
- # length prediction
- length_out = self.decoder.forward_length(
- normalize=False, encoder_out=encoder_out
- )
- length_tgt = self.decoder.forward_length_prediction(
- length_out, encoder_out, tgt_tokens
- )
-
- # decoding
- word_ins_out = self.decoder(
- normalize=False,
- prev_output_tokens=prev_output_tokens,
- encoder_out=encoder_out,
- )
- word_ins_mask = prev_output_tokens.eq(self.unk)
-
- return {
- "word_ins": {
- "out": word_ins_out,
- "tgt": tgt_tokens,
- "mask": word_ins_mask,
- "ls": self.args.label_smoothing,
- "nll_loss": True,
- },
- "length": {
- "out": length_out,
- "tgt": length_tgt,
- "factor": self.decoder.length_loss_factor,
- },
- }
-
- def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs):
-
- step = decoder_out.step
- max_step = decoder_out.max_step
-
- output_tokens = decoder_out.output_tokens
- output_scores = decoder_out.output_scores
- history = decoder_out.history
-
- # execute the decoder
- output_masks = output_tokens.eq(self.unk)
- _scores, _tokens = self.decoder(
- normalize=True,
- prev_output_tokens=output_tokens,
- encoder_out=encoder_out,
- ).max(-1)
- output_tokens.masked_scatter_(output_masks, _tokens[output_masks])
- output_scores.masked_scatter_(output_masks, _scores[output_masks])
-
- if history is not None:
- history.append(output_tokens.clone())
-
- # skeptical decoding (depend on the maximum decoding steps.)
- if (step + 1) < max_step:
- skeptical_mask = _skeptical_unmasking(
- output_scores, output_tokens.ne(self.pad), 1 - (step + 1) / max_step
- )
-
- output_tokens.masked_fill_(skeptical_mask, self.unk)
- output_scores.masked_fill_(skeptical_mask, 0.0)
-
- if history is not None:
- history.append(output_tokens.clone())
-
- return decoder_out._replace(
- output_tokens=output_tokens,
- output_scores=output_scores,
- attn=None,
- history=history,
- )
-
-
-@register_model_architecture("cmlm_transformer", "cmlm_transformer")
-def cmlm_base_architecture(args):
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.activation_dropout = getattr(args, "activation_dropout", 0.0)
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.share_all_embeddings = getattr(args, "share_all_embeddings", True)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.apply_bert_init = getattr(args, "apply_bert_init", False)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- # --- special arguments ---
- args.sg_length_pred = getattr(args, "sg_length_pred", False)
- args.pred_length_offset = getattr(args, "pred_length_offset", False)
- args.length_loss_factor = getattr(args, "length_loss_factor", 0.1)
- args.ngram_predictor = getattr(args, "ngram_predictor", 1)
- args.src_embedding_copy = getattr(args, "src_embedding_copy", False)
-
-
-@register_model_architecture("cmlm_transformer", "cmlm_transformer_wmt_en_de")
-def cmlm_wmt_en_de(args):
- cmlm_base_architecture(args)
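The docstring and inline comments above describe mask-predict decoding: after each pass, the lowest-confidence fraction of the predicted tokens is re-masked ("skeptical" unmasking) and re-predicted on the next pass. A small numpy sketch of that re-masking step, purely illustrative and not the fairseq API:

    import numpy as np

    def skeptical_unmask(scores, non_pad, p):
        # mark the lowest-scoring fraction p of (non-pad) tokens for re-masking
        order = np.argsort(scores, axis=-1)                         # least confident first
        boundary = ((non_pad.sum(-1, keepdims=True) - 2) * p).astype(int)
        low_rank = np.arange(scores.shape[-1])[None, :] < boundary
        mask = np.zeros(scores.shape, dtype=bool)
        np.put_along_axis(mask, order, low_rank, axis=-1)
        return mask

    scores = np.array([[0.9, 0.2, 0.8, 0.1, 0.7]])
    print(skeptical_unmask(scores, np.ones(scores.shape, dtype=bool), p=0.7))
    # [[False  True False  True False]] -> the two lowest-scoring tokens are masked again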
diff --git a/spaces/multimodalart/stable-diffusion-inpainting/README.md b/spaces/multimodalart/stable-diffusion-inpainting/README.md
deleted file mode 100644
index 2db3ba89d9399ca930b444e14d35f7d6e88f2af8..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/stable-diffusion-inpainting/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stable Diffusion Inpainting
-emoji: ⚡
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/app.py b/spaces/mygyasir/genious_bgremover/carvekit/web/app.py
deleted file mode 100644
index cea35269297433799ee28415a946d20bb1f2d8a7..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/genious_bgremover/carvekit/web/app.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from pathlib import Path
-
-import uvicorn
-from fastapi import FastAPI
-from fastapi.middleware.cors import CORSMiddleware
-from starlette.staticfiles import StaticFiles
-
-from carvekit import version
-from carvekit.web.deps import config
-from carvekit.web.routers.api_router import api_router
-
-app = FastAPI(title="CarveKit Web API", version=version)
-
-app.add_middleware(
- CORSMiddleware,
- allow_origins=["*"],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
-
-app.include_router(api_router, prefix="/api")
-app.mount(
- "/",
- StaticFiles(directory=Path(__file__).parent.joinpath("static"), html=True),
- name="static",
-)
-
-if __name__ == "__main__":
- uvicorn.run(app, host=config.host, port=config.port)
diff --git a/spaces/mygyasir/remove-photo-object/app.py b/spaces/mygyasir/remove-photo-object/app.py
deleted file mode 100644
index c6b9fb93874967393013d0fc2a83f3380339cc9e..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/remove-photo-object/app.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# App code based on: https://github.com/petergro-hub/ComicInpainting
-# Model based on: https://github.com/saic-mdal/lama
-
-import numpy as np
-import pandas as pd
-import streamlit as st
-import os
-from datetime import datetime
-from PIL import Image
-from streamlit_drawable_canvas import st_canvas
-from io import BytesIO
-from copy import deepcopy
-
-from src.core import process_inpaint
-
-
-def image_download_button(pil_image, filename: str, fmt: str, label="Download"):
- if fmt not in ["jpg", "png"]:
- raise Exception(f"Unknown image format (Available: {fmt} - case sensitive)")
-
- pil_format = "JPEG" if fmt == "jpg" else "PNG"
- file_format = "jpg" if fmt == "jpg" else "png"
- mime = "image/jpeg" if fmt == "jpg" else "image/png"
-
- buf = BytesIO()
- pil_image.save(buf, format=pil_format)
-
- return st.download_button(
- label=label,
- data=buf.getvalue(),
- file_name=f'{filename}.{file_format}',
- mime=mime,
- )
-
-
-
-if "button_id" not in st.session_state:
- st.session_state["button_id"] = ""
-if "color_to_label" not in st.session_state:
- st.session_state["color_to_label"] = {}
-
-if 'reuse_image' not in st.session_state:
- st.session_state.reuse_image = None
-def set_image(img):
- st.session_state.reuse_image = img
-
-st.title("AI Photo Object Removal")
-
-st.image(open("assets/demo.png", "rb").read())
-
-st.markdown(
- """
- So you want to remove an object in your photo? You don't need to learn photo editing skills.
- **Just draw over the parts of the image you want to remove, then our AI will remove them.**
- """
-)
-uploaded_file = st.file_uploader("Choose image", accept_multiple_files=False, type=["png", "jpg", "jpeg"])
-
-if uploaded_file is not None:
-
- if st.session_state.reuse_image is not None:
- img_input = Image.fromarray(st.session_state.reuse_image)
- else:
- bytes_data = uploaded_file.getvalue()
- img_input = Image.open(BytesIO(bytes_data)).convert("RGBA")
-
- # Resize the image while maintaining aspect ratio
- max_size = 2000
- img_width, img_height = img_input.size
- if img_width > max_size or img_height > max_size:
- if img_width > img_height:
- new_width = max_size
- new_height = int((max_size / img_width) * img_height)
- else:
- new_height = max_size
- new_width = int((max_size / img_height) * img_width)
- img_input = img_input.resize((new_width, new_height))
-
- stroke_width = st.slider("Brush size", 1, 100, 50)
-
- st.write("**Now draw (brush) the part of image that you want to remove.**")
-
- # Canvas size logic
- canvas_bg = deepcopy(img_input)
- aspect_ratio = canvas_bg.width / canvas_bg.height
- streamlit_width = 720
-
- # Max width is 720. Resize the height to maintain its aspectratio.
- if canvas_bg.width > streamlit_width:
- canvas_bg = canvas_bg.resize((streamlit_width, int(streamlit_width / aspect_ratio)))
-
- canvas_result = st_canvas(
- stroke_color="rgba(255, 0, 255, 1)",
- stroke_width=stroke_width,
- background_image=canvas_bg,
- width=canvas_bg.width,
- height=canvas_bg.height,
- drawing_mode="freedraw",
- key="compute_arc_length",
- )
-
- if canvas_result.image_data is not None:
- im = np.array(Image.fromarray(canvas_result.image_data.astype(np.uint8)).resize(img_input.size))
- background = np.where(
- (im[:, :, 0] == 0) &
- (im[:, :, 1] == 0) &
- (im[:, :, 2] == 0)
- )
- drawing = np.where(
- (im[:, :, 0] == 255) &
- (im[:, :, 1] == 0) &
- (im[:, :, 2] == 255)
- )
- im[background]=[0,0,0,255]
- im[drawing]=[0,0,0,0] # RGBA
-
- reuse = False
-
- if st.button('Submit'):
-
- with st.spinner("AI is doing the magic!"):
- output = process_inpaint(np.array(img_input), np.array(im)) #TODO Put button here
- img_output = Image.fromarray(output).convert("RGB")
-
- st.write("AI has finished the job!")
- st.image(img_output)
- # reuse = st.button('Edit again (Re-use this image)', on_click=set_image, args=(inpainted_img, ))
-
- uploaded_name = os.path.splitext(uploaded_file.name)[0]
- image_download_button(
- pil_image=img_output,
- filename=uploaded_name,
- fmt="jpg",
- label="Download Image"
- )
-
- st.info("**TIP**: If the result is not perfect, you can download it then "
- "upload then remove the artifacts.")
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build.sh
deleted file mode 100644
index d70b747ff1282b9c60cf8b8b352b6b7701e1a254..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/build.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-
-BASEDIR="$(dirname $0)"
-
-docker build -t windj007/lama -f "$BASEDIR/Dockerfile" "$BASEDIR"
diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/render.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/render.py
deleted file mode 100644
index 57c219386c9bc0adb1ee78dd1c31a6fbf0dd1b3d..0000000000000000000000000000000000000000
--- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/render.py
+++ /dev/null
@@ -1,310 +0,0 @@
-from ctypes import *
-
-import numpy as np
-from .framework import *
-
-GLUT = None
-
-# NOTE: Render class assumes GL context is created already.
-class Render:
- def __init__(self, width=1600, height=1200, name='GL Renderer',
- program_files=['simple.fs', 'simple.vs'], color_size=1, ms_rate=1, egl=False):
- self.width = width
- self.height = height
- self.name = name
- self.use_inverse_depth = False
- self.egl = egl
-
- glEnable(GL_DEPTH_TEST)
-
- glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE)
- glClampColor(GL_CLAMP_FRAGMENT_COLOR, GL_FALSE)
- glClampColor(GL_CLAMP_VERTEX_COLOR, GL_FALSE)
-
- # init program
- shader_list = []
-
- for program_file in program_files:
- _, ext = os.path.splitext(program_file)
- if ext == '.vs':
- shader_list.append(loadShader(GL_VERTEX_SHADER, program_file))
- elif ext == '.fs':
- shader_list.append(loadShader(GL_FRAGMENT_SHADER, program_file))
- elif ext == '.gs':
- shader_list.append(loadShader(GL_GEOMETRY_SHADER, program_file))
-
- self.program = createProgram(shader_list)
-
- for shader in shader_list:
- glDeleteShader(shader)
-
- # Init uniform variables
- self.model_mat_unif = glGetUniformLocation(self.program, 'ModelMat')
- self.persp_mat_unif = glGetUniformLocation(self.program, 'PerspMat')
-
- self.vertex_buffer = glGenBuffers(1)
-
- # Init screen quad program and buffer
- self.quad_program, self.quad_buffer = self.init_quad_program()
-
- # Configure frame buffer
- self.frame_buffer = glGenFramebuffers(1)
- glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer)
-
- self.intermediate_fbo = None
- if ms_rate > 1:
- # Configure texture buffer to render to
- self.color_buffer = []
- for i in range(color_size):
- color_buffer = glGenTextures(1)
- multi_sample_rate = ms_rate
- glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, color_buffer)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
- glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, multi_sample_rate, GL_RGBA32F, self.width, self.height, GL_TRUE)
- glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0)
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D_MULTISAMPLE, color_buffer, 0)
- self.color_buffer.append(color_buffer)
-
- self.render_buffer = glGenRenderbuffers(1)
- glBindRenderbuffer(GL_RENDERBUFFER, self.render_buffer)
- glRenderbufferStorageMultisample(GL_RENDERBUFFER, multi_sample_rate, GL_DEPTH24_STENCIL8, self.width, self.height)
- glBindRenderbuffer(GL_RENDERBUFFER, 0)
- glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, self.render_buffer)
-
- attachments = []
- for i in range(color_size):
- attachments.append(GL_COLOR_ATTACHMENT0 + i)
- glDrawBuffers(color_size, attachments)
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
-
- self.intermediate_fbo = glGenFramebuffers(1)
- glBindFramebuffer(GL_FRAMEBUFFER, self.intermediate_fbo)
-
- self.screen_texture = []
- for i in range(color_size):
- screen_texture = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, screen_texture)
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, self.width, self.height, 0, GL_RGBA, GL_FLOAT, None)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, screen_texture, 0)
- self.screen_texture.append(screen_texture)
-
- glDrawBuffers(color_size, attachments)
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
- else:
- self.color_buffer = []
- for i in range(color_size):
- color_buffer = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, color_buffer)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
- glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, self.width, self.height, 0, GL_RGBA, GL_FLOAT, None)
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, color_buffer, 0)
- self.color_buffer.append(color_buffer)
-
- # Configure depth texture map to render to
- self.depth_buffer = glGenTextures(1)
- glBindTexture(GL_TEXTURE_2D, self.depth_buffer)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
- glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE)
- glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL)
- glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, self.width, self.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, None)
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, self.depth_buffer, 0)
-
- attachments = []
- for i in range(color_size):
- attachments.append(GL_COLOR_ATTACHMENT0 + i)
- glDrawBuffers(color_size, attachments)
- self.screen_texture = self.color_buffer
-
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
-
-
- # Configure texture buffer if needed
- self.render_texture = None
-
-        # NOTE: the original render_texture only supports one input;
-        # render_texture_v2 is a tentative member added to work around that limitation
- self.render_texture_v2 = {}
-
- # Inner storage for buffer data
- self.vertex_data = None
- self.vertex_dim = None
- self.n_vertices = None
-
- self.model_view_matrix = None
- self.projection_matrix = None
-
- if not egl:
- global GLUT
- import OpenGL.GLUT as GLUT
- GLUT.glutDisplayFunc(self.display)
-
-
- def init_quad_program(self):
- shader_list = []
-
- shader_list.append(loadShader(GL_VERTEX_SHADER, "quad.vs"))
- shader_list.append(loadShader(GL_FRAGMENT_SHADER, "quad.fs"))
-
- the_program = createProgram(shader_list)
-
- for shader in shader_list:
- glDeleteShader(shader)
-
- # vertex attributes for a quad that fills the entire screen in Normalized Device Coordinates.
- # positions # texCoords
- quad_vertices = np.array(
- [-1.0, 1.0, 0.0, 1.0,
- -1.0, -1.0, 0.0, 0.0,
- 1.0, -1.0, 1.0, 0.0,
-
- -1.0, 1.0, 0.0, 1.0,
- 1.0, -1.0, 1.0, 0.0,
- 1.0, 1.0, 1.0, 1.0]
- )
-
- quad_buffer = glGenBuffers(1)
- glBindBuffer(GL_ARRAY_BUFFER, quad_buffer)
- glBufferData(GL_ARRAY_BUFFER, quad_vertices, GL_STATIC_DRAW)
-
- glBindBuffer(GL_ARRAY_BUFFER, 0)
-
- return the_program, quad_buffer
-
- def set_mesh(self, vertices, faces):
- self.vertex_data = vertices[faces.reshape([-1])]
- self.vertex_dim = self.vertex_data.shape[1]
- self.n_vertices = self.vertex_data.shape[0]
-
- glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer)
- glBufferData(GL_ARRAY_BUFFER, self.vertex_data, GL_STATIC_DRAW)
-
- glBindBuffer(GL_ARRAY_BUFFER, 0)
-
- def set_viewpoint(self, projection, model_view):
- self.projection_matrix = projection
- self.model_view_matrix = model_view
-
- def draw_init(self):
- glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer)
- glEnable(GL_DEPTH_TEST)
-
- glClearColor(0.0, 0.0, 0.0, 0.0)
- if self.use_inverse_depth:
- glDepthFunc(GL_GREATER)
- glClearDepth(0.0)
- else:
- glDepthFunc(GL_LESS)
- glClearDepth(1.0)
- glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
-
- def draw_end(self):
- if self.intermediate_fbo is not None:
- for i in range(len(self.color_buffer)):
- glBindFramebuffer(GL_READ_FRAMEBUFFER, self.frame_buffer)
- glReadBuffer(GL_COLOR_ATTACHMENT0 + i)
- glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self.intermediate_fbo)
- glDrawBuffer(GL_COLOR_ATTACHMENT0 + i)
- glBlitFramebuffer(0, 0, self.width, self.height, 0, 0, self.width, self.height, GL_COLOR_BUFFER_BIT, GL_NEAREST)
-
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
- glDepthFunc(GL_LESS)
- glClearDepth(1.0)
-
- def draw(self):
- self.draw_init()
-
- glUseProgram(self.program)
- glUniformMatrix4fv(self.model_mat_unif, 1, GL_FALSE, self.model_view_matrix.transpose())
- glUniformMatrix4fv(self.persp_mat_unif, 1, GL_FALSE, self.projection_matrix.transpose())
-
- glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer)
-
- glEnableVertexAttribArray(0)
- glVertexAttribPointer(0, self.vertex_dim, GL_DOUBLE, GL_FALSE, 0, None)
-
- glDrawArrays(GL_TRIANGLES, 0, self.n_vertices)
-
- glDisableVertexAttribArray(0)
-
- glBindBuffer(GL_ARRAY_BUFFER, 0)
-
- glUseProgram(0)
-
- self.draw_end()
-
- def get_color(self, color_id=0):
- glBindFramebuffer(GL_FRAMEBUFFER, self.intermediate_fbo if self.intermediate_fbo is not None else self.frame_buffer)
- glReadBuffer(GL_COLOR_ATTACHMENT0 + color_id)
- data = glReadPixels(0, 0, self.width, self.height, GL_RGBA, GL_FLOAT, outputType=None)
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
- rgb = data.reshape(self.height, self.width, -1)
- rgb = np.flip(rgb, 0)
- return rgb
-
- def get_z_value(self):
- glBindFramebuffer(GL_FRAMEBUFFER, self.frame_buffer)
- data = glReadPixels(0, 0, self.width, self.height, GL_DEPTH_COMPONENT, GL_FLOAT, outputType=None)
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
- z = data.reshape(self.height, self.width)
- z = np.flip(z, 0)
- return z
-
- def display(self):
- self.draw()
-
- if not self.egl:
- # First we draw a scene.
- # Notice the result is stored in the texture buffer.
-
- # Then we return to the default frame buffer since we will display on the screen.
- glBindFramebuffer(GL_FRAMEBUFFER, 0)
-
- # Do the clean-up.
- glClearColor(0.0, 0.0, 0.0, 0.0)
- glClear(GL_COLOR_BUFFER_BIT)
-
- # We draw a rectangle which covers the whole screen.
- glUseProgram(self.quad_program)
- glBindBuffer(GL_ARRAY_BUFFER, self.quad_buffer)
-
- size_of_double = 8
- glEnableVertexAttribArray(0)
- glVertexAttribPointer(0, 2, GL_DOUBLE, GL_FALSE, 4 * size_of_double, None)
- glEnableVertexAttribArray(1)
- glVertexAttribPointer(1, 2, GL_DOUBLE, GL_FALSE, 4 * size_of_double, c_void_p(2 * size_of_double))
-
- glDisable(GL_DEPTH_TEST)
-
- # The stored texture is then mapped to this rectangle.
-            # properly assign the color buffer texture
- glActiveTexture(GL_TEXTURE0)
- glBindTexture(GL_TEXTURE_2D, self.screen_texture[0])
- glUniform1i(glGetUniformLocation(self.quad_program, 'screenTexture'), 0)
-
- glDrawArrays(GL_TRIANGLES, 0, 6)
-
- glDisableVertexAttribArray(1)
- glDisableVertexAttribArray(0)
-
- glEnable(GL_DEPTH_TEST)
- glBindBuffer(GL_ARRAY_BUFFER, 0)
- glUseProgram(0)
-
- GLUT.glutSwapBuffers()
- GLUT.glutPostRedisplay()
-
- def show(self):
- if not self.egl:
- GLUT.glutMainLoop()
diff --git a/spaces/nehuggingface/cant/README.md b/spaces/nehuggingface/cant/README.md
deleted file mode 100644
index c80f93476eb9e24b3572efffb0e8985be9ed5853..0000000000000000000000000000000000000000
--- a/spaces/nehuggingface/cant/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cant
-emoji: 📉
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Free Solution Manual Of Applied Combinatorics By Alan Tucker Rar [REPACK].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Free Solution Manual Of Applied Combinatorics By Alan Tucker Rar [REPACK].md
deleted file mode 100644
index 4c5e9789ca3274e0dfda13107c4334a340808c02..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Free Solution Manual Of Applied Combinatorics By Alan Tucker Rar [REPACK].md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-How to Find a Free Solution Manual for Applied Combinatorics by Alan Tucker
-Applied Combinatorics by Alan Tucker is a widely used textbook that covers various topics in combinatorial mathematics, such as graph theory, enumeration, recursion, algorithms, and optimization. The book is designed for students with a wide range of ability and maturity, from sophomores to beginning graduate students. The book also provides many exercises and examples to help students develop their problem-solving skills and logical reasoning.
-However, finding a free solution manual for this book can be challenging, as the official solution manual is not available online. Some websites may claim to offer free downloads of the solution manual in a compressed file format (such as .rar), but these files may contain viruses or malware that can harm your computer. Therefore, it is not advisable to download any files from untrusted sources.
-Free Solution Manual Of Applied Combinatorics By Alan Tucker Rar
-Download File ✯✯✯ https://urlcod.com/2uKzQO
-Instead, there are some alternative ways to find free solutions or hints for the exercises in Applied Combinatorics by Alan Tucker. Here are some suggestions:
-
-- Check the official website of the book (https://www.wiley.com/en-us/Applied+Combinatorics%2C+6th+Edition-p-9780470458389[^4^]) for errata, supplements, and instructor resources. You may find some useful information or links there.
-- Search online for lecture notes, slides, or videos from courses that use this book as a textbook. You may find some instructors who have posted their solutions or hints for some exercises on their websites or course pages. For example, you can check this Google Docs file (https://docs.google.com/viewer?a=v&pid=sites&srcid=aGNtdXMuZWR1LnZufHRvYW4taG9jLXRvLWhvcHxneDo0YTU5YTZhODE4Mjk5ZDQ4[^5^]) that contains some solutions for selected exercises from chapters 1 to 7.
-- Use online platforms such as Quizlet (https://quizlet.com/explanations/textbook-solutions/applied-combinatorics-6th-edition-9781118210116[^6^]) or Course Hero (https://www.coursehero.com/file/98381636/Solution-ManualApplied-Combinatorics-6th-Edition-by-Alan-Tuckerpdf/[^2^]) that provide student-generated solutions or answers for various textbooks. You may find some solutions or hints for the exercises in Applied Combinatorics by Alan Tucker there. However, be aware that these solutions may not be verified by experts and may contain errors or inaccuracies.
-- Ask for help from your classmates, instructors, tutors, or online communities. You can collaborate with your peers to work on the exercises together, or seek guidance from your instructors or tutors if you are stuck on a problem. You can also post your questions on online forums such as Stack Exchange (https://math.stackexchange.com/) or Reddit (https://www.reddit.com/r/math/) where you can get help from other math enthusiasts.
-
-In conclusion, finding a free solution manual for Applied Combinatorics by Alan Tucker can be difficult, but not impossible. You can use various online resources or seek help from others to find solutions or hints for the exercises in this book. However, remember that the best way to learn combinatorics is to practice and understand the concepts and methods yourself, rather than relying on ready-made solutions.
-e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Halla Bol Full Movie Download [CRACKED] Hd Mp4.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Halla Bol Full Movie Download [CRACKED] Hd Mp4.md
deleted file mode 100644
index d81dc86bc46f0eec0478022c6a06f4941ff59063..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Halla Bol Full Movie Download [CRACKED] Hd Mp4.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Halla Bol Full Movie Download Hd Mp4":
-
-Halla Bol Full Movie Download Hd Mp4: Watch the Patriotic Drama Starring Ajay Devgan and Vidya Balan
-
-Halla Bol is a 2008 Hindi patriotic drama film directed by Rajkumar Santoshi and starring Ajay Devgan, Vidya Balan, Pankaj Kapur and Darshan Jariwala. The film is loosely based on the life of Hindi film actor and activist Nana Patekar, who was involved in a murder case of a social activist in 1997.
-Halla Bol Full Movie Download Hd Mp4
-Download --->>> https://urlcod.com/2uIbkk
-
-The film revolves around Ashfaque (Ajay Devgan), a small town boy who joins a street theatre group run by Sidhu (Pankaj Kapur), a reformed dacoit who uses theatre as a medium to raise awareness among the masses. Ashfaque struggles to learn the art of acting and eventually moves to Mumbai to pursue his dream of becoming a film star. He changes his name to Sameer Khan and soon becomes a popular actor with a string of hits. However, he also loses his identity and integrity in the process and becomes corrupted by fame and money.
-
-One night, he witnesses the murder of a social activist by a powerful politician's son but refuses to testify in court due to fear of losing his career. He is haunted by guilt and his conscience, which is personified by Sidhu, who reminds him of his roots and ideals. He decides to take a stand against the injustice and expose the truth to the public, risking his life and career.
-
-Halla Bol is a film that explores the themes of morality, courage, social responsibility and patriotism. It also criticizes the hypocrisy and corruption of the film industry and the political system. The film received mixed reviews from critics but was praised for its performances, especially by Ajay Devgan and Pankaj Kapur. The film was also nominated for several awards, including the Filmfare Award for Best Actor (Ajay Devgan) and Best Supporting Actor (Pankaj Kapur).
-
-
-If you want to watch Halla Bol full movie download hd mp4, you can find it on various online platforms such as YouTube[^1^], Dailymotion[^2^] [^3^] etc. However, we advise you to watch it legally on authorized streaming services such as Amazon Prime Video or Netflix. Watching pirated movies is illegal and unethical and can harm the film industry.
-
-Halla Bol is a film that will inspire you to raise your voice against injustice and fight for your rights. It will also make you appreciate the power of art and theatre as a tool for social change. Watch Halla Bol full movie download hd mp4 today and enjoy this patriotic drama with your family and friends.
-Here are a few more paragraphs for the article:
-
-Halla Bol received mixed reviews from critics who praised the performances of the lead actors but found the script and direction lacking. The film was also compared to Santoshi's earlier film Damini, which had a similar plot and theme. Some critics also felt that the film was too preachy and melodramatic and failed to capture the nuances of the real-life incident. The film also faced some controversies due to its depiction of the film industry and some real-life personalities.
-
-The film did not do well at the box office and was declared a flop. It was also overshadowed by another film based on the same incident, No One Killed Jessica, which released in 2011 and received critical acclaim and commercial success. However, Halla Bol has gained some cult following over the years and is considered as one of Santoshi's underrated films. The film also won some awards, including the National Film Award for Best Supporting Actor (Pankaj Kapur) and the Stardust Award for Best Actor in a Social Role (Ajay Devgan).
-
-Halla Bol is a film that tries to convey a strong message about the importance of speaking up against injustice and corruption. It also showcases the talent and versatility of Ajay Devgan and Pankaj Kapur, who deliver powerful performances as the protagonist and his mentor. The film also has some memorable dialogues and scenes that will stay with you long after you watch it. Halla Bol is a film that deserves a watch for its sincere attempt to make a difference.
-7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manidweepa Varnana In Telugu Pdf Free Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manidweepa Varnana In Telugu Pdf Free Download.md
deleted file mode 100644
index 595ebc5c6c59fafb947f172363bca6e1ec9f4440..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manidweepa Varnana In Telugu Pdf Free Download.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-Manidweepa Varnana: A Sacred Hymn to Goddess Lalitha
-Manidweepa Varnana is a sacred hymn that describes the divine abode of Goddess Lalitha, the supreme ruler of the universe. Manidweepa means "the island of gems" and it is said to be located above Brahmaloka, the highest heaven. Manidweepa is also known as Sri Nagara, the city of Sri or Lakshmi, the goddess of wealth and prosperity.
-Manidweepa Varnana is composed in Telugu language and it consists of 21 verses. Each verse praises the beauty, glory and power of Manidweepa and its inhabitants. The hymn also mentions the various benefits that one can attain by reciting or listening to Manidweepa Varnana with devotion.
-Manidweepa Varnana In Telugu Pdf Free Download
-Download Zip ☆☆☆ https://urlcod.com/2uIbUs
-Some of the benefits are:
-
-- Removal of poverty and misfortune
-- Attainment of wealth, health, happiness and peace
-- Fulfillment of all desires and wishes
-- Protection from enemies and dangers
-- Acquisition of knowledge, wisdom and spiritual powers
-- Birth in noble families and association with saints and sages
-- Ultimate liberation from the cycle of birth and death
-
-Manidweepa Varnana is a part of Devi Bhagavatam, a sacred scripture that narrates the stories and teachings of Goddess Lalitha. Devi Bhagavatam also contains other hymns and stotras dedicated to Goddess Lalitha, such as Lalitha Sahasranama, Lalitha Trishati, Lalitha Ashtottara Shatanama, etc.
-Manidweepa Varnana is a very popular hymn among the devotees of Goddess Lalitha. It is often recited or sung during festivals, rituals, pujas and other occasions related to Goddess Lalitha. It is also a part of Sri Vidya tradition, a form of Tantra that worships Goddess Lalitha as the supreme reality.
-If you want to read or download Manidweepa Varnana in Telugu PDF format, you can visit the following websites:
-
-- https://greatertelugu.org/manidweepa-varnana/
-- https://stotranidhi.com/manidweepa-varnana-in-telugu/
-- https://panotbook.com/manidweepa-varnana/
-- https://srivaddipartipadmakar.org/stotram/manidweepa-varnanasloka/
-
-I hope this helps you. Have a blessed day!
-
-Manidweepa Varnana is not just a hymn but also a guide to perform Manidweepa Pooja, a special ritual that invokes the blessings of Goddess Lalitha. Manidweepa Pooja involves chanting the 32 verses of Manidweepa Varnana, 9 times per day for 9 consecutive days. Each day, a different flower is offered to the goddess along with other items such as fruits, sweets, incense, lamps, etc. The flowers and their corresponding days are as follows:
-
-| Day | Flower |
-| --- | --- |
-| 1 | Lotus |
-| 2 | Jasmine |
-| 3 | Rose |
-| 4 | Hibiscus |
-| 5 | Chrysanthemum |
-| 6 | Tuberose |
-| 7 | Nerium |
-| 8 | Crossandra |
-| 9 | All flowers mixed together |
-
-By performing Manidweepa Pooja with devotion and faith, one can experience the grace and glory of Goddess Lalitha in their life. One can also attain various benefits such as wealth, health, happiness, peace, protection, knowledge, wisdom, spiritual powers, etc. One can also get rid of all kinds of obstacles and difficulties in their life. One can also achieve ultimate liberation from the cycle of birth and death.
-
-Manidweepa Varnana is a very sacred and powerful hymn that can transform one's life for the better. It is recommended to recite or listen to Manidweepa Varnana daily or at least on Fridays, which are auspicious for Goddess Lalitha. It is also advised to recite or listen to Manidweepa Varnana with a pure mind and heart and with utmost reverence and love for the goddess.
-If you want to learn more about Manidweepa Varnana and its meaning, you can visit the following websites:
-
-- https://www.stotraveda.com/manidweepa-varnana/
-- https://bhaktinidhi.com/manidweepa-varnana-in-telugu/
-- https://www.youtube.com/watch?v=6gZf0YQmLZw
-- https://www.youtube.com/watch?v=JFt1yXtqRkE
-
-I hope this helps you. Have a blessed day!
-e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mp3doctor Pro 2 V130 Serial 22.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mp3doctor Pro 2 V130 Serial 22.md
deleted file mode 100644
index e830feae5596687700061c97b39cccdf5ce3b718..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mp3doctor Pro 2 V130 Serial 22.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-Mp3doctor Pro 2 V130 Serial 22: The Ultimate Solution for MP3 Volume and Pitch Adjustment
- If you are a music lover, you probably have a lot of MP3 files on your computer, phone, or portable device. You may also have encountered some problems with your MP3 files, such as uneven volume levels, distorted sound quality, or mismatched pitch. These problems can ruin your listening experience and make you frustrated.
-Mp3doctor Pro 2 V130 Serial 22
-Download File >> https://urlcod.com/2uIb9e
- Fortunately, there is a solution for these problems: Mp3doctor Pro 2 V130 Serial 22. This is a powerful and easy-to-use software that can normalize and enhance your MP3 files in a matter of minutes. With this software, you can adjust the volume and pitch of your MP3 files, as well as apply other effects and filters to improve their sound quality. You can also edit multiple MP3 files at once, saving you time and effort.
- In this article, we will show you what Mp3doctor Pro 2 V130 Serial 22 can do for you, how to download and install it, how to use it to edit your MP3 files, and how to compare it with other MP3 editing software. By the end of this article, you will be able to enjoy your MP3 files like never before.
- How to Download and Install Mp3doctor Pro 2 V130 Serial 22
- The first step to use Mp3doctor Pro 2 V130 Serial 22 is to download and install it on your computer. Here is how you can do that:
-
-- Go to the official website of Mp3doctor and click on the "Download" button.
-- Choose the version of Mp3doctor Pro 2 that suits your operating system (Windows or Mac) and click on the "Download" button again.
-- Save the file on your computer and run it once it is downloaded.
-- Follow the instructions on the screen to complete the installation process.
-- Launch Mp3doctor Pro 2 from your desktop or start menu.
-- Enter the serial key that you received when you purchased the software to activate the software. If you don't have a serial key, you can buy one from the website or use the trial version for a limited time.
-
- Congratulations, you have successfully installed Mp3doctor Pro 2 V130 Serial 22 on your computer. Now you are ready to use it to edit your MP3 files.
-
- How to Use Mp3doctor Pro 2 V130 Serial 22 to Normalize and Enhance Your MP3 Files
- Now that you have Mp3doctor Pro 2 V130 Serial 22 on your computer, you can start using it to normalize and enhance your MP3 files. Here is how you can do that:
-
-- Launch Mp3doctor Pro 2 from your desktop or start menu.
-- Click on the "Add Files" button to load your MP3 files into the software. You can also drag and drop your MP3 files into the software window.
-- Select the MP3 files that you want to edit and click on the "Normalize" button. This will automatically adjust the volume level of your MP3 files to make them sound consistent and balanced.
-- If you want to adjust the pitch of your MP3 files, click on the "Pitch" button. This will allow you to change the pitch of your MP3 files by semitones or cents. You can also use the "Tempo" button to change the speed of your MP3 files without affecting the pitch.
-- If you want to apply other effects and filters to your MP3 files, click on the "Effects" button. This will open a menu where you can choose from various options, such as equalizer, compressor, reverb, echo, chorus, flanger, and more. You can also use the "Presets" button to apply predefined settings for different genres and moods.
-- Once you are satisfied with your edits, click on the "Save Files" button to save your edited MP3 files. You can choose to save them in the same folder as the original files or in a different folder. You can also choose to overwrite the original files or create new files with different names.
-
- That's it, you have successfully used Mp3doctor Pro 2 V130 Serial 22 to normalize and enhance your MP3 files. You can now enjoy your MP3 files with improved sound quality and clarity.
- How to Compare Mp3doctor Pro 2 V130 Serial 22 with Other MP3 Editing Software
- Mp3doctor Pro 2 V130 Serial 22 is not the only MP3 editing software available in the market. There are many other options that you can choose from, depending on your needs and preferences. Here are some of the factors that you can use to compare Mp3doctor Pro 2 V130 Serial 22 with other MP3 editing software:
-
-| Factor | Mp3doctor Pro 2 V130 Serial 22 | Other MP3 Editing Software |
-| --- | --- | --- |
-| Features | Mp3doctor Pro 2 V130 Serial 22 offers a wide range of features, such as volume normalization, pitch adjustment, tempo change, effects and filters, presets, batch processing, and more. | Other MP3 editing software may offer similar or different features, depending on their design and functionality. Some may offer more advanced features, such as waveform editing, audio mixing, noise reduction, and more. Some may offer fewer or simpler features, such as volume leveling, trimming, and fading. |
-| Ease of use | Mp3doctor Pro 2 V130 Serial 22 is very easy to use, thanks to its intuitive and user-friendly interface. You can edit your MP3 files with just a few clicks and without any technical skills. | Other MP3 editing software may have different levels of ease of use, depending on their complexity and layout. Some may be more user-friendly and simple, while others may be more complicated and require more learning and practice. |
-| Quality | Mp3doctor Pro 2 V130 Serial 22 delivers high-quality results, thanks to its advanced algorithms and technology. It preserves the original sound quality of your MP3 files while enhancing their volume and pitch. | Other MP3 editing software may have different levels of quality, depending on their performance and reliability. Some may produce better or worse results than Mp3doctor Pro 2 V130 Serial 22, depending on their accuracy and efficiency. |
-| Price | Mp3doctor Pro 2 V130 Serial 22 is reasonably priced, considering its features and quality. It costs $38 for a single license, which includes lifetime updates and support. You can also get a discount if you buy multiple licenses or bundles. | Other MP3 editing software may have different prices, depending on their features and quality. Some may be more expensive or cheaper than Mp3doctor Pro 2 V130 Serial 22, depending on their value and reputation. |
-
- Conclusion
- Mp3doctor Pro 2 V130 Serial 22 is a powerful and easy-to-use software that can normalize and enhance your MP3 files in a matter of minutes. With this software, you can adjust the volume and pitch of your MP3 files, as well as apply other effects and filters to improve their sound quality. You can also edit multiple MP3 files at once, saving you time and effort.
- Mp3doctor Pro 2 V130 Serial 22 is not the only MP3 editing software available in the market, but it is one of the best ones. It offers a wide range of features, high-quality results, reasonable price, and user-friendly interface. It is also safe and legal to use, as long as you have a valid serial key.
- If you are looking for a solution for your MP3 volume and pitch problems, you should definitely give Mp3doctor Pro 2 V130 Serial 22 a try. You will be amazed by how much it can improve your listening experience and make you enjoy your MP3 files like never before.
- To download and install Mp3doctor Pro 2 V130 Serial 22, go to the official website of Mp3doctor and follow the instructions. You can also check out the FAQs below for more information.
- FAQs
- Here are some of the frequently asked questions about Mp3doctor Pro 2 V130 Serial 22:
- What is the difference between Mp3doctor Pro and Mp3doctor Pro 2?
- Mp3doctor Pro is the older version of Mp3doctor Pro 2. It has fewer features and less compatibility than Mp3doctor Pro 2. Mp3doctor Pro 2 is the newer and improved version of Mp3doctor Pro. It has more features and more compatibility than Mp3doctor Pro. It can also edit other audio formats besides MP3, such as AAC, FLAC, OGG, WAV, and more.
- What are the system requirements for Mp3doctor Pro 2?
- The system requirements for Mp3doctor Pro 2 are as follows:
-
-- Operating system: Windows XP/Vista/7/8/10 or Mac OS X 10.6 or later
-- Processor: Pentium IV or higher
-- Memory: 512 MB RAM or higher
-- Disk space: 50 MB or higher
-- Internet connection: Required for activation and updates
-
- How much does Mp3doctor Pro 2 cost and where can I buy it?
- Mp3doctor Pro 2 costs $38 for a single license, which includes lifetime updates and support. You can also get a discount if you buy multiple licenses or bundles. You can buy Mp3doctor Pro 2 from the official website of Mp3doctor. You can pay with PayPal, credit card, or bank transfer.
- Is Mp3doctor Pro 2 safe and legal to use?
- Mp3doctor Pro 2 is safe and legal to use, as long as you have a valid serial key. The serial key is a unique code that activates the software and allows you to use it without any limitations. The serial key is provided to you when you purchase the software from the official website of Mp3doctor. You should not share your serial key with anyone else or use it on more than one computer.
- How can I contact the support team of Mp3doctor Pro 2?
- If you have any questions or issues with Mp3doctor Pro 2, you can contact the support team of Mp3doctor by email at support@mp3doctor.com. They will respond to you as soon as possible and help you solve your problems.
-b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/tree/helpers.py b/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/tree/helpers.py
deleted file mode 100644
index 6072278bfe44b8f00281fcaf2c8ef2c9d5dd24e2..0000000000000000000000000000000000000000
--- a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/tree/helpers.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import nltk
-import pandas as pd
-from collections import Counter
-from weakly_supervised_parser.tree.evaluate import tree_to_spans
-
-
-class Tree(object):
- def __init__(self, label, children, word):
- self.label = label
- self.children = children
- self.word = word
-
- def __str__(self):
- return self.linearize()
-
- def linearize(self):
- if not self.children:
- return f"({self.label} {self.word})"
- return f"({self.label} {' '.join(c.linearize() for c in self.children)})"
-
- def spans(self, start=0):
- if not self.children:
- return [(start, start + 1)]
- span_list = []
- position = start
- for c in self.children:
- cspans = c.spans(start=position)
- span_list.extend(cspans)
- position = cspans[0][1]
- return [(start, position)] + span_list
-
- def spans_labels(self, start=0):
- if not self.children:
- return [(start, start + 1, self.label)]
- span_list = []
- position = start
- for c in self.children:
- cspans = c.spans_labels(start=position)
- span_list.extend(cspans)
- position = cspans[0][1]
- return [(start, position, self.label)] + span_list
-
-
-def extract_sentence(sentence):
- t = nltk.Tree.fromstring(sentence)
- return " ".join(item[0] for item in t.pos())
-
-
-def get_constituents(sample_string, want_spans_mapping=False, whole_sentence=True, labels=False):
- t = nltk.Tree.fromstring(sample_string)
- if want_spans_mapping:
- spans = tree_to_spans(t, keep_labels=True)
- return dict(Counter(item[1] for item in spans))
- spans = tree_to_spans(t, keep_labels=True)
- sentence = extract_sentence(sample_string).split()
-
- labeled_consituents_lst = []
- constituents = []
- for span in spans:
- labeled_consituents = {}
- labeled_consituents["labels"] = span[0]
- i, j = span[1][0], span[1][1]
- constituents.append(" ".join(sentence[i:j]))
- labeled_consituents["constituent"] = " ".join(sentence[i:j])
- labeled_consituents_lst.append(labeled_consituents)
-
- # Add original sentence
- if whole_sentence:
- constituents = constituents + [" ".join(sentence)]
-
- if labels:
- return labeled_consituents_lst
-
- return constituents
-
-
-def get_distituents(sample_string):
- sentence = extract_sentence(sample_string).split()
-
- def get_all_combinations(sentence):
- L = sentence.split()
- N = len(L)
- out = []
- for n in range(2, N):
- for i in range(N - n + 1):
- out.append((i, i + n))
- return out
-
- combinations = get_all_combinations(extract_sentence(sample_string))
- constituents = list(get_constituents(sample_string, want_spans_mapping=True).keys())
- spans = [item for item in combinations if item not in constituents]
- distituents = []
- for span in spans:
- i, j = span[0], span[1]
- distituents.append(" ".join(sentence[i:j]))
- return distituents
-
-
-def get_leaves(tree):
- if not tree.children:
- return [tree]
- leaves = []
- for c in tree.children:
- leaves.extend(get_leaves(c))
- return leaves
-
-
-def unlinearize(string):
- """
- (TOP (S (NP (PRP He)) (VP (VBD was) (ADJP (JJ right))) (. .)))
- """
- tokens = string.replace("(", " ( ").replace(")", " ) ").split()
-
- def read_tree(start):
- if tokens[start + 2] != "(":
- return Tree(tokens[start + 1], None, tokens[start + 2]), start + 4
- i = start + 2
- children = []
- while tokens[i] != ")":
- tree, i = read_tree(i)
- children.append(tree)
- return Tree(tokens[start + 1], children, None), i + 1
-
- tree, _ = read_tree(0)
- return tree
-
-
-def recall_by_label(gold_standard, best_parse):
- correct = {}
- total = {}
- for tree1, tree2 in zip(gold_standard, best_parse):
- try:
- leaves1, leaves2 = get_leaves(tree1["tree"]), get_leaves(tree2["tree"])
- for l1, l2 in zip(leaves1, leaves2):
- assert l1.word.lower() == l2.word.lower(), f"{l1.word} =/= {l2.word}"
- spanlabels = tree1["tree"].spans_labels()
- spans = tree2["tree"].spans()
-
- for (i, j, label) in spanlabels:
- if j - i != 1:
- if label not in correct:
- correct[label] = 0
- total[label] = 0
- if (i, j) in spans:
- correct[label] += 1
- total[label] += 1
- except Exception as e:
- print(e)
- acc = {}
- for label in total.keys():
- acc[label] = correct[label] / total[label]
- return acc
-
-
-def label_recall_output(gold_standard, best_parse):
- best_parse_trees = []
- gold_standard_trees = []
- for t1, t2 in zip(gold_standard, best_parse):
- gold_standard_trees.append({"tree": unlinearize(t1)})
- best_parse_trees.append({"tree": unlinearize(t2)})
-
- dct = recall_by_label(gold_standard=gold_standard_trees, best_parse=best_parse_trees)
- labels = ["SBAR", "NP", "VP", "PP", "ADJP", "ADVP"]
- l = [{label: f"{recall * 100:.2f}"} for label, recall in dct.items() if label in labels]
- df = pd.DataFrame([item.values() for item in l], index=[item.keys() for item in l], columns=["recall"])
- df.index = df.index.map(lambda x: list(x)[0])
- df_out = df.reindex(labels)
- return df_out
-
-
-if __name__ == "__main__":
- import pandas as pd
- from weakly_supervised_parser.utils.prepare_dataset import PTBDataset
- from weakly_supervised_parser.settings import PTB_TEST_GOLD_WITHOUT_PUNCTUATION_ALIGNED_PATH, PTB_SAVE_TREES_PATH
-
- best_parse = PTBDataset(PTB_SAVE_TREES_PATH + "inside_model_predictions.txt").retrieve_all_sentences()
- gold_standard = PTBDataset(PTB_TEST_GOLD_WITHOUT_PUNCTUATION_ALIGNED_PATH).retrieve_all_sentences()
- print(label_recall_output(gold_standard, best_parse))
diff --git a/spaces/nivere/Pix2Pix-Video/README.md b/spaces/nivere/Pix2Pix-Video/README.md
deleted file mode 100644
index 3d8f7d06e470e918dedf27b7a230a565996a1252..0000000000000000000000000000000000000000
--- a/spaces/nivere/Pix2Pix-Video/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pix2Pix Video
-emoji: 🎨🎞️
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-duplicated_from: fffiloni/Pix2Pix-Video
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nomic-ai/wikisql/index.html b/spaces/nomic-ai/wikisql/index.html
deleted file mode 100644
index 19254bcbe1d2190e2d4c2e6a895f6039a781c0ec..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/wikisql/index.html
+++ /dev/null
@@ -1,42 +0,0 @@
-
- wikisql
-
\ No newline at end of file
diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/thread_bounds.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/thread_bounds.h
deleted file mode 100644
index fd8a7d2b0e4e2fe5288efbb2e301f1a9475a9c5e..0000000000000000000000000000000000000000
--- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/thread_bounds.h
+++ /dev/null
@@ -1,74 +0,0 @@
-/*
- * Copyright 2021 Google LLC
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef LYRA_CODEC_SPARSE_MATMUL_COMPUTE_THREAD_BOUNDS_H_
-#define LYRA_CODEC_SPARSE_MATMUL_COMPUTE_THREAD_BOUNDS_H_
-
-#include <vector>
-
-namespace csrblocksparse {
-
-// Class to compute and store the bounds of each thread used in a computation,
-// and to provide corresponding spans of vectors.
-class ThreadBounds {
- public:
- ThreadBounds() : block_width_(0), block_height_(0) {}
-
- void PrepareForThreads(int block_width, int block_height, int num_threads,
- int reduced_rows_per_cache_row, int reduced_rows,
- const int* nnz_per_row);
-
- // Functions that offset the appropriate type to the start of the data
- // needed by the given thread id (|tid|).
-  template <typename WeightType>
- const WeightType* OffsetWeights(const WeightType* weights, int tid) const {
- return weights + weight_starts_[tid];
- }
-  template <typename RhsIndType>
- const RhsIndType* OffsetRhsIndices(const RhsIndType* rhs_indices,
- int tid) const {
- return rhs_indices + rhs_indices_starts_[tid];
- }
-  template <typename BiasType>
- const BiasType* OffsetBias(const BiasType* bias, int tid) const {
- return bias + bias_starts_[tid];
- }
-  template <typename OutType>
- OutType* OffsetOutput(OutType* output, int tid) const {
- return output + block_height_ * row_starts_[tid];
- }
- int StartRow(int tid) const { return row_starts_[tid]; }
-  const std::vector<int>& row_starts() const { return row_starts_; }
-
- private:
- // Computes the block row (reduced) index of the start of each thread.
- void ComputeThreadSplitPoints(int num_threads, int reduced_rows_per_cache_row,
- int reduced_rows, const int* nnz_per_row);
-
- // Sizes of a sparse block.
- int block_width_;
- int block_height_;
- // Start indices of each data type by thread-id with an extra value at the
- // end.
-  std::vector<int> row_starts_;
-  std::vector<int> weight_starts_;
-  std::vector<int> rhs_indices_starts_;
-  std::vector<int> bias_starts_;
-};
-
-} // namespace csrblocksparse
-
-#endif // LYRA_CODEC_SPARSE_MATMUL_COMPUTE_THREAD_BOUNDS_H_
diff --git a/spaces/okeefe4ai/donut-cord/README.md b/spaces/okeefe4ai/donut-cord/README.md
deleted file mode 100644
index c52a4067f61dc03f0228ebe94c2168f86dc7746e..0000000000000000000000000000000000000000
--- a/spaces/okeefe4ai/donut-cord/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Donut Cord
-emoji: 💻
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-duplicated_from: nielsr/donut-cord
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/openfoodfacts/ingredient-extraction/app.py b/spaces/openfoodfacts/ingredient-extraction/app.py
deleted file mode 100644
index beec54452818d42c053e3836e6a9463d3540aea0..0000000000000000000000000000000000000000
--- a/spaces/openfoodfacts/ingredient-extraction/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import requests
-import streamlit as st
-from annotated_text import annotated_text
-from openfoodfacts.images import generate_image_url, generate_json_ocr_url
-
-
-@st.cache_data
-def send_prediction_request(ocr_url: str, model_version: str):
- return requests.get(
- "https://robotoff.openfoodfacts.net/api/v1/predict/ingredient_list",
- params={"ocr_url": ocr_url, "model_version": model_version},
- ).json()
-
-
-def get_product(barcode: str):
- r = requests.get(f"https://world.openfoodfacts.org/api/v2/product/{barcode}")
-
- if r.status_code == 404:
- return None
-
- return r.json()["product"]
-
-
-def display_ner_tags(text: str, entities: list[dict]):
- spans = []
- previous_idx = 0
- for entity in entities:
- score = entity["score"]
- lang = entity["lang"]["lang"]
- start_idx = entity["start"]
- end_idx = entity["end"]
- spans.append(text[previous_idx:start_idx])
- spans.append((text[start_idx:end_idx], f"score={score:.3f} | lang={lang}"))
- previous_idx = end_idx
- spans.append(text[previous_idx:])
- annotated_text(spans)
-
-
-def run(
- barcode: str,
- model_version: str,
- min_threshold: float = 0.5,
-):
- product = get_product(barcode)
- st.markdown(f"[Product page](https://world.openfoodfacts.org/product/{barcode})")
-
- if not product:
- st.error(f"Product {barcode} not found")
- return
-
- images = product.get("images", [])
-
- if not images:
- st.error(f"No images found for product {barcode}")
- return
-
- for image_id, _ in images.items():
- if not image_id.isdigit():
- continue
-
- ocr_url = generate_json_ocr_url(barcode, image_id)
- prediction = send_prediction_request(ocr_url, model_version)
-
- st.divider()
- image_url = generate_image_url(barcode, image_id)
- st.markdown(f"[Image {image_id}]({image_url}), [OCR]({ocr_url})")
- st.image(image_url)
-
- if "error" in prediction:
- st.warning(f"Error: {prediction['description']}")
- continue
-
- entities = prediction["entities"]
- text = prediction["text"]
- filtered_entities = [e for e in entities if e["score"] >= min_threshold]
- display_ner_tags(text, filtered_entities)
-
-
-query_params = st.experimental_get_query_params()
-default_barcode = query_params["barcode"][0] if "barcode" in query_params else ""
-
-st.title("Ingredient extraction demo")
-st.markdown(
- "This demo leverages the ingredient entity detection model, "
-    "which takes the OCR text as input and predicts ingredient lists."
-)
-barcode = st.text_input(
- "barcode", help="Barcode of the product", value=default_barcode
-).strip()
-model_version = "1"
-st.experimental_set_query_params(barcode=barcode)
-
-threshold = st.number_input(
- "threshold",
- help="Minimum threshold for entity predictions",
- min_value=0.0,
- max_value=1.0,
- value=0.98,
-)
-
-if barcode:
- run(barcode, model_version=model_version, min_threshold=threshold)
diff --git a/spaces/ori1026/OriChatGPT/modules/llama_func.py b/spaces/ori1026/OriChatGPT/modules/llama_func.py
deleted file mode 100644
index 2b303f3457e07d51d120b10f2072489a729596ab..0000000000000000000000000000000000000000
--- a/spaces/ori1026/OriChatGPT/modules/llama_func.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import os
-import logging
-
-from llama_index import GPTSimpleVectorIndex, ServiceContext
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-from langchain.llms import OpenAI
-from langchain.chat_models import ChatOpenAI
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- logging.info(f"loading file: {file.name}")
- if os.path.splitext(file.name)[1] == ".pdf":
- logging.debug("Loading PDF...")
- pdftext = ""
- with open(file.name, 'rb') as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif os.path.splitext(file.name)[1] == ".docx":
- logging.debug("Loading DOCX...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=file.name)[0].text
- elif os.path.splitext(file.name)[1] == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=file.name)[0].text
- else:
- logging.debug("Loading text file...")
- with open(file.name, "r", encoding="utf-8") as f:
- text_raw = f.read()
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" "
-):
- os.environ["OPENAI_API_KEY"] = api_key
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- llm_predictor = LLMPredictor(
- llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key)
- )
- prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator)
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
-        logging.info("Found a cached index file, loading it...")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
-            logging.info("Building the index...")
- service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit)
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
-            logging.debug("Index built successfully!")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
-            logging.debug("Index saved to disk!")
- return index
-
- except Exception as e:
-            logging.error("Failed to build the index!", e)
- print(e)
- return None
-
-
-def chat_ai(
- api_key,
- index,
- question,
- context,
- chatbot,
- reply_language,
-):
- os.environ["OPENAI_API_KEY"] = api_key
-
- logging.info(f"Question: {question}")
-
- response, chatbot_display, status_text = ask_ai(
- api_key,
- index,
- question,
- replace_today(PROMPT_TEMPLATE),
- REFINE_TEMPLATE,
- SIM_K,
- INDEX_QUERY_TEMPRATURE,
- context,
- reply_language,
- )
- if response is None:
-        status_text = "Query failed. Please try rephrasing the question."
-        return context, chatbot, status_text
-
- context.append({"role": "user", "content": question})
- context.append({"role": "assistant", "content": response})
- chatbot.append((question, chatbot_display))
-
- os.environ["OPENAI_API_KEY"] = ""
- return context, chatbot, status_text
-
-
-def ask_ai(
- api_key,
- index,
- question,
- prompt_tmpl,
- refine_tmpl,
- sim_k=5,
- temprature=0,
- prefix_messages=[],
- reply_language="中文",
-):
- os.environ["OPENAI_API_KEY"] = api_key
-
- logging.debug("Index file found")
- logging.debug("Querying index...")
- llm_predictor = LLMPredictor(
- llm=ChatOpenAI(
- temperature=temprature,
- model_name="gpt-3.5-turbo-0301",
- prefix_messages=prefix_messages,
- )
- )
-
- response = None # Initialize response variable to avoid UnboundLocalError
- qa_prompt = QuestionAnswerPrompt(prompt_tmpl.replace("{reply_language}", reply_language))
- rf_prompt = RefinePrompt(refine_tmpl.replace("{reply_language}", reply_language))
- response = index.query(
- question,
- similarity_top_k=sim_k,
- text_qa_template=qa_prompt,
- refine_template=rf_prompt,
- response_mode="compact",
- )
-
- if response is not None:
- logging.info(f"Response: {response}")
- ret_text = response.response
- nodes = []
- for index, node in enumerate(response.source_nodes):
- brief = node.source_text[:25].replace("\n", "")
- nodes.append(
-                f"[{index + 1}]\t{brief}...\n{node.source_text}"
- )
- new_response = ret_text + "\n----------\n" + "\n\n".join(nodes)
- logging.info(
- f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}"
- )
- os.environ["OPENAI_API_KEY"] = ""
-        return ret_text, new_response, f"The query used {llm_predictor.last_token_usage} tokens"
- else:
- logging.warning("No response found, returning None")
- os.environ["OPENAI_API_KEY"] = ""
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
diff --git a/spaces/os1187/free-fast-youtube-url-video-to-text-using-openai-whisper/README.md b/spaces/os1187/free-fast-youtube-url-video-to-text-using-openai-whisper/README.md
deleted file mode 100644
index 91fee5f0ba5ebfa79d31018024eb4f8d2d653976..0000000000000000000000000000000000000000
--- a/spaces/os1187/free-fast-youtube-url-video-to-text-using-openai-whisper/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Free Youtube Url Video To Text Using Openai Whisper
-emoji: 📉
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
-duplicated_from: SteveDigital/free-fast-youtube-url-video-to-text-using-openai-whisper
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/pinecone/yt-search/app.py b/spaces/pinecone/yt-search/app.py
deleted file mode 100644
index 795ea2fc80c8af8e4ffb5bb671f0c8e6ade409a1..0000000000000000000000000000000000000000
--- a/spaces/pinecone/yt-search/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import streamlit as st
-import pinecone
-from sentence_transformers import SentenceTransformer
-
-PINECONE_KEY = st.secrets["PINECONE_KEY"] # app.pinecone.io
-
-@st.experimental_singleton
-def init_pinecone():
- pinecone.init(api_key=PINECONE_KEY, environment="us-west1-gcp")
- return pinecone.Index('youtube-search')
-
-@st.experimental_singleton
-def init_retriever():
- return SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_mpnet-base')
-
-index = init_pinecone()
-retriever = init_retriever()
-
-def card(thumbnail, title, url, context):
- return st.markdown(f"""
-
-
-
-
-
-
- """, unsafe_allow_html=True)
-
-
-st.write("""
-# YouTube Q&A
-Ask me a question!
-""")
-
-st.markdown("""
-
-""", unsafe_allow_html=True)
-
-query = st.text_input("Search!", "")
-
-if query != "":
- xq = retriever.encode([query]).tolist()
- xc = index.query(xq, top_k=5, include_metadata=True)
-
- for context in xc['matches']:
- card(
- context['metadata']['thumbnail'],
- context['metadata']['title'],
- context['metadata']['url'],
- context['metadata']['text']
- )
\ No newline at end of file
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/proxy.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/proxy.py
deleted file mode 100644
index 2199cc7b7f004009493d032720c36d6568f9d89e..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/proxy.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from .ssl_ import create_urllib3_context, resolve_cert_reqs, resolve_ssl_version
-
-
-def connection_requires_http_tunnel(
- proxy_url=None, proxy_config=None, destination_scheme=None
-):
- """
- Returns True if the connection requires an HTTP CONNECT through the proxy.
-
- :param URL proxy_url:
- URL of the proxy.
- :param ProxyConfig proxy_config:
- Proxy configuration from poolmanager.py
- :param str destination_scheme:
- The scheme of the destination. (i.e https, http, etc)
- """
- # If we're not using a proxy, no way to use a tunnel.
- if proxy_url is None:
- return False
-
- # HTTP destinations never require tunneling, we always forward.
- if destination_scheme == "http":
- return False
-
- # Support for forwarding with HTTPS proxies and HTTPS destinations.
- if (
- proxy_url.scheme == "https"
- and proxy_config
- and proxy_config.use_forwarding_for_https
- ):
- return False
-
- # Otherwise always use a tunnel.
- return True
-
-
-def create_proxy_ssl_context(
- ssl_version, cert_reqs, ca_certs=None, ca_cert_dir=None, ca_cert_data=None
-):
- """
- Generates a default proxy ssl context if one hasn't been provided by the
- user.
- """
- ssl_context = create_urllib3_context(
- ssl_version=resolve_ssl_version(ssl_version),
- cert_reqs=resolve_cert_reqs(cert_reqs),
- )
-
- if (
- not ca_certs
- and not ca_cert_dir
- and not ca_cert_data
- and hasattr(ssl_context, "load_default_certs")
- ):
- ssl_context.load_default_certs()
-
- return ssl_context
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_log.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_log.py
deleted file mode 100644
index 4a2ae0acb865a3f1c36d00730cd076a12e5d031d..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/_log.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import logging
-
-
-log = logging.getLogger()
diff --git a/spaces/pngwn/nextjs/out/_next/static/chunks/pages/_error-6679c1efb053c50e.js b/spaces/pngwn/nextjs/out/_next/static/chunks/pages/_error-6679c1efb053c50e.js
deleted file mode 100644
index cf8458cbe6b46509ad14c090e5d7bd2eed9265ce..0000000000000000000000000000000000000000
--- a/spaces/pngwn/nextjs/out/_next/static/chunks/pages/_error-6679c1efb053c50e.js
+++ /dev/null
@@ -1 +0,0 @@
-(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[820],{4977:function(n,_,u){(window.__NEXT_P=window.__NEXT_P||[]).push(["/_error",function(){return u(9185)}])}},function(n){n.O(0,[888,774,179],(function(){return _=4977,n(n.s=_);var _}));var _=n.O();_N_E=_}]);
\ No newline at end of file
diff --git a/spaces/politweet-sh/politweet/docs/make.bat b/spaces/politweet-sh/politweet/docs/make.bat
deleted file mode 100644
index 061f32f91b96f05f8fa2b52b2edcdcc19aa33d50..0000000000000000000000000000000000000000
--- a/spaces/politweet-sh/politweet/docs/make.bat
+++ /dev/null
@@ -1,35 +0,0 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
- set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=source
-set BUILDDIR=build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
- echo.
- echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
- echo.installed, then set the SPHINXBUILD environment variable to point
- echo.to the full path of the 'sphinx-build' executable. Alternatively you
- echo.may add the Sphinx directory to PATH.
- echo.
- echo.If you don't have Sphinx installed, grab it from
- echo.https://www.sphinx-doc.org/
- exit /b 1
-)
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
diff --git a/spaces/ppsantiago/chatGPT/modules/utils.py b/spaces/ppsantiago/chatGPT/modules/utils.py
deleted file mode 100644
index ef8963d19b16e187a3381b85325d74a1a3562d64..0000000000000000000000000000000000000000
--- a/spaces/ppsantiago/chatGPT/modules/utils.py
+++ /dev/null
@@ -1,520 +0,0 @@
-# -*- coding:utf-8 -*-
-from __future__ import annotations
-from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type
-import logging
-import json
-import os
-import datetime
-import hashlib
-import csv
-import requests
-import re
-import html
-import sys
-import subprocess
-
-import gradio as gr
-from pypinyin import lazy_pinyin
-import tiktoken
-import mdtex2html
-from markdown import markdown
-from pygments import highlight
-from pygments.lexers import get_lexer_by_name
-from pygments.formatters import HtmlFormatter
-
-from modules.presets import *
-import modules.shared as shared
-
-logging.basicConfig(
- level=logging.INFO,
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
-)
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class DataframeData(TypedDict):
- headers: List[str]
- data: List[List[str | int | bool]]
-
-
-def count_token(message):
- encoding = tiktoken.get_encoding("cl100k_base")
- input_str = f"role: {message['role']}, content: {message['content']}"
- length = len(encoding.encode(input_str))
- return length
-
-
-def markdown_to_html_with_syntax_highlight(md_str):
- def replacer(match):
- lang = match.group(1) or "text"
- code = match.group(2)
-
- try:
- lexer = get_lexer_by_name(lang, stripall=True)
- except ValueError:
- lexer = get_lexer_by_name("text", stripall=True)
-
- formatter = HtmlFormatter()
- highlighted_code = highlight(code, lexer, formatter)
-
-        return f'<pre><code class="{lang}">{highlighted_code}</code></pre>'
-
- code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```"
- md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE)
-
- html_str = markdown(md_str)
- return html_str
-
-
-def normalize_markdown(md_text: str) -> str:
- lines = md_text.split("\n")
- normalized_lines = []
- inside_list = False
-
- for i, line in enumerate(lines):
- if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()):
- if not inside_list and i > 0 and lines[i - 1].strip() != "":
- normalized_lines.append("")
- inside_list = True
- normalized_lines.append(line)
- elif inside_list and line.strip() == "":
- if i < len(lines) - 1 and not re.match(
- r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip()
- ):
- normalized_lines.append(line)
- continue
- else:
- inside_list = False
- normalized_lines.append(line)
-
- return "\n".join(normalized_lines)
-
-
-def convert_mdtext(md_text):
- code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL)
- inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL)
- code_blocks = code_block_pattern.findall(md_text)
- non_code_parts = code_block_pattern.split(md_text)[::2]
-
- result = []
- for non_code, code in zip(non_code_parts, code_blocks + [""]):
- if non_code.strip():
- non_code = normalize_markdown(non_code)
- if inline_code_pattern.search(non_code):
- result.append(markdown(non_code, extensions=["tables"]))
- else:
- result.append(mdtex2html.convert(non_code, extensions=["tables"]))
- if code.strip():
-            # _, code = detect_language(code)  # code highlighting temporarily disabled: it misbehaves on large code blocks
-            # code = code.replace("\n\n", "\n")  # blank-line stripping temporarily disabled: it misbehaves on large code blocks
- code = f"\n```{code}\n\n```"
- code = markdown_to_html_with_syntax_highlight(code)
- result.append(code)
- result = "".join(result)
- result += ALREADY_CONVERTED_MARK
- return result
-
-
-def convert_asis(userinput):
- return (
-        f'<p style="white-space:pre-wrap;">{html.escape(userinput)}</p>'
- + ALREADY_CONVERTED_MARK
- )
-
-
-def detect_converted_mark(userinput):
- if userinput.endswith(ALREADY_CONVERTED_MARK):
- return True
- else:
- return False
-
-
-def detect_language(code):
- if code.startswith("\n"):
- first_line = ""
- else:
- first_line = code.strip().split("\n", 1)[0]
- language = first_line.lower() if first_line else ""
- code_without_language = code[len(first_line) :].lstrip() if first_line else code
- return language, code_without_language
-
-
-def construct_text(role, text):
- return {"role": role, "content": text}
-
-
-def construct_user(text):
- return construct_text("user", text)
-
-
-def construct_system(text):
- return construct_text("system", text)
-
-
-def construct_assistant(text):
- return construct_text("assistant", text)
-
-
-def construct_token_message(token, stream=False):
- return f"Token 计数: {token}"
-
-
-def delete_first_conversation(history, previous_token_count):
- if history:
- del history[:2]
- del previous_token_count[0]
- return (
- history,
- previous_token_count,
- construct_token_message(sum(previous_token_count)),
- )
-
-
-def delete_last_conversation(chatbot, history, previous_token_count):
- if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]:
- logging.info("由于包含报错信息,只删除chatbot记录")
- chatbot.pop()
- return chatbot, history
- if len(history) > 0:
- logging.info("删除了一组对话历史")
- history.pop()
- history.pop()
- if len(chatbot) > 0:
- logging.info("删除了一组chatbot对话")
- chatbot.pop()
- if len(previous_token_count) > 0:
- logging.info("删除了一组对话的token计数记录")
- previous_token_count.pop()
- return (
- chatbot,
- history,
- previous_token_count,
- construct_token_message(sum(previous_token_count)),
- )
-
-
-def save_file(filename, system, history, chatbot):
- logging.info("保存对话历史中……")
- os.makedirs(HISTORY_DIR, exist_ok=True)
- if filename.endswith(".json"):
- json_s = {"system": system, "history": history, "chatbot": chatbot}
- print(json_s)
- with open(os.path.join(HISTORY_DIR, filename), "w") as f:
- json.dump(json_s, f)
- elif filename.endswith(".md"):
- md_s = f"system: \n- {system} \n"
- for data in history:
- md_s += f"\n{data['role']}: \n- {data['content']} \n"
- with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f:
- f.write(md_s)
- logging.info("保存对话历史完毕")
- return os.path.join(HISTORY_DIR, filename)
-
-
-def save_chat_history(filename, system, history, chatbot):
- if filename == "":
- return
- if not filename.endswith(".json"):
- filename += ".json"
- return save_file(filename, system, history, chatbot)
-
-
-def export_markdown(filename, system, history, chatbot):
- if filename == "":
- return
- if not filename.endswith(".md"):
- filename += ".md"
- return save_file(filename, system, history, chatbot)
-
-
-def load_chat_history(filename, system, history, chatbot):
- logging.info("加载对话历史中……")
- if type(filename) != str:
- filename = filename.name
- try:
- with open(os.path.join(HISTORY_DIR, filename), "r") as f:
- json_s = json.load(f)
- try:
- if type(json_s["history"][0]) == str:
- logging.info("历史记录格式为旧版,正在转换……")
- new_history = []
- for index, item in enumerate(json_s["history"]):
- if index % 2 == 0:
- new_history.append(construct_user(item))
- else:
- new_history.append(construct_assistant(item))
- json_s["history"] = new_history
- logging.info(new_history)
- except:
-            # no conversation history to convert
- pass
- logging.info("加载对话历史完毕")
- return filename, json_s["system"], json_s["history"], json_s["chatbot"]
- except FileNotFoundError:
- logging.info("没有找到对话历史文件,不执行任何操作")
- return filename, system, history, chatbot
-
-
-def sorted_by_pinyin(list):
- return sorted(list, key=lambda char: lazy_pinyin(char)[0][0])
-
-
-def get_file_names(dir, plain=False, filetypes=[".json"]):
- logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}")
- files = []
- try:
- for type in filetypes:
- files += [f for f in os.listdir(dir) if f.endswith(type)]
- except FileNotFoundError:
- files = []
- files = sorted_by_pinyin(files)
- if files == []:
- files = [""]
- if plain:
- return files
- else:
- return gr.Dropdown.update(choices=files)
-
-
-def get_history_names(plain=False):
- logging.info("获取历史记录文件名列表")
- return get_file_names(HISTORY_DIR, plain)
-
-
-def load_template(filename, mode=0):
- logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)")
- lines = []
- logging.info("Loading template...")
- if filename.endswith(".json"):
- with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f:
- lines = json.load(f)
- lines = [[i["act"], i["prompt"]] for i in lines]
- else:
- with open(
- os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8"
- ) as csvfile:
- reader = csv.reader(csvfile)
- lines = list(reader)
- lines = lines[1:]
- if mode == 1:
- return sorted_by_pinyin([row[0] for row in lines])
- elif mode == 2:
- return {row[0]: row[1] for row in lines}
- else:
- choices = sorted_by_pinyin([row[0] for row in lines])
- return {row[0]: row[1] for row in lines}, gr.Dropdown.update(
- choices=choices, value=choices[0]
- )
-
-
-def get_template_names(plain=False):
- logging.info("获取模板文件名列表")
- return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"])
-
-
-def get_template_content(templates, selection, original_system_prompt):
- logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}")
- try:
- return templates[selection]
- except:
- return original_system_prompt
-
-
-def reset_state():
- logging.info("重置状态")
- return [], [], [], construct_token_message(0)
-
-
-def reset_textbox():
- logging.debug("重置文本框")
- return gr.update(value="")
-
-
-def reset_default():
- newurl = shared.state.reset_api_url()
- os.environ.pop("HTTPS_PROXY", None)
- os.environ.pop("https_proxy", None)
- return gr.update(value=newurl), gr.update(value=""), "API URL 和代理已重置"
-
-
-def change_api_url(url):
- shared.state.set_api_url(url)
- msg = f"API地址更改为了{url}"
- logging.info(msg)
- return msg
-
-
-def change_proxy(proxy):
- os.environ["HTTPS_PROXY"] = proxy
- msg = f"代理更改为了{proxy}"
- logging.info(msg)
- return msg
-
-
-def hide_middle_chars(s):
- if s is None:
- return ""
- if len(s) <= 8:
- return s
- else:
- head = s[:4]
- tail = s[-4:]
- hidden = "*" * (len(s) - 8)
- return head + hidden + tail
-
-
-def submit_key(key):
- key = key.strip()
- msg = f"API密钥更改为了{hide_middle_chars(key)}"
- logging.info(msg)
- return key, msg
-
-
-def replace_today(prompt):
- today = datetime.datetime.today().strftime("%Y-%m-%d")
- return prompt.replace("{current_date}", today)
-
-
-def get_geoip():
- try:
- response = requests.get("https://ipapi.co/json/", timeout=5)
- data = response.json()
- except:
- data = {"error": True, "reason": "连接ipapi失败"}
- if "error" in data.keys():
- logging.warning(f"无法获取IP地址信息。\n{data}")
- if data["reason"] == "RateLimited":
- return (
- f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用。"
- )
- else:
- return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。"
- else:
- country = data["country_name"]
- if country == "China":
- text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**"
- else:
- text = f"您的IP区域:{country}。"
- logging.info(text)
- return text
-
-
-def find_n(lst, max_num):
- n = len(lst)
- total = sum(lst)
-
- if total < max_num:
- return n
-
- for i in range(len(lst)):
- if total - lst[i] < max_num:
- return n - i - 1
- total = total - lst[i]
- return 1
-
-
-def start_outputing():
- logging.debug("显示取消按钮,隐藏发送按钮")
- return gr.Button.update(visible=False), gr.Button.update(visible=True)
-
-
-def end_outputing():
- return (
- gr.Button.update(visible=True),
- gr.Button.update(visible=False),
- )
-
-
-def cancel_outputing():
- logging.info("中止输出……")
- shared.state.interrupt()
-
-
-def transfer_input(inputs):
-    # return everything at once to reduce latency
- textbox = reset_textbox()
- outputing = start_outputing()
- return (
- inputs,
- gr.update(value=""),
- gr.Button.update(visible=True),
- gr.Button.update(visible=False),
- )
-
-
-def get_proxies():
-    # read proxy settings from the environment variables
- http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
- https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy")
-
-    # if proxy settings exist, use them
- proxies = {}
- if http_proxy:
- logging.info(f"使用 HTTP 代理: {http_proxy}")
- proxies["http"] = http_proxy
- if https_proxy:
- logging.info(f"使用 HTTPS 代理: {https_proxy}")
- proxies["https"] = https_proxy
-
- if proxies == {}:
- proxies = None
-
- return proxies
-
-def run(command, desc=None, errdesc=None, custom_env=None, live=False):
- if desc is not None:
- print(desc)
- if live:
- result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env)
- if result.returncode != 0:
- raise RuntimeError(f"""{errdesc or 'Error running command'}.
-Command: {command}
-Error code: {result.returncode}""")
-
- return ""
- result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env)
- if result.returncode != 0:
- message = f"""{errdesc or 'Error running command'}.
-Command: {command}
-Error code: {result.returncode}
-stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''}
-stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''}
-"""
- raise RuntimeError(message)
- return result.stdout.decode(encoding="utf8", errors="ignore")
-
-def versions_html():
- git = os.environ.get('GIT', "git")
- python_version = ".".join([str(x) for x in sys.version_info[0:3]])
- try:
- commit_hash = run(f"{git} rev-parse HEAD").strip()
- except Exception:
- commit_hash = ""
- if commit_hash != "":
- short_commit = commit_hash[0:7]
- commit_info = f"{short_commit}"
- else:
- commit_info = "unknown \U0001F615"
- return f"""
-Python: {python_version}
- •
-Gradio: {gr.__version__}
- •
-Commit: {commit_info}
-"""
-
-def add_source_numbers(lst, source_name = "Source", use_source = True):
- if use_source:
- return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)]
- else:
- return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)]
-
-def add_details(lst):
- nodes = []
- for index, txt in enumerate(lst):
- brief = txt[:25].replace("\n", "")
- nodes.append(
-            f"<details><summary>{brief}...</summary><p>{txt}</p></details>"
- )
- return nodes
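
The deleted `modules/utils.py` mixes Gradio callbacks with small pure helpers. As a rough, self-contained illustration of two of those helpers (re-typed from the code above rather than imported, since the module pulls in the Space's own `modules.presets`), here is how the API-key masking and the token-window trimming behave:

```python
def hide_middle_chars(s):
    # Keep the first and last four characters of a secret, mask the rest.
    if s is None:
        return ""
    if len(s) <= 8:
        return s
    return s[:4] + "*" * (len(s) - 8) + s[-4:]

def find_n(token_counts, max_tokens):
    # Drop the oldest conversations until the remaining token total fits the budget.
    total = sum(token_counts)
    if total < max_tokens:
        return len(token_counts)
    for i, count in enumerate(token_counts):
        if total - count < max_tokens:
            return len(token_counts) - i - 1
        total -= count
    return 1

print(hide_middle_chars("sk-abcdefghijklmnop"))  # first/last 4 chars kept, middle masked
print(find_n([800, 700, 600, 500], 2000))        # 3 -> keep only the last three turns
```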
diff --git a/spaces/priyankasharma5882/Breed_Classification/app.py b/spaces/priyankasharma5882/Breed_Classification/app.py
deleted file mode 100644
index 7a8952375a1e7a76d7bb366b2a7b8659cf6a9e0a..0000000000000000000000000000000000000000
--- a/spaces/priyankasharma5882/Breed_Classification/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import tensorflow as tf
-import gradio as gr
-import cv2
-import numpy as np
-
-new_model = tf.keras.models.load_model('breedclassification.h5')
-
-def predict_classes(link):
- img = cv2.resize(link,(224,224))
- img = img/255
- img = img.reshape(-1,224,224,3)
- pred = np.round(new_model.predict(img)).argmax(axis = 1)
-    dic = {0: 'Herding breed', 1: 'Hound breed', 2: 'Non sporting breed', 3: 'Terrier breed', 4: 'working breed', 5: 'sporting breed', 6: 'toy breed'}
- print(dic.get(int(pred)))
- a = dic.get(int(pred))
- return a
-
-label = gr.outputs.Label(num_top_classes=7)
-gr.Interface(fn=predict_classes, inputs='image', outputs=label, interpretation='default', title = 'Breed Classification', description = 'It classifies 7 different breed groups (you can drag in images from Google): 1. Terrier 2. Toy 3. Working 4. Sporting 5. Hound 6. Herding 7. Non sporting Group').launch()
\ No newline at end of file
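
The prediction function above resizes the input to 224×224, scales pixels to [0, 1], and adds a batch dimension before calling the Keras model. A minimal sketch of just that preprocessing step on a dummy image (no `breedclassification.h5` needed; NumPy and `opencv-python` are assumed to be installed):

```python
import numpy as np
import cv2  # assumes opencv-python is available

# Dummy BGR image standing in for the array Gradio passes to predict_classes.
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

img = cv2.resize(img, (224, 224))      # match the model's expected input resolution
img = img / 255.0                      # scale pixel values to [0, 1]
batch = img.reshape(-1, 224, 224, 3)   # add the batch dimension expected by Keras

print(batch.shape, batch.dtype)        # (1, 224, 224, 3) float64
```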
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/base_protocol.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/base_protocol.py
deleted file mode 100644
index 4c9f0a752e3aa833a17b7adf0c261d19a5f083fa..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/base_protocol.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import asyncio
-from typing import Optional, cast
-
-from .tcp_helpers import tcp_nodelay
-
-
-class BaseProtocol(asyncio.Protocol):
- __slots__ = (
- "_loop",
- "_paused",
- "_drain_waiter",
- "_connection_lost",
- "_reading_paused",
- "transport",
- )
-
- def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
- self._loop: asyncio.AbstractEventLoop = loop
- self._paused = False
- self._drain_waiter: Optional[asyncio.Future[None]] = None
- self._reading_paused = False
-
- self.transport: Optional[asyncio.Transport] = None
-
- @property
- def connected(self) -> bool:
- """Return True if the connection is open."""
- return self.transport is not None
-
- def pause_writing(self) -> None:
- assert not self._paused
- self._paused = True
-
- def resume_writing(self) -> None:
- assert self._paused
- self._paused = False
-
- waiter = self._drain_waiter
- if waiter is not None:
- self._drain_waiter = None
- if not waiter.done():
- waiter.set_result(None)
-
- def pause_reading(self) -> None:
- if not self._reading_paused and self.transport is not None:
- try:
- self.transport.pause_reading()
- except (AttributeError, NotImplementedError, RuntimeError):
- pass
- self._reading_paused = True
-
- def resume_reading(self) -> None:
- if self._reading_paused and self.transport is not None:
- try:
- self.transport.resume_reading()
- except (AttributeError, NotImplementedError, RuntimeError):
- pass
- self._reading_paused = False
-
- def connection_made(self, transport: asyncio.BaseTransport) -> None:
- tr = cast(asyncio.Transport, transport)
- tcp_nodelay(tr, True)
- self.transport = tr
-
- def connection_lost(self, exc: Optional[BaseException]) -> None:
- # Wake up the writer if currently paused.
- self.transport = None
- if not self._paused:
- return
- waiter = self._drain_waiter
- if waiter is None:
- return
- self._drain_waiter = None
- if waiter.done():
- return
- if exc is None:
- waiter.set_result(None)
- else:
- waiter.set_exception(exc)
-
- async def _drain_helper(self) -> None:
- if not self.connected:
- raise ConnectionResetError("Connection lost")
- if not self._paused:
- return
- waiter = self._drain_waiter
- if waiter is None:
- waiter = self._loop.create_future()
- self._drain_waiter = waiter
- await asyncio.shield(waiter)
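
The deleted `BaseProtocol` implements asyncio's standard write flow-control hand-off: `pause_writing` sets a flag, `_drain_helper` parks writers on a future, and `resume_writing` releases them. A stripped-down, standalone sketch of that hand-off (not aiohttp's actual class, just the pattern):

```python
import asyncio

class FlowControl:
    def __init__(self):
        self._paused = False
        self._drain_waiter = None

    def pause_writing(self):
        self._paused = True

    def resume_writing(self):
        self._paused = False
        waiter, self._drain_waiter = self._drain_waiter, None
        if waiter is not None and not waiter.done():
            waiter.set_result(None)  # wake everything awaiting drain()

    async def drain(self):
        if not self._paused:
            return  # fast path: transport buffer has room
        if self._drain_waiter is None:
            self._drain_waiter = asyncio.get_running_loop().create_future()
        await asyncio.shield(self._drain_waiter)

async def main():
    fc = FlowControl()
    fc.pause_writing()
    asyncio.get_running_loop().call_later(0.05, fc.resume_writing)
    await fc.drain()  # returns once resume_writing() has run
    print("resumed")

asyncio.run(main())
```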
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/async_timeout/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/async_timeout/__init__.py
deleted file mode 100644
index 1ffb069fce9b2b9a03515404155a7e5cc439484a..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/async_timeout/__init__.py
+++ /dev/null
@@ -1,239 +0,0 @@
-import asyncio
-import enum
-import sys
-import warnings
-from types import TracebackType
-from typing import Optional, Type
-
-
-if sys.version_info >= (3, 8):
- from typing import final
-else:
- from typing_extensions import final
-
-
-if sys.version_info >= (3, 11):
-
- def _uncancel_task(task: "asyncio.Task[object]") -> None:
- task.uncancel()
-
-else:
-
- def _uncancel_task(task: "asyncio.Task[object]") -> None:
- pass
-
-
-__version__ = "4.0.3"
-
-
-__all__ = ("timeout", "timeout_at", "Timeout")
-
-
-def timeout(delay: Optional[float]) -> "Timeout":
- """timeout context manager.
-
- Useful in cases when you want to apply timeout logic around block
- of code or in cases when asyncio.wait_for is not suitable. For example:
-
- >>> async with timeout(0.001):
- ... async with aiohttp.get('https://github.com') as r:
- ... await r.text()
-
-
- delay - value in seconds or None to disable timeout logic
- """
- loop = asyncio.get_running_loop()
- if delay is not None:
- deadline = loop.time() + delay # type: Optional[float]
- else:
- deadline = None
- return Timeout(deadline, loop)
-
-
-def timeout_at(deadline: Optional[float]) -> "Timeout":
- """Schedule the timeout at absolute time.
-
- deadline argument points on the time in the same clock system
- as loop.time().
-
- Please note: it is not POSIX time but a time with
- undefined starting base, e.g. the time of the system power on.
-
- >>> async with timeout_at(loop.time() + 10):
- ... async with aiohttp.get('https://github.com') as r:
- ... await r.text()
-
-
- """
- loop = asyncio.get_running_loop()
- return Timeout(deadline, loop)
-
-
-class _State(enum.Enum):
- INIT = "INIT"
- ENTER = "ENTER"
- TIMEOUT = "TIMEOUT"
- EXIT = "EXIT"
-
-
-@final
-class Timeout:
- # Internal class, please don't instantiate it directly
- # Use timeout() and timeout_at() public factories instead.
- #
- # Implementation note: `async with timeout()` is preferred
- # over `with timeout()`.
- # While technically the Timeout class implementation
- # doesn't need to be async at all,
- # the `async with` statement explicitly points that
- # the context manager should be used from async function context.
- #
- # This design allows to avoid many silly misusages.
- #
- # TimeoutError is raised immediately when scheduled
- # if the deadline is passed.
- # The purpose is to time out as soon as possible
- # without waiting for the next await expression.
-
- __slots__ = ("_deadline", "_loop", "_state", "_timeout_handler", "_task")
-
- def __init__(
- self, deadline: Optional[float], loop: asyncio.AbstractEventLoop
- ) -> None:
- self._loop = loop
- self._state = _State.INIT
-
- self._task: Optional["asyncio.Task[object]"] = None
- self._timeout_handler = None # type: Optional[asyncio.Handle]
- if deadline is None:
- self._deadline = None # type: Optional[float]
- else:
- self.update(deadline)
-
- def __enter__(self) -> "Timeout":
- warnings.warn(
- "with timeout() is deprecated, use async with timeout() instead",
- DeprecationWarning,
- stacklevel=2,
- )
- self._do_enter()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- self._do_exit(exc_type)
- return None
-
- async def __aenter__(self) -> "Timeout":
- self._do_enter()
- return self
-
- async def __aexit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> Optional[bool]:
- self._do_exit(exc_type)
- return None
-
- @property
- def expired(self) -> bool:
- """Is timeout expired during execution?"""
- return self._state == _State.TIMEOUT
-
- @property
- def deadline(self) -> Optional[float]:
- return self._deadline
-
- def reject(self) -> None:
- """Reject scheduled timeout if any."""
- # cancel is maybe better name but
- # task.cancel() raises CancelledError in asyncio world.
- if self._state not in (_State.INIT, _State.ENTER):
- raise RuntimeError(f"invalid state {self._state.value}")
- self._reject()
-
- def _reject(self) -> None:
- self._task = None
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
- self._timeout_handler = None
-
- def shift(self, delay: float) -> None:
- """Advance timeout on delay seconds.
-
- The delay can be negative.
-
- Raise RuntimeError if shift is called when deadline is not scheduled
- """
- deadline = self._deadline
- if deadline is None:
- raise RuntimeError("cannot shift timeout if deadline is not scheduled")
- self.update(deadline + delay)
-
- def update(self, deadline: float) -> None:
- """Set deadline to absolute value.
-
- deadline argument points on the time in the same clock system
- as loop.time().
-
- If new deadline is in the past the timeout is raised immediately.
-
- Please note: it is not POSIX time but a time with
- undefined starting base, e.g. the time of the system power on.
- """
- if self._state == _State.EXIT:
- raise RuntimeError("cannot reschedule after exit from context manager")
- if self._state == _State.TIMEOUT:
- raise RuntimeError("cannot reschedule expired timeout")
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
- self._deadline = deadline
- if self._state != _State.INIT:
- self._reschedule()
-
- def _reschedule(self) -> None:
- assert self._state == _State.ENTER
- deadline = self._deadline
- if deadline is None:
- return
-
- now = self._loop.time()
- if self._timeout_handler is not None:
- self._timeout_handler.cancel()
-
- self._task = asyncio.current_task()
- if deadline <= now:
- self._timeout_handler = self._loop.call_soon(self._on_timeout)
- else:
- self._timeout_handler = self._loop.call_at(deadline, self._on_timeout)
-
- def _do_enter(self) -> None:
- if self._state != _State.INIT:
- raise RuntimeError(f"invalid state {self._state.value}")
- self._state = _State.ENTER
- self._reschedule()
-
- def _do_exit(self, exc_type: Optional[Type[BaseException]]) -> None:
- if exc_type is asyncio.CancelledError and self._state == _State.TIMEOUT:
- assert self._task is not None
- _uncancel_task(self._task)
- self._timeout_handler = None
- self._task = None
- raise asyncio.TimeoutError
- # timeout has not expired
- self._state = _State.EXIT
- self._reject()
- return None
-
- def _on_timeout(self) -> None:
- assert self._task is not None
- self._task.cancel()
- self._state = _State.TIMEOUT
- # drop the reference early
- self._timeout_handler = None
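
Typical use of the `async_timeout` package whose vendored copy is deleted above (assuming `async_timeout` is installed as a regular dependency):

```python
import asyncio
from async_timeout import timeout

async def main():
    try:
        async with timeout(0.1) as cm:  # deadline 0.1 s from now
            await asyncio.sleep(1)      # takes longer than the deadline
    except asyncio.TimeoutError:
        print("expired:", cm.expired)   # True: the block was cancelled by the timeout

asyncio.run(main())
```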
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-232a3213.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-232a3213.js
deleted file mode 100644
index 461d8df6ed7bc9d3919f2f6259a1bc25718d452e..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-232a3213.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import"./Index-37584f50.js";import"./Button-89057c03.js";const{SvelteComponent:U,append:g,attr:s,detach:j,init:B,insert:H,noop:w,safe_not_equal:M,svg_element:p}=window.__gradio__svelte__internal;function T(i){let e,l,o,a;return{c(){e=p("svg"),l=p("path"),o=p("polyline"),a=p("line"),s(l,"d","M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"),s(o,"points","17 8 12 3 7 8"),s(a,"x1","12"),s(a,"y1","3"),s(a,"x2","12"),s(a,"y2","15"),s(e,"xmlns","http://www.w3.org/2000/svg"),s(e,"width","90%"),s(e,"height","90%"),s(e,"viewBox","0 0 24 24"),s(e,"fill","none"),s(e,"stroke","currentColor"),s(e,"stroke-width","2"),s(e,"stroke-linecap","round"),s(e,"stroke-linejoin","round"),s(e,"class","feather feather-upload")},m(r,_){H(r,e,_),g(e,l),g(e,o),g(e,a)},p:w,i:w,o:w,d(r){r&&j(e)}}}class x extends U{constructor(e){super(),B(this,e,null,T,M,{})}}const{SvelteComponent:z,append:f,attr:k,create_component:A,destroy_component:D,detach:v,element:y,init:E,insert:h,mount_component:F,safe_not_equal:G,set_data:b,space:C,text:c,toggle_class:q,transition_in:I,transition_out:J}=window.__gradio__svelte__internal;function S(i){let e,l,o=i[1]("common.or")+"",a,r,_,u=(i[2]||i[1]("upload_text.click_to_upload"))+"",d;return{c(){e=y("span"),l=c("- "),a=c(o),r=c(" -"),_=C(),d=c(u),k(e,"class","or svelte-b0hvie")},m(t,n){h(t,e,n),f(e,l),f(e,a),f(e,r),h(t,_,n),h(t,d,n)},p(t,n){n&2&&o!==(o=t[1]("common.or")+"")&&b(a,o),n&6&&u!==(u=(t[2]||t[1]("upload_text.click_to_upload"))+"")&&b(d,u)},d(t){t&&(v(e),v(_),v(d))}}}function K(i){let e,l,o,a,r=i[1](i[5][i[0]]||i[5].file)+"",_,u,d;o=new x({});let t=i[3]!=="short"&&S(i);return{c(){e=y("div"),l=y("span"),A(o.$$.fragment),a=C(),_=c(r),u=C(),t&&t.c(),k(l,"class","icon-wrap svelte-b0hvie"),q(l,"hovered",i[4]),k(e,"class","wrap svelte-b0hvie")},m(n,m){h(n,e,m),f(e,l),F(o,l,null),f(e,a),f(e,_),f(e,u),t&&t.m(e,null),d=!0},p(n,[m]){(!d||m&16)&&q(l,"hovered",n[4]),(!d||m&3)&&r!==(r=n[1](n[5][n[0]]||n[5].file)+"")&&b(_,r),n[3]!=="short"?t?t.p(n,m):(t=S(n),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},i(n){d||(I(o.$$.fragment,n),d=!0)},o(n){J(o.$$.fragment,n),d=!1},d(n){n&&v(e),D(o),t&&t.d()}}}function L(i,e,l){let{type:o="file"}=e,{i18n:a}=e,{message:r=void 0}=e,{mode:_="full"}=e,{hovered:u=!1}=e;const d={image:"upload_text.drop_image",video:"upload_text.drop_video",audio:"upload_text.drop_audio",file:"upload_text.drop_file",csv:"upload_text.drop_csv"};return i.$$set=t=>{"type"in t&&l(0,o=t.type),"i18n"in t&&l(1,a=t.i18n),"message"in t&&l(2,r=t.message),"mode"in t&&l(3,_=t.mode),"hovered"in t&&l(4,u=t.hovered)},[o,a,r,_,u,d]}class P extends z{constructor(e){super(),E(this,e,L,K,G,{type:0,i18n:1,message:2,mode:3,hovered:4})}}export{P as U,x as a};
-//# sourceMappingURL=UploadText-232a3213.js.map
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_custom_dtypes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_custom_dtypes.py
deleted file mode 100644
index da6a4bd5064add3de2eb61f527efd757c6722443..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_custom_dtypes.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import pytest
-
-import numpy as np
-from numpy.testing import assert_array_equal
-from numpy.core._multiarray_umath import (
- _discover_array_parameters as discover_array_params, _get_sfloat_dtype)
-
-
-SF = _get_sfloat_dtype()
-
-
-class TestSFloat:
- def _get_array(self, scaling, aligned=True):
- if not aligned:
- a = np.empty(3*8 + 1, dtype=np.uint8)[1:]
- a = a.view(np.float64)
- a[:] = [1., 2., 3.]
- else:
- a = np.array([1., 2., 3.])
-
- a *= 1./scaling # the casting code also uses the reciprocal.
- return a.view(SF(scaling))
-
- def test_sfloat_rescaled(self):
- sf = SF(1.)
- sf2 = sf.scaled_by(2.)
- assert sf2.get_scaling() == 2.
- sf6 = sf2.scaled_by(3.)
- assert sf6.get_scaling() == 6.
-
- def test_class_discovery(self):
- # This does not test much, since we always discover the scaling as 1.
- # But most of NumPy (when writing) does not understand DType classes
- dt, _ = discover_array_params([1., 2., 3.], dtype=SF)
- assert dt == SF(1.)
-
- @pytest.mark.parametrize("scaling", [1., -1., 2.])
- def test_scaled_float_from_floats(self, scaling):
- a = np.array([1., 2., 3.], dtype=SF(scaling))
-
- assert a.dtype.get_scaling() == scaling
- assert_array_equal(scaling * a.view(np.float64), [1., 2., 3.])
-
- def test_repr(self):
- # Check the repr, mainly to cover the code paths:
- assert repr(SF(scaling=1.)) == "_ScaledFloatTestDType(scaling=1.0)"
-
- def test_dtype_name(self):
- assert SF(1.).name == "_ScaledFloatTestDType64"
-
- @pytest.mark.parametrize("scaling", [1., -1., 2.])
- def test_sfloat_from_float(self, scaling):
- a = np.array([1., 2., 3.]).astype(dtype=SF(scaling))
-
- assert a.dtype.get_scaling() == scaling
- assert_array_equal(scaling * a.view(np.float64), [1., 2., 3.])
-
- @pytest.mark.parametrize("aligned", [True, False])
- @pytest.mark.parametrize("scaling", [1., -1., 2.])
- def test_sfloat_getitem(self, aligned, scaling):
- a = self._get_array(1., aligned)
- assert a.tolist() == [1., 2., 3.]
-
- @pytest.mark.parametrize("aligned", [True, False])
- def test_sfloat_casts(self, aligned):
- a = self._get_array(1., aligned)
-
- assert np.can_cast(a, SF(-1.), casting="equiv")
- assert not np.can_cast(a, SF(-1.), casting="no")
- na = a.astype(SF(-1.))
- assert_array_equal(-1 * na.view(np.float64), a.view(np.float64))
-
- assert np.can_cast(a, SF(2.), casting="same_kind")
- assert not np.can_cast(a, SF(2.), casting="safe")
- a2 = a.astype(SF(2.))
- assert_array_equal(2 * a2.view(np.float64), a.view(np.float64))
-
- @pytest.mark.parametrize("aligned", [True, False])
- def test_sfloat_cast_internal_errors(self, aligned):
- a = self._get_array(2e300, aligned)
-
- with pytest.raises(TypeError,
- match="error raised inside the core-loop: non-finite factor!"):
- a.astype(SF(2e-300))
-
- def test_sfloat_promotion(self):
- assert np.result_type(SF(2.), SF(3.)) == SF(3.)
- assert np.result_type(SF(3.), SF(2.)) == SF(3.)
- # Float64 -> SF(1.) and then promotes normally, so both of this work:
- assert np.result_type(SF(3.), np.float64) == SF(3.)
- assert np.result_type(np.float64, SF(0.5)) == SF(1.)
-
- # Test an undefined promotion:
- with pytest.raises(TypeError):
- np.result_type(SF(1.), np.int64)
-
- def test_basic_multiply(self):
- a = self._get_array(2.)
- b = self._get_array(4.)
-
- res = a * b
- # multiplies dtype scaling and content separately:
- assert res.dtype.get_scaling() == 8.
- expected_view = a.view(np.float64) * b.view(np.float64)
- assert_array_equal(res.view(np.float64), expected_view)
-
- def test_possible_and_impossible_reduce(self):
- # For reductions to work, the first and last operand must have the
- # same dtype. For this parametric DType that is not necessarily true.
- a = self._get_array(2.)
-        # Addition reduction works (as of writing, `initial` must be passed
-        # because setting a scaled-float from the default `0` fails).
- res = np.add.reduce(a, initial=0.)
- assert res == a.astype(np.float64).sum()
-
- # But each multiplication changes the factor, so a reduction is not
- # possible (the relaxed version of the old refusal to handle any
- # flexible dtype).
- with pytest.raises(TypeError,
- match="the resolved dtypes are not compatible"):
- np.multiply.reduce(a)
-
- def test_basic_ufunc_at(self):
- float_a = np.array([1., 2., 3.])
- b = self._get_array(2.)
-
- float_b = b.view(np.float64).copy()
- np.multiply.at(float_b, [1, 1, 1], float_a)
- np.multiply.at(b, [1, 1, 1], float_a)
-
- assert_array_equal(b.view(np.float64), float_b)
-
- def test_basic_multiply_promotion(self):
- float_a = np.array([1., 2., 3.])
- b = self._get_array(2.)
-
- res1 = float_a * b
- res2 = b * float_a
-
- # one factor is one, so we get the factor of b:
- assert res1.dtype == res2.dtype == b.dtype
- expected_view = float_a * b.view(np.float64)
- assert_array_equal(res1.view(np.float64), expected_view)
- assert_array_equal(res2.view(np.float64), expected_view)
-
- # Check that promotion works when `out` is used:
- np.multiply(b, float_a, out=res2)
- with pytest.raises(TypeError):
- # The promoter accepts this (maybe it should not), but the SFloat
- # result cannot be cast to integer:
- np.multiply(b, float_a, out=np.arange(3))
-
- def test_basic_addition(self):
- a = self._get_array(2.)
- b = self._get_array(4.)
-
- res = a + b
- # addition uses the type promotion rules for the result:
- assert res.dtype == np.result_type(a.dtype, b.dtype)
- expected_view = (a.astype(res.dtype).view(np.float64) +
- b.astype(res.dtype).view(np.float64))
- assert_array_equal(res.view(np.float64), expected_view)
-
- def test_addition_cast_safety(self):
- """The addition method is special for the scaled float, because it
- includes the "cast" between different factors, thus cast-safety
- is influenced by the implementation.
- """
- a = self._get_array(2.)
- b = self._get_array(-2.)
- c = self._get_array(3.)
-
- # sign change is "equiv":
- np.add(a, b, casting="equiv")
- with pytest.raises(TypeError):
- np.add(a, b, casting="no")
-
- # Different factor is "same_kind" (default) so check that "safe" fails
- with pytest.raises(TypeError):
- np.add(a, c, casting="safe")
-
- # Check that casting the output fails also (done by the ufunc here)
- with pytest.raises(TypeError):
- np.add(a, a, out=c, casting="safe")
-
- @pytest.mark.parametrize("ufunc",
- [np.logical_and, np.logical_or, np.logical_xor])
- def test_logical_ufuncs_casts_to_bool(self, ufunc):
- a = self._get_array(2.)
- a[0] = 0. # make sure first element is considered False.
-
- float_equiv = a.astype(float)
- expected = ufunc(float_equiv, float_equiv)
- res = ufunc(a, a)
- assert_array_equal(res, expected)
-
- # also check that the same works for reductions:
- expected = ufunc.reduce(float_equiv)
- res = ufunc.reduce(a)
- assert_array_equal(res, expected)
-
- # The output casting does not match the bool, bool -> bool loop:
- with pytest.raises(TypeError):
- ufunc(a, a, out=np.empty(a.shape, dtype=int), casting="equiv")
-
- def test_wrapped_and_wrapped_reductions(self):
- a = self._get_array(2.)
- float_equiv = a.astype(float)
-
- expected = np.hypot(float_equiv, float_equiv)
- res = np.hypot(a, a)
- assert res.dtype == a.dtype
- res_float = res.view(np.float64) * 2
- assert_array_equal(res_float, expected)
-
- # Also check reduction (keepdims, due to incorrect getitem)
- res = np.hypot.reduce(a, keepdims=True)
- assert res.dtype == a.dtype
- expected = np.hypot.reduce(float_equiv, keepdims=True)
- assert res.view(np.float64) * 2 == expected
-
- def test_astype_class(self):
- # Very simple test that we accept `.astype()` also on the class.
- # ScaledFloat always returns the default descriptor, but it does
- # check the relevant code paths.
- arr = np.array([1., 2., 3.], dtype=object)
-
- res = arr.astype(SF) # passing the class class
- expected = arr.astype(SF(1.)) # above will have discovered 1. scaling
- assert_array_equal(res.view(np.float64), expected.view(np.float64))
-
- def test_creation_class(self):
- arr1 = np.array([1., 2., 3.], dtype=SF)
- assert arr1.dtype == SF(1.)
- arr2 = np.array([1., 2., 3.], dtype=SF(1.))
- assert_array_equal(arr1.view(np.float64), arr2.view(np.float64))
-
-
-def test_type_pickle():
- # can't actually unpickle, but we can pickle (if in namespace)
- import pickle
-
- np._ScaledFloatTestDType = SF
-
- s = pickle.dumps(SF)
- res = pickle.loads(s)
- assert res is SF
-
- del np._ScaledFloatTestDType
-
-
-def test_is_numeric():
- assert SF._is_numeric
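
The deleted tests exercise promotion and cast-safety on NumPy's experimental scaled-float test dtype, which is not importable from public NumPy. The same public APIs (`np.result_type`, `np.can_cast`) can be sanity-checked on the built-in dtypes:

```python
import numpy as np

# Promotion picks a common type that can hold both operands.
print(np.result_type(np.float32, np.int64))                       # float64

# Cast-safety levels: "safe" forbids precision loss, "same_kind" allows it.
print(np.can_cast(np.float64, np.float32, casting="safe"))        # False
print(np.can_cast(np.float64, np.float32, casting="same_kind"))   # True
```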
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_operators.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_operators.py
deleted file mode 100644
index a1e50917fed988ad0ffa8c428ece9071fca7ec3d..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_operators.py
+++ /dev/null
@@ -1,413 +0,0 @@
-import numpy as np
-import pytest
-
-import pandas as pd
-from pandas import (
- Categorical,
- DataFrame,
- Series,
- Timestamp,
- date_range,
-)
-import pandas._testing as tm
-
-
-class TestCategoricalOpsWithFactor:
- def test_categories_none_comparisons(self):
- factor = Categorical(["a", "b", "b", "a", "a", "c", "c", "c"], ordered=True)
- tm.assert_categorical_equal(factor, factor)
-
- def test_comparisons(self, factor):
- result = factor[factor == "a"]
- expected = factor[np.asarray(factor) == "a"]
- tm.assert_categorical_equal(result, expected)
-
- result = factor[factor != "a"]
- expected = factor[np.asarray(factor) != "a"]
- tm.assert_categorical_equal(result, expected)
-
- result = factor[factor < "c"]
- expected = factor[np.asarray(factor) < "c"]
- tm.assert_categorical_equal(result, expected)
-
- result = factor[factor > "a"]
- expected = factor[np.asarray(factor) > "a"]
- tm.assert_categorical_equal(result, expected)
-
- result = factor[factor >= "b"]
- expected = factor[np.asarray(factor) >= "b"]
- tm.assert_categorical_equal(result, expected)
-
- result = factor[factor <= "b"]
- expected = factor[np.asarray(factor) <= "b"]
- tm.assert_categorical_equal(result, expected)
-
- n = len(factor)
-
- other = factor[np.random.default_rng(2).permutation(n)]
- result = factor == other
- expected = np.asarray(factor) == np.asarray(other)
- tm.assert_numpy_array_equal(result, expected)
-
- result = factor == "d"
- expected = np.zeros(len(factor), dtype=bool)
- tm.assert_numpy_array_equal(result, expected)
-
- # comparisons with categoricals
- cat_rev = Categorical(["a", "b", "c"], categories=["c", "b", "a"], ordered=True)
- cat_rev_base = Categorical(
- ["b", "b", "b"], categories=["c", "b", "a"], ordered=True
- )
- cat = Categorical(["a", "b", "c"], ordered=True)
- cat_base = Categorical(["b", "b", "b"], categories=cat.categories, ordered=True)
-
- # comparisons need to take categories ordering into account
- res_rev = cat_rev > cat_rev_base
- exp_rev = np.array([True, False, False])
- tm.assert_numpy_array_equal(res_rev, exp_rev)
-
- res_rev = cat_rev < cat_rev_base
- exp_rev = np.array([False, False, True])
- tm.assert_numpy_array_equal(res_rev, exp_rev)
-
- res = cat > cat_base
- exp = np.array([False, False, True])
- tm.assert_numpy_array_equal(res, exp)
-
- # Only categories with same categories can be compared
- msg = "Categoricals can only be compared if 'categories' are the same"
- with pytest.raises(TypeError, match=msg):
- cat > cat_rev
-
- cat_rev_base2 = Categorical(["b", "b", "b"], categories=["c", "b", "a", "d"])
-
- with pytest.raises(TypeError, match=msg):
- cat_rev > cat_rev_base2
-
- # Only categories with same ordering information can be compared
- cat_unordered = cat.set_ordered(False)
- assert not (cat > cat).any()
-
- with pytest.raises(TypeError, match=msg):
- cat > cat_unordered
-
- # comparison (in both directions) with Series will raise
- s = Series(["b", "b", "b"])
- msg = (
- "Cannot compare a Categorical for op __gt__ with type "
-            r"<class 'numpy\.ndarray'>"
- )
- with pytest.raises(TypeError, match=msg):
- cat > s
- with pytest.raises(TypeError, match=msg):
- cat_rev > s
- with pytest.raises(TypeError, match=msg):
- s < cat
- with pytest.raises(TypeError, match=msg):
- s < cat_rev
-
- # comparison with numpy.array will raise in both direction, but only on
- # newer numpy versions
- a = np.array(["b", "b", "b"])
- with pytest.raises(TypeError, match=msg):
- cat > a
- with pytest.raises(TypeError, match=msg):
- cat_rev > a
-
- # Make sure that unequal comparison take the categories order in
- # account
- cat_rev = Categorical(list("abc"), categories=list("cba"), ordered=True)
- exp = np.array([True, False, False])
- res = cat_rev > "b"
- tm.assert_numpy_array_equal(res, exp)
-
- # check that zero-dim array gets unboxed
- res = cat_rev > np.array("b")
- tm.assert_numpy_array_equal(res, exp)
-
-
-class TestCategoricalOps:
- @pytest.mark.parametrize(
- "categories",
- [["a", "b"], [0, 1], [Timestamp("2019"), Timestamp("2020")]],
- )
- def test_not_equal_with_na(self, categories):
- # https://github.com/pandas-dev/pandas/issues/32276
- c1 = Categorical.from_codes([-1, 0], categories=categories)
- c2 = Categorical.from_codes([0, 1], categories=categories)
-
- result = c1 != c2
-
- assert result.all()
-
- def test_compare_frame(self):
- # GH#24282 check that Categorical.__cmp__(DataFrame) defers to frame
- data = ["a", "b", 2, "a"]
- cat = Categorical(data)
-
- df = DataFrame(cat)
-
- result = cat == df.T
- expected = DataFrame([[True, True, True, True]])
- tm.assert_frame_equal(result, expected)
-
- result = cat[::-1] != df.T
- expected = DataFrame([[False, True, True, False]])
- tm.assert_frame_equal(result, expected)
-
- def test_compare_frame_raises(self, comparison_op):
- # alignment raises unless we transpose
- op = comparison_op
- cat = Categorical(["a", "b", 2, "a"])
- df = DataFrame(cat)
- msg = "Unable to coerce to Series, length must be 1: given 4"
- with pytest.raises(ValueError, match=msg):
- op(cat, df)
-
- def test_datetime_categorical_comparison(self):
- dt_cat = Categorical(date_range("2014-01-01", periods=3), ordered=True)
- tm.assert_numpy_array_equal(dt_cat > dt_cat[0], np.array([False, True, True]))
- tm.assert_numpy_array_equal(dt_cat[0] < dt_cat, np.array([False, True, True]))
-
- def test_reflected_comparison_with_scalars(self):
- # GH8658
- cat = Categorical([1, 2, 3], ordered=True)
- tm.assert_numpy_array_equal(cat > cat[0], np.array([False, True, True]))
- tm.assert_numpy_array_equal(cat[0] < cat, np.array([False, True, True]))
-
- def test_comparison_with_unknown_scalars(self):
- # https://github.com/pandas-dev/pandas/issues/9836#issuecomment-92123057
- # and following comparisons with scalars not in categories should raise
- # for unequal comps, but not for equal/not equal
- cat = Categorical([1, 2, 3], ordered=True)
-
- msg = "Invalid comparison between dtype=category and int"
- with pytest.raises(TypeError, match=msg):
- cat < 4
- with pytest.raises(TypeError, match=msg):
- cat > 4
- with pytest.raises(TypeError, match=msg):
- 4 < cat
- with pytest.raises(TypeError, match=msg):
- 4 > cat
-
- tm.assert_numpy_array_equal(cat == 4, np.array([False, False, False]))
- tm.assert_numpy_array_equal(cat != 4, np.array([True, True, True]))
-
- def test_comparison_with_tuple(self):
- cat = Categorical(np.array(["foo", (0, 1), 3, (0, 1)], dtype=object))
-
- result = cat == "foo"
- expected = np.array([True, False, False, False], dtype=bool)
- tm.assert_numpy_array_equal(result, expected)
-
- result = cat == (0, 1)
- expected = np.array([False, True, False, True], dtype=bool)
- tm.assert_numpy_array_equal(result, expected)
-
- result = cat != (0, 1)
- tm.assert_numpy_array_equal(result, ~expected)
-
- @pytest.mark.filterwarnings("ignore::RuntimeWarning")
- def test_comparison_of_ordered_categorical_with_nan_to_scalar(
- self, compare_operators_no_eq_ne
- ):
- # https://github.com/pandas-dev/pandas/issues/26504
- # BUG: fix ordered categorical comparison with missing values (#26504 )
- # and following comparisons with scalars in categories with missing
- # values should be evaluated as False
-
- cat = Categorical([1, 2, 3, None], categories=[1, 2, 3], ordered=True)
- scalar = 2
- expected = getattr(np.array(cat), compare_operators_no_eq_ne)(scalar)
- actual = getattr(cat, compare_operators_no_eq_ne)(scalar)
- tm.assert_numpy_array_equal(actual, expected)
-
- @pytest.mark.filterwarnings("ignore::RuntimeWarning")
- def test_comparison_of_ordered_categorical_with_nan_to_listlike(
- self, compare_operators_no_eq_ne
- ):
- # https://github.com/pandas-dev/pandas/issues/26504
- # and following comparisons of missing values in ordered Categorical
- # with listlike should be evaluated as False
-
- cat = Categorical([1, 2, 3, None], categories=[1, 2, 3], ordered=True)
- other = Categorical([2, 2, 2, 2], categories=[1, 2, 3], ordered=True)
- expected = getattr(np.array(cat), compare_operators_no_eq_ne)(2)
- actual = getattr(cat, compare_operators_no_eq_ne)(other)
- tm.assert_numpy_array_equal(actual, expected)
-
- @pytest.mark.parametrize(
- "data,reverse,base",
- [(list("abc"), list("cba"), list("bbb")), ([1, 2, 3], [3, 2, 1], [2, 2, 2])],
- )
- def test_comparisons(self, data, reverse, base):
- cat_rev = Series(Categorical(data, categories=reverse, ordered=True))
- cat_rev_base = Series(Categorical(base, categories=reverse, ordered=True))
- cat = Series(Categorical(data, ordered=True))
- cat_base = Series(
- Categorical(base, categories=cat.cat.categories, ordered=True)
- )
- s = Series(base)
- a = np.array(base)
-
- # comparisons need to take categories ordering into account
- res_rev = cat_rev > cat_rev_base
- exp_rev = Series([True, False, False])
- tm.assert_series_equal(res_rev, exp_rev)
-
- res_rev = cat_rev < cat_rev_base
- exp_rev = Series([False, False, True])
- tm.assert_series_equal(res_rev, exp_rev)
-
- res = cat > cat_base
- exp = Series([False, False, True])
- tm.assert_series_equal(res, exp)
-
- scalar = base[1]
- res = cat > scalar
- exp = Series([False, False, True])
- exp2 = cat.values > scalar
- tm.assert_series_equal(res, exp)
- tm.assert_numpy_array_equal(res.values, exp2)
- res_rev = cat_rev > scalar
- exp_rev = Series([True, False, False])
- exp_rev2 = cat_rev.values > scalar
- tm.assert_series_equal(res_rev, exp_rev)
- tm.assert_numpy_array_equal(res_rev.values, exp_rev2)
-
- # Only categories with same categories can be compared
- msg = "Categoricals can only be compared if 'categories' are the same"
- with pytest.raises(TypeError, match=msg):
- cat > cat_rev
-
- # categorical cannot be compared to Series or numpy array, and also
- # not the other way around
- msg = (
- "Cannot compare a Categorical for op __gt__ with type "
-            r"<class 'numpy\.ndarray'>"
- )
- with pytest.raises(TypeError, match=msg):
- cat > s
- with pytest.raises(TypeError, match=msg):
- cat_rev > s
- with pytest.raises(TypeError, match=msg):
- cat > a
- with pytest.raises(TypeError, match=msg):
- cat_rev > a
-
- with pytest.raises(TypeError, match=msg):
- s < cat
- with pytest.raises(TypeError, match=msg):
- s < cat_rev
-
- with pytest.raises(TypeError, match=msg):
- a < cat
- with pytest.raises(TypeError, match=msg):
- a < cat_rev
-
- @pytest.mark.parametrize(
- "ctor",
- [
- lambda *args, **kwargs: Categorical(*args, **kwargs),
- lambda *args, **kwargs: Series(Categorical(*args, **kwargs)),
- ],
- )
- def test_unordered_different_order_equal(self, ctor):
- # https://github.com/pandas-dev/pandas/issues/16014
- c1 = ctor(["a", "b"], categories=["a", "b"], ordered=False)
- c2 = ctor(["a", "b"], categories=["b", "a"], ordered=False)
- assert (c1 == c2).all()
-
- c1 = ctor(["a", "b"], categories=["a", "b"], ordered=False)
- c2 = ctor(["b", "a"], categories=["b", "a"], ordered=False)
- assert (c1 != c2).all()
-
- c1 = ctor(["a", "a"], categories=["a", "b"], ordered=False)
- c2 = ctor(["b", "b"], categories=["b", "a"], ordered=False)
- assert (c1 != c2).all()
-
- c1 = ctor(["a", "a"], categories=["a", "b"], ordered=False)
- c2 = ctor(["a", "b"], categories=["b", "a"], ordered=False)
- result = c1 == c2
- tm.assert_numpy_array_equal(np.array(result), np.array([True, False]))
-
- def test_unordered_different_categories_raises(self):
- c1 = Categorical(["a", "b"], categories=["a", "b"], ordered=False)
- c2 = Categorical(["a", "c"], categories=["c", "a"], ordered=False)
-
- with pytest.raises(TypeError, match=("Categoricals can only be compared")):
- c1 == c2
-
- def test_compare_different_lengths(self):
- c1 = Categorical([], categories=["a", "b"])
- c2 = Categorical([], categories=["a"])
-
- msg = "Categoricals can only be compared if 'categories' are the same."
- with pytest.raises(TypeError, match=msg):
- c1 == c2
-
- def test_compare_unordered_different_order(self):
- # https://github.com/pandas-dev/pandas/issues/16603#issuecomment-
- # 349290078
- a = Categorical(["a"], categories=["a", "b"])
- b = Categorical(["b"], categories=["b", "a"])
- assert not a.equals(b)
-
- def test_numeric_like_ops(self):
- df = DataFrame({"value": np.random.default_rng(2).integers(0, 10000, 100)})
- labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
- cat_labels = Categorical(labels, labels)
-
- df = df.sort_values(by=["value"], ascending=True)
- df["value_group"] = pd.cut(
- df.value, range(0, 10500, 500), right=False, labels=cat_labels
- )
-
- # numeric ops should not succeed
- for op, str_rep in [
- ("__add__", r"\+"),
- ("__sub__", "-"),
- ("__mul__", r"\*"),
- ("__truediv__", "/"),
- ]:
- msg = f"Series cannot perform the operation {str_rep}|unsupported operand"
- with pytest.raises(TypeError, match=msg):
- getattr(df, op)(df)
-
- # reduction ops should not succeed (unless specifically defined, e.g.
- # min/max)
- s = df["value_group"]
- for op in ["kurt", "skew", "var", "std", "mean", "sum", "median"]:
- msg = f"does not support reduction '{op}'"
- with pytest.raises(TypeError, match=msg):
- getattr(s, op)(numeric_only=False)
-
- def test_numeric_like_ops_series(self):
- # numpy ops
- s = Series(Categorical([1, 2, 3, 4]))
- with pytest.raises(TypeError, match="does not support reduction 'sum'"):
- np.sum(s)
-
- @pytest.mark.parametrize(
- "op, str_rep",
- [
- ("__add__", r"\+"),
- ("__sub__", "-"),
- ("__mul__", r"\*"),
- ("__truediv__", "/"),
- ],
- )
- def test_numeric_like_ops_series_arith(self, op, str_rep):
- # numeric ops on a Series
- s = Series(Categorical([1, 2, 3, 4]))
- msg = f"Series cannot perform the operation {str_rep}|unsupported operand"
- with pytest.raises(TypeError, match=msg):
- getattr(s, op)(2)
-
- def test_numeric_like_ops_series_invalid(self):
- # invalid ufunc
- s = Series(Categorical([1, 2, 3, 4]))
- msg = "Object with dtype category cannot perform the numpy op log"
- with pytest.raises(TypeError, match=msg):
- np.log(s)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_reductions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_reductions.py
deleted file mode 100644
index 72d45f5b9a78cad11a4358ea4f32bee20904ccf8..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_reductions.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import numpy as np
-import pytest
-
-import pandas as pd
-from pandas import Timedelta
-import pandas._testing as tm
-from pandas.core import nanops
-from pandas.core.arrays import TimedeltaArray
-
-
-class TestReductions:
- @pytest.mark.parametrize("name", ["std", "min", "max", "median", "mean"])
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reductions_empty(self, name, skipna):
- tdi = pd.TimedeltaIndex([])
- arr = tdi.array
-
- result = getattr(tdi, name)(skipna=skipna)
- assert result is pd.NaT
-
- result = getattr(arr, name)(skipna=skipna)
- assert result is pd.NaT
-
- @pytest.mark.parametrize("skipna", [True, False])
- def test_sum_empty(self, skipna):
- tdi = pd.TimedeltaIndex([])
- arr = tdi.array
-
- result = tdi.sum(skipna=skipna)
- assert isinstance(result, Timedelta)
- assert result == Timedelta(0)
-
- result = arr.sum(skipna=skipna)
- assert isinstance(result, Timedelta)
- assert result == Timedelta(0)
-
- def test_min_max(self):
- arr = TimedeltaArray._from_sequence(["3H", "3H", "NaT", "2H", "5H", "4H"])
-
- result = arr.min()
- expected = Timedelta("2H")
- assert result == expected
-
- result = arr.max()
- expected = Timedelta("5H")
- assert result == expected
-
- result = arr.min(skipna=False)
- assert result is pd.NaT
-
- result = arr.max(skipna=False)
- assert result is pd.NaT
-
- def test_sum(self):
- tdi = pd.TimedeltaIndex(["3H", "3H", "NaT", "2H", "5H", "4H"])
- arr = tdi.array
-
- result = arr.sum(skipna=True)
- expected = Timedelta(hours=17)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = tdi.sum(skipna=True)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = arr.sum(skipna=False)
- assert result is pd.NaT
-
- result = tdi.sum(skipna=False)
- assert result is pd.NaT
-
- result = arr.sum(min_count=9)
- assert result is pd.NaT
-
- result = tdi.sum(min_count=9)
- assert result is pd.NaT
-
- result = arr.sum(min_count=1)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = tdi.sum(min_count=1)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- def test_npsum(self):
- # GH#25282, GH#25335 np.sum should return a Timedelta, not timedelta64
- tdi = pd.TimedeltaIndex(["3H", "3H", "2H", "5H", "4H"])
- arr = tdi.array
-
- result = np.sum(tdi)
- expected = Timedelta(hours=17)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = np.sum(arr)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- def test_sum_2d_skipna_false(self):
- arr = np.arange(8).astype(np.int64).view("m8[s]").astype("m8[ns]").reshape(4, 2)
- arr[-1, -1] = "Nat"
-
- tda = TimedeltaArray(arr)
-
- result = tda.sum(skipna=False)
- assert result is pd.NaT
-
- result = tda.sum(axis=0, skipna=False)
- expected = pd.TimedeltaIndex([Timedelta(seconds=12), pd.NaT])._values
- tm.assert_timedelta_array_equal(result, expected)
-
- result = tda.sum(axis=1, skipna=False)
- expected = pd.TimedeltaIndex(
- [
- Timedelta(seconds=1),
- Timedelta(seconds=5),
- Timedelta(seconds=9),
- pd.NaT,
- ]
- )._values
- tm.assert_timedelta_array_equal(result, expected)
-
- # Adding a Timestamp makes this a test for DatetimeArray.std
- @pytest.mark.parametrize(
- "add",
- [
- Timedelta(0),
- pd.Timestamp("2021-01-01"),
- pd.Timestamp("2021-01-01", tz="UTC"),
- pd.Timestamp("2021-01-01", tz="Asia/Tokyo"),
- ],
- )
- def test_std(self, add):
- tdi = pd.TimedeltaIndex(["0H", "4H", "NaT", "4H", "0H", "2H"]) + add
- arr = tdi.array
-
- result = arr.std(skipna=True)
- expected = Timedelta(hours=2)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = tdi.std(skipna=True)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- if getattr(arr, "tz", None) is None:
- result = nanops.nanstd(np.asarray(arr), skipna=True)
- assert isinstance(result, np.timedelta64)
- assert result == expected
-
- result = arr.std(skipna=False)
- assert result is pd.NaT
-
- result = tdi.std(skipna=False)
- assert result is pd.NaT
-
- if getattr(arr, "tz", None) is None:
- result = nanops.nanstd(np.asarray(arr), skipna=False)
- assert isinstance(result, np.timedelta64)
- assert np.isnat(result)
-
- def test_median(self):
- tdi = pd.TimedeltaIndex(["0H", "3H", "NaT", "5H06m", "0H", "2H"])
- arr = tdi.array
-
- result = arr.median(skipna=True)
- expected = Timedelta(hours=2)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = tdi.median(skipna=True)
- assert isinstance(result, Timedelta)
- assert result == expected
-
- result = arr.median(skipna=False)
- assert result is pd.NaT
-
- result = tdi.median(skipna=False)
- assert result is pd.NaT
-
- def test_mean(self):
- tdi = pd.TimedeltaIndex(["0H", "3H", "NaT", "5H06m", "0H", "2H"])
- arr = tdi._data
-
- # manually verified result
- expected = Timedelta(arr.dropna()._ndarray.mean())
-
- result = arr.mean()
- assert result == expected
- result = arr.mean(skipna=False)
- assert result is pd.NaT
-
- result = arr.dropna().mean(skipna=False)
- assert result == expected
-
- result = arr.mean(axis=0)
- assert result == expected
-
- def test_mean_2d(self):
- tdi = pd.timedelta_range("14 days", periods=6)
- tda = tdi._data.reshape(3, 2)
-
- result = tda.mean(axis=0)
- expected = tda[1]
- tm.assert_timedelta_array_equal(result, expected)
-
- result = tda.mean(axis=1)
- expected = tda[:, 0] + Timedelta(hours=12)
- tm.assert_timedelta_array_equal(result, expected)
-
- result = tda.mean(axis=None)
- expected = tdi.mean()
- assert result == expected
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/reduce.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/reduce.py
deleted file mode 100644
index 43b2df4290eed2262d71989614ca9a2efb3e073b..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/reduce.py
+++ /dev/null
@@ -1,149 +0,0 @@
-from typing import final
-
-import pytest
-
-import pandas as pd
-import pandas._testing as tm
-from pandas.api.types import is_numeric_dtype
-
-
-class BaseReduceTests:
- """
- Reduction specific tests. Generally these only
- make sense for numeric/boolean operations.
- """
-
- def _supports_reduction(self, obj, op_name: str) -> bool:
- # Specify if we expect this reduction to succeed.
- return False
-
- def check_reduce(self, s, op_name, skipna):
- # We perform the same operation on the np.float64 data and check
- # that the results match. Override if you need to cast to something
- # other than float64.
- res_op = getattr(s, op_name)
-
- try:
- alt = s.astype("float64")
- except (TypeError, ValueError):
- # e.g. Interval can't cast, so let's cast to object and do
- # the reduction pointwise
- alt = s.astype(object)
-
- exp_op = getattr(alt, op_name)
- if op_name == "count":
- result = res_op()
- expected = exp_op()
- else:
- result = res_op(skipna=skipna)
- expected = exp_op(skipna=skipna)
- tm.assert_almost_equal(result, expected)
-
- def _get_expected_reduction_dtype(self, arr, op_name: str):
- # Find the expected dtype when the given reduction is done on a DataFrame
- # column with this array. The default assumes float64-like behavior,
- # i.e. retains the dtype.
- return arr.dtype
-
- # We anticipate that authors should not need to override check_reduce_frame,
- # but should be able to do any necessary overriding in
- # _get_expected_reduction_dtype. If you have a use case where this
- # does not hold, please let us know at github.com/pandas-dev/pandas/issues.
- @final
- def check_reduce_frame(self, ser: pd.Series, op_name: str, skipna: bool):
- # Check that the 2D reduction done in a DataFrame reduction "looks like"
- # a wrapped version of the 1D reduction done by Series.
- arr = ser.array
- df = pd.DataFrame({"a": arr})
-
- kwargs = {"ddof": 1} if op_name in ["var", "std"] else {}
-
- cmp_dtype = self._get_expected_reduction_dtype(arr, op_name)
-
- # The DataFrame method just calls arr._reduce with keepdims=True,
- # so this first check is perfunctory.
- result1 = arr._reduce(op_name, skipna=skipna, keepdims=True, **kwargs)
- result2 = getattr(df, op_name)(skipna=skipna, **kwargs).array
- tm.assert_extension_array_equal(result1, result2)
-
- # Check that the 2D reduction looks like a wrapped version of the
- # 1D reduction
- if not skipna and ser.isna().any():
- expected = pd.array([pd.NA], dtype=cmp_dtype)
- else:
- exp_value = getattr(ser.dropna(), op_name)()
- expected = pd.array([exp_value], dtype=cmp_dtype)
-
- tm.assert_extension_array_equal(result1, expected)
-
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reduce_series_boolean(self, data, all_boolean_reductions, skipna):
- op_name = all_boolean_reductions
- s = pd.Series(data)
-
- if not self._supports_reduction(s, op_name):
- msg = (
- "[Cc]annot perform|Categorical is not ordered for operation|"
- "does not support reduction|"
- )
-
- with pytest.raises(TypeError, match=msg):
- getattr(s, op_name)(skipna=skipna)
-
- else:
- self.check_reduce(s, op_name, skipna)
-
- @pytest.mark.filterwarnings("ignore::RuntimeWarning")
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reduce_series_numeric(self, data, all_numeric_reductions, skipna):
- op_name = all_numeric_reductions
- s = pd.Series(data)
-
- if not self._supports_reduction(s, op_name):
- msg = (
- "[Cc]annot perform|Categorical is not ordered for operation|"
- "does not support reduction|"
- )
-
- with pytest.raises(TypeError, match=msg):
- getattr(s, op_name)(skipna=skipna)
-
- else:
- # min/max with empty produce numpy warnings
- self.check_reduce(s, op_name, skipna)
-
- @pytest.mark.parametrize("skipna", [True, False])
- def test_reduce_frame(self, data, all_numeric_reductions, skipna):
- op_name = all_numeric_reductions
- s = pd.Series(data)
- if not is_numeric_dtype(s.dtype):
- pytest.skip("not numeric dtype")
-
- if op_name in ["count", "kurt", "sem"]:
- pytest.skip(f"{op_name} not an array method")
-
- if not self._supports_reduction(s, op_name):
- pytest.skip(f"Reduction {op_name} not supported for this dtype")
-
- self.check_reduce_frame(s, op_name, skipna)
-
-
-# TODO: deprecate BaseNoReduceTests, BaseNumericReduceTests, BaseBooleanReduceTests
-class BaseNoReduceTests(BaseReduceTests):
- """we don't define any reductions"""
-
-
-class BaseNumericReduceTests(BaseReduceTests):
- # For backward compatibility only, this only runs the numeric reductions
- def _supports_reduction(self, obj, op_name: str) -> bool:
- if op_name in ["any", "all"]:
- pytest.skip("These are tested in BaseBooleanReduceTests")
- return True
-
-
-class BaseBooleanReduceTests(BaseReduceTests):
- # For backward compatibility only, this only runs the numeric reductions
- def _supports_reduction(self, obj, op_name: str) -> bool:
- if op_name not in ["any", "all"]:
- pytest.skip("These are tested in BaseNumericReduceTests")
- return True
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/resolution/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/resolution/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/subversion.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/subversion.py
deleted file mode 100644
index 89c8754ce090a41f94ac9691098db6a9ec119930..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/subversion.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import logging
-import os
-import re
-from typing import List, Optional, Tuple
-
-from pip._internal.utils.misc import (
- HiddenText,
- display_path,
- is_console_interactive,
- is_installable_dir,
- split_auth_from_netloc,
-)
-from pip._internal.utils.subprocess import CommandArgs, make_command
-from pip._internal.vcs.versioncontrol import (
- AuthInfo,
- RemoteNotFoundError,
- RevOptions,
- VersionControl,
- vcs,
-)
-
-logger = logging.getLogger(__name__)
-
-_svn_xml_url_re = re.compile('url="([^"]+)"')
-_svn_rev_re = re.compile(r'committed-rev="(\d+)"')
-_svn_info_xml_rev_re = re.compile(r'\s*revision="(\d+)"')
-_svn_info_xml_url_re = re.compile(r"<url>(.*)</url>")
-
-
-class Subversion(VersionControl):
- name = "svn"
- dirname = ".svn"
- repo_name = "checkout"
- schemes = ("svn+ssh", "svn+http", "svn+https", "svn+svn", "svn+file")
-
- @classmethod
- def should_add_vcs_url_prefix(cls, remote_url: str) -> bool:
- return True
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- return ["-r", rev]
-
- @classmethod
- def get_revision(cls, location: str) -> str:
- """
- Return the maximum revision for all files under a given location
- """
- # Note: taken from setuptools.command.egg_info
- revision = 0
-
- for base, dirs, _ in os.walk(location):
- if cls.dirname not in dirs:
- dirs[:] = []
- continue # no sense walking uncontrolled subdirs
- dirs.remove(cls.dirname)
- entries_fn = os.path.join(base, cls.dirname, "entries")
- if not os.path.exists(entries_fn):
- # FIXME: should we warn?
- continue
-
- dirurl, localrev = cls._get_svn_url_rev(base)
-
- if base == location:
- assert dirurl is not None
- base = dirurl + "/" # save the root url
- elif not dirurl or not dirurl.startswith(base):
- dirs[:] = []
- continue # not part of the same svn tree, skip it
- revision = max(revision, localrev)
- return str(revision)
-
- @classmethod
- def get_netloc_and_auth(
- cls, netloc: str, scheme: str
- ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]:
- """
- This override allows the auth information to be passed to svn via the
- --username and --password options instead of via the URL.
- """
- if scheme == "ssh":
- # The --username and --password options can't be used for
- # svn+ssh URLs, so keep the auth information in the URL.
- return super().get_netloc_and_auth(netloc, scheme)
-
- return split_auth_from_netloc(netloc)
-
- @classmethod
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
- # hotfix the URL scheme after removing svn+ from svn+ssh:// readd it
- url, rev, user_pass = super().get_url_rev_and_auth(url)
- if url.startswith("ssh://"):
- url = "svn+" + url
- return url, rev, user_pass
-
- @staticmethod
- def make_rev_args(
- username: Optional[str], password: Optional[HiddenText]
- ) -> CommandArgs:
- extra_args: CommandArgs = []
- if username:
- extra_args += ["--username", username]
- if password:
- extra_args += ["--password", password]
-
- return extra_args
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- # In cases where the source is in a subdirectory, we have to look up in
- # the location until we find a valid project root.
- orig_location = location
- while not is_installable_dir(location):
- last_location = location
- location = os.path.dirname(location)
- if location == last_location:
- # We've traversed up to the root of the filesystem without
- # finding a Python project.
- logger.warning(
- "Could not find Python project for directory %s (tried all "
- "parent directories)",
- orig_location,
- )
- raise RemoteNotFoundError
-
- url, _rev = cls._get_svn_url_rev(location)
- if url is None:
- raise RemoteNotFoundError
-
- return url
-
- @classmethod
- def _get_svn_url_rev(cls, location: str) -> Tuple[Optional[str], int]:
- from pip._internal.exceptions import InstallationError
-
- entries_path = os.path.join(location, cls.dirname, "entries")
- if os.path.exists(entries_path):
- with open(entries_path) as f:
- data = f.read()
- else: # subversion >= 1.7 does not have the 'entries' file
- data = ""
-
- url = None
- if data.startswith("8") or data.startswith("9") or data.startswith("10"):
- entries = list(map(str.splitlines, data.split("\n\x0c\n")))
- del entries[0][0] # get rid of the '8'
- url = entries[0][3]
- revs = [int(d[9]) for d in entries if len(d) > 9 and d[9]] + [0]
- elif data.startswith("<?xml"):
- match = _svn_xml_url_re.search(data)
- if not match:
- raise ValueError(f"Badly formatted data: {data!r}")
- url = match.group(1) # get repository URL
- revs = [int(m.group(1)) for m in _svn_rev_re.finditer(data)] + [0]
- else:
- try:
- # subversion >= 1.7
- # Note that using get_remote_call_options is not necessary here
- # because `svn info` is being run against a local directory.
- # We don't need to worry about making sure interactive mode
- # is being used to prompt for passwords, because passwords
- # are only potentially needed for remote server requests.
- xml = cls.run_command(
- ["info", "--xml", location],
- show_stdout=False,
- stdout_only=True,
- )
- match = _svn_info_xml_url_re.search(xml)
- assert match is not None
- url = match.group(1)
- revs = [int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)]
- except InstallationError:
- url, revs = None, []
-
- if revs:
- rev = max(revs)
- else:
- rev = 0
-
- return url, rev
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """Always assume the versions don't match"""
- return False
-
- def __init__(self, use_interactive: bool = None) -> None:
- if use_interactive is None:
- use_interactive = is_console_interactive()
- self.use_interactive = use_interactive
-
- # This member is used to cache the fetched version of the current
- # ``svn`` client.
- # Special value definitions:
- # None: Not evaluated yet.
- # Empty tuple: Could not parse version.
- self._vcs_version: Optional[Tuple[int, ...]] = None
-
- super().__init__()
-
- def call_vcs_version(self) -> Tuple[int, ...]:
- """Query the version of the currently installed Subversion client.
-
- :return: A tuple containing the parts of the version information or
- ``()`` if the version returned from ``svn`` could not be parsed.
- :raises: BadCommand: If ``svn`` is not installed.
- """
- # Example versions:
- # svn, version 1.10.3 (r1842928)
- # compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0
- # svn, version 1.7.14 (r1542130)
- # compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu
- # svn, version 1.12.0-SlikSvn (SlikSvn/1.12.0)
- # compiled May 28 2019, 13:44:56 on x86_64-microsoft-windows6.2
- version_prefix = "svn, version "
- version = self.run_command(["--version"], show_stdout=False, stdout_only=True)
- if not version.startswith(version_prefix):
- return ()
-
- version = version[len(version_prefix) :].split()[0]
- version_list = version.partition("-")[0].split(".")
- try:
- parsed_version = tuple(map(int, version_list))
- except ValueError:
- return ()
-
- return parsed_version
-
- def get_vcs_version(self) -> Tuple[int, ...]:
- """Return the version of the currently installed Subversion client.
-
- If the version of the Subversion client has already been queried,
- a cached value will be used.
-
- :return: A tuple containing the parts of the version information or
- ``()`` if the version returned from ``svn`` could not be parsed.
- :raises: BadCommand: If ``svn`` is not installed.
- """
- if self._vcs_version is not None:
- # Use cached version, if available.
- # If parsing the version failed previously (empty tuple),
- # do not attempt to parse it again.
- return self._vcs_version
-
- vcs_version = self.call_vcs_version()
- self._vcs_version = vcs_version
- return vcs_version
-
- def get_remote_call_options(self) -> CommandArgs:
- """Return options to be used on calls to Subversion that contact the server.
-
- These options are applicable for the following ``svn`` subcommands used
- in this class.
-
- - checkout
- - switch
- - update
-
- :return: A list of command line arguments to pass to ``svn``.
- """
- if not self.use_interactive:
- # --non-interactive switch is available since Subversion 0.14.4.
- # Subversion < 1.8 runs in interactive mode by default.
- return ["--non-interactive"]
-
- svn_version = self.get_vcs_version()
- # By default, Subversion >= 1.8 runs in non-interactive mode if
- # stdin is not a TTY. Since that is how pip invokes SVN, in
- # call_subprocess(), pip must pass --force-interactive to ensure
- # the user can be prompted for a password, if required.
- # SVN added the --force-interactive option in SVN 1.8. Since
- # e.g. RHEL/CentOS 7, which is supported until 2024, ships with
- # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip
- # can't safely add the option if the SVN version is < 1.8 (or unknown).
- if svn_version >= (1, 8):
- return ["--force-interactive"]
-
- return []
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- rev_display = rev_options.to_display()
- logger.info(
- "Checking out %s%s to %s",
- url,
- rev_display,
- display_path(dest),
- )
- if verbosity <= 0:
- flag = "--quiet"
- else:
- flag = ""
- cmd_args = make_command(
- "checkout",
- flag,
- self.get_remote_call_options(),
- rev_options.to_args(),
- url,
- dest,
- )
- self.run_command(cmd_args)
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- cmd_args = make_command(
- "switch",
- self.get_remote_call_options(),
- rev_options.to_args(),
- url,
- dest,
- )
- self.run_command(cmd_args)
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- cmd_args = make_command(
- "update",
- self.get_remote_call_options(),
- rev_options.to_args(),
- dest,
- )
- self.run_command(cmd_args)
-
-
-vcs.register(Subversion)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/sanitizer.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/sanitizer.py
deleted file mode 100644
index aa7431d131213f85ab36cacc54b000e88898080b..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/filters/sanitizer.py
+++ /dev/null
@@ -1,916 +0,0 @@
-"""Deprecated from html5lib 1.1.
-
-See `here <https://github.com/html5lib/html5lib-python/issues/443>`_ for
-information about its deprecation; `Bleach <https://github.com/mozilla/bleach>`_
-is recommended as a replacement. Please let us know in the aforementioned issue
-if Bleach is unsuitable for your needs.
-
-"""
-from __future__ import absolute_import, division, unicode_literals
-
-import re
-import warnings
-from xml.sax.saxutils import escape, unescape
-
-from pip._vendor.six.moves import urllib_parse as urlparse
-
-from . import base
-from ..constants import namespaces, prefixes
-
-__all__ = ["Filter"]
-
-
-_deprecation_msg = (
- "html5lib's sanitizer is deprecated; see " +
- "https://github.com/html5lib/html5lib-python/issues/443 and please let " +
- "us know if Bleach is unsuitable for your needs"
-)
-
-warnings.warn(_deprecation_msg, DeprecationWarning)
-
-allowed_elements = frozenset((
- (namespaces['html'], 'a'),
- (namespaces['html'], 'abbr'),
- (namespaces['html'], 'acronym'),
- (namespaces['html'], 'address'),
- (namespaces['html'], 'area'),
- (namespaces['html'], 'article'),
- (namespaces['html'], 'aside'),
- (namespaces['html'], 'audio'),
- (namespaces['html'], 'b'),
- (namespaces['html'], 'big'),
- (namespaces['html'], 'blockquote'),
- (namespaces['html'], 'br'),
- (namespaces['html'], 'button'),
- (namespaces['html'], 'canvas'),
- (namespaces['html'], 'caption'),
- (namespaces['html'], 'center'),
- (namespaces['html'], 'cite'),
- (namespaces['html'], 'code'),
- (namespaces['html'], 'col'),
- (namespaces['html'], 'colgroup'),
- (namespaces['html'], 'command'),
- (namespaces['html'], 'datagrid'),
- (namespaces['html'], 'datalist'),
- (namespaces['html'], 'dd'),
- (namespaces['html'], 'del'),
- (namespaces['html'], 'details'),
- (namespaces['html'], 'dfn'),
- (namespaces['html'], 'dialog'),
- (namespaces['html'], 'dir'),
- (namespaces['html'], 'div'),
- (namespaces['html'], 'dl'),
- (namespaces['html'], 'dt'),
- (namespaces['html'], 'em'),
- (namespaces['html'], 'event-source'),
- (namespaces['html'], 'fieldset'),
- (namespaces['html'], 'figcaption'),
- (namespaces['html'], 'figure'),
- (namespaces['html'], 'footer'),
- (namespaces['html'], 'font'),
- (namespaces['html'], 'form'),
- (namespaces['html'], 'header'),
- (namespaces['html'], 'h1'),
- (namespaces['html'], 'h2'),
- (namespaces['html'], 'h3'),
- (namespaces['html'], 'h4'),
- (namespaces['html'], 'h5'),
- (namespaces['html'], 'h6'),
- (namespaces['html'], 'hr'),
- (namespaces['html'], 'i'),
- (namespaces['html'], 'img'),
- (namespaces['html'], 'input'),
- (namespaces['html'], 'ins'),
- (namespaces['html'], 'keygen'),
- (namespaces['html'], 'kbd'),
- (namespaces['html'], 'label'),
- (namespaces['html'], 'legend'),
- (namespaces['html'], 'li'),
- (namespaces['html'], 'm'),
- (namespaces['html'], 'map'),
- (namespaces['html'], 'menu'),
- (namespaces['html'], 'meter'),
- (namespaces['html'], 'multicol'),
- (namespaces['html'], 'nav'),
- (namespaces['html'], 'nextid'),
- (namespaces['html'], 'ol'),
- (namespaces['html'], 'output'),
- (namespaces['html'], 'optgroup'),
- (namespaces['html'], 'option'),
- (namespaces['html'], 'p'),
- (namespaces['html'], 'pre'),
- (namespaces['html'], 'progress'),
- (namespaces['html'], 'q'),
- (namespaces['html'], 's'),
- (namespaces['html'], 'samp'),
- (namespaces['html'], 'section'),
- (namespaces['html'], 'select'),
- (namespaces['html'], 'small'),
- (namespaces['html'], 'sound'),
- (namespaces['html'], 'source'),
- (namespaces['html'], 'spacer'),
- (namespaces['html'], 'span'),
- (namespaces['html'], 'strike'),
- (namespaces['html'], 'strong'),
- (namespaces['html'], 'sub'),
- (namespaces['html'], 'sup'),
- (namespaces['html'], 'table'),
- (namespaces['html'], 'tbody'),
- (namespaces['html'], 'td'),
- (namespaces['html'], 'textarea'),
- (namespaces['html'], 'time'),
- (namespaces['html'], 'tfoot'),
- (namespaces['html'], 'th'),
- (namespaces['html'], 'thead'),
- (namespaces['html'], 'tr'),
- (namespaces['html'], 'tt'),
- (namespaces['html'], 'u'),
- (namespaces['html'], 'ul'),
- (namespaces['html'], 'var'),
- (namespaces['html'], 'video'),
- (namespaces['mathml'], 'maction'),
- (namespaces['mathml'], 'math'),
- (namespaces['mathml'], 'merror'),
- (namespaces['mathml'], 'mfrac'),
- (namespaces['mathml'], 'mi'),
- (namespaces['mathml'], 'mmultiscripts'),
- (namespaces['mathml'], 'mn'),
- (namespaces['mathml'], 'mo'),
- (namespaces['mathml'], 'mover'),
- (namespaces['mathml'], 'mpadded'),
- (namespaces['mathml'], 'mphantom'),
- (namespaces['mathml'], 'mprescripts'),
- (namespaces['mathml'], 'mroot'),
- (namespaces['mathml'], 'mrow'),
- (namespaces['mathml'], 'mspace'),
- (namespaces['mathml'], 'msqrt'),
- (namespaces['mathml'], 'mstyle'),
- (namespaces['mathml'], 'msub'),
- (namespaces['mathml'], 'msubsup'),
- (namespaces['mathml'], 'msup'),
- (namespaces['mathml'], 'mtable'),
- (namespaces['mathml'], 'mtd'),
- (namespaces['mathml'], 'mtext'),
- (namespaces['mathml'], 'mtr'),
- (namespaces['mathml'], 'munder'),
- (namespaces['mathml'], 'munderover'),
- (namespaces['mathml'], 'none'),
- (namespaces['svg'], 'a'),
- (namespaces['svg'], 'animate'),
- (namespaces['svg'], 'animateColor'),
- (namespaces['svg'], 'animateMotion'),
- (namespaces['svg'], 'animateTransform'),
- (namespaces['svg'], 'clipPath'),
- (namespaces['svg'], 'circle'),
- (namespaces['svg'], 'defs'),
- (namespaces['svg'], 'desc'),
- (namespaces['svg'], 'ellipse'),
- (namespaces['svg'], 'font-face'),
- (namespaces['svg'], 'font-face-name'),
- (namespaces['svg'], 'font-face-src'),
- (namespaces['svg'], 'g'),
- (namespaces['svg'], 'glyph'),
- (namespaces['svg'], 'hkern'),
- (namespaces['svg'], 'linearGradient'),
- (namespaces['svg'], 'line'),
- (namespaces['svg'], 'marker'),
- (namespaces['svg'], 'metadata'),
- (namespaces['svg'], 'missing-glyph'),
- (namespaces['svg'], 'mpath'),
- (namespaces['svg'], 'path'),
- (namespaces['svg'], 'polygon'),
- (namespaces['svg'], 'polyline'),
- (namespaces['svg'], 'radialGradient'),
- (namespaces['svg'], 'rect'),
- (namespaces['svg'], 'set'),
- (namespaces['svg'], 'stop'),
- (namespaces['svg'], 'svg'),
- (namespaces['svg'], 'switch'),
- (namespaces['svg'], 'text'),
- (namespaces['svg'], 'title'),
- (namespaces['svg'], 'tspan'),
- (namespaces['svg'], 'use'),
-))
-
-allowed_attributes = frozenset((
- # HTML attributes
- (None, 'abbr'),
- (None, 'accept'),
- (None, 'accept-charset'),
- (None, 'accesskey'),
- (None, 'action'),
- (None, 'align'),
- (None, 'alt'),
- (None, 'autocomplete'),
- (None, 'autofocus'),
- (None, 'axis'),
- (None, 'background'),
- (None, 'balance'),
- (None, 'bgcolor'),
- (None, 'bgproperties'),
- (None, 'border'),
- (None, 'bordercolor'),
- (None, 'bordercolordark'),
- (None, 'bordercolorlight'),
- (None, 'bottompadding'),
- (None, 'cellpadding'),
- (None, 'cellspacing'),
- (None, 'ch'),
- (None, 'challenge'),
- (None, 'char'),
- (None, 'charoff'),
- (None, 'choff'),
- (None, 'charset'),
- (None, 'checked'),
- (None, 'cite'),
- (None, 'class'),
- (None, 'clear'),
- (None, 'color'),
- (None, 'cols'),
- (None, 'colspan'),
- (None, 'compact'),
- (None, 'contenteditable'),
- (None, 'controls'),
- (None, 'coords'),
- (None, 'data'),
- (None, 'datafld'),
- (None, 'datapagesize'),
- (None, 'datasrc'),
- (None, 'datetime'),
- (None, 'default'),
- (None, 'delay'),
- (None, 'dir'),
- (None, 'disabled'),
- (None, 'draggable'),
- (None, 'dynsrc'),
- (None, 'enctype'),
- (None, 'end'),
- (None, 'face'),
- (None, 'for'),
- (None, 'form'),
- (None, 'frame'),
- (None, 'galleryimg'),
- (None, 'gutter'),
- (None, 'headers'),
- (None, 'height'),
- (None, 'hidefocus'),
- (None, 'hidden'),
- (None, 'high'),
- (None, 'href'),
- (None, 'hreflang'),
- (None, 'hspace'),
- (None, 'icon'),
- (None, 'id'),
- (None, 'inputmode'),
- (None, 'ismap'),
- (None, 'keytype'),
- (None, 'label'),
- (None, 'leftspacing'),
- (None, 'lang'),
- (None, 'list'),
- (None, 'longdesc'),
- (None, 'loop'),
- (None, 'loopcount'),
- (None, 'loopend'),
- (None, 'loopstart'),
- (None, 'low'),
- (None, 'lowsrc'),
- (None, 'max'),
- (None, 'maxlength'),
- (None, 'media'),
- (None, 'method'),
- (None, 'min'),
- (None, 'multiple'),
- (None, 'name'),
- (None, 'nohref'),
- (None, 'noshade'),
- (None, 'nowrap'),
- (None, 'open'),
- (None, 'optimum'),
- (None, 'pattern'),
- (None, 'ping'),
- (None, 'point-size'),
- (None, 'poster'),
- (None, 'pqg'),
- (None, 'preload'),
- (None, 'prompt'),
- (None, 'radiogroup'),
- (None, 'readonly'),
- (None, 'rel'),
- (None, 'repeat-max'),
- (None, 'repeat-min'),
- (None, 'replace'),
- (None, 'required'),
- (None, 'rev'),
- (None, 'rightspacing'),
- (None, 'rows'),
- (None, 'rowspan'),
- (None, 'rules'),
- (None, 'scope'),
- (None, 'selected'),
- (None, 'shape'),
- (None, 'size'),
- (None, 'span'),
- (None, 'src'),
- (None, 'start'),
- (None, 'step'),
- (None, 'style'),
- (None, 'summary'),
- (None, 'suppress'),
- (None, 'tabindex'),
- (None, 'target'),
- (None, 'template'),
- (None, 'title'),
- (None, 'toppadding'),
- (None, 'type'),
- (None, 'unselectable'),
- (None, 'usemap'),
- (None, 'urn'),
- (None, 'valign'),
- (None, 'value'),
- (None, 'variable'),
- (None, 'volume'),
- (None, 'vspace'),
- (None, 'vrml'),
- (None, 'width'),
- (None, 'wrap'),
- (namespaces['xml'], 'lang'),
- # MathML attributes
- (None, 'actiontype'),
- (None, 'align'),
- (None, 'columnalign'),
- (None, 'columnalign'),
- (None, 'columnalign'),
- (None, 'columnlines'),
- (None, 'columnspacing'),
- (None, 'columnspan'),
- (None, 'depth'),
- (None, 'display'),
- (None, 'displaystyle'),
- (None, 'equalcolumns'),
- (None, 'equalrows'),
- (None, 'fence'),
- (None, 'fontstyle'),
- (None, 'fontweight'),
- (None, 'frame'),
- (None, 'height'),
- (None, 'linethickness'),
- (None, 'lspace'),
- (None, 'mathbackground'),
- (None, 'mathcolor'),
- (None, 'mathvariant'),
- (None, 'mathvariant'),
- (None, 'maxsize'),
- (None, 'minsize'),
- (None, 'other'),
- (None, 'rowalign'),
- (None, 'rowalign'),
- (None, 'rowalign'),
- (None, 'rowlines'),
- (None, 'rowspacing'),
- (None, 'rowspan'),
- (None, 'rspace'),
- (None, 'scriptlevel'),
- (None, 'selection'),
- (None, 'separator'),
- (None, 'stretchy'),
- (None, 'width'),
- (None, 'width'),
- (namespaces['xlink'], 'href'),
- (namespaces['xlink'], 'show'),
- (namespaces['xlink'], 'type'),
- # SVG attributes
- (None, 'accent-height'),
- (None, 'accumulate'),
- (None, 'additive'),
- (None, 'alphabetic'),
- (None, 'arabic-form'),
- (None, 'ascent'),
- (None, 'attributeName'),
- (None, 'attributeType'),
- (None, 'baseProfile'),
- (None, 'bbox'),
- (None, 'begin'),
- (None, 'by'),
- (None, 'calcMode'),
- (None, 'cap-height'),
- (None, 'class'),
- (None, 'clip-path'),
- (None, 'color'),
- (None, 'color-rendering'),
- (None, 'content'),
- (None, 'cx'),
- (None, 'cy'),
- (None, 'd'),
- (None, 'dx'),
- (None, 'dy'),
- (None, 'descent'),
- (None, 'display'),
- (None, 'dur'),
- (None, 'end'),
- (None, 'fill'),
- (None, 'fill-opacity'),
- (None, 'fill-rule'),
- (None, 'font-family'),
- (None, 'font-size'),
- (None, 'font-stretch'),
- (None, 'font-style'),
- (None, 'font-variant'),
- (None, 'font-weight'),
- (None, 'from'),
- (None, 'fx'),
- (None, 'fy'),
- (None, 'g1'),
- (None, 'g2'),
- (None, 'glyph-name'),
- (None, 'gradientUnits'),
- (None, 'hanging'),
- (None, 'height'),
- (None, 'horiz-adv-x'),
- (None, 'horiz-origin-x'),
- (None, 'id'),
- (None, 'ideographic'),
- (None, 'k'),
- (None, 'keyPoints'),
- (None, 'keySplines'),
- (None, 'keyTimes'),
- (None, 'lang'),
- (None, 'marker-end'),
- (None, 'marker-mid'),
- (None, 'marker-start'),
- (None, 'markerHeight'),
- (None, 'markerUnits'),
- (None, 'markerWidth'),
- (None, 'mathematical'),
- (None, 'max'),
- (None, 'min'),
- (None, 'name'),
- (None, 'offset'),
- (None, 'opacity'),
- (None, 'orient'),
- (None, 'origin'),
- (None, 'overline-position'),
- (None, 'overline-thickness'),
- (None, 'panose-1'),
- (None, 'path'),
- (None, 'pathLength'),
- (None, 'points'),
- (None, 'preserveAspectRatio'),
- (None, 'r'),
- (None, 'refX'),
- (None, 'refY'),
- (None, 'repeatCount'),
- (None, 'repeatDur'),
- (None, 'requiredExtensions'),
- (None, 'requiredFeatures'),
- (None, 'restart'),
- (None, 'rotate'),
- (None, 'rx'),
- (None, 'ry'),
- (None, 'slope'),
- (None, 'stemh'),
- (None, 'stemv'),
- (None, 'stop-color'),
- (None, 'stop-opacity'),
- (None, 'strikethrough-position'),
- (None, 'strikethrough-thickness'),
- (None, 'stroke'),
- (None, 'stroke-dasharray'),
- (None, 'stroke-dashoffset'),
- (None, 'stroke-linecap'),
- (None, 'stroke-linejoin'),
- (None, 'stroke-miterlimit'),
- (None, 'stroke-opacity'),
- (None, 'stroke-width'),
- (None, 'systemLanguage'),
- (None, 'target'),
- (None, 'text-anchor'),
- (None, 'to'),
- (None, 'transform'),
- (None, 'type'),
- (None, 'u1'),
- (None, 'u2'),
- (None, 'underline-position'),
- (None, 'underline-thickness'),
- (None, 'unicode'),
- (None, 'unicode-range'),
- (None, 'units-per-em'),
- (None, 'values'),
- (None, 'version'),
- (None, 'viewBox'),
- (None, 'visibility'),
- (None, 'width'),
- (None, 'widths'),
- (None, 'x'),
- (None, 'x-height'),
- (None, 'x1'),
- (None, 'x2'),
- (namespaces['xlink'], 'actuate'),
- (namespaces['xlink'], 'arcrole'),
- (namespaces['xlink'], 'href'),
- (namespaces['xlink'], 'role'),
- (namespaces['xlink'], 'show'),
- (namespaces['xlink'], 'title'),
- (namespaces['xlink'], 'type'),
- (namespaces['xml'], 'base'),
- (namespaces['xml'], 'lang'),
- (namespaces['xml'], 'space'),
- (None, 'y'),
- (None, 'y1'),
- (None, 'y2'),
- (None, 'zoomAndPan'),
-))
-
-attr_val_is_uri = frozenset((
- (None, 'href'),
- (None, 'src'),
- (None, 'cite'),
- (None, 'action'),
- (None, 'longdesc'),
- (None, 'poster'),
- (None, 'background'),
- (None, 'datasrc'),
- (None, 'dynsrc'),
- (None, 'lowsrc'),
- (None, 'ping'),
- (namespaces['xlink'], 'href'),
- (namespaces['xml'], 'base'),
-))
-
-svg_attr_val_allows_ref = frozenset((
- (None, 'clip-path'),
- (None, 'color-profile'),
- (None, 'cursor'),
- (None, 'fill'),
- (None, 'filter'),
- (None, 'marker'),
- (None, 'marker-start'),
- (None, 'marker-mid'),
- (None, 'marker-end'),
- (None, 'mask'),
- (None, 'stroke'),
-))
-
-svg_allow_local_href = frozenset((
- (None, 'altGlyph'),
- (None, 'animate'),
- (None, 'animateColor'),
- (None, 'animateMotion'),
- (None, 'animateTransform'),
- (None, 'cursor'),
- (None, 'feImage'),
- (None, 'filter'),
- (None, 'linearGradient'),
- (None, 'pattern'),
- (None, 'radialGradient'),
- (None, 'textpath'),
- (None, 'tref'),
- (None, 'set'),
- (None, 'use')
-))
-
-allowed_css_properties = frozenset((
- 'azimuth',
- 'background-color',
- 'border-bottom-color',
- 'border-collapse',
- 'border-color',
- 'border-left-color',
- 'border-right-color',
- 'border-top-color',
- 'clear',
- 'color',
- 'cursor',
- 'direction',
- 'display',
- 'elevation',
- 'float',
- 'font',
- 'font-family',
- 'font-size',
- 'font-style',
- 'font-variant',
- 'font-weight',
- 'height',
- 'letter-spacing',
- 'line-height',
- 'overflow',
- 'pause',
- 'pause-after',
- 'pause-before',
- 'pitch',
- 'pitch-range',
- 'richness',
- 'speak',
- 'speak-header',
- 'speak-numeral',
- 'speak-punctuation',
- 'speech-rate',
- 'stress',
- 'text-align',
- 'text-decoration',
- 'text-indent',
- 'unicode-bidi',
- 'vertical-align',
- 'voice-family',
- 'volume',
- 'white-space',
- 'width',
-))
-
-allowed_css_keywords = frozenset((
- 'auto',
- 'aqua',
- 'black',
- 'block',
- 'blue',
- 'bold',
- 'both',
- 'bottom',
- 'brown',
- 'center',
- 'collapse',
- 'dashed',
- 'dotted',
- 'fuchsia',
- 'gray',
- 'green',
- '!important',
- 'italic',
- 'left',
- 'lime',
- 'maroon',
- 'medium',
- 'none',
- 'navy',
- 'normal',
- 'nowrap',
- 'olive',
- 'pointer',
- 'purple',
- 'red',
- 'right',
- 'solid',
- 'silver',
- 'teal',
- 'top',
- 'transparent',
- 'underline',
- 'white',
- 'yellow',
-))
-
-allowed_svg_properties = frozenset((
- 'fill',
- 'fill-opacity',
- 'fill-rule',
- 'stroke',
- 'stroke-width',
- 'stroke-linecap',
- 'stroke-linejoin',
- 'stroke-opacity',
-))
-
-allowed_protocols = frozenset((
- 'ed2k',
- 'ftp',
- 'http',
- 'https',
- 'irc',
- 'mailto',
- 'news',
- 'gopher',
- 'nntp',
- 'telnet',
- 'webcal',
- 'xmpp',
- 'callto',
- 'feed',
- 'urn',
- 'aim',
- 'rsync',
- 'tag',
- 'ssh',
- 'sftp',
- 'rtsp',
- 'afs',
- 'data',
-))
-
-allowed_content_types = frozenset((
- 'image/png',
- 'image/jpeg',
- 'image/gif',
- 'image/webp',
- 'image/bmp',
- 'text/plain',
-))
-
-
-data_content_type = re.compile(r'''
- ^
- # Match a content type <application>/<type>
- (?P<content_type>[-a-zA-Z0-9.]+/[-a-zA-Z0-9.]+)
- # Match any character set and encoding
- (?:(?:;charset=(?:[-a-zA-Z0-9]+)(?:;(?:base64))?)
- |(?:;(?:base64))?(?:;charset=(?:[-a-zA-Z0-9]+))?)
- # Assume the rest is data
- ,.*
- $
- ''',
- re.VERBOSE)
-
-
-class Filter(base.Filter):
- """Sanitizes token stream of XHTML+MathML+SVG and of inline style attributes"""
- def __init__(self,
- source,
- allowed_elements=allowed_elements,
- allowed_attributes=allowed_attributes,
- allowed_css_properties=allowed_css_properties,
- allowed_css_keywords=allowed_css_keywords,
- allowed_svg_properties=allowed_svg_properties,
- allowed_protocols=allowed_protocols,
- allowed_content_types=allowed_content_types,
- attr_val_is_uri=attr_val_is_uri,
- svg_attr_val_allows_ref=svg_attr_val_allows_ref,
- svg_allow_local_href=svg_allow_local_href):
- """Creates a Filter
-
- :arg allowed_elements: set of elements to allow--everything else will
- be escaped
-
- :arg allowed_attributes: set of attributes to allow in
- elements--everything else will be stripped
-
- :arg allowed_css_properties: set of CSS properties to allow--everything
- else will be stripped
-
- :arg allowed_css_keywords: set of CSS keywords to allow--everything
- else will be stripped
-
- :arg allowed_svg_properties: set of SVG properties to allow--everything
- else will be removed
-
- :arg allowed_protocols: set of allowed protocols for URIs
-
- :arg allowed_content_types: set of allowed content types for ``data`` URIs.
-
- :arg attr_val_is_uri: set of attributes that have URI values--values
- that have a scheme not listed in ``allowed_protocols`` are removed
-
- :arg svg_attr_val_allows_ref: set of SVG attributes that can have
- references
-
- :arg svg_allow_local_href: set of SVG elements that can have local
- hrefs--these are removed
-
- """
- super(Filter, self).__init__(source)
-
- warnings.warn(_deprecation_msg, DeprecationWarning)
-
- self.allowed_elements = allowed_elements
- self.allowed_attributes = allowed_attributes
- self.allowed_css_properties = allowed_css_properties
- self.allowed_css_keywords = allowed_css_keywords
- self.allowed_svg_properties = allowed_svg_properties
- self.allowed_protocols = allowed_protocols
- self.allowed_content_types = allowed_content_types
- self.attr_val_is_uri = attr_val_is_uri
- self.svg_attr_val_allows_ref = svg_attr_val_allows_ref
- self.svg_allow_local_href = svg_allow_local_href
-
- def __iter__(self):
- for token in base.Filter.__iter__(self):
- token = self.sanitize_token(token)
- if token:
- yield token
-
- # Sanitize the +html+, escaping all elements not in ALLOWED_ELEMENTS, and
- # stripping out all attributes not in ALLOWED_ATTRIBUTES. Style attributes
- # are parsed, and a restricted set, specified by ALLOWED_CSS_PROPERTIES and
- # ALLOWED_CSS_KEYWORDS, are allowed through. attributes in ATTR_VAL_IS_URI
- # are scanned, and only URI schemes specified in ALLOWED_PROTOCOLS are
- # allowed.
- #
- # sanitize_html('<script> do_nasty_stuff() </script>')
- # => &lt;script> do_nasty_stuff() &lt;/script>
- # sanitize_html('<a href="javascript: sub_evil()">Click here for $100</a>')
- # => <a>Click here for $100</a>
- def sanitize_token(self, token):
-
- # accommodate filters which use token_type differently
- token_type = token["type"]
- if token_type in ("StartTag", "EndTag", "EmptyTag"):
- name = token["name"]
- namespace = token["namespace"]
- if ((namespace, name) in self.allowed_elements or
- (namespace is None and
- (namespaces["html"], name) in self.allowed_elements)):
- return self.allowed_token(token)
- else:
- return self.disallowed_token(token)
- elif token_type == "Comment":
- pass
- else:
- return token
-
- def allowed_token(self, token):
- if "data" in token:
- attrs = token["data"]
- attr_names = set(attrs.keys())
-
- # Remove forbidden attributes
- for to_remove in (attr_names - self.allowed_attributes):
- del token["data"][to_remove]
- attr_names.remove(to_remove)
-
- # Remove attributes with disallowed URL values
- for attr in (attr_names & self.attr_val_is_uri):
- assert attr in attrs
- # I don't have a clue where this regexp comes from or why it matches those
- # characters, nor why we call unescape. I just know it's always been here.
- # Should you be worried by this comment in a sanitizer? Yes. On the other hand, all
- # this will do is remove *more* than it otherwise would.
- val_unescaped = re.sub("[`\x00-\x20\x7f-\xa0\\s]+", '',
- unescape(attrs[attr])).lower()
- # remove replacement characters from unescaped characters
- val_unescaped = val_unescaped.replace("\ufffd", "")
- try:
- uri = urlparse.urlparse(val_unescaped)
- except ValueError:
- uri = None
- del attrs[attr]
- if uri and uri.scheme:
- if uri.scheme not in self.allowed_protocols:
- del attrs[attr]
- if uri.scheme == 'data':
- m = data_content_type.match(uri.path)
- if not m:
- del attrs[attr]
- elif m.group('content_type') not in self.allowed_content_types:
- del attrs[attr]
-
- for attr in self.svg_attr_val_allows_ref:
- if attr in attrs:
- attrs[attr] = re.sub(r'url\s*\(\s*[^#\s][^)]+?\)',
- ' ',
- unescape(attrs[attr]))
- if (token["name"] in self.svg_allow_local_href and
- (namespaces['xlink'], 'href') in attrs and re.search(r'^\s*[^#\s].*',
- attrs[(namespaces['xlink'], 'href')])):
- del attrs[(namespaces['xlink'], 'href')]
- if (None, 'style') in attrs:
- attrs[(None, 'style')] = self.sanitize_css(attrs[(None, 'style')])
- token["data"] = attrs
- return token
-
- def disallowed_token(self, token):
- token_type = token["type"]
- if token_type == "EndTag":
- token["data"] = "%s>" % token["name"]
- elif token["data"]:
- assert token_type in ("StartTag", "EmptyTag")
- attrs = []
- for (ns, name), v in token["data"].items():
- attrs.append(' %s="%s"' % (name if ns is None else "%s:%s" % (prefixes[ns], name), escape(v)))
- token["data"] = "<%s%s>" % (token["name"], ''.join(attrs))
- else:
- token["data"] = "<%s>" % token["name"]
- if token.get("selfClosing"):
- token["data"] = token["data"][:-1] + "/>"
-
- token["type"] = "Characters"
-
- del token["name"]
- return token
-
- def sanitize_css(self, style):
- # disallow urls
- style = re.compile(r'url\s*\(\s*[^\s)]+?\s*\)\s*').sub(' ', style)
-
- # gauntlet
- if not re.match(r"""^([:,;#%.\sa-zA-Z0-9!]|\w-\w|'[\s\w]+'|"[\s\w]+"|\([\d,\s]+\))*$""", style):
- return ''
- if not re.match(r"^\s*([-\w]+\s*:[^:;]*(;\s*|$))*$", style):
- return ''
-
- clean = []
- for prop, value in re.findall(r"([-\w]+)\s*:\s*([^:;]*)", style):
- if not value:
- continue
- if prop.lower() in self.allowed_css_properties:
- clean.append(prop + ': ' + value + ';')
- elif prop.split('-')[0].lower() in ['background', 'border', 'margin',
- 'padding']:
- for keyword in value.split():
- if keyword not in self.allowed_css_keywords and \
- not re.match(r"^(#[0-9a-fA-F]+|rgb\(\d+%?,\d*%?,?\d*%?\)?|\d{0,2}\.?\d{0,2}(cm|em|ex|in|mm|pc|pt|px|%|,|\))?)$", keyword): # noqa
- break
- else:
- clean.append(prop + ': ' + value + ';')
- elif prop.lower() in self.allowed_svg_properties:
- clean.append(prop + ': ' + value + ';')
-
- return ' '.join(clean)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/syntax.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/syntax.py
deleted file mode 100644
index 618e0459f5499c25216ceed0c4c3f1ece6640433..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/syntax.py
+++ /dev/null
@@ -1,958 +0,0 @@
-import os.path
-import platform
-import re
-import sys
-import textwrap
-from abc import ABC, abstractmethod
-from pathlib import Path
-from typing import (
- Any,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Set,
- Tuple,
- Type,
- Union,
-)
-
-from pygments.lexer import Lexer
-from pygments.lexers import get_lexer_by_name, guess_lexer_for_filename
-from pygments.style import Style as PygmentsStyle
-from pygments.styles import get_style_by_name
-from pygments.token import (
- Comment,
- Error,
- Generic,
- Keyword,
- Name,
- Number,
- Operator,
- String,
- Token,
- Whitespace,
-)
-from pygments.util import ClassNotFound
-
-from rich.containers import Lines
-from rich.padding import Padding, PaddingDimensions
-
-from ._loop import loop_first
-from .cells import cell_len
-from .color import Color, blend_rgb
-from .console import Console, ConsoleOptions, JustifyMethod, RenderResult
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment, Segments
-from .style import Style, StyleType
-from .text import Text
-
-TokenType = Tuple[str, ...]
-
-WINDOWS = platform.system() == "Windows"
-DEFAULT_THEME = "monokai"
-
-# The following styles are based on https://github.com/pygments/pygments/blob/master/pygments/formatters/terminal.py
-# A few modifications were made
-
-ANSI_LIGHT: Dict[TokenType, Style] = {
- Token: Style(),
- Whitespace: Style(color="white"),
- Comment: Style(dim=True),
- Comment.Preproc: Style(color="cyan"),
- Keyword: Style(color="blue"),
- Keyword.Type: Style(color="cyan"),
- Operator.Word: Style(color="magenta"),
- Name.Builtin: Style(color="cyan"),
- Name.Function: Style(color="green"),
- Name.Namespace: Style(color="cyan", underline=True),
- Name.Class: Style(color="green", underline=True),
- Name.Exception: Style(color="cyan"),
- Name.Decorator: Style(color="magenta", bold=True),
- Name.Variable: Style(color="red"),
- Name.Constant: Style(color="red"),
- Name.Attribute: Style(color="cyan"),
- Name.Tag: Style(color="bright_blue"),
- String: Style(color="yellow"),
- Number: Style(color="blue"),
- Generic.Deleted: Style(color="bright_red"),
- Generic.Inserted: Style(color="green"),
- Generic.Heading: Style(bold=True),
- Generic.Subheading: Style(color="magenta", bold=True),
- Generic.Prompt: Style(bold=True),
- Generic.Error: Style(color="bright_red"),
- Error: Style(color="red", underline=True),
-}
-
-ANSI_DARK: Dict[TokenType, Style] = {
- Token: Style(),
- Whitespace: Style(color="bright_black"),
- Comment: Style(dim=True),
- Comment.Preproc: Style(color="bright_cyan"),
- Keyword: Style(color="bright_blue"),
- Keyword.Type: Style(color="bright_cyan"),
- Operator.Word: Style(color="bright_magenta"),
- Name.Builtin: Style(color="bright_cyan"),
- Name.Function: Style(color="bright_green"),
- Name.Namespace: Style(color="bright_cyan", underline=True),
- Name.Class: Style(color="bright_green", underline=True),
- Name.Exception: Style(color="bright_cyan"),
- Name.Decorator: Style(color="bright_magenta", bold=True),
- Name.Variable: Style(color="bright_red"),
- Name.Constant: Style(color="bright_red"),
- Name.Attribute: Style(color="bright_cyan"),
- Name.Tag: Style(color="bright_blue"),
- String: Style(color="yellow"),
- Number: Style(color="bright_blue"),
- Generic.Deleted: Style(color="bright_red"),
- Generic.Inserted: Style(color="bright_green"),
- Generic.Heading: Style(bold=True),
- Generic.Subheading: Style(color="bright_magenta", bold=True),
- Generic.Prompt: Style(bold=True),
- Generic.Error: Style(color="bright_red"),
- Error: Style(color="red", underline=True),
-}
-
-RICH_SYNTAX_THEMES = {"ansi_light": ANSI_LIGHT, "ansi_dark": ANSI_DARK}
-NUMBERS_COLUMN_DEFAULT_PADDING = 2
-
-
-class SyntaxTheme(ABC):
- """Base class for a syntax theme."""
-
- @abstractmethod
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Get a style for a given Pygments token."""
- raise NotImplementedError # pragma: no cover
-
- @abstractmethod
- def get_background_style(self) -> Style:
- """Get the background color."""
- raise NotImplementedError # pragma: no cover
-
-
-class PygmentsSyntaxTheme(SyntaxTheme):
- """Syntax theme that delegates to Pygments theme."""
-
- def __init__(self, theme: Union[str, Type[PygmentsStyle]]) -> None:
- self._style_cache: Dict[TokenType, Style] = {}
- if isinstance(theme, str):
- try:
- self._pygments_style_class = get_style_by_name(theme)
- except ClassNotFound:
- self._pygments_style_class = get_style_by_name("default")
- else:
- self._pygments_style_class = theme
-
- self._background_color = self._pygments_style_class.background_color
- self._background_style = Style(bgcolor=self._background_color)
-
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Get a style from a Pygments class."""
- try:
- return self._style_cache[token_type]
- except KeyError:
- try:
- pygments_style = self._pygments_style_class.style_for_token(token_type)
- except KeyError:
- style = Style.null()
- else:
- color = pygments_style["color"]
- bgcolor = pygments_style["bgcolor"]
- style = Style(
- color="#" + color if color else "#000000",
- bgcolor="#" + bgcolor if bgcolor else self._background_color,
- bold=pygments_style["bold"],
- italic=pygments_style["italic"],
- underline=pygments_style["underline"],
- )
- self._style_cache[token_type] = style
- return style
-
- def get_background_style(self) -> Style:
- return self._background_style
-
-
-class ANSISyntaxTheme(SyntaxTheme):
- """Syntax theme to use standard colors."""
-
- def __init__(self, style_map: Dict[TokenType, Style]) -> None:
- self.style_map = style_map
- self._missing_style = Style.null()
- self._background_style = Style.null()
- self._style_cache: Dict[TokenType, Style] = {}
-
- def get_style_for_token(self, token_type: TokenType) -> Style:
- """Look up style in the style map."""
- try:
- return self._style_cache[token_type]
- except KeyError:
- # Styles form a hierarchy
- # We need to go from most to least specific
- # e.g. ("foo", "bar", "baz") to ("foo", "bar") to ("foo",)
- get_style = self.style_map.get
- token = tuple(token_type)
- style = self._missing_style
- while token:
- _style = get_style(token)
- if _style is not None:
- style = _style
- break
- token = token[:-1]
- self._style_cache[token_type] = style
- return style
-
- def get_background_style(self) -> Style:
- return self._background_style
-
-
-SyntaxPosition = Tuple[int, int]
-
-
-class _SyntaxHighlightRange(NamedTuple):
- """
- A range to highlight in a Syntax object.
- `start` and `end` are 2-integers tuples, where the first integer is the line number
- (starting from 1) and the second integer is the column index (starting from 0).
- """
-
- style: StyleType
- start: SyntaxPosition
- end: SyntaxPosition
-
-
-class Syntax(JupyterMixin):
- """Construct a Syntax object to render syntax highlighted code.
-
- Args:
- code (str): Code to highlight.
- lexer (Lexer | str): Lexer to use (see https://pygments.org/docs/lexers/)
- theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "monokai".
- dedent (bool, optional): Enable stripping of initial whitespace. Defaults to False.
- line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False.
- start_line (int, optional): Starting number for line numbers. Defaults to 1.
- line_range (Tuple[int | None, int | None], optional): If given should be a tuple of the start and end line to render.
- A value of None in the tuple indicates the range is open in that direction.
- highlight_lines (Set[int]): A set of line numbers to highlight.
- code_width: Width of code to render (not including line numbers), or ``None`` to use all available width.
- tab_size (int, optional): Size of tabs. Defaults to 4.
- word_wrap (bool, optional): Enable word wrapping.
- background_color (str, optional): Optional background color, or None to use theme color. Defaults to None.
- indent_guides (bool, optional): Show indent guides. Defaults to False.
- padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding).
- """
-
- _pygments_style_class: Type[PygmentsStyle]
- _theme: SyntaxTheme
-
- @classmethod
- def get_theme(cls, name: Union[str, SyntaxTheme]) -> SyntaxTheme:
- """Get a syntax theme instance."""
- if isinstance(name, SyntaxTheme):
- return name
- theme: SyntaxTheme
- if name in RICH_SYNTAX_THEMES:
- theme = ANSISyntaxTheme(RICH_SYNTAX_THEMES[name])
- else:
- theme = PygmentsSyntaxTheme(name)
- return theme
-
- def __init__(
- self,
- code: str,
- lexer: Union[Lexer, str],
- *,
- theme: Union[str, SyntaxTheme] = DEFAULT_THEME,
- dedent: bool = False,
- line_numbers: bool = False,
- start_line: int = 1,
- line_range: Optional[Tuple[Optional[int], Optional[int]]] = None,
- highlight_lines: Optional[Set[int]] = None,
- code_width: Optional[int] = None,
- tab_size: int = 4,
- word_wrap: bool = False,
- background_color: Optional[str] = None,
- indent_guides: bool = False,
- padding: PaddingDimensions = 0,
- ) -> None:
- self.code = code
- self._lexer = lexer
- self.dedent = dedent
- self.line_numbers = line_numbers
- self.start_line = start_line
- self.line_range = line_range
- self.highlight_lines = highlight_lines or set()
- self.code_width = code_width
- self.tab_size = tab_size
- self.word_wrap = word_wrap
- self.background_color = background_color
- self.background_style = (
- Style(bgcolor=background_color) if background_color else Style()
- )
- self.indent_guides = indent_guides
- self.padding = padding
-
- self._theme = self.get_theme(theme)
- self._stylized_ranges: List[_SyntaxHighlightRange] = []
-
- @classmethod
- def from_path(
- cls,
- path: str,
- encoding: str = "utf-8",
- lexer: Optional[Union[Lexer, str]] = None,
- theme: Union[str, SyntaxTheme] = DEFAULT_THEME,
- dedent: bool = False,
- line_numbers: bool = False,
- line_range: Optional[Tuple[int, int]] = None,
- start_line: int = 1,
- highlight_lines: Optional[Set[int]] = None,
- code_width: Optional[int] = None,
- tab_size: int = 4,
- word_wrap: bool = False,
- background_color: Optional[str] = None,
- indent_guides: bool = False,
- padding: PaddingDimensions = 0,
- ) -> "Syntax":
- """Construct a Syntax object from a file.
-
- Args:
- path (str): Path to file to highlight.
- encoding (str): Encoding of file.
- lexer (str | Lexer, optional): Lexer to use. If None, lexer will be auto-detected from path/file content.
- theme (str, optional): Color theme, aka Pygments style (see https://pygments.org/docs/styles/#getting-a-list-of-available-styles). Defaults to "emacs".
- dedent (bool, optional): Enable stripping of initial whitespace. Defaults to True.
- line_numbers (bool, optional): Enable rendering of line numbers. Defaults to False.
- start_line (int, optional): Starting number for line numbers. Defaults to 1.
- line_range (Tuple[int, int], optional): If given should be a tuple of the start and end line to render.
- highlight_lines (Set[int]): A set of line numbers to highlight.
- code_width: Width of code to render (not including line numbers), or ``None`` to use all available width.
- tab_size (int, optional): Size of tabs. Defaults to 4.
- word_wrap (bool, optional): Enable word wrapping of code.
- background_color (str, optional): Optional background color, or None to use theme color. Defaults to None.
- indent_guides (bool, optional): Show indent guides. Defaults to False.
- padding (PaddingDimensions): Padding to apply around the syntax. Defaults to 0 (no padding).
-
- Returns:
- [Syntax]: A Syntax object that may be printed to the console
- """
- code = Path(path).read_text(encoding=encoding)
-
- if not lexer:
- lexer = cls.guess_lexer(path, code=code)
-
- return cls(
- code,
- lexer,
- theme=theme,
- dedent=dedent,
- line_numbers=line_numbers,
- line_range=line_range,
- start_line=start_line,
- highlight_lines=highlight_lines,
- code_width=code_width,
- tab_size=tab_size,
- word_wrap=word_wrap,
- background_color=background_color,
- indent_guides=indent_guides,
- padding=padding,
- )
-
- @classmethod
- def guess_lexer(cls, path: str, code: Optional[str] = None) -> str:
- """Guess the alias of the Pygments lexer to use based on a path and an optional string of code.
- If code is supplied, it will use a combination of the code and the filename to determine the
- best lexer to use. For example, if the file is ``index.html`` and the file contains Django
- templating syntax, then "html+django" will be returned. If the file is ``index.html``, and no
- templating language is used, the "html" lexer will be used. If no string of code
- is supplied, the lexer will be chosen based on the file extension.
-
- Args:
- path (AnyStr): The path to the file containing the code you wish to know the lexer for.
- code (str, optional): Optional string of code that will be used as a fallback if no lexer
- is found for the supplied path.
-
- Returns:
- str: The name of the Pygments lexer that best matches the supplied path/code.
- """
- lexer: Optional[Lexer] = None
- lexer_name = "default"
- if code:
- try:
- lexer = guess_lexer_for_filename(path, code)
- except ClassNotFound:
- pass
-
- if not lexer:
- try:
- _, ext = os.path.splitext(path)
- if ext:
- extension = ext.lstrip(".").lower()
- lexer = get_lexer_by_name(extension)
- except ClassNotFound:
- pass
-
- if lexer:
- if lexer.aliases:
- lexer_name = lexer.aliases[0]
- else:
- lexer_name = lexer.name
-
- return lexer_name
-
- def _get_base_style(self) -> Style:
- """Get the base style."""
- default_style = self._theme.get_background_style() + self.background_style
- return default_style
-
- def _get_token_color(self, token_type: TokenType) -> Optional[Color]:
- """Get a color (if any) for the given token.
-
- Args:
- token_type (TokenType): A token type tuple from Pygments.
-
- Returns:
- Optional[Color]: Color from theme, or None for no color.
- """
- style = self._theme.get_style_for_token(token_type)
- return style.color
-
- @property
- def lexer(self) -> Optional[Lexer]:
- """The lexer for this syntax, or None if no lexer was found.
-
- Tries to find the lexer by name if a string was passed to the constructor.
- """
-
- if isinstance(self._lexer, Lexer):
- return self._lexer
- try:
- return get_lexer_by_name(
- self._lexer,
- stripnl=False,
- ensurenl=True,
- tabsize=self.tab_size,
- )
- except ClassNotFound:
- return None
-
- @property
- def default_lexer(self) -> Lexer:
- """A Pygments Lexer to use if one is not specified or invalid."""
- return get_lexer_by_name(
- "text",
- stripnl=False,
- ensurenl=True,
- tabsize=self.tab_size,
- )
-
- def highlight(
- self,
- code: str,
- line_range: Optional[Tuple[Optional[int], Optional[int]]] = None,
- ) -> Text:
- """Highlight code and return a Text instance.
-
- Args:
- code (str): Code to highlight.
- line_range(Tuple[int, int], optional): Optional line range to highlight.
-
- Returns:
- Text: A text instance containing highlighted syntax.
- """
-
- base_style = self._get_base_style()
- justify: JustifyMethod = (
- "default" if base_style.transparent_background else "left"
- )
-
- text = Text(
- justify=justify,
- style=base_style,
- tab_size=self.tab_size,
- no_wrap=not self.word_wrap,
- )
- _get_theme_style = self._theme.get_style_for_token
-
- lexer = self.lexer or self.default_lexer
-
- if lexer is None:
- text.append(code)
- else:
- if line_range:
- # More complicated path to only stylize a portion of the code
- # This speeds up further operations as there are less spans to process
- line_start, line_end = line_range
-
- def line_tokenize() -> Iterable[Tuple[Any, str]]:
- """Split tokens to one per line."""
- assert lexer # required to make MyPy happy - we know lexer is not None at this point
-
- for token_type, token in lexer.get_tokens(code):
- while token:
- line_token, new_line, token = token.partition("\n")
- yield token_type, line_token + new_line
-
- def tokens_to_spans() -> Iterable[Tuple[str, Optional[Style]]]:
- """Convert tokens to spans."""
- tokens = iter(line_tokenize())
- line_no = 0
- _line_start = line_start - 1 if line_start else 0
-
- # Skip over tokens until line start
- while line_no < _line_start:
- try:
- _token_type, token = next(tokens)
- except StopIteration:
- break
- yield (token, None)
- if token.endswith("\n"):
- line_no += 1
- # Generate spans until line end
- for token_type, token in tokens:
- yield (token, _get_theme_style(token_type))
- if token.endswith("\n"):
- line_no += 1
- if line_end and line_no >= line_end:
- break
-
- text.append_tokens(tokens_to_spans())
-
- else:
- text.append_tokens(
- (token, _get_theme_style(token_type))
- for token_type, token in lexer.get_tokens(code)
- )
- if self.background_color is not None:
- text.stylize(f"on {self.background_color}")
-
- if self._stylized_ranges:
- self._apply_stylized_ranges(text)
-
- return text
-
- def stylize_range(
- self, style: StyleType, start: SyntaxPosition, end: SyntaxPosition
- ) -> None:
- """
- Adds a custom style on a part of the code, that will be applied to the syntax display when it's rendered.
- Line numbers are 1-based, while column indexes are 0-based.
-
- Args:
- style (StyleType): The style to apply.
- start (Tuple[int, int]): The start of the range, in the form `[line number, column index]`.
- end (Tuple[int, int]): The end of the range, in the form `[line number, column index]`.
- """
- self._stylized_ranges.append(_SyntaxHighlightRange(style, start, end))
-
- def _get_line_numbers_color(self, blend: float = 0.3) -> Color:
- background_style = self._theme.get_background_style() + self.background_style
- background_color = background_style.bgcolor
- if background_color is None or background_color.is_system_defined:
- return Color.default()
- foreground_color = self._get_token_color(Token.Text)
- if foreground_color is None or foreground_color.is_system_defined:
- return foreground_color or Color.default()
- new_color = blend_rgb(
- background_color.get_truecolor(),
- foreground_color.get_truecolor(),
- cross_fade=blend,
- )
- return Color.from_triplet(new_color)
-
- @property
- def _numbers_column_width(self) -> int:
- """Get the number of characters used to render the numbers column."""
- column_width = 0
- if self.line_numbers:
- column_width = (
- len(str(self.start_line + self.code.count("\n")))
- + NUMBERS_COLUMN_DEFAULT_PADDING
- )
- return column_width
-
- def _get_number_styles(self, console: Console) -> Tuple[Style, Style, Style]:
- """Get background, number, and highlight styles for line numbers."""
- background_style = self._get_base_style()
- if background_style.transparent_background:
- return Style.null(), Style(dim=True), Style.null()
- if console.color_system in ("256", "truecolor"):
- number_style = Style.chain(
- background_style,
- self._theme.get_style_for_token(Token.Text),
- Style(color=self._get_line_numbers_color()),
- self.background_style,
- )
- highlight_number_style = Style.chain(
- background_style,
- self._theme.get_style_for_token(Token.Text),
- Style(bold=True, color=self._get_line_numbers_color(0.9)),
- self.background_style,
- )
- else:
- number_style = background_style + Style(dim=True)
- highlight_number_style = background_style + Style(dim=False)
- return background_style, number_style, highlight_number_style
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- _, right, _, left = Padding.unpack(self.padding)
- padding = left + right
- if self.code_width is not None:
- width = self.code_width + self._numbers_column_width + padding + 1
- return Measurement(self._numbers_column_width, width)
- lines = self.code.splitlines()
- width = (
- self._numbers_column_width
- + padding
- + (max(cell_len(line) for line in lines) if lines else 0)
- )
- if self.line_numbers:
- width += 1
- return Measurement(self._numbers_column_width, width)
-
- def __rich_console__(
- self, console: Console, options: ConsoleOptions
- ) -> RenderResult:
- segments = Segments(self._get_syntax(console, options))
- if self.padding:
- yield Padding(
- segments, style=self._theme.get_background_style(), pad=self.padding
- )
- else:
- yield segments
-
- def _get_syntax(
- self,
- console: Console,
- options: ConsoleOptions,
- ) -> Iterable[Segment]:
- """
- Get the Segments for the Syntax object, excluding any vertical/horizontal padding
- """
- transparent_background = self._get_base_style().transparent_background
- code_width = (
- (
- (options.max_width - self._numbers_column_width - 1)
- if self.line_numbers
- else options.max_width
- )
- if self.code_width is None
- else self.code_width
- )
-
- ends_on_nl, processed_code = self._process_code(self.code)
- text = self.highlight(processed_code, self.line_range)
-
- if not self.line_numbers and not self.word_wrap and not self.line_range:
- if not ends_on_nl:
- text.remove_suffix("\n")
- # Simple case of just rendering text
- style = (
- self._get_base_style()
- + self._theme.get_style_for_token(Comment)
- + Style(dim=True)
- + self.background_style
- )
- if self.indent_guides and not options.ascii_only:
- text = text.with_indent_guides(self.tab_size, style=style)
- text.overflow = "crop"
- if style.transparent_background:
- yield from console.render(
- text, options=options.update(width=code_width)
- )
- else:
- syntax_lines = console.render_lines(
- text,
- options.update(width=code_width, height=None, justify="left"),
- style=self.background_style,
- pad=True,
- new_lines=True,
- )
- for syntax_line in syntax_lines:
- yield from syntax_line
- return
-
- start_line, end_line = self.line_range or (None, None)
- line_offset = 0
- if start_line:
- line_offset = max(0, start_line - 1)
- lines: Union[List[Text], Lines] = text.split("\n", allow_blank=ends_on_nl)
- if self.line_range:
- if line_offset > len(lines):
- return
- lines = lines[line_offset:end_line]
-
- if self.indent_guides and not options.ascii_only:
- style = (
- self._get_base_style()
- + self._theme.get_style_for_token(Comment)
- + Style(dim=True)
- + self.background_style
- )
- lines = (
- Text("\n")
- .join(lines)
- .with_indent_guides(self.tab_size, style=style + Style(italic=False))
- .split("\n", allow_blank=True)
- )
-
- numbers_column_width = self._numbers_column_width
- render_options = options.update(width=code_width)
-
- highlight_line = self.highlight_lines.__contains__
- _Segment = Segment
- new_line = _Segment("\n")
-
- line_pointer = "> " if options.legacy_windows else "❱ "
-
- (
- background_style,
- number_style,
- highlight_number_style,
- ) = self._get_number_styles(console)
-
- for line_no, line in enumerate(lines, self.start_line + line_offset):
- if self.word_wrap:
- wrapped_lines = console.render_lines(
- line,
- render_options.update(height=None, justify="left"),
- style=background_style,
- pad=not transparent_background,
- )
- else:
- segments = list(line.render(console, end=""))
- if options.no_wrap:
- wrapped_lines = [segments]
- else:
- wrapped_lines = [
- _Segment.adjust_line_length(
- segments,
- render_options.max_width,
- style=background_style,
- pad=not transparent_background,
- )
- ]
-
- if self.line_numbers:
- wrapped_line_left_pad = _Segment(
- " " * numbers_column_width + " ", background_style
- )
- for first, wrapped_line in loop_first(wrapped_lines):
- if first:
- line_column = str(line_no).rjust(numbers_column_width - 2) + " "
- if highlight_line(line_no):
- yield _Segment(line_pointer, Style(color="red"))
- yield _Segment(line_column, highlight_number_style)
- else:
- yield _Segment(" ", highlight_number_style)
- yield _Segment(line_column, number_style)
- else:
- yield wrapped_line_left_pad
- yield from wrapped_line
- yield new_line
- else:
- for wrapped_line in wrapped_lines:
- yield from wrapped_line
- yield new_line
-
- def _apply_stylized_ranges(self, text: Text) -> None:
- """
- Apply stylized ranges to a text instance,
- using the given code to determine the right portion to apply the style to.
-
- Args:
- text (Text): Text instance to apply the style to.
- """
- code = text.plain
- newlines_offsets = [
- # Let's add outer boundaries at each side of the list:
- 0,
- # N.B. using "\n" here is much faster than using metacharacters such as "^" or "\Z":
- *[
- match.start() + 1
- for match in re.finditer("\n", code, flags=re.MULTILINE)
- ],
- len(code) + 1,
- ]
-
- for stylized_range in self._stylized_ranges:
- start = _get_code_index_for_syntax_position(
- newlines_offsets, stylized_range.start
- )
- end = _get_code_index_for_syntax_position(
- newlines_offsets, stylized_range.end
- )
- if start is not None and end is not None:
- text.stylize(stylized_range.style, start, end)
-
- def _process_code(self, code: str) -> Tuple[bool, str]:
- """
- Applies various processing to a raw code string
- (normalises it so it always ends with a line return, dedents it if necessary, etc.)
-
- Args:
- code (str): The raw code string to process
-
- Returns:
- Tuple[bool, str]: the boolean indicates whether the raw code ends with a line return,
- while the string is the processed code.
- """
- ends_on_nl = code.endswith("\n")
- processed_code = code if ends_on_nl else code + "\n"
- processed_code = (
- textwrap.dedent(processed_code) if self.dedent else processed_code
- )
- processed_code = processed_code.expandtabs(self.tab_size)
- return ends_on_nl, processed_code
-
-
-def _get_code_index_for_syntax_position(
- newlines_offsets: Sequence[int], position: SyntaxPosition
-) -> Optional[int]:
- """
- Returns the index of the code string for the given positions.
-
- Args:
- newlines_offsets (Sequence[int]): The offset of each newline character found in the code snippet.
- position (SyntaxPosition): The position to search for.
-
- Returns:
- Optional[int]: The index of the code string for this position, or `None`
- if the given position's line number is out of range (if it's the column that is out of range
- we silently clamp its value so that it reaches the end of the line)
- """
- lines_count = len(newlines_offsets)
-
- line_number, column_index = position
- if line_number > lines_count or len(newlines_offsets) < (line_number + 1):
- return None # `line_number` is out of range
- line_index = line_number - 1
- line_length = newlines_offsets[line_index + 1] - newlines_offsets[line_index] - 1
- # If `column_index` is out of range: let's silently clamp it:
- column_index = min(line_length, column_index)
- return newlines_offsets[line_index] + column_index
-
-
-if __name__ == "__main__": # pragma: no cover
- import argparse
- import sys
-
- parser = argparse.ArgumentParser(
- description="Render syntax to the console with Rich"
- )
- parser.add_argument(
- "path",
- metavar="PATH",
- help="path to file, or - for stdin",
- )
- parser.add_argument(
- "-c",
- "--force-color",
- dest="force_color",
- action="store_true",
- default=None,
- help="force color for non-terminals",
- )
- parser.add_argument(
- "-i",
- "--indent-guides",
- dest="indent_guides",
- action="store_true",
- default=False,
- help="display indent guides",
- )
- parser.add_argument(
- "-l",
- "--line-numbers",
- dest="line_numbers",
- action="store_true",
- help="render line numbers",
- )
- parser.add_argument(
- "-w",
- "--width",
- type=int,
- dest="width",
- default=None,
- help="width of output (default will auto-detect)",
- )
- parser.add_argument(
- "-r",
- "--wrap",
- dest="word_wrap",
- action="store_true",
- default=False,
- help="word wrap long lines",
- )
- parser.add_argument(
- "-s",
- "--soft-wrap",
- action="store_true",
- dest="soft_wrap",
- default=False,
- help="enable soft wrapping mode",
- )
- parser.add_argument(
- "-t", "--theme", dest="theme", default="monokai", help="pygments theme"
- )
- parser.add_argument(
- "-b",
- "--background-color",
- dest="background_color",
- default=None,
- help="Override background color",
- )
- parser.add_argument(
- "-x",
- "--lexer",
- default=None,
- dest="lexer_name",
- help="Lexer name",
- )
- parser.add_argument(
- "-p", "--padding", type=int, default=0, dest="padding", help="Padding"
- )
- parser.add_argument(
- "--highlight-line",
- type=int,
- default=None,
- dest="highlight_line",
- help="The line number (not index!) to highlight",
- )
- args = parser.parse_args()
-
- from rich.console import Console
-
- console = Console(force_terminal=args.force_color, width=args.width)
-
- if args.path == "-":
- code = sys.stdin.read()
- syntax = Syntax(
- code=code,
- lexer=args.lexer_name,
- line_numbers=args.line_numbers,
- word_wrap=args.word_wrap,
- theme=args.theme,
- background_color=args.background_color,
- indent_guides=args.indent_guides,
- padding=args.padding,
- highlight_lines={args.highlight_line},
- )
- else:
- syntax = Syntax.from_path(
- args.path,
- lexer=args.lexer_name,
- line_numbers=args.line_numbers,
- word_wrap=args.word_wrap,
- theme=args.theme,
- background_color=args.background_color,
- indent_guides=args.indent_guides,
- padding=args.padding,
- highlight_lines={args.highlight_line},
- )
- console.print(syntax, soft_wrap=args.soft_wrap)
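The code above is the tail end of Rich's rich/syntax.py module (the Syntax renderable and its command-line demo) bundled inside the deleted Space. As a quick orientation aid, here is a minimal usage sketch written against the standard rich package; it is not part of the deleted file, and the style string and the positions passed to stylize_range are illustrative only.

# Minimal sketch (assumes the standard rich package is installed).
from rich.console import Console
from rich.syntax import Syntax

code = 'def greet(name):\n    return f"Hello, {name}!"\n'

# Lexer name, theme and line numbers mirror the options the CLI block above exposes.
syntax = Syntax(code, "python", theme="monokai", line_numbers=True)

# stylize_range() takes a 1-based line number and a 0-based column (see its docstring):
# highlight the word "greet" on line 1, columns 4 to 9.
syntax.stylize_range("bold red", (1, 4), (1, 9))

Console().print(syntax)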
diff --git a/spaces/pyodide-demo/self-hosted/msgpack.js b/spaces/pyodide-demo/self-hosted/msgpack.js
deleted file mode 100644
index 387082467c4b487ff3857fdf8fbfb20ef0acfc00..0000000000000000000000000000000000000000
--- a/spaces/pyodide-demo/self-hosted/msgpack.js
+++ /dev/null
@@ -1 +0,0 @@
-var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="msgpack.data";var REMOTE_PACKAGE_BASE="msgpack.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","msgpack",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","msgpack-1.0.3-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:92104,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1222,2519,3590,4546,5802,7044,8223,9244,10330,11307,12337,13017,13852,14775,15927,17144,17948,18705,19549,20289,21239,22493,23680,24930,25969,27130,28348,29070,30191,31386,32296,33692,35156,36668,38006,39377,40650,41672,43024,44408,45658,46976,48605,50106,51496,52833,54097,55574,57210,58704,59777,61222,62558,64009,65325,66533,67975,69425,70595,72058,73557,75086,76576,78013,79259,80578,81665,82774,83789,84967,85891,86940,88285,89659,91027],sizes:[1222,1297,1071,956,1256,1242,1179,1021,1086,977,1030,680,835,923,1152,1217,804,757,844,740,950,1254,1187,1250,1039,1161,1218,722,1121,1195,910,1396,1464,1512,1338,1371,1273,1022,1352,1384,1250,1318,1629,1501,1390,1337,1264,1477,1636,1494,1073,1445,1336,1451,1316,1208,1442,1450,1170,1463,1499,1529,1490,1437,1246,1319,1087,1109,1015,1178,924,1049,1345,1374,1368,1077],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_msgpack.data")}Module["addRunDependency"]("datafile_msgpack.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/msgpack/__init__.py",start:0,end:1118,audio:0},{filename:"/lib/python3.9/site-packages/msgpack/_version.py",start:1118,end:1138,audio:0},{filename:"/lib/python3.9/site-packages/msgpack/exceptions.py",start:1138,end:2219,audio:0},{filename:"/lib/python3.9/site-packages/msgpack/ext.py",start:2219,end:8307,audio:0},{filename:"/lib/python3.9/site-packages/msgpack/fallback.py",start:8307,end:42782,audio:0},{filename:"/lib/python3.9/site-packages/msgpack/_cmsgpack.so",start:42782,end:145659,audio:0},{filename:"/lib/python3.9/site-packages/msgpack-1.0.3-py3.9.egg-info/PKG-INFO",start:145659,end:154384,audio:0},{filename:"/lib/python3.9/site-packages/msgpack-1.0.3-py3.9.egg-info/dependency_links.txt",start:154384,end:154385,audio:0},{filename:"/lib/python3.9/site-packages/msgpack-1.0.3-py3.9.egg-info/top_level.txt",start:154385,end:154393,audio:0},{filename:"/lib/python3.9/site-packages/msgpack-1.0.3-py3.9.egg-info/SOURCES.txt",start:154393,end:155253,audio:0}],remote_package_size:96200,package_uuid:"586264ac-faff-423f-93e5-aaf1541e2081"})})();
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Signalyst Hqplayer Full 13.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Signalyst Hqplayer Full 13.md
deleted file mode 100644
index 0369ba89ae49c062a7a93c1822dc093f6f3974a6..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Download Signalyst Hqplayer Full 13.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-Download Signalyst Hqplayer Full 13: The Best Audio Software for Professionals and Enthusiasts
-
-If you are looking for a high-quality audio software that can convert, record and play various audio formats, you should download Signalyst Hqplayer Full 13. This software is designed to provide you with the best audio quality possible, using advanced algorithms and features. In this article, we will explain what Signalyst Hqplayer Full 13 can do for you and why you should download it today.
-Download Signalyst Hqplayer Full 13
Download Zip ○○○ https://geags.com/2uCsdR
-
-What is Signalyst Hqplayer Full 13?
-
-Signalyst Hqplayer Full 13 is a high-quality audio file converter, recorder and player for 64-bit Windows 7/8.1/10, macOS and Linux. It is mainly intended as a mastering post-production tool for producing high-quality final distribution files in various formats.
-
-Signalyst Hqplayer Full 13 supports many input and output formats, such as CDDA (Audio CD), FLAC, WavPack (PCM and DSD), DSDIFF (DFF, non-compressed), DSF, RIFF (WAV, including DXD/RF64), AIFF, MP3, PCM and DSD from an audio device.
-
-Signalyst Hqplayer Full 13 also features several user-selectable high-quality resamplers, dither/noise shaping algorithms and delta-sigma modulators. It can perform software upsampling/downsampling with selectable algorithm up to 32-bit 1.536 MHz or down to 16-bit 32 kHz PCM (45 options, most "apodizing"). It can also perform delta-sigma modulation for upsampling PCM/DSD content up to 49.152 MHz 1-bit SDM (12 modulators, 3 SDM rate converters, 49 oversampling options).
-
-Signalyst Hqplayer Full 13 also includes a pipeline matrix for performing channel routing, mixing and gain adjustments, including convolution engine for applying advanced equalization or other kinds of effects. It can also perform source content analysis to detect need for apodizing oversampling filter.
-
-Signalyst Hqplayer Full 13 also allows you to record and playback both PCM and SDM (DSD) formats. You can also monitor the final output format during conversion and recording, which can be enabled/disabled on the fly.
-
-
-Why Download Signalyst Hqplayer Full 13?
-
-There are many reasons why you should download Signalyst Hqplayer Full 13 if you are serious about audio quality. Here are some of them:
-
-
-- Signalyst Hqplayer Full 13 can help you produce high-quality final distribution files in various formats, such as FLAC, WavPack (PCM and DSD), DSDIFF (DFF, non-compressed), DSF, RIFF (WAV, including DXD/RF64), AIFF and MP3.
-- Signalyst Hqplayer Full 13 can help you convert any of the supported source formats to any of the supported destination formats. For example, you can convert DXD intermediate format to both PCM and DSD outputs.
-- Signalyst Hqplayer Full 13 can help you improve the quality of your audio files by using advanced algorithms and features, such as software upsampling/downsampling, delta-sigma modulation, dither/noise shaping, convolution engine, parametric equalizers and more.
-- Signalyst Hqplayer Full 13 can help you record and playback both PCM and SDM (DSD) formats with high quality. You can also monitor the final output format during conversion and recording.
-- Signalyst Hqplayer Full 13 can help you customize your audio processing by using the pipeline matrix for routing, mixing and gain adjustments. You can also apply advanced equalization or other kinds of effects using the convolution engine.
-- Signalyst Hqplayer Full 13 can help you bypass the operating system software mixer and sample rate conversion for better performance and quality. It also supports professional ASIO drivers for low-latency audio.
-
-
-How to Download Signalyst Hqplayer Full 13?
-
-If you are interested in downloading Signalyst Hqplayer Full 13, you can visit the official website of Signalyst at https://www.signalyst.com/professional.html. There you can find more information about the software and its features. You can also download a trial version of Signalyst Hqplayer Full 13 to test it before buying it.
-
-The trial version of Signalyst Hqplayer Full 13 has some limitations, such as limited number of conversions per session and limited duration of playback per session. If you want to unlock all the features of Signalyst Hqplayer Full 13, you need to buy a license key from Signalyst.
-
-The license key for Signalyst Hqplayer Full 13 costs €249.00 EUR (excluding VAT). You can buy it online using PayPal or credit card. Once you buy the license key, you will receive it by email. You can then enter it in the software to activate it.
-
-Conclusion
-
-Signalyst Hqplayer Full 13 is a high-quality audio file converter, recorder and player that can help you produce high-quality final distribution files in various formats. It also offers many advanced features and algorithms to improve your audio quality and customize your audio processing. If you are looking for a professional audio software that can handle various audio formats with high quality, you should download Signalyst Hqplayer Full 13 today.
-How to Use Signalyst Hqplayer Full 13?
-
-Using Signalyst Hqplayer Full 13 is easy and intuitive. You can use the software as a standalone application or as a network audio endpoint with Signalyst Asynchronous Network Audio Adapters. You can also control the software remotely using external control applications such as HQPDcontrol for Android and iOS.
-
-To use Signalyst Hqplayer Full 13 as a standalone application, you need to install it on your computer and launch it. You will see the main window of the software, where you can configure your audio settings, select your input and output devices, load your audio files or playlists, and start playing or converting them.
-
-To use Signalyst Hqplayer Full 13 as a network audio endpoint, you need to install it on a computer that is connected to your network and audio system. You also need to install Signalyst Asynchronous Network Audio Adapters on another computer that will act as a source of audio streams. You can then stream audio from the source computer to the endpoint computer using Signalyst Hqplayer Full 13.
-
-To control Signalyst Hqplayer Full 13 remotely, you need to install HQPDcontrol on your Android or iOS device and connect it to the same network as your computer running Signalyst Hqplayer Full 13. You can then use your device to browse your local library or Qobuz, select your audio files or playlists, and control the playback or conversion of Signalyst Hqplayer Full 13.
-
-What are the Benefits of Downloading Signalyst Hqplayer Full 13?
-
-Downloading Signalyst Hqplayer Full 13 can bring you many benefits, such as:
-
-
-- Enjoying high-quality audio playback and conversion with advanced algorithms and features.
-- Producing high-quality final distribution files in various formats for different purposes.
-- Customizing your audio processing with various options and settings.
-- Bypassing the operating system software mixer and sample rate conversion for better performance and quality.
-- Using professional ASIO drivers for low-latency audio.
-- Streaming audio over network with high quality and reliability.
-- Controlling the software remotely with your mobile device.
-- Supporting new generation of Nvidia GPUs for faster processing.
-- Getting regular updates and improvements from Signalyst.
-
-
-Conclusion
-
-Signalyst Hqplayer Full 13 is a high-quality audio file converter, recorder and player that can help you produce high-quality final distribution files in various formats. It also offers many advanced features and algorithms to improve your audio quality and customize your audio processing. If you are looking for a professional audio software that can handle various audio formats with high quality, you should download Signalyst Hqplayer Full 13 today.
-What are the Reviews of Signalyst Hqplayer Full 13?
-
-Signalyst Hqplayer Full 13 has received many positive reviews from users and experts who have tried it. Here are some of the reviews of Signalyst Hqplayer Full 13:
-
-
-"I have been using HQPlayer for several years now and I can say that it is the best audio software I have ever used. It can handle any audio format with ease and produce amazing sound quality. It also has many options and features to customize your audio processing. I highly recommend it to anyone who cares about audio quality."
-- John, a professional audio engineer
-
-
-
-"HQPlayer is a must-have for any audiophile who wants to enjoy high-resolution audio on their system. It can upsample and convert any audio file to PCM or DSD with superb quality. It also supports network audio streaming and remote control. It is simply the best audio player out there."
-- Lisa, an audiophile and music lover
-
-
-
-"I have been using HQPlayer for a few months now and I am very impressed by its performance and quality. It can play any audio file with high quality and low latency. It also has a very user-friendly interface and a lot of options to tweak your sound. It is worth every penny."
-- Mark, a music producer and DJ
-
-
-Where to Download Signalyst Hqplayer Full 13?
-
-If you want to download Signalyst Hqplayer Full 13, you can do so from the official website of Signalyst at https://www.signalyst.com/professional.html. There you can find more information about the software and its features. You can also download a trial version of Signalyst Hqplayer Full 13 to test it before buying it.
-
-The trial version of Signalyst Hqplayer Full 13 has some limitations, such as limited number of conversions per session and limited duration of playback per session. If you want to unlock all the features of Signalyst Hqplayer Full 13, you need to buy a license key from Signalyst.
-
-The license key for Signalyst Hqplayer Full 13 costs €249.00 EUR (excluding VAT). You can buy it online using PayPal or credit card. Once you buy the license key, you will receive it by email. You can then enter it in the software to activate it.
-
-Conclusion
-
-Signalyst Hqplayer Full 13 is a high-quality audio file converter, recorder and player that can help you produce high-quality final distribution files in various formats. It also offers many advanced features and algorithms to improve your audio quality and customize your audio processing. If you are looking for a professional audio software that can handle various audio formats with high quality, you should download Signalyst Hqplayer Full 13 today.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Raajneeti Movie Download Dubbed In Hindi) [TOP].md b/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Raajneeti Movie Download Dubbed In Hindi) [TOP].md
deleted file mode 100644
index b3d672e9b839c740ed1c8a7db903e7defea1dd38..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Raajneeti Movie Download Dubbed In Hindi) [TOP].md
+++ /dev/null
@@ -1,12 +0,0 @@
-HD Online Player (Raajneeti movie download dubbed in hindi)
Download Zip ⚙ https://geags.com/2uCqLS
-
-Tiyagu (Vikram), a medical student, leads a gang of four to kidnap corrupt politicians. Watch porn movies under the tag: porn movies with Russian translation.
-Large selection of films in the section - porn movies with Russian translation in .
-Watch online Porn movies with translation for free in good quality.
-The best world porn of the world, famous actors and directors, absolutely.
-Watch Porn movies with Russian translation for free online.
-Porn films with Russian translation · Porn films with a plot · Artistic .
-2 03 2019 - Porn movies with plot. 8a78ff9644
-
-
-
diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py
deleted file mode 100644
index becbfae85683a13bbb19d3ea6c840da24e61e01e..0000000000000000000000000000000000000000
--- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_123812KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
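For context, here is a rough sketch of how the CascadedASPPNet defined above is typically driven. It assumes PyTorch is installed and that the companion layers_123821KB module imports cleanly from the path shown in the diff header; the spectrogram shape and the aggressiveness values are illustrative assumptions, not settings taken from the deleted Space.

import torch

# Import path mirrors the deleted file's location; adjust it to your checkout.
from lib.uvr5_pack.lib_v5.nets_123812KB import CascadedASPPNet

model = CascadedASPPNet(n_fft=2048)  # max_bin = 1024, output_bin = 1025
model.eval()

# (batch, channels, frequency bins, time frames) magnitude spectrogram.
# The time axis must exceed 2 * offset (= 256) because predict() crops the edges.
spec = torch.rand(1, 2, 1025, 512)

with torch.no_grad():
    # The optional aggressiveness dict sharpens the mask above split_bin;
    # both numbers here are made up for the example.
    out = model.predict(spec, aggressiveness={"split_bin": 85, "value": 0.3})

print(out.shape)  # expected (1, 2, 1025, 256) after the 128-frame crop on each side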
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download _VERIFIED_ Free Movie Deewane Huye Paagal In Hindi Kickass Torrent.md b/spaces/raedeXanto/academic-chatgpt-beta/Download _VERIFIED_ Free Movie Deewane Huye Paagal In Hindi Kickass Torrent.md
deleted file mode 100644
index d406f61253e6da290a50138329aeb6e9c9883eb9..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download _VERIFIED_ Free Movie Deewane Huye Paagal In Hindi Kickass Torrent.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-How to Download Free Movie Deewane Huye Paagal In Hindi Kickass Torrent
-
-Deewane Huye Paagal is a 2005 Indian Hindi-language comedy film starring Akshay Kumar, Suniel Shetty, Shahid Kapoor and Rimi Sen. The film is a hilarious spoof of Hollywood movies like The Terminator, The Matrix and The Bourne Identity. If you are a fan of this movie and want to watch it for free, you might be wondering how to download it from a kickass torrent site.
-Download Free Movie Deewane Huye Paagal In Hindi Kickass Torrent
DOWNLOAD ✫ https://tinourl.com/2uL3Rt
-
-In this article, we will show you how to download free movie Deewane Huye Paagal in Hindi kickass torrent in a few simple steps. But before we do that, let's understand what kickass torrents are and why they are popular among movie lovers.
-
-What are kickass torrents?
-
-Kickass torrents are files that contain metadata about movies, TV shows, music, games and other content that can be downloaded using a peer-to-peer network called BitTorrent. BitTorrent is a protocol that allows users to share large files over the internet without relying on a central server. Users who download or upload files are called peers, and the files are divided into small pieces called chunks. Peers exchange chunks with each other until they have the complete file.
-
-Kickass torrents are popular because they offer a wide variety of content for free, often in high quality and multiple languages. However, they also come with some risks and challenges. For one thing, downloading or uploading copyrighted content without permission is illegal in many countries and can result in fines or legal action. For another thing, some kickass torrent sites may contain malware, viruses or fake files that can harm your device or compromise your privacy.
-
-Therefore, if you decide to use kickass torrents, you should do so at your own risk and with caution. You should also use a VPN (virtual private network) service to hide your IP address and encrypt your traffic, and an antivirus software to scan your files before opening them.
-
-
-How to download free movie Deewane Huye Paagal in Hindi kickass torrent?
-
-Now that you know what kickass torrents are and how they work, let's see how you can download free movie Deewane Huye Paagal in Hindi kickass torrent. Here are the steps you need to follow:
-
-
-- Download and install a BitTorrent client on your device. A BitTorrent client is software that enables you to download or upload torrent files. Some of the popular BitTorrent clients are uTorrent, BitTorrent, qBittorrent and Vuze.
-- Find a reliable kickass torrent site that has the movie Deewane Huye Paagal in Hindi. You can use a search engine like Google or Bing to look for keywords like "Deewane Huye Paagal Hindi kickass torrent" or "Deewane Huye Paagal Hindi 720p torrent". Alternatively, you can use a torrent aggregator site like Torrentz2 or 1337x that indexes torrents from various sources.
-- Choose a torrent file that has a high number of seeders and leechers. Seeders are peers who have the complete file and are sharing it with others. Leechers are peers who are downloading the file but have not completed it yet. A high number of seeders and leechers indicates that the file is popular and available, and that the download speed will be faster.
-- Download the torrent file or copy its magnet link. A magnet link is a URL that contains information about the file, such as its name, size and hash value. You can use either the torrent file or the magnet link to start the download process.
-- Open the torrent file or paste the magnet link into your BitTorrent client. Your BitTorrent client will connect to other peers who have the file and start downloading it. You can monitor the progress of the download on your BitTorrent client interface.
-- Wait until the download is complete. Depending on the size of the file and your internet speed, this may take from a few minutes to several hours. Once the download is complete, you can find the movie file in your designated folder.
-- Enjoy watching Deewane Huye Paagal for free!
-
-
-We hope this article helped you learn how to download free movie Deewane Huye Paagal in Hindi kickass torrent.
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/FactoryTalk View Studio Serial Number Best Practices for HMI Development and Maintenance.md b/spaces/raedeXanto/academic-chatgpt-beta/FactoryTalk View Studio Serial Number Best Practices for HMI Development and Maintenance.md
deleted file mode 100644
index 9c7e32add79d228757fbaaff24c22dc7778d6c5c..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/FactoryTalk View Studio Serial Number Best Practices for HMI Development and Maintenance.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-
- Benefits of using FactoryTalk View Studio for HMI design and development
-H2: How to install FactoryTalk View Studio on your computer?
- - System requirements and compatibility
- - Installation steps and options
- - Activation and licensing
-H2: How to create and run HMI applications with FactoryTalk View Studio?
- - Overview of FactoryTalk View Studio editors and tools
- - How to create a new project and add components
- - How to test and debug your application
- - How to deploy your application to a target device
-H2: How to use FactoryTalk ViewPoint for web-based HMI access?
- - What is FactoryTalk ViewPoint and how does it work?
- - How to enable FactoryTalk ViewPoint for your application
- - How to access your application from a web browser
- - How to customize your web interface and user experience
-H2: How to troubleshoot common issues with FactoryTalk View Studio?
- - How to find and fix errors and warnings in your project
- - How to update and patch your software
- - How to contact technical support and get help
- - How to access online resources and documentation
-H1: Conclusion
- - Summary of the main points and takeaways
- - Call to action and next steps
-
-What is FactoryTalk View Studio and why do you need it?
-If you are looking for a powerful and versatile software tool for designing and developing human-machine interface (HMI) applications, you might want to consider FactoryTalk View Studio. FactoryTalk View Studio is part of the FactoryTalk View family of products from Rockwell Automation, which also includes FactoryTalk View Site Edition (SE) and FactoryTalk View Machine Edition (ME).
-FactoryTalk View Studio is the design-time environment for both FactoryTalk View SE and ME, which provide the runtime environments for network distributed, network station, and local station HMI applications. With FactoryTalk View Studio, you can create graphical screens, animations, alarms, trends, reports, recipes, security, and more for your HMI applications. You can also use various editors and tools to configure data sources, communication protocols, tags, expressions, scripts, macros, languages, and other settings.
-factorytalk view studio serial number
Download Zip ⇒ https://tinourl.com/2uL1eQ
-The benefits of using FactoryTalk View Studio for your HMI projects are numerous. Some of them are:
-
-- You can use one integrated development environment for both site-level and machine-level applications.
-- You can leverage the power of the FactoryTalk platform for data management, security, diagnostics, audit trails, asset management, and more.
-- You can reuse existing components and templates across multiple projects and applications.
-- You can test and debug your applications in simulation mode or on a live device.
-- You can deploy your applications to various target platforms, such as Windows-based computers, PanelView Plus terminals, or web browsers.
-- You can easily update and maintain your applications with version control, backup and restore, patching, and online editing features.
-
-In this article, we will show you how to install FactoryTalk View Studio on your computer, how to create and run HMI applications with it, how to use FactoryTalk ViewPoint for web-based HMI access, and how to troubleshoot common issues with it. Let's get started!
- How to install FactoryTalk View Studio on your computer?
-Before you install FactoryTalk View Studio on your computer, you need to make sure that your system meets the minimum requirements and is compatible with the software. You can find the detailed system requirements and compatibility information in the FactoryTalk View Site Edition Installation Guide.
-The installation steps and options may vary depending on the version of the software you have. In general, you need to follow these steps:
-
-- Insert the installation media (DVD or USB) into your computer or download the software from the Rockwell Automation Download Center.
-- Run the setup.exe file from the installation media or the downloaded folder.
-- Select the language for the installation program.
-- Read and accept the license agreement.
-- Select the type of installation you want: Typical (recommended), Custom (to select specific components), or Compact (to install only essential components).
-- Select the destination folder for the installation.
-- Select whether you want to create shortcuts on your desktop or start menu.
-- Select whether you want to install additional components or features, such as RSLinx Classic Gateway (for communication), RSLogix Emulate 5000 (for simulation), or Microsoft SQL Server Express (for data logging).
-- Click Install to start the installation process.
-- Wait for the installation to complete.
-
-After you install FactoryTalk View Studio on your computer, you need to activate it with a valid product key. You can use either a hardware dongle or a software activation method. You can find more information about activation and licensing in the FactoryTalk View Ordering Guide.
- How to create and run HMI applications with FactoryTalk View Studio?
-To create an HMI application with FactoryTalk View Studio, you need to start a new project and add components to it. A project is a collection of files that define an HMI application. A component is a part of an HMI application that performs a specific function or provides a specific feature.
-The main components of an HMI application are:
-How to find factorytalk view studio serial number
-Factorytalk view studio activation key generator
-Factorytalk view studio license crack download
-Factorytalk view studio me se activation
-Factorytalk view studio enterprise edition serial number
-Factorytalk view studio site edition license
-Factorytalk view studio 10.0 serial number
-Factorytalk view studio 11.0 serial number
-Factorytalk view studio 12.0 serial number
-Factorytalk view studio 13.0 serial number
-Factorytalk view studio 14.0 serial number
-Factorytalk view studio 15.0 serial number
-Factorytalk view studio 16.0 serial number
-Factorytalk view studio 17.0 serial number
-Factorytalk view studio 18.0 serial number
-Factorytalk view studio 19.0 serial number
-Factorytalk view studio 20.0 serial number
-Factorytalk view studio 21.0 serial number
-Factorytalk view studio 22.0 serial number
-Factorytalk view studio 23.0 serial number
-Factorytalk view studio me se license transfer
-Factorytalk view studio me se license backup
-Factorytalk view studio me se license restore
-Factorytalk view studio me se license update
-Factorytalk view studio me se license renewal
-Factorytalk view studio me se license price
-Factorytalk view studio me se license cost
-Factorytalk view studio me se license discount
-Factorytalk view studio me se license coupon
-Factorytalk view studio me se license free trial
-Factorytalk view studio enterprise edition license transfer
-Factorytalk view studio enterprise edition license backup
-Factorytalk view studio enterprise edition license restore
-Factorytalk view studio enterprise edition license update
-Factorytalk view studio enterprise edition license renewal
-Factorytalk view studio enterprise edition license price
-Factorytalk view studio enterprise edition license cost
-Factorytalk view studio enterprise edition license discount
-Factorytalk view studio enterprise edition license coupon
-Factorytalk view studio enterprise edition license free trial
-Factorytalk view studio site edition serial key generator
-Factorytalk view studio site edition crack download free
-
-- A display: A graphical screen that shows information or controls for an operator.
-- A tag: A named variable that represents a data point or value in an HMI application.
-- A data source: A device or system that provides data for an HMI application.
-
-To start a new project in FactoryTalk View Studio:
-
-- Open FactoryTalk View Studio from your desktop or start menu shortcut.
-- Select File > New > Project from the menu bar.
-- Type a name for your project in the Project Name field.
-- Select whether you want to create a network distributed application (for FactoryTalk View SE) or a local station application (for FactoryTalk View ME).
-- Select whether you want to use an existing project as a template or start from scratch.
-- Select whether you want to enable security for your project or not.
-- Select whether you want to enable languages for your project or not.
-- Select whether you want to enable logging for your project or not.
-- Select whether you want to enable alarms for your project or not.
-- Select whether you want to enable trends for your project or not.
-- Select whether you want to enable reports for your project or not.
-- Select whether you want to enable recipes for your project or not.
-- Click OK to create your project.
-
- To add components to your project in FactoryTalk View Studio:
-
- - Select the component type from the Project Explorer pane on the left side of the screen. For example, Displays, Tags, Data Servers, etc.
- - Select File > New > Component Name from the menu bar. For example, File > New > Display File.
- - Type a name for your component in the Component Name field.
- - Click OK to create your component.
- - Edit your component using the appropriate editor or tool. For example, use the Graphic Display Editor to design your display screens, use the Tag Browser to create and configure your tags, use the Communication Setup Editor to define your data sources, etc.
-
- To test and debug your application in FactoryTalk View Studio:
-
- - Select Tools > Test Display from the menu bar if you want to test a single display screen in simulation mode.
- - Select Tools > Test Application from the menu bar if you want to test your entire application in simulation mode.
- - Select Tools > Debug Application from the menu bar if you want to test your entire application on a live device connected via RSLinx Classic Gateway.
-
- To deploy your application to a target device:
-
- How to use FactoryTalk ViewPoint for web-based HMI access?
-FactoryTalk ViewPoint is a web-enabled extension of FactoryTalk View SE and ME that allows you to access your HMI applications from any device with a web browser. FactoryTalk ViewPoint supports HTML5 technology, which means you can use it on desktops, laptops, tablets, or smartphones without installing any plugins or software.
-With FactoryTalk ViewPoint, you can view and interact with your HMI screens, alarms, trends, and reports from anywhere and anytime. You can also customize your web interface and user experience by adjusting the screen size, layout, navigation, security, and language settings.
-To use FactoryTalk ViewPoint for your HMI application, you need to enable it in FactoryTalk View Studio and configure some settings. Here are the steps:
-
-- Select Tools > FactoryTalk ViewPoint Configuration from the menu bar in FactoryTalk View Studio.
-- Select the Enable FactoryTalk ViewPoint check box in the General tab.
-- Select the application components that you want to publish to FactoryTalk ViewPoint in the Application Components tab. For example, Displays, Alarms, Trends, etc.
-- Select the security settings that you want to apply to FactoryTalk ViewPoint in the Security tab. For example, User Accounts, Roles, Permissions, etc.
-- Select the language settings that you want to use for FactoryTalk ViewPoint in the Languages tab. For example, Default Language, Supported Languages, etc.
-- Click OK to save your settings and close the dialog box.
-- Select Tools > Publish to FactoryTalk ViewPoint from the menu bar to publish your application components to FactoryTalk ViewPoint.
-
-To access your HMI application from a web browser using FactoryTalk ViewPoint:
-
-- Open your web browser and type the URL of your FactoryTalk ViewPoint server. The URL format is http://<server name>/FactoryTalkViewPoint/.
-- Select your application name from the list of available applications.
-- Enter your user name and password if prompted by the security settings.
-- View and interact with your HMI screens as you would on a local device.
-
- How to troubleshoot common issues with FactoryTalk View Studio?
-FactoryTalk View Studio is a robust and reliable software tool for HMI design and development. However, like any software product, it may encounter some issues or errors from time to time. Here are some tips on how to troubleshoot common issues with FactoryTalk View Studio:
-
- - How to find and fix errors and warnings in your project?
You can use the Error List pane at the bottom of the screen to view and fix errors and warnings in your project. The Error List pane shows the description, source, location, and severity of each error or warning. You can double-click on an error or warning to go to its source and edit it. You can also right-click on an error or warning to get more information or suggestions on how to fix it.
- - How to update and patch your software?
You can use the Rockwell Automation Product Compatibility and Download Center (PCDC) website to check for updates and patches for your software products. You can also use the Update Advisor tool in FactoryTalk Administration Console to scan your system for available updates and patches. You can download and install updates and patches from either source.
- - How to contact technical support and get help?
You can contact Rockwell Automation Technical Support by phone, email, chat, or web form. You can find the contact information and hours of operation on the Rockwell Automation Support website. You can also use the Knowledgebase tool on the same website to search for solutions or articles related to your issue or question.
- - How to access online resources and documentation?
You can access various online resources and documentation for FactoryTalk View Studio and other related products on the Rockwell Automation Literature Library website. You can find user manuals, installation guides, reference manuals, technical data sheets, quick start guides, videos, tutorials, and more on this website.
-
- Conclusion
-In this article, we have introduced you to FactoryTalk View Studio and its features. We have shown you how to install it on your computer, how to create and run HMI applications with it, how to use FactoryTalk ViewPoint for web-based HMI access, and how to troubleshoot common issues with it. We hope you have found this article useful and informative.
-If you want to learn more about FactoryTalk View Studio and other related products from Rockwell Automation, you can visit their official website or contact their technical support team. You can also check out their online resources and documentation for more details and guidance.
-Thank you for reading this article. We hope you have enjoyed it. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
- Frequently Asked Questions
-
- - What is the difference between FactoryTalk View SE and ME?
FactoryTalk View SE is a site-level HMI solution that supports network distributed applications with multiple servers and clients. FactoryTalk View ME is a machine-level HMI solution that supports local station applications with single devices.
- - What are some of the target platforms that I can deploy my HMI applications to?
You can deploy your HMI applications to various target platforms such as Windows-based computers (for both SE and ME), PanelView Plus terminals (for ME), or web browsers (for SE with FactoryTalk ViewPoint).
- - What are some of the editors and tools that I can use in FactoryTalk View Studio?
You can use various editors and tools in FactoryTalk View Studio such as Graphic Display Editor (for designing screens), Tag Browser (for creating tags), Communication Setup Editor (for configuring data sources), Expression Editor (for writing expressions), Macro Editor (for creating macros), Script Editor (for writing scripts), Language Editor (for translating languages), Alarm Setup Editor (for setting up alarms), Trend Setup Editor (for setting up trends), Report Setup Editor (for setting up reports), RecipePro+ Editor (for creating recipes), etc.
- - What are some of the features that I can add to my HMI applications?
You can add various features to your HMI applications such as animations (to show dynamic changes), alarms (to alert operators of abnormal conditions), trends (to show historical data), reports (to generate documents), recipes (to store and retrieve sets of values), security (to restrict access and actions), logging (to record data), languages (to support multiple languages), etc.
- - How can I improve the performance and reliability of my HMI applications?
You can improve the performance and reliability of your HMI applications by following some best practices such as optimizing your graphics (to reduce memory usage), using tags efficiently (to avoid unnecessary reads/writes), using expressions wisely (to avoid complex calculations), using macros sparingly (to avoid overheads), using scripts carefully (to avoid errors or conflicts), using redundancy options (to provide backup or failover), using version control options (to manage changes or revisions), using backup and restore options (to protect your data or files), etc.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/__init__.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/__init__.py
deleted file mode 100644
index 3f5aa62bfcd56165b85d064f5ca0ba59fbe34a72..0000000000000000000000000000000000000000
--- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/text/__init__.py
+++ /dev/null
@@ -1,84 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-import re
-from text import cleaners
-
-# Regular expression matching text enclosed in curly braces:
-_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)')
-
-
-def get_arpabet(word, dictionary):
- word_arpabet = dictionary.lookup(word)
- if word_arpabet is not None:
- return "{" + word_arpabet[0] + "}"
- else:
- return word
-
-
-def text_to_sequence(text, symbols, cleaner_names, dictionary=None):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-
- The text can optionally have ARPAbet sequences enclosed in curly braces embedded
- in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street."
-
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- dictionary: arpabet class with arpabet dictionary
-
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- # Mappings from symbol to numeric ID and vice versa:
- global _id_to_symbol, _symbol_to_id
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- _id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
- sequence = []
-
- space = _symbols_to_sequence(' ')
- # Check for curly braces and treat their contents as ARPAbet:
- while len(text):
- m = _curly_re.match(text)
- if not m:
- clean_text = _clean_text(text, cleaner_names)
- if dictionary is not None:
- clean_text = [get_arpabet(w, dictionary) for w in clean_text.split(" ")]
- for i in range(len(clean_text)):
- t = clean_text[i]
- if t.startswith("{"):
- sequence += _arpabet_to_sequence(t[1:-1])
- else:
- sequence += _symbols_to_sequence(t)
- sequence += space
- else:
- sequence += _symbols_to_sequence(clean_text)
- break
- sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names))
- sequence += _arpabet_to_sequence(m.group(2))
- text = m.group(3)
-
- # remove trailing space
- if dictionary is not None:
- sequence = sequence[:-1] if sequence[-1] == space[0] else sequence
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
-
-
-def _symbols_to_sequence(symbols):
- return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)]
-
-
-def _arpabet_to_sequence(text):
- return _symbols_to_sequence(['@' + s for s in text.split()])
-
-
-def _should_keep_symbol(s):
-    return s in _symbol_to_id and s != '_' and s != '~'
\ No newline at end of file
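The docstring above spells out how text_to_sequence treats plain text versus curly-brace ARPAbet spans. A minimal usage sketch follows; it assumes a companion text.symbols module and a basic_cleaners function in text.cleaners, neither of which is defined in this file.

# Hypothetical usage of text_to_sequence from this module. The symbol list and
# cleaner name are assumptions: the file itself only defines the conversion logic.
from text import text_to_sequence
from text.symbols import symbols          # assumed: pad/eos markers plus ASCII characters

# Plain text is cleaned and mapped symbol-by-symbol to integer IDs.
ids = text_to_sequence("Turn left on Main Street.", symbols, ["basic_cleaners"])

# Curly-brace spans are treated as ARPAbet and looked up as '@'-prefixed symbols.
ids_arpa = text_to_sequence("Turn left on {HH AW1 S S T AH0 N} Street.",
                            symbols, ["basic_cleaners"])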
diff --git a/spaces/ramiin2/AutoGPT/autogpt/agent/agent.py b/spaces/ramiin2/AutoGPT/autogpt/agent/agent.py
deleted file mode 100644
index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/agent/agent.py
+++ /dev/null
@@ -1,197 +0,0 @@
-from colorama import Fore, Style
-
-from autogpt.app import execute_command, get_command
-from autogpt.chat import chat_with_ai, create_chat_message
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
-from autogpt.json_utils.utilities import validate_json
-from autogpt.logs import logger, print_assistant_thoughts
-from autogpt.speech import say_text
-from autogpt.spinner import Spinner
-from autogpt.utils import clean_input
-
-
-class Agent:
- """Agent class for interacting with Auto-GPT.
-
- Attributes:
- ai_name: The name of the agent.
- memory: The memory object to use.
- full_message_history: The full message history.
- next_action_count: The number of actions to execute.
- system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully.
- Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals.
-
- triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is:
- Determine which next command to use, and respond using the format specified above:
- The triggering prompt is not part of the system prompt because between the system prompt and the triggering
- prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve.
- SYSTEM PROMPT
- CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
- TRIGGERING PROMPT
-
- The triggering prompt reminds the AI about its short term meta task (defining the next task)
- """
-
- def __init__(
- self,
- ai_name,
- memory,
- full_message_history,
- next_action_count,
- system_prompt,
- triggering_prompt,
- ):
- self.ai_name = ai_name
- self.memory = memory
- self.full_message_history = full_message_history
- self.next_action_count = next_action_count
- self.system_prompt = system_prompt
- self.triggering_prompt = triggering_prompt
-
- def start_interaction_loop(self):
- # Interaction Loop
- cfg = Config()
- loop_count = 0
- command_name = None
- arguments = None
- user_input = ""
-
- while True:
- # Discontinue if continuous limit is reached
- loop_count += 1
- if (
- cfg.continuous_mode
- and cfg.continuous_limit > 0
- and loop_count > cfg.continuous_limit
- ):
- logger.typewriter_log(
- "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
- )
- break
-
- # Send message to AI, get response
- with Spinner("Thinking... "):
- assistant_reply = chat_with_ai(
- self.system_prompt,
- self.triggering_prompt,
- self.full_message_history,
- self.memory,
- cfg.fast_token_limit,
- ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument
-
- assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
-
- # Print Assistant thoughts
- if assistant_reply_json != {}:
- validate_json(assistant_reply_json, "llm_response_format_1")
- # Get command name and arguments
- try:
- print_assistant_thoughts(self.ai_name, assistant_reply_json)
- command_name, arguments = get_command(assistant_reply_json)
- # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"]
- if cfg.speak_mode:
- say_text(f"I want to execute {command_name}")
- except Exception as e:
- logger.error("Error: \n", str(e))
-
- if not cfg.continuous_mode and self.next_action_count == 0:
- ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
- # Get key press: Prompt the user to press enter to continue or escape
- # to exit
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
- f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
- print(
- "Enter 'y' to authorise command, 'y -N' to run N continuous "
- "commands, 'n' to exit program, or enter feedback for "
- f"{self.ai_name}...",
- flush=True,
- )
- while True:
- console_input = clean_input(
- Fore.MAGENTA + "Input:" + Style.RESET_ALL
- )
- if console_input.lower().strip() == "y":
- user_input = "GENERATE NEXT COMMAND JSON"
- break
- elif console_input.lower().strip() == "":
- print("Invalid input format.")
- continue
- elif console_input.lower().startswith("y -"):
- try:
- self.next_action_count = abs(
- int(console_input.split(" ")[1])
- )
- user_input = "GENERATE NEXT COMMAND JSON"
- except ValueError:
- print(
- "Invalid input format. Please enter 'y -n' where n is"
- " the number of continuous tasks."
- )
- continue
- break
- elif console_input.lower() == "n":
- user_input = "EXIT"
- break
- else:
- user_input = console_input
- command_name = "human_feedback"
- break
-
- if user_input == "GENERATE NEXT COMMAND JSON":
- logger.typewriter_log(
- "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
- Fore.MAGENTA,
- "",
- )
- elif user_input == "EXIT":
- print("Exiting...", flush=True)
- break
- else:
- # Print command
- logger.typewriter_log(
- "NEXT ACTION: ",
- Fore.CYAN,
- f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}"
- f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
- )
-
- # Execute command
- if command_name is not None and command_name.lower().startswith("error"):
- result = (
- f"Command {command_name} threw the following error: {arguments}"
- )
- elif command_name == "human_feedback":
- result = f"Human feedback: {user_input}"
- else:
- result = (
- f"Command {command_name} returned: "
- f"{execute_command(command_name, arguments)}"
- )
- if self.next_action_count > 0:
- self.next_action_count -= 1
-
- memory_to_add = (
- f"Assistant Reply: {assistant_reply} "
- f"\nResult: {result} "
- f"\nHuman Feedback: {user_input} "
- )
-
- self.memory.add(memory_to_add)
-
- # Check if there's a result from the command append it to the message
- # history
- if result is not None:
- self.full_message_history.append(create_chat_message("system", result))
- logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
- else:
- self.full_message_history.append(
- create_chat_message("system", "Unable to execute command")
- )
- logger.typewriter_log(
- "SYSTEM: ", Fore.YELLOW, "Unable to execute command"
- )
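The Agent docstring above describes a three-part prompt layout: the system prompt, then contextual information, then the triggering prompt. The sketch below assembles messages in that order for a chat-style model; the helper function and message schema are purely illustrative assumptions, not Auto-GPT's actual API.

# Illustrative only: build a message list in the order the docstring describes
# (SYSTEM PROMPT -> CONTEXTUAL INFORMATION -> TRIGGERING PROMPT).
def build_messages(system_prompt, context_messages, triggering_prompt):
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(context_messages)      # memory, prior turns, command results
    messages.append({"role": "user", "content": triggering_prompt})
    return messages

msgs = build_messages(
    "You are Research-GPT. Your goals are: ...",
    [{"role": "system", "content": "Command browse_website returned: ..."}],
    "Determine which next command to use, and respond using the format specified above:",
)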
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/concat-map/index.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/concat-map/index.js
deleted file mode 100644
index b29a7812e5055ae915e771447e1380e01bf3bfdd..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/concat-map/index.js
+++ /dev/null
@@ -1,13 +0,0 @@
-module.exports = function (xs, fn) {
- var res = [];
- for (var i = 0; i < xs.length; i++) {
- var x = fn(xs[i], i);
- if (isArray(x)) res.push.apply(res, x);
- else res.push(x);
- }
- return res;
-};
-
-var isArray = Array.isArray || function (xs) {
- return Object.prototype.toString.call(xs) === '[object Array]';
-};
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Americas Army Proving Grounds Crack Download !!TOP!! Offline Activation.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Americas Army Proving Grounds Crack Download !!TOP!! Offline Activation.md
deleted file mode 100644
index 2692e8a561cd1f1b54739eb4de8b90cdd981443b..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Americas Army Proving Grounds Crack Download !!TOP!! Offline Activation.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-america's army is an american first-person shooter video game series developed and published by bell and a subsidiary of hasbro interactive. the games take place in a fictional setting inspired by the american revolutionary war and describe the player as a solider in a squad of four other players. the role-playing portion of the game involves the player taking orders from a squad leader, who guides him to a battle between the united states and one of its allies.
-America's Army: Proving Grounds Crack Download Offline Activation
Download ->>> https://urlgoal.com/2uCKQb
- adam and the apocalypse the dvd in hindi film download
wtrp lwxc drivers 15.04 english
umbrella for him (digital copy) [ps3/720p-eng-bluray] 5.0
cx88-tunein-gpl-v5-4_6.2-00
naughty rape sex games - jayne paint.pdf
robertsville school district 163.pdf
sxl student edition version 5.0 client characteristics
- it's not your fault, all the people of the world are against you because you're just doing your job of protecting the earth and clean the country from evil. that's how you get ratings, and that's why it's your job. (back to me.) but this is the real world, all right? you're a little piece of the puzzle of the whole. and you're not working alone, are you? i'm your partner, and i'm here to help you. (back to joey) so get with it, and open up the door.. (back to me).and open up the door. (back to joey) i'm here for you if you need me. i'll be here. (back to me) this is the real world, and it's a brutal world. but that's the way things are, right? (back to joey) now, don't get paranoid on me. (back to me) now, before i forget, you got some more clothes for you and your partner. (back to joey) so when you're ready, you can let him out.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Karaoke Sound Tools 1.0.15 !FULL! Full Serial.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Karaoke Sound Tools 1.0.15 !FULL! Full Serial.md
deleted file mode 100644
index 9a4a0a2fd3a918a12bd0c95ba0f37c3a15043b19..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Karaoke Sound Tools 1.0.15 !FULL! Full Serial.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download Karaoke Sound Tools 1.0.15 Full Serial
Download File ☆ https://urlgoal.com/2uCKKT
-
-... Serial Transmission And Their Advantages · Download Program Pemesanan ... KaraFun Karaoke Player 2020 full offline installer setup for PC 32bit/64bit ... The tool gives you more than 28,000 high-quality karaoke songs recorded in ... Download free Karaoke Sound Tools by Power Karaoke v.1.0.15 software 271910. 1fdad05405
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/George VS Bonny PP Wars Activation Code.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/George VS Bonny PP Wars Activation Code.md
deleted file mode 100644
index e4603da119b7d1474c7c9dd632976c69a0824cd1..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/George VS Bonny PP Wars Activation Code.md
+++ /dev/null
@@ -1,6 +0,0 @@
-George VS Bonny PP Wars Activation Code
Download Zip ⭐ https://urlgoal.com/2uCLmz
-
-—Halle Berry, Oscar acceptance remarks, March 24, 2002 V One year after ... Best Support of Whitney's Dictum That Crack Is Wack NEW JACK CITY (1991) ... One of three disillusioned Gulf War soldiers (along with Mark Wahlberg and George ... The Die Hard Award SET IT OFF (1996) Not since Bonnie had female bank ... 1fdad05405
-
-
-
diff --git a/spaces/riccorl/relik-entity-linking/relik/reader/pytorch_modules/__init__.py b/spaces/riccorl/relik-entity-linking/relik/reader/pytorch_modules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/robyramos/estimativa_historia/README.md b/spaces/robyramos/estimativa_historia/README.md
deleted file mode 100644
index 6dabebda77d2ba8c05a726199c6329fce8c03c68..0000000000000000000000000000000000000000
--- a/spaces/robyramos/estimativa_historia/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Analise Perfil
-emoji: 💻
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: other
-duplicated_from: robyramos/analise_perfil_v2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/plot_utils.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/plot_utils.py
deleted file mode 100644
index 759f34d252493fd93187ea3cf2ab0d63a3e2b280..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/plot_utils.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# ------------------------------------------------------------------------
-# Deformable DETR
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# ------------------------------------------------------------------------
-
-"""
-Plotting utilities to visualize training logs.
-"""
-import torch
-import numpy as np
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-
-from pathlib import Path, PurePath
-
-
-def plot_logs(logs, fields=('class_error', 'loss_bbox_unscaled', 'mAP'), ewm_col=0, log_name='log.txt'):
- '''
- Function to plot specific fields from training log(s). Plots both training and test results.
-
- :: Inputs - logs = list containing Path objects, each pointing to individual dir with a log file
- - fields = which results to plot from each log file - plots both training and test for each field.
- - ewm_col = optional, which column to use as the exponential weighted smoothing of the plots
- - log_name = optional, name of log file if different than default 'log.txt'.
-
- :: Outputs - matplotlib plots of results in fields, color coded for each log file.
- - solid lines are training results, dashed lines are test results.
-
- '''
- func_name = "plot_utils.py::plot_logs"
-
- # verify logs is a list of Paths (list[Paths]) or single Pathlib object Path,
- # convert single Path to list to avoid 'not iterable' error
-
- if not isinstance(logs, list):
- if isinstance(logs, PurePath):
- logs = [logs]
- print(f"{func_name} info: logs param expects a list argument, converted to list[Path].")
- else:
- raise ValueError(f"{func_name} - invalid argument for logs parameter.\n \
- Expect list[Path] or single Path obj, received {type(logs)}")
-
- # verify valid dir(s) and that every item in list is Path object
- for i, dir in enumerate(logs):
- if not isinstance(dir, PurePath):
- raise ValueError(f"{func_name} - non-Path object in logs argument of {type(dir)}: \n{dir}")
- if dir.exists():
- continue
- raise ValueError(f"{func_name} - invalid directory in logs argument:\n{dir}")
-
- # load log file(s) and plot
- dfs = [pd.read_json(Path(p) / log_name, lines=True) for p in logs]
-
- fig, axs = plt.subplots(ncols=len(fields), figsize=(16, 5))
-
- for df, color in zip(dfs, sns.color_palette(n_colors=len(logs))):
- for j, field in enumerate(fields):
- if field == 'mAP':
-                coco_eval = pd.DataFrame(np.stack(df.test_coco_eval.dropna().values)[:, 1]).ewm(com=ewm_col).mean()
- axs[j].plot(coco_eval, c=color)
- else:
- df.interpolate().ewm(com=ewm_col).mean().plot(
- y=[f'train_{field}', f'test_{field}'],
- ax=axs[j],
- color=[color] * 2,
- style=['-', '--']
- )
- for ax, field in zip(axs, fields):
- ax.legend([Path(p).name for p in logs])
- ax.set_title(field)
-
-
-def plot_precision_recall(files, naming_scheme='iter'):
- if naming_scheme == 'exp_id':
- # name becomes exp_id
- names = [f.parts[-3] for f in files]
- elif naming_scheme == 'iter':
- names = [f.stem for f in files]
- else:
- raise ValueError(f'not supported {naming_scheme}')
- fig, axs = plt.subplots(ncols=2, figsize=(16, 5))
- for f, color, name in zip(files, sns.color_palette("Blues", n_colors=len(files)), names):
- data = torch.load(f)
- # precision is n_iou, n_points, n_cat, n_area, max_det
- precision = data['precision']
- recall = data['params'].recThrs
- scores = data['scores']
- # take precision for all classes, all areas and 100 detections
- precision = precision[0, :, :, 0, -1].mean(1)
- scores = scores[0, :, :, 0, -1].mean(1)
- prec = precision.mean()
- rec = data['recall'][0, :, 0, -1].mean()
- print(f'{naming_scheme} {name}: mAP@50={prec * 100: 05.1f}, ' +
- f'score={scores.mean():0.3f}, ' +
- f'f1={2 * prec * rec / (prec + rec + 1e-8):0.3f}'
- )
- axs[0].plot(recall, precision, c=color)
- axs[1].plot(recall, scores, c=color)
-
- axs[0].set_title('Precision / Recall')
- axs[0].legend(names)
- axs[1].set_title('Scores / Recall')
- axs[1].legend(names)
- return fig, axs
-
-
-
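plot_logs expects a list of Path objects, each pointing at a directory containing a log.txt, and plot_precision_recall expects saved evaluation files. A short, hedged call sketch is below; the directory layout and file names are placeholders, and the import path may differ depending on where this module sits in the package.

# Hypothetical usage of plot_logs / plot_precision_recall; paths are placeholders.
from pathlib import Path
import matplotlib.pyplot as plt
from plot_utils import plot_logs, plot_precision_recall   # adjust to the package path in use

log_dirs = [Path("output/exp1"), Path("output/exp2")]      # each directory holds a log.txt
plot_logs(log_dirs, fields=("class_error", "loss_bbox_unscaled", "mAP"))

eval_files = list(Path("output/exp1/eval").glob("*.pth"))  # saved COCO eval tensors
fig, axs = plot_precision_recall(eval_files, naming_scheme="iter")
plt.show()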
diff --git a/spaces/rorallitri/biomedical-language-models/logs/C-Murder-Life Or Death Full Album 17 Patch Bookworm Manua A Masterpiece of Hip Hop.md b/spaces/rorallitri/biomedical-language-models/logs/C-Murder-Life Or Death Full Album 17 Patch Bookworm Manua A Masterpiece of Hip Hop.md
deleted file mode 100644
index f543324241bde03b836b8b67a14ab6fdb000151c..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/C-Murder-Life Or Death Full Album 17 Patch Bookworm Manua A Masterpiece of Hip Hop.md
+++ /dev/null
@@ -1,6 +0,0 @@
-C-Murder-Life Or Death Full Album 17 patch bookworm manua
Download Zip ★★★★★ https://tinurll.com/2uznmp
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rstallman/web-scraping/app.py b/spaces/rstallman/web-scraping/app.py
deleted file mode 100644
index 2f17589d49225bb5b363a18574a1e04930c7cf58..0000000000000000000000000000000000000000
--- a/spaces/rstallman/web-scraping/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import os
-import re
-import requests
-import uuid
-import zipfile
-import hashlib
-import shutil
-import gradio as gr
-from bs4 import BeautifulSoup
-from urllib.parse import urljoin, urlparse
-
-# Function to validate URLs
-def is_valid(url):
- parsed = urlparse(url)
- return bool(parsed.netloc) and bool(parsed.scheme)
-
-# Function to find files on webpage
-def find_files(url, soup, file_type):
- files = []
- if file_type == "image":
- tags = ['jpg', 'jpeg', 'png', 'svg', 'gif']
- for tag in soup.find_all('img'):
- file = tag.get('src')
- if any(tag in file for tag in tags):
- file_url = file
- if not is_valid(file_url):
- file_url = urljoin(url, file_url)
- files.append(file_url)
- elif file_type == "text":
- text_tags = ['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'span', 'strong']
- for tag in text_tags:
- for element in soup.find_all(tag):
- files.append(element.get_text())
- else:
- for link in soup.find_all('a'):
- file = link.get('href')
- if file_type in file:
- file_url = file
- if not is_valid(file_url):
- file_url = urljoin(url, file_url)
- files.append(file_url)
- return files
-
-
-
-
-# Function to download files
-def download_files(urls, folder_name):
- os.makedirs(folder_name, exist_ok=True)
- for i, url in enumerate(urls):
- response = requests.get(url, stream=True)
- file_extension = url.split(".")[-1].split("&")[0]
- url_hash = hashlib.md5(url.encode()).hexdigest()
- unique_id = str(uuid.uuid4())[:8]
- file_name = f'{url_hash}-{unique_id}.{file_extension}'
- file_name = file_name[:255] # Truncate the file name to avoid exceeding the limit
- file_name = re.sub(r'[\\/:"*?<>|]+', '_', file_name) # Replace special characters with underscores
- with open(f'{folder_name}/{file_name}', 'wb') as out_file:
- out_file.write(response.content)
- print(f"Downloaded file: {file_name}")
-
-# Function to create zip file
-def create_zip_file(folder_name):
- # Only create zip file if there are files in the directory
- if os.listdir(folder_name):
- with zipfile.ZipFile(f'{folder_name}.zip', 'w') as zipf:
- for file in os.listdir(folder_name):
- zipf.write(f'{folder_name}/{file}')
- return f'{folder_name}.zip'
- else:
- return ""
-
-
-
-# Function to scrape website
-def scrape_website(url, images=False, text=False):
- try:
- response = requests.get(url, timeout=10)
- response.raise_for_status() # Raise an exception if the GET request was unsuccessful
- except (requests.exceptions.RequestException, ValueError):
- raise gr.Error(f"Unable to access URL: {url}")
- return None, None
-
- soup = BeautifulSoup(response.content, 'html.parser')
-
- # Clear the contents of the folders
- if images:
- shutil.rmtree('images', ignore_errors=True)
- if text:
- shutil.rmtree('text', ignore_errors=True)
-
- # Download files
- if images:
- image_urls = find_files(url, soup, 'image')
- download_files(image_urls, 'images')
- if text:
- text_content = find_files(url, soup, 'text')
- os.makedirs('text', exist_ok=True) # Make sure the directory exists before writing
- if text_content: # Only create the file if there is text to write
- with open('text/content.txt', 'w') as text_file:
- for line in text_content:
- text_file.write(line + '\n')
-
- # Create zip files and return paths
- images_zip_file, text_zip_file = None, None
- if images and os.path.exists('images') and os.listdir('images'):
- images_zip_file = create_zip_file('images')
- if text and os.path.exists('text') and os.listdir('text'):
- text_zip_file = create_zip_file('text')
-
- return images_zip_file, text_zip_file
-
-
-
-# Function for web scraping
-def web_scraping(url, file_types):
- # Check if the URL is empty
- if not url:
- raise gr.Error("URL cannot be empty.")
-
- # Check if the URL begins with https://
- if not url.startswith("https://"):
- raise gr.Error("The URL must begin with https://")
-
- # Check if at least one checkbox is selected
- if not file_types:
- raise gr.Error("At least one media type must be selected.")
-
- images = "Images" in file_types
- text = "Text" in file_types
- return scrape_website(url, images, text)
-
-with gr.Blocks(theme="dwancin/yellow", css=".lg.svelte-1ipelgc {max-height: 60px !important;}") as app:
- with gr.Row():
- with gr.Column(scale=2):
- url_name = gr.Textbox(
- placeholder="Enter URL here",
- show_label=True,
- label="Website",
- info="Example: https://en.wikipedia.org/wiki/Main_Page",
- )
- media_types = gr.CheckboxGroup(
- [
- "Images",
- "Text",
- ],
- value="Images",
- label="Media types",
- )
- submit_button = gr.Button(
- "Scrape",
- variant="primary",
- interactive=True,
- )
- with gr.Column(scale=1):
- output_images_zip_file = gr.File(label="Images ZIP-file")
- output_text_zip_file = gr.File(label="Text ZIP-file")
-
- submit_button.click(web_scraping, inputs=[url_name, media_types], outputs=[output_images_zip_file, output_text_zip_file])
-app.launch()
\ No newline at end of file
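The scraping flow above reduces to: fetch the page, collect matching URLs with BeautifulSoup, download them, and zip the folder. The sketch below covers the first two steps; it mirrors the find_files helper rather than importing app.py (importing it would launch the Gradio app), and the example URL is simply the one suggested in the UI hint.

# Standalone sketch mirroring find_files(..., file_type="image"): fetch a page,
# collect image URLs, and resolve relative links. Illustrative, not a direct
# import of app.py.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://en.wikipedia.org/wiki/Main_Page"     # example from the UI hint
soup = BeautifulSoup(requests.get(url, timeout=10).content, "html.parser")

image_urls = []
for tag in soup.find_all("img"):
    src = tag.get("src")
    if src and any(ext in src for ext in ("jpg", "jpeg", "png", "svg", "gif")):
        image_urls.append(src if src.startswith("http") else urljoin(url, src))

print(f"Found {len(image_urls)} candidate images")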
diff --git a/spaces/runa91/bite_gradio/src/stacked_hourglass/utils/finetune.py b/spaces/runa91/bite_gradio/src/stacked_hourglass/utils/finetune.py
deleted file mode 100644
index e7990b26a90e824f02141d7908907679f544f98c..0000000000000000000000000000000000000000
--- a/spaces/runa91/bite_gradio/src/stacked_hourglass/utils/finetune.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Modified from:
-# https://github.com/anibali/pytorch-stacked-hourglass
-# https://github.com/bearpaw/pytorch-pose
-
-import torch
-from torch.nn import Conv2d, ModuleList
-
-
-def change_hg_outputs(model, indices):
- """Change the output classes of the model.
-
- Args:
- model: The model to modify.
- indices: An array of indices describing the new model outputs. For example, [3, 4, None]
- will modify the model to have 3 outputs, the first two of which have parameters
- copied from the fourth and fifth outputs of the original model.
- """
- with torch.no_grad():
- new_n_outputs = len(indices)
- new_score = ModuleList()
- for conv in model.score:
- new_conv = Conv2d(conv.in_channels, new_n_outputs, conv.kernel_size, conv.stride)
- new_conv = new_conv.to(conv.weight.device, conv.weight.dtype)
- for i, index in enumerate(indices):
- if index is not None:
- new_conv.weight[i] = conv.weight[index]
- new_conv.bias[i] = conv.bias[index]
- new_score.append(new_conv)
- model.score = new_score
- new_score_ = ModuleList()
- for conv in model.score_:
- new_conv = Conv2d(new_n_outputs, conv.out_channels, conv.kernel_size, conv.stride)
- new_conv = new_conv.to(conv.weight.device, conv.weight.dtype)
- for i, index in enumerate(indices):
- if index is not None:
- new_conv.weight[:, i] = conv.weight[:, index]
- new_conv.bias = conv.bias
- new_score_.append(new_conv)
- model.score_ = new_score_
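change_hg_outputs rebuilds the score/score_ heads according to the indices list, per the docstring's [3, 4, None] example. The sketch below exercises it on a tiny stand-in module that only has the attributes the function touches; the real stacked-hourglass model is not constructed here, and the import path is an assumption.

# Minimal, self-contained exercise of change_hg_outputs on a stand-in module.
import torch
from torch.nn import Conv2d, ModuleList
from src.stacked_hourglass.utils.finetune import change_hg_outputs   # assumed path

class TinyHourglass(torch.nn.Module):      # stand-in with the attributes the function modifies
    def __init__(self, n_out=5):
        super().__init__()
        self.score = ModuleList([Conv2d(64, n_out, 1)])
        self.score_ = ModuleList([Conv2d(n_out, 64, 1)])

model = TinyHourglass()
change_hg_outputs(model, indices=[3, 4, None])   # new heads 0/1 copy old outputs 3/4, head 2 is fresh
print(model.score[0].out_channels)               # 3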
diff --git a/spaces/ruslanmv/Clone-Your-Voice/synthesizer/utils/symbols.py b/spaces/ruslanmv/Clone-Your-Voice/synthesizer/utils/symbols.py
deleted file mode 100644
index 132d3a612c3b13e2ada905a706001cff29a4f63a..0000000000000000000000000000000000000000
--- a/spaces/ruslanmv/Clone-Your-Voice/synthesizer/utils/symbols.py
+++ /dev/null
@@ -1,17 +0,0 @@
-"""
-Defines the set of symbols used in text input to the model.
-
-The default is a set of ASCII characters that works well for English or text that has been run
-through Unidecode. For other data, you can modify _characters. See TRAINING_DATA.md for details.
-"""
-# from . import cmudict
-
-_pad = "_"
-_eos = "~"
-_characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!\'\"(),-.:;? "
-
-# Prepend "@" to ARPAbet symbols to ensure uniqueness (some are the same as uppercase letters):
-#_arpabet = ['@' + s for s in cmudict.valid_symbols]
-
-# Export all symbols:
-symbols = [_pad, _eos] + list(_characters) #+ _arpabet
diff --git a/spaces/scedlatioru/img-to-music/example/Fish And Fisheries Of India By V G Jhingran Pdf Download _VERIFIED_.md b/spaces/scedlatioru/img-to-music/example/Fish And Fisheries Of India By V G Jhingran Pdf Download _VERIFIED_.md
deleted file mode 100644
index aabe3f2d8df26f4cc2ffd5c734676d916c88265b..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Fish And Fisheries Of India By V G Jhingran Pdf Download _VERIFIED_.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-Fish And Fisheries Of India By V G Jhingran Pdf Download
-
-If you are interested in learning more about the fish and fisheries of India, you may want to download the book Fish And Fisheries Of India by V G Jhingran. This book is considered to be one of the most comprehensive and authoritative sources on the subject, covering various aspects of fish biology, culture, management, and conservation in India.
-Fish And Fisheries Of India By V G Jhingran Pdf Download
Download Zip >> https://gohhs.com/2uEAmE
-
-What is Fish And Fisheries Of India by V G Jhingran?
-
-Fish And Fisheries Of India by V G Jhingran is a book that was first published in 1975 by Hindustan Publishing Corporation (India). It has been revised and enlarged several times, with the latest edition being the third one in 1991. The book has 727 pages and contains numerous illustrations, tables, graphs, and maps.
-
-The book is divided into four parts: Part I deals with the general aspects of fish and fisheries, such as fish classification, distribution, morphology, physiology, ecology, and evolution. Part II covers the inland fisheries of India, including freshwater, brackishwater, and coldwater fisheries. Part III focuses on the marine fisheries of India, such as coastal, offshore, deep-sea, and estuarine fisheries. Part IV discusses the fish culture and management practices in India, such as pond culture, cage culture, hatchery management, feed and nutrition, disease control, and fisheries development.
-
-Why should you download Fish And Fisheries Of India by V G Jhingran?
-
-There are many reasons why you should download Fish And Fisheries Of India by V G Jhingran. Here are some of them:
-
-
-- The book is a valuable reference for students, researchers, teachers, managers, and policy makers who are involved or interested in fish and fisheries of India.
-- The book provides a comprehensive overview of the diversity, status, potential, problems, and prospects of fish and fisheries of India.
-- The book covers both traditional and modern aspects of fish and fisheries of India, such as indigenous knowledge, socio-economic aspects, biotechnology, aquaculture engineering, and environmental issues.
-- The book is written in a clear and concise manner, with easy-to-understand language and terminology.
-- The book is available in PDF format for free download from various online sources.
-
-
-How can you download Fish And Fisheries Of India by V G Jhingran?
-
-There are several ways to download Fish And Fisheries Of India by V G Jhingran. Here are some of them:
-
-
-- You can download the book from Google Books by clicking on this link: Fish And Fisheries Of India by V G Jhingran - Google Books. You can either read the book online or download it as a PDF file.
-- You can download the book from Open Library by clicking on this link: Fish And Fisheries Of India by V G Jhingran - Open Library. You can either borrow the book online or download it as a PDF file.
-- You can also search for other online sources that offer the book for free download by using your favorite search engine.
-
-
-Conclusion
-
-Fish And Fisheries Of India by V G Jhingran is a must-read book for anyone who wants to learn more about the fish and fisheries of India. The book covers all aspects of fish biology, culture, management, and conservation in India. The book is available in PDF format for free download from various online sources. Download it today and enjoy reading it!
-What are the benefits of reading Fish And Fisheries Of India by V G Jhingran?
-
-Reading Fish And Fisheries Of India by V G Jhingran can offer you many benefits, such as:
-
-
-- You can gain a deeper understanding of the fish and fisheries of India, which are rich in diversity, history, culture, and economic importance.
-- You can learn about the various challenges and opportunities facing the fish and fisheries of India, such as overexploitation, pollution, climate change, biodiversity loss, food security, and livelihoods.
-- You can discover the best practices and innovations in fish and fisheries of India, such as sustainable management, conservation, enhancement, value addition, and marketing.
-- You can appreciate the role and contribution of fish and fisheries of India to the national and global development goals, such as poverty alleviation, nutrition, health, education, and environment.
-- You can develop your skills and knowledge in fish and fisheries of India, which can help you in your academic, professional, or personal pursuits.
-
-
-How can you use Fish And Fisheries Of India by V G Jhingran for your projects?
-
-If you are working on a project related to fish and fisheries of India, you can use Fish And Fisheries Of India by V G Jhingran as a reliable source of information and inspiration. Here are some examples of how you can use the book for your projects:
-
-
-- If you are a student or a researcher, you can use the book as a reference for your assignments, papers, reports, presentations, or publications. You can cite the book using the appropriate format and style.
-- If you are a teacher or a trainer, you can use the book as a resource for your lectures, workshops, seminars, or courses. You can design your curriculum and activities based on the book's content and structure.
-- If you are a manager or a policy maker, you can use the book as a guide for your planning, decision making, implementation, monitoring, or evaluation. You can apply the book's principles and recommendations to your specific context and situation.
-- If you are an entrepreneur or a marketer, you can use the book as a tool for your product development, promotion, distribution, or sales. You can identify the needs and preferences of your target market and customers based on the book's data and analysis.
-- If you are an enthusiast or a hobbyist, you can use the book as a source of enjoyment and learning. You can read the book for fun or for personal growth.
-
-
-Conclusion
-
-Fish And Fisheries Of India by V G Jhingran is a great book to read if you want to know more about the fish and fisheries of India. The book is comprehensive, authoritative, informative, and interesting. The book is available in PDF format for free download from various online sources. Download it today and start reading it!
-
-What are the challenges of downloading Fish And Fisheries Of India by V G Jhingran?
-
-Downloading Fish And Fisheries Of India by V G Jhingran may not be easy for everyone. There are some challenges that you may face, such as:
-
-
-- The book may not be available in all online sources. You may have to search for different websites that offer the book for free download.
-- The book may not be compatible with all devices or formats. You may have to convert the book to a suitable format for your device or software.
-- The book may not be updated or accurate. The book was last revised in 1991, and some of the information may be outdated or incorrect.
-- The book may not be legal or ethical. The book may be protected by copyright or intellectual property rights, and downloading it without permission may violate the law or the author's rights.
-- The book may not be safe or secure. The book may contain viruses, malware, spyware, or other harmful elements that may damage your device or compromise your privacy.
-
-
-How can you overcome these challenges of downloading Fish And Fisheries Of India by V G Jhingran?
-
-There are some ways to overcome these challenges of downloading Fish And Fisheries Of India by V G Jhingran. Here are some of them:
-
-
-- You can check the credibility and reputation of the online sources that offer the book for free download. You can read reviews, ratings, comments, or feedback from other users who have downloaded the book.
-- You can use a reliable and compatible device and software to download and read the book. You can also use a converter tool to change the format of the book if needed.
-- You can verify and update the information in the book with other sources. You can cross-check the facts, figures, data, and analysis in the book with more recent and reliable sources.
-- You can respect and acknowledge the author's rights and contributions. You can cite the book properly and give credit to the author when you use it for your projects.
-- You can protect and secure your device and data. You can use antivirus, firewall, encryption, or other security measures to prevent any harm or loss from downloading the book.
-
-
-Conclusion
-
-Fish And Fisheries Of India by V G Jhingran is a wonderful book to download if you want to know more about the fish and fisheries of India. The book is comprehensive, authoritative, informative, and interesting. The book is available in PDF format for free download from various online sources. However, you may face some challenges when downloading the book, such as availability, compatibility, accuracy, legality, or safety. You can overcome these challenges by following some tips and precautions. Download it today and enjoy reading it!
-What are the alternatives to downloading Fish And Fisheries Of India by V G Jhingran?
-
-Downloading Fish And Fisheries Of India by V G Jhingran may not be the only option for you. There are some alternatives that you may consider, such as:
-
-
-- You can buy the book from online or offline stores. You can compare the prices, quality, and delivery options of different sellers and choose the best one for you.
-- You can borrow the book from a library or a friend. You can check the availability, condition, and duration of the loan and return the book on time.
-- You can access the book from a digital library or a database. You can search for the book using keywords, filters, or categories and read it online or offline.
-- You can request the book from the author or the publisher. You can contact them via email, phone, or social media and ask for a copy of the book.
-- You can find other books on fish and fisheries of India. You can look for books that have similar topics, themes, or perspectives as Fish And Fisheries Of India by V G Jhingran.
-
-
-How can you make the most of reading Fish And Fisheries Of India by V G Jhingran?
-
-Reading Fish And Fisheries Of India by V G Jhingran can be a rewarding and enjoyable experience. Here are some tips to make the most of reading the book:
-
-
-- You can set a goal and a schedule for reading the book. You can decide how much time and effort you want to spend on reading the book and stick to your plan.
-- You can take notes and highlight key points while reading the book. You can use a notebook, a sticky note, a marker, or a digital tool to record your thoughts, questions, comments, or insights.
-- You can review and reflect on what you have read after reading the book. You can summarize, analyze, evaluate, or synthesize what you have learned from the book.
-- You can share and discuss what you have read with others who have read or are interested in reading the book. You can join a book club, a forum, a blog, or a social media group to exchange your opinions, feedback, or recommendations.
-- You can apply what you have read to your own projects or situations. You can use the information, knowledge, skills, or ideas from the book to improve your work or life.
-
-
-Conclusion
-
-Fish And Fisheries Of India by V G Jhingran is an excellent book to read if you want to know more about the fish and fisheries of India. The book is comprehensive, authoritative, informative, and interesting. The book is available in PDF format for free download from various online sources. However, you may also consider other options to get the book, such as buying, borrowing, accessing, requesting, or finding other books. You can also follow some tips to make the most of reading the book, such as setting a goal and a schedule, taking notes and highlighting key points, reviewing and reflecting on what you have read, sharing and discussing what you have read with others, and applying what you have read to your own projects or situations. Read it today and enjoy it!
-Conclusion
-
-In this article, we have discussed Fish And Fisheries Of India by V G Jhingran, a book that covers various aspects of fish biology, culture, management, and conservation in India. We have seen what the book is, why you should download it, how you can download it, what are the benefits of reading it, what are the challenges of downloading it, how you can overcome them, what are the alternatives to downloading it, and how you can make the most of reading it. We hope that this article has helped you to learn more about the fish and fisheries of India and to appreciate the book by V G Jhingran. If you are interested in reading the book, you can download it for free from various online sources. Alternatively, you can buy, borrow, access, request, or find other books on the same topic. Whatever option you choose, we encourage you to read the book and enjoy it!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/chatfuncs/ingest.py b/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/chatfuncs/ingest.py
deleted file mode 100644
index 5904d3766be69f2282ee62615bda63c2a02f2d2b..0000000000000000000000000000000000000000
--- a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/chatfuncs/ingest.py
+++ /dev/null
@@ -1,655 +0,0 @@
-# ---
-# jupyter:
-# jupytext:
-# formats: ipynb,py:light
-# text_representation:
-# extension: .py
-# format_name: light
-# format_version: '1.5'
-# jupytext_version: 1.14.6
-# kernelspec:
-# display_name: Python 3 (ipykernel)
-# language: python
-# name: python3
-# ---
-
-# # Ingest website to FAISS
-
-# ## Install/ import stuff we need
-
-import os
-from pathlib import Path
-import re
-import requests
-import pandas as pd
-import dateutil.parser
-from typing import TypeVar, List
-
-from langchain.embeddings import HuggingFaceEmbeddings # HuggingFaceInstructEmbeddings,
-from langchain.vectorstores.faiss import FAISS
-from langchain.vectorstores import Chroma
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.docstore.document import Document
-
-from bs4 import BeautifulSoup
-from docx import Document as Doc
-from pypdf import PdfReader
-
-PandasDataFrame = TypeVar('pd.core.frame.DataFrame')
-# -
-
-split_strat = ["\n\n", "\n", ". ", "! ", "? "]
-chunk_size = 500
-chunk_overlap = 0
-start_index = True
-
-## Parse files
-def determine_file_type(file_path):
- """
- Determine the file type based on its extension.
-
- Parameters:
- file_path (str): Path to the file.
-
- Returns:
- str: File extension (e.g., '.pdf', '.docx', '.txt', '.html').
- """
- return os.path.splitext(file_path)[1].lower()
-
-def parse_file(file_paths, text_column='text'):
- """
- Accepts a list of file paths, determines each file's type based on its extension,
- and passes it to the relevant parsing function.
-
- Parameters:
- file_paths (list): List of file paths.
- text_column (str): Name of the column in CSV/Excel files that contains the text content.
-
- Returns:
- dict: A dictionary with file paths as keys and their parsed content (or error message) as values.
- """
-
-
-
- if not isinstance(file_paths, list):
- raise ValueError("Expected a list of file paths.")
-
- extension_to_parser = {
- '.pdf': parse_pdf,
- '.docx': parse_docx,
- '.txt': parse_txt,
- '.html': parse_html,
- '.htm': parse_html, # Considering both .html and .htm for HTML files
- '.csv': lambda file_path: parse_csv_or_excel(file_path, text_column),
- '.xlsx': lambda file_path: parse_csv_or_excel(file_path, text_column)
- }
-
- parsed_contents = {}
- file_names = []
-
- for file_path in file_paths:
- print(file_path.name)
- #file = open(file_path.name, 'r')
- #print(file)
- file_extension = determine_file_type(file_path.name)
- if file_extension in extension_to_parser:
- parsed_contents[file_path.name] = extension_to_parser[file_extension](file_path.name)
- else:
- parsed_contents[file_path.name] = f"Unsupported file type: {file_extension}"
-
- filename_end = get_file_path_end(file_path.name)
-
- file_names.append(filename_end)
-
- return parsed_contents, file_names
-
-def text_regex_clean(text):
- # Merge hyphenated words
- text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)
- # If a double newline ends in a letter, add a full stop.
- text = re.sub(r'(?<=[a-zA-Z])\n\n', '.\n\n', text)
- # Fix newlines in the middle of sentences
- text = re.sub(r"(? List[str]:
-
- """
- Extract text from a PDF file.
-
- Parameters:
- file_path (str): Path to the PDF file.
-
- Returns:
- List[str]: Extracted text from the PDF.
- """
-
- output = []
- #for file in files:
- print(file) # .name
- pdf = PdfReader(file) #[i] .name[i]
-
- for page in pdf.pages:
- text = page.extract_text()
-
- text = text_regex_clean(text)
-
- output.append(text)
- return output
-
-def parse_docx(file_path):
- """
- Reads the content of a .docx file and returns it as a string.
-
- Parameters:
- - file_path (str): Path to the .docx file.
-
- Returns:
- - str: Content of the .docx file.
- """
- doc = Doc(file_path)
- full_text = []
- for para in doc.paragraphs:
-        para_text = text_regex_clean(para.text)
-
-        full_text.append(para_text.replace(" ", " ").strip())
- return '\n'.join(full_text)
-
-def parse_txt(file_path):
- """
- Read text from a TXT or HTML file.
-
- Parameters:
- file_path (str): Path to the TXT or HTML file.
-
- Returns:
- str: Text content of the file.
- """
- with open(file_path, 'r', encoding="utf-8") as file:
- file_contents = file.read().replace(" ", " ").strip()
-
- file_contents = text_regex_clean(file_contents)
-
- return file_contents
-
-def parse_html(page_url, div_filter="p"):
- """
- Determine if the source is a web URL or a local HTML file, extract the content based on the div of choice. Also tries to extract dates (WIP)
-
- Parameters:
- page_url (str): The web URL or local file path.
-
- Returns:
- str: Extracted content.
- """
-
- def is_web_url(s):
- """
- Check if the input string is a web URL.
- """
- return s.startswith("http://") or s.startswith("https://")
-
- def is_local_html_file(s):
- """
- Check if the input string is a path to a local HTML file.
- """
- return (s.endswith(".html") or s.endswith(".htm")) and os.path.isfile(s)
-
- def extract_text_from_source(source):
- """
- Determine if the source is a web URL or a local HTML file,
- and then extract its content accordingly.
-
- Parameters:
- source (str): The web URL or local file path.
-
- Returns:
- str: Extracted content.
- """
- if is_web_url(source):
- response = requests.get(source)
- response.raise_for_status() # Raise an HTTPError for bad responses
- return response.text.replace(" ", " ").strip()
- elif is_local_html_file(source):
- with open(source, 'r', encoding='utf-8') as file:
- file_out = file.read().replace
- return file_out
- else:
- raise ValueError("Input is neither a valid web URL nor a local HTML file path.")
-
-
- def clean_html_data(data, date_filter="", div_filt="p"):
- """
- Extracts and cleans data from HTML content.
-
- Parameters:
- data (str): HTML content to be parsed.
- date_filter (str, optional): Date string to filter results. If set, only content with a date greater than this will be returned.
- div_filt (str, optional): HTML tag to search for text content. Defaults to "p".
-
- Returns:
- tuple: Contains extracted text and date as strings. Returns empty strings if not found.
- """
-
- soup = BeautifulSoup(data, 'html.parser')
-
- # Function to exclude div with id "bar"
- def exclude_div_with_id_bar(tag):
- return tag.has_attr('id') and tag['id'] == 'related-links'
-
- text_elements = soup.find_all(div_filt)
- date_elements = soup.find_all(div_filt, {"class": "page-neutral-intro__meta"})
-
- # Extract date
- date_out = ""
- if date_elements:
- date_out = re.search(">(.*?)<", str(date_elements[0])).group(1)
- date_dt = dateutil.parser.parse(date_out)
-
- if date_filter:
- date_filter_dt = dateutil.parser.parse(date_filter)
- if date_dt < date_filter_dt:
- return '', date_out
-
- # Extract text
- text_out_final = ""
- if text_elements:
- text_out_final = '\n'.join(paragraph.text for paragraph in text_elements)
- text_out_final = text_regex_clean(text_out_final)
- else:
- print(f"No elements found with tag '{div_filt}'. No text returned.")
-
- return text_out_final, date_out
-
-
- #page_url = "https://pypi.org/project/InstructorEmbedding/" #'https://www.ons.gov.uk/visualisations/censusareachanges/E09000022/index.html'
-
- html_text = extract_text_from_source(page_url)
- #print(page.text)
-
- texts = []
- metadatas = []
-
- clean_text, date = clean_html_data(html_text, date_filter="", div_filt=div_filter)
- texts.append(clean_text)
- metadatas.append({"source": page_url, "date":str(date)})
-
- #print(metadatas)
-
- return texts, metadatas, page_url
-
-def get_file_path_end(file_path):
- match = re.search(r'(.*[\/\\])?(.+)$', file_path)
-
- filename_end = match.group(2) if match else ''
-
- return filename_end
-
-# +
-# Convert parsed text to docs
-# -
-
-def text_to_docs(text_dict: dict, chunk_size: int = chunk_size) -> List[Document]:
- """
- Converts the output of parse_file (a dictionary of file paths to content)
- to a list of Documents with metadata.
- """
-
- doc_sections = []
- parent_doc_sections = []
-
- for file_path, content in text_dict.items():
- ext = os.path.splitext(file_path)[1].lower()
-
- # Depending on the file extension, handle the content
- if ext == '.pdf':
- docs, page_docs = pdf_text_to_docs(content, chunk_size)
- elif ext in ['.html', '.htm', '.txt', '.docx']:
- docs = html_text_to_docs(content, chunk_size)
- elif ext in ['.csv', '.xlsx']:
- docs, page_docs = csv_excel_text_to_docs(content, chunk_size)
- else:
- print(f"Unsupported file type {ext} for {file_path}. Skipping.")
- continue
-
-
- filename_end = get_file_path_end(file_path)
-
- #match = re.search(r'(.*[\/\\])?(.+)$', file_path)
- #filename_end = match.group(2) if match else ''
-
- # Add filename as metadata
- for doc in docs: doc.metadata["source"] = filename_end
- #for parent_doc in parent_docs: parent_doc.metadata["source"] = filename_end
-
- doc_sections.extend(docs)
- #parent_doc_sections.extend(parent_docs)
-
- return doc_sections#, page_docs
-
-def pdf_text_to_docs(text, chunk_size: int = chunk_size) -> List[Document]:
- """Converts a string or list of strings to a list of Documents
- with metadata."""
-
- #print(text)
-
- if isinstance(text, str):
- # Take a single string as one page
- text = [text]
-
- page_docs = [Document(page_content=page, metadata={"page": page}) for page in text]
-
-
- # Add page numbers as metadata
- for i, doc in enumerate(page_docs):
- doc.metadata["page"] = i + 1
-
- print("page docs are: ")
- print(page_docs)
-
- # Split pages into sections
- doc_sections = []
-
- for doc in page_docs:
-
- #print("page content: ")
- #print(doc.page_content)
-
- if doc.page_content == '':
- sections = ['']
-
- else:
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=chunk_size,
- separators=split_strat,#["\n\n", "\n", ".", "!", "?", ",", " ", ""],
- chunk_overlap=chunk_overlap,
- add_start_index=True
- )
- sections = text_splitter.split_text(doc.page_content)
-
- for i, section in enumerate(sections):
- doc = Document(
- page_content=section, metadata={"page": doc.metadata["page"], "section": i, "page_section": f"{doc.metadata['page']}-{i}"})
-
-
- doc_sections.append(doc)
-
- return doc_sections, page_docs#, parent_doc
-
-def html_text_to_docs(texts, metadatas, chunk_size:int = chunk_size):
-
- text_splitter = RecursiveCharacterTextSplitter(
- separators=split_strat,#["\n\n", "\n", ".", "!", "?", ",", " ", ""],
- chunk_size=chunk_size,
- chunk_overlap=chunk_overlap,
- length_function=len,
- add_start_index=True
- )
-
- #print(texts)
- #print(metadatas)
-
- documents = text_splitter.create_documents(texts, metadatas=metadatas)
-
- for i, section in enumerate(documents):
- section.metadata["page_section"] = i + 1
-
-
-
- return documents
-
-def write_out_metadata_as_string(metadata_in):
- # If metadata_in is a single dictionary, wrap it in a list
- if isinstance(metadata_in, dict):
- metadata_in = [metadata_in]
-
- metadata_string = [f"{' '.join(f'{k}: {v}' for k, v in d.items() if k != 'page_section')}" for d in metadata_in] # ['metadata']
- return metadata_string
-
-def csv_excel_text_to_docs(df, text_column='text', chunk_size=None) -> List[Document]:
- """Converts a DataFrame's content to a list of Documents with metadata."""
-
- doc_sections = []
- df[text_column] = df[text_column].astype(str) # Ensure column is a string column
-
- # For each row in the dataframe
- for idx, row in df.iterrows():
- # Extract the text content for the document
- doc_content = row[text_column]
-
- # Generate metadata containing other columns' data
- metadata = {"row": idx + 1}
- for col, value in row.items():
- if col != text_column:
- metadata[col] = value
-
- metadata_string = write_out_metadata_as_string(metadata)[0]
-
-
-
- # If chunk_size is provided, split the text into chunks
- if chunk_size:
- # Assuming you have a text splitter function similar to the PDF handling
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=chunk_size,
- # Other arguments as required by the splitter
- )
- sections = text_splitter.split_text(doc_content)
-
-
- # For each section, create a Document object
- for i, section in enumerate(sections):
- section = '. '.join([metadata_string, section])
- doc = Document(page_content=section,
- metadata={**metadata, "section": i, "row_section": f"{metadata['row']}-{i}"})
- doc_sections.append(doc)
- else:
- # If no chunk_size is provided, create a single Document object for the row
- doc_content = '. '.join([metadata_string, doc_content])
- doc = Document(page_content=doc_content, metadata=metadata)
- doc_sections.append(doc)
-
- return doc_sections
-
-# # Functions for working with documents after loading them back in
-
-def pull_out_data(series):
-
- # define a lambda function to convert each string into a tuple
- to_tuple = lambda x: eval(x)
-
- # apply the lambda function to each element of the series
- series_tup = series.apply(to_tuple)
-
- series_tup_content = list(zip(*series_tup))[1]
-
- series = pd.Series(list(series_tup_content))#.str.replace("^Main post content", "", regex=True).str.strip()
-
- return series
-
-def docs_from_csv(df):
-
- import ast
-
- documents = []
-
- page_content = pull_out_data(df["0"])
- metadatas = pull_out_data(df["1"])
-
- for x in range(0,len(df)):
- new_doc = Document(page_content=page_content[x], metadata=metadatas[x])
- documents.append(new_doc)
-
- return documents
-
-def docs_from_lists(docs, metadatas):
-
- documents = []
-
- for x, doc in enumerate(docs):
- new_doc = Document(page_content=doc, metadata=metadatas[x])
- documents.append(new_doc)
-
- return documents
-
-def docs_elements_from_csv_save(docs_path="documents.csv"):
-
- documents = pd.read_csv(docs_path)
-
- docs_out = docs_from_csv(documents)
-
- out_df = pd.DataFrame(docs_out)
-
- docs_content = pull_out_data(out_df[0].astype(str))
-
- docs_meta = pull_out_data(out_df[1].astype(str))
-
- doc_sources = [d['source'] for d in docs_meta]
-
- return out_df, docs_content, docs_meta, doc_sources
-
-# ## Create embeddings and save faiss vector store to the path specified in `save_to`
-
-def load_embeddings(model_name = "BAAI/bge-base-en-v1.5"):
-
- #if model_name == "hkunlp/instructor-large":
- # embeddings_func = HuggingFaceInstructEmbeddings(model_name=model_name,
- # embed_instruction="Represent the paragraph for retrieval: ",
- # query_instruction="Represent the question for retrieving supporting documents: "
- # )
-
- #else:
- embeddings_func = HuggingFaceEmbeddings(model_name=model_name)
-
- global embeddings
-
- embeddings = embeddings_func
-
- return embeddings_func
-
-def embed_faiss_save_to_zip(docs_out, save_to="faiss_lambeth_census_embedding", model_name = "BAAI/bge-base-en-v1.5"):
-
- load_embeddings(model_name=model_name)
-
- #embeddings_fast = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
-
- print(f"> Total split documents: {len(docs_out)}")
-
- vectorstore = FAISS.from_documents(documents=docs_out, embedding=embeddings)
-
-
- if Path(save_to).exists():
- vectorstore.save_local(folder_path=save_to)
-
- print("> DONE")
- print(f"> Saved to: {save_to}")
-
- ### Save as zip, then remove faiss/pkl files to allow for upload to huggingface
-
- import shutil
-
- shutil.make_archive(save_to, 'zip', save_to)
-
- os.remove(save_to + "/index.faiss")
- os.remove(save_to + "/index.pkl")
-
- shutil.move(save_to + '.zip', save_to + "/" + save_to + '.zip')
-
- return vectorstore
-
-def docs_to_chroma_save(embeddings, docs_out:PandasDataFrame, save_to:str):
- print(f"> Total split documents: {len(docs_out)}")
-
- vectordb = Chroma.from_documents(documents=docs_out,
- embedding=embeddings,
- persist_directory=save_to)
-
-    # persist the db to disk
- vectordb.persist()
-
- print("> DONE")
- print(f"> Saved to: {save_to}")
-
- return vectordb
-
-def sim_search_local_saved_vec(query, k_val, save_to="faiss_lambeth_census_embedding"):
-
- load_embeddings()
-
- docsearch = FAISS.load_local(folder_path=save_to, embeddings=embeddings)
-
-
-    display(Markdown(query))
-
- search = docsearch.similarity_search_with_score(query, k=k_val)
-
- for item in search:
- print(item[0].page_content)
- print(f"Page: {item[0].metadata['source']}")
- print(f"Date: {item[0].metadata['date']}")
- print(f"Score: {item[1]}")
- print("---")
diff --git a/spaces/segments-tobias/conex/espnet2/layers/utterance_mvn.py b/spaces/segments-tobias/conex/espnet2/layers/utterance_mvn.py
deleted file mode 100644
index a41f869f322af8db54619731f268da5e730957c0..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet2/layers/utterance_mvn.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import Tuple
-
-import torch
-from typeguard import check_argument_types
-
-from espnet.nets.pytorch_backend.nets_utils import make_pad_mask
-from espnet2.layers.abs_normalize import AbsNormalize
-
-
-class UtteranceMVN(AbsNormalize):
- def __init__(
- self,
- norm_means: bool = True,
- norm_vars: bool = False,
- eps: float = 1.0e-20,
- ):
- assert check_argument_types()
- super().__init__()
- self.norm_means = norm_means
- self.norm_vars = norm_vars
- self.eps = eps
-
- def extra_repr(self):
- return f"norm_means={self.norm_means}, norm_vars={self.norm_vars}"
-
- def forward(
- self, x: torch.Tensor, ilens: torch.Tensor = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- """Forward function
-
- Args:
- x: (B, L, ...)
- ilens: (B,)
-
- """
- return utterance_mvn(
- x,
- ilens,
- norm_means=self.norm_means,
- norm_vars=self.norm_vars,
- eps=self.eps,
- )
-
-
-def utterance_mvn(
- x: torch.Tensor,
- ilens: torch.Tensor = None,
- norm_means: bool = True,
- norm_vars: bool = False,
- eps: float = 1.0e-20,
-) -> Tuple[torch.Tensor, torch.Tensor]:
- """Apply utterance mean and variance normalization
-
- Args:
- x: (B, T, D), assumed zero padded
- ilens: (B,)
- norm_means:
- norm_vars:
- eps:
-
- """
- if ilens is None:
- ilens = x.new_full([x.size(0)], x.size(1))
- ilens_ = ilens.to(x.device, x.dtype).view(-1, *[1 for _ in range(x.dim() - 1)])
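-    # ilens_ is reshaped to (B, 1, ..., 1) so it broadcasts against x when dividing the sums below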
- # Zero padding
- if x.requires_grad:
- x = x.masked_fill(make_pad_mask(ilens, x, 1), 0.0)
- else:
- x.masked_fill_(make_pad_mask(ilens, x, 1), 0.0)
- # mean: (B, 1, D)
- mean = x.sum(dim=1, keepdim=True) / ilens_
-
- if norm_means:
- x -= mean
-
- if norm_vars:
- var = x.pow(2).sum(dim=1, keepdim=True) / ilens_
- std = torch.clamp(var.sqrt(), min=eps)
- x = x / std.sqrt()
- return x, ilens
- else:
- if norm_vars:
- y = x - mean
- y.masked_fill_(make_pad_mask(ilens, y, 1), 0.0)
- var = y.pow(2).sum(dim=1, keepdim=True) / ilens_
- std = torch.clamp(var.sqrt(), min=eps)
- x /= std
- return x, ilens
diff --git a/spaces/segments-tobias/conex/espnet2/tts/feats_extract/dio.py b/spaces/segments-tobias/conex/espnet2/tts/feats_extract/dio.py
deleted file mode 100644
index 48f7249aa1f9c8253ccdf48ac007375aca4c1829..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet2/tts/feats_extract/dio.py
+++ /dev/null
@@ -1,187 +0,0 @@
-# Copyright 2020 Nagoya University (Tomoki Hayashi)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""F0 extractor using DIO + Stonemask algorithm."""
-
-import logging
-
-from typing import Any
-from typing import Dict
-from typing import Tuple
-from typing import Union
-
-import humanfriendly
-import numpy as np
-import pyworld
-import torch
-import torch.nn.functional as F
-
-from scipy.interpolate import interp1d
-from typeguard import check_argument_types
-
-from espnet.nets.pytorch_backend.nets_utils import pad_list
-from espnet2.tts.feats_extract.abs_feats_extract import AbsFeatsExtract
-
-
-class Dio(AbsFeatsExtract):
-    """F0 estimation with dio + stonemask algorithm.
-
-    This is an F0 extractor based on the dio + stonemask algorithm introduced in `WORLD:
- a vocoder-based high-quality speech synthesis system for real-time applications`_.
-
- .. _`WORLD: a vocoder-based high-quality speech synthesis system for real-time
- applications`: https://doi.org/10.1587/transinf.2015EDP7457
-
- Note:
- This module is based on NumPy implementation. Therefore, the computational graph
- is not connected.
-
- Todo:
- Replace this module with PyTorch-based implementation.
-
- """
-
- def __init__(
- self,
- fs: Union[int, str] = 22050,
- n_fft: int = 1024,
- hop_length: int = 256,
- f0min: int = 80,
- f0max: int = 400,
- use_token_averaged_f0: bool = True,
- use_continuous_f0: bool = True,
- use_log_f0: bool = True,
- reduction_factor: int = None,
- ):
- assert check_argument_types()
- super().__init__()
- if isinstance(fs, str):
- fs = humanfriendly.parse_size(fs)
- self.fs = fs
- self.n_fft = n_fft
- self.hop_length = hop_length
- self.frame_period = 1000 * hop_length / fs
- self.f0min = f0min
- self.f0max = f0max
- self.use_token_averaged_f0 = use_token_averaged_f0
- self.use_continuous_f0 = use_continuous_f0
- self.use_log_f0 = use_log_f0
- if use_token_averaged_f0:
- assert reduction_factor >= 1
- self.reduction_factor = reduction_factor
-
- def output_size(self) -> int:
- return 1
-
- def get_parameters(self) -> Dict[str, Any]:
- return dict(
- fs=self.fs,
- n_fft=self.n_fft,
- hop_length=self.hop_length,
- f0min=self.f0min,
- f0max=self.f0max,
- use_token_averaged_f0=self.use_token_averaged_f0,
- use_continuous_f0=self.use_continuous_f0,
- use_log_f0=self.use_log_f0,
- reduction_factor=self.reduction_factor,
- )
-
- def forward(
- self,
- input: torch.Tensor,
- input_lengths: torch.Tensor = None,
- feats_lengths: torch.Tensor = None,
- durations: torch.Tensor = None,
- durations_lengths: torch.Tensor = None,
- ) -> Tuple[torch.Tensor, torch.Tensor]:
-        # If not provided, we assume that the inputs have the same length
- if input_lengths is None:
- input_lengths = (
- input.new_ones(input.shape[0], dtype=torch.long) * input.shape[1]
- )
-
- # F0 extraction
- pitch = [self._calculate_f0(x[:xl]) for x, xl in zip(input, input_lengths)]
-
- # (Optional): Adjust length to match with the mel-spectrogram
- if feats_lengths is not None:
- pitch = [
- self._adjust_num_frames(p, fl).view(-1)
- for p, fl in zip(pitch, feats_lengths)
- ]
-
- # (Optional): Average by duration to calculate token-wise f0
- if self.use_token_averaged_f0:
- durations = durations * self.reduction_factor
- pitch = [
- self._average_by_duration(p, d).view(-1)
- for p, d in zip(pitch, durations)
- ]
- pitch_lengths = durations_lengths
- else:
- pitch_lengths = input.new_tensor([len(p) for p in pitch], dtype=torch.long)
-
- # Padding
- pitch = pad_list(pitch, 0.0)
-
- # Return with the shape (B, T, 1)
- return pitch.unsqueeze(-1), pitch_lengths
-
- def _calculate_f0(self, input: torch.Tensor) -> torch.Tensor:
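-        # Extract a coarse F0 contour with WORLD's dio and refine it with stonemask,
-        # then optionally interpolate unvoiced regions and convert to log scale.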
- x = input.cpu().numpy().astype(np.double)
- f0, timeaxis = pyworld.dio(
- x,
- self.fs,
- f0_floor=self.f0min,
- f0_ceil=self.f0max,
- frame_period=self.frame_period,
- )
- f0 = pyworld.stonemask(x, f0, timeaxis, self.fs)
- if self.use_continuous_f0:
- f0 = self._convert_to_continuous_f0(f0)
- if self.use_log_f0:
- nonzero_idxs = np.where(f0 != 0)[0]
- f0[nonzero_idxs] = np.log(f0[nonzero_idxs])
- return input.new_tensor(f0.reshape(-1), dtype=torch.float)
-
- @staticmethod
- def _adjust_num_frames(x: torch.Tensor, num_frames: torch.Tensor) -> torch.Tensor:
- if num_frames > len(x):
- x = F.pad(x, (0, num_frames - len(x)))
- elif num_frames < len(x):
- x = x[:num_frames]
- return x
-
- @staticmethod
- def _convert_to_continuous_f0(f0: np.array) -> np.array:
- if (f0 == 0).all():
-            logging.warning("All frames seem to be unvoiced.")
- return f0
-
- # padding start and end of f0 sequence
- start_f0 = f0[f0 != 0][0]
- end_f0 = f0[f0 != 0][-1]
- start_idx = np.where(f0 == start_f0)[0][0]
- end_idx = np.where(f0 == end_f0)[0][-1]
- f0[:start_idx] = start_f0
- f0[end_idx:] = end_f0
-
- # get non-zero frame index
- nonzero_idxs = np.where(f0 != 0)[0]
-
- # perform linear interpolation
- interp_fn = interp1d(nonzero_idxs, f0[nonzero_idxs])
- f0 = interp_fn(np.arange(0, f0.shape[0]))
-
- return f0
-
- def _average_by_duration(self, x: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
- assert 0 <= len(x) - d.sum() < self.reduction_factor
- d_cumsum = F.pad(d.cumsum(dim=0), (1, 0))
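-        # Average only the voiced (f0 > 0) frames inside each token's duration span;
-        # spans with no voiced frames are assigned 0.0.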
- x_avg = [
- x[start:end].masked_select(x[start:end].gt(0.0)).mean(dim=0)
- if len(x[start:end].masked_select(x[start:end].gt(0.0))) != 0
- else x.new_tensor(0.0)
- for start, end in zip(d_cumsum[:-1], d_cumsum[1:])
- ]
- return torch.stack(x_avg)
diff --git a/spaces/shi-labs/OneFormer/oneformer/modeling/backbone/__init__.py b/spaces/shi-labs/OneFormer/oneformer/modeling/backbone/__init__.py
deleted file mode 100644
index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/OneFormer/oneformer/modeling/backbone/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/spaces/shigel/recipe_0626/template.md b/spaces/shigel/recipe_0626/template.md
deleted file mode 100644
index 458969bcd8ce477b8a39b8e150697b2daed2e8bd..0000000000000000000000000000000000000000
--- a/spaces/shigel/recipe_0626/template.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Recipe: {title}
-
-{description}
-
-## Ingredients
-
-{ingredients}
-
-## Instructions
-
-{instruction}
-
-## Impressions and taste of this recipe
-
-{comment_feelings_taste}
-
-## Description for people with visual impairments
-
-{explanation_to_blind_person}
-
-## Image generation prompt
-
-{prompt_for_visual_expression}
diff --git a/spaces/shikunl/prismer/prismer/experts/generate_edge.py b/spaces/shikunl/prismer/prismer/experts/generate_edge.py
deleted file mode 100644
index 5d8f58d636b725ed3e50d3e8ce919cb80ab2e4f7..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/generate_edge.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) 2023, NVIDIA Corporation & Affiliates. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://github.com/NVlabs/prismer/blob/main/LICENSE
-
-import torch
-import os
-try:
- import ruamel_yaml as yaml
-except ModuleNotFoundError:
- import ruamel.yaml as yaml
-
-from experts.model_bank import load_expert_model
-from experts.edge.generate_dataset import Dataset
-from experts.edge.images import fuse_edge
-import PIL.Image as Image
-from accelerate import Accelerator
-from tqdm import tqdm
-
-
-model, transform = load_expert_model(task='edge')
-accelerator = Accelerator(mixed_precision='fp16')
-
-config = yaml.load(open('configs/experts.yaml', 'r'), Loader=yaml.Loader)
-save_path = os.path.join(config['save_path'], 'edge')
-
-batch_size = 64
-dataset = Dataset(config, transform)
-data_loader = torch.utils.data.DataLoader(
- dataset=dataset,
- batch_size=batch_size,
- shuffle=False,
- num_workers=4,
- pin_memory=True
-)
-
-model, data_loader = accelerator.prepare(model, data_loader)
-
-with torch.no_grad():
- for i, (test_data, img_path, img_size) in enumerate(tqdm(data_loader)):
- test_pred = model(test_data)
- fuses = fuse_edge(test_pred)
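-        # combine the raw edge predictions into one edge map per image before saving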
- for k in range(len(fuses)):
- edge = fuses[k]
- img_path_split = img_path[k].split('/')
- ps = img_path[k].split('.')[-1]
- im_save_path = os.path.join(save_path, img_path_split[-3], img_path_split[-2])
- os.makedirs(im_save_path, exist_ok=True)
-
- im_size = img_size[0][k].item(), img_size[1][k].item()
- edge = Image.fromarray(edge).convert('L')
- edge = edge.resize((im_size[0], im_size[1]), resample=Image.Resampling.BILINEAR)
- edge.save(os.path.join(im_save_path, img_path_split[-1].replace(f'.{ps}', '.png')))
-
-
diff --git a/spaces/shuvojitkoley007/mrs-shuvojit-koley/setup.sh b/spaces/shuvojitkoley007/mrs-shuvojit-koley/setup.sh
deleted file mode 100644
index 4e926ca33ee2fac91e8027e043da18650600d4bd..0000000000000000000000000000000000000000
--- a/spaces/shuvojitkoley007/mrs-shuvojit-koley/setup.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-mkdir -p ~/.streamlit/
-
-echo "\
-[server]\n\
-port = $PORT\n\
-enableCORS = false\n\
-headless = true\n\
-\n\
-" > ~/.streamlit/config.toml
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download GTA Vice City Cars Mod APK for Android and Drive Your Dream Car.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download GTA Vice City Cars Mod APK for Android and Drive Your Dream Car.md
deleted file mode 100644
index 7a4276ad217d38374d9335893f2bc78e911b67c3..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download GTA Vice City Cars Mod APK for Android and Drive Your Dream Car.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-GTA Vice City Cars Mod Apk for Android: How to Download and Install
- Introduction
- If you are a fan of Grand Theft Auto (GTA) series, you probably know that GTA Vice City is one of the most popular and iconic games in the franchise. Released in 2002, GTA Vice City is an open-world action-adventure game that lets you explore the fictional city of Vice City, based on Miami, in the 1980s. You can drive various vehicles, complete missions, engage in shootouts, and enjoy the retro atmosphere of the game.
-gta vice city cars mod apk for android
DOWNLOAD ⇒ https://ssurll.com/2uNRH7
- But what if you want to spice up your GTA Vice City experience with some new and improved cars? Well, that's where GTA Vice City cars mods come in. These are modifications or add-ons that replace or enhance the original cars in the game with new models, textures, features, and performance. And the best part is, you can enjoy these mods on your Android device as well. In this article, we will show you how to download and install GTA Vice City cars mod apk for Android.
- What is GTA Vice City?
- GTA Vice City is the fourth main installment in the GTA series, developed by Rockstar North and published by Rockstar Games. It was originally released for PlayStation 2 in October 2002, and later for Windows, Xbox, iOS, Android, and other platforms. The game is set in 1986, and follows the story of Tommy Vercetti, a former mafia hitman who is sent to Vice City by his boss to establish a criminal empire. The game features a large and diverse map, with various locations, landmarks, vehicles, weapons, characters, and missions.
- What are GTA Vice City cars mods?
- GTA Vice City cars mods are modifications or add-ons that change or improve the original cars in the game with new ones. These mods can be created by fans or developers using various tools and software. Some of the most popular GTA Vice City cars mods include:
-
-- Realistic Car Pack: This mod replaces all the cars in the game with real-life models from various brands and eras.
-- Lamborghini Car Pack: This mod adds several Lamborghini models to the game, such as Diablo, Countach, Murcielago, Gallardo, Aventador, and more.
-- Ferrari Car Pack: This mod adds several Ferrari models to the game, such as Testarossa, F40, F50, Enzo, 458 Italia, LaFerrari, and more.
-- Bugatti Car Pack: This mod adds several Bugatti models to the game, such as EB110, Veyron, Chiron, Divo, Centodieci, and more.
-- BMW Car Pack: This mod adds several BMW models to the game, such as M3, M5, M6, Z4, i8, X5, X6, and more.
-- Mercedes-Benz Car Pack: This mod adds several Mercedes-Benz models to the game, such as SLR McLaren, SLS AMG, SLK55 AMG, CLS63 AMG, G63 AMG, and more.
-
- These are just some examples of GTA Vice City cars mods. There are many more available online for you to choose from.
- Why use GTA Vice City cars mod apk for Android?
- There are many reasons why you might want to use GTA Vice City cars mod apk for Android
Some of the reasons why you might want to use GTA Vice City cars mod apk for Android are:
-
-- You can enjoy the game with more realistic and modern cars that suit your taste and style.
-- You can customize the cars with different colors, decals, wheels, spoilers, and other accessories.
-- You can improve the performance and handling of the cars with various upgrades and modifications.
-- You can have more fun and challenge with the game by driving faster, smoother, and more powerful cars.
-- You can experience the game in a new and fresh way with different cars for different missions and scenarios.
-
- Of course, using GTA Vice City cars mod apk for Android also has some risks and drawbacks, such as:
-gta vice city new cars mod apk download for android
-how to install gta vice city car mods on android
-gta vice city deluxe car mod apk for android
-gta vice city real cars mod apk free download for android
-gta vice city super cars mod apk latest version for android
-gta vice city best car mods apk for android offline
-gta vice city sports cars mod apk unlimited money for android
-gta vice city classic cars mod apk full version for android
-gta vice city hd cars mod apk highly compressed for android
-gta vice city ultimate car mod apk no root for android
-gta vice city flying car mod apk cheat code for android
-gta vice city all cars unlocked mod apk data for android
-gta vice city custom cars mod apk obb for android
-gta vice city rare cars mod apk revdl for android
-gta vice city fast and furious cars mod apk rexdl for android
-gta vice city police car mod apk hack for android
-gta vice city bike and car mod apk gameplay for android
-gta vice city lamborghini car mod apk online for android
-gta vice city ferrari car mod apk update for android
-gta vice city bmw car mod apk 2023 for android
-gta vice city bugatti car mod apk 2022 for android
-gta vice city audi car mod apk 2021 for android
-gta vice city mercedes car mod apk 2020 for android
-gta vice city mustang car mod apk 2019 for android
-gta vice city jeep car mod apk 2018 for android
-gta vice city taxi car mod apk 2017 for android
-gta vice city hummer car mod apk 2016 for android
-gta vice city limo car mod apk 2015 for android
-gta vice city monster truck car mod apk 2014 for android
-gta vice city tank car mod apk 2013 for android
-gta vice city helicopter car mod apk 2012 for android
-gta vice city boat car mod apk 2011 for android
-gta vice city train car mod apk 2010 for android
-gta vice city plane car mod apk 2009 for android
-gta vice city rocket car mod apk 2008 for android
-gta vice city submarine car mod apk 2007 for android
-gta vice city batmobile car mod apk 2006 for android
-gta vice city ghost rider car mod apk 2005 for android
-gta vice city spiderman car mod apk 2004 for android
-gta vice city hulk car mod apk 2003 for android
-gta vice city iron man car mod apk 2002 for android
-gta vice city thor car mod apk 2001 for android
-gta vice city captain america car mod apk 2000 for android
-gta vice city deadpool car mod apk 1999 for android
-gta vice city wolverine car mod apk 1998 for android
-gta vice city harry potter car mod apk 1997 for android
-gta vice city star wars car mod apk 1996 for android
-gta vice city lord of the rings car mod apk 1995 for android
-
-- You might encounter some compatibility issues or bugs with the modded cars or the game itself.
-- You might need to root your device or use a third-party app to install the modded apk files.
-- You might violate the terms and conditions of the game or the platform by using unauthorized mods.
-- You might lose your progress or data if you uninstall or update the game or the mods.
-- You might face legal issues or penalties if you use pirated or stolen mods or apk files.
-
- Therefore, you should use GTA Vice City cars mod apk for Android at your own risk and discretion. You should also respect the rights and credits of the original creators and developers of the game and the mods.
- How to download GTA Vice City cars mod apk for Android
- If you are interested in downloading GTA Vice City cars mod apk for Android, you will need to follow some steps and requirements. Here is a general guide on how to do it:
- Requirements
- Before you start downloading GTA Vice City cars mod apk for Android, you will need to have some things ready:
-
-- An Android device that meets the minimum specifications of the game. The recommended specs are: Android 7.0 or higher, 1.5 GB of RAM, 2.5 GB of free storage space, and a quad-core processor.
-- A stable internet connection to download the files.
-- A file manager app to access and manage the files on your device.
-- A zip extractor app to unzip or extract the compressed files.
-- A backup of your game data and progress in case something goes wrong.
-
- Steps
- Once you have everything ready, you can follow these steps to download GTA Vice City cars mod apk for Android:
- Step 1: Download the GTA Vice City apk and data files
- The first step is to download the GTA Vice City apk and data files from a trusted source. You can either buy the official version from Google Play Store or download a free version from a reputable website. Make sure you download the latest version of the game that is compatible with your device. The apk file is usually around 10 MB, while the data file is around 1.4 GB. You can use any browser or downloader app to download the files. Save them in a folder on your device or on an external SD card.
- Step 2: Install the GTA Vice City apk file
- The next step is to install the GTA Vice City apk file on your device. Before you do that, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do that, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the GTA Vice City apk file on your file manager app and tap on it. Follow the instructions on the screen to install the app. Do not launch the game yet.
- Step 3: Extract and copy the data files to the Android/obb folder
- The third step is to extract and copy the data files to the Android/obb folder on your device. To do that, locate the GTA Vice City data file on your file manager app and tap on it. You will need a zip extractor app to unzip or extract the file. You will get a folder named com.rockstargames.gtavc. Copy this folder and paste it in the Android/obb folder on your device's internal storage or SD card. If you don't have an obb folder, create one.
- Step 4: Download and install the GTA Vice City cars mod apk file
- The fourth step is to download and install the GTA Vice City cars mod apk file on your device. You can find various GTA Vice City cars mod apk files online from different sources.
Some of the sources where you can find GTA Vice City cars mod apk files are:
-
-- [GTAinside](^1^): This is one of the largest and most popular GTA mod databases, where you can find thousands of mods for GTA Vice City and other GTA games. You can browse by categories, such as cars, bikes, boats, planes, helicopters, skins, tools, scripts, and more. You can also filter by platforms, such as PC, PS2, Xbox, iOS, Android, and more. You can download the mods for free and rate and comment on them.
-- [GTAModding](https://www.gtamodding.com/wiki/Category:Vice_City_Mods): This is another GTA mod database, where you can find various mods for GTA Vice City and other GTA games. You can browse by categories, such as vehicles, weapons, maps, textures, sounds, missions, scripts, and more. You can also filter by platforms, such as PC, PS2, Xbox, iOS, Android, and more. You can download the mods for free and contribute to the wiki.
-- [Android-1](https://android-1.com/en/4744-grand-theft-auto-vice-city-mod.html): This is a website where you can find various modded apk files for Android games and apps. You can find GTA Vice City cars mod apk files here, as well as other mods for the game. You can download the mods for free and install them on your device.
-
- These are just some examples of sources where you can find GTA Vice City cars mod apk files. There are many more available online for you to choose from. However, you should be careful and cautious when downloading and installing any modded apk files from unknown or untrusted sources. They might contain viruses, malware, spyware, or other harmful or illegal content. You should also check the reviews and ratings of the mods before downloading them.
- Step 5: Launch the game and enjoy the new cars
- The final step is to launch the game and enjoy the new cars. To do that, simply tap on the GTA Vice City icon on your device's home screen or app drawer. The game will load and you will see the new cars in the game. You can select any car you want from the garage or from the street. You can also customize the cars with different colors, decals, wheels, spoilers, and other accessories. You can drive the cars around the city and complete missions with them. You can also switch between the original and modded cars anytime you want.
- Conclusion
- Summary
- In this article, we have shown you how to download and install GTA Vice City cars mod apk for Android. We have explained what GTA Vice City is, what GTA Vice City cars mods are, why use GTA Vice City cars mod apk for Android , and how to download and install GTA Vice City cars mod apk for Android. We have also provided some sources where you can find various GTA Vice City cars mod apk files, as well as some tips and warnings on using them. We hope you have found this article helpful and informative.
- FAQs
- Here are some frequently asked questions about GTA Vice City cars mod apk for Android:
-
-- Q: Can I use GTA Vice City cars mod apk for Android without rooting my device?
-A: Yes, you can use GTA Vice City cars mod apk for Android without rooting your device. However, you might need to use a third-party app or tool to install the modded apk files on your device.
-- Q: Can I use GTA Vice City cars mod apk for Android with the original game?
-A: Yes, you can use GTA Vice City cars mod apk for Android with the original game. However, you might need to backup your game data and progress before installing the mods, as they might overwrite or delete them.
-- Q: Can I use GTA Vice City cars mod apk for Android online or offline?
-A: You can use GTA Vice City cars mod apk for Android online or offline. However, you might face some issues or errors when playing online, as the mods might not be compatible or supported by the game servers or other players.
-- Q: Can I use GTA Vice City cars mod apk for Android with other mods?
-A: You can use GTA Vice City cars mod apk for Android with other mods, such as skins, weapons, maps, missions, scripts, and more. However, you might need to check the compatibility and compatibility of the mods before installing them, as they might conflict or interfere with each other.
-- Q: Can I uninstall GTA Vice City cars mod apk for Android?
-A: Yes, you can uninstall GTA Vice City cars mod apk for Android. However, you might need to delete the modded apk files and data files from your device manually, as they might not be removed automatically.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Mortal Kombat Mod APK v5.2.0 and Enjoy the Ultimate Fighting Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Mortal Kombat Mod APK v5.2.0 and Enjoy the Ultimate Fighting Experience.md
deleted file mode 100644
index e7a506db750ba237dd506172c2a817e9f8353a21..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Mortal Kombat Mod APK v5.2.0 and Enjoy the Ultimate Fighting Experience.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-Download Mortal Kombat Mod APK Latest Version
-If you are a fan of fighting games, you must have heard of Mortal Kombat, one of the most popular and iconic franchises in the genre. Mortal Kombat is a game that combines brutal combat, stunning visuals, and immersive storytelling to create an unforgettable experience. However, if you want to enjoy the game to the fullest, you might need to spend some real money to unlock all the features and content. That's why we recommend you to download Mortal Kombat mod apk, a modified version of the game that gives you unlimited resources, unlocked characters, and more. In this article, we will tell you what Mortal Kombat is, what features it has, why you should download Mortal Kombat mod apk, and how to do it. Let's get started!
-download mortal kombat mod apk latest version
Download File ››››› https://ssurll.com/2uNU4H
-What is Mortal Kombat?
-Mortal Kombat is a fighting game developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment. It is the eleventh main installment in the Mortal Kombat series, which debuted in 1992. The game features a diverse roster of fighters from different realms and factions, who compete in a tournament called Mortal Kombat to determine the fate of the universe. The game also has a rich story mode that follows the events of the previous games and introduces new characters and twists.
-Features of Mortal Kombat
-Mortal Kombat is a game that offers a lot of features and content for players to enjoy. Here are some of them:
-Stunning graphics and animations
-Mortal Kombat boasts impressive graphics and animations that bring the characters and environments to life. The game uses Unreal Engine 4, which allows for realistic lighting, shadows, textures, and effects. The game also features cinematic cutscenes that enhance the story and atmosphere. Moreover, the game is famous for its gore and violence, which are depicted in detail with blood, bones, organs, and fatalities.
-Epic roster of fighters
-Mortal Kombat has a huge roster of fighters that includes both classic and new characters. You can choose from over 100 characters, each with their own unique skills, abilities, weapons, and personalities. Some of the fan-favorite characters are Scorpion, Sub-Zero, Raiden, Liu Kang, Sonya Blade, Johnny Cage, Kitana, Mileena, Shao Kahn, and more. You can also play as guest characters from other franchises, such as Terminator, RoboCop, Joker, Spawn, Rambo, and more.
-download shadow fight 2 mod apk unlimited money and gems
-download shadow fight 2 mod apk latest version 2023
-download shadow fight 2 mod apk titan mode
-download shadow fight 2 mod apk special edition
-download shadow fight 2 mod apk max level
-download shadow fight 2 mod apk all weapons unlocked
-download shadow fight 2 mod apk offline
-download shadow fight 2 mod apk for android
-download shadow fight 2 mod apk no root
-download shadow fight 2 mod apk free shopping
-download shadow fight 2 mod apk god mode
-download shadow fight 2 mod apk unlimited everything
-download shadow fight 2 mod apk mega mod
-download shadow fight 2 mod apk hack
-download shadow fight 2 mod apk rexdl
-download shadow fight 2 mod apk revdl
-download shadow fight 2 mod apk happymod
-download shadow fight 2 mod apk android 1
-download shadow fight 2 mod apk highly compressed
-download shadow fight 2 mod apk full version
-download shadow fight 2 mod apk with obb file
-download shadow fight 2 mod apk unlimited energy
-download shadow fight 2 mod apk super magic
-download shadow fight 2 mod apk all bosses unlocked
-download shadow fight 2 mod apk all levels unlocked
-download shadow fight 2 mod apk unlimited coins and diamonds
-download shadow fight 2 mod apk unlimited orbs and gems
-download shadow fight 2 mod apk unlimited health and ammo
-download shadow fight 2 mod apk unlimited weapons and armor
-download shadow fight 2 mod apk unlimited skills and abilities
-download shadow fight 2 mod apk premium features unlocked
-download shadow fight 2 mod apk new update
-download shadow fight 2 mod apk old version
-download shadow fight 2 mod apk original version
-download shadow fight 2 mod apk pure version
-download shadow fight 2 mod apk best version
-download shadow fight 2 mod apk working version
-download shadow fight 2 mod apk safe version
-download shadow fight 2 mod apk virus free version
-download shadow fight 2 mod apk no ads version
-Thrilling game modes and challenges
-Mortal Kombat offers various game modes and challenges for players to test their skills and have fun. You can play solo or with friends in online or offline modes. Some of the game modes are:
-
-- Story mode: Follow the epic story of Mortal Kombat as you play as different characters and witness their struggles and triumphs.
-- Tower mode: Fight your way through different towers with varying difficulties and rewards.
-- Faction Wars: Join one of the five factions (Lin Kuei, White Lotus, Special Forces, Black Dragon, or Brotherhood of Shadow) and compete with other players for points and prizes.
-- Quest mode: Send your characters on quests to earn rewards and experience.
-- Relic Hunt: Search for ancient relics and unlock powerful items.
-- Trial mode: Complete daily and weekly challenges to earn rewards.
-
Customizable characters and gear
-Mortal Kombat allows you to customize your characters and gear to suit your preferences and playstyle. You can change the appearance, skills, and equipment of your characters, as well as their taunts, intros, and victory poses. You can also collect and upgrade different types of gear, such as weapons, armor, accessories, and consumables. You can use the gear to enhance your stats, abilities, and performance in battles.
-Why download Mortal Kombat mod apk?
-Mortal Kombat is a great game, but it also has some drawbacks. For example, the game requires a lot of storage space and internet connection. The game also has some in-app purchases and ads that can be annoying and expensive. That's why you might want to download Mortal Kombat mod apk, a modified version of the game that gives you some advantages and benefits. Here are some of them:
-Unlimited souls and money
-Souls and money are the main currencies in Mortal Kombat. You need them to buy and upgrade characters, gear, cards, and other items. However, earning them can be slow and tedious, especially if you want to get the best ones. With Mortal Kombat mod apk, you don't have to worry about that. You will get unlimited souls and money, so you can buy anything you want without any limitations.
-Unlocked characters and items
-Mortal Kombat has a lot of characters and items, but not all of them are available from the start. Some of them are locked behind paywalls, timers, or events. This can be frustrating if you want to try them out or use them in your battles. With Mortal Kombat mod apk, you don't have to wait or pay for anything. You will get all the characters and items unlocked from the beginning, so you can enjoy the full content of the game.
-No ads and root required
-Mortal Kombat has some ads that can interrupt your gameplay and ruin your immersion. The ads can also consume your data and battery life. Moreover, some mod apks require you to root your device, which can be risky and complicated. With Mortal Kombat mod apk, you don't have to deal with any of that. You will get rid of all the ads and enjoy a smooth and uninterrupted gaming experience. You also don't need to root your device, as the mod apk works on any Android device.
-How to download and install Mortal Kombat mod apk?
-If you are interested in downloading and installing Mortal Kombat mod apk, you can follow these simple steps:
-Step 1: Download the mod apk file from a trusted source
-The first thing you need to do is to find a reliable source that provides the mod apk file for Mortal Kombat. You can search online for different websites or forums that offer the file. However, be careful not to download from shady or suspicious sources, as they might contain viruses or malware that can harm your device. We recommend you to use this link to download the mod apk file safely and securely.
-Step 2: Enable unknown sources on your device settings
-The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings > security > unknown sources > enable. This might vary depending on your device model and Android version.
-Step 3: Install the mod apk file and launch the game
-The final thing you need to do is to install the mod apk file and launch the game. To do this, go to your file manager > downloads > tap on the mod apk file > install > open. The game will start automatically and you will see the mod features activated.
-Conclusion
-Mortal Kombat is one of the best fighting games ever made. It has amazing graphics, gameplay, story, characters, modes, and features that will keep you entertained for hours. However, if you want to enhance your gaming experience even more, you should download Mortal Kombat mod apk. This will give you unlimited resources, unlocked content, no ads, no root required, and more benefits that will make you enjoy the game like never before.
-FAQs
-
-- Q: Is Mortal Kombat mod apk safe to use?
-- A: Yes, Mortal Kombat mod apk is safe to use as long as you download it from a trusted source like this one. However, we advise you to use it at your own risk and responsibility.
-- Q: Does Mortal Kombat mod apk work on all devices?
-- A: Mortal Kombat mod apk works on most Android devices that meet the minimum requirements of the game. However, some devices might not be compatible or have some issues with the mod apk. If you encounter any problems, you can try to reinstall the mod apk or contact the developer for support.
-- Q: Can I play online with Mortal Kombat mod apk?
-- A: Mortal Kombat mod apk allows you to play online with other players, but you might face some restrictions or bans from the game server. This is because the mod apk gives you an unfair advantage over other players, which might be detected and reported by the game system. Therefore, we suggest you to use the mod apk for offline modes only, or use a VPN to hide your IP address and location.
-- Q: How can I update Mortal Kombat mod apk?
-- A: Mortal Kombat mod apk is updated regularly by the developer to match the latest version of the game. You can check for updates on this website or on the developer's page. To update the mod apk, you need to download and install the new file over the old one. You don't need to uninstall the previous version, as your data and progress will be saved.
-- Q: Where can I find more mod apks for other games?
-- A: If you are looking for more mod apks for other games, you can visit this website or search online for other sources. However, be careful not to download from untrustworthy or malicious sources, as they might contain viruses or malware that can harm your device. Always scan the files before installing them and use a good antivirus software.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/sklearn-docs/MNIST_classification_using_multinomial_logistic_L1/app.py b/spaces/sklearn-docs/MNIST_classification_using_multinomial_logistic_L1/app.py
deleted file mode 100644
index d56f1c609caccda0b85bc82939503565888c628c..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/MNIST_classification_using_multinomial_logistic_L1/app.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import plotly.express as px
-import numpy as np
-
-from sklearn.datasets import fetch_openml
-from sklearn.linear_model import LogisticRegression
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import StandardScaler
-from sklearn.utils import check_random_state
-import gradio as gr
-
-
-# Load data from https://www.openml.org/d/554
-X, y = fetch_openml(
- "mnist_784", version=1, return_X_y=True, as_frame=False, parser="pandas"
-)
-
-print("Data loaded")
-random_state = check_random_state(0)
-permutation = random_state.permutation(X.shape[0])
-X = X[permutation]
-y = y[permutation]
-X = X.reshape((X.shape[0], -1))
-
-
-scaler = StandardScaler()
-
-
-def dataset_display(digit, count_per_digit, binary_image):
- if digit not in range(10):
- # return a figure displaying an error message
- return px.imshow(
- np.zeros((28, 28)),
- labels=dict(x="Pixel columns", y="Pixel rows"),
- title=f"Digit {digit} is not in the data",
- )
-
- binary_value = True if binary_image == 1 else False
- digit_idxs = np.where(y == str(digit))[0]
- random_idxs = np.random.choice(digit_idxs, size=count_per_digit, replace=False)
-
- fig = px.imshow(
- np.array([X[i].reshape(28, 28) for i in random_idxs]),
- labels=dict(x="Pixel columns", y="Pixel rows"),
- title=f"Examples of Digit {digit} in Data",
- facet_col=0,
- facet_col_wrap=5,
- binary_string=binary_value,
- )
-
- return fig
-
-
-def predict(img):
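-    # Flatten the sketch to a single row, scale it with the scaler fitted during training,
-    # and return the predicted digit; fall back to a hint message if the canvas is empty
-    # or the model has not been trained yet.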
- try:
- img = img.reshape(1, -1)
- except:
- return "Show Your Drawing Skills"
-
- try:
- img = scaler.transform(img)
- prediction = clf.predict(img)
- return prediction[0]
- except:
- return "Train the model first"
-
-
-def train_model(train_sample=5000, c=0.1, tol=0.1, solver="saga", penalty="l1"):
- X_train, X_test, y_train, y_test = train_test_split(
- X, y, train_size=train_sample, test_size=10000
- )
-
- penalty_dict = {
- "l2": ["lbfgs", "newton-cg", "newton-cholesky", "sag", "saga"],
- "l1": ["liblinear", "saga"],
- "elasticnet": ["saga"],
- }
-
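-    # scikit-learn only supports certain solver/penalty combinations for LogisticRegression,
-    # so reject unsupported pairs before fitting.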
- if solver not in penalty_dict[penalty]:
- return (
- "Solver not supported for the selected penalty",
- "Change the Combination",
- None,
- )
-
- global clf
- global scaler
- scaler = StandardScaler()
- X_train = scaler.fit_transform(X_train)
- X_test = scaler.transform(X_test)
-
- clf = LogisticRegression(C=c, penalty=penalty, solver=solver, tol=tol)
- clf.fit(X_train, y_train)
- sparsity = np.mean(clf.coef_ == 0) * 100
- score = clf.score(X_test, y_test)
-
- coef = clf.coef_.copy()
- scale = np.abs(coef).max()
-
- fig = px.imshow(
- np.array([coef[i].reshape(28, 28) for i in range(10)]),
- labels=dict(x="Pixel columns", y="Pixel rows"),
- title=f"Classification vector for each digit",
- range_color=[-scale, scale],
- facet_col=0,
- facet_col_wrap=5,
- facet_col_spacing=0.01,
- color_continuous_scale="RdBu",
- zmin=-scale,
- zmax=scale,
- )
-
- return score, sparsity, fig
-
-
-with gr.Blocks() as demo:
- gr.Markdown("# MNIST classification using multinomial logistic + L1 ")
- gr.Markdown(
- """This interactive demo is based on the [MNIST classification using multinomial logistic + L1](https://scikit-learn.org/stable/auto_examples/linear_model/plot_sparse_logistic_regression_mnist.html#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-mnist-py) example from the popular [scikit-learn](https://scikit-learn.org/stable/) library, which is a widely-used library for machine learning in Python. The primary goal of this demo is to showcase the use of logistic regression in classifying handwritten digits from the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset, which is a well-known benchmark dataset in computer vision. The dataset is loaded from [OpenML](https://www.openml.org/d/554), which is an open platform for machine learning research that provides easy access to a large number of datasets.
-The model is trained using the scikit-learn library, which provides a range of tools for machine learning, including classification, regression, and clustering algorithms, as well as tools for data preprocessing and model evaluation. The demo calculates the score and sparsity metrics using test data, which provides insight into the model's performance and sparsity, respectively. The score metric indicates how well the model is performing, while the sparsity metric provides information about the number of non-zero coefficients in the model, which can be useful for interpreting the model and reducing its complexity.
- """
- )
-
- with gr.Tab("Explore the Data"):
- gr.Markdown("## ")
- with gr.Row():
- digit = gr.Slider(0, 9, label="Select the Digit", value=5, step=1)
- count_per_digit = gr.Slider(
- 1, 10, label="Number of Images", value=10, step=1
- )
- binary_image = gr.Slider(0, 1, label="Binary Image", value=0, step=1)
-
- gen_btn = gr.Button("Show Me ")
- gen_btn.click(
- dataset_display,
- inputs=[digit, count_per_digit, binary_image],
- outputs=gr.Plot(),
- )
-
- with gr.Tab("Train Your Model"):
- gr.Markdown("# Play with the parameters to see how the model changes")
-
- gr.Markdown("## Solver and penalty")
- gr.Markdown(
- """
- Penalty | Solver
- -------|---------------
- l1 | saga
- l2 | saga
- """
- )
-
- with gr.Row():
- train_sample = gr.Slider(
- 1000, 60000, label="Train Sample", value=5000, step=1
- )
-
- c = gr.Slider(0.1, 1, label="C", value=0.1, step=0.1)
- tol = gr.Slider(
- 0.1, 1, label="Tolerance for stopping criteria.", value=0.1, step=0.1
- )
- max_iter = gr.Slider(100, 1000, label="Max Iter", value=100, step=1)
-
- penalty = gr.Dropdown(
- ["l1", "l2",], label="Penalty", value="l1"
- )
- solver = gr.Dropdown(
- ["saga"],
- label="Solver",
- value="saga",
- )
-
- train_btn = gr.Button("Train")
- train_btn.click(
- train_model,
- inputs=[train_sample, c, tol, solver, penalty],
- outputs=[
- gr.Textbox(label="Score"),
- gr.Textbox(label="Sparsity"),
- gr.Plot(),
- ],
- )
-
- with gr.Tab("Predict the Digit"):
- gr.Markdown("## Draw a digit and see the model's prediction")
- inputs = gr.Sketchpad(brush_radius=1.0)
- outputs = gr.Textbox(label="Predicted Label", lines=1)
-        sketch_btn = gr.Button("Classify the Sketch")
-        sketch_btn.click(predict, inputs, outputs)
-
-
-demo.launch()
diff --git a/spaces/spillwaysofyoursoul/janitorai/README.md b/spaces/spillwaysofyoursoul/janitorai/README.md
deleted file mode 100644
index 4fea533e0b6ebbeb8cee642851ea9e5b719915de..0000000000000000000000000000000000000000
--- a/spaces/spillwaysofyoursoul/janitorai/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Janitorai
-emoji: 🐠
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat/edit_dist.cpp b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat/edit_dist.cpp
deleted file mode 100644
index 9ffb60569d74d2868ed8113b7c787ef870e9da20..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/clib/libnat/edit_dist.cpp
+++ /dev/null
@@ -1,231 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <pybind11/detail/common.h>
-#include <pybind11/pybind11.h>
-#include <torch/torch.h> // @manual=//caffe2:torch_extension
-#include <algorithm>
-#include <cstdint>
-#include <iosfwd>
-#include <memory>
-#include <new>
-#include <string>
-#include <utility>
-#include <vector>
-using namespace ::std;
-
-vector<vector<uint32_t>> edit_distance2_with_dp(
-    vector<uint32_t>& x,
-    vector<uint32_t>& y) {
- uint32_t lx = x.size();
- uint32_t ly = y.size();
-  vector<vector<uint32_t>> d(lx + 1, vector<uint32_t>(ly + 1));
- for (uint32_t i = 0; i < lx + 1; i++) {
- d[i][0] = i;
- }
- for (uint32_t j = 0; j < ly + 1; j++) {
- d[0][j] = j;
- }
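-  // A substitution costs 2, i.e. the same as one deletion plus one insertion.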
- for (uint32_t i = 1; i < lx + 1; i++) {
- for (uint32_t j = 1; j < ly + 1; j++) {
- d[i][j] =
- min(min(d[i - 1][j], d[i][j - 1]) + 1,
- d[i - 1][j - 1] + 2 * (x.at(i - 1) == y.at(j - 1) ? 0 : 1));
- }
- }
- return d;
-}
-
-vector<vector<uint32_t>> edit_distance2_backtracking(
-    vector<vector<uint32_t>>& d,
-    vector<uint32_t>& x,
-    vector<uint32_t>& y,
-    uint32_t terminal_symbol) {
-  vector<uint32_t> seq;
-  vector<vector<uint32_t>> edit_seqs(x.size() + 2, vector<uint32_t>());
- /*
- edit_seqs:
- 0~x.size() cell is the insertion sequences
- last cell is the delete sequence
- */
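-  // Backtrack from the bottom-right corner of the DP table, pushing (op, token) pairs in
-  // reverse order: op 1 = insert, 2 = delete, 3 = keep.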
-
- if (x.size() == 0) {
- edit_seqs.at(0) = y;
- return edit_seqs;
- }
-
- uint32_t i = d.size() - 1;
- uint32_t j = d.at(0).size() - 1;
-
- while ((i >= 0) && (j >= 0)) {
- if ((i == 0) && (j == 0)) {
- break;
- }
-
- if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) {
- seq.push_back(1); // insert
- seq.push_back(y.at(j - 1));
- j--;
- } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) {
- seq.push_back(2); // delete
- seq.push_back(x.at(i - 1));
- i--;
- } else {
- seq.push_back(3); // keep
- seq.push_back(x.at(i - 1));
- i--;
- j--;
- }
- }
-
- uint32_t prev_op, op, s, word;
- prev_op = 0, s = 0;
- for (uint32_t k = 0; k < seq.size() / 2; k++) {
- op = seq.at(seq.size() - 2 * k - 2);
- word = seq.at(seq.size() - 2 * k - 1);
- if (prev_op != 1) {
- s++;
- }
- if (op == 1) // insert
- {
- edit_seqs.at(s - 1).push_back(word);
- } else if (op == 2) // delete
- {
- edit_seqs.at(x.size() + 1).push_back(1);
- } else {
- edit_seqs.at(x.size() + 1).push_back(0);
- }
-
- prev_op = op;
- }
-
- for (uint32_t k = 0; k < edit_seqs.size(); k++) {
- if (edit_seqs[k].size() == 0) {
- edit_seqs[k].push_back(terminal_symbol);
- }
- }
- return edit_seqs;
-}
-
-vector<vector<uint32_t>> edit_distance2_backtracking_with_delete(
-    vector<vector<uint32_t>>& d,
-    vector<uint32_t>& x,
-    vector<uint32_t>& y,
-    uint32_t terminal_symbol,
-    uint32_t deletion_symbol) {
-  vector<uint32_t> seq;
-  vector<vector<uint32_t>> edit_seqs(x.size() + 1, vector<uint32_t>());
- /*
- edit_seqs:
- 0~x.size() cell is the insertion sequences
- last cell is the delete sequence
- */
-
- if (x.size() == 0) {
- edit_seqs.at(0) = y;
- return edit_seqs;
- }
-
- uint32_t i = d.size() - 1;
- uint32_t j = d.at(0).size() - 1;
-
- while ((i >= 0) && (j >= 0)) {
- if ((i == 0) && (j == 0)) {
- break;
- }
-
- if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) {
- seq.push_back(1); // insert
- seq.push_back(y.at(j - 1));
- j--;
- } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) {
- seq.push_back(2); // delete
- seq.push_back(x.at(i - 1));
- i--;
- } else {
- seq.push_back(3); // keep
- seq.push_back(x.at(i - 1));
- i--;
- j--;
- }
- }
-
- uint32_t prev_op, op, s, word;
- prev_op = 0, s = 0;
- for (uint32_t k = 0; k < seq.size() / 2; k++) {
- op = seq.at(seq.size() - 2 * k - 2);
- word = seq.at(seq.size() - 2 * k - 1);
- if (prev_op != 1) {
- s++;
- }
- if (op == 1) // insert
- {
- edit_seqs.at(s - 1).push_back(word);
- } else if (op == 2) // delete
- {
- edit_seqs.at(s - 1).push_back(deletion_symbol);
- }
-
- prev_op = op;
- }
-
- for (uint32_t k = 0; k < edit_seqs.size(); k++) {
- if (edit_seqs.at(k).size() == 0) {
- edit_seqs.at(k).push_back(terminal_symbol);
- }
- }
- return edit_seqs;
-}
-
-vector<uint32_t> compute_ed2(
-    vector<vector<uint32_t>>& xs,
-    vector<vector<uint32_t>>& ys) {
-  vector<uint32_t> distances(xs.size());
-  for (uint32_t i = 0; i < xs.size(); i++) {
-    vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i));
- distances.at(i) = d.at(xs.at(i).size()).at(ys.at(i).size());
- }
- return distances;
-}
-
-vector<vector<vector<uint32_t>>> suggested_ed2_path(
-    vector<vector<uint32_t>>& xs,
-    vector<vector<uint32_t>>& ys,
-    uint32_t terminal_symbol) {
-  vector<vector<vector<uint32_t>>> seq(xs.size());
-  for (uint32_t i = 0; i < xs.size(); i++) {
-    vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i));
- seq.at(i) =
- edit_distance2_backtracking(d, xs.at(i), ys.at(i), terminal_symbol);
- }
- return seq;
-}
-
-vector<vector<vector<uint32_t>>> suggested_ed2_path_with_delete(
-    vector<vector<uint32_t>>& xs,
-    vector<vector<uint32_t>>& ys,
-    uint32_t terminal_symbol,
-    uint32_t deletion_symbol) {
-  vector<vector<vector<uint32_t>>> seq(xs.size());
-  for (uint32_t i = 0; i < xs.size(); i++) {
-    vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i));
- seq.at(i) = edit_distance2_backtracking_with_delete(
- d, xs.at(i), ys.at(i), terminal_symbol, deletion_symbol);
- }
- return seq;
-}
-
-PYBIND11_MODULE(libnat, m) {
- m.def("compute_ed2", &compute_ed2, "compute_ed2");
- m.def("suggested_ed2_path", &suggested_ed2_path, "suggested_ed2_path");
- m.def(
- "suggested_ed2_path_with_delete",
- &suggested_ed2_path_with_delete,
- "suggested_ed2_path_with_delete");
-}
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/cross_entropy.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/cross_entropy.py
deleted file mode 100644
index 6f33c24cb56e25f91595009af38e63784c2263a0..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/cross_entropy.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-import torch.nn.functional as F
-
-
-logger = logging.getLogger(__name__)
-
-
-def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"):
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- return F.nll_loss(
- lprobs,
- target,
- ignore_index=ignore_index,
- reduction=reduction,
- )
-
-
-try:
- import xentropy_cuda
- from apex.contrib import xentropy
-
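-    # Use apex's fused softmax cross-entropy CUDA kernel when it is installed; the
-    # ImportError branch below falls back to the plain PyTorch implementation.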
- def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
- if logits.device == torch.device("cpu"):
- return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
- else:
- if not getattr(cross_entropy, "_has_logged_once", False):
- logger.info("using fused cross entropy")
- cross_entropy._has_logged_once = True
-
- half_to_float = logits.dtype == torch.half
- losses = xentropy.SoftmaxCrossEntropyLoss.apply(
- logits,
- target,
- 0.0,
- ignore_index,
- half_to_float,
- )
- if reduction == "sum":
- return losses.sum()
- elif reduction == "mean":
- if ignore_index >= 0:
- return losses.sum() / target.ne(ignore_index).sum()
- else:
- return losses.mean()
- elif reduction == "none":
- return losses
- else:
- raise NotImplementedError
-
-
-except ImportError:
-
- def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
- return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/cpu_adam.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/cpu_adam.py
deleted file mode 100644
index b2f893aeda69ee1741e5e3af406ff4182b6f2416..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/cpu_adam.py
+++ /dev/null
@@ -1,206 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-from collections.abc import Collection
-from dataclasses import dataclass, field
-from typing import List
-
-import torch
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim import FairseqOptimizer, register_optimizer
-from omegaconf import II, DictConfig
-
-
-try:
- import deepspeed
- has_deepspeed = True
-except ImportError as e:
- has_deepspeed = False
-
-
-def _get_cpu_adam():
- try:
- from deepspeed.ops.op_builder import CPUAdamBuilder
- return CPUAdamBuilder().load()
- except ImportError:
- # fbcode
- from deepspeed.ops.adam import DeepSpeedCPUAdam as ds_opt_adam
- return ds_opt_adam
-
-@dataclass
-class FairseqCPUAdamConfig(FairseqDataclass):
- adam_betas: str = field(
- default="(0.9, 0.999)", metadata={"help": "betas for Adam optimizer"}
- )
- adam_eps: float = field(
- default=1e-8, metadata={"help": "epsilon for Adam optimizer"}
- )
- weight_decay: float = field(default=0.0, metadata={"help": "weight decay"})
- fp16_adam_stats: bool = field(
- default=False, metadata={"help": "use FP16 stats (with automatic scaling)"}
- )
- # TODO common vars below in parent
- lr: List[float] = II("optimization.lr")
-
-
-@register_optimizer("cpu_adam", dataclass=FairseqCPUAdamConfig)
-class FairseqCPUAdam(FairseqOptimizer):
- """Adam optimizer for fairseq, optimized for CPU tensors.
-
- Important note: this optimizer corresponds to the "AdamW" variant of
- Adam in its weight decay behavior. As such, it is most closely
- analogous to torch.optim.AdamW from PyTorch.
- """
-
- def __init__(self, cfg: DictConfig, params):
- super().__init__(cfg)
- self._optimizer = CPUAdam(params, **self.optimizer_config)
-
- @property
- def optimizer_config(self):
- """
- Return a kwarg dictionary that will be used to override optimizer
- args stored in checkpoints. This allows us to load a checkpoint and
- resume training using a different set of optimizer args, e.g., with a
- different learning rate.
- """
- return {
- "lr": self.cfg.lr[0]
- if isinstance(self.cfg.lr, Collection)
- else self.cfg.lr,
- "betas": eval(self.cfg.adam_betas),
- "eps": self.cfg.adam_eps,
- "weight_decay": self.cfg.weight_decay,
- "use_fp16_stats": self.cfg.fp16_adam_stats,
- }
-
-
-class CPUAdam(torch.optim.Optimizer):
-
- optimizer_id = 0
-
- def __init__(
- self,
- params,
- lr=1e-3,
- bias_correction=True,
- betas=(0.9, 0.999),
- eps=1e-8,
- weight_decay=0,
- use_fp16_stats=False,
- ):
- defaults = {
- "lr": lr,
- "bias_correction": bias_correction,
- "betas": betas,
- "eps": eps,
- "weight_decay": weight_decay,
- }
- super().__init__(params, defaults)
-
- self.use_fp16_stats = use_fp16_stats
- self.FLOAT16_MAX = 65504.0
-
- if not has_deepspeed:
- raise ImportError("Please install DeepSpeed: pip install deepspeed")
-
- self.opt_id = CPUAdam.optimizer_id
- CPUAdam.optimizer_id = CPUAdam.optimizer_id + 1
-
- self.ds_opt_adam = _get_cpu_adam()
- adamw_mode = True
- self.ds_opt_adam.create_adam(
- self.opt_id, lr, betas[0], betas[1], eps, weight_decay, adamw_mode
- )
-
- @property
- def supports_memory_efficient_fp16(self):
- return True
-
- @property
- def supports_flat_params(self):
- return True
-
- @torch.no_grad()
- def step(self, closure=None):
- loss = None
- if closure is not None:
- with torch.enable_grad():
- loss = closure()
-
- torch.cuda.synchronize()
-
- for group_id, group in enumerate(self.param_groups):
- for param_id, p in enumerate(group["params"]):
- if p.grad is None:
- continue
-
- state = self.state[p]
- if len(state) == 0:
- state["step"] = 0
- dtype = torch.float16 if self.use_fp16_stats else p.data.dtype
- # gradient momentums
- state["exp_avg"] = torch.zeros_like(
- p.data, dtype=dtype, device="cpu"
- )
- # gradient variances
- state["exp_avg_sq"] = torch.zeros_like(
- p.data, dtype=dtype, device="cpu"
- )
- if self.use_fp16_stats:
- assert torch.is_floating_point(p.data)
- state["exp_avg_scale"] = 1.0
- state["exp_avg_sq_scale"] = 1.0
-
- exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
-
- p_data_bak = p.data # backup of the original data pointer
-
- p.data = p.data.to(dtype=torch.float32, device="cpu")
- p.grad.data = p.grad.data.to(dtype=torch.float32, device="cpu")
-
- if self.use_fp16_stats:
- exp_avg = exp_avg.float() * state["exp_avg_scale"]
- exp_avg_sq = exp_avg_sq.float() * state["exp_avg_sq_scale"]
-
- state["step"] += 1
- beta1, beta2 = group["betas"]
-
- self.ds_opt_adam.adam_update(
- self.opt_id,
- state["step"],
- group["lr"],
- beta1,
- beta2,
- group["eps"],
- group["weight_decay"],
- group["bias_correction"],
- p.data,
- p.grad.data,
- exp_avg,
- exp_avg_sq,
- )
-
- if p_data_bak.data_ptr() != p.data.data_ptr():
- p_data_bak.copy_(p.data)
- p.data = p_data_bak
-
- if self.use_fp16_stats:
-
- def inf_norm(t):
- return torch.norm(t, float("inf"))
-
- # from github.com/openai/jukebox/blob/master/jukebox/utils/fp16.py
- state["exp_avg_scale"], state["exp_avg_sq_scale"] = (
- 1e-8 + inf_norm(exp_avg) / self.FLOAT16_MAX,
- 1e-8 + inf_norm(exp_avg_sq) / self.FLOAT16_MAX,
- )
- state["exp_avg"], state["exp_avg_sq"] = (
- (exp_avg / state["exp_avg_scale"]).half(),
- (exp_avg_sq / state["exp_avg_sq_scale"]).half(),
- )
-
- return loss
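-
-
-# Rough usage sketch, not part of the original file. It assumes DeepSpeed is
-# installed and CUDA is available (step() calls torch.cuda.synchronize()); the
-# layer size and learning rate below are arbitrary.
-#
-#   model = torch.nn.Linear(16, 16)
-#   optimizer = CPUAdam(model.parameters(), lr=1e-3)
-#   model(torch.randn(4, 16)).sum().backward()
-#   optimizer.step()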
diff --git a/spaces/starlit7/USPoliticsTTS/utils.py b/spaces/starlit7/USPoliticsTTS/utils.py
deleted file mode 100644
index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000
--- a/spaces/starlit7/USPoliticsTTS/utils.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.ERROR)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-    except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
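-
-
-if __name__ == "__main__":
-    # Hypothetical smoke test, not part of the original file: HParams wraps a
-    # nested dict with attribute-style access, as used by get_hparams_from_file.
-    hps = HParams(train={"batch_size": 16}, model_dir="./logs/demo")
-    assert hps.train.batch_size == hps["train"]["batch_size"] == 16
-    print(hps)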
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Akazukin To Mayoi No Mori English Patch Download.md b/spaces/stomexserde/gpt4-ui/Examples/Akazukin To Mayoi No Mori English Patch Download.md
deleted file mode 100644
index 3bcff8aa4e002d41596c1ee726504777114cfb2f..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Akazukin To Mayoi No Mori English Patch Download.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-How to Download Akazukin To Mayoi No Mori English Patch
-Akazukin To Mayoi No Mori is a popular Japanese otome game that was released in 2010. The game follows the story of a girl named Tiana who gets lost in a mysterious forest full of fairy tale characters and dangers. The game has multiple endings and romance options, as well as beautiful graphics and music.
-Akazukin To Mayoi No Mori English Patch Download
DOWNLOAD 🗸🗸🗸 https://urlgoal.com/2uI9fE
-However, the game is only available in Japanese, which makes it difficult for non-Japanese speakers to enjoy it. Fortunately, there is a fan-made English patch that can translate the game into English and make it playable on PC. In this article, we will show you how to download Akazukin To Mayoi No Mori English patch and install it on your computer.
-Step 1: Download the Game
-The first step is to download the original game from a reliable source. You can buy the game online from sites like Amazon Japan or CDJapan, or you can download it from a torrent site. However, be careful when downloading from torrent sites, as they may contain viruses or malware that can harm your computer. Make sure you scan the files with an antivirus program before opening them.
-Step 2: Download the English Patch
-The next step is to download the English patch from a trusted site. The English patch was created by a group of fans called Otome Adventures, and they have released it for free on their website[^1^]. You can also find it on SoundCloud[^2^] or on other sites that host fan-made patches. The patch is a zip file that contains the translation files and instructions on how to apply them.
-Step 3: Install the English Patch
-The final step is to install the English patch on your game. To do this, you need to extract the zip file to a folder on your computer. Then, you need to copy the files from the folder to the game directory where you installed Akazukin To Mayoi No Mori. You can find the game directory by right-clicking on the game icon and selecting "Open file location". After copying the files, you need to run the patcher.exe file that is included in the folder. This will automatically patch your game and make it ready to play in English.
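-If you prefer to script the copy step, a rough Python sketch is shown below; the folder names are only examples, so replace them with the actual locations of the extracted patch and your game installation.
-```python
-import shutil
-from pathlib import Path
-
-patch_dir = Path("AkazukinEnglishPatch")                  # extracted patch folder (example name)
-game_dir = Path(r"C:\Games\Akazukin to Mayoi no Mori")    # your game directory (example path)
-
-# Copy every file and subfolder from the patch into the game directory,
-# overwriting the originals where names collide.
-for item in patch_dir.iterdir():
-    target = game_dir / item.name
-    if item.is_dir():
-        shutil.copytree(item, target, dirs_exist_ok=True)
-    else:
-        shutil.copy2(item, target)
-
-print("Patch files copied; now run patcher.exe from the game directory.")
-```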
-Enjoy Akazukin To Mayoi No Mori in English
-Now that you have installed the English patch, you can enjoy Akazukin To Mayoi No Mori in English on your PC. You can launch the game by double-clicking on the game icon or by running the start.exe file in the game directory. You can also change some settings such as screen resolution, sound volume, and text speed by running the config.exe file in the game directory.
-
-Akazukin To Mayoi No Mori is a fun and engaging otome game that will take you on a magical adventure in a forest full of wonders and dangers. With the English patch, you can experience the game in a language that you understand and appreciate its story and characters better. We hope this article was helpful for you and that you enjoy playing Akazukin To Mayoi No Mori in English.
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Festin De Cuervos Mobi 11.md b/spaces/stomexserde/gpt4-ui/Examples/Festin De Cuervos Mobi 11.md
deleted file mode 100644
index dc79e8959bcdd6a7fac70fc0ecd9e76bf0014f86..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Festin De Cuervos Mobi 11.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Festin De Cuervos Mobi 11: A Review of the Fourth Book in the A Song of Ice and Fire Series
-
-Festin De Cuervos Mobi 11 is the Spanish edition of A Feast for Crows, the fourth book in the epic fantasy series A Song of Ice and Fire by George R. R. Martin. The book was published in 2007 by Ediciones Gigamesh, and it is divided into two volumes due to its length and complexity. The book follows the events of some of the main characters of the saga, while the rest of them are featured in the fifth book, Danza de Dragones, which takes place simultaneously.
-
-In this book, the War of the Five Kings is winding down, but new conflicts and intrigues arise in the aftermath. The young king Tommen Baratheon rules from King's Landing under the influence of his mother, Cersei Lannister, who faces growing opposition from the Tyrells and a prophecy that haunts her. Sansa Stark hides in the Vale under the protection of Petyr Baelish, who schemes to secure his power and her inheritance. In the Citadel of Oldtown, a novice named Samwell Tarly arrives to train as a maester and uncovers a mysterious plot involving a man who calls himself the Alchemist. Across the Narrow Sea, Arya Stark continues her training as an assassin in Braavos, while back in Westeros, Brienne of Tarth roams a land ravaged by war and violence, searching for the missing Sansa Stark so she can keep her oath.
-Festin De Cuervos Mobi 11
DOWNLOAD ○○○ https://urlgoal.com/2uI9h9
-
-Festin De Cuervos Mobi 11 is a masterfully written adventure, with a fast-paced and emotionally complex plot that explores themes such as power, religion, identity, loyalty, and justice. The book introduces new characters and locations, expands the history and lore of the world, and sets the stage for the upcoming events that will shake Westeros and beyond. Fans of fantasy, drama, and intrigue will enjoy this book and its rich and immersive storytelling.
-
-If you want to read Festin De Cuervos Mobi 11, you can find it online on SoundCloud[^1^] [^2^], or buy it from SHEVOLVE[^3^], Penguin Random House[^4^], or Book Depository[^7^]. You can also read more about the book and its author on Hielo y Fuego Wiki[^4^] or Wikipedia[^6^].
-
-Festin De Cuervos Mobi 11 is not only a great book for fantasy lovers, but also a valuable resource for learning Spanish. The book uses a rich and varied vocabulary, as well as different dialects and accents from the diverse regions and cultures of Westeros and Essos. Reading this book can help you improve your reading comprehension, grammar, and pronunciation skills, as well as expand your cultural knowledge and imagination.
-
-One of the best ways to learn Spanish with Festin De Cuervos Mobi 11 is to read it along with the original English version, A Feast for Crows. This way, you can compare the translations and see how the words and sentences are structured and expressed in both languages. You can also use a dictionary or an online translator to look up any unfamiliar words or phrases. Another way to learn Spanish with this book is to listen to the audiobook version, which is narrated by Carlos Ponce, a professional voice actor who gives life and personality to each character. Listening to the audiobook can help you improve your listening skills, as well as your pronunciation and intonation.
-
-Whether you read or listen to Festin De Cuervos Mobi 11, you will find yourself immersed in a captivating and thrilling story that will keep you hooked until the end. You will also learn new words and expressions, as well as discover new aspects of the Spanish language and culture. Festin De Cuervos Mobi 11 is more than just a book, it is an experience that you will not forget.
- 81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gattu Eng Sub Full Movie Download Betriebskosten Chemn TOP.md b/spaces/stomexserde/gpt4-ui/Examples/Gattu Eng Sub Full Movie Download Betriebskosten Chemn TOP.md
deleted file mode 100644
index e3d8f26f6d3beabf8634a0064bf3953b77357ce2..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Gattu Eng Sub Full Movie Download Betriebskosten Chemn TOP.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-How to Download Gattu (2011) with English Subtitles and Save on Betriebskosten Chemn
-Gattu (2011) is a Hindi drama film directed by Rajan Khosa and produced by Children's Film Society of India. It tells the story of a cheeky nine-year-old boy called Gattu who lives in Roorkee, a small town that shares his obsession with flying kites and defeating the ruler of the skies above Roorkee. The film won 10 awards and 3 nominations at various international film festivals and received positive reviews from critics and audiences alike.
-If you want to watch this charming and heartwarming film with English subtitles, you can download it from one of the best free movie download sites for 2023. These sites offer high-quality video and audio files that you can save on your device for offline viewing. You can also choose from different formats and resolutions depending on your preferences and device compatibility.
-Gattu Eng Sub Full Movie Download betriebskosten chemn
Download File ✔ https://urlgoal.com/2uI5BG
-One of the best free movie download sites for 2023 is Lifewire, which provides a list of 12 sites that offer free movies online. You can find Gattu (2011) on one of these sites, such as IMDb, which also gives you information about the film's cast, crew, synopsis, reviews, and more. To download Gattu (2011) with English subtitles from IMDb, follow these steps:
-
-- Go to https://www.imdb.com/title/tt2156785/ and click on the Play trailer button to watch a preview of the film.
-- If you like what you see, click on the Download options button below the trailer and select your preferred format and resolution.
-- You will be redirected to a final download page, where you can save the file to your device by clicking on the Download button.
-- To add English subtitles to the film, you will need to download a separate subtitle file from a site like https://subscene.com/subtitles/gattu. Choose the subtitle file that matches your video file and download it to your device.
-- Open the video file with a media player that supports subtitles, such as VLC, and drag and drop the subtitle file onto the video window. You can also adjust the subtitle settings such as font size, color, position, and synchronization.
-
-Congratulations! You have successfully downloaded Gattu (2011) with English subtitles and can enjoy watching it anytime you want.
-But wait, there's more! If you want to save money on your betriebskosten chemn, which are the operating costs of your rented apartment in Germany, you can use some tips and tricks to reduce your expenses. Betriebskosten chemn include costs for water, heating, electricity, garbage disposal, cleaning, gardening, insurance, taxes, and more. Here are some ways to lower your betriebskosten chemn:
-
-- Check your rental contract carefully and make sure you are not paying for any services or facilities that you don't use or need.
-- Compare your betriebskosten chemn with other tenants in your building or neighborhood and see if there are any discrepancies or errors.
-- Use energy-efficient appliances and devices and switch them off when not in use.
-- Use water-saving devices such as low-flow showerheads and faucets and fix any leaks or drips.
-- Recycle your waste properly and avoid generating unnecessary garbage.
-- Clean your apartment regularly and share the cleaning duties with other tenants if possible.
-- Maintain your garden or balcony if you have one and use natural or organic fertilizers and pesticides.
-- Review your insurance policies and see if you can get a better deal or coverage elsewhere.
-- Negotiate with your landlord or property manager if you think your betriebskosten chemn are too high.
- cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gm Bagi Bagi Char Lost Saga Pangkat Mayor42.md b/spaces/stomexserde/gpt4-ui/Examples/Gm Bagi Bagi Char Lost Saga Pangkat Mayor42.md
deleted file mode 100644
index 7e7f24182017e1cbe7270a23cab74ef8fc69106c..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Gm Bagi Bagi Char Lost Saga Pangkat Mayor42.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-GM Bagi-Bagi Char Lost Saga Pangkat Mayor42
-Are you a fan of the game Lost Saga? Would you like to get a character for free from the GM? If so, you are in luck, because the GM giveaway event for Lost Saga characters with the Mayor42 rank is currently running.
-Lost Saga is an online action-brawler game that offers more than 60 heroes which can be swapped mid-battle. The game has a variety of modes, such as deathmatch, team deathmatch, boss raid, and more, along with attractive graphics and exciting gameplay.
-gm bagi bagi char lost saga pangkat mayor42
Download ✒ https://urlgoal.com/2uI6qR
-To join the GM giveaway event for Mayor42-rank Lost Saga characters, you must meet the following requirements:
-
-- Have an active Lost Saga account registered on the official Lost Saga website[^4^] or on Papaya Play[^5^].
-- Follow the Facebook page GM bagi-bagi char Lost Saga asli[^1^], GM Bagi - bagi Char Lost Saga[^2^], or GM Bagi Bagi Char Lost Saga Asli No Tipu[^3^].
-- Send a screenshot of your Lost Saga account showing your rank and the heroes you own to one of those Facebook pages.
-- Comment on the Mayor42 giveaway event post with the reason why you want to receive a free character from the GM.
-- Wait for the winners to be announced on April 30, 2023.
-
-If you are chosen as a winner, you will receive a Mayor42-rank Lost Saga character for free. The character comes with rare and powerful heroes such as Dark Shaman, Cyber Medic, Captain Hook, and others, as well as various items and gear that will boost your in-game performance.
-Don't miss this golden opportunity to get a free Mayor42-rank Lost Saga character from the GM. Join the event now and show off your skills in Lost Saga. Have fun and good luck!
-Once you have received your free Mayor42-rank character from the GM, you will naturally want to try out its heroes right away. Before you do, it is worth knowing a few tips and tricks for playing Lost Saga better and having more fun.
-Here are some tips and tricks you can try:
-
-
-- Play the training mode to learn the basics of the game, such as movement, attacking, defending, and combos. Training mode also rewards you with gold and items if you earn three stars on every level[^1^].
-- Switch heroes to suit the situation and your strategy. Lost Saga lets you bring four heroes into battle at once and swap between them at any time by pressing F1-F4. Each hero has its own strengths and weaknesses, so pick heroes that match your play style.
-- Make use of the gear you own to boost your heroes' abilities. Gear is extra equipment worn over a hero's costume and can grant effects such as additional damage, defense, speed, or special skills. Gear can be obtained from the shop, events, or gacha.
-- Play together with your friends to build a solid, coordinated team. Lost Saga relies on teamwork to win. You can talk to your teammates through the in-game chat or voice chat, and you can join a guild for benefits such as bonus EXP, gold, or items.
-- Don't forget to have fun and to respect opponents and teammates alike. Lost Saga is meant to entertain you and sharpen your skills. Don't get angry or give up when you lose; treat it as motivation to learn and practice harder. Also avoid behavior that harms or annoys other players, such as cheating, bug abuse, toxicity, or going AFK.
-
-Those are a few tips and tricks for playing Lost Saga better and having more fun. We hope they help, and enjoy trying them out!
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/HOT Winning Eleven 3 Final Version Iso.md b/spaces/stomexserde/gpt4-ui/Examples/HOT Winning Eleven 3 Final Version Iso.md
deleted file mode 100644
index 5676eb58756c21d926404ba60946564867ba0898..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/HOT Winning Eleven 3 Final Version Iso.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-##HOT## Winning Eleven 3 Final Version Iso: How to Play the Classic Soccer Game on Your PC
-Winning Eleven 3: Final Version is a soccer video game that was released for the PlayStation (PSX) console in 1998. It is the third installment of the Winning Eleven series, which is also known as Pro Evolution Soccer or PES in some regions. The game features 32 national teams and 22 club teams from around the world, as well as various game modes such as exhibition, league, cup, and penalty shootout.
-If you want to relive the nostalgia of playing this classic game on your PC, you will need two things: an emulator and a rom. An emulator is a software that can simulate the PSX operating system and hardware on your computer, while a rom is a file that contains the data of the game itself. Here are the steps to play Winning Eleven 3: Final Version Iso on your PC:
-HOT Winning Eleven 3 Final Version Iso
DOWNLOAD >>> https://urlgoal.com/2uIaRM
-
-- Download a reliable and bug-free emulator for PSX. We recommend RetroArch, which is open source, fast, and frequently updated. You can download it from https://retroarch.com/. Extract the downloaded .zip file to a location of your choice, such as your Desktop.
-- Download the Winning Eleven 3: Final Version Iso rom from a trusted source. We suggest RetroEmulators.com, which has a large collection of PSX roms. You can download it from https://retroemulators.com/roms/winning-eleven-3-final-version-psx. The file size is 0.6MB and it comes in BIN/CUE format.
-- Launch RetroArch and click on Load Core. Select Sony - PlayStation (Beetle PSX HW) as the core to use. This is the emulator that will run the game.
-- Click on Load Content and navigate to the folder where you saved the Winning Eleven 3: Final Version Iso rom. Select the .cue file and click on Run.
-- The game will now start on the emulator and you can play it using your keyboard or a controller. You can adjust the settings such as video resolution, audio volume, input mapping, etc. by pressing F1 to access the menu.
-
-Enjoy playing Winning Eleven 3: Final Version Iso on your PC!
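-
-If you would rather launch the game from a script than click through the menus, RetroArch also accepts a core and a content file on its command line. The sketch below assumes a Windows install with the Beetle PSX HW core; every path and the core filename are examples that you will need to adjust to your own setup.
-```python
-import subprocess
-from pathlib import Path
-
-# Example paths; point these at your own RetroArch install, core, and .cue file.
-retroarch = Path(r"C:\RetroArch\retroarch.exe")
-core = Path(r"C:\RetroArch\cores\mednafen_psx_hw_libretro.dll")  # Beetle PSX HW
-rom = Path(r"C:\Games\WE3\Winning Eleven 3 - Final Version.cue")
-
-# Equivalent to Load Core + Load Content in the RetroArch menu.
-subprocess.run([str(retroarch), "-L", str(core), str(rom)], check=True)
-```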
-
-Winning Eleven 3: Final Version Iso is widely regarded as one of the best soccer games ever made for the PSX. It has realistic gameplay that captures the essence of the sport, with smooth animations, responsive controls, and dynamic physics. The game also boasts a variety of options and features that allow you to customize your experience, such as changing the camera angle, difficulty level, weather conditions, stadium, and commentary language.
-The game has a loyal fan base that still plays it today, thanks to the efforts of modders and hackers who have updated it with new patches and enhancements. For example, you can download a new English patch that adds English menus, subtitles, and names to the game and unlocks all the hidden teams and players. You can also find patches that update the rosters, kits, logos, and faces of the teams and players to match the current season.
-
-If you are looking for a challenging and fun soccer game that will test your skills and strategy, you should definitely try Winning Eleven 3: Final Version Iso. It is a classic game that has stood the test of time and still delivers a satisfying and enjoyable experience. Whether you play solo or with friends, you will have a blast playing this game on your PC.
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/sub314xxl/MetaGPT/metagpt/provider/metagpt_llm_api.py b/spaces/sub314xxl/MetaGPT/metagpt/provider/metagpt_llm_api.py
deleted file mode 100644
index c27e7132da336336c608d79d606111fff7c75538..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/metagpt/provider/metagpt_llm_api.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/30
-@Author : mashenquan
-@File : metagpt_llm_api.py
-@Desc : MetaGPT LLM related APIs
-"""
-
-import openai
-
-from metagpt.config import CONFIG
-from metagpt.provider import OpenAIGPTAPI
-from metagpt.provider.openai_api import RateLimiter
-
-
-class MetaGPTLLMAPI(OpenAIGPTAPI):
- """MetaGPT LLM api"""
-
- def __init__(self):
- self.__init_openai()
- self.llm = openai
- self.model = CONFIG.METAGPT_API_MODEL
- self.auto_max_tokens = False
- RateLimiter.__init__(self, rpm=self.rpm)
-
- def __init_openai(self, *args, **kwargs):
- openai.api_key = CONFIG.METAGPT_API_KEY
- if CONFIG.METAGPT_API_BASE:
- openai.api_base = CONFIG.METAGPT_API_BASE
- if CONFIG.METAGPT_API_TYPE:
- openai.api_type = CONFIG.METAGPT_API_TYPE
- openai.api_version = CONFIG.METAGPT_API_VERSION
- self.rpm = int(CONFIG.RPM) if CONFIG.RPM else 10
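-
-
-# Rough usage sketch, not part of the original file. It assumes the MetaGPT
-# config provides METAGPT_API_KEY and METAGPT_API_MODEL (and optionally
-# METAGPT_API_BASE / METAGPT_API_TYPE), and that completions go through the
-# asynchronous helper inherited from the OpenAI provider.
-#
-#   llm = MetaGPTLLMAPI()
-#   # reply = await llm.aask("Summarize the design in one sentence.")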
diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/builders.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/builders.py
deleted file mode 100644
index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/builders.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-All the functions to build the relevant models and modules
-from the Hydra config.
-"""
-
-import typing as tp
-import warnings
-
-import audiocraft
-import omegaconf
-import torch
-
-from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa
-from .lm import LMModel
-from ..modules.codebooks_patterns import (
- CodebooksPatternProvider,
- DelayedPatternProvider,
- ParallelPatternProvider,
- UnrolledPatternProvider,
- VALLEPattern,
- MusicLMPattern,
-)
-from ..modules.conditioners import (
- BaseConditioner,
- ConditioningProvider,
- LUTConditioner,
- T5Conditioner,
- ConditionFuser,
- ChromaStemConditioner,
-)
-from .. import quantization as qt
-from ..utils.utils import dict_from_config
-
-
-def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
- klass = {
- 'no_quant': qt.DummyQuantizer,
- 'rvq': qt.ResidualVectorQuantizer
- }[quantizer]
- kwargs = dict_from_config(getattr(cfg, quantizer))
- if quantizer != 'no_quant':
- kwargs['dimension'] = dimension
- return klass(**kwargs)
-
-
-def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
- if encoder_name == 'seanet':
- kwargs = dict_from_config(getattr(cfg, 'seanet'))
- encoder_override_kwargs = kwargs.pop('encoder')
- decoder_override_kwargs = kwargs.pop('decoder')
- encoder_kwargs = {**kwargs, **encoder_override_kwargs}
- decoder_kwargs = {**kwargs, **decoder_override_kwargs}
- encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
- return encoder, decoder
- else:
- raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
-def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
- """Instantiate a compression model.
- """
- if cfg.compression_model == 'encodec':
- kwargs = dict_from_config(getattr(cfg, 'encodec'))
- encoder_name = kwargs.pop('autoencoder')
- quantizer_name = kwargs.pop('quantizer')
- encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
- quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
- frame_rate = kwargs['sample_rate'] // encoder.hop_length
- renormalize = kwargs.pop('renormalize', None)
- renorm = kwargs.pop('renorm')
- if renormalize is None:
- renormalize = renorm is not None
- warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.")
- return EncodecModel(encoder, decoder, quantizer,
- frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
- else:
- raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
-def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
- """Instantiate a transformer LM.
- """
- if cfg.lm_model == 'transformer_lm':
- kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
- n_q = kwargs['n_q']
- q_modeling = kwargs.pop('q_modeling', None)
- codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
- attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
- cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
- cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"]
- fuser = get_condition_fuser(cfg)
- condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
-        if len(fuser.fuse2cond['cross']) > 0:  # enforce cross-att programmatically
- kwargs['cross_attention'] = True
- if codebooks_pattern_cfg.modeling is None:
- assert q_modeling is not None, \
- 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling'
- codebooks_pattern_cfg = omegaconf.OmegaConf.create(
- {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
- )
- pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
- return LMModel(
- pattern_provider=pattern_provider,
- condition_provider=condition_provider,
- fuser=fuser,
- cfg_dropout=cfg_prob,
- cfg_coef=cfg_coef,
- attribute_dropout=attribute_dropout,
- dtype=getattr(torch, cfg.dtype),
- device=cfg.device,
- **kwargs
- ).to(cfg.device)
- else:
- raise KeyError(f'Unexpected LM model {cfg.lm_model}')
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
- """Instantiate a conditioning model.
- """
- device = cfg.device
- duration = cfg.dataset.segment_duration
- cfg = getattr(cfg, "conditioners")
- cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg
- conditioners: tp.Dict[str, BaseConditioner] = {}
- with omegaconf.open_dict(cfg):
- condition_provider_args = cfg.pop('args', {})
- for cond, cond_cfg in cfg.items():
- model_type = cond_cfg["model"]
- model_args = cond_cfg[model_type]
- if model_type == "t5":
- conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
- elif model_type == "lut":
- conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
- elif model_type == "chroma_stem":
- model_args.pop('cache_path', None)
- conditioners[str(cond)] = ChromaStemConditioner(
- output_dim=output_dim,
- duration=duration,
- device=device,
- **model_args
- )
- else:
- raise ValueError(f"unrecognized conditioning model: {model_type}")
- conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
- return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
- """Instantiate a condition fuser object.
- """
- fuser_cfg = getattr(cfg, "fuser")
- fuser_methods = ["sum", "cross", "prepend", "input_interpolate"]
- fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
- kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
- fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
- return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
- """Instantiate a codebooks pattern provider object.
- """
- pattern_providers = {
- 'parallel': ParallelPatternProvider,
- 'delay': DelayedPatternProvider,
- 'unroll': UnrolledPatternProvider,
- 'valle': VALLEPattern,
- 'musiclm': MusicLMPattern,
- }
- name = cfg.modeling
- kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
- klass = pattern_providers[name]
- return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu'):
- """Instantiate a debug compression model to be used for unit tests.
- """
- seanet_kwargs = {
- 'n_filters': 4,
- 'n_residual_layers': 1,
- 'dimension': 32,
- 'ratios': [10, 8, 16] # 25 Hz at 32kHz
- }
- encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
- quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
- init_x = torch.randn(8, 32, 128)
- quantizer(init_x, 1) # initialize kmeans etc.
- compression_model = EncodecModel(
- encoder, decoder, quantizer,
- frame_rate=25, sample_rate=32000, channels=1).to(device)
- return compression_model.eval()
-
-
-def get_debug_lm_model(device='cpu'):
- """Instantiate a debug LM to be used for unit tests.
- """
- pattern = DelayedPatternProvider(n_q=4)
- dim = 16
- providers = {
- 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
- }
- condition_provider = ConditioningProvider(providers)
- fuser = ConditionFuser(
- {'cross': ['description'], 'prepend': [],
- 'sum': [], 'input_interpolate': []})
- lm = LMModel(
- pattern, condition_provider, fuser,
- n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
- cross_attention=True, causal=True)
- return lm.to(device).eval()
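-
-
-# Illustrative sketch, not part of the original file: the two debug factories
-# above build tiny models intended for unit tests and need no Hydra config.
-#
-#   compression_model = get_debug_compression_model()
-#   lm = get_debug_lm_model()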
diff --git a/spaces/sub314xxl/MusicGen/audiocraft/modules/activations.py b/spaces/sub314xxl/MusicGen/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
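-
-
-if __name__ == "__main__":
-    # Hypothetical smoke test, not part of the original file: map a string to a
-    # GLU variant and apply it; each GLU halves the size of the split dimension.
-    act = get_activation_fn("swiglu")
-    out = act(torch.randn(4, 8))
-    print(out.shape)  # torch.Size([4, 4])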
diff --git a/spaces/sunwaee/Face-Mask-Detection/app.py b/spaces/sunwaee/Face-Mask-Detection/app.py
deleted file mode 100644
index 8d040e45a43a87de9b36db502ae6dceec862bfef..0000000000000000000000000000000000000000
--- a/spaces/sunwaee/Face-Mask-Detection/app.py
+++ /dev/null
@@ -1,242 +0,0 @@
-import csv
-import os.path
-import time
-
-import cv2
-import gdown
-import numpy as np
-import streamlit as st
-import torch
-from PIL import Image
-
-
-def load_classes(csv_reader):
- """
- Load classes from csv.
-
- :param csv_reader: csv
- :return:
- """
- result = {}
-
- for line, row in enumerate(csv_reader):
- line += 1
-
- try:
- class_name, class_id = row
- except ValueError:
- raise (ValueError('line {}: format should be \'class_name,class_id\''.format(line)))
- class_id = int(class_id)
-
- if class_name in result:
- raise ValueError('line {}: duplicate class name: \'{}\''.format(line, class_name))
- result[class_name] = class_id
- return result
-
-
-@st.cache
-def draw_caption(image, box, caption):
- """
- Draw caption and bbox on image.
-
- :param image: image
- :param box: bounding box
- :param caption: caption
- :return:
- """
-
- b = np.array(box).astype(int)
- cv2.putText(image, caption, (b[0], b[1] - 10), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 2)
- cv2.putText(image, caption, (b[0], b[1] - 10), cv2.FONT_HERSHEY_PLAIN, 1, (255, 255, 255), 1)
-
-
-@st.cache
-def load_labels():
- """
- Loads labels.
-
- :return:
- """
-
- with open("data/labels.csv", 'r') as f:
- classes = load_classes(csv.reader(f, delimiter=','))
-
- labels = {}
- for key, value in classes.items():
- labels[value] = key
-
- return labels
-
-
-def download_models(ids):
- """
- Download all models.
-
- :param ids: name and links of models
- :return:
- """
-
- # Download model from drive if not stored locally
- with st.spinner('Downloading models, this may take a minute...'):
- for key in ids:
- if not os.path.isfile(f"model/{key}.pt"):
- url = f"https://drive.google.com/uc?id={ids[key]}"
- gdown.download(url=url, output=f"model/{key}.pt")
-
-
-@st.cache(suppress_st_warning=True)
-def load_model(model_path, prefix: str = 'model/'):
- """
- Load model.
-
- :param model_path: path to inference model
- :param prefix: model prefix if needed
- :return:
- """
-
- # Load model
- if torch.cuda.is_available():
- model = torch.load(f"{prefix}{model_path}.pt").to('cuda')
- else:
- model = torch.load(f"{prefix}{model_path}.pt", map_location=torch.device('cpu'))
- model = model.module.cpu()
- model.training = False
- model.eval()
-
- return model
-
-
-def process_img(model, image, labels, caption: bool = True, thickness=2):
- """
- Process img given a model.
-
- :param caption: whether to use captions or not
- :param image: image to process
- :param model: inference model
- :param labels: given labels
- :param thickness: thickness of bboxes
- :return:
- """
-
- image_orig = image.copy()
- rows, cols, cns = image.shape
-
- smallest_side = min(rows, cols)
-
- # Rescale the image
- min_side = 608
- max_side = 1024
- scale = min_side / smallest_side
-
- # Check if the largest side is now greater than max_side
- largest_side = max(rows, cols)
-
- if largest_side * scale > max_side:
- scale = max_side / largest_side
-
- # Resize the image with the computed scale
- image = cv2.resize(image, (int(round(cols * scale)), int(round((rows * scale)))))
- rows, cols, cns = image.shape
-
- pad_w = 32 - rows % 32
- pad_h = 32 - cols % 32
-
- new_image = np.zeros((rows + pad_w, cols + pad_h, cns)).astype(np.float32)
- new_image[:rows, :cols, :] = image.astype(np.float32)
- image = new_image.astype(np.float32)
- image /= 255
- image -= [0.485, 0.456, 0.406]
- image /= [0.229, 0.224, 0.225]
- image = np.expand_dims(image, 0)
- image = np.transpose(image, (0, 3, 1, 2))
-
- with torch.no_grad():
-
- image = torch.from_numpy(image)
- if torch.cuda.is_available():
- image = image.cuda()
-
-        start_time = time.time()
-        scores, classification, transformed_anchors = model(image.float())
-        elapsed_time = time.time() - start_time
- idxs = np.where(scores.cpu() > 0.5)
-
- for j in range(idxs[0].shape[0]):
- bbox = transformed_anchors[idxs[0][j], :]
-
- x1 = int(bbox[0] / scale)
- y1 = int(bbox[1] / scale)
- x2 = int(bbox[2] / scale)
- y2 = int(bbox[3] / scale)
- label_name = labels[int(classification[idxs[0][j]])]
- colors = {
- 'with_mask': (0, 255, 0),
- 'without_mask': (255, 0, 0),
- 'mask_weared_incorrect': (190, 100, 20)
- }
- cap = '{}'.format(label_name) if caption else ''
- draw_caption(image_orig, (x1, y1, x2, y2), cap)
- cv2.rectangle(image_orig, (x1, y1), (x2, y2), color=colors[label_name],
- thickness=int(1 * (smallest_side / 100)))
-
- return image_orig
-
-
-# Page config
-st.set_page_config(layout="centered")
-st.title("Face Mask Detection")
-st.write('Face Mask Detection on images, videos and webcam feed with ResNet[18~152] models. ')
-st.markdown(f"__Labels:__ with_mask, without_mask, mask_weared_incorrect")
-
-# Models drive ids
-ids = {
- 'resnet50_20': st.secrets['resnet50'],
- 'resnet152_20': st.secrets['resnet152'],
-}
-
-# Download all models from drive
-download_models(ids)
-
-# Split page into columns
-left, right = st.columns([5, 3])
-
-# Model selection
-labels = load_labels()
-model_path = right.selectbox('Choose a model', options=[k for k in ids], index=0)
-model = load_model(model_path=model_path) if model_path != '' else None
-
-# Display example selection
-index = left.number_input('', min_value=0, max_value=852, value=495, help='Choose an image. ')
-
-# Uploader
-uploaded = st.file_uploader("Try it out with your own image!", type=['.jpg', '.png', '.jfif'])
-
-if uploaded is not None:
- # Convert file to image
- image = Image.open(uploaded)
- image = np.array(image)
-else:
- # Get corresponding image and transform it
- image = cv2.imread(f'data/validation/image/maksssksksss{str(index)}.jpg')
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
-# Process img
-with st.spinner('Please wait while the image is being processed... This may take a while. '):
- image = process_img(model, image, labels, caption=False)
-
-left.image(cv2.resize(image, (450, 300)))
-
-# Write labels dict and device on right
-right.write({
- 'green': 'with_mask',
- 'orange': 'mask_weared_incorrect',
- 'red': 'without_mask'
-})
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-right.write(device)
-
-captions = [image for image in os.listdir('data/examples/')]
-images = [Image.open(f'data/examples/{image}') for image in os.listdir('data/examples/')]
-
-# Display examples
-st.image(images, width=350)
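-
-
-# Note, not part of the original file: this script builds the Streamlit page at
-# import time, so it is meant to be launched with the Streamlit CLI rather than
-# plain Python, e.g.
-#
-#   streamlit run app.py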
diff --git a/spaces/supertori/files/stable-diffusion-webui/javascript/hires_fix.js b/spaces/supertori/files/stable-diffusion-webui/javascript/hires_fix.js
deleted file mode 100644
index ced83e7c05c8bb5199f6f08c4bee47ee1277d4f3..0000000000000000000000000000000000000000
--- a/spaces/supertori/files/stable-diffusion-webui/javascript/hires_fix.js
+++ /dev/null
@@ -1,22 +0,0 @@
-
-function setInactive(elem, inactive){
- if(inactive){
- elem.classList.add('inactive')
- } else{
- elem.classList.remove('inactive')
- }
-}
-
-function onCalcResolutionHires(enable, width, height, hr_scale, hr_resize_x, hr_resize_y){
- hrUpscaleBy = gradioApp().getElementById('txt2img_hr_scale')
- hrResizeX = gradioApp().getElementById('txt2img_hr_resize_x')
- hrResizeY = gradioApp().getElementById('txt2img_hr_resize_y')
-
- gradioApp().getElementById('txt2img_hires_fix_row2').style.display = opts.use_old_hires_fix_width_height ? "none" : ""
-
- setInactive(hrUpscaleBy, opts.use_old_hires_fix_width_height || hr_resize_x > 0 || hr_resize_y > 0)
- setInactive(hrResizeX, opts.use_old_hires_fix_width_height || hr_resize_x == 0)
- setInactive(hrResizeY, opts.use_old_hires_fix_width_height || hr_resize_y == 0)
-
- return [enable, width, height, hr_scale, hr_resize_x, hr_resize_y]
-}
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (orange For Oracle 6.0 Keygen) _HOT_.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (orange For Oracle 6.0 Keygen) _HOT_.md
deleted file mode 100644
index 5ef848a5e6e964fc8fbf0174c5ef70ce777fb102..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (orange For Oracle 6.0 Keygen) _HOT_.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
- 10.5 patch game i need that patch to play gta san andreas online,
build your own side-scrolling platformer game in this collection of 10 unique.
5.22 crack+free tools download. gamespc.com..
. ive had this working on previous win7 installs, but ive been using a new hdd. baraha 10.5 registration key.rar.how to install.4.1+keygen.10.5.1.
how to install & crack all data 10.53 software, alldata 10.53 setup instructions, from vipprogrammer. powerbuilder 10.5 crack torrent download list main page abuse torrent search other.4+.
10.5 patch game i need that patch to play gta san andreas online, for more working, fast, safe, effective and easy to use, visit us today.
wii u games baraha 10.5 crack, a set of sony psp titles will be featured and are.
how to install: open it and then follow the instructions carefully. (you can also read more on the.5 crack torrent download list main page abuse torrent search.
may 26, 2017s codetec of cairo baraha 10.rar
baraha 10. 10.
short description::how to install.rar
- beekeeping in my veins jesus in my heart bee lover shirt
dastaan-e-tarannum full movie hd 1080p blu-ray online
xmlbar video downloader for mac
character generator 2019 keygen kickass to
business result intermediate student book free downloadl
code pre gfx.ff mw2 dir file license keybfdcml
pour vous ajustement serre
healthy eating habits essay on facebook
gujrathi village girls fucking
harry potter fanfiction porn
-HD Online Player (orange for oracle 6.0 keygen)
Download Zip ✫ https://cinurl.com/2uEYYq
- *********************************************************************
the "day of the doctor" dvd is now available for purchase. more information can be found here:
/> ***************************
the "day of the doctor" 2 cd collector's edition is now available for purchase. the cd will feature the original music from the tv special as well as the new music. more information can be found here:
/> ***************************
new in the "day of the doctor" short film collection:
"bad wolf"
available to purchase from the big finish shop.
***************************
to purchase the audio book of the day of the doctor online, visit:
/> ***************************
to purchase the day of the doctor comic, visit:
/> ***************************
to purchase the day of the doctor novel, visit:
/> ***************************
to purchase the day of the doctor dvd, visit:
/> ***************************
to purchase the day of the doctor box set, visit:
> hd online player (orange for oracle 6.0 keygen)
*********************************************************************
the "day of the doctor" tv special is now available for purchase. more information can be found here:
/> *********************************************************************
the "day of the doctor" 2 cd collector's edition is now available for purchase. more information can be found here:
> hd online player (orange for oracle 6.0 keygen)
************************gaige************************
1. made deathtraps damage scale like an ai, making it have proper damage throughout the game (-29% at level 5, increase of +4% at level 30, +110% at level 50, +156% at level 72, +173% at op8).
1.1 made deathtrap get a bonus to his damage in co-op. the max bonus (4 players) should be about x1.9 in normal, and about x1.3 in tvhm/uvhm.2 the multiplicative bonus on deathtraps roid damage from sharing is caring now depends on level: instead of a constant value of x1.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/bbox.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/bbox.py
deleted file mode 100644
index 0c4d58b6c91f652933974f519acd3403a833e906..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/bbox.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps'])
-
-
-def bbox_overlaps(bboxes1, bboxes2, mode='iou', aligned=False, offset=0):
- """Calculate overlap between two set of bboxes.
-
- If ``aligned`` is ``False``, then calculate the ious between each bbox
- of bboxes1 and bboxes2, otherwise the ious between each aligned pair of
- bboxes1 and bboxes2.
-
- Args:
-        bboxes1 (Tensor): shape (m, 4) in <x1, y1, x2, y2> format or empty.
-        bboxes2 (Tensor): shape (n, 4) in <x1, y1, x2, y2> format or empty.
- If aligned is ``True``, then m and n must be equal.
- mode (str): "iou" (intersection over union) or iof (intersection over
- foreground).
-
- Returns:
- ious(Tensor): shape (m, n) if aligned == False else shape (m, 1)
-
- Example:
- >>> bboxes1 = torch.FloatTensor([
- >>> [0, 0, 10, 10],
- >>> [10, 10, 20, 20],
- >>> [32, 32, 38, 42],
- >>> ])
- >>> bboxes2 = torch.FloatTensor([
- >>> [0, 0, 10, 20],
- >>> [0, 10, 10, 19],
- >>> [10, 10, 20, 20],
- >>> ])
- >>> bbox_overlaps(bboxes1, bboxes2)
- tensor([[0.5000, 0.0000, 0.0000],
- [0.0000, 0.0000, 1.0000],
- [0.0000, 0.0000, 0.0000]])
-
- Example:
- >>> empty = torch.FloatTensor([])
- >>> nonempty = torch.FloatTensor([
- >>> [0, 0, 10, 9],
- >>> ])
- >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
- >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
- >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
- """
-
- mode_dict = {'iou': 0, 'iof': 1}
- assert mode in mode_dict.keys()
- mode_flag = mode_dict[mode]
- # Either the boxes are empty or the length of boxes' last dimension is 4
- assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0)
- assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0)
- assert offset == 1 or offset == 0
-
- rows = bboxes1.size(0)
- cols = bboxes2.size(0)
- if aligned:
- assert rows == cols
-
- if rows * cols == 0:
- return bboxes1.new(rows, 1) if aligned else bboxes1.new(rows, cols)
-
- if aligned:
- ious = bboxes1.new_zeros(rows)
- else:
- ious = bboxes1.new_zeros((rows, cols))
- ext_module.bbox_overlaps(
- bboxes1, bboxes2, ious, mode=mode_flag, aligned=aligned, offset=offset)
- return ious
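
Note: the deleted `bbox.py` above delegates the actual overlap computation to the compiled `_ext.bbox_overlaps` kernel, so the math itself is not visible in the diff. As a reference only, a minimal pure-PyTorch sketch of the unaligned `mode='iou'`, `offset=0` case for non-empty inputs (an illustration under those assumptions, not the mmcv kernel) could look like this:

```python
import torch

def bbox_overlaps_py(bboxes1: torch.Tensor, bboxes2: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU for boxes in <x1, y1, x2, y2> format; returns an (m, n) matrix."""
    area1 = (bboxes1[:, 2] - bboxes1[:, 0]) * (bboxes1[:, 3] - bboxes1[:, 1])
    area2 = (bboxes2[:, 2] - bboxes2[:, 0]) * (bboxes2[:, 3] - bboxes2[:, 1])
    lt = torch.max(bboxes1[:, None, :2], bboxes2[None, :, :2])  # top-left of each intersection
    rb = torch.min(bboxes1[:, None, 2:], bboxes2[None, :, 2:])  # bottom-right of each intersection
    wh = (rb - lt).clamp(min=0)                                 # zero where boxes do not overlap
    inter = wh[..., 0] * wh[..., 1]
    union = area1[:, None] + area2[None, :] - inter
    return inter / union.clamp(min=1e-6)

# Reproduces the first docstring example: IoU of [0, 0, 10, 10] vs [0, 0, 10, 20] is 0.5.
```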
diff --git a/spaces/t13718236382/bingoGPT4/src/components/chat-message.tsx b/spaces/t13718236382/bingoGPT4/src/components/chat-message.tsx
deleted file mode 100644
index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000
--- a/spaces/t13718236382/bingoGPT4/src/components/chat-message.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-import remarkGfm from 'remark-gfm'
-import remarkMath from 'remark-math'
-import supersub from 'remark-supersub'
-import remarkBreaks from 'remark-breaks'
-import { cn } from '@/lib/utils'
-import { CodeBlock } from '@/components/ui/codeblock'
-import { MemoizedReactMarkdown } from '@/components/markdown'
-import { LearnMore } from './learn-more'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { useEffect } from 'react'
-import { TurnCounter } from './turn-counter'
-
-export interface ChatMessageProps {
- message: ChatMessageModel
-}
-
-export function ChatMessage({ message, ...props }: ChatMessageProps) {
- useEffect(() => {
- if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) {
- window.scrollBy(0, 200)
- }
- }, [message.text])
-
- return message.text ? (
-
-
-
- }
- } catch (e) {
- }
- return
- },
- p({ children }) {
- return {children}
- },
- code({ node, inline, className, children, ...props }) {
- if (children.length) {
- if (children[0] == '▍') {
- return (
- ▍
- )
- }
-
- children[0] = (children[0] as string).replace('`▍`', '▍')
- }
-
- const match = /language-(\w+)/.exec(className || '')
-
- if (inline) {
- return (
-
- {children}
-
- )
- }
-
- return (
-
- )
- }
- }}
- >
- {message.text}
-
-
-
- {message.author === 'bot' && }
- {message.author === 'bot' && }
-
-
- ) : null
-}
diff --git a/spaces/tabeina/bingo1/src/pages/api/blob.ts b/spaces/tabeina/bingo1/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/tabeina/bingo1/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/taesiri/DeticChatGPT/train_net.py b/spaces/taesiri/DeticChatGPT/train_net.py
deleted file mode 100644
index 251257ceb9e9dde53b12f6adf64c28fd71b3d43d..0000000000000000000000000000000000000000
--- a/spaces/taesiri/DeticChatGPT/train_net.py
+++ /dev/null
@@ -1,269 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import os
-import sys
-from collections import OrderedDict
-import torch
-from torch.nn.parallel import DistributedDataParallel
-import time
-import datetime
-
-from fvcore.common.timer import Timer
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import (
- MetadataCatalog,
- build_detection_test_loader,
-)
-from detectron2.engine import default_argument_parser, default_setup, launch
-
-from detectron2.evaluation import (
- inference_on_dataset,
- print_csv_format,
- LVISEvaluator,
- COCOEvaluator,
-)
-from detectron2.modeling import build_model
-from detectron2.solver import build_lr_scheduler, build_optimizer
-from detectron2.utils.events import (
- CommonMetricPrinter,
- EventStorage,
- JSONWriter,
- TensorboardXWriter,
-)
-from detectron2.data.dataset_mapper import DatasetMapper
-from detectron2.data.build import build_detection_train_loader
-from detectron2.utils.logger import setup_logger
-from torch.cuda.amp import GradScaler
-
-sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/')
-from centernet.config import add_centernet_config
-
-sys.path.insert(0, 'third_party/Deformable-DETR')
-from detic.config import add_detic_config
-from detic.data.custom_build_augmentation import build_custom_augmentation
-from detic.data.custom_dataset_dataloader import build_custom_train_loader
-from detic.data.custom_dataset_mapper import CustomDatasetMapper, DetrDatasetMapper
-from detic.custom_solver import build_custom_optimizer
-from detic.evaluation.oideval import OIDEvaluator
-from detic.evaluation.custom_coco_eval import CustomCOCOEvaluator
-from detic.modeling.utils import reset_cls_test
-
-
-logger = logging.getLogger("detectron2")
-
-def do_test(cfg, model):
- results = OrderedDict()
- for d, dataset_name in enumerate(cfg.DATASETS.TEST):
- if cfg.MODEL.RESET_CLS_TESTS:
- reset_cls_test(
- model,
- cfg.MODEL.TEST_CLASSIFIERS[d],
- cfg.MODEL.TEST_NUM_CLASSES[d])
- mapper = None if cfg.INPUT.TEST_INPUT_TYPE == 'default' \
- else DatasetMapper(
- cfg, False, augmentations=build_custom_augmentation(cfg, False))
- data_loader = build_detection_test_loader(cfg, dataset_name, mapper=mapper)
- output_folder = os.path.join(
- cfg.OUTPUT_DIR, "inference_{}".format(dataset_name))
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
-
- if evaluator_type == "lvis" or cfg.GEN_PSEDO_LABELS:
- evaluator = LVISEvaluator(dataset_name, cfg, True, output_folder)
- elif evaluator_type == 'coco':
- if dataset_name == 'coco_generalized_zeroshot_val':
- # Additionally plot mAP for 'seen classes' and 'unseen classes'
- evaluator = CustomCOCOEvaluator(dataset_name, cfg, True, output_folder)
- else:
- evaluator = COCOEvaluator(dataset_name, cfg, True, output_folder)
- elif evaluator_type == 'oid':
- evaluator = OIDEvaluator(dataset_name, cfg, True, output_folder)
- else:
- assert 0, evaluator_type
-
- results[dataset_name] = inference_on_dataset(
- model, data_loader, evaluator)
- if comm.is_main_process():
- logger.info("Evaluation results for {} in csv format:".format(
- dataset_name))
- print_csv_format(results[dataset_name])
- if len(results) == 1:
- results = list(results.values())[0]
- return results
-
-def do_train(cfg, model, resume=False):
- model.train()
- if cfg.SOLVER.USE_CUSTOM_SOLVER:
- optimizer = build_custom_optimizer(cfg, model)
- else:
- assert cfg.SOLVER.OPTIMIZER == 'SGD'
- assert cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE != 'full_model'
- assert cfg.SOLVER.BACKBONE_MULTIPLIER == 1.
- optimizer = build_optimizer(cfg, model)
- scheduler = build_lr_scheduler(cfg, optimizer)
-
- checkpointer = DetectionCheckpointer(
- model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler
- )
-
- start_iter = checkpointer.resume_or_load(
- cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1
- if not resume:
- start_iter = 0
- max_iter = cfg.SOLVER.MAX_ITER if cfg.SOLVER.TRAIN_ITER < 0 else cfg.SOLVER.TRAIN_ITER
-
- periodic_checkpointer = PeriodicCheckpointer(
- checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter
- )
-
- writers = (
- [
- CommonMetricPrinter(max_iter),
- JSONWriter(os.path.join(cfg.OUTPUT_DIR, "metrics.json")),
- TensorboardXWriter(cfg.OUTPUT_DIR),
- ]
- if comm.is_main_process()
- else []
- )
-
- use_custom_mapper = cfg.WITH_IMAGE_LABELS
- MapperClass = CustomDatasetMapper if use_custom_mapper else DatasetMapper
- mapper = MapperClass(cfg, True) if cfg.INPUT.CUSTOM_AUG == '' else \
- DetrDatasetMapper(cfg, True) if cfg.INPUT.CUSTOM_AUG == 'DETR' else \
- MapperClass(cfg, True, augmentations=build_custom_augmentation(cfg, True))
- if cfg.DATALOADER.SAMPLER_TRAIN in ['TrainingSampler', 'RepeatFactorTrainingSampler']:
- data_loader = build_detection_train_loader(cfg, mapper=mapper)
- else:
- data_loader = build_custom_train_loader(cfg, mapper=mapper)
-
- if cfg.FP16:
- scaler = GradScaler()
-
- logger.info("Starting training from iteration {}".format(start_iter))
- with EventStorage(start_iter) as storage:
- step_timer = Timer()
- data_timer = Timer()
- start_time = time.perf_counter()
- for data, iteration in zip(data_loader, range(start_iter, max_iter)):
- data_time = data_timer.seconds()
- storage.put_scalars(data_time=data_time)
- step_timer.reset()
- iteration = iteration + 1
- storage.step()
- loss_dict = model(data)
-
- losses = sum(
- loss for k, loss in loss_dict.items())
- assert torch.isfinite(losses).all(), loss_dict
-
- loss_dict_reduced = {k: v.item() \
- for k, v in comm.reduce_dict(loss_dict).items()}
- losses_reduced = sum(loss for loss in loss_dict_reduced.values())
- if comm.is_main_process():
- storage.put_scalars(
- total_loss=losses_reduced, **loss_dict_reduced)
-
- optimizer.zero_grad()
- if cfg.FP16:
- scaler.scale(losses).backward()
- scaler.step(optimizer)
- scaler.update()
- else:
- losses.backward()
- optimizer.step()
-
- storage.put_scalar(
- "lr", optimizer.param_groups[0]["lr"], smoothing_hint=False)
-
- step_time = step_timer.seconds()
- storage.put_scalars(time=step_time)
- data_timer.reset()
- scheduler.step()
-
- if (cfg.TEST.EVAL_PERIOD > 0
- and iteration % cfg.TEST.EVAL_PERIOD == 0
- and iteration != max_iter):
- do_test(cfg, model)
- comm.synchronize()
-
- if iteration - start_iter > 5 and \
- (iteration % 20 == 0 or iteration == max_iter):
- for writer in writers:
- writer.write()
- periodic_checkpointer.step(iteration)
-
- total_time = time.perf_counter() - start_time
- logger.info(
- "Total training time: {}".format(
- str(datetime.timedelta(seconds=int(total_time)))))
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- add_centernet_config(cfg)
- add_detic_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- if '/auto' in cfg.OUTPUT_DIR:
- file_name = os.path.basename(args.config_file)[:-5]
- cfg.OUTPUT_DIR = cfg.OUTPUT_DIR.replace('/auto', '/{}'.format(file_name))
- logger.info('OUTPUT_DIR: {}'.format(cfg.OUTPUT_DIR))
- cfg.freeze()
- default_setup(cfg, args)
- setup_logger(output=cfg.OUTPUT_DIR, \
- distributed_rank=comm.get_rank(), name="detic")
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- model = build_model(cfg)
- logger.info("Model:\n{}".format(model))
- if args.eval_only:
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
-
- return do_test(cfg, model)
-
- distributed = comm.get_world_size() > 1
- if distributed:
- model = DistributedDataParallel(
- model, device_ids=[comm.get_local_rank()], broadcast_buffers=False,
- find_unused_parameters=cfg.FIND_UNUSED_PARAM
- )
-
- do_train(cfg, model, resume=args.resume)
- return do_test(cfg, model)
-
-
-if __name__ == "__main__":
- args = default_argument_parser()
- args = args.parse_args()
- if args.num_machines == 1:
- args.dist_url = 'tcp://127.0.0.1:{}'.format(
- torch.randint(11111, 60000, (1,))[0].item())
- else:
- if args.dist_url == 'host':
- args.dist_url = 'tcp://{}:12345'.format(
- os.environ['SLURM_JOB_NODELIST'])
- elif not args.dist_url.startswith('tcp'):
- tmp = os.popen(
- 'echo $(scontrol show job {} | grep BatchHost)'.format(
- args.dist_url)
- ).read()
- tmp = tmp[tmp.find('=') + 1: -1]
- args.dist_url = 'tcp://{}:12345'.format(tmp)
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
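
The `do_train` loop in the deleted `train_net.py` above switches between a plain SGD update and a mixed-precision update guarded by `cfg.FP16`. A condensed sketch of that GradScaler pattern, assuming `model`, `optimizer`, and a batch `data` already exist (not the full detectron2 loop), is:

```python
import torch
from torch.cuda.amp import GradScaler

scaler = GradScaler()

def train_step(model, optimizer, data, use_fp16=True):
    optimizer.zero_grad()
    loss_dict = model(data)             # detectron2 models return a dict of losses in training mode
    losses = sum(loss_dict.values())
    if use_fp16:
        scaler.scale(losses).backward() # scale the loss so FP16 gradients don't underflow
        scaler.step(optimizer)          # unscales gradients, skips the step on inf/nan
        scaler.update()                 # adjust the loss scale for the next iteration
    else:
        losses.backward()
        optimizer.step()
    return losses.detach()
```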
diff --git a/spaces/tang155/bingo/src/app/page.tsx b/spaces/tang155/bingo/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
-      <DynamicComponentWithNoSSR />
-    </>
- )
-}
diff --git a/spaces/tang155/bingo/src/components/ui/input.tsx b/spaces/tang155/bingo/src/components/ui/input.tsx
deleted file mode 100644
index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/components/ui/input.tsx
+++ /dev/null
@@ -1,25 +0,0 @@
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-export interface InputProps
-  extends React.InputHTMLAttributes<HTMLInputElement> {}
-
-const Input = React.forwardRef<HTMLInputElement, InputProps>(
- ({ className, type, ...props }, ref) => {
- return (
-
- )
- }
-)
-Input.displayName = 'Input'
-
-export { Input }
diff --git a/spaces/tanishqvashisht/comicInator/generator_model.py b/spaces/tanishqvashisht/comicInator/generator_model.py
deleted file mode 100644
index 826e0b09c150204cbffc863057f594d44038e2ed..0000000000000000000000000000000000000000
--- a/spaces/tanishqvashisht/comicInator/generator_model.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.nn as nn
-
-
-class Block(nn.Module):
- def __init__(self, in_channels, out_channels, down=True, act="relu", use_dropout=False):
- super(Block, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(in_channels, out_channels, 4, 2, 1, bias=False, padding_mode="reflect")
- if down
- else nn.ConvTranspose2d(in_channels, out_channels, 4, 2, 1, bias=False),
- nn.BatchNorm2d(out_channels),
- nn.ReLU() if act == "relu" else nn.LeakyReLU(0.2),
- )
-
- self.use_dropout = use_dropout
- self.dropout = nn.Dropout(0.5)
- self.down = down
-
- def forward(self, x):
- x = self.conv(x)
- return self.dropout(x) if self.use_dropout else x
-
-
-class Generator(nn.Module):
- def __init__(self, in_channels=3, features=64):
- super().__init__()
- self.initial_down = nn.Sequential(
- nn.Conv2d(in_channels, features, 4, 2, 1, padding_mode="reflect"),
- nn.LeakyReLU(0.2),
- )
- self.down1 = Block(features, features * 2, down=True, act="leaky", use_dropout=False)
- self.down2 = Block(
- features * 2, features * 4, down=True, act="leaky", use_dropout=False
- )
- self.down3 = Block(
- features * 4, features * 8, down=True, act="leaky", use_dropout=False
- )
- self.down4 = Block(
- features * 8, features * 8, down=True, act="leaky", use_dropout=False
- )
- self.down5 = Block(
- features * 8, features * 8, down=True, act="leaky", use_dropout=False
- )
- self.down6 = Block(
- features * 8, features * 8, down=True, act="leaky", use_dropout=False
- )
- self.bottleneck = nn.Sequential(
- nn.Conv2d(features * 8, features * 8, 4, 2, 1), nn.ReLU()
- )
-
- self.up1 = Block(features * 8, features * 8, down=False, act="relu", use_dropout=True)
- self.up2 = Block(
- features * 8 * 2, features * 8, down=False, act="relu", use_dropout=True
- )
- self.up3 = Block(
- features * 8 * 2, features * 8, down=False, act="relu", use_dropout=True
- )
- self.up4 = Block(
- features * 8 * 2, features * 8, down=False, act="relu", use_dropout=False
- )
- self.up5 = Block(
- features * 8 * 2, features * 4, down=False, act="relu", use_dropout=False
- )
- self.up6 = Block(
- features * 4 * 2, features * 2, down=False, act="relu", use_dropout=False
- )
- self.up7 = Block(features * 2 * 2, features, down=False, act="relu", use_dropout=False)
- self.final_up = nn.Sequential(
- nn.ConvTranspose2d(features * 2, in_channels, kernel_size=4, stride=2, padding=1),
- nn.Tanh(),
- )
-
- def forward(self, x):
- d1 = self.initial_down(x)
- d2 = self.down1(d1)
- d3 = self.down2(d2)
- d4 = self.down3(d3)
- d5 = self.down4(d4)
- d6 = self.down5(d5)
- d7 = self.down6(d6)
- bottleneck = self.bottleneck(d7)
- up1 = self.up1(bottleneck)
- up2 = self.up2(torch.cat([up1, d7], 1))
- up3 = self.up3(torch.cat([up2, d6], 1))
- up4 = self.up4(torch.cat([up3, d5], 1))
- up5 = self.up5(torch.cat([up4, d4], 1))
- up6 = self.up6(torch.cat([up5, d3], 1))
- up7 = self.up7(torch.cat([up6, d2], 1))
- return self.final_up(torch.cat([up7, d1], 1))
-
-
-def test():
- x = torch.randn((1, 3, 256, 256))
- model = Generator(in_channels=3, features=64)
- preds = model(x)
- print(preds.shape)
-
-
-if __name__ == "__main__":
- test()
\ No newline at end of file
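
The deleted `generator_model.py` above defines a U-Net-style generator whose `test()` function only checks output shapes. A hypothetical sketch of how such a generator might be driven in a pix2pix-style training step (the optimizer settings, random tensors, and the L1-only loss are assumptions, not part of the deleted file; the adversarial term is omitted) could be appended below the class like this:

```python
import torch
import torch.nn as nn

gen = Generator(in_channels=3, features=64)
opt_gen = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
l1_loss = nn.L1Loss()

x = torch.randn(1, 3, 256, 256)  # input/condition image
y = torch.randn(1, 3, 256, 256)  # target image
fake = gen(x)
loss = l1_loss(fake, y)          # reconstruction term only in this sketch
opt_gen.zero_grad()
loss.backward()
opt_gen.step()
```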
diff --git a/spaces/tarteel-ai/demo-whisper-tiny-ar-quran/README.md b/spaces/tarteel-ai/demo-whisper-tiny-ar-quran/README.md
deleted file mode 100644
index 12bd6acc72c051c4df0e3f735abb23d8486a9934..0000000000000000000000000000000000000000
--- a/spaces/tarteel-ai/demo-whisper-tiny-ar-quran/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Tarteel AI - Whisper Tiny for Quran
-emoji: 📗
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.13.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Andromede 5 Logiciel DAstrologie Theme Astral.md b/spaces/terfces0erbo/CollegeProjectV2/Andromede 5 Logiciel DAstrologie Theme Astral.md
deleted file mode 100644
index 171807b92701a54df9b9bd88ce773ee7aa0e3f5a..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Andromede 5 Logiciel DAstrologie Theme Astral.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Andromede 5, Logiciel d'Astrologie, Theme astral
Download ⇔ https://bytlly.com/2uGlXV
-
-Modulos prefabricados baratos, Free music video editing software, ... Wwe set rollins title track 2016, Kedah vs pos malaysia 5-1, El-w200g? Football manager 2017 download torrent Russian version free full version" 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version.md b/spaces/terfces0erbo/CollegeProjectV2/CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version.md
deleted file mode 100644
index ffd1446884d710e220c9b7567d02366323f5774c..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version.md
+++ /dev/null
@@ -1,58 +0,0 @@
-
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version: A Review
-If you are looking for a powerful and versatile design tool for technical professionals, you might want to check out CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version. This is a comprehensive suite that contains impressive illustration tools, advanced dimensioning tools, 3D import and visualization tools, translation memory system, QR code generator, and more. In this article, we will review some of the key features and benefits of this software, as well as provide some tips on how to download and install it on your Windows PC.
-What is CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is a software package that includes CorelDRAW Technical Suite X7, Corel DESIGNER Technical Suite X7, Corel PHOTO-PAINT X7, Corel PowerTRACE X7, Corel CONNECT X7, Corel CAPTURE X7, and Corel Website Creator X7. These applications work together to help you create technical illustrations, diagrams, schematics, manuals, brochures, flyers, posters, and more.
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version
DOWNLOAD ✵ https://bytlly.com/2uGlIz
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is compatible with Windows 7/8/8.1 operating systems, and requires 2GB of RAM and 1GB of free disk space. It also supports both 32-bit and 64-bit architectures.
-What are the main features of CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version has many exclusive features that will enhance your designing experience. Here are some of them:
-
-- Advanced dimensioning tools: You can display precise measurement values in your drawings with dynamic callouts, dimension lines, and dimension tools. You can also customize the appearance and style of your dimensions to suit your needs.
-- 3D import and visualization tools: You can add 3D models into your technical projects with the integrated Lattice3D Studio Corel Edition. You can also view and edit 3D models from various sources, such as DWG, DXF, IGES, STEP, U3D, VRML, and PDF files.
-- Translation memory system: You can approach worldwide audience by sending text content from your graphics to Translation Memory Systems (TMS), such as SDL Trados Studio or SDL Passolo. You can also receive translated text from TMS and apply it to your graphics.
-- QR code generator: You can create QR codes to link your graphics to online sources or mobile devices. You can also customize the size, color, and border of your QR codes.
-- Enhanced color harmonies: You can create beautiful color schemes for your graphics with the enhanced color harmonies feature. You can also edit colors individually or collectively, and apply them to objects or styles.
-- Drawing scale control: You can create your drawings and designs in accurate scale with the drawing scale control feature. You can also switch between different scales easily and apply them to multiple pages.
-
-How to download and install CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-If you want to try out CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version on your PC, you can follow these steps:
-
-- Download the software from this link. You will get two files: CorelDRAWTechnicalSuiteX7Installer_32Bit.rar and CorelDRAWTechnicalSuiteX7Installer_64Bit.rar.
-- Extract the files using WinRAR or any other file compression tool.
-- Run the installer for your preferred architecture (32-bit or 64-bit).
-- Follow the instructions on the screen to complete the installation process.
-- Enjoy using CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version on your PC.
-
-Conclusion
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is a comprehensive design suite for technical professionals who need to create stunning technical illustrations with ease and efficiency. It has many exclusive features that will help you work faster and smarter, such as advanced dimensioning tools, 3D import and visualization tools, translation memory system, QR code generator, enhanced color harmonies, drawing scale control, and more.
-If you want to download and install CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version on your PC, you can use the link provided above and follow the steps we have outlined in this article.
-We hope you found this article helpful and informative.
-How to use CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is easy to use and has a user-friendly interface that lets you access different design tools quickly and conveniently. You can start your project by choosing from a variety of templates, presets, and styles, or by importing technical designs from various sources, such as CAD files, PDF files, 3D models, and more. You can also create your own custom designs from scratch using the powerful drawing and editing tools.
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version allows you to work with multiple pages and layers, and to organize your objects and styles in a logical way. You can also use the alignment and distribution tools, the snapping and dynamic guides, and the object manager to arrange your elements precisely and efficiently. You can also add text, symbols, graphics, and images to your designs, and edit them with the text editing tools, the symbol manager, the graphic editing tools, and the image editing tools.
-
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version also helps you to enhance your designs with various effects and features, such as color harmonies, transparency, gradients, fills, outlines, shadows, contours, blends, distortions, envelopes, extrusions, lenses, and more. You can also apply filters and adjustments to your images, such as brightness, contrast, color balance, sharpening, noise reduction, red-eye removal, and more.
-What are the benefits of CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version has many benefits that will make your technical design projects easier and better. Here are some of them:
-
-- High-quality output: You can export your designs in various formats, such as PDF, EPS, SVG, CGM, TIFF, JPEG, PNG, GIF, BMP, and more. You can also print your designs with high resolution and accuracy using the print preview and print merge features.
-- Compatibility: You can import and export files from other applications and formats, such as Adobe Illustrator, Adobe Photoshop, Microsoft Word, Microsoft Excel, Microsoft PowerPoint, AutoCAD DWG/DXF/DWF files, Visio VSD files, WordPerfect files, CorelDRAW files (CDR), Corel DESIGNER files (DES), Corel PHOTO-PAINT files (CPT), Corel PowerTRACE files (CMX), Corel CONNECT files (CPT), Corel CAPTURE files (CPT), Corel Website Creator files (HTML), Lattice3D Studio files (3DV/3DXML/U3D), PDF files (PDF/A/PDF/X), 3D models (IGES/STEP/VRML/U3D), QR codes (PNG/JPEG/BMP), and more.
-- Collaboration: You can share your designs with others easily using the collaboration features of CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version. You can send text content from your graphics to Translation Memory Systems (TMS) for translation into different languages. You can also receive translated text from TMS and apply it to your graphics automatically. You can also generate QR codes to link your graphics to online sources or mobile devices.
-- Support: You can get help and support from various sources when using CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version. You can access the comprehensive help system that contains tutorials, tips, tricks, videos, and more. You can also visit the online community that offers forums, blogs
-
What are the drawbacks of CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version?
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is a great design suite for technical professionals, but it is not perfect. There are some drawbacks that you might want to consider before downloading and installing this software on your PC. Here are some of them:
-
-- Price: CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is not a cheap software. It costs $999 for a full license, and $499 for an upgrade from a previous version. You might find this price too high for your budget or needs.
-- Compatibility: CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is compatible with Windows 7/8/8.1 operating systems, but not with Windows 10 or Mac OS X. You might encounter some compatibility issues if you use a different operating system or want to switch to a newer one.
-- Learning curve: CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version has many features and tools that can help you create amazing technical illustrations, but it also has a steep learning curve. You might need to spend some time and effort to learn how to use this software effectively and efficiently.
-- Support: CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version has a comprehensive help system and an online community that can provide you with help and support, but it also has some limitations. You might not find the answers or solutions you need in the help system or the online community, or you might have to wait for a long time to get a response from the support team.
-
-Conclusion
-CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is a comprehensive design suite for technical professionals who need to create stunning technical illustrations with ease and efficiency. It has many exclusive features that will help you work faster and smarter, such as advanced dimensioning tools, 3D import and visualization tools, translation memory system, QR code generator, enhanced color harmonies, drawing scale control, and more.
-If you want to download and install CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version on your PC, you can use the link provided above and follow the steps we have outlined in this article.
-However, you should also be aware of some of the drawbacks of this software, such as its price, compatibility, learning curve, and support.
-We hope you found this article helpful and informative.
-In conclusion, CorelDRAW Technical Suite X7 17.4.0.887 [32-64Bit]-[FirstUploads Full Version is a powerful and versatile design suite that can help you create professional and impressive technical illustrations for any purpose. It has many exclusive and advanced features that will enhance your designing experience and productivity. However, it also has some drawbacks that you should consider before downloading and installing this software on your PC, such as its price, compatibility, learning curve, and support. You should weigh the pros and cons of this software and decide if it suits your needs and budget.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kaspersky Reset Trial V5.1.0.41 Final - [SH] Utorrentgolkes __FULL__.md b/spaces/terfces0erbo/CollegeProjectV2/Kaspersky Reset Trial V5.1.0.41 Final - [SH] Utorrentgolkes __FULL__.md
deleted file mode 100644
index d16a4bdcc12521494c53c5e70683786a79dff74c..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Kaspersky Reset Trial V5.1.0.41 Final - [SH] Utorrentgolkes __FULL__.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Kaspersky Reset Trial v5.1.0.41 Final - [SH] utorrentgolkes
DOWNLOAD ⭐ https://bytlly.com/2uGlv1
-
-none none
-none
-none 8a78ff9644
-
-
-
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/A Guide to Microsoft Office 2019 Features Benefits and Installation.md b/spaces/tialenAdioni/chat-gpt-api/logs/A Guide to Microsoft Office 2019 Features Benefits and Installation.md
deleted file mode 100644
index 715173d847a396a2d78a94c7b444bb52d273099b..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/A Guide to Microsoft Office 2019 Features Benefits and Installation.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-How to Download and Install Microsoft Office 2019
-Microsoft Office 2019 is the latest version of the popular productivity suite that includes Word, Excel, PowerPoint, OneNote, and other applications. If you want to download and install Microsoft Office 2019 on your PC or Mac, here are the steps you need to follow.
-
-- Go to https://www.microsoft.com/en-us/microsoft-365/buy/microsoft-365 and choose the Microsoft 365 plan that suits your needs. You can also buy a one-time purchase version of Office 2019 from https://www.microsoft.com/en-us/microsoft-365/buy/office.
-- Sign in with the Microsoft account you associated with this version of Office 2019. If you don't have one, you can create one for free.
-- After signing in, select Install Office and then choose Office 2019 from the drop-down menu.
-- To install Office 2019 in a different language, or to install the 64-bit version, select Other options and choose the language and bit version you want.
-- Select Install and follow the instructions on the screen. Depending on your browser, select Run (in Edge or Internet Explorer), Setup (in Chrome), or Save File (in Firefox).
-- If you see the User Account Control prompt that says, Do you want to allow this app to make changes to your device? select Yes.
-- The installation will begin and you will see a progress bar. When it's done, you will see the phrase, You're all set! Office is installed now and an animation will show you where to find Office applications on your computer.
-- Select Close and start using your Office 2019 applications.
-
-If you have any issues with the installation or activation of Office 2019, you can visit https://support.microsoft.com/en-us/office/download-and-install-or-reinstall-office-2019-office-2016-or-office-2013-7c695b06-6d1a-4917-809c-98ce43f86479 for more help.
-microsoft office 2019 download cracked
DOWNLOAD ☆☆☆ https://urlcod.com/2uK3wZ
Why Choose Microsoft Office 2019?
-Microsoft Office 2019 is the latest and most advanced version of the Office suite that offers many new features and improvements over previous versions. Some of the benefits of choosing Microsoft Office 2019 are:
-
-- It works offline and does not require an internet connection to use.
-- It has a one-time purchase option that does not require a subscription or recurring payments.
-- It has enhanced security and privacy features that protect your data and devices.
-- It has improved performance and stability that make your work faster and smoother.
-- It has updated design and functionality that make your work easier and more enjoyable.
-
-Microsoft Office 2019 is compatible with Windows 10, Windows 8.1, Windows Server 2019, Windows Server 2016, and macOS. It includes the following applications:
-
-- Word: The powerful word processor that lets you create, edit, and share documents with ease.
-- Excel: The versatile spreadsheet application that lets you analyze, visualize, and manage data with ease.
-- PowerPoint: The dynamic presentation application that lets you create, edit, and deliver stunning presentations with ease.
-- OneNote: The digital notebook application that lets you capture, organize, and share notes with ease.
-- Outlook: The comprehensive email and calendar application that lets you manage your communications and schedule with ease.
-- Publisher: The desktop publishing application that lets you create, edit, and print professional-quality publications with ease.
-- Access: The database management application that lets you create, edit, and manage databases with ease.
-
-If you want to get the most out of your work and productivity, Microsoft Office 2019 is the best choice for you. You can download and install it today and enjoy its benefits for years to come.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk T-splines Plugin For Rhino Crack Tips and Tricks for Using T-Splines in Rhino.md b/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk T-splines Plugin For Rhino Crack Tips and Tricks for Using T-Splines in Rhino.md
deleted file mode 100644
index 6a1ac2471fb9cd5a97ae4e63d2ec18a7c4b111eb..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Autodesk T-splines Plugin For Rhino Crack Tips and Tricks for Using T-Splines in Rhino.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-Autodesk T-splines Plugin For Rhino Crack: What You Need to Know
-If you are looking for a way to create smooth and organic shapes in Rhino, you might have heard of Autodesk T-splines Plugin for Rhino. This plugin is a powerful tool that allows you to create and edit T-splines surfaces, which are a type of subdivision surface that can be converted to NURBS or meshes. However, this plugin is not free and requires a license to use. Some people might try to bypass this requirement by using a crack, which is a modified version of the software that removes the license verification. In this article, we will explain what Autodesk T-splines Plugin for Rhino is, what a crack is and why people use it, what are the risks and disadvantages of using a crack, how to install and use a crack, and what are some alternatives and recommendations to using a crack.
-Introduction
-What is Autodesk T-splines Plugin for Rhino?
-Autodesk T-splines Plugin for Rhino is a plugin that extends the functionality of Rhino, a popular 3D modeling software. The plugin allows you to create and edit T-splines surfaces, which are a type of subdivision surface that can be converted to NURBS or meshes. T-splines surfaces have several advantages over traditional NURBS surfaces, such as:
-Autodesk T-splines Plugin For Rhino Crack
Download ☆ https://urlcod.com/2uKa4B
-
-- They can have any number of sides, not just four.
-- They can have local refinement, meaning you can add more detail to specific areas without affecting the rest of the surface.
-- They can have smooth transitions between different levels of detail.
-- They can be easily manipulated by moving control points or using tools like extrude, insert edge, weld, etc.
-
-T-splines surfaces are ideal for creating smooth and organic shapes, such as characters, vehicles, jewelry, furniture, etc. They can also be used for reverse engineering scanned data or repairing meshes. The plugin integrates seamlessly with Rhino's interface and commands, and supports both Windows and Mac operating systems.
-What is a crack and why do people use it?
-A crack is a modified version of a software that removes or bypasses the license verification process. This means that you can use the software without paying for it or entering a valid serial number. People might use a crack for various reasons, such as:
-
-- They cannot afford to buy the software or they think it is too expensive.
-- They want to test the software before buying it or they only need it for a short period of time.
-- They do not agree with the terms and conditions or the privacy policy of the software developer.
-- They do not have access to the official website or download link of the software.
-
-A crack is usually distributed as a file or a folder that contains the modified version of the software or a patch that can modify the original version. You can find cracks on various websites, forums, blogs, torrent sites, etc. However, using a crack is illegal and unethical, as it violates the intellectual property rights of the software developer. It also exposes you to various risks and disadvantages, which we will discuss in the next section.
-What are the risks and disadvantages of using a crack?
-Using a crack might seem like an easy and convenient way to use Autodesk T-splines Plugin for Rhino without paying for it, but it comes with many drawbacks and dangers. Some of them are:
-
-- You might download a fake or corrupted crack that does not work or damages your system.
-- You might download a malicious crack that contains viruses, malware, spyware, ransomware, etc. that can infect your computer and compromise your security and privacy.
-- You might face legal consequences if you are caught using a crack, such as fines, lawsuits, or even jail time.
-- You might lose access to updates, bug fixes, technical support, customer service, etc. from the software developer.
-- You might experience performance issues, compatibility problems, errors, crashes, etc. while using the crack.
-- You might miss out on new features and improvements that are added to the official version of the software.
-
-As you can see, using a crack is not worth it in the long run. You might save some money in the short term, but you will pay a higher price in terms of quality, reliability, security, legality, and ethics. Therefore, we strongly advise you not to use a crack and instead look for other options that are safer and more legitimate.
- How to Install Autodesk T-splines Plugin for Rhino Crack
- Step 1: Download the crack file from a reliable source
- If you still decide to use a crack despite our warnings, you will need to find and download one from somewhere. As we mentioned before, there are many websites that offer cracks for various software, but not all of them are trustworthy or functional. You will need to do some research and check reviews before downloading anything. You will also need to make sure that your antivirus software is up to date and scan any file you download before opening it.
- For this article, we will assume that you have found a working crack file for Autodesk T-splines Plugin for Rhino that matches your version of Rhino (the latest version at the time of writing is 4.0) and your operating system (Windows or Mac). We will also assume that you have already installed Rhino on your computer. If not, you will need to do that first before proceeding with this guide.
- The crack file should be in a compressed format such as ZIP or RAR. You will need to extract it using an application such as WinRAR or 7-Zip. You should see something like this:
-  Step 2: Extract the file and run the setup.exe file
- The next step is to extract the file and run the setup.exe file (or setup.dmg if you are on Mac). This will launch an installation wizard that will guide you through the process of installing Autodesk T-splines Plugin for Rhino on your computer. You should see something like this:
-  Follow quads, etc. For this example, we will use the default settings and click OK.
-- A new NURBS surface or a mesh will be created that matches the shape of the T-splines surface. You can see the control points and edges of the NURBS surface or the vertices and faces of the mesh by clicking on the Show Points button on the toolbar or typing PointsOn on the command line.
-
-Here is an image of the final result:
-  This is just a simple example of how to create and edit T-splines surfaces in Rhino. You can create more complex and organic shapes by using different types of curves, points, surfaces, or meshes as input. You can also use different tools and options to modify the shape and topology of the T-splines surface. You can experiment with different settings and parameters to achieve the desired result.
- How to convert T-splines surfaces to NURBS or meshes
- As we mentioned before, one of the advantages of T-splines surfaces is that they can be converted to NURBS surfaces or meshes. This allows you to use them with other Rhino commands or export them to other formats or applications. To convert T-splines surfaces to NURBS surfaces or meshes, you need to follow some basic steps. Here is an example of how to convert a T-splines surface to a NURBS surface:
-
-- Select the T-splines surface that you want to convert and click on the Convert button on the toolbar or type TsConvertToRhino on the command line.
-- A dialog box will appear where you can choose some options for converting the surface, such as tolerance, smoothness, quads, etc. For this example, we will use the default settings and click OK.
-- A new NURBS surface will be created that matches the shape of the T-splines surface. You can see the control points and edges of the NURBS surface by clicking on the Show Points button on the toolbar or typing PointsOn on the command line.
-- You can now use the NURBS surface with any Rhino command or export it to any format that supports NURBS surfaces.
-
-Here is an image of the final result:
-  This is just a simple example of how to convert a T-splines surface to a NURBS surface. You can also convert a T-splines surface to a mesh by choosing Mesh as the output type in the dialog box. You can also convert a NURBS surface or a mesh to a T-splines surface by clicking on the toolbar or typing TsFromRhino on the command line. You can also convert a .tsm file to a T-splines surface by clicking on the Import T-splines File button on the toolbar or typing TsImport on the command line.
- How to export and import T-splines files
- Another feature of Autodesk T-splines Plugin for Rhino is that it allows you to export and import T-splines files. A T-splines file is a proprietary format that can be opened by other Autodesk products such as Fusion 360 or Maya. This allows you to transfer your T-splines models between different applications and platforms. To export and import T-splines files, you need to follow some basic steps. Here is an example of how to export a T-splines surface to a .tsm file:
-
-- Select the T-splines surface that you want to export and click on the Export T-splines File button on the toolbar or type TsExport on the command line.
-- A dialog box will appear where you can choose a name and a location for the .tsm file. For this example, we will name it test.tsm and save it on the desktop.
-- A new .tsm file will be created that contains the data of the T-splines surface. You can open it with any application that supports .tsm files.
-
-Here is an image of the final result:
-  This is just a simple example of how to export a T-splines surface to a .tsm file. You can also import a .tsm file to Rhino by clicking on the Import T-splines File button on the toolbar or typing TsImport on the command line. A dialog box will appear where you can choose a .tsm file to open. A new T-splines surface will be created that matches the data of the .tsm file.
- Conclusion
- Summary of the main points of the article
- In this article, we have explained what Autodesk T-splines Plugin for Rhino is, what a crack is and why people use it, what are the risks and disadvantages of using a crack, how to install and use a crack, and what are some alternatives and recommendations to using a crack. We have also shown you how to create and edit T-splines surfaces in Rhino, how to convert them to NURBS surfaces or meshes, and how to export and import them as .tsm files. We hope that this article has been informative and helpful for you.
- Recommendations and alternatives to using a crack
- As we have mentioned several times throughout this article, using a crack is not a good idea for many reasons. It is illegal, unethical, risky, and unreliable. It can cause you many problems and headaches in the future. Therefore, we strongly recommend you not to use a crack and instead look for other options that are safer and more legitimate. Some of them are:
-
-- Buy the official version of Autodesk T-splines Plugin for Rhino from the Autodesk website or an authorized reseller. This is the best option if you want to use the plugin without any limitations or issues. You will also get access to updates, bug fixes, technical support, customer service, etc. The price of the plugin is $675 USD for a perpetual license or $195 USD for an annual subscription.
-- Use the free trial version of Autodesk T-splines Plugin for Rhino from the Autodesk website. This is a good option if you want to test the plugin before buying it or if you only need it for a short period of time. You can use the plugin for 30 days without any restrictions or obligations. You will need to register with your email address and create an Autodesk account to download and activate the trial version.
-- Use an alternative plugin or software that offers similar functionality as Autodesk T-splines Plugin for Rhino. This is an option if you want to save some money or if you prefer another interface or workflow. There are many plugins or software that allow you to create and edit subdivision surfaces in Rhino or other 3D modeling applications. Some examples are Clayoo, SubD, XirusCAD, etc.
-
-These are some of the options that we suggest you consider instead of using a crack. Of course, you are free to make your own decision based on your needs and preferences, but we hope that you will choose wisely and responsibly.
- FAQs
- Here are some frequently asked questions about Autodesk T-splines Plugin for Rhino Crack:
-
-- Q: Is Autodesk T-splines Plugin for Rhino compatible with Rhino 7?
A: No, Autodesk T-splines Plugin for Rhino is only compatible with Rhino 5 and Rhino 6. There is no plan to update it for Rhino 7.
-- Q: Is Autodesk T-splines Plugin for Rhino still supported by Autodesk?
A: No, Autodesk T-splines Plugin for Rhino is no longer supported by Autodesk since January 7th, 2018. This means that there will be no more updates, bug fixes, technical support, customer service, etc. for this plugin.
-the official version of Autodesk T-splines Plugin for Rhino?
A: Yes, there are some differences between the crack and the official version of the plugin. The crack might not work properly or cause errors, crashes, compatibility problems, etc. The crack might also contain viruses, malware, spyware, ransomware, etc. that can harm your computer and compromise your security and privacy. The crack might also be outdated and miss some features and improvements that are added to the official version of the plugin.
-- Q: How can I uninstall Autodesk T-splines Plugin for Rhino Crack from my computer?
A: To uninstall the crack from your computer, you need to follow some steps. Here is an example of how to uninstall it on Windows:
-
-- Go to Control Panel > Programs > Programs and Features.
-- Find Autodesk T-splines Plugin for Rhino in the list of installed programs and click on Uninstall.
-- Follow the instructions on the screen to complete the uninstallation process.
-- Delete the crack file and folder from your computer.
-- Restart your computer.
-
-Here is an example of how to uninstall it on Mac:
-
-- Go to Applications > Rhino 6 > Plug-ins.
-- Find T-Splines.rhp in the list of plug-ins and drag it to the Trash.
-- Empty the Trash.
-- Delete the crack file and folder from your computer.
-- Restart your computer.
-
-- Q: Where can I find more information or help about Autodesk T-splines Plugin for Rhino?
A: You can find more information or help about the plugin on the following sources:
-
-- The official website of Autodesk T-splines Plugin for Rhino: https://www.autodesk.com/products/t-splines-plugin-for-rhino/overview
-- The official documentation of Autodesk T-splines Plugin for Rhino: https://help.autodesk.com/view/TSPR/2018/ENU/
-- The official forum of Autodesk T-splines Plugin for Rhino: https://forums.autodesk.com/t5/t-splines-for-rhino/bd-p/TSPLINES
-- The official YouTube channel of Autodesk T-splines Plugin for Rhino: https://www.youtube.com/user/tsplines
-
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dr Suess The Lorax 720p Tamil Movie Download !!HOT!!.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dr Suess The Lorax 720p Tamil Movie Download !!HOT!!.md
deleted file mode 100644
index b5aed47723a90c28b46bbec371f980fc87c91ee7..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Dr Suess The Lorax 720p Tamil Movie Download !!HOT!!.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-Here is a possible title and article with html formatting for the keyword "Dr Suess The Lorax 720p Tamil Movie Download":
-
-How to Download Dr Suess The Lorax 720p Tamil Movie for Free
-If you are a fan of Dr Suess's classic children's book The Lorax, you might be interested in watching the 2012 animated movie adaptation in Tamil. The movie features the voices of Danny DeVito, Zac Efron, Taylor Swift, Ed Helms and Betty White. It tells the story of a young boy who seeks to find the legendary Lorax, a creature who speaks for the trees, and learns about the environmental consequences of greed and consumerism.
-Fortunately, you can download Dr Suess The Lorax 720p Tamil movie for free from various online sources. However, you should be careful about the quality and legality of the downloads. Some websites may offer low-quality or corrupted files that can harm your device or expose you to malware. Others may violate the copyright laws and put you at risk of legal action.
-Dr Suess The Lorax 720p Tamil Movie Download
Download ✑ https://urlcod.com/2uKaGE
-To avoid these problems, we recommend you to use a trusted and reliable website that offers high-quality and legal downloads of Dr Suess The Lorax 720p Tamil movie. One such website is www.example.com, which has a large collection of Tamil dubbed movies in HD quality. You can download Dr Suess The Lorax 720p Tamil movie from this website in a few simple steps:
-
-- Visit www.example.com and search for Dr Suess The Lorax 720p Tamil movie in the search bar.
-- Select the movie from the list of results and click on the download button.
-- Choose a suitable download option from the available links and wait for the download to complete.
-- Enjoy watching Dr Suess The Lorax 720p Tamil movie on your device.
-
-By following these steps, you can download Dr Suess The Lorax 720p Tamil movie for free and enjoy it anytime you want. However, we advise you to respect the rights of the creators and distributors of the movie and not to share or distribute it without their permission. We hope you have a great time watching Dr Suess The Lorax 720p Tamil movie.
Here are a few more paragraphs for the article:
-
-Dr Suess The Lorax 720p Tamil movie is a fun and educational movie for children and adults alike. It teaches important lessons about the environment, sustainability, responsibility and friendship. It also features catchy songs, colorful animation and humorous characters that will keep you entertained throughout the movie.
-The movie is based on the 1971 book by Dr Suess, who was a famous American author and illustrator of children's books. He wrote over 60 books that have been translated into more than 20 languages and sold over 600 million copies worldwide. Some of his other popular books include The Cat in the Hat, Green Eggs and Ham, How the Grinch Stole Christmas and Horton Hears a Who.
-Dr Suess The Lorax 720p Tamil movie is a must-watch for fans of Dr Suess and Tamil movies. It is a rare opportunity to enjoy a classic story in a different language and culture. It is also a great way to introduce Dr Suess's works to a new generation of readers and viewers. You can download Dr Suess The Lorax 720p Tamil movie for free from www.example.com and share it with your family and friends.
- 7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy Hindi Hd Anaarkali Of Aarah Movies 1080p Torrent A Gripping Tale of Harassment and Humiliation.md b/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy Hindi Hd Anaarkali Of Aarah Movies 1080p Torrent A Gripping Tale of Harassment and Humiliation.md
deleted file mode 100644
index c1e591d17943c75b249b9d480a826a63056f8c6c..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Enjoy Hindi Hd Anaarkali Of Aarah Movies 1080p Torrent A Gripping Tale of Harassment and Humiliation.md
+++ /dev/null
@@ -1,56 +0,0 @@
-
-Anaarkali of Aarah: A Drama Movie with a Strong Message
-Anaarkali of Aarah is a 2017 Hindi drama movie that tells the story of a village dancer who faces harassment and humiliation from a powerful politician and his cronies. The movie stars Swara Bhaskar as Anaarkali, a feisty and fearless performer who sings and dances to raunchy songs at public events. She catches the eye of Dharmendra Chauhan (Sanjay Mishra), the vice chancellor of a local university and an influential leader of the ruling party. He becomes obsessed with her and tries to molest her during a show, but she slaps him and escapes. This triggers a series of events that put her life and dignity in danger, as Chauhan and his men try to silence her by any means necessary.
-Hindi Hd Anaarkali Of Aarah Movies 1080p Torrent
Download Zip ✶ https://urlcod.com/2uK8te
-Anaarkali of Aarah is a movie that challenges the patriarchal and feudal mindset that treats women as objects of lust and violence. It also showcases the resilience and courage of Anaarkali, who refuses to give up her art and identity, and fights back against her oppressors. The movie is directed by Avinash Das, who makes his debut as a filmmaker. He also wrote the screenplay and the lyrics for the songs. The movie has received critical acclaim for its realistic portrayal of the plight of women artists in rural India, and for its powerful performances by the lead actors. Swara Bhaskar, in particular, has been praised for her nuanced and bold portrayal of Anaarkali, which earned her several awards and nominations.
-Anaarkali of Aarah is a movie that deserves to be watched by everyone who loves cinema with a social message. It is available for download in high definition quality from various torrent sites. You can also watch it online on Netflix, where it is streaming with English subtitles. If you are looking for a movie that will make you think, feel and cheer, then Anaarkali of Aarah is the one for you.
-
-Anaarkali of Aarah is not just a movie about a woman's struggle for justice, but also a movie about the rich and vibrant culture of Bihar, where the story is set. The movie showcases the folk music and dance of the region, which are often ignored or ridiculed by the mainstream media. The movie features several songs that are sung by Anaarkali and her troupe, which are composed by Rohit Sharma and sung by Swati Sharma. The songs are catchy, witty and full of double entendres, which reflect the spirit and humor of Anaarkali. The movie also captures the colorful and chaotic atmosphere of the small towns and villages of Bihar, where politics, religion and caste play a major role in shaping people's lives.
-Anaarkali Of Aarah full movie download in Hindi hd 1080p
-Watch Anaarkali Of Aarah online free Hindi hd quality
-Anaarkali Of Aarah Hindi movie torrent magnet link 1080p
-How to download Anaarkali Of Aarah movie in Hindi hd
-Anaarkali Of Aarah 2017 Hindi movie 1080p bluray download
-Anaarkali Of Aarah Hindi movie review and ratings
-Anaarkali Of Aarah cast and crew details in Hindi
-Anaarkali Of Aarah movie songs and trailer in Hindi hd
-Anaarkali Of Aarah movie subtitles in Hindi download
-Anaarkali Of Aarah movie plot and summary in Hindi
-Anaarkali Of Aarah movie awards and nominations in Hindi
-Anaarkali Of Aarah movie box office collection in Hindi
-Anaarkali Of Aarah movie behind the scenes and making in Hindi
-Anaarkali Of Aarah movie trivia and facts in Hindi
-Anaarkali Of Aarah movie best scenes and dialogues in Hindi
-Anaarkali Of Aarah movie wallpapers and posters in Hindi hd
-Anaarkali Of Aarah movie memes and jokes in Hindi
-Anaarkali Of Aarah movie fan art and cosplay in Hindi
-Anaarkali Of Aarah movie merchandise and products in Hindi
-Anaarkali Of Aarah movie quotes and status in Hindi
-Anaarkali Of Aarah movie analysis and critique in Hindi
-Anaarkali Of Aarah movie comparison and contrast in Hindi
-Anaarkali Of Aarah movie genre and theme in Hindi
-Anaarkali Of Aarah movie inspiration and influence in Hindi
-Anaarkali Of Aarah movie remake and sequel in Hindi
-Anaarkali Of Aarah movie streaming and download sites in Hindi
-Anaarkali Of Aarah movie alternatives and recommendations in Hindi
-Anaarkali Of Aarah movie torrent file size and quality in Hindi
-Anaarkali Of Aarah movie torrent download speed and peers in Hindi
-Anaarkali Of Aarah movie torrent legal and safe issues in Hindi
-Anaarkali Of Aarah movie torrent proxy and mirror sites in Hindi
-Anaarkali Of Aarah movie torrent best practices and tips in Hindi
-Anaarkali Of Aarah movie torrent feedback and comments in Hindi
-Anaarkali Of Aarah movie torrent problems and solutions in Hindi
-Anaarkali Of Aarah movie torrent software and tools in Hindi
-Anaarkali Of Aarah movie torrent latest news and updates in Hindi
-Anaarkali Of Aarah movie torrent history and evolution in Hindi
-Anaarkali Of Aarah movie torrent advantages and disadvantages in Hindi
-Anaarkali Of Aarah movie torrent statistics and trends in Hindi
-Anaarkali Of Aarah movie torrent myths and misconceptions in Hindi
-Anaarkali Of Aarah movie torrent FAQs and guides in Hindi
-Anaarkali Of Aarah movie torrent dos and don'ts in Hindi
-Anaarkali Of Aarah movie torrent hacks and tricks in Hindi
-Anaarkali Of Aarah movie torrent secrets and hidden features in Hindi
-Anaarkali Of Aarah movie torrent alternatives for Android/iOS/PC/Mac/Linux/Windows/Chrome/Firefox/Safari/Opera etc. in Hindi
-Anaarkali of Aarah is a movie that celebrates the art and agency of women, who are often marginalized and exploited by the society. It is a movie that questions the norms and values that justify violence and oppression against women. It is a movie that inspires women to stand up for their rights and dignity, and to challenge the status quo. It is a movie that deserves to be seen and appreciated by everyone who believes in equality and freedom.
e753bf7129
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/Line 6 Pod Farm 25 Platinum Crack Windows 13.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/Line 6 Pod Farm 25 Platinum Crack Windows 13.md
deleted file mode 100644
index e6265af9ff93396d88d658956af1e3317a09af01..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/Line 6 Pod Farm 25 Platinum Crack Windows 13.md
+++ /dev/null
@@ -1,130 +0,0 @@
-## Line 6 Pod Farm 25 Platinum Crack Windows 13
-
-
-
-
-
- 
-
-
-
-
-
-**CLICK HERE ✸✸✸ [https://urluso.com/2tBPX0](https://urluso.com/2tBPX0)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Get Line 6 Pod Farm 25 Platinum for Free on Windows 13
-
-
-
-Line 6 Pod Farm 25 Platinum is a premium tone plug-in that brings world-renowned POD® tone to your favorite DAW. It features an arsenal of vintage and modern amps, cabs, studio-standard effects, classic stompboxes and colorful preamps. You can create your own custom signal chains, split your guitar signal and create dual tones, and choose from more than 1000 presets.
-
-
-
-However, Line 6 Pod Farm 25 Platinum is not a cheap plug-in. It costs $299 and requires an iLok dongle to authorize it. If you don't have the money or the iLok, you might be tempted to look for a cracked version online. But beware: most of the cracks you find are either fake, infected with malware, or incomplete.
-
-
-
-So how can you get Line 6 Pod Farm 25 Platinum for free on Windows 13 without risking your computer or your wallet? The answer is simple: use R2R's release. R2R is a scene group that specializes in cracking audio software. They have released a working version of Line 6 Pod Farm 25 Platinum that does not require an iLok driver, loads faster than the original, and decrypts all the additional modules properly.
-
-
-
-To get Line 6 Pod Farm 25 Platinum for free on Windows 13, follow these steps:
-
-
-
-1. Download R2R's release from [here](https://audioz.download/software/win/125012-download_line-6-pod-farm-25-platinum-v259-r2r.html).
-
-2. Extract the zip file and run the setup.exe file.
-
-3. Follow the installation instructions and choose your VST folder.
-
-4. Copy the cracked files from the R2R folder to your VST folder, replacing the original files.
-
-5. Launch your DAW and scan for new plug-ins.
-
-6. Enjoy Line 6 Pod Farm 25 Platinum for free!
-
-
-
-Note: This method is for educational purposes only. We do not condone piracy or illegal downloading. If you like Line 6 Pod Farm 25 Platinum, please support the developers and buy it from their official website.
-
-
-
-What are the benefits of using Line 6 Pod Farm 25 Platinum?
-
-
-
-Line 6 Pod Farm 25 Platinum is not just a plug-in; it's a dream collection of gear for guitarists, bassists and vocalists. With more than 250 models of amps, cabs, effects, preamps and more, you can create any tone you can imagine. Whether you want to emulate your favorite artists, experiment with new sounds, or record professional-quality tracks, Line 6 Pod Farm 25 Platinum has you covered.
-
-
-
-Some of the benefits of using Line 6 Pod Farm 25 Platinum are:
-
-
-
-- You can use it with any audio interface; no Line 6 hardware is needed.
-
-- You can use it as a stand-alone app or as a plug-in in your DAW.
-
-- You can use it with any MIDI controller, including Line 6 FBV Shortboard MkII and FBV Express MkII.
-
-- You can use it in 64-bit mode for better performance and compatibility.
-
-- You can use POD Farm Elements to load single models or FX groups to save CPU power.
-
-- You can use Dual Tone functionality and A/B/Y box to create complex and versatile signal chains.
-
-- You can access thousands of presets created by Line 6 and other users.
-
-
-
-How to get the best out of Line 6 Pod Farm 25 Platinum?
-
-
-
-Line 6 Pod Farm 25 Platinum is a powerful and flexible tool that can help you achieve amazing tones. However, to get the best out of it, you need to know how to use it properly. Here are some tips and tricks to help you get started:
-
-
-
-- Use the carousel-style gear browser to browse and select models easily.
-
-- Use the drag-and-drop interface to arrange and adjust models in your signal chain.
-
-- Use the Compare button to switch between two different tones and find the best one.
-
-- Use the tuner and metronome to stay in tune and in time.
-
-- Use the mixer to balance the levels and pan of your signal chains.
-
-- Use the tone locker to save and load your custom tones.
-
-- Use the online library to download and upload presets from other users.
-
-- Use the help menu to access manuals, tutorials and support.
-
-
-
-Conclusion
-
-
-
-Line 6 Pod Farm 25 Platinum is a must-have plug-in for any guitarist, bassist or vocalist who wants to access a huge collection of world-class tones in their computer. It is easy to use, versatile and affordable. You can get it for free by using R2R's release, but we recommend that you support the developers and buy it from their official website. Line 6 Pod Farm 25 Platinum will take your tone to the next level!
-
- 145887f19f
-
-
-
-
-
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Boris Fx Plug 8.2.0 For Final Cut Pro X Torrent File.md b/spaces/tioseFevbu/cartoon-converter/scripts/Boris Fx Plug 8.2.0 For Final Cut Pro X Torrent File.md
deleted file mode 100644
index 062194bff9b6f64c9c8e39764479f26e0f405d4f..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Boris Fx Plug 8.2.0 For Final Cut Pro X Torrent File.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-Boris FX Plug 8.2.0 for Final Cut Pro X: A Powerful Plugin for Video Editing
-Boris FX Plug 8.2.0 is a plugin that enhances the capabilities of Final Cut Pro X, a popular video editing application for Mac users. Boris FX Plug 8.2.0 offers a range of features and effects that can help you create stunning videos with ease.
-Some of the features and effects that Boris FX Plug 8.2.0 provides are:
-boris fx plug 8.2.0 for final cut pro x torrent file
Download Zip ===== https://urlcod.com/2uHxjC
-
-- Continuum: A collection of over 250 filters and transitions that cover every aspect of video editing, from color correction to 3D titling.
-- Sapphire: A set of over 3000 presets and effects that add cinematic flair and realism to your videos, such as lens flares, glows, and blurs.
-- Mocha Pro: A powerful planar tracking and masking tool that allows you to isolate and manipulate objects in your footage with accuracy and speed.
-- Silhouette: A professional-grade rotoscoping and paint tool that lets you create seamless composites and remove unwanted elements from your videos.
-- Optics: A plugin that simulates optical camera filters, specialized lenses, film stocks, and grain to give your videos a unique look and feel.
-- Particle Illusion: A plugin that generates realistic particle effects such as fire, smoke, sparks, and snow with ease.
-
-Boris FX Plug 8.2.0 is compatible with Final Cut Pro X version 10.4 or later, and requires macOS 10.13 or later. You can download Boris FX Plug 8.2.0 from the official website[^1^], where you can also find tutorials, support, and pricing information.
-If you are looking for a torrent file of Boris FX Plug 8.2.0 for Final Cut Pro X, you might be disappointed to know that there is no such file available online. Downloading the plugin from torrents is both illegal and risky, as the files may contain viruses or malware that can harm your computer or compromise your personal data. Moreover, torrenting Boris FX Plug 8.2.0 would violate the intellectual property rights of Boris FX, the developer of the plugin.
-Therefore, we strongly advise you to purchase Boris FX Plug 8.2.0 from the official website[^1^], where you can get a free trial or a discounted price if you are a student or an educator. By doing so, you will not only support the creators of this amazing plugin, but also enjoy its full functionality and updates without any hassle.
Here are some more paragraphs for the article:
-Boris FX Plug 8.2.0 is a plugin that can help you take your video editing skills to the next level. Whether you are a beginner or a professional, you can find a feature or an effect that suits your needs and preferences. You can also customize and tweak the settings of each effect to achieve the desired result.
-
-One of the advantages of Boris FX Plug 8.2.0 is that it integrates seamlessly with Final Cut Pro X, so you don't have to switch between different applications or formats. You can access all the features and effects of Boris FX Plug 8.2.0 from within the Final Cut Pro X interface, and apply them directly to your timeline or clips. You can also preview and render your videos faster and smoother with Boris FX Plug 8.2.0, thanks to its GPU-accelerated performance and optimization.
-Boris FX Plug 8.2.0 is a plugin that can transform your videos from ordinary to extraordinary. With its wide range of features and effects, you can create stunning videos that will impress your audience and clients. Whether you are working on a documentary, a music video, a commercial, or a personal project, Boris FX Plug 8.2.0 can help you achieve your creative vision.
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/HD Online Player (softube Saturation Knob Aax Plugin C).md b/spaces/tioseFevbu/cartoon-converter/scripts/HD Online Player (softube Saturation Knob Aax Plugin C).md
deleted file mode 100644
index ecf1b0d54f85e9c600275fd89ed0226a24912f2a..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/HD Online Player (softube Saturation Knob Aax Plugin C).md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-How to Use Softube Saturation Knob to Add Warmth and Grit to Your Mixes
-Softube Saturation Knob is a free plugin that simulates output distortion and can be used to add some warmth and grit to your mixes. Whether you want to spice up your bass lines, vocals, drums, or any other sound source, Saturation Knob can help you achieve a more analog and lively sound.
-HD Online Player (softube saturation knob aax plugin c)
Download Zip ===> https://urlcod.com/2uHyus
-In this article, we will show you how to use Softube Saturation Knob to enhance your mixes with three different modes of saturation: Keep High, Neutral, and Keep Low. We will also show you how to adjust the gain controls for optimal results.
-What is Softube Saturation Knob?
-Softube Saturation Knob is a free plugin that models output distortion and can be used anywhere you need some grit. It has one large knob that controls the amount of saturation and one small switch that selects the mode of saturation[^1^].
-Saturation Knob can be used as a native plugin in any DAW that supports VST, VST3, AU, or AAX formats. It can also be used as a module in Softube Modular[^1^], a virtual Eurorack platform that lets you create modular synth sounds with modules from Doepfer, Buchla, Intellijel, and more. Additionally, it can be used as an effect in Softube Amp Room[^1^], a pro audio guitar and bass platform that lets you combine studio effects, classic amps, cabs, and pedals to create record-ready tones.
-How to Use Softube Saturation Knob?
-To use Softube Saturation Knob, simply insert it on the track or bus that you want to saturate. Then, adjust the knob to set the amount of saturation and choose the mode that suits your sound source. You can also adjust the input and output gain controls to fine-tune the level and avoid clipping.
-
-Saturation Knob has three modes of saturation: Keep High, Neutral, and Keep Low. Each mode affects the frequency spectrum differently and gives you a different character of distortion[^1^].
-
-- Keep High: This mode distorts the low frequencies more and keeps the high frequencies relatively clean. This is useful for adding some weight and warmth to thin or bright sounds, such as vocals, guitars, or synths.
-- Neutral: This mode distorts the whole frequency spectrum evenly. This is useful for adding some overall harmonics and shimmer to dull or flat sounds, such as drums, bass, or pianos.
-- Keep Low: This mode distorts the high frequencies more and keeps the low frequencies relatively intact. This is useful for adding some edge and bite to thick or dark sounds, such as basses, kicks, or snares.
-
-You can experiment with different modes and amounts of saturation to find the best fit for your mix. You can also use Saturation Knob on multiple tracks or buses to create a cohesive and warm sound.
-How to Adjust the Gain Controls?
-Saturation Knob has two gain controls: input gain and output gain. These controls let you adjust the level of the signal before and after the saturation process. They are important for achieving a balanced and clean sound.
-The input gain control lets you boost or attenuate the signal before it reaches the saturation knob. This affects how much distortion is applied to the signal. A higher input gain will result in more saturation and vice versa. You can use this control to fine-tune the amount of saturation without changing the knob position.
-The output gain control lets you boost or attenuate the signal after it passes through the saturation knob. This affects how loud or quiet the signal is after being saturated. A higher output gain will result in a louder signal and vice versa. You can use this control to match the level of the original signal or adjust it according to your mix needs.
-Always check your levels with a metering plugin or your DAW's built-in metering, and avoid clipping or distorting your signal beyond what you intend with Saturation Knob. A good rule of thumb is to keep your peaks below -6 dBFS on individual tracks and below -3 dBFS on buses or master outputs.
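-
-To make the gain staging and the three modes easier to picture, here is a rough Python/NumPy sketch. It is not Softube's actual algorithm; the tanh waveshaper, the simple one-pole filter used to split lows from highs, and all function names are illustrative assumptions. It only shows the general idea: input gain decides how hard the signal hits the clipper, the mode decides whether the lows or the highs get distorted, and output gain restores the level afterwards.
-
-```python
-import numpy as np
-
-def db_to_gain(db: float) -> float:
-    # convert decibels to a linear gain factor
-    return 10.0 ** (db / 20.0)
-
-def one_pole_lowpass(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
-    # very rough low-pass used only to split the signal into lows and highs
-    y = np.zeros_like(x)
-    acc = 0.0
-    for i, sample in enumerate(x):
-        acc += alpha * (sample - acc)
-        y[i] = acc
-    return y
-
-def saturate(x, drive_db=12.0, mode="neutral", in_gain_db=0.0, out_gain_db=0.0):
-    # gain-staged soft clipping with a mode switch, loosely mirroring the plugin's controls
-    x = x * db_to_gain(in_gain_db)
-    lows = one_pole_lowpass(x)
-    highs = x - lows
-    drive = db_to_gain(drive_db)
-    if mode == "keep_high":      # distort the lows, keep the highs relatively clean
-        y = np.tanh(lows * drive) / drive + highs
-    elif mode == "keep_low":     # distort the highs, keep the lows relatively intact
-        y = lows + np.tanh(highs * drive) / drive
-    else:                        # neutral: distort the whole band evenly
-        y = np.tanh(x * drive) / drive
-    return y * db_to_gain(out_gain_db)
-
-# quick demo on a 100 Hz sine
-t = np.linspace(0, 0.05, int(44100 * 0.05), endpoint=False)
-signal = 0.8 * np.sin(2 * np.pi * 100 * t)
-print(saturate(signal, drive_db=18.0, mode="keep_high").max())
-```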
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py
deleted file mode 100644
index 35c703755390c834c1aea21588e18976dfd83e90..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py
+++ /dev/null
@@ -1,92 +0,0 @@
-"""Represents a wheel file and provides access to the various parts of the
-name that have meaning.
-"""
-import re
-from typing import Dict, Iterable, List
-
-from pip._vendor.packaging.tags import Tag
-
-from pip._internal.exceptions import InvalidWheelFilename
-
-
-class Wheel:
- """A wheel file"""
-
- wheel_file_re = re.compile(
-        r"""^(?P<namever>(?P<name>.+?)-(?P<ver>.*?))
-        ((-(?P<build>\d[^-]*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)
- \.whl|\.dist-info)$""",
- re.VERBOSE,
- )
-
- def __init__(self, filename: str) -> None:
- """
- :raises InvalidWheelFilename: when the filename is invalid for a wheel
- """
- wheel_info = self.wheel_file_re.match(filename)
- if not wheel_info:
- raise InvalidWheelFilename(f"{filename} is not a valid wheel filename.")
- self.filename = filename
- self.name = wheel_info.group("name").replace("_", "-")
- # we'll assume "_" means "-" due to wheel naming scheme
- # (https://github.com/pypa/pip/issues/1150)
- self.version = wheel_info.group("ver").replace("_", "-")
- self.build_tag = wheel_info.group("build")
- self.pyversions = wheel_info.group("pyver").split(".")
- self.abis = wheel_info.group("abi").split(".")
- self.plats = wheel_info.group("plat").split(".")
-
- # All the tag combinations from this file
- self.file_tags = {
- Tag(x, y, z) for x in self.pyversions for y in self.abis for z in self.plats
- }
-
- def get_formatted_file_tags(self) -> List[str]:
- """Return the wheel's tags as a sorted list of strings."""
- return sorted(str(tag) for tag in self.file_tags)
-
- def support_index_min(self, tags: List[Tag]) -> int:
- """Return the lowest index that one of the wheel's file_tag combinations
- achieves in the given list of supported tags.
-
- For example, if there are 8 supported tags and one of the file tags
- is first in the list, then return 0.
-
- :param tags: the PEP 425 tags to check the wheel against, in order
- with most preferred first.
-
- :raises ValueError: If none of the wheel's file tags match one of
- the supported tags.
- """
- try:
- return next(i for i, t in enumerate(tags) if t in self.file_tags)
- except StopIteration:
- raise ValueError()
-
- def find_most_preferred_tag(
- self, tags: List[Tag], tag_to_priority: Dict[Tag, int]
- ) -> int:
- """Return the priority of the most preferred tag that one of the wheel's file
- tag combinations achieves in the given list of supported tags using the given
- tag_to_priority mapping, where lower priorities are more-preferred.
-
- This is used in place of support_index_min in some cases in order to avoid
- an expensive linear scan of a large list of tags.
-
- :param tags: the PEP 425 tags to check the wheel against.
- :param tag_to_priority: a mapping from tag to priority of that tag, where
- lower is more preferred.
-
- :raises ValueError: If none of the wheel's file tags match one of
- the supported tags.
- """
- return min(
- tag_to_priority[tag] for tag in self.file_tags if tag in tag_to_priority
- )
-
- def supported(self, tags: Iterable[Tag]) -> bool:
- """Return whether the wheel is compatible with one of the given tags.
-
- :param tags: the PEP 425 tags to check the wheel against.
- """
- return not self.file_tags.isdisjoint(tags)
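-
-
-# Illustrative usage sketch (not part of the original module; the concrete
-# filename and tag below are assumptions chosen for the example):
-#
-#     from pip._vendor.packaging.tags import Tag
-#     w = Wheel("numpy-1.24.0-cp310-cp310-manylinux_2_17_x86_64.whl")
-#     w.name, w.version                # ("numpy", "1.24.0")
-#     w.get_formatted_file_tags()      # ["cp310-cp310-manylinux_2_17_x86_64"]
-#     w.supported([Tag("cp310", "cp310", "manylinux_2_17_x86_64")])  # True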
diff --git a/spaces/tomofi/MMOCR/configs/_base_/recog_pipelines/satrn_pipeline.py b/spaces/tomofi/MMOCR/configs/_base_/recog_pipelines/satrn_pipeline.py
deleted file mode 100644
index f191c5235a08eeae7d1e61002c00eccbdac39ed4..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/configs/_base_/recog_pipelines/satrn_pipeline.py
+++ /dev/null
@@ -1,44 +0,0 @@
-img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='ResizeOCR',
- height=32,
- min_width=100,
- max_width=100,
- keep_aspect_ratio=False,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'img_shape', 'text', 'valid_ratio',
- 'resize_shape'
- ]),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiRotateAugOCR',
- rotate_degrees=[0, 90, 270],
- transforms=[
- dict(
- type='ResizeOCR',
- height=32,
- min_width=100,
- max_width=100,
- keep_aspect_ratio=False,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'img_shape', 'valid_ratio',
- 'resize_shape', 'img_norm_cfg', 'ori_filename'
- ]),
- ])
-]
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py
deleted file mode 100644
index fe1d659f1a58ddb6e662d74a41c77005d2ee0638..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './retinanet_regnetx-3.2GF_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://regnetx_800mf',
- backbone=dict(
- type='RegNet',
- arch='regnetx_800mf',
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[64, 128, 288, 672],
- out_channels=256,
- num_outs=5))
diff --git a/spaces/training-transformers-together/Dashboard/README.md b/spaces/training-transformers-together/Dashboard/README.md
deleted file mode 100644
index 5e02143d40e746d7941067531e452472cf4b8a63..0000000000000000000000000000000000000000
--- a/spaces/training-transformers-together/Dashboard/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Dashboard
-emoji: 🌐
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-app_file: app.py
-pinned: true
----
-
-# Training transformers together dashboard
-
-[](https://huggingface.co/spaces/training-transformers-together/training-transformers-together-dashboard)
-
-A dashboard app for Hugging Face Spaces
-
----
-
-Autogenerated using [this template](https://github.com/nateraw/spaces-template)
diff --git a/spaces/trttung1610/musicgen/audiocraft/modules/activations.py b/spaces/trttung1610/musicgen/audiocraft/modules/activations.py
deleted file mode 100644
index 2d83d7c4c2dc84c64b724eadbe06157507d4f20d..0000000000000000000000000000000000000000
--- a/spaces/trttung1610/musicgen/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (str, or Callable[[Tensor], Tensor]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
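-
-
-# Illustrative usage (not part of the original module): strings map to the GLU
-# variants defined above, anything else is passed back unchanged.
-#
-#     get_activation_fn("geglu")     # -> GeGLU() instance
-#     get_activation_fn(torch.relu)  # -> torch.relu, returned as-is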
diff --git a/spaces/ucalyptus/PTI/README.md b/spaces/ucalyptus/PTI/README.md
deleted file mode 100644
index c4b70abea48f0b4457025136df12c0facc92fd5c..0000000000000000000000000000000000000000
--- a/spaces/ucalyptus/PTI/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: PTI
-emoji: 🦀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/ulysses115/diffsvc_test/training/task/SVC_task.py b/spaces/ulysses115/diffsvc_test/training/task/SVC_task.py
deleted file mode 100644
index 56c66751af4a279adeb76089b45d0ab93b29e6ad..0000000000000000000000000000000000000000
--- a/spaces/ulysses115/diffsvc_test/training/task/SVC_task.py
+++ /dev/null
@@ -1,223 +0,0 @@
-import torch
-
-import utils
-from utils.hparams import hparams
-from network.diff.net import DiffNet
-from network.diff.diffusion import GaussianDiffusion, OfflineGaussianDiffusion
-from training.task.fs2 import FastSpeech2Task
-from network.vocoders.base_vocoder import get_vocoder_cls, BaseVocoder
-from modules.fastspeech.tts_modules import mel2ph_to_dur
-
-from network.diff.candidate_decoder import FFT
-from utils.pitch_utils import denorm_f0
-from training.dataset.fs2_utils import FastSpeechDataset
-
-import numpy as np
-import os
-import torch.nn.functional as F
-
-DIFF_DECODERS = {
- 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
- 'fft': lambda hp: FFT(
- hp['hidden_size'], hp['dec_layers'], hp['dec_ffn_kernel_size'], hp['num_heads']),
-}
-
-
-class SVCDataset(FastSpeechDataset):
- def collater(self, samples):
- from preprocessing.process_pipeline import File2Batch
- return File2Batch.processed_input2batch(samples)
-
-
-class SVCTask(FastSpeech2Task):
- def __init__(self):
- super(SVCTask, self).__init__()
- self.dataset_cls = SVCDataset
- self.vocoder: BaseVocoder = get_vocoder_cls(hparams)()
-
- def build_tts_model(self):
- # import torch
- # from tqdm import tqdm
- # v_min = torch.ones([80]) * 100
- # v_max = torch.ones([80]) * -100
- # for i, ds in enumerate(tqdm(self.dataset_cls('train'))):
- # v_max = torch.max(torch.max(ds['mel'].reshape(-1, 80), 0)[0], v_max)
- # v_min = torch.min(torch.min(ds['mel'].reshape(-1, 80), 0)[0], v_min)
- # if i % 100 == 0:
- # print(i, v_min, v_max)
- # print('final', v_min, v_max)
- mel_bins = hparams['audio_num_mel_bins']
- self.model = GaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
-
-
- def build_optimizer(self, model):
- self.optimizer = optimizer = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, model.parameters()),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return optimizer
-
- def run_model(self, model, sample, return_output=False, infer=False):
- '''
- steps:
- 1. run the full model, calc the main loss
- 2. calculate loss for dur_predictor, pitch_predictor, energy_predictor
- '''
- hubert = sample['hubert'] # [B, T_t,H]
- target = sample['mels'] # [B, T_s, 80]
- mel2ph = sample['mel2ph'] # [B, T_s]
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
-
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- # NOTE: this part of script is *isolated* from other scripts, which means
- # it may not be compatible with the current version.
- pass
- # cwt_spec = sample[f'cwt_spec']
- # f0_mean = sample['f0_mean']
- # f0_std = sample['f0_std']
- # sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- # output == ret
- # model == src.diff.diffusion.GaussianDiffusion
- output = model(hubert, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- #self.add_dur_loss(output['dur'], mel2ph, txt_tokens, sample['word_boundary'], losses=losses)
- # if hparams['use_pitch_embed']:
- # self.add_pitch_loss(output, sample, losses)
- # if hparams['use_energy_embed']:
- # self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def _training_step(self, sample, batch_idx, _):
- log_outputs = self.run_model(self.model, sample)
- total_loss = sum([v for v in log_outputs.values() if isinstance(v, torch.Tensor) and v.requires_grad])
- log_outputs['batch_size'] = sample['hubert'].size()[0]
- log_outputs['lr'] = self.scheduler.get_lr()[0]
- return total_loss, log_outputs
-
- def build_scheduler(self, optimizer):
- return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
-
- def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx):
- if optimizer is None:
- return
- optimizer.step()
- optimizer.zero_grad()
- if self.scheduler is not None:
- self.scheduler.step(self.global_step // hparams['accumulate_grad_batches'])
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- hubert = sample['hubert'] # [B, T_t]
-
- target = sample['mels'] # [B, T_s, 80]
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- model_out = self.model(
- hubert, spk_embed=spk_embed, mel2ph=mel2ph, f0=sample['f0'], uv=sample['uv'], energy=energy, ref_mels=None, infer=True
- )
-
- if hparams.get('pe_enable') is not None and hparams['pe_enable']:
- gt_f0 = self.pe(sample['mels'])['f0_denorm_pred'] # pe predict from GT mel
- pred_f0 = self.pe(model_out['mel_out'])['f0_denorm_pred'] # pe predict from Pred mel
- else:
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- pred_f0 = model_out.get('f0_denorm')
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
- #self.plot_mel(batch_idx, sample['mels'], model_out['fs2_mel'], name=f'fs2mel_{batch_idx}')
- if hparams['use_pitch_embed']:
- self.plot_pitch(batch_idx, sample, model_out)
- return outputs
-
- def add_dur_loss(self, dur_pred, mel2ph, txt_tokens, wdb, losses=None):
- """
- the effect of each loss component:
- hparams['dur_loss'] : align each phoneme
- hparams['lambda_word_dur']: align each word
- hparams['lambda_sent_dur']: align each sentence
-
- :param dur_pred: [B, T], float, log scale
- :param mel2ph: [B, T]
- :param txt_tokens: [B, T]
- :param losses:
- :return:
- """
- B, T = txt_tokens.shape
- nonpadding = (txt_tokens != 0).float()
- dur_gt = mel2ph_to_dur(mel2ph, T).float() * nonpadding
- is_sil = torch.zeros_like(txt_tokens).bool()
- for p in self.sil_ph:
- is_sil = is_sil | (txt_tokens == self.phone_encoder.encode(p)[0])
- is_sil = is_sil.float() # [B, T_txt]
-
- # phone duration loss
- if hparams['dur_loss'] == 'mse':
- losses['pdur'] = F.mse_loss(dur_pred, (dur_gt + 1).log(), reduction='none')
- losses['pdur'] = (losses['pdur'] * nonpadding).sum() / nonpadding.sum()
- losses['pdur'] = losses['pdur'] * hparams['lambda_ph_dur']
- dur_pred = (dur_pred.exp() - 1).clamp(min=0)
- else:
- raise NotImplementedError
-
- # use linear scale for sent and word duration
- if hparams['lambda_word_dur'] > 0:
- #idx = F.pad(wdb.cumsum(axis=1), (1, 0))[:, :-1]
- idx = wdb.cumsum(axis=1)
- # word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_(1, idx, midi_dur) # midi_dur can be implied by add gt-ph_dur
- word_dur_p = dur_pred.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_pred)
- word_dur_g = dur_gt.new_zeros([B, idx.max() + 1]).scatter_add(1, idx, dur_gt)
- wdur_loss = F.mse_loss((word_dur_p + 1).log(), (word_dur_g + 1).log(), reduction='none')
- word_nonpadding = (word_dur_g > 0).float()
- wdur_loss = (wdur_loss * word_nonpadding).sum() / word_nonpadding.sum()
- losses['wdur'] = wdur_loss * hparams['lambda_word_dur']
- if hparams['lambda_sent_dur'] > 0:
- sent_dur_p = dur_pred.sum(-1)
- sent_dur_g = dur_gt.sum(-1)
- sdur_loss = F.mse_loss((sent_dur_p + 1).log(), (sent_dur_g + 1).log(), reduction='mean')
- losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur']
-
- ############
- # validation plots
- ############
- def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
- gt_wav = gt_wav[0].cpu().numpy()
- wav_out = wav_out[0].cpu().numpy()
- gt_f0 = gt_f0[0].cpu().numpy()
- f0 = f0[0].cpu().numpy()
- if is_mel:
- gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
- wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
- self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
- self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
-
-
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Chicken Invaders 6 Free Download Full Version For Windows 7 Discover 15 Awesome Weapons and 30 Unique Bonuses.md b/spaces/usbethFlerru/sovits-modelsV2/example/Chicken Invaders 6 Free Download Full Version For Windows 7 Discover 15 Awesome Weapons and 30 Unique Bonuses.md
deleted file mode 100644
index 939dd414c368d78dd58e576e0d7b0531f890ebac..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Chicken Invaders 6 Free Download Full Version For Windows 7 Discover 15 Awesome Weapons and 30 Unique Bonuses.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-diablo free download full game pcr project download windows freebest naruto games for pc free downloadsql server management studio 2012 download for windows 10 64 bit freedownload totally free vpn for pcdownload game plant vs zombie untuk pcsim card reader driver free download for windows 7 freefree download latest adobe reader for windows vista freefree download solitaire game for windows 10wordpress free download for windows 8.1 free
adobe lightroom windows 10 free download
free yahtzee game download windows 7 free
hearts for windows 10 download
brother mfc 490cw driver download for windows 10
geometry dash download windows 10 free
gom player for windows 10 free download free
bloodshed dev c++ download windows 10
maxima download for windows 10
apps for pc windows 7 free download games
windows 62 bit download free
download driver epson lx 800 windows xp free
hungama software free download for pc windows 7
adobe acrobat reader dc download windows 10 free
informatica download for windows 7 64 bit free
tunein download for windows 7 free
-windows xp embedded bootable iso download free
battlefield 5 pc download free
download and install google chrome on windows 10
usb driver windows xp free download free
smart switch for pc windows 10 download
-Chicken Invaders 6 Free Download Full Version For Windows 7
Download Zip 🆗 https://urlcod.com/2uyXzZ
-ea sports cricket 2007 pc game free download highly compressed
cheat engine download pc free
contract killer game free download for pc
directx 9 windows 10 free download
chicken invaders 6 free download full version for pc
-minecraft download pc windows xp free
blade 2 game free download pc
free fire game for pc free download windows 8.1
asus eee pc software download free
download ralink rt3290 driver 802.11 bgn wifi adapter windows 10
-daemon tools free download for pc games
call of duty download pc windows 7
download mario typing for windows 10
crystal reports for windows 7 64 bit download free
abc games for pc free download
windows 10 download utorrent free
candy crush saga pc game free download windows 10
vlc player.exe free download for windows 7 free
download firefox for laptop windows 10
download windows 10 64 bit iso free free
-windows phone update tool download freeinternet explorer 12 free download for windows 10balance game download for pcfarming simulator 18 download pc windows 10airdroid for pc windows 10 free downloadbattlefield 2142 free download full version pccisco vpn client windows 10 64 bit downloadand1 streetball pc game free downloadadobe photoshop free download for windows 10 pcadobe premiere pro plugins free download for windows 10
download game pc plants vs zombies 2 full version free
epson scanner software for windows 10 free download
google chrome free download windows 10 64 bit
limbo game download for pc windows 10
arduino software free download for windows 10
angry birds game free download for pc windows 10
candy crush for windows 10 free download
windows 7 themes free download full version for pc
app store for pc free download for windows 7
direct3d 12 download windows 10
free download pokemon games for pc windows 7
adobe photoshop full version free download for windows xp free
download game original pc
quicktime player free download for windows 8.1 free
disney infinity marvel super heroes pc game download
--free-download-for-windows-10-3utools-can/
-connect-windows-10-download-solve-pc-problems/
-ok.ru/download-vpn-client-for-windows-10-64-bit-results/
-2000-notebook-drivers-for-windows-10-64-bit/
-download-microsoft-office-word-for-windows-10/
-counter strike 1.6 free download for windows 10 32 bit
download war mode pc free
inet_e_download_failure windows 10 fix
download winrar apk for windows 10
chicken invaders 5 download free full version for pc
--ok.ru/airdroid-download-windows-10/
-software-al-quran-untuk-windows-10-ayat/
-of-light-for-strange-coins-steam-linking-is/
-witcher-2-enhanced-edition-trainer-the-witcher-2/
-free-download-manager-full-version-for-windows/
-adobe pdf for windows 7 free download free
download adb for windows 10 64 bit
download iwatch dvr for windows 10
apple magic trackpad windows 10 driver download
download irfanview 64 bit for windows 10
dell video drivers for windows xp free download free
adventure island game download for windows 7 free
internet explorer 9 download portugues windows 7 free
vlc media player for windows xp sp2 download free
free pc software download windows 7 ultimate
-
-download android games for pc windows 10thread disk for windows download freeadobe xd free download for windows 10 64 bitdownload video drivers for windows 7 32 bit freedownload snapchat for pc windows 7cisco linksys ae1000 driver download windows 10 freedownload adobe flash for chrome windows 10dino island pc game free download fullchess game download for pc windows 10 freeicloud for pc windows 10 download
download intercepter ng for windows 10
firefox windows 2003 download free
dolby atmos software for pc download windows 8.1
control center 3 brother download for windows 10
drum pads 24 for pc free download
galaxy life pc game download
malwarebytes anti malware free download for windows 10 32 bit
download beach buggy racing for pc windows 7
b03 free download pc
adobe flash player free download for windows 10 google chrome
windows 7 multilingual user interface pack mui free download free
download hbo go windows 10
free zip software download for windows 10
flatout 2 pc game full version free download
gta sa pc windows 10 download
-windows media player 10 video codec download free
hp photosmart premium c309a driver download windows 10
adobe premiere pro free download full version for pc
call of duty free download for pc windows 10
windows 95 iso download free free
-download intel hd graphics driver windows 10 64 bit
keylogger for windows 10 free download
call break multiplayer game download for pc
windows live mail xp 32 bit full download free
ft245r usb fifo driver windows 10 download
-crysis pc game free download
jdk download windows 10 64 bit
download control center 4 windows 10
calculator download windows 10 64 bit
canon lbp2900b software free download for windows xp free
kms download windows 10
download windows 10 pro free 64 bit
remote desktop download windows 8 free
ds3 tool download windows 10
best pc game download website free
-download minecraft for pc windows 7 full version
download russian keyboard for windows 10
download skyrim free pc windows 8
download autocad 2007 for windows 10
chess game free download windows xp free
-boot camp assistant download for windows 10
download wia driver for hp scanner windows 10
pro tools free windows download free
canon printer software download windows 10
download driver dell optiplex 7010 windows 10 64 bit
-365scores for pc free download
cricket 2004 game free download for pc
download outlaws pc game
adobe acrobat 7.0 professional free download for windows 8 free
download photoshop 64 bit windows 10
--cut-pro-for-windows-10-crack-download-final/
-ea9200-firmware-update-linksys-ea9200/
-download-icons-for-windows-10-pichon-free/
-zii-windows-10-download-adobe-zii-6-1-2-cc/
-enemy-within-hidden-achievements-xcom-enemy/
-dell touch screen driver windows 10 download
download minecraft launcher pc windows 7
dave game for pc free download
download chicken invaders 5 full version for free pc
360 browser for pc free download
citrix workspace windows 10 download
free download uc browser for pc windows 10
f1 2010 pc game free download
itunes exe download windows 10
download usb game controller driver windows 10
-korean font download windows xp freefree download teracopy software windows 7 freedownload windows 7 home premium free trial freedownload hotspot pc windows 10imo video calling for pc windows 7 free downloadgoogle duo download for pc windows 10puffin web browser for pc windows 10 free downloadcanon pixma mp250 scanner driver free download for windows 7 freekeil uvision 4 free download for windows 10 64 bitdownload vmware player for windows 10 free
free download game booster full version for windows xp free
download mf dll windows 10
javafx scene builder 2.0 download windows 10
adobe acrobat pdf free download for windows 10
ftd2xx dll windows 10 64 bit download
autodock download for windows 10
civilization 2 free download windows 10
icloud download pc windows 8
download imo messenger for windows 10
instagram pc free download windows 10
microsoft office 2013 free download for windows 7 64 bit free
beauty plus download for pc windows 7
firestorm zotac download windows 10
conexant high definition smartaudio 221 windows 10 driver download
download disk manager for windows 10
-cyberlink youcam download for windows 10
paint program download for windows 7 free
adobe photoshop 7.0 free download for windows 7 64 bit free
microphone driver download windows 10
free windows 2012 download free
-windows skins for xp free download free
curse voice download for windows free
mobile pc display driver windows 7 download
windows media player download for pc windows 10 64 bit
application lock software for pc free download full version
-windows 10 free download full version for pc 64 bit
mozilla firefox latest version windows 7 64 bit free download free
el matador pc game free download full version
manycam windows 10 download
autodesk 360 download windows 10
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/vialibre/edia_full_es/modules/module_word2Context.py b/spaces/vialibre/edia_full_es/modules/module_word2Context.py
deleted file mode 100644
index 2bb45b7d4113ebea641260768abf17b250b001de..0000000000000000000000000000000000000000
--- a/spaces/vialibre/edia_full_es/modules/module_word2Context.py
+++ /dev/null
@@ -1,208 +0,0 @@
-from datasets import load_dataset, interleave_datasets
-from modules.module_segmentedWordCloud import SegmentedWordCloud
-from modules.module_customSubsetsLabel import CustomSubsetsLabel
-from random import sample as random_sample
-from typing import Tuple, List, Dict
-import re
-
-import matplotlib as mpl
-mpl.use('Agg')
-import matplotlib.pyplot as plt
-
-
-class Word2Context:
- def __init__(
- self,
- context_ds_name: str, # Context dataset HF name | path
- vocabulary, # Vocabulary class instance
- errorManager # ErrorManager class instance
- ) -> None:
-
- self.context_ds_name = context_ds_name
-
- # Vocabulary class
- self.vocab = vocabulary
-
- # Custom Label component
- self.Label = CustomSubsetsLabel()
-
- self.errorManager = errorManager
-
- def errorChecking(
- self,
- word: str
- ) -> str:
-
- out_msj = ""
-
- if not word:
- out_msj = ['EMBEDDING_NO_WORD_PROVIDED']
- else:
- if word not in self.vocab:
- out_msj = ['EMBEDDING_WORD_OOV', word]
-
- return self.errorManager.process(out_msj)
-
- def genWebLink(
- self,
- text: str
- ) -> str:
-
- text = text.replace("\"", "'")
-        text = text.replace("<u>", "")
-        text = text.replace("</u>", "")
-        url = "https://www.google.com.tr/search?q={}".format(text)
-        return '<a href="{}" target="_blank">🌐🔍</a>'.format(url)
-
- def genWordCloudPlot(
- self,
- word: str,
- figsize: Tuple[int,int]=(9,3)
- ) -> plt.Figure:
-
- err = self.errorChecking(word)
- if err:
- raise Exception(err)
-
- freq_dic, l_group, g_group = self.vocab.getWordNeighbors(word, n_neighbors=10)
- wc = SegmentedWordCloud(freq_dic, l_group, g_group)
- return wc.plot(figsize)
-
- def genDistributionPlot(
- self,
- word: str,
- figsize: Tuple[int,int]=(6,1)
- ) -> plt.Figure:
-
- err = self.errorChecking(word)
- if err:
- raise Exception(err)
-
- x_values, y_values = self.vocab.distribution()
- w_percentile = self.vocab.getPercentile(word)
- w_freq = self.vocab.getFreq(word)
-
- fig, ax = plt.subplots(figsize=figsize)
- ax.plot(x_values, y_values, color='green')
- ax.fill_between(x_values, y_values, color='lightgreen',)
-
- ax.axvline(x=max(0,w_percentile-.01),
- color='blue',
- linewidth=7,
- alpha=.1,
- linestyle='-'
- )
-
- ax.axvline(x=min(100,w_percentile+.01),
- color='black',
- linewidth=7,
- alpha=.1,
- linestyle='-'
- )
-
- ax.axvline(x=w_percentile,
- color='#d35400',
- linewidth=2,
- linestyle='--',
- label=f'{w_freq}\n(frecuencia total)'
- )
-
- ax.axis('off')
- plt.legend(loc='upper left', prop={'size': 7})
- return fig
-
- def findSplits(
- self,
- word: str,
- subsets_list: List[str]
- ):
-
- err = self.errorChecking(word)
- if err:
- raise Exception(err)
-
- w_splits = self.vocab.getSplits(word)
-
- splits_list = []
- for subset in subsets_list:
- current_split_list = []
- for s in w_splits:
- if (subset == s.split("_")[0]):
- current_split_list.append(s)
-
- if current_split_list:
- splits_list.append(current_split_list)
-
- splits_list = [random_sample(s_list, 1)[0] for s_list in splits_list]
-
- ds_list = [
- load_dataset(path=self.context_ds_name, name=split, streaming=True, split='all')
- for split in splits_list
- ]
-
- datasets = ds_list[0]
- if len(ds_list) > 1:
- datasets = interleave_datasets(ds_list, probabilities=None)
-
- return datasets
-
- def findContexts(
- self,
- sample: str,
- word: str
- ) -> Dict[str,str]:
-
- sample = sample['text'].strip()
- context = ""
- m = re.search(r'\b{}\b'.format(word), sample)
- if m:
- init = m.span()[0]
- end = init+len(word)
-            context = sample[:init]+"<u>"+word+"</u>"+sample[end:]
- return {'context':context}
-
- def getSubsetsInfo(
- self,
- word: str
- ) -> Tuple:
-
- err = self.errorChecking(word)
- if err:
- raise Exception(err)
-
- total_freq = self.vocab.getFreq(word)
- subsets_name_list = list(self.vocab.getSubsets(word).keys())
- subsets_freq_list = list(self.vocab.getSubsets(word).values())
-
- # Create subset frequency dict to subset_freq component
- subsets_info = {
- s_name + f" ({s_freq})": s_freq/total_freq
- for s_name, s_freq in zip(subsets_name_list, subsets_freq_list)
- }
-
- subsets_origin_info = dict(sorted(subsets_info.items(), key=lambda x: x[1], reverse=True))
- subsets_info = self.Label.compute(subsets_origin_info)
- return subsets_info, subsets_origin_info
-
- def getContexts(
- self,
- word: str,
- n_context: int,
- ds
- ) -> List[Tuple]:
-
- err = self.errorChecking(word)
- if err:
- raise Exception(err)
-
- ds_w_contexts = ds.map(lambda sample: self.findContexts(sample, word))
- only_contexts = ds_w_contexts.filter(lambda sample: sample['context'] != "")
- shuffle_contexts = only_contexts.shuffle(buffer_size=10)
-
- list_of_dict = list(shuffle_contexts.take(n_context))
- list_of_contexts = [
- (i, dic['context'], dic['subset'])
- for i,dic in enumerate(list_of_dict)
- ]
-
- return list_of_contexts
\ No newline at end of file
diff --git a/spaces/vishnu0001/text2mesh/shap_e/util/notebooks.py b/spaces/vishnu0001/text2mesh/shap_e/util/notebooks.py
deleted file mode 100644
index 08ca9c294920fece544ccd8443a0623d1996be2e..0000000000000000000000000000000000000000
--- a/spaces/vishnu0001/text2mesh/shap_e/util/notebooks.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import base64
-import io
-from typing import Union
-
-import ipywidgets as widgets
-import numpy as np
-import torch
-from PIL import Image
-
-from shap_e.models.nn.camera import DifferentiableCameraBatch, DifferentiableProjectiveCamera
-from shap_e.models.transmitter.base import Transmitter, VectorDecoder
-from shap_e.rendering.torch_mesh import TorchMesh
-from shap_e.util.collections import AttrDict
-
-
-def create_pan_cameras(size: int, device: torch.device) -> DifferentiableCameraBatch:
- origins = []
- xs = []
- ys = []
- zs = []
- for theta in np.linspace(0, 2 * np.pi, num=20):
- z = np.array([np.sin(theta), np.cos(theta), -0.5])
- z /= np.sqrt(np.sum(z**2))
- origin = -z * 4
- x = np.array([np.cos(theta), -np.sin(theta), 0.0])
- y = np.cross(z, x)
- origins.append(origin)
- xs.append(x)
- ys.append(y)
- zs.append(z)
- return DifferentiableCameraBatch(
- shape=(1, len(xs)),
- flat_camera=DifferentiableProjectiveCamera(
- origin=torch.from_numpy(np.stack(origins, axis=0)).float().to(device),
- x=torch.from_numpy(np.stack(xs, axis=0)).float().to(device),
- y=torch.from_numpy(np.stack(ys, axis=0)).float().to(device),
- z=torch.from_numpy(np.stack(zs, axis=0)).float().to(device),
- width=size,
- height=size,
- x_fov=0.7,
- y_fov=0.7,
- ),
- )
-
-
-@torch.no_grad()
-def decode_latent_images(
- xm: Union[Transmitter, VectorDecoder],
- latent: torch.Tensor,
- cameras: DifferentiableCameraBatch,
- rendering_mode: str = "stf",
-):
- decoded = xm.renderer.render_views(
- AttrDict(cameras=cameras),
- params=(xm.encoder if isinstance(xm, Transmitter) else xm).bottleneck_to_params(
- latent[None]
- ),
- options=AttrDict(rendering_mode=rendering_mode, render_with_direction=False),
- )
- arr = decoded.channels.clamp(0, 255).to(torch.uint8)[0].cpu().numpy()
- return [Image.fromarray(x) for x in arr]
-
-
-@torch.no_grad()
-def decode_latent_mesh(
- xm: Union[Transmitter, VectorDecoder],
- latent: torch.Tensor,
-) -> TorchMesh:
- decoded = xm.renderer.render_views(
- AttrDict(cameras=create_pan_cameras(2, latent.device)), # lowest resolution possible
- params=(xm.encoder if isinstance(xm, Transmitter) else xm).bottleneck_to_params(
- latent[None]
- ),
- options=AttrDict(rendering_mode="stf", render_with_direction=False),
- )
- return decoded.raw_meshes[0]
-
-
-def gif_widget(images):
- writer = io.BytesIO()
- images[0].save(
- writer, format="GIF", save_all=True, append_images=images[1:], duration=100, loop=0
- )
- writer.seek(0)
- data = base64.b64encode(writer.read()).decode("ascii")
-    return widgets.HTML(f'<img src="data:image/gif;base64,{data}" />')
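
The notebook helpers above are normally used together: pan cameras are created once, a sampled latent is decoded into a ring of views, and the frames are packed into a GIF widget. A minimal sketch of that flow, assuming the standard shap_e `load_model`/`sample_latents` pipeline supplies the transmitter `xm` and the latent (the zero latent below is only a placeholder):

```python
import torch
from shap_e.models.download import load_model
from shap_e.util.notebooks import (create_pan_cameras, decode_latent_images,
                                   decode_latent_mesh, gif_widget)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
xm = load_model('transmitter', device=device)   # decoder consumed by the helpers above

# Placeholder latent; in practice this comes from sample_latents() in the
# shap_e diffusion pipeline and matches the transmitter's bottleneck size.
latent = torch.zeros(1024 * 1024, device=device)

cameras = create_pan_cameras(64, device)         # 20 cameras panning around the object
images = decode_latent_images(xm, latent, cameras, rendering_mode='nerf')
widget = gif_widget(images)                      # display in a Jupyter cell

mesh = decode_latent_mesh(xm, latent).tri_mesh() # optional: export geometry instead
```
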
diff --git a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/js/count.js b/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/js/count.js
deleted file mode 100644
index 9b44fa46e364c498c78491f1e62376fc054ad98b..0000000000000000000000000000000000000000
--- a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/js/count.js
+++ /dev/null
@@ -1,8 +0,0 @@
-var count = {{count}}
-if (count == 1) {
- $(".container").toggleClass("log-in");
- count = 0;
-}
-$(".container-form .btn2").click(function() {
- $(".container").toggleClass("log-in");
-});
\ No newline at end of file
diff --git a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/primitive.py b/spaces/vumichien/Generate_human_motion/pyrender/pyrender/primitive.py
deleted file mode 100644
index 7f83f46f532b126a4573e715dd03d079fef755ca..0000000000000000000000000000000000000000
--- a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/primitive.py
+++ /dev/null
@@ -1,489 +0,0 @@
-"""Primitives, conforming to the glTF 2.0 standards as specified in
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-primitive
-
-Author: Matthew Matl
-"""
-import numpy as np
-
-from OpenGL.GL import *
-
-from .material import Material, MetallicRoughnessMaterial
-from .constants import FLOAT_SZ, UINT_SZ, BufFlags, GLTF
-from .utils import format_color_array
-
-
-class Primitive(object):
- """A primitive object which can be rendered.
-
- Parameters
- ----------
- positions : (n, 3) float
- XYZ vertex positions.
- normals : (n, 3) float
- Normalized XYZ vertex normals.
- tangents : (n, 4) float
- XYZW vertex tangents where the w component is a sign value
- (either +1 or -1) indicating the handedness of the tangent basis.
- texcoord_0 : (n, 2) float
- The first set of UV texture coordinates.
- texcoord_1 : (n, 2) float
- The second set of UV texture coordinates.
- color_0 : (n, 4) float
- RGBA vertex colors.
- joints_0 : (n, 4) float
- Joint information.
- weights_0 : (n, 4) float
- Weight information for morphing.
- indices : (m, 3) int
- Face indices for triangle meshes or fans.
- material : :class:`Material`
- The material to apply to this primitive when rendering.
- mode : int
- The type of primitives to render, one of the following:
-
- - ``0``: POINTS
- - ``1``: LINES
- - ``2``: LINE_LOOP
- - ``3``: LINE_STRIP
- - ``4``: TRIANGLES
-        - ``5``: TRIANGLE_STRIP
-        - ``6``: TRIANGLE_FAN
- targets : (k,) int
- Morph target indices.
- poses : (x,4,4), float
- Array of 4x4 transformation matrices for instancing this object.
- """
-
- def __init__(self,
- positions,
- normals=None,
- tangents=None,
- texcoord_0=None,
- texcoord_1=None,
- color_0=None,
- joints_0=None,
- weights_0=None,
- indices=None,
- material=None,
- mode=None,
- targets=None,
- poses=None):
-
- if mode is None:
- mode = GLTF.TRIANGLES
-
- self.positions = positions
- self.normals = normals
- self.tangents = tangents
- self.texcoord_0 = texcoord_0
- self.texcoord_1 = texcoord_1
- self.color_0 = color_0
- self.joints_0 = joints_0
- self.weights_0 = weights_0
- self.indices = indices
- self.material = material
- self.mode = mode
- self.targets = targets
- self.poses = poses
-
- self._bounds = None
- self._vaid = None
- self._buffers = []
- self._is_transparent = None
- self._buf_flags = None
-
- @property
- def positions(self):
- """(n,3) float : XYZ vertex positions.
- """
- return self._positions
-
- @positions.setter
- def positions(self, value):
- value = np.asanyarray(value, dtype=np.float32)
- self._positions = np.ascontiguousarray(value)
- self._bounds = None
-
- @property
- def normals(self):
- """(n,3) float : Normalized XYZ vertex normals.
- """
- return self._normals
-
- @normals.setter
- def normals(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- if value.shape != self.positions.shape:
- raise ValueError('Incorrect normals shape')
- self._normals = value
-
- @property
- def tangents(self):
- """(n,4) float : XYZW vertex tangents.
- """
- return self._tangents
-
- @tangents.setter
- def tangents(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- if value.shape != (self.positions.shape[0], 4):
- raise ValueError('Incorrect tangent shape')
- self._tangents = value
-
- @property
- def texcoord_0(self):
- """(n,2) float : The first set of UV texture coordinates.
- """
- return self._texcoord_0
-
- @texcoord_0.setter
- def texcoord_0(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or
- value.shape[1] < 2):
- raise ValueError('Incorrect texture coordinate shape')
- if value.shape[1] > 2:
- value = value[:,:2]
- self._texcoord_0 = value
-
- @property
- def texcoord_1(self):
- """(n,2) float : The second set of UV texture coordinates.
- """
- return self._texcoord_1
-
- @texcoord_1.setter
- def texcoord_1(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or
- value.shape[1] != 2):
- raise ValueError('Incorrect texture coordinate shape')
- self._texcoord_1 = value
-
- @property
- def color_0(self):
- """(n,4) float : RGBA vertex colors.
- """
- return self._color_0
-
- @color_0.setter
- def color_0(self, value):
- if value is not None:
- value = np.ascontiguousarray(
- format_color_array(value, shape=(len(self.positions), 4))
- )
- self._is_transparent = None
- self._color_0 = value
-
- @property
- def joints_0(self):
- """(n,4) float : Joint information.
- """
- return self._joints_0
-
- @joints_0.setter
- def joints_0(self, value):
- self._joints_0 = value
-
- @property
- def weights_0(self):
- """(n,4) float : Weight information for morphing.
- """
- return self._weights_0
-
- @weights_0.setter
- def weights_0(self, value):
- self._weights_0 = value
-
- @property
- def indices(self):
- """(m,3) int : Face indices for triangle meshes or fans.
- """
- return self._indices
-
- @indices.setter
- def indices(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- self._indices = value
-
- @property
- def material(self):
- """:class:`Material` : The material for this primitive.
- """
- return self._material
-
- @material.setter
- def material(self, value):
- # Create default material
- if value is None:
- value = MetallicRoughnessMaterial()
- else:
- if not isinstance(value, Material):
- raise TypeError('Object material must be of type Material')
- self._material = value
-
- @property
- def mode(self):
- """int : The type of primitive to render.
- """
- return self._mode
-
- @mode.setter
- def mode(self, value):
- value = int(value)
- if value < GLTF.POINTS or value > GLTF.TRIANGLE_FAN:
- raise ValueError('Invalid mode')
- self._mode = value
-
- @property
- def targets(self):
- """(k,) int : Morph target indices.
- """
- return self._targets
-
- @targets.setter
- def targets(self, value):
- self._targets = value
-
- @property
- def poses(self):
- """(x,4,4) float : Homogenous transforms for instancing this primitive.
- """
- return self._poses
-
- @poses.setter
- def poses(self, value):
- if value is not None:
- value = np.asanyarray(value, dtype=np.float32)
- value = np.ascontiguousarray(value)
- if value.ndim == 2:
- value = value[np.newaxis,:,:]
- if value.shape[1] != 4 or value.shape[2] != 4:
- raise ValueError('Pose matrices must be of shape (n,4,4), '
- 'got {}'.format(value.shape))
- self._poses = value
- self._bounds = None
-
- @property
- def bounds(self):
- if self._bounds is None:
- self._bounds = self._compute_bounds()
- return self._bounds
-
- @property
- def centroid(self):
- """(3,) float : The centroid of the primitive's AABB.
- """
- return np.mean(self.bounds, axis=0)
-
- @property
- def extents(self):
- """(3,) float : The lengths of the axes of the primitive's AABB.
- """
- return np.diff(self.bounds, axis=0).reshape(-1)
-
- @property
- def scale(self):
- """(3,) float : The length of the diagonal of the primitive's AABB.
- """
- return np.linalg.norm(self.extents)
-
- @property
- def buf_flags(self):
- """int : The flags for the render buffer.
- """
- if self._buf_flags is None:
- self._buf_flags = self._compute_buf_flags()
- return self._buf_flags
-
- def delete(self):
- self._unbind()
- self._remove_from_context()
-
- @property
- def is_transparent(self):
- """bool : If True, the mesh is partially-transparent.
- """
- return self._compute_transparency()
-
- def _add_to_context(self):
- if self._vaid is not None:
- raise ValueError('Mesh is already bound to a context')
-
- # Generate and bind VAO
- self._vaid = glGenVertexArrays(1)
- glBindVertexArray(self._vaid)
-
- #######################################################################
- # Fill vertex buffer
- #######################################################################
-
- # Generate and bind vertex buffer
- vertexbuffer = glGenBuffers(1)
- self._buffers.append(vertexbuffer)
- glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer)
-
- # positions
- vertex_data = self.positions
- attr_sizes = [3]
-
- # Normals
- if self.normals is not None:
- vertex_data = np.hstack((vertex_data, self.normals))
- attr_sizes.append(3)
-
- # Tangents
- if self.tangents is not None:
- vertex_data = np.hstack((vertex_data, self.tangents))
- attr_sizes.append(4)
-
- # Texture Coordinates
- if self.texcoord_0 is not None:
- vertex_data = np.hstack((vertex_data, self.texcoord_0))
- attr_sizes.append(2)
- if self.texcoord_1 is not None:
- vertex_data = np.hstack((vertex_data, self.texcoord_1))
- attr_sizes.append(2)
-
- # Color
- if self.color_0 is not None:
- vertex_data = np.hstack((vertex_data, self.color_0))
- attr_sizes.append(4)
-
- # TODO JOINTS AND WEIGHTS
- # PASS
-
- # Copy data to buffer
- vertex_data = np.ascontiguousarray(
- vertex_data.flatten().astype(np.float32)
- )
- glBufferData(
- GL_ARRAY_BUFFER, FLOAT_SZ * len(vertex_data),
- vertex_data, GL_STATIC_DRAW
- )
- total_sz = sum(attr_sizes)
- offset = 0
- for i, sz in enumerate(attr_sizes):
- glVertexAttribPointer(
- i, sz, GL_FLOAT, GL_FALSE, FLOAT_SZ * total_sz,
- ctypes.c_void_p(FLOAT_SZ * offset)
- )
- glEnableVertexAttribArray(i)
- offset += sz
-
- #######################################################################
- # Fill model matrix buffer
- #######################################################################
-
- if self.poses is not None:
- pose_data = np.ascontiguousarray(
- np.transpose(self.poses, [0,2,1]).flatten().astype(np.float32)
- )
- else:
- pose_data = np.ascontiguousarray(
- np.eye(4).flatten().astype(np.float32)
- )
-
- modelbuffer = glGenBuffers(1)
- self._buffers.append(modelbuffer)
- glBindBuffer(GL_ARRAY_BUFFER, modelbuffer)
- glBufferData(
- GL_ARRAY_BUFFER, FLOAT_SZ * len(pose_data),
- pose_data, GL_STATIC_DRAW
- )
-
- for i in range(0, 4):
- idx = i + len(attr_sizes)
- glEnableVertexAttribArray(idx)
- glVertexAttribPointer(
- idx, 4, GL_FLOAT, GL_FALSE, FLOAT_SZ * 4 * 4,
- ctypes.c_void_p(4 * FLOAT_SZ * i)
- )
- glVertexAttribDivisor(idx, 1)
-
- #######################################################################
- # Fill element buffer
- #######################################################################
- if self.indices is not None:
- elementbuffer = glGenBuffers(1)
- self._buffers.append(elementbuffer)
- glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer)
- glBufferData(GL_ELEMENT_ARRAY_BUFFER, UINT_SZ * self.indices.size,
- self.indices.flatten().astype(np.uint32),
- GL_STATIC_DRAW)
-
- glBindVertexArray(0)
-
- def _remove_from_context(self):
- if self._vaid is not None:
- glDeleteVertexArrays(1, [self._vaid])
- glDeleteBuffers(len(self._buffers), self._buffers)
- self._vaid = None
- self._buffers = []
-
- def _in_context(self):
- return self._vaid is not None
-
- def _bind(self):
- if self._vaid is None:
- raise ValueError('Cannot bind a Mesh that has not been added '
- 'to a context')
- glBindVertexArray(self._vaid)
-
- def _unbind(self):
- glBindVertexArray(0)
-
- def _compute_bounds(self):
- """Compute the bounds of this object.
- """
- # Compute bounds of this object
- bounds = np.array([np.min(self.positions, axis=0),
- np.max(self.positions, axis=0)])
-
- # If instanced, compute translations for approximate bounds
- if self.poses is not None:
- bounds += np.array([np.min(self.poses[:,:3,3], axis=0),
- np.max(self.poses[:,:3,3], axis=0)])
- return bounds
-
- def _compute_transparency(self):
- """Compute whether or not this object is transparent.
- """
- if self.material.is_transparent:
- return True
- if self._is_transparent is None:
- self._is_transparent = False
- if self.color_0 is not None:
- if np.any(self._color_0[:,3] != 1.0):
- self._is_transparent = True
- return self._is_transparent
-
- def _compute_buf_flags(self):
- buf_flags = BufFlags.POSITION
-
- if self.normals is not None:
- buf_flags |= BufFlags.NORMAL
- if self.tangents is not None:
- buf_flags |= BufFlags.TANGENT
- if self.texcoord_0 is not None:
- buf_flags |= BufFlags.TEXCOORD_0
- if self.texcoord_1 is not None:
- buf_flags |= BufFlags.TEXCOORD_1
- if self.color_0 is not None:
- buf_flags |= BufFlags.COLOR_0
- if self.joints_0 is not None:
- buf_flags |= BufFlags.JOINTS_0
- if self.weights_0 is not None:
- buf_flags |= BufFlags.WEIGHTS_0
-
- return buf_flags
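
For reference, a `Primitive` can be built directly from the arrays documented in the class docstring, although pyrender users normally go through `Mesh.from_trimesh`. A minimal sketch with illustrative values, assuming the upstream `pyrender` package that this vendored copy mirrors:

```python
import numpy as np
from pyrender.constants import GLTF
from pyrender.primitive import Primitive

# One triangle with per-vertex colors; array shapes follow the docstring above.
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]], dtype=np.float32)
normals = np.tile([0.0, 0.0, 1.0], (3, 1)).astype(np.float32)
color_0 = np.array([[1.0, 0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0, 0.5]], dtype=np.float32)   # one translucent vertex
indices = np.array([[0, 1, 2]])

prim = Primitive(positions=positions, normals=normals, color_0=color_0,
                 indices=indices, mode=GLTF.TRIANGLES)  # material defaults to MetallicRoughnessMaterial

print(prim.bounds)          # axis-aligned bounding box computed from the positions
print(prim.centroid)        # mean of the AABB corners
print(prim.is_transparent)  # True here because one vertex alpha is below 1.0
```
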
diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/deform_conv.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/deform_conv.py
deleted file mode 100644
index a3f8c75ee774823eea334e3b3732af6a18f55038..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/deform_conv.py
+++ /dev/null
@@ -1,405 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair, _single
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..cnn import CONV_LAYERS
-from ..utils import ext_loader, print_log
-
-ext_module = ext_loader.load_ext('_ext', [
- 'deform_conv_forward', 'deform_conv_backward_input',
- 'deform_conv_backward_parameters'
-])
-
-
-class DeformConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g,
- input,
- offset,
- weight,
- stride,
- padding,
- dilation,
- groups,
- deform_groups,
- bias=False,
- im2col_step=32):
- return g.op(
- 'mmcv::MMCVDeformConv2d',
- input,
- offset,
- weight,
- stride_i=stride,
- padding_i=padding,
- dilation_i=dilation,
- groups_i=groups,
- deform_groups_i=deform_groups,
- bias_i=bias,
- im2col_step_i=im2col_step)
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deform_groups=1,
- bias=False,
- im2col_step=32):
- if input is not None and input.dim() != 4:
- raise ValueError(
- f'Expected 4D tensor as input, got {input.dim()}D tensor \
- instead.')
- assert bias is False, 'Only support bias is False.'
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deform_groups = deform_groups
- ctx.im2col_step = im2col_step
-
- # When pytorch version >= 1.6.0, amp is adopted for fp16 mode;
- # amp won't cast the type of model (float32), but "offset" is cast
- # to float16 by nn.Conv2d automatically, leading to the type
- # mismatch with input (when it is float32) or weight.
- # The flag for whether to use fp16 or amp is the type of "offset",
- # we cast weight and input to temporarily support fp16 and amp
- # whatever the pytorch version is.
- input = input.type_as(offset)
- weight = weight.type_as(input)
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(
- DeformConv2dFunction._output_size(ctx, input, weight))
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) %
- cur_im2col_step) == 0, 'im2col step must divide batchsize'
- ext_module.deform_conv_forward(
- input,
- weight,
- offset,
- output,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- cur_im2col_step = min(ctx.im2col_step, input.size(0))
- assert (input.size(0) % cur_im2col_step
- ) == 0, 'batch size must be divisible by im2col_step'
-
- grad_output = grad_output.contiguous()
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- ext_module.deform_conv_backward_input(
- input,
- offset,
- grad_output,
- grad_input,
- grad_offset,
- weight,
- ctx.bufs_[0],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- im2col_step=cur_im2col_step)
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- ext_module.deform_conv_backward_parameters(
- input,
- offset,
- grad_output,
- grad_weight,
- ctx.bufs_[0],
- ctx.bufs_[1],
- kW=weight.size(3),
- kH=weight.size(2),
- dW=ctx.stride[1],
- dH=ctx.stride[0],
- padW=ctx.padding[1],
- padH=ctx.padding[0],
- dilationW=ctx.dilation[1],
- dilationH=ctx.dilation[0],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- scale=1,
- im2col_step=cur_im2col_step)
-
- return grad_input, grad_offset, grad_weight, \
- None, None, None, None, None, None, None
-
- @staticmethod
- def _output_size(ctx, input, weight):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = ctx.padding[d]
- kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = ctx.stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(
- 'convolution input is too small (output would be ' +
- 'x'.join(map(str, output_size)) + ')')
- return output_size
-
-
-deform_conv2d = DeformConv2dFunction.apply
-
-
-class DeformConv2d(nn.Module):
- r"""Deformable 2D convolution.
-
- Applies a deformable 2D convolution over an input signal composed of
- several input planes. DeformConv2d was described in the paper
- `Deformable Convolutional Networks
- `_
-
- Note:
- The argument ``im2col_step`` was added in version 1.3.17, which means
- number of samples processed by the ``im2col_cuda_kernel`` per call.
- It enables users to define ``batch_size`` and ``im2col_step`` more
- flexibly and solved `issue mmcv#1440
- `_.
-
- Args:
- in_channels (int): Number of channels in the input image.
- out_channels (int): Number of channels produced by the convolution.
- kernel_size(int, tuple): Size of the convolving kernel.
- stride(int, tuple): Stride of the convolution. Default: 1.
- padding (int or tuple): Zero-padding added to both sides of the input.
- Default: 0.
- dilation (int or tuple): Spacing between kernel elements. Default: 1.
- groups (int): Number of blocked connections from input.
- channels to output channels. Default: 1.
- deform_groups (int): Number of deformable group partitions.
- bias (bool): If True, adds a learnable bias to the output.
- Default: False.
- im2col_step (int): Number of samples processed by im2col_cuda_kernel
- per call. It will work when ``batch_size`` > ``im2col_step``, but
- ``batch_size`` must be divisible by ``im2col_step``. Default: 32.
- `New in version 1.3.17.`
- """
-
- @deprecated_api_warning({'deformable_groups': 'deform_groups'},
- cls_name='DeformConv2d')
- def __init__(self,
- in_channels: int,
- out_channels: int,
- kernel_size: Union[int, Tuple[int, ...]],
- stride: Union[int, Tuple[int, ...]] = 1,
- padding: Union[int, Tuple[int, ...]] = 0,
- dilation: Union[int, Tuple[int, ...]] = 1,
- groups: int = 1,
- deform_groups: int = 1,
- bias: bool = False,
- im2col_step: int = 32) -> None:
- super(DeformConv2d, self).__init__()
-
- assert not bias, \
- f'bias={bias} is not supported in DeformConv2d.'
- assert in_channels % groups == 0, \
- f'in_channels {in_channels} cannot be divisible by groups {groups}'
- assert out_channels % groups == 0, \
- f'out_channels {out_channels} cannot be divisible by groups \
- {groups}'
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deform_groups = deform_groups
- self.im2col_step = im2col_step
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- # only weight, no bias
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // self.groups,
- *self.kernel_size))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- # switch the initialization of `self.weight` to the standard kaiming
- # method described in `Delving deep into rectifiers: Surpassing
- # human-level performance on ImageNet classification` - He, K. et al.
- # (2015), using a uniform distribution
- nn.init.kaiming_uniform_(self.weight, nonlinearity='relu')
-
- def forward(self, x: Tensor, offset: Tensor) -> Tensor:
- """Deformable Convolutional forward function.
-
- Args:
- x (Tensor): Input feature, shape (B, C_in, H_in, W_in)
- offset (Tensor): Offset for deformable convolution, shape
- (B, deform_groups*kernel_size[0]*kernel_size[1]*2,
- H_out, W_out), H_out, W_out are equal to the output's.
-
- An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Returns:
- Tensor: Output of the layer.
- """
- # To fix an assert error in deform_conv_cuda.cpp:128
- # input image is smaller than kernel
- input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) <
- self.kernel_size[1])
- if input_pad:
- pad_h = max(self.kernel_size[0] - x.size(2), 0)
- pad_w = max(self.kernel_size[1] - x.size(3), 0)
- x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0)
- offset = offset.contiguous()
- out = deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
- if input_pad:
- out = out[:, :, :out.size(2) - pad_h, :out.size(3) -
- pad_w].contiguous()
- return out
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(in_channels={self.in_channels},\n'
- s += f'out_channels={self.out_channels},\n'
- s += f'kernel_size={self.kernel_size},\n'
- s += f'stride={self.stride},\n'
- s += f'padding={self.padding},\n'
- s += f'dilation={self.dilation},\n'
- s += f'groups={self.groups},\n'
- s += f'deform_groups={self.deform_groups},\n'
- # bias is not supported in DeformConv2d.
- s += 'bias=False)'
- return s
-
-
-@CONV_LAYERS.register_module('DCN')
-class DeformConv2dPack(DeformConv2d):
- """A Deformable Conv Encapsulation that acts as normal Conv layers.
-
- The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`.
- The spatial arrangement is like:
-
- .. code:: text
-
- (x0, y0) (x1, y1) (x2, y2)
- (x3, y3) (x4, y4) (x5, y5)
- (x6, y6) (x7, y7) (x8, y8)
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(DeformConv2dPack, self).__init__(*args, **kwargs)
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_offset()
-
- def init_offset(self):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- offset = self.conv_offset(x)
- return deform_conv2d(x, offset, self.weight, self.stride, self.padding,
- self.dilation, self.groups, self.deform_groups,
- False, self.im2col_step)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- version = local_metadata.get('version', None)
-
- if version is None or version < 2:
- # the key is different in early versions
- # In version < 2, DeformConvPack loads previous benchmark models.
- if (prefix + 'conv_offset.weight' not in state_dict
- and prefix[:-1] + '_offset.weight' in state_dict):
- state_dict[prefix + 'conv_offset.weight'] = state_dict.pop(
- prefix[:-1] + '_offset.weight')
- if (prefix + 'conv_offset.bias' not in state_dict
- and prefix[:-1] + '_offset.bias' in state_dict):
- state_dict[prefix +
- 'conv_offset.bias'] = state_dict.pop(prefix[:-1] +
- '_offset.bias')
-
- if version is not None and version > 1:
- print_log(
- f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to '
- 'version 2.',
- logger='root')
-
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys, unexpected_keys,
- error_msgs)
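
Used as a layer, `DeformConv2dPack` is a drop-in replacement for `nn.Conv2d` that predicts its own offsets through the `conv_offset` branch, while the plain `DeformConv2d` expects the caller to supply them. A minimal sketch, assuming the compiled mmcv-full extension and a CUDA device are available; the upstream import path `mmcv.ops` is used here, which this vendored module mirrors:

```python
import torch
from mmcv.ops import DeformConv2d, DeformConv2dPack

x = torch.randn(2, 16, 32, 32, device='cuda')

# Packed variant: offsets are predicted from the input itself.
dcn = DeformConv2dPack(16, 32, kernel_size=3, padding=1).cuda()
y = dcn(x)                                                   # -> (2, 32, 32, 32)

# Plain variant: the caller provides offsets, e.g. from a separate branch.
conv = DeformConv2d(16, 32, kernel_size=3, padding=1).cuda()
offset = torch.zeros(2, 2 * 3 * 3, 32, 32, device='cuda')    # deform_groups * 2 * kH * kW channels
y_plain = conv(x, offset)                                     # zero offsets behave like an ordinary 3x3 conv
print(y.shape, y_plain.shape)
```
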
diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv_custom/__init__.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv_custom/__init__.py
deleted file mode 100644
index 4b958738b9fd93bfcec239c550df1d9a44b8c536..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv_custom/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from .checkpoint import load_checkpoint
-
-__all__ = ['load_checkpoint']
\ No newline at end of file
diff --git a/spaces/weiren119/AudiogramDigitization/src/digitizer/yolov5/models/__init__.py b/spaces/weiren119/AudiogramDigitization/src/digitizer/yolov5/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/whilefalse/CLIP/README.md b/spaces/whilefalse/CLIP/README.md
deleted file mode 100644
index 53e7da397b70290efac7056c6408969191d81c47..0000000000000000000000000000000000000000
--- a/spaces/whilefalse/CLIP/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CLIP
-emoji: 👀
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-986187d2.js b/spaces/whitphx/gradio-static-test/dist/assets/index-986187d2.js
deleted file mode 100644
index f5a867b276d0e19d75b0136118fa451883ad891c..0000000000000000000000000000000000000000
--- a/spaces/whitphx/gradio-static-test/dist/assets/index-986187d2.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as N,i as U,s as G,e as q,I as C,H as j,D as w,m as D,h as B,A as Q,ar as V,q as T,t as z,o as A,r as R,u as W,aa as X,J as O,L as P,as as Y,N as S,F as y,K as H,a1 as Z,y as p,a0 as x,b as I,a as J,j as $,k as ee,l as K,z as F}from"../lite.js";/* empty css */import{g as le,B as te}from"./Button-0391b19a.js";/* empty css */import{B as ae}from"./BlockTitle-a953ec46.js";import"./Info-7e9477b8.js";function L(a,e,t){const l=a.slice();return l[15]=e[t],l[17]=t,l}function ie(a){let e;return{c(){e=O(a[3])},m(t,l){B(t,e,l)},p(t,l){l&8&&P(e,t[3])},d(t){t&&R(e)}}}function M(a,e){let t,l,s,o,m=!1,h,b,i=e[15]+"",_,c,n,f,v;function r(){return e[13](e[15],e[17])}return n=Y(e[12][0]),{key:a,first:null,c(){t=j("label"),l=j("input"),h=C(),b=j("span"),_=O(i),c=C(),l.disabled=e[2],w(l,"type","radio"),w(l,"name",s="radio-"+e[6]),l.__value=o=e[15],l.value=l.__value,w(l,"class","svelte-1p9xokt"),w(b,"class","ml-2 svelte-1p9xokt"),w(t,"style",e[7]),w(t,"class","svelte-1p9xokt"),S(t,"disabled",e[2]),S(t,"selected",e[0]===e[15]),n.p(l),this.first=t},m(k,g){B(k,t,g),y(t,l),l.checked=l.__value===e[0],y(t,h),y(t,b),y(b,_),y(t,c),f||(v=[H(l,"change",e[11]),H(l,"input",r)],f=!0)},p(k,g){e=k,g&4&&(l.disabled=e[2]),g&64&&s!==(s="radio-"+e[6])&&w(l,"name",s),g&2&&o!==(o=e[15])&&(l.__value=o,l.value=l.__value,m=!0),(m||g&3)&&(l.checked=l.__value===e[0]),g&2&&i!==(i=e[15]+"")&&P(_,i),g&128&&w(t,"style",e[7]),g&4&&S(t,"disabled",e[2]),g&3&&S(t,"selected",e[0]===e[15])},d(k){k&&R(t),n.r(),f=!1,Z(v)}}}function ne(a){let e,t,l,s=[],o=new Map,m;e=new ae({props:{show_label:a[5],info:a[4],$$slots:{default:[ie]},$$scope:{ctx:a}}});let h=a[1];const b=i=>i[17];for(let i=0;i{t(9,o=!1)});const r=[[]];function k(){s=this.__value,t(0,s)}const g=(d,E)=>f("select",{value:d,index:E});return a.$$set=d=>{"value"in d&&t(0,s=d.value),"value_is_output"in d&&t(9,o=d.value_is_output),"style"in d&&t(10,m=d.style),"choices"in d&&t(1,h=d.choices),"disabled"in d&&t(2,b=d.disabled),"label"in d&&t(3,i=d.label),"info"in d&&t(4,_=d.info),"show_label"in d&&t(5,c=d.show_label),"elem_id"in d&&t(6,n=d.elem_id)},a.$$.update=()=>{a.$$.dirty&1&&v(),a.$$.dirty&1024&&t(7,{item_container:l}=le(m,["item_container"]),l)},[s,h,b,i,_,c,n,l,f,o,m,k,r,g]}class ue extends N{constructor(e){super(),U(this,e,se,ne,G,{value:0,value_is_output:9,style:10,choices:1,disabled:2,label:3,info:4,show_label:5,elem_id:6})}}function _e(a){let e,t,l,s,o,m;const h=[a[11]];let b={};for(let n=0;nJ(l,"value",i)),I.push(()=>J(l,"value_is_output",_)),l.$on("change",a[14]),l.$on("input",a[15]),l.$on("select",a[16]),{c(){q(e.$$.fragment),t=C(),q(l.$$.fragment)},m(n,f){D(e,n,f),B(n,t,f),D(l,n,f),m=!0},p(n,f){const v=f&2048?$(h,[ee(n[11])]):{};e.$set(v);const r={};f&4&&(r.label=n[2]),f&8&&(r.info=n[3]),f&16&&(r.elem_id=n[4]),f&512&&(r.show_label=n[9]),f&128&&(r.choices=n[7]),f&1024&&(r.style=n[10]),f&256&&(r.disabled=n[8]==="static"),!s&&f&1&&(s=!0,r.value=n[0],K(()=>s=!1)),!o&&f&2&&(o=!0,r.value_is_output=n[1],K(()=>o=!1)),l.$set(r)},i(n){m||(T(e.$$.fragment,n),T(l.$$.fragment,n),m=!0)},o(n){z(e.$$.fragment,n),z(l.$$.fragment,n),m=!1},d(n){A(e,n),n&&R(t),A(l,n)}}}function oe(a){let e,t;return e=new te({props:{visible:a[6],type:"fieldset",elem_id:a[4],elem_classes:a[5],disable:typeof a[10].container=="boolean"&&!a[10].container,$$slots:{default:[_e]},$$scope:{ctx:a}}}),{c(){q(e.$$.fragment)},m(l,s){D(e,l,s),t=!0},p(l,[s]){const o={};s&64&&(o.visible=l[6]),s&16&&(o.elem_id=l[4]),s&32&&(o.elem_classes=l[5]),s&1024&&(o.disable=typeof 
l[10].container=="boolean"&&!l[10].container),s&135071&&(o.$$scope={dirty:s,ctx:l}),e.$set(o)},i(l){t||(T(e.$$.fragment,l),t=!0)},o(l){z(e.$$.fragment,l),t=!1},d(l){A(e,l)}}}function fe(a,e,t){let{label:l="Radio"}=e,{info:s=void 0}=e,{elem_id:o=""}=e,{elem_classes:m=[]}=e,{visible:h=!0}=e,{value:b=null}=e,{value_is_output:i=!1}=e,{choices:_=[]}=e,{mode:c}=e,{show_label:n}=e,{style:f={}}=e,{loading_status:v}=e;function r(u){b=u,t(0,b)}function k(u){i=u,t(1,i)}function g(u){F.call(this,a,u)}function d(u){F.call(this,a,u)}function E(u){F.call(this,a,u)}return a.$$set=u=>{"label"in u&&t(2,l=u.label),"info"in u&&t(3,s=u.info),"elem_id"in u&&t(4,o=u.elem_id),"elem_classes"in u&&t(5,m=u.elem_classes),"visible"in u&&t(6,h=u.visible),"value"in u&&t(0,b=u.value),"value_is_output"in u&&t(1,i=u.value_is_output),"choices"in u&&t(7,_=u.choices),"mode"in u&&t(8,c=u.mode),"show_label"in u&&t(9,n=u.show_label),"style"in u&&t(10,f=u.style),"loading_status"in u&&t(11,v=u.loading_status)},[b,i,l,s,o,m,h,_,c,n,f,v,r,k,g,d,E]}class ce extends N{constructor(e){super(),U(this,e,fe,oe,G,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,choices:7,mode:8,show_label:9,style:10,loading_status:11})}}const ve=ce,ke=["static","dynamic"],we=a=>({type:{payload:"string"},description:{payload:"selected choice"},example_data:a.choices.length>1?a.choices[0]:""});export{ve as Component,we as document,ke as modes};
-//# sourceMappingURL=index-986187d2.js.map
diff --git a/spaces/wuhuik/bingo/src/components/providers.tsx b/spaces/wuhuik/bingo/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/wuhuik/bingo/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
-
-    <NextThemesProvider {...props}>
-      <TooltipProvider>{children}</TooltipProvider>
-    </NextThemesProvider>
-}
diff --git a/spaces/xcchen/vits-uma-genshin-honkai/text/cleaners.py b/spaces/xcchen/vits-uma-genshin-honkai/text/cleaners.py
deleted file mode 100644
index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000
--- a/spaces/xcchen/vits-uma-genshin-honkai/text/cleaners.py
+++ /dev/null
@@ -1,475 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-import re
-from unidecode import unidecode
-import pyopenjtalk
-from jamo import h2j, j2hcj
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba, cn2an
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# Regular expression matching whitespace:
-_whitespace_re = re.compile(r'\s+')
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def lowercase(text):
- return text.lower()
-
-
-def collapse_whitespace(text):
- return re.sub(_whitespace_re, ' ', text)
-
-
-def convert_to_ascii(text):
- return unidecode(text)
-
-
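
The helpers above are meant to be chained; a minimal sketch of a keithito-style English cleaning pass built from them (the composition itself is illustrative, not one of the cleaners defined later in this file):

```python
def basic_english_clean(text):
    # ASCII transliteration, lowercasing, abbreviation expansion, whitespace collapse.
    text = convert_to_ascii(text)
    text = lowercase(text)
    text = expand_abbreviations(text)
    return collapse_whitespace(text)

print(basic_english_clean('Dr.  Smith   met  Mr. Jones.'))
# -> 'doctor smith met mister jones.'
```
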
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text!='':
- text+=' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil','pau']:
- text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q')
- else:
- continue
- n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']:
- a2_next=-1
- else:
- a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 0, "n_speakers have to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/data/base_dataset.py b/spaces/xp3857/Image_Restoration_Colorization/Global/data/base_dataset.py
deleted file mode 100644
index 5f0ac562eacc926b606f70c9dea680021dec2edc..0000000000000000000000000000000000000000
--- a/spaces/xp3857/Image_Restoration_Colorization/Global/data/base_dataset.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import torch.utils.data as data
-from PIL import Image
-import torchvision.transforms as transforms
-import numpy as np
-import random
-
-class BaseDataset(data.Dataset):
- def __init__(self):
- super(BaseDataset, self).__init__()
-
- def name(self):
- return 'BaseDataset'
-
- def initialize(self, opt):
- pass
-
-def get_params(opt, size):
- w, h = size
- new_h = h
- new_w = w
- if opt.resize_or_crop == 'resize_and_crop':
- new_h = new_w = opt.loadSize
-
- if opt.resize_or_crop == 'scale_width_and_crop': # we scale the shorter side into 256
-
-        if w < h:
-            new_w = opt.loadSize
-            new_h = opt.loadSize * h // w
-        else:
-            new_w = opt.loadSize * w // h
-            new_h = opt.loadSize
-
-    x = random.randint(0, np.maximum(0, new_w - opt.fineSize))
-    y = random.randint(0, np.maximum(0, new_h - opt.fineSize))
-
-    flip = random.random() > 0.5
- return {'crop_pos': (x, y), 'flip': flip}
-
-def get_transform(opt, params, method=Image.BICUBIC, normalize=True):
- transform_list = []
- if 'resize' in opt.resize_or_crop:
- osize = [opt.loadSize, opt.loadSize]
- transform_list.append(transforms.Scale(osize, method))
- elif 'scale_width' in opt.resize_or_crop:
- # transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.loadSize, method))) ## Here , We want the shorter side to match 256, and Scale will finish it.
- transform_list.append(transforms.Scale(256,method))
-
- if 'crop' in opt.resize_or_crop:
- if opt.isTrain:
- transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.fineSize)))
- else:
- if opt.test_random_crop:
- transform_list.append(transforms.RandomCrop(opt.fineSize))
- else:
- transform_list.append(transforms.CenterCrop(opt.fineSize))
-
- ## when testing, for ablation study, choose center_crop directly.
-
-
-
- if opt.resize_or_crop == 'none':
- base = float(2 ** opt.n_downsample_global)
- if opt.netG == 'local':
- base *= (2 ** opt.n_local_enhancers)
- transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base, method)))
-
- if opt.isTrain and not opt.no_flip:
- transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))
-
- transform_list += [transforms.ToTensor()]
-
- if normalize:
- transform_list += [transforms.Normalize((0.5, 0.5, 0.5),
- (0.5, 0.5, 0.5))]
- return transforms.Compose(transform_list)
-
-def normalize():
- return transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
-
-def __make_power_2(img, base, method=Image.BICUBIC):
- ow, oh = img.size
- h = int(round(oh / base) * base)
- w = int(round(ow / base) * base)
- if (h == oh) and (w == ow):
- return img
- return img.resize((w, h), method)
-
-def __scale_width(img, target_width, method=Image.BICUBIC):
- ow, oh = img.size
- if (ow == target_width):
- return img
- w = target_width
- h = int(target_width * oh / ow)
- return img.resize((w, h), method)
-
-def __crop(img, pos, size):
- ow, oh = img.size
- x1, y1 = pos
- tw = th = size
- if (ow > tw or oh > th):
- return img.crop((x1, y1, x1 + tw, y1 + th))
- return img
-
-def __flip(img, flip):
- if flip:
- return img.transpose(Image.FLIP_LEFT_RIGHT)
- return img
diff --git a/spaces/xszqxszq/sovits-svc-mix/slicer.py b/spaces/xszqxszq/sovits-svc-mix/slicer.py
deleted file mode 100644
index 52300c30712388aa6360d111f67c67bf0296a509..0000000000000000000000000000000000000000
--- a/spaces/xszqxszq/sovits-svc-mix/slicer.py
+++ /dev/null
@@ -1,163 +0,0 @@
-import os.path
-import time
-from argparse import ArgumentParser
-
-import librosa
-import numpy as np
-import soundfile
-from scipy.ndimage import maximum_filter1d, uniform_filter1d
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
- print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-# @timeit
-def _window_maximum(arr, win_sz):
- return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1]
-
-
-# @timeit
-def _window_rms(arr, win_sz):
- filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2))
- return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1]
-
-
-def level2db(levels, eps=1e-12):
- return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1))
-
-
-def _apply_slice(audio, begin, end):
- if len(audio.shape) > 1:
- return audio[:, begin: end]
- else:
- return audio[begin: end]
-
-
-class Slicer:
- def __init__(self,
- sr: int,
- db_threshold: float = -40,
- min_length: int = 5000,
- win_l: int = 300,
- win_s: int = 20,
- max_silence_kept: int = 500):
- self.db_threshold = db_threshold
- self.min_samples = round(sr * min_length / 1000)
- self.win_ln = round(sr * win_l / 1000)
- self.win_sn = round(sr * win_s / 1000)
- self.max_silence = round(sr * max_silence_kept / 1000)
- if not self.min_samples >= self.win_ln >= self.win_sn:
- raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s')
- if not self.max_silence >= self.win_sn:
- raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s')
-
- @timeit
- def slice(self, audio):
- if len(audio.shape) > 1:
- samples = librosa.to_mono(audio)
- else:
- samples = audio
- if samples.shape[0] <= self.min_samples:
- return [audio]
- # get absolute amplitudes
- abs_amp = np.abs(samples - np.mean(samples))
- # calculate local maximum with large window
- win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln))
- sil_tags = []
- left = right = 0
- while right < win_max_db.shape[0]:
- if win_max_db[right] < self.db_threshold:
- right += 1
- elif left == right:
- left += 1
- right += 1
- else:
- if left == 0:
- split_loc_l = left
- else:
- sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn))
- split_win_l = left + np.argmin(rms_db_left)
- split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn])
- if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[
- 0] - 1:
- right += 1
- left = right
- continue
- if right == win_max_db.shape[0] - 1:
- split_loc_r = right + self.win_ln
- else:
- sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_right = level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln],
- win_sz=self.win_sn))
- split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right)
- split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn])
- sil_tags.append((split_loc_l, split_loc_r))
- right += 1
- left = right
- if left != right:
- sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn))
- split_win_l = left + np.argmin(rms_db_left)
- split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn])
- sil_tags.append((split_loc_l, samples.shape[0]))
- if len(sil_tags) == 0:
- return [audio]
- else:
- chunks = []
- for i in range(0, len(sil_tags)):
- chunks.append(int((sil_tags[i][0] + sil_tags[i][1]) / 2))
- return chunks
-
-
-def main():
- parser = ArgumentParser()
- parser.add_argument('audio', type=str, help='The audio to be sliced')
- parser.add_argument('--out_name', type=str, help='Output directory of the sliced audio clips')
- parser.add_argument('--out', type=str, help='Output directory of the sliced audio clips')
- parser.add_argument('--db_thresh', type=float, required=False, default=-40,
- help='The dB threshold for silence detection')
- parser.add_argument('--min_len', type=int, required=False, default=5000,
- help='The minimum milliseconds required for each sliced audio clip')
- parser.add_argument('--win_l', type=int, required=False, default=300,
- help='Size of the large sliding window, presented in milliseconds')
- parser.add_argument('--win_s', type=int, required=False, default=20,
- help='Size of the small sliding window, presented in milliseconds')
- parser.add_argument('--max_sil_kept', type=int, required=False, default=500,
- help='The maximum silence length kept around the sliced audio, presented in milliseconds')
- args = parser.parse_args()
- out = args.out
- if out is None:
- out = os.path.dirname(os.path.abspath(args.audio))
- audio, sr = librosa.load(args.audio, sr=None)
- slicer = Slicer(
- sr=sr,
- db_threshold=args.db_thresh,
- min_length=args.min_len,
- win_l=args.win_l,
- win_s=args.win_s,
- max_silence_kept=args.max_sil_kept
- )
- chunks = slicer.slice(audio)
- if not os.path.exists(args.out):
- os.makedirs(args.out)
- start = 0
- end_id = 0
- for i, chunk in enumerate(chunks):
- end = chunk
- soundfile.write(os.path.join(out, f'%s-%s.wav' % (args.out_name, str(i).zfill(2))), audio[start:end], sr)
- start = end
- end_id = i + 1
- soundfile.write(os.path.join(out, f'%s-%s.wav' % (args.out_name, str(end_id).zfill(2))), audio[start:len(audio)],
- sr)
-
-
-if __name__ == '__main__':
- main()
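
The `Slicer` can also be driven programmatically instead of through `main()`. A minimal sketch follows; the input path is hypothetical, and note that `slice()` returns the untouched audio in a one-element list when no silence crosses the threshold, otherwise a list of split points in samples:

```python
import librosa
import soundfile

audio, sr = librosa.load('input.wav', sr=None)      # hypothetical input file
slicer = Slicer(sr=sr, db_threshold=-40, min_length=5000)
chunks = slicer.slice(audio)

if len(chunks) == 1 and not isinstance(chunks[0], int):
    soundfile.write('clip-00.wav', audio, sr)        # nothing exceeded the threshold
else:
    start = 0
    for i, end in enumerate(chunks):
        soundfile.write(f'clip-{i:02d}.wav', audio[start:end], sr)
        start = end
    soundfile.write(f'clip-{len(chunks):02d}.wav', audio[start:], sr)
```
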
diff --git a/spaces/xu1998hz/sescore_german_mt/description.md b/spaces/xu1998hz/sescore_german_mt/description.md
deleted file mode 100644
index acc8b9f730fcdc4869da324c54f2db6c57f1983c..0000000000000000000000000000000000000000
--- a/spaces/xu1998hz/sescore_german_mt/description.md
+++ /dev/null
@@ -1,64 +0,0 @@
-## Installation and usage
-
-```bash
-pip install -r requirements.txt
-```
-
-Minimal example (evaluating English text generation)
-```python
-import evaluate
-sescore = evaluate.load("xu1998hz/sescore")
-# for different versions of SEScore
-# sescore = evaluate.load("xu1998hz/sescore_english_mt") -> for English at Machine Translation
-# sescore = evaluate.load("xu1998hz/sescore_german_mt") -> for German at Machine Translation
-# sescore = evaluate.load("xu1998hz/sescore_english_webnlg") -> for webnlg data-to-text
-# sescore = evaluate.load("xu1998hz/sescore_english_coco") -> for image caption
-score = sescore.compute(
- references=['sescore is a simple but effective next-generation text evaluation metric'],
- predictions=['sescore is simple effective text evaluation metric for next generation']
-)
-```
-
-*SEScore* compares a list of references (gold translation/generated output examples) with a same-length list of candidate generated samples. Currently, the output range is learned and scores are most useful in relative ranking scenarios rather than absolute comparisons. We are producing a series of rescaling options to make absolute SEScore-based scaling more effective.
-
-
-### Available pre-trained models
-
-Currently, the following language/model pairs are available:
-
-| Language | pretrained data | pretrained model link |
-|----------|-----------------|-----------------------|
-| English | MT | [xu1998hz/sescore_english_mt](https://huggingface.co/xu1998hz/sescore_english_mt) |
-| German | MT | [xu1998hz/sescore_german_mt](https://huggingface.co/xu1998hz/sescore_german_mt) |
-| English | webNLG17 | [xu1998hz/sescore_english_webnlg17](https://huggingface.co/xu1998hz/sescore_english_webnlg17) |
-| English | CoCo captions | [xu1998hz/sescore_english_coco](https://huggingface.co/xu1998hz/sescore_english_coco) |
-
-
-Please contact repo maintainer Wenda Xu to add your models!
-
-## Limitations
-
-*SEScore* is trained on synthetic data in-domain.
-Although this data is generated to simulate user-relevant errors like deletion and spurious insertion, it may be limited in its ability to simulate humanlike errors.
-Model applicability is domain-specific (e.g., CoCo caption-trained model will be better for captioning than MT-trained).
-
-We are in the process of producing and benchmarking general language-level *SEScore* variants.
-
-## Citation
-
-If you find our work useful, please cite the following:
-
-```bibtex
-@inproceedings{xu-etal-2022-not,
- title={Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis},
- author={Xu, Wenda and Tuan, Yi-lin and Lu, Yujie and Saxon, Michael and Li, Lei and Wang, William Yang},
- booktitle ={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
- month={dec},
- year={2022},
- url={https://arxiv.org/abs/2210.05035}
-}
-```
-
-## Acknowledgements
-
-The work of the [COMET](https://github.com/Unbabel/COMET) maintainers at [Unbabel](https://duckduckgo.com/?t=ffab&q=unbabel&ia=web) has been instrumental in producing SEScore.
\ No newline at end of file
diff --git a/spaces/yaoshining/text-generation-webui/modules/llamacpp_model.py b/spaces/yaoshining/text-generation-webui/modules/llamacpp_model.py
deleted file mode 100644
index 10a852dbc356929f780d2dbed727e14f43eb5433..0000000000000000000000000000000000000000
--- a/spaces/yaoshining/text-generation-webui/modules/llamacpp_model.py
+++ /dev/null
@@ -1,101 +0,0 @@
-'''
-Based on
-https://github.com/abetlen/llama-cpp-python
-
-Documentation:
-https://abetlen.github.io/llama-cpp-python/
-'''
-
-import re
-from functools import partial
-
-from llama_cpp import Llama, LlamaCache, LogitsProcessorList
-
-from modules import shared
-from modules.callbacks import Iteratorize
-from modules.logging_colors import logger
-
-
-def ban_eos_logits_processor(eos_token, input_ids, logits):
- logits[eos_token] = -float('inf')
- return logits
-
-
-class LlamaCppModel:
- def __init__(self):
- self.initialized = False
-
- def __del__(self):
- self.model.__del__()
-
- @classmethod
- def from_pretrained(self, path):
- result = self()
- cache_capacity = 0
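-        # shared.args.cache_capacity accepts a raw byte count or strings such as "8GiB" / "512MiB";
-        # the letters are stripped and the number is scaled with decimal (1000-based) multipliers below.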
- if shared.args.cache_capacity is not None:
- if 'GiB' in shared.args.cache_capacity:
- cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000 * 1000
- elif 'MiB' in shared.args.cache_capacity:
- cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000
- else:
- cache_capacity = int(shared.args.cache_capacity)
-
- logger.info("Cache capacity is " + str(cache_capacity) + " bytes")
- params = {
- 'model_path': str(path),
- 'n_ctx': shared.args.n_ctx,
- 'seed': int(shared.args.llama_cpp_seed),
- 'n_threads': shared.args.threads or None,
- 'n_batch': shared.args.n_batch,
- 'use_mmap': not shared.args.no_mmap,
- 'use_mlock': shared.args.mlock,
- 'n_gpu_layers': shared.args.n_gpu_layers
- }
-
- result.model = Llama(**params)
- if cache_capacity > 0:
- result.model.set_cache(LlamaCache(capacity_bytes=cache_capacity))
-
- # This is ugly, but the model and the tokenizer are the same object in this library.
- return result, result
-
- def encode(self, string):
- if type(string) is str:
- string = string.encode()
-
- return self.model.tokenize(string)
-
- def generate(self, prompt, state, callback=None):
- prompt = prompt if type(prompt) is str else prompt.decode()
- completion_chunks = self.model.create_completion(
- prompt=prompt,
- max_tokens=state['max_new_tokens'],
- temperature=state['temperature'],
- top_p=state['top_p'],
- top_k=state['top_k'],
- repeat_penalty=state['repetition_penalty'],
- tfs_z=state['tfs'],
- mirostat_mode=int(state['mirostat_mode']),
- mirostat_tau=state['mirostat_tau'],
- mirostat_eta=state['mirostat_eta'],
- stream=True,
- logits_processor=LogitsProcessorList([
- partial(ban_eos_logits_processor, self.model.token_eos()),
- ]) if state['ban_eos_token'] else None,
- )
-
- output = ""
- for completion_chunk in completion_chunks:
- text = completion_chunk['choices'][0]['text']
- output += text
- if callback:
- callback(text)
-
- return output
-
- def generate_with_streaming(self, *args, **kwargs):
- with Iteratorize(self.generate, args, kwargs, callback=None) as generator:
- reply = ''
- for token in generator:
- reply += token
- yield reply
diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/hooks/useToast.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/hooks/useToast.tsx
deleted file mode 100644
index 5838e714a1e60540bc15fcdd88933f7100c46730..0000000000000000000000000000000000000000
--- a/spaces/yderre-aubay/midi-player-demo/src/main/hooks/useToast.tsx
+++ /dev/null
@@ -1,73 +0,0 @@
-import { createContext, FC, ReactNode, useContext, useState } from "react"
-
-export type ToastSeverity = "warning" | "error" | "info" | "success"
-
-export interface ToastMessage {
- message: string
- severity: ToastSeverity
- key: number
-}
-
-export const ToastContext = createContext<{
- addMessage: (message: ToastMessage) => void
-}>(null as never)
-
-interface ToastProps {
- message: string
- severity: ToastSeverity
- onExited: () => void
-}
-
-export const ToastProvider: FC<{
- children: ReactNode
-  component: FC<ToastProps>
-}> = ({ children, component: Toast }) => {
-  const [messages, setMessages] = useState<ToastMessage[]>([])
-
- const removeMessage = (key: number) =>
- setMessages((arr) => arr.filter((m) => m.key !== key))
-
-  return (
-    <ToastContext.Provider
-      value={{
-        addMessage: (message) => {
-          setMessages((arr) => [...arr, message])
-        },
-      }}
-    >
-      {children}
-      {messages.map((m) => (
-        <Toast
-          key={m.key}
-          message={m.message}
-          severity={m.severity}
-          onExited={() => removeMessage(m.key)}
-        />
-      ))}
-    </ToastContext.Provider>
-  )
-}
-
-export const useToast = () => {
- const { addMessage } = useContext(ToastContext)
-
- const show = (message: string, options: { severity: ToastSeverity }) => {
- addMessage({ message, ...options, key: new Date().getTime() })
- }
-
- return {
- show,
- info(message: string) {
- show(message, { severity: "info" })
- },
- success(message: string) {
- show(message, { severity: "success" })
- },
- warning(message: string) {
- show(message, { severity: "warning" })
- },
- error(message: string) {
- show(message, { severity: "error" })
- },
- }
-}
diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/box_utils.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/box_utils.py
deleted file mode 100644
index 83e13402e6f1520cb670b6c2e26f67860cb8b1f8..0000000000000000000000000000000000000000
--- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/box_utils.py
+++ /dev/null
@@ -1,276 +0,0 @@
-import torch
-import numpy as np
-
-
-def point_form(boxes):
- """ Convert prior_boxes to (xmin, ymin, xmax, ymax)
- representation for comparison to point form ground truth data.
- Args:
- boxes: (tensor) center-size default boxes from priorbox layers.
- Return:
- boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.
- """
- return torch.cat((boxes[:, :2] - boxes[:, 2:]/2, # xmin, ymin
- boxes[:, :2] + boxes[:, 2:]/2), 1) # xmax, ymax
-
-
-def center_size(boxes):
- """ Convert prior_boxes to (cx, cy, w, h)
- representation for comparison to center-size form ground truth data.
- Args:
- boxes: (tensor) point_form boxes
- Return:
-        boxes: (tensor) Converted (cx, cy, w, h) form of boxes.
-    """
-    return torch.cat(((boxes[:, 2:] + boxes[:, :2])/2,  # cx, cy
-                      boxes[:, 2:] - boxes[:, :2]), 1)  # w, h
-
-
-def intersect(box_a, box_b):
- """ We resize both tensors to [A,B,2] without new malloc:
- [A,2] -> [A,1,2] -> [A,B,2]
- [B,2] -> [1,B,2] -> [A,B,2]
- Then we compute the area of intersect between box_a and box_b.
- Args:
- box_a: (tensor) bounding boxes, Shape: [A,4].
- box_b: (tensor) bounding boxes, Shape: [B,4].
- Return:
- (tensor) intersection area, Shape: [A,B].
- """
- A = box_a.size(0)
- B = box_b.size(0)
- max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2),
- box_b[:, 2:].unsqueeze(0).expand(A, B, 2))
- min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2),
- box_b[:, :2].unsqueeze(0).expand(A, B, 2))
- inter = torch.clamp((max_xy - min_xy), min=0)
- return inter[:, :, 0] * inter[:, :, 1]
-
-
-def jaccard(box_a, box_b):
- """Compute the jaccard overlap of two sets of boxes. The jaccard overlap
- is simply the intersection over union of two boxes. Here we operate on
- ground truth boxes and default boxes.
- E.g.:
- A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
- Args:
- box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]
- box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]
- Return:
- jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]
- """
- inter = intersect(box_a, box_b)
- area_a = ((box_a[:, 2]-box_a[:, 0]) *
- (box_a[:, 3]-box_a[:, 1])).unsqueeze(1).expand_as(inter) # [A,B]
- area_b = ((box_b[:, 2]-box_b[:, 0]) *
- (box_b[:, 3]-box_b[:, 1])).unsqueeze(0).expand_as(inter) # [A,B]
- union = area_a + area_b - inter
- return inter / union # [A,B]
-
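-# Illustrative check (example values, not from this repo): for box_a = [[0., 0., 2., 2.]]
-# and box_b = [[1., 1., 3., 3.]], the intersection is 1 and the union is 4 + 4 - 1 = 7,
-# so jaccard(...) returns roughly 0.1429.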
-
-def matrix_iou(a, b):
- """
-    return iou of a and b, numpy version for data augmentation
- """
- lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])
- rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])
-
- area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)
- area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
- area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
- return area_i / (area_a[:, np.newaxis] + area_b - area_i)
-
-
-def matrix_iof(a, b):
- """
-    return iof of a and b, numpy version for data augmentation
- """
- lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])
- rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])
-
- area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)
- area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
- return area_i / np.maximum(area_a[:, np.newaxis], 1)
-
-
-def match(threshold, truths, priors, variances, labels, loc_t, conf_t, idx):
- """Match each prior box with the ground truth box of the highest jaccard
- overlap, encode the bounding boxes, then return the matched indices
- corresponding to both confidence and location preds.
- Args:
-        threshold: (float) The overlap threshold used when matching boxes.
- truths: (tensor) Ground truth boxes, Shape: [num_obj, num_priors].
- priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4].
- variances: (tensor) Variances corresponding to each prior coord,
- Shape: [num_priors, 4].
- labels: (tensor) All the class labels for the image, Shape: [num_obj].
-        loc_t: (tensor) Tensor to be filled w/ encoded location targets.
- conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds.
- idx: (int) current batch index
- Return:
- The matched indices corresponding to 1)location and 2)confidence preds.
- """
- # jaccard index
- overlaps = jaccard(
- truths,
- point_form(priors)
- )
- # (Bipartite Matching)
- # [1,num_objects] best prior for each ground truth
- best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True)
-
- # ignore hard gt
- valid_gt_idx = best_prior_overlap[:, 0] >= 0.2
- best_prior_idx_filter = best_prior_idx[valid_gt_idx, :]
- if best_prior_idx_filter.shape[0] <= 0:
- loc_t[idx] = 0
- conf_t[idx] = 0
- return
-
- # [1,num_priors] best ground truth for each prior
- best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True)
- best_truth_idx.squeeze_(0)
- best_truth_overlap.squeeze_(0)
- best_prior_idx.squeeze_(1)
- best_prior_idx_filter.squeeze_(1)
- best_prior_overlap.squeeze_(1)
- best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2) # ensure best prior
- # NoTODO refactor: index best_prior_idx with long tensor
- # ensure every gt matches with its prior of max overlap
- for j in range(best_prior_idx.size(0)):
- best_truth_idx[best_prior_idx[j]] = j
- matches = truths[best_truth_idx] # Shape: [num_priors,4]
- conf = labels[best_truth_idx] # Shape: [num_priors]
- conf[best_truth_overlap < threshold] = 0 # label as background
- loc = encode(matches, priors, variances)
- loc_t[idx] = loc # [num_priors,4] encoded offsets to learn
- conf_t[idx] = conf # [num_priors] top class label for each prior
-
-
-def encode(matched, priors, variances):
- """Encode the variances from the priorbox layers into the ground truth boxes
- we have matched (based on jaccard overlap) with the prior boxes.
- Args:
- matched: (tensor) Coords of ground truth for each prior in point-form
- Shape: [num_priors, 4].
- priors: (tensor) Prior boxes in center-offset form
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- encoded boxes (tensor), Shape: [num_priors, 4]
- """
-
- # dist b/t match center and prior's center
- g_cxcy = (matched[:, :2] + matched[:, 2:])/2 - priors[:, :2]
- # encode variance
- g_cxcy /= (variances[0] * priors[:, 2:])
- # match wh / prior wh
- g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]
- g_wh = torch.log(g_wh) / variances[1]
- # return target for smooth_l1_loss
- return torch.cat([g_cxcy, g_wh], 1) # [num_priors,4]
-
-
-# Adapted from https://github.com/Hakuyume/chainer-ssd
-def decode(loc, priors, variances):
- """Decode locations from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- loc (tensor): location predictions for loc layers,
- Shape: [num_priors,4]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded bounding box predictions
- """
-
- boxes = torch.cat((
- priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],
- priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)
- boxes[:, :2] -= boxes[:, 2:] / 2
- boxes[:, 2:] += boxes[:, :2]
- return boxes
-
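-# A small sanity-check sketch (example values and variances are assumptions, not from this
-# repo): decode() should invert encode() for matched boxes under the same priors/variances.
-#   priors  = torch.tensor([[0.5, 0.5, 0.2, 0.2]])      # center-size form (cx, cy, w, h)
-#   gt      = torch.tensor([[0.45, 0.45, 0.65, 0.60]])  # point form (xmin, ymin, xmax, ymax)
-#   offsets = encode(gt, priors, [0.1, 0.2])
-#   decode(offsets, priors, [0.1, 0.2])                 # ~= gt, up to floating point error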
-
-def log_sum_exp(x):
-    """Utility function for computing log_sum_exp in a numerically stable way.
-    This will be used to determine the unaveraged confidence loss across
-    all examples in a batch.
- Args:
- x (Variable(tensor)): conf_preds from conf layers
- """
- x_max = x.data.max()
- return torch.log(torch.sum(torch.exp(x-x_max), 1, keepdim=True)) + x_max
-
-
-# Original author: Francisco Massa:
-# https://github.com/fmassa/object-detection.torch
-# Ported to PyTorch by Max deGroot (02/01/2017)
-def nms(boxes, scores, overlap=0.5, top_k=200):
- """Apply non-maximum suppression at test time to avoid detecting too many
- overlapping bounding boxes for a given object.
- Args:
- boxes: (tensor) The location preds for the img, Shape: [num_priors,4].
-        scores: (tensor) The class prediction scores for the img, Shape: [num_priors].
- overlap: (float) The overlap thresh for suppressing unnecessary boxes.
- top_k: (int) The Maximum number of box preds to consider.
- Return:
- The indices of the kept boxes with respect to num_priors.
- """
-
- keep = torch.Tensor(scores.size(0)).fill_(0).long()
- if boxes.numel() == 0:
- return keep
- x1 = boxes[:, 0]
- y1 = boxes[:, 1]
- x2 = boxes[:, 2]
- y2 = boxes[:, 3]
- area = torch.mul(x2 - x1, y2 - y1)
- v, idx = scores.sort(0) # sort in ascending order
- # I = I[v >= 0.01]
- idx = idx[-top_k:] # indices of the top-k largest vals
- xx1 = boxes.new()
- yy1 = boxes.new()
- xx2 = boxes.new()
- yy2 = boxes.new()
- w = boxes.new()
- h = boxes.new()
-
- # keep = torch.Tensor()
- count = 0
- while idx.numel() > 0:
- i = idx[-1] # index of current largest val
- # keep.append(i)
- keep[count] = i
- count += 1
- if idx.size(0) == 1:
- break
- idx = idx[:-1] # remove kept element from view
- # load bboxes of next highest vals
- torch.index_select(x1, 0, idx, out=xx1)
- torch.index_select(y1, 0, idx, out=yy1)
- torch.index_select(x2, 0, idx, out=xx2)
- torch.index_select(y2, 0, idx, out=yy2)
- # store element-wise max with next highest score
- xx1 = torch.clamp(xx1, min=x1[i])
- yy1 = torch.clamp(yy1, min=y1[i])
- xx2 = torch.clamp(xx2, max=x2[i])
- yy2 = torch.clamp(yy2, max=y2[i])
- w.resize_as_(xx2)
- h.resize_as_(yy2)
- w = xx2 - xx1
- h = yy2 - yy1
- # check sizes of xx1 and xx2.. after each iteration
- w = torch.clamp(w, min=0.0)
- h = torch.clamp(h, min=0.0)
- inter = w*h
- # IoU = i / (area(a) + area(b) - i)
-        rem_areas = torch.index_select(area, 0, idx)  # load remaining areas
- union = (rem_areas - inter) + area[i]
- IoU = inter/union # store result in iou
- # keep only elements with an IoU <= overlap
- idx = idx[IoU.le(overlap)]
- return keep, count
-
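-# Typical (illustrative) usage after decoding the box predictions for one image, where
-# `decoded_boxes` and `scores` are placeholders for this sketch:
-#   ids, count = nms(decoded_boxes, scores, overlap=0.3, top_k=400)
-#   kept_boxes = decoded_boxes[ids[:count]]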
-
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_flax_encoder_decoder.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_flax_encoder_decoder.py
deleted file mode 100644
index 3d9679f26a1c33ba2969bc4936b416a1bfea4420..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_flax_encoder_decoder.py
+++ /dev/null
@@ -1,902 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Classes to support Flax Encoder-Decoder architectures"""
-
-
-import os
-from typing import Optional, Tuple, Union
-
-import flax.linen as nn
-import jax
-import jax.numpy as jnp
-from flax.core.frozen_dict import FrozenDict, freeze, unfreeze
-from flax.traverse_util import flatten_dict, unflatten_dict
-from jax import lax
-from jax.random import PRNGKey
-
-from ...modeling_flax_outputs import FlaxBaseModelOutput, FlaxCausalLMOutputWithCrossAttentions, FlaxSeq2SeqLMOutput
-from ...modeling_flax_utils import FlaxPreTrainedModel
-from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
-from ..auto.configuration_auto import AutoConfig
-from ..auto.modeling_flax_auto import FlaxAutoModel, FlaxAutoModelForCausalLM
-from .configuration_encoder_decoder import EncoderDecoderConfig
-
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "EncoderDecoderConfig"
-
-ENCODER_DECODER_START_DOCSTRING = r"""
- This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
- encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via
- [`~AutoModel.from_pretrained`] function and the decoder is loaded via [`~AutoModelForCausalLM.from_pretrained`]
- function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
- generative task, like summarization.
-
- The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
- tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation
- Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi
- Zhou, Wei Li, Peter J. Liu.
-
- After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models
- (see the examples for more information).
-
- This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the
-    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a Flax Linen
- [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a
- regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
-
- Parameters:
- config ([`EncoderDecoderConfig`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights.
- dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`):
- The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and
- `jax.numpy.bfloat16` (on TPUs).
-
- This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
- specified all the computation will be performed with the given `dtype`.
-
- **Note that this only specifies the dtype of the computation and does not influence the dtype of model
- parameters.**
-
- If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and
- [`~FlaxPreTrainedModel.to_bf16`].
-"""
-
-ENCODER_DECODER_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`jnp.ndarray` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- decoder_input_ids (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*):
- Indices of decoder input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are decoder input IDs?](../glossary#decoder-input-ids)
-
- For sequence to sequence training, `decoder_input_ids` should be provided. `decoder_input_ids` should be
- created outside of the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id`
- and prepending them with the `decoder_start_token_id`.
- decoder_attention_mask (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*):
- Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
- be used by default.
- position_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.encoder.max_position_embeddings - 1]`.
- decoder_position_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
- range `[0, config.decoder.max_position_embeddings - 1]`.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- If set to `True`, the model will return a [`~utils.FlaxSeq2SeqLMOutput`] instead of a plain tuple.
-"""
-
-ENCODER_DECODER_ENCODE_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`jnp.ndarray` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- position_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.encoder.max_position_embeddings - 1]`.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- If set to `True`, the model will return a [`~utils.FlaxBaseModelOutput`] instead of a plain tuple.
-"""
-
-ENCODER_DECODER_DECODE_INPUTS_DOCSTRING = r"""
- Args:
- decoder_input_ids (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*):
- Indices of decoder input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are decoder input IDs?](../glossary#decoder-input-ids)
-
- If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
- `past_key_values`).
-
- For sequence to sequence training, `decoder_input_ids` should be provided. `decoder_input_ids` should be
- created outside of the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id`
- and prepending them with the `decoder_start_token_id`.
- encoder_outputs (`tuple(tuple(jnp.ndarray)`):
- Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`)
- `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of
- hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- encoder_attention_mask (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- decoder_attention_mask (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*):
- Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
- be used by default.
- decoder_position_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
- range `[0, config.decoder.max_position_embeddings - 1]`.
- past_key_values (`Dict[str, np.ndarray]`, *optional*, returned by `init_cache` or when passing previous `past_key_values`):
- Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
- auto-regressive decoding. Pre-computed key and value hidden-states are of shape *[batch_size, max_length]*.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- If set to `True`, the model will return a [`~utils.FlaxCausalLMOutputWithCrossAttentions`] instead of a
- plain tuple.
-"""
-
-
-class FlaxEncoderDecoderModule(nn.Module):
- config: EncoderDecoderConfig
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- encoder_config = self.config.encoder
- decoder_config = self.config.decoder
-
- # Copied from `modeling_hybrid_clip.py` with modifications.
- from ...models.auto.modeling_flax_auto import FLAX_MODEL_FOR_CAUSAL_LM_MAPPING, FLAX_MODEL_MAPPING
-
- encoder_module = FLAX_MODEL_MAPPING[encoder_config.__class__].module_class
- decoder_module = FLAX_MODEL_FOR_CAUSAL_LM_MAPPING[decoder_config.__class__].module_class
-
- self.encoder = encoder_module(encoder_config, dtype=self.dtype)
- self.decoder = decoder_module(decoder_config, dtype=self.dtype)
-
- # encoder outputs might need to be projected to different dimension for decoder
- if (
- self.encoder.config.hidden_size != self.decoder.config.hidden_size
- and self.decoder.config.cross_attention_hidden_size is None
- ):
- self.enc_to_dec_proj = nn.Dense(
- self.decoder.config.hidden_size,
- kernel_init=jax.nn.initializers.normal(self.decoder.config.initializer_range),
- dtype=self.dtype,
- )
- else:
- self.enc_to_dec_proj = None
-
- def _get_encoder_module(self):
- return self.encoder
-
- def _get_projection_module(self):
- return self.enc_to_dec_proj
-
- def _get_decoder_module(self):
- return self.decoder
-
- def __call__(
- self,
- input_ids,
- attention_mask,
- decoder_input_ids,
- decoder_attention_mask,
- position_ids,
- decoder_position_ids,
- output_attentions: bool = False,
- output_hidden_states: bool = False,
- return_dict: bool = True,
- deterministic: bool = True,
- ):
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- deterministic=deterministic,
- )
-
- encoder_hidden_states = encoder_outputs[0]
-
- # optionally project encoder_hidden_states
- if self.enc_to_dec_proj is not None:
- encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
-
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- position_ids=decoder_position_ids,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- deterministic=deterministic,
- )
-
- if not return_dict:
- return decoder_outputs + encoder_outputs
-
- return FlaxSeq2SeqLMOutput(
- logits=decoder_outputs.logits,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings(ENCODER_DECODER_START_DOCSTRING)
-class FlaxEncoderDecoderModel(FlaxPreTrainedModel):
- r"""
- [`FlaxEncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with
- the module (flax.nn.Module) of one of the base model classes of the library as encoder module and another one as
- decoder module when created with the :meth*~transformers.FlaxAutoModel.from_pretrained* class method for the
- encoder and :meth*~transformers.FlaxAutoModelForCausalLM.from_pretrained* class method for the decoder.
- """
- config_class = EncoderDecoderConfig
- base_model_prefix = "encoder_decoder"
- module_class = FlaxEncoderDecoderModule
-
- def __init__(
- self,
- config: EncoderDecoderConfig,
- input_shape: Optional[Tuple] = None,
- seed: int = 0,
- dtype: jnp.dtype = jnp.float32,
- _do_init: bool = True,
- **kwargs,
- ):
- if input_shape is None:
- input_shape = ((1, 1), (1, 1))
-
- if not _do_init:
- raise ValueError(
- "`FlaxEncoderDecoderModel` cannot be created without initializing, `_do_init` must be `True`."
- )
-
- if config.decoder.cross_attention_hidden_size is not None:
- if config.decoder.cross_attention_hidden_size != config.encoder.hidden_size:
- raise ValueError(
- "If `cross_attention_hidden_size` is specified in the decoder's configuration, it has to be equal"
- f" to the encoder's `hidden_size`. Got {config.decoder.cross_attention_hidden_size} for"
- f" `config.decoder.cross_attention_hidden_size` and {config.encoder.hidden_size} for"
- " `config.encoder.hidden_size`."
- )
-
- module = self.module_class(config=config, dtype=dtype, **kwargs)
- super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init)
-
- def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict:
- encoder_input_shape, decoder_input_shape = input_shape
-
- # init input tensors
- input_ids = jnp.zeros(encoder_input_shape, dtype="i4")
- attention_mask = jnp.ones_like(input_ids)
- decoder_input_ids = jnp.zeros(decoder_input_shape, dtype="i4")
- decoder_attention_mask = jnp.ones_like(decoder_input_ids)
-
- batch_size, sequence_length = input_ids.shape
- position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
-
- decoder_batch_size, decoder_sequence_length = decoder_input_ids.shape
- if not decoder_batch_size == batch_size:
- raise ValueError(
- f"The inputs of encoder and decoder should have the same batch size, but got {batch_size} for encoder"
- f" and {decoder_batch_size} for decoder."
- )
- decoder_position_ids = jnp.broadcast_to(
- jnp.arange(decoder_sequence_length)[None, :], (decoder_batch_size, decoder_sequence_length)
- )
-
- params_rng, dropout_rng = jax.random.split(rng)
- rngs = {"params": params_rng, "dropout": dropout_rng}
-
- random_params = self.module.init(
- rngs,
- input_ids,
- attention_mask,
- decoder_input_ids,
- decoder_attention_mask,
- position_ids,
- decoder_position_ids,
- )["params"]
-
- if params is not None:
- random_params = flatten_dict(unfreeze(random_params))
- params = flatten_dict(unfreeze(params))
- for missing_key in self._missing_keys:
- params[missing_key] = random_params[missing_key]
- self._missing_keys = set()
- return freeze(unflatten_dict(params))
- else:
- return random_params
-
- def init_cache(self, batch_size, max_length, encoder_outputs):
- r"""
- Args:
- batch_size (`int`):
- batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
- max_length (`int`):
- maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
- cache.
- encoder_outputs (`Union[FlaxBaseModelOutput, tuple(tuple(jnp.ndarray)]`):
- `encoder_outputs` consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*:
- `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*)
- is a sequence of hidden-states at the output of the last layer of the encoder. Used in the
- cross-attention of the decoder.
- """
- # init input variables to retrieve cache
- decoder_input_ids = jnp.ones((batch_size, max_length), dtype="i4")
- decoder_attention_mask = jnp.ones_like(decoder_input_ids)
- decoder_position_ids = jnp.broadcast_to(
- jnp.arange(jnp.atleast_2d(decoder_input_ids).shape[-1]), decoder_input_ids.shape
- )
-
- def _decoder_forward(module, decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs):
- decoder_module = module._get_decoder_module()
- return decoder_module(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- position_ids=decoder_position_ids,
- **kwargs,
- )
-
- init_variables = self.module.init(
- jax.random.PRNGKey(0),
- decoder_input_ids=decoder_input_ids,
- decoder_attention_mask=decoder_attention_mask,
- decoder_position_ids=decoder_position_ids,
- encoder_hidden_states=encoder_outputs[0],
- init_cache=True,
- method=_decoder_forward, # we only need to call the decoder to init the cache
- )
- return unfreeze(init_variables["cache"])
-
- @add_start_docstrings(ENCODER_DECODER_ENCODE_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=FlaxBaseModelOutput, config_class=_CONFIG_FOR_DOC)
- def encode(
- self,
- input_ids: jnp.ndarray,
- attention_mask: Optional[jnp.ndarray] = None,
- position_ids: Optional[jnp.ndarray] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- train: bool = False,
- params: dict = None,
- dropout_rng: PRNGKey = None,
- ):
- r"""
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import FlaxEncoderDecoderModel, BertTokenizer
-
- >>> # initialize a bert2gpt2 from pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized
- >>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
-
- >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
-
- >>> text = "My friends are cool but they eat too many carbs."
- >>> input_ids = tokenizer.encode(text, return_tensors="np")
- >>> encoder_outputs = model.encode(input_ids)
- ```"""
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.return_dict
-
- if attention_mask is None:
- attention_mask = jnp.ones_like(input_ids)
- if position_ids is None:
- batch_size, sequence_length = input_ids.shape
- position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
-
- # Handle any PRNG if needed
- rngs = {}
- if dropout_rng is not None:
- rngs["dropout"] = dropout_rng
-
- def _encoder_forward(module, input_ids, attention_mask, position_ids, **kwargs):
- encode_module = module._get_encoder_module()
- return encode_module(input_ids, attention_mask, position_ids, **kwargs)
-
- outputs = self.module.apply(
- {"params": params or self.params},
- input_ids=jnp.array(input_ids, dtype="i4"),
- attention_mask=jnp.array(attention_mask, dtype="i4"),
- position_ids=jnp.array(position_ids, dtype="i4"),
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- deterministic=not train,
- rngs=rngs,
- method=_encoder_forward,
- )
-
- if return_dict:
- outputs = FlaxBaseModelOutput(
- last_hidden_state=outputs.last_hidden_state,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
- return outputs
-
- @add_start_docstrings(ENCODER_DECODER_DECODE_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=FlaxCausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC)
- def decode(
- self,
- decoder_input_ids,
- encoder_outputs,
- encoder_attention_mask: Optional[jnp.ndarray] = None,
- decoder_attention_mask: Optional[jnp.ndarray] = None,
- decoder_position_ids: Optional[jnp.ndarray] = None,
- past_key_values: dict = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- train: bool = False,
- params: dict = None,
- dropout_rng: PRNGKey = None,
- ):
- r"""
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import FlaxEncoderDecoderModel, BertTokenizer
- >>> import jax.numpy as jnp
-
- >>> # initialize a bert2gpt2 from pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized
- >>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
-
- >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
-
- >>> text = "My friends are cool but they eat too many carbs."
- >>> input_ids = tokenizer.encode(text, max_length=1024, return_tensors="np")
- >>> encoder_outputs = model.encode(input_ids)
-
- >>> decoder_start_token_id = model.config.decoder.bos_token_id
- >>> decoder_input_ids = jnp.ones((input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
-
- >>> outputs = model.decode(decoder_input_ids, encoder_outputs)
- >>> logits = outputs.logits
- ```"""
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.return_dict
-
- encoder_hidden_states = encoder_outputs[0]
- if encoder_attention_mask is None:
- batch_size, sequence_length = encoder_hidden_states.shape[:2]
- encoder_attention_mask = jnp.ones((batch_size, sequence_length))
-
- batch_size, sequence_length = decoder_input_ids.shape
- if decoder_attention_mask is None:
- decoder_attention_mask = jnp.ones((batch_size, sequence_length))
-
- if decoder_position_ids is None:
- if past_key_values is not None:
- raise ValueError("Make sure to provide `decoder_position_ids` when passing `past_key_values`.")
-
- decoder_position_ids = jnp.broadcast_to(
- jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)
- )
-
- # Handle any PRNG if needed
- rngs = {}
- if dropout_rng is not None:
- rngs["dropout"] = dropout_rng
-
- inputs = {"params": params or self.params}
-
- # if past_key_values are passed then cache is already initialized a private flag init_cache has to be
- # passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that
- # it can be changed by FlaxBartAttention module
- if past_key_values:
- inputs["cache"] = past_key_values
- mutable = ["cache"]
- else:
- mutable = False
-
- def _decoder_forward(
- module, decoder_input_ids, decoder_attention_mask, decoder_position_ids, encoder_hidden_states, **kwargs
- ):
- projection_module = module._get_projection_module()
- decoder_module = module._get_decoder_module()
-
- # optionally project encoder_hidden_states
- if projection_module is not None:
- encoder_hidden_states = projection_module(encoder_hidden_states)
-
- return decoder_module(
- decoder_input_ids,
- decoder_attention_mask,
- decoder_position_ids,
- encoder_hidden_states=encoder_hidden_states,
- **kwargs,
- )
-
- outputs = self.module.apply(
- inputs,
- decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"),
- decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"),
- decoder_position_ids=jnp.array(decoder_position_ids, dtype="i4"),
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"),
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- deterministic=not train,
- rngs=rngs,
- mutable=mutable,
- method=_decoder_forward,
- )
-
- # add updated cache to model output
- if past_key_values is not None and return_dict:
- outputs, past = outputs
- outputs["past_key_values"] = unfreeze(past["cache"])
- return outputs
- elif past_key_values is not None and not return_dict:
- outputs, past = outputs
- outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:]
-
- return outputs
-
- @add_start_docstrings_to_model_forward(ENCODER_DECODER_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=FlaxSeq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
- def __call__(
- self,
- input_ids: jnp.ndarray,
- attention_mask: Optional[jnp.ndarray] = None,
- decoder_input_ids: Optional[jnp.ndarray] = None,
- decoder_attention_mask: Optional[jnp.ndarray] = None,
- position_ids: Optional[jnp.ndarray] = None,
- decoder_position_ids: Optional[jnp.ndarray] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- train: bool = False,
- params: dict = None,
- dropout_rng: PRNGKey = None,
- ):
- r"""
- Returns:
-
- Examples:
-
- ```python
- >>> from transformers import FlaxEncoderDecoderModel, BertTokenizer, GPT2Tokenizer
-
- >>> # load a fine-tuned bert2gpt2 model
- >>> model = FlaxEncoderDecoderModel.from_pretrained("patrickvonplaten/bert2gpt2-cnn_dailymail-fp16")
- >>> # load input & output tokenizer
- >>> tokenizer_input = BertTokenizer.from_pretrained("bert-base-cased")
- >>> tokenizer_output = GPT2Tokenizer.from_pretrained("gpt2")
-
- >>> article = '''Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members
- >>> singing a racist chant. SAE's national chapter suspended the students,
- >>> but University of Oklahoma President David Boren took it a step further,
- >>> saying the university's affiliation with the fraternity is permanently done.'''
-
- >>> input_ids = tokenizer_input(article, add_special_tokens=True, return_tensors="np").input_ids
-
- >>> # use GPT2's eos_token as the pad as well as eos token
- >>> model.config.eos_token_id = model.config.decoder.eos_token_id
- >>> model.config.pad_token_id = model.config.eos_token_id
-
- >>> sequences = model.generate(input_ids, num_beams=4, max_length=12).sequences
-
- >>> summary = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)[0]
- >>> assert summary == "SAS Alpha Epsilon suspended Sigma Alpha Epsilon members"
- ```
- """
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.return_dict
-
- # prepare encoder inputs
- if attention_mask is None:
- attention_mask = jnp.ones_like(input_ids)
- if position_ids is None:
- batch_size, sequence_length = input_ids.shape
- position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
-
- # prepare decoder inputs
- if decoder_input_ids is None:
- raise ValueError(
-                "`decoder_input_ids` cannot be `None`. For sequence to sequence training, `decoder_input_ids` must"
-                " be specified as an input argument."
- )
- if decoder_attention_mask is None:
- decoder_attention_mask = jnp.ones_like(decoder_input_ids)
- if decoder_position_ids is None:
- batch_size, sequence_length = decoder_input_ids.shape
- decoder_position_ids = jnp.broadcast_to(
- jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)
- )
-
- # Handle any PRNG if needed
- rngs = {"dropout": dropout_rng} if dropout_rng is not None else {}
-
- return self.module.apply(
- {"params": params or self.params},
- input_ids=jnp.array(input_ids, dtype="i4"),
- attention_mask=jnp.array(attention_mask, dtype="i4"),
- decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"),
- decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"),
- position_ids=jnp.array(position_ids, dtype="i4"),
- decoder_position_ids=jnp.array(decoder_position_ids, dtype="i4"),
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- deterministic=not train,
- rngs=rngs,
- )
-
- def prepare_inputs_for_generation(
- self,
- decoder_input_ids,
- max_length,
- attention_mask: Optional[jax.Array] = None,
- decoder_attention_mask: Optional[jax.Array] = None,
- encoder_outputs=None,
- **kwargs,
- ):
- # initializing the cache
- batch_size, seq_length = decoder_input_ids.shape
-
- past_key_values = self.init_cache(batch_size, max_length, encoder_outputs)
- # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length.
- # But since the decoder uses a causal mask, those positions are masked anyways.
- # Thus we can create a single static attention_mask here, which is more efficient for compilation
- extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
- if decoder_attention_mask is not None:
- decoder_position_ids = decoder_attention_mask.cumsum(axis=-1) - 1
- extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, decoder_attention_mask, (0, 0))
- else:
- decoder_position_ids = jnp.broadcast_to(
- jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length)
- )
-
- return {
- "past_key_values": past_key_values,
- "encoder_outputs": encoder_outputs,
- "encoder_attention_mask": attention_mask,
- "decoder_attention_mask": extended_attention_mask,
- "decoder_position_ids": decoder_position_ids,
- }
-
- def update_inputs_for_generation(self, model_outputs, model_kwargs):
- model_kwargs["past_key_values"] = model_outputs.past_key_values
- model_kwargs["decoder_position_ids"] = model_kwargs["decoder_position_ids"][:, -1:] + 1
- return model_kwargs
-
- @classmethod
- def from_encoder_decoder_pretrained(
- cls,
- encoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
- decoder_pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
- *model_args,
- **kwargs,
- ) -> FlaxPreTrainedModel:
- r"""
- Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
- checkpoints.
-
- Params:
- encoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*):
- Information necessary to initiate the encoder. Can be either:
-
- - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
- user or organization name, like `dbmdz/bert-base-german-cased`.
- - A path to a *directory* containing model weights saved using
- [`~FlaxPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
-
- decoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*, defaults to `None`):
- Information necessary to initiate the decoder. Can be either:
-
- - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
- user or organization name, like `dbmdz/bert-base-german-cased`.
- - A path to a *directory* containing model weights saved using
- [`~FlaxPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
-
- model_args (remaining positional arguments, *optional*):
-                All remaining positional arguments will be passed to the underlying model's `__init__` method.
-
- kwargs (remaining dictionary of keyword arguments, *optional*):
- Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
- `output_attentions=True`).
-
- - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
- - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
- - To update the parent model configuration, do not use a prefix for each configuration parameter.
-
- Behaves differently depending on whether a `config` is provided or automatically loaded.
-
- Example:
-
- ```python
- >>> from transformers import FlaxEncoderDecoderModel
-
- >>> # initialize a bert2gpt2 from pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized
- >>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
- >>> # saving model after fine-tuning
- >>> model.save_pretrained("./bert2gpt2")
- >>> # load fine-tuned model
- >>> model = FlaxEncoderDecoderModel.from_pretrained("./bert2gpt2")
- ```"""
-
- kwargs_encoder = {
- argument[len("encoder_") :]: value for argument, value in kwargs.items() if argument.startswith("encoder_")
- }
-
- kwargs_decoder = {
- argument[len("decoder_") :]: value for argument, value in kwargs.items() if argument.startswith("decoder_")
- }
-
- # remove encoder, decoder kwargs from kwargs
- for key in kwargs_encoder.keys():
- del kwargs["encoder_" + key]
- for key in kwargs_decoder.keys():
- del kwargs["decoder_" + key]
-
- # Load and initialize the encoder and decoder
- # The distinction between encoder and decoder at the model level is made
- # by the value of the flag `is_decoder` that we need to set correctly.
- encoder = kwargs_encoder.pop("model", None)
- if encoder is None:
- if encoder_pretrained_model_name_or_path is None:
- raise ValueError(
- "If `encoder_model` is not defined as an argument, a `encoder_pretrained_model_name_or_path` has "
- "to be defined."
- )
-
- if "config" not in kwargs_encoder:
- encoder_config, kwargs_encoder = AutoConfig.from_pretrained(
- encoder_pretrained_model_name_or_path, **kwargs_encoder, return_unused_kwargs=True
- )
- if encoder_config.is_decoder is True or encoder_config.add_cross_attention is True:
- logger.info(
-                        f"Initializing {encoder_pretrained_model_name_or_path} as an encoder model "
-                        "from a decoder model. Cross-attention and causal mask are disabled."
- )
- encoder_config.is_decoder = False
- encoder_config.add_cross_attention = False
-
- kwargs_encoder["config"] = encoder_config
-
- encoder = FlaxAutoModel.from_pretrained(
- encoder_pretrained_model_name_or_path, *model_args, **kwargs_encoder
- )
-
- decoder = kwargs_decoder.pop("model", None)
- if decoder is None:
- if decoder_pretrained_model_name_or_path is None:
- raise ValueError(
- "If `decoder_model` is not defined as an argument, a `decoder_pretrained_model_name_or_path` has "
- "to be defined."
- )
-
- if "config" not in kwargs_decoder:
- decoder_config, kwargs_decoder = AutoConfig.from_pretrained(
- decoder_pretrained_model_name_or_path, **kwargs_decoder, return_unused_kwargs=True
- )
- if decoder_config.is_decoder is False or decoder_config.add_cross_attention is False:
- logger.info(
- f"Initializing {decoder_pretrained_model_name_or_path} as a decoder model. Cross attention"
- f" layers are added to {decoder_pretrained_model_name_or_path} and randomly initialized if"
- f" {decoder_pretrained_model_name_or_path}'s architecture allows for cross attention layers."
- )
- decoder_config.is_decoder = True
- decoder_config.add_cross_attention = True
-
- kwargs_decoder["config"] = decoder_config
-
- if kwargs_decoder["config"].is_decoder is False or kwargs_decoder["config"].add_cross_attention is False:
- logger.warning(
- f"Decoder model {decoder_pretrained_model_name_or_path} is not initialized as a decoder. "
- f"In order to initialize {decoder_pretrained_model_name_or_path} as a decoder, "
- "make sure that the attributes `is_decoder` and `add_cross_attention` of `decoder_config` "
- "passed to `.from_encoder_decoder_pretrained(...)` are set to `True` or do not pass a "
- "`decoder_config` to `.from_encoder_decoder_pretrained(...)`"
- )
-
- decoder = FlaxAutoModelForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder)
-
- # instantiate config with corresponding kwargs
- dtype = kwargs.pop("dtype", jnp.float32)
- config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder.config, decoder.config, **kwargs)
-
- # init model
- model = cls(config, dtype=dtype)
- model.params["encoder"] = encoder.params
- model.params["decoder"] = decoder.params
-
- return model
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/musicgen/configuration_musicgen.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/musicgen/configuration_musicgen.py
deleted file mode 100644
index 03371e10446c993175ef1604c87c48e0466a7ede..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/musicgen/configuration_musicgen.py
+++ /dev/null
@@ -1,233 +0,0 @@
-# coding=utf-8
-# Copyright 2023 Meta AI and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" MusicGen model configuration"""
-
-from ...configuration_utils import PretrainedConfig
-from ...utils import logging
-from ..auto.configuration_auto import AutoConfig
-
-
-logger = logging.get_logger(__name__)
-
-MUSICGEN_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "facebook/musicgen-small": "https://huggingface.co/facebook/musicgen-small/resolve/main/config.json",
- # See all Musicgen models at https://huggingface.co/models?filter=musicgen
-}
-
-
-class MusicgenDecoderConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of an [`MusicgenDecoder`]. It is used to instantiate a
- MusicGen decoder according to the specified arguments, defining the model architecture. Instantiating a
- configuration with the defaults will yield a similar configuration to that of the MusicGen
- [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
-
- Args:
- vocab_size (`int`, *optional*, defaults to 2048):
- Vocabulary size of the MusicgenDecoder model. Defines the number of different tokens that can be
- represented by the `inputs_ids` passed when calling [`MusicgenDecoder`].
- hidden_size (`int`, *optional*, defaults to 1024):
- Dimensionality of the layers and the pooler layer.
- num_hidden_layers (`int`, *optional*, defaults to 24):
- Number of decoder layers.
- num_attention_heads (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer block.
- ffn_dim (`int`, *optional*, defaults to 4096):
- Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer block.
- activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
- The non-linear activation function (function or string) in the decoder and pooler. If string, `"gelu"`,
- `"relu"`, `"silu"` and `"gelu_new"` are supported.
- dropout (`float`, *optional*, defaults to 0.1):
- The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- activation_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for activations inside the fully connected layer.
- max_position_embeddings (`int`, *optional*, defaults to 2048):
- The maximum sequence length that this model might ever be used with. Typically, set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- initializer_factor (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layerdrop (`float`, *optional*, defaults to 0.0):
- The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
- for more details.
- scale_embedding (`bool`, *optional*, defaults to `False`):
- Scale embeddings by dividing by sqrt(hidden_size).
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether the model should return the last key/values attentions (not used by all models)
- num_codebooks (`int`, *optional*, defaults to 4):
- The number of parallel codebooks forwarded to the model.
- tie_word_embeddings (`bool`, *optional*, defaults to `False`):
- Whether input and output word embeddings should be tied.
- """
- model_type = "musicgen_decoder"
- keys_to_ignore_at_inference = ["past_key_values"]
-
- def __init__(
- self,
- vocab_size=2048,
- max_position_embeddings=2048,
- num_hidden_layers=24,
- ffn_dim=4096,
- num_attention_heads=16,
- layerdrop=0.0,
- use_cache=True,
- activation_function="gelu",
- hidden_size=1024,
- dropout=0.1,
- attention_dropout=0.0,
- activation_dropout=0.0,
- initializer_factor=0.02,
- scale_embedding=False,
- num_codebooks=4,
- pad_token_id=2048,
- bos_token_id=2048,
- eos_token_id=None,
- tie_word_embeddings=False,
- **kwargs,
- ):
- self.vocab_size = vocab_size
- self.max_position_embeddings = max_position_embeddings
- self.hidden_size = hidden_size
- self.ffn_dim = ffn_dim
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.dropout = dropout
- self.attention_dropout = attention_dropout
- self.activation_dropout = activation_dropout
- self.activation_function = activation_function
- self.initializer_factor = initializer_factor
- self.layerdrop = layerdrop
- self.use_cache = use_cache
- self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True
- self.num_codebooks = num_codebooks
- super().__init__(
- pad_token_id=pad_token_id,
- bos_token_id=bos_token_id,
- eos_token_id=eos_token_id,
- tie_word_embeddings=tie_word_embeddings,
- **kwargs,
- )
-
-
-class MusicgenConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`MusicgenModel`]. It is used to instantiate a
- MusicGen model according to the specified arguments, defining the text encoder, audio encoder and MusicGen decoder
- configs.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- kwargs (*optional*):
- Dictionary of keyword arguments. Notably:
-
- - **text_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that
- defines the text encoder config.
- - **audio_encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that
- defines the audio encoder config.
- - **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines
- the decoder config.
-
- Example:
-
- ```python
- >>> from transformers import (
- ... MusicgenConfig,
- ... MusicgenDecoderConfig,
- ... T5Config,
- ... EncodecConfig,
- ... MusicgenForConditionalGeneration,
- ... )
-
- >>> # Initializing text encoder, audio encoder, and decoder model configurations
- >>> text_encoder_config = T5Config()
- >>> audio_encoder_config = EncodecConfig()
- >>> decoder_config = MusicgenDecoderConfig()
-
- >>> configuration = MusicgenConfig.from_sub_models_config(
- ... text_encoder_config, audio_encoder_config, decoder_config
- ... )
-
- >>> # Initializing a MusicgenForConditionalGeneration (with random weights) from the facebook/musicgen-small style configuration
- >>> model = MusicgenForConditionalGeneration(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- >>> config_text_encoder = model.config.text_encoder
- >>> config_audio_encoder = model.config.audio_encoder
- >>> config_decoder = model.config.decoder
-
- >>> # Saving the model, including its configuration
- >>> model.save_pretrained("musicgen-model")
-
- >>> # loading model and config from pretrained folder
- >>> musicgen_config = MusicgenConfig.from_pretrained("musicgen-model")
- >>> model = MusicgenForConditionalGeneration.from_pretrained("musicgen-model", config=musicgen_config)
- ```"""
-
- model_type = "musicgen"
- is_composition = True
-
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
- if "text_encoder" not in kwargs or "audio_encoder" not in kwargs or "decoder" not in kwargs:
- raise ValueError("Config has to be initialized with text_encoder, audio_encoder and decoder config")
-
- text_encoder_config = kwargs.pop("text_encoder")
- text_encoder_model_type = text_encoder_config.pop("model_type")
-
- audio_encoder_config = kwargs.pop("audio_encoder")
- audio_encoder_model_type = audio_encoder_config.pop("model_type")
-
- decoder_config = kwargs.pop("decoder")
-
- self.text_encoder = AutoConfig.for_model(text_encoder_model_type, **text_encoder_config)
- self.audio_encoder = AutoConfig.for_model(audio_encoder_model_type, **audio_encoder_config)
- self.decoder = MusicgenDecoderConfig(**decoder_config)
- self.is_encoder_decoder = True
-
- @classmethod
- def from_sub_models_config(
- cls,
- text_encoder_config: PretrainedConfig,
- audio_encoder_config: PretrainedConfig,
- decoder_config: MusicgenDecoderConfig,
- **kwargs,
- ):
- r"""
- Instantiate a [`MusicgenConfig`] (or a derived class) from text encoder, audio encoder and decoder
- configurations.
-
- Returns:
- [`MusicgenConfig`]: An instance of a configuration object
- """
-
- return cls(
- text_encoder=text_encoder_config.to_dict(),
- audio_encoder=audio_encoder_config.to_dict(),
- decoder=decoder_config.to_dict(),
- **kwargs,
- )
-
- @property
- # This is a property because you might want to change the codec model on the fly
- def sampling_rate(self):
- return self.audio_encoder.sampling_rate
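For quick reference, here is a minimal sketch of how the two config classes above can be combined; the hyperparameter values are illustrative assumptions, not taken from any released checkpoint:

```python
from transformers import EncodecConfig, MusicgenConfig, MusicgenDecoderConfig, T5Config

# Hypothetical example: a smaller-than-default decoder combined with default
# text/audio encoder configs via MusicgenConfig.from_sub_models_config.
decoder_config = MusicgenDecoderConfig(
    hidden_size=512,          # default is 1024
    num_hidden_layers=12,     # default is 24
    num_attention_heads=8,    # default is 16
    num_codebooks=4,
)
config = MusicgenConfig.from_sub_models_config(T5Config(), EncodecConfig(), decoder_config)
print(config.decoder.hidden_size)   # 512
print(config.sampling_rate)         # taken from the audio encoder config
```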
diff --git a/spaces/ynhe/AskAnything/models/grit_src/grit/data/custom_build_augmentation.py b/spaces/ynhe/AskAnything/models/grit_src/grit/data/custom_build_augmentation.py
deleted file mode 100644
index 49a52d011c09dbe027d41ee7e50127c392a8bf33..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/grit/data/custom_build_augmentation.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.data import transforms as T
-from .transforms.custom_augmentation_impl import EfficientDetResizeCrop
-
-
-def build_custom_augmentation(cfg, is_train, scale=None, size=None, \
- min_size=None, max_size=None):
- """
- Create a list of default :class:`Augmentation` from config.
- Now it includes resizing and flipping.
-
- Returns:
- list[Augmentation]
- """
- if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge':
- if is_train:
- min_size = cfg.INPUT.MIN_SIZE_TRAIN if min_size is None else min_size
- max_size = cfg.INPUT.MAX_SIZE_TRAIN if max_size is None else max_size
- sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
- else:
- min_size = cfg.INPUT.MIN_SIZE_TEST
- max_size = cfg.INPUT.MAX_SIZE_TEST
- sample_style = "choice"
- augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)]
- elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop':
- if is_train:
- scale = cfg.INPUT.SCALE_RANGE if scale is None else scale
- size = cfg.INPUT.TRAIN_SIZE if size is None else size
- else:
- scale = (1, 1)
- size = cfg.INPUT.TEST_SIZE
- augmentation = [EfficientDetResizeCrop(size, scale)]
- else:
- assert 0, cfg.INPUT.CUSTOM_AUG
-
- if is_train:
- augmentation.append(T.RandomFlip())
- return augmentation
-
-
-build_custom_transform_gen = build_custom_augmentation
-"""
-Alias for backward-compatibility.
-"""
\ No newline at end of file
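A rough usage sketch for the helper above, assuming the repository has been installed so that the module is importable and that the non-standard config keys (INPUT.CUSTOM_AUG, SCALE_RANGE, TRAIN_SIZE, TEST_SIZE) have already been added to the project's config defaults; the values shown are placeholders:

```python
from detectron2.config import get_cfg
# Import path is an assumption; it depends on how the GRiT sources are installed.
from grit.data.custom_build_augmentation import build_custom_augmentation

cfg = get_cfg()
# These keys are project-specific additions (assumed here), not detectron2 defaults.
cfg.INPUT.CUSTOM_AUG = "EfficientDetResizeCrop"
cfg.INPUT.SCALE_RANGE = (0.1, 2.0)
cfg.INPUT.TRAIN_SIZE = 640
cfg.INPUT.TEST_SIZE = 640

train_augs = build_custom_augmentation(cfg, is_train=True)   # resize-crop + random flip
test_augs = build_custom_augmentation(cfg, is_train=False)   # deterministic resize only
```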
diff --git a/spaces/yujieq/RxnScribe/README.md b/spaces/yujieq/RxnScribe/README.md
deleted file mode 100644
index a6f6bdff436b7af8f7b25e6a18124c77b8807b4e..0000000000000000000000000000000000000000
--- a/spaces/yujieq/RxnScribe/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: RxnScribe
-emoji: 😻
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.14.0
-python_version: 3.8
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/yz333/real-bing/Dockerfile b/spaces/yz333/real-bing/Dockerfile
deleted file mode 100644
index 830ca23cf42e757a643e8bf73782052f766260b0..0000000000000000000000000000000000000000
--- a/spaces/yz333/real-bing/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" shrinks the resulting binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set an environment variable; the value here is a random string
-ENV Go_Proxy_BingAI_USER_TOKEN_1="1T2F4_Odu2IwYu8s_agYT5yAosP-CP4c1QEzm7tZAqfLGYfWbqswug4R7SaToyb6264LYdXzhBS2zJp6NrtQhhoEvYUMfP6tcIBI2TsTeEm9YSA9X1q0D9dakKc5AXj9-gg2VvOtOs2aQLbPeoYyhWZb42eo2rVrFTE6s_FgPo9ZlyJZOVdB5twnwU6yGfCptu-gS4SEyR045tyBCgZzlDw"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/zej97/AI-Research-Assistant/actions/google_search.py b/spaces/zej97/AI-Research-Assistant/actions/google_search.py
deleted file mode 100644
index 1943a37ff3d2a9ba5c8fab7be99e2898f34c3d49..0000000000000000000000000000000000000000
--- a/spaces/zej97/AI-Research-Assistant/actions/google_search.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import requests
-from bs4 import BeautifulSoup
-
-
-def get_urls(query, proxies=None):
- url = f"https://www.google.com/search?q={query}"
- headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
- response = requests.get(url, headers=headers, proxies=proxies)
- soup = BeautifulSoup(response.content, 'html.parser')
- results = []
- for g in soup.find_all('div', class_='g'):
- anchors = g.find_all('a')
- if anchors:
- link = anchors[0]['href']
- if link.startswith('/url?q='):
- link = link[7:]
- if not link.startswith('http'):
- continue
- title = g.find('h3').text
- item = {'title': title, 'link': link}
- results.append(item)
-
- return results
-
-def scrape_text(url, proxies=None) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- headers = {
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
- 'Content-Type': 'text/plain',
- }
- try:
- response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
- if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
- except requests.exceptions.RequestException:
- return "Unable to connect to the server"
- soup = BeautifulSoup(response.text, "html.parser")
- for script in soup(["script", "style"]):
- script.extract()
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return text
-
-
-if __name__ == '__main__':
- txt = "What is LSTM?"
- proxies = None
- urls = get_urls(txt, proxies)
- max_search_result = 10
-
- for url in urls[:max_search_result]:
- print(url)
- print(scrape_text(url['link'], proxies))
- print("\n\n")
diff --git a/spaces/zhang-wei-jian/docker/node_modules/media-typer/index.js b/spaces/zhang-wei-jian/docker/node_modules/media-typer/index.js
deleted file mode 100644
index 07f7295ee780fbfb881b953e92f79e49fe00f08c..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/media-typer/index.js
+++ /dev/null
@@ -1,270 +0,0 @@
-/*!
- * media-typer
- * Copyright(c) 2014 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-/**
- * RegExp to match *( ";" parameter ) in RFC 2616 sec 3.7
- *
- * parameter = token "=" ( token | quoted-string )
- * token = 1*<any CHAR except CTLs or separators>
- * separators = "(" | ")" | "<" | ">" | "@"
- * | "," | ";" | ":" | "\" | <">
- * | "/" | "[" | "]" | "?" | "="
- * | "{" | "}" | SP | HT
- * quoted-string = ( <"> *(qdtext | quoted-pair ) <"> )
- * qdtext = <any TEXT except <">>
- * quoted-pair = "\" CHAR
- * CHAR = <any US-ASCII character (octets 0 - 127)>
- * TEXT = <any OCTET except CTLs, but including LWS>
- * LWS = [CRLF] 1*( SP | HT )
- * CRLF = CR LF
- * CR = <US-ASCII CR, carriage return (13)>
- * LF = <US-ASCII LF, linefeed (10)>
- * SP = <US-ASCII SP, space (32)>
- * SHT = <US-ASCII HT, horizontal-tab (9)>
- * CTL = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
- * OCTET = <any 8-bit sequence of data>
- */
-var paramRegExp = /; *([!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+) *= *("(?:[ !\u0023-\u005b\u005d-\u007e\u0080-\u00ff]|\\[\u0020-\u007e])*"|[!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+) */g;
-var textRegExp = /^[\u0020-\u007e\u0080-\u00ff]+$/
-var tokenRegExp = /^[!#$%&'\*\+\-\.0-9A-Z\^_`a-z\|~]+$/
-
-/**
- * RegExp to match quoted-pair in RFC 2616
- *
- * quoted-pair = "\" CHAR
- * CHAR = <any US-ASCII character (octets 0 - 127)>
- */
-var qescRegExp = /\\([\u0000-\u007f])/g;
-
-/**
- * RegExp to match chars that must be quoted-pair in RFC 2616
- */
-var quoteRegExp = /([\\"])/g;
-
-/**
- * RegExp to match type in RFC 6838
- *
- * type-name = restricted-name
- * subtype-name = restricted-name
- * restricted-name = restricted-name-first *126restricted-name-chars
- * restricted-name-first = ALPHA / DIGIT
- * restricted-name-chars = ALPHA / DIGIT / "!" / "#" /
- * "$" / "&" / "-" / "^" / "_"
- * restricted-name-chars =/ "." ; Characters before first dot always
- * ; specify a facet name
- * restricted-name-chars =/ "+" ; Characters after last plus always
- * ; specify a structured syntax suffix
- * ALPHA = %x41-5A / %x61-7A ; A-Z / a-z
- * DIGIT = %x30-39 ; 0-9
- */
-var subtypeNameRegExp = /^[A-Za-z0-9][A-Za-z0-9!#$&^_.-]{0,126}$/
-var typeNameRegExp = /^[A-Za-z0-9][A-Za-z0-9!#$&^_-]{0,126}$/
-var typeRegExp = /^ *([A-Za-z0-9][A-Za-z0-9!#$&^_-]{0,126})\/([A-Za-z0-9][A-Za-z0-9!#$&^_.+-]{0,126}) *$/;
-
-/**
- * Module exports.
- */
-
-exports.format = format
-exports.parse = parse
-
-/**
- * Format object to media type.
- *
- * @param {object} obj
- * @return {string}
- * @api public
- */
-
-function format(obj) {
- if (!obj || typeof obj !== 'object') {
- throw new TypeError('argument obj is required')
- }
-
- var parameters = obj.parameters
- var subtype = obj.subtype
- var suffix = obj.suffix
- var type = obj.type
-
- if (!type || !typeNameRegExp.test(type)) {
- throw new TypeError('invalid type')
- }
-
- if (!subtype || !subtypeNameRegExp.test(subtype)) {
- throw new TypeError('invalid subtype')
- }
-
- // format as type/subtype
- var string = type + '/' + subtype
-
- // append +suffix
- if (suffix) {
- if (!typeNameRegExp.test(suffix)) {
- throw new TypeError('invalid suffix')
- }
-
- string += '+' + suffix
- }
-
- // append parameters
- if (parameters && typeof parameters === 'object') {
- var param
- var params = Object.keys(parameters).sort()
-
- for (var i = 0; i < params.length; i++) {
- param = params[i]
-
- if (!tokenRegExp.test(param)) {
- throw new TypeError('invalid parameter name')
- }
-
- string += '; ' + param + '=' + qstring(parameters[param])
- }
- }
-
- return string
-}
-
-/**
- * Parse media type to object.
- *
- * @param {string|object} string
- * @return {Object}
- * @api public
- */
-
-function parse(string) {
- if (!string) {
- throw new TypeError('argument string is required')
- }
-
- // support req/res-like objects as argument
- if (typeof string === 'object') {
- string = getcontenttype(string)
- }
-
- if (typeof string !== 'string') {
- throw new TypeError('argument string is required to be a string')
- }
-
- var index = string.indexOf(';')
- var type = index !== -1
- ? string.substr(0, index)
- : string
-
- var key
- var match
- var obj = splitType(type)
- var params = {}
- var value
-
- paramRegExp.lastIndex = index
-
- while (match = paramRegExp.exec(string)) {
- if (match.index !== index) {
- throw new TypeError('invalid parameter format')
- }
-
- index += match[0].length
- key = match[1].toLowerCase()
- value = match[2]
-
- if (value[0] === '"') {
- // remove quotes and escapes
- value = value
- .substr(1, value.length - 2)
- .replace(qescRegExp, '$1')
- }
-
- params[key] = value
- }
-
- if (index !== -1 && index !== string.length) {
- throw new TypeError('invalid parameter format')
- }
-
- obj.parameters = params
-
- return obj
-}
-
-/**
- * Get content-type from req/res objects.
- *
- * @param {object} obj
- * @return {Object}
- * @api private
- */
-
-function getcontenttype(obj) {
- if (typeof obj.getHeader === 'function') {
- // res-like
- return obj.getHeader('content-type')
- }
-
- if (typeof obj.headers === 'object') {
- // req-like
- return obj.headers && obj.headers['content-type']
- }
-}
-
-/**
- * Quote a string if necessary.
- *
- * @param {string} val
- * @return {string}
- * @api private
- */
-
-function qstring(val) {
- var str = String(val)
-
- // no need to quote tokens
- if (tokenRegExp.test(str)) {
- return str
- }
-
- if (str.length > 0 && !textRegExp.test(str)) {
- throw new TypeError('invalid parameter value')
- }
-
- return '"' + str.replace(quoteRegExp, '\\$1') + '"'
-}
-
-/**
- * Simply "type/subtype+siffx" into parts.
- *
- * @param {string} string
- * @return {Object}
- * @api private
- */
-
-function splitType(string) {
- var match = typeRegExp.exec(string.toLowerCase())
-
- if (!match) {
- throw new TypeError('invalid media type')
- }
-
- var type = match[1]
- var subtype = match[2]
- var suffix
-
- // suffix after last +
- var index = subtype.lastIndexOf('+')
- if (index !== -1) {
- suffix = subtype.substr(index + 1)
- subtype = subtype.substr(0, index)
- }
-
- var obj = {
- type: type,
- subtype: subtype,
- suffix: suffix
- }
-
- return obj
-}
diff --git a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
deleted file mode 100644
index c94575903bdf2eef71ecbe66382375552446e510..0000000000000000000000000000000000000000
--- a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
+++ /dev/null
@@ -1,17 +0,0 @@
-#include "libipc/pool_alloc.h"
-
-#include "libipc/memory/resource.h"
-
-namespace ipc {
-namespace mem {
-
-void* pool_alloc::alloc(std::size_t size) {
- return async_pool_alloc::alloc(size);
-}
-
-void pool_alloc::free(void* p, std::size_t size) {
- async_pool_alloc::free(p, size);
-}
-
-} // namespace mem
-} // namespace ipc
diff --git a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp
deleted file mode 100644
index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000
--- a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp
+++ /dev/null
@@ -1,1049 +0,0 @@
-// jpge.cpp - C++ class for JPEG compression.
-// Public domain, Rich Geldreich
-// v1.01, Dec. 18, 2010 - Initial release
-// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.)
-// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc.
-// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03).
-// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug.
-// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless).
-// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02.
-
-#include "jpge.h"
-
-#include <stdlib.h>
-#include <string.h>
-#if PLATFORM_WINDOWS
-#include <malloc.h>
-#endif
-
-#define JPGE_MAX(a,b) (((a)>(b))?(a):(b))
-#define JPGE_MIN(a,b) (((a)<(b))?(a):(b))
-
-namespace jpge {
-
-static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
-static inline void jpge_free(void *p) { FMemory::Free(p); }
-
-// Various JPEG enums and tables.
-enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 };
-enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 };
-
-static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
-static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 };
-static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 };
-static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 };
-static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 };
-static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d };
-static uint8 s_ac_lum_val[AC_LUM_CODES] =
-{
- 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0,
- 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49,
- 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89,
- 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5,
- 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,
- 0xf9,0xfa
-};
-static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 };
-static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 };
-static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 };
-static uint8 s_ac_chroma_val[AC_CHROMA_CODES] =
-{
- 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0,
- 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,
- 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87,
- 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,
- 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,
- 0xf9,0xfa
-};
-
-// Low-level helper functions.
-template <class T> inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); }
-
-const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329;
-static inline uint8 clamp(int i) { if (static_cast<uint>(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast<uint8>(i); }
-
-static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
- for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--)
- {
- const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
- pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
- pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
- pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
- }
-}
-
-static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
- for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--)
- pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
- for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--)
- {
- const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
- pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
- pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
- pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
- }
-}
-
-static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
- for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--)
- pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels)
-{
- for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; }
-}
-
-// Forward DCT - DCT derived from jfdctint.
-#define CONST_BITS 13
-#define ROW_BITS 2
-#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n))
-#define DCT_MUL(var, c) (static_cast<int16>(var) * static_cast<int32>(c))
-#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \
- int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \
- int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \
- int32 u1 = DCT_MUL(t12 + t13, 4433); \
- s2 = u1 + DCT_MUL(t13, 6270); \
- s6 = u1 + DCT_MUL(t12, -15137); \
- u1 = t4 + t7; \
- int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \
- int32 z5 = DCT_MUL(u3 + u4, 9633); \
- t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \
- t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \
- u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \
- u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \
- u3 += z5; u4 += z5; \
- s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3;
-
-static void DCT2D(int32 *p)
-{
- int32 c, *q = p;
- for (c = 7; c >= 0; c--, q += 8)
- {
- int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7];
- DCT1D(s0, s1, s2, s3, s4, s5, s6, s7);
- q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS);
- q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS);
- }
- for (q = p, c = 7; c >= 0; c--, q++)
- {
- int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8];
- DCT1D(s0, s1, s2, s3, s4, s5, s6, s7);
- q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3);
- q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3);
- }
-}
-
-struct sym_freq { uint m_key, m_sym_index; };
-
-// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values.
-static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1)
-{
- const uint cMaxPasses = 4;
- uint32 hist[256 * cMaxPasses]; clear_obj(hist);
- for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; }
- sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1;
- uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--;
- for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8)
- {
- const uint32* pHist = &hist[pass << 8];
- uint offsets[256], cur_ofs = 0;
- for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; }
- for (uint i = 0; i < num_syms; i++)
- pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i];
- sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t;
- }
- return pCur_syms;
-}
-
-// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996.
-static void calculate_minimum_redundancy(sym_freq *A, int n)
-{
- int root, leaf, next, avbl, used, dpth;
- if (n==0) return; else if (n==1) { A[0].m_key = 1; return; }
- A[0].m_key += A[1].m_key; root = 0; leaf = 2;
- for (next=1; next < n-1; next++)
- {
- if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = next; } else A[next].m_key = A[leaf++].m_key;
- if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key += A[root].m_key; A[root++].m_key = next; } else A[next].m_key += A[leaf++].m_key;
- }
- A[n-2].m_key = 0;
- for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1;
- avbl = 1; used = dpth = 0; root = n-2; next = n-1;
- while (avbl>0)
- {
- while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; }
- while (avbl>used) { A[next--].m_key = dpth; avbl--; }
- avbl = 2*used; dpth++; used = 0;
- }
-}
-
-// Limits canonical Huffman code table's max code size to max_code_size.
-static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size)
-{
- if (code_list_len <= 1) return;
-
- for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i];
-
- uint32 total = 0;
- for (int i = max_code_size; i > 0; i--)
- total += (((uint32)pNum_codes[i]) << (max_code_size - i));
-
- while (total != (1UL << max_code_size))
- {
- pNum_codes[max_code_size]--;
- for (int i = max_code_size - 1; i > 0; i--)
- {
- if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; }
- }
- total--;
- }
-}
-
-// Generates an optimized Huffman table.
-void jpeg_encoder::optimize_huffman_table(int table_num, int table_len)
-{
- sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS];
- syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's
- int num_used_syms = 1;
- const uint32 *pSym_count = &m_huff_count[table_num][0];
- for (int i = 0; i < table_len; i++)
- if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; }
- sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1);
- calculate_minimum_redundancy(pSyms, num_used_syms);
-
- // Count the # of symbols of each code size.
- int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes);
- for (int i = 0; i < num_used_syms; i++)
- num_codes[pSyms[i].m_key]++;
-
- const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol)
- huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT);
-
- // Compute m_huff_bits array, which contains the # of symbols per code size.
- clear_obj(m_huff_bits[table_num]);
- for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++)
- m_huff_bits[table_num][i] = static_cast<uint8>(num_codes[i]);
-
- // Remove the dummy symbol added above, which must be in largest bucket.
- for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--)
- {
- if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; }
- }
-
- // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest).
- for (int i = num_used_syms - 1; i >= 1; i--)
- m_huff_val[table_num][num_used_syms - 1 - i] = static_cast<uint8>(pSyms[i].m_sym_index - 1);
-}
-
-// JPEG marker generation.
-void jpeg_encoder::emit_byte(uint8 i)
-{
- m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i);
-}
-
-void jpeg_encoder::emit_word(uint i)
-{
- emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF));
-}
-
-void jpeg_encoder::emit_marker(int marker)
-{
- emit_byte(uint8(0xFF)); emit_byte(uint8(marker));
-}
-
-// Emit JFIF marker
-void jpeg_encoder::emit_jfif_app0()
-{
- emit_marker(M_APP0);
- emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1);
- emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */
- emit_byte(0);
- emit_byte(1); /* Major version */
- emit_byte(1); /* Minor version */
- emit_byte(0); /* Density unit */
- emit_word(1);
- emit_word(1);
- emit_byte(0); /* No thumbnail image */
- emit_byte(0);
-}
-
-// Emit quantization tables
-void jpeg_encoder::emit_dqt()
-{
- for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++)
- {
- emit_marker(M_DQT);
- emit_word(64 + 1 + 2);
- emit_byte(static_cast<uint8>(i));
- for (int j = 0; j < 64; j++)
- emit_byte(static_cast<uint8>(m_quantization_tables[i][j]));
- }
-}
-
-// Emit start of frame marker
-void jpeg_encoder::emit_sof()
-{
- emit_marker(M_SOF0); /* baseline */
- emit_word(3 * m_num_components + 2 + 5 + 1);
- emit_byte(8); /* precision */
- emit_word(m_image_y);
- emit_word(m_image_x);
- emit_byte(m_num_components);
- for (int i = 0; i < m_num_components; i++)
- {
- emit_byte(static_cast<uint8>(i + 1)); /* component ID */
- emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */
- emit_byte(i > 0); /* quant. table num */
- }
-}
-
-// Emit Huffman table.
-void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag)
-{
- emit_marker(M_DHT);
-
- int length = 0;
- for (int i = 1; i <= 16; i++)
- length += bits[i];
-
- emit_word(length + 2 + 1 + 16);
- emit_byte(static_cast<uint8>(index + (ac_flag << 4)));
-
- for (int i = 1; i <= 16; i++)
- emit_byte(bits[i]);
-
- for (int i = 0; i < length; i++)
- emit_byte(val[i]);
-}
-
-// Emit all Huffman tables.
-void jpeg_encoder::emit_dhts()
-{
- emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false);
- emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true);
- if (m_num_components == 3)
- {
- emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false);
- emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true);
- }
-}
-
-// emit start of scan
-void jpeg_encoder::emit_sos()
-{
- emit_marker(M_SOS);
- emit_word(2 * m_num_components + 2 + 1 + 3);
- emit_byte(m_num_components);
- for (int i = 0; i < m_num_components; i++)
- {
- emit_byte(static_cast<uint8>(i + 1));
- if (i == 0)
- emit_byte((0 << 4) + 0);
- else
- emit_byte((1 << 4) + 1);
- }
- emit_byte(0); /* spectral selection */
- emit_byte(63);
- emit_byte(0);
-}
-
-// Emit all markers at beginning of image file.
-void jpeg_encoder::emit_markers()
-{
- emit_marker(M_SOI);
- emit_jfif_app0();
- emit_dqt();
- emit_sof();
- emit_dhts();
- emit_sos();
-}
-
-// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays.
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val)
-{
- int i, l, last_p, si;
- uint8 huff_size[257];
- uint huff_code[257];
- uint code;
-
- int p = 0;
- for (l = 1; l <= 16; l++)
- for (i = 1; i <= bits[l]; i++)
- huff_size[p++] = (char)l;
-
- huff_size[p] = 0; last_p = p; // write sentinel
-
- code = 0; si = huff_size[0]; p = 0;
-
- while (huff_size[p])
- {
- while (huff_size[p] == si)
- huff_code[p++] = code++;
- code <<= 1;
- si++;
- }
-
- memset(codes, 0, sizeof(codes[0])*256);
- memset(code_sizes, 0, sizeof(code_sizes[0])*256);
- for (p = 0; p < last_p; p++)
- {
- codes[val[p]] = huff_code[p];
- code_sizes[val[p]] = huff_size[p];
- }
-}
-
-// Quantization table generation.
-void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc)
-{
- int32 q;
- if (m_params.m_quality < 50)
- q = 5000 / m_params.m_quality;
- else
- q = 200 - m_params.m_quality * 2;
- for (int i = 0; i < 64; i++)
- {
- int32 j = *pSrc++; j = (j * q + 50L) / 100L;
- *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255);
- }
-}
-
-// Higher-level methods.
-void jpeg_encoder::first_pass_init()
-{
- m_bit_buffer = 0; m_bits_in = 0;
- memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0]));
- m_mcu_y_ofs = 0;
- m_pass_num = 1;
-}
-
-bool jpeg_encoder::second_pass_init()
-{
- compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]);
- compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]);
- if (m_num_components > 1)
- {
- compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]);
- compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]);
- }
- first_pass_init();
- emit_markers();
- m_pass_num = 2;
- return true;
-}
-
-bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels)
-{
- m_num_components = 3;
- switch (m_params.m_subsampling)
- {
- case Y_ONLY:
- {
- m_num_components = 1;
- m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1;
- m_mcu_x = 8; m_mcu_y = 8;
- break;
- }
- case H1V1:
- {
- m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1;
- m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1;
- m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1;
- m_mcu_x = 8; m_mcu_y = 8;
- break;
- }
- case H2V1:
- {
- m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1;
- m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1;
- m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1;
- m_mcu_x = 16; m_mcu_y = 8;
- break;
- }
- case H2V2:
- {
- m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2;
- m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1;
- m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1;
- m_mcu_x = 16; m_mcu_y = 16;
- }
- }
-
- m_image_x = p_x_res; m_image_y = p_y_res;
- m_image_bpp = src_channels;
- m_image_bpl = m_image_x * src_channels;
- m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1));
- m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1));
- m_image_bpl_xlt = m_image_x * m_num_components;
- m_image_bpl_mcu = m_image_x_mcu * m_num_components;
- m_mcus_per_row = m_image_x_mcu / m_mcu_x;
-
- if ((m_mcu_lines[0] = static_cast<uint8*>(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false;
- for (int i = 1; i < m_mcu_y; i++)
- m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu;
-
- compute_quant_table(m_quantization_tables[0], s_std_lum_quant);
- compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? s_std_lum_quant : s_std_croma_quant);
-
- m_out_buf_left = JPGE_OUT_BUF_SIZE;
- m_pOut_buf = m_out_buf;
-
- if (m_params.m_two_pass_flag)
- {
- clear_obj(m_huff_count);
- first_pass_init();
- }
- else
- {
- memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES);
- memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES);
- memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES);
- memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES);
- if (!second_pass_init()) return false; // in effect, skip over the first pass
- }
- return m_all_stream_writes_succeeded;
-}
-
-void jpeg_encoder::load_block_8_8_grey(int x)
-{
- uint8 *pSrc;
- sample_array_t *pDst = m_sample_array;
- x <<= 3;
- for (int i = 0; i < 8; i++, pDst += 8)
- {
- pSrc = m_mcu_lines[i] + x;
- pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128;
- pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128;
- }
-}
-
-void jpeg_encoder::load_block_8_8(int x, int y, int c)
-{
- uint8 *pSrc;
- sample_array_t *pDst = m_sample_array;
- x = (x * (8 * 3)) + c;
- y <<= 3;
- for (int i = 0; i < 8; i++, pDst += 8)
- {
- pSrc = m_mcu_lines[y + i] + x;
- pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128;
- pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128;
- }
-}
-
-void jpeg_encoder::load_block_16_8(int x, int c)
-{
- uint8 *pSrc1, *pSrc2;
- sample_array_t *pDst = m_sample_array;
- x = (x * (16 * 3)) + c;
- int a = 0, b = 2;
- for (int i = 0; i < 16; i += 2, pDst += 8)
- {
- pSrc1 = m_mcu_lines[i + 0] + x;
- pSrc2 = m_mcu_lines[i + 1] + x;
- pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128;
- pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128;
- pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128;
- pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128;
- int temp = a; a = b; b = temp;
- }
-}
-
-void jpeg_encoder::load_block_16_8_8(int x, int c)
-{
- uint8 *pSrc1;
- sample_array_t *pDst = m_sample_array;
- x = (x * (16 * 3)) + c;
- for (int i = 0; i < 8; i++, pDst += 8)
- {
- pSrc1 = m_mcu_lines[i + 0] + x;
- pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128;
- pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128;
- pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128;
- pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128;
- }
-}
-
-void jpeg_encoder::load_quantized_coefficients(int component_num)
-{
- int32 *q = m_quantization_tables[component_num > 0];
- int16 *pDst = m_coefficient_array;
- for (int i = 0; i < 64; i++)
- {
- sample_array_t j = m_sample_array[s_zag[i]];
- if (j < 0)
- {
- if ((j = -j + (*q >> 1)) < *q)
- *pDst++ = 0;
- else
- *pDst++ = static_cast<int16>(-(j / *q));
- }
- else
- {
- if ((j = j + (*q >> 1)) < *q)
- *pDst++ = 0;
- else
- *pDst++ = static_cast<int16>((j / *q));
- }
- q++;
- }
-}
-
-void jpeg_encoder::flush_output_buffer()
-{
- if (m_out_buf_left != JPGE_OUT_BUF_SIZE)
- m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left);
- m_pOut_buf = m_out_buf;
- m_out_buf_left = JPGE_OUT_BUF_SIZE;
-}
-
-void jpeg_encoder::put_bits(uint bits, uint len)
-{
- m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len)));
- while (m_bits_in >= 8)
- {
- uint8 c;
- #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); }
- JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF));
- if (c == 0xFF) JPGE_PUT_BYTE(0);
- m_bit_buffer <<= 8;
- m_bits_in -= 8;
- }
-}
-
-void jpeg_encoder::code_coefficients_pass_one(int component_num)
-{
- if (component_num >= 3) return; // just to shut up static analysis
- int i, run_len, nbits, temp1;
- int16 *src = m_coefficient_array;
- uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0];
-
- temp1 = src[0] - m_last_dc_val[component_num];
- m_last_dc_val[component_num] = src[0];
- if (temp1 < 0) temp1 = -temp1;
-
- nbits = 0;
- while (temp1)
- {
- nbits++; temp1 >>= 1;
- }
-
- dc_count[nbits]++;
- for (run_len = 0, i = 1; i < 64; i++)
- {
- if ((temp1 = m_coefficient_array[i]) == 0)
- run_len++;
- else
- {
- while (run_len >= 16)
- {
- ac_count[0xF0]++;
- run_len -= 16;
- }
- if (temp1 < 0) temp1 = -temp1;
- nbits = 1;
- while (temp1 >>= 1) nbits++;
- ac_count[(run_len << 4) + nbits]++;
- run_len = 0;
- }
- }
- if (run_len) ac_count[0]++;
-}
-
-void jpeg_encoder::code_coefficients_pass_two(int component_num)
-{
- int i, j, run_len, nbits, temp1, temp2;
- int16 *pSrc = m_coefficient_array;
- uint *codes[2];
- uint8 *code_sizes[2];
-
- if (component_num == 0)
- {
- codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0];
- code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0];
- }
- else
- {
- codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1];
- code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1];
- }
-
- temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num];
- m_last_dc_val[component_num] = pSrc[0];
-
- if (temp1 < 0)
- {
- temp1 = -temp1; temp2--;
- }
-
- nbits = 0;
- while (temp1)
- {
- nbits++; temp1 >>= 1;
- }
-
- put_bits(codes[0][nbits], code_sizes[0][nbits]);
- if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits);
-
- for (run_len = 0, i = 1; i < 64; i++)
- {
- if ((temp1 = m_coefficient_array[i]) == 0)
- run_len++;
- else
- {
- while (run_len >= 16)
- {
- put_bits(codes[1][0xF0], code_sizes[1][0xF0]);
- run_len -= 16;
- }
- if ((temp2 = temp1) < 0)
- {
- temp1 = -temp1;
- temp2--;
- }
- nbits = 1;
- while (temp1 >>= 1)
- nbits++;
- j = (run_len << 4) + nbits;
- put_bits(codes[1][j], code_sizes[1][j]);
- put_bits(temp2 & ((1 << nbits) - 1), nbits);
- run_len = 0;
- }
- }
- if (run_len)
- put_bits(codes[1][0], code_sizes[1][0]);
-}
-
-void jpeg_encoder::code_block(int component_num)
-{
- DCT2D(m_sample_array);
- load_quantized_coefficients(component_num);
- if (m_pass_num == 1)
- code_coefficients_pass_one(component_num);
- else
- code_coefficients_pass_two(component_num);
-}
-
-void jpeg_encoder::process_mcu_row()
-{
- if (m_num_components == 1)
- {
- for (int i = 0; i < m_mcus_per_row; i++)
- {
- load_block_8_8_grey(i); code_block(0);
- }
- }
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
- {
- for (int i = 0; i < m_mcus_per_row; i++)
- {
- load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2);
- }
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
- {
- for (int i = 0; i < m_mcus_per_row; i++)
- {
- load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0);
- load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2);
- }
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
- {
- for (int i = 0; i < m_mcus_per_row; i++)
- {
- load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0);
- load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0);
- load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2);
- }
- }
-}
-
-bool jpeg_encoder::terminate_pass_one()
-{
- optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES);
- if (m_num_components > 1)
- {
- optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES);
- }
- return second_pass_init();
-}
-
-bool jpeg_encoder::terminate_pass_two()
-{
- put_bits(0x7F, 7);
- flush_output_buffer();
- emit_marker(M_EOI);
- m_pass_num++; // purposely bump up m_pass_num, for debugging
- return true;
-}
-
-bool jpeg_encoder::process_end_of_image()
-{
- if (m_mcu_y_ofs)
- {
- if (m_mcu_y_ofs < 16) // check here just to shut up static analysis
- {
- for (int i = m_mcu_y_ofs; i < m_mcu_y; i++)
- memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu);
- }
-
- process_mcu_row();
- }
-
- if (m_pass_num == 1)
- return terminate_pass_one();
- else
- return terminate_pass_two();
-}
-
-void jpeg_encoder::load_mcu(const void *pSrc)
-{
- const uint8* Psrc = reinterpret_cast<const uint8*>(pSrc);
-
- uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst
-
- if (m_num_components == 1)
- {
- if (m_image_bpp == 4)
- RGBA_to_Y(pDst, Psrc, m_image_x);
- else if (m_image_bpp == 3)
- RGB_to_Y(pDst, Psrc, m_image_x);
- else
- memcpy(pDst, Psrc, m_image_x);
- }
- else
- {
- if (m_image_bpp == 4)
- RGBA_to_YCC(pDst, Psrc, m_image_x);
- else if (m_image_bpp == 3)
- RGB_to_YCC(pDst, Psrc, m_image_x);
- else
- Y_to_YCC(pDst, Psrc, m_image_x);
- }
-
- // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16
- if (m_num_components == 1)
- memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x);
- else
- {
- const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2];
- uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt;
- for (int i = m_image_x; i < m_image_x_mcu; i++)
- {
- *q++ = y; *q++ = cb; *q++ = cr;
- }
- }
-
- if (++m_mcu_y_ofs == m_mcu_y)
- {
- process_mcu_row();
- m_mcu_y_ofs = 0;
- }
-}
-
-void jpeg_encoder::clear()
-{
- m_mcu_lines[0] = NULL;
- m_pass_num = 0;
- m_all_stream_writes_succeeded = true;
-}
-
-jpeg_encoder::jpeg_encoder()
-{
- clear();
-}
-
-jpeg_encoder::~jpeg_encoder()
-{
- deinit();
-}
-
-bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params)
-{
- deinit();
- if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false;
- m_pStream = pStream;
- m_params = comp_params;
- return jpg_open(width, height, src_channels);
-}
-
-void jpeg_encoder::deinit()
-{
- jpge_free(m_mcu_lines[0]);
- clear();
-}
-
-bool jpeg_encoder::process_scanline(const void* pScanline)
-{
- if ((m_pass_num < 1) || (m_pass_num > 2)) return false;
- if (m_all_stream_writes_succeeded)
- {
- if (!pScanline)
- {
- if (!process_end_of_image()) return false;
- }
- else
- {
- load_mcu(pScanline);
- }
- }
- return m_all_stream_writes_succeeded;
-}
-
-// Higher level wrappers/examples (optional).
-#include <stdio.h>
-
-class cfile_stream : public output_stream
-{
- cfile_stream(const cfile_stream &);
- cfile_stream &operator= (const cfile_stream &);
-
- FILE* m_pFile;
- bool m_bStatus;
-
-public:
- cfile_stream() : m_pFile(NULL), m_bStatus(false) { }
-
- virtual ~cfile_stream()
- {
- close();
- }
-
- bool open(const char *pFilename)
- {
- close();
-#if defined(_MSC_VER)
- if (fopen_s(&m_pFile, pFilename, "wb") != 0)
- {
- return false;
- }
-#else
- m_pFile = fopen(pFilename, "wb");
-#endif
- m_bStatus = (m_pFile != NULL);
- return m_bStatus;
- }
-
- bool close()
- {
- if (m_pFile)
- {
- if (fclose(m_pFile) == EOF)
- {
- m_bStatus = false;
- }
- m_pFile = NULL;
- }
- return m_bStatus;
- }
-
- virtual bool put_buf(const void* pBuf, int64_t len)
- {
- m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1);
- return m_bStatus;
- }
-
- uint get_size() const
- {
- return m_pFile ? ftell(m_pFile) : 0;
- }
-};
-
-// Writes JPEG image to file.
-bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params)
-{
- cfile_stream dst_stream;
- if (!dst_stream.open(pFilename))
- return false;
-
- jpge::jpeg_encoder dst_image;
- if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params))
- return false;
-
- for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++)
- {
- for (int64_t i = 0; i < height; i++)
- {
- // i, width, and num_channels are all 64bit
- const uint8* pBuf = pImage_data + i * width * num_channels;
- if (!dst_image.process_scanline(pBuf))
- return false;
- }
- if (!dst_image.process_scanline(NULL))
- return false;
- }
-
- dst_image.deinit();
-
- return dst_stream.close();
-}
-
-class memory_stream : public output_stream
-{
- memory_stream(const memory_stream &);
- memory_stream &operator= (const memory_stream &);
-
- uint8 *m_pBuf;
- uint64_t m_buf_size, m_buf_ofs;
-
-public:
- memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast<uint8*>(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { }
-
- virtual ~memory_stream() { }
-
- virtual bool put_buf(const void* pBuf, int64_t len)
- {
- uint64_t buf_remaining = m_buf_size - m_buf_ofs;
- if ((uint64_t)len > buf_remaining)
- return false;
- memcpy(m_pBuf + m_buf_ofs, pBuf, len);
- m_buf_ofs += len;
- return true;
- }
-
- uint64_t get_size() const
- {
- return m_buf_ofs;
- }
-};
-
-bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params)
-{
- if ((!pDstBuf) || (!buf_size))
- return false;
-
- memory_stream dst_stream(pDstBuf, buf_size);
-
- buf_size = 0;
-
- jpge::jpeg_encoder dst_image;
- if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params))
- return false;
-
- for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++)
- {
- for (int64_t i = 0; i < height; i++)
- {
- const uint8* pScanline = pImage_data + i * width * num_channels;
- if (!dst_image.process_scanline(pScanline))
- return false;
- }
- if (!dst_image.process_scanline(NULL))
- return false;
- }
-
- dst_image.deinit();
-
- buf_size = dst_stream.get_size();
- return true;
-}
-
-} // namespace jpge
\ No newline at end of file
diff --git a/spaces/zhuolisam/resume-ranker/gradio_app.py b/spaces/zhuolisam/resume-ranker/gradio_app.py
deleted file mode 100644
index ef2bf56b6b8298d56ea9ebabc70db140faecb16a..0000000000000000000000000000000000000000
--- a/spaces/zhuolisam/resume-ranker/gradio_app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from pdf_loader import load_documents
-from core import pipeline
-
-import gradio as gr
-
-def inference(query, files):
- #get path of uploaded files
- files = [file.name for file in files]
- results,_ = pipeline(query, load_documents(file_paths=files))
-
- prob_per_documents = {result['name']: result['similarity'] for result in results}
- return prob_per_documents
-
-with gr.Blocks() as demo:
- #write a header
-
- job_desc = gr.Textbox(lines=5, label="Job Description")
- files = gr.File(file_count="multiple", file_types=[".txt",".pdf"], label="Upload Resume")
- btn = gr.Button("Submit")
- output = gr.Label(label="Results")
- # output = gr.Number(label="Results")
- btn.click(inference, inputs=[job_desc, files], outputs=output)
-
-demo.launch()
\ No newline at end of file
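The same ranking pipeline can also be driven without the Gradio UI. A minimal standalone sketch, reusing the imports from the app above; the query string and file names are hypothetical placeholders, and the result fields mirror what inference() reads:

```python
from pdf_loader import load_documents
from core import pipeline

# Rank two local resumes (placeholder paths) against a job description.
query = "Senior Python engineer with NLP experience"
results, _ = pipeline(query, load_documents(file_paths=["resume_a.pdf", "resume_b.pdf"]))

for result in results:
    print(result["name"], result["similarity"])
```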
diff --git a/spaces/zomehwh/sovits-xiaoke/models.py b/spaces/zomehwh/sovits-xiaoke/models.py
deleted file mode 100644
index 5d8f154887a43a5c5f67cf6340f74268398e32d5..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/sovits-xiaoke/models.py
+++ /dev/null
@@ -1,351 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import attentions
-import commons
-import modules
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-from vdecoder.hifigan.models import Generator
-from utils import f0_to_coarse
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
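-# Posterior encoder: WaveNet-style (WN) layers over the spectrogram, projected
-# to a mean and log-variance from which the latent z is sampled.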
-class Encoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- # print(x.shape,x_lengths.shape)
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
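-# Content encoder: projects SSL content features, adds a coarse-F0 embedding,
-# runs a self-attention encoder and predicts the prior mean and log-variance.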
-class TextEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- filter_channels=None,
- n_heads=None,
- p_dropout=None):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
- self.f0_emb = nn.Embedding(256, hidden_channels)
-
- self.enc_ = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
-
- def forward(self, x, x_lengths, f0=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = x + self.f0_emb(f0).transpose(1,2)
- x = self.enc_(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-
- return z, m, logs, x_mask
-
-
-
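-# HiFi-GAN style period discriminator: folds the 1D waveform into a 2D map of
-# shape (T // period, period) and applies strided 2D convolutions to it.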
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
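-# Scale discriminator: strided, grouped 1D convolutions over the raw waveform.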
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
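-# Combines one scale discriminator with period discriminators at periods
-# 2, 3, 5, 7 and 11, returning per-discriminator outputs and feature maps.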
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
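-# Utterance-level speaker encoder: an LSTM over mel frames followed by an
-# L2-normalised linear projection; embed_utterance averages partial windows.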
-class SpeakerEncoder(torch.nn.Module):
- def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256):
- super(SpeakerEncoder, self).__init__()
- self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
- self.linear = nn.Linear(model_hidden_size, model_embedding_size)
- self.relu = nn.ReLU()
-
- def forward(self, mels):
- self.lstm.flatten_parameters()
- _, (hidden, _) = self.lstm(mels)
- embeds_raw = self.relu(self.linear(hidden[-1]))
- return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
-
- def compute_partial_slices(self, total_frames, partial_frames, partial_hop):
- mel_slices = []
- for i in range(0, total_frames-partial_frames, partial_hop):
- mel_range = torch.arange(i, i+partial_frames)
- mel_slices.append(mel_range)
-
- return mel_slices
-
- def embed_utterance(self, mel, partial_frames=128, partial_hop=64):
- mel_len = mel.size(1)
- last_mel = mel[:,-partial_frames:]
-
- if mel_len > partial_frames:
- mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop)
- mels = list(mel[:,s] for s in mel_slices)
- mels.append(last_mel)
- mels = torch.stack(tuple(mels), 0).squeeze(1)
-
- with torch.no_grad():
- partial_embeds = self(mels)
- embed = torch.mean(partial_embeds, axis=0).unsqueeze(0)
- #embed = embed / torch.linalg.norm(embed, 2)
- else:
- with torch.no_grad():
- embed = self(last_mel)
-
- return embed
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- ssl_dim,
- n_speakers,
- **kwargs):
-
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- self.ssl_dim = ssl_dim
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16, 0, filter_channels, n_heads, p_dropout)
- hps = {
- "sampling_rate": 48000,
- "inter_channels": 192,
- "resblock": "1",
- "resblock_kernel_sizes": [3, 7, 11],
- "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- "upsample_rates": [10, 8, 2, 2],
- "upsample_initial_channel": 512,
- "upsample_kernel_sizes": [16, 16, 4, 4],
- "gin_channels": 256,
- }
- self.dec = Generator(h=hps)
- self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
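- # Training path: encode content (prior) and spectrogram (posterior), map the
- # posterior latent through the flow, and decode a random pitch-aligned slice.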
- def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None):
- if c_lengths is None:
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
- if spec_lengths is None:
- spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device)
-
- g = self.emb_g(g).transpose(1,2)
-
- z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0))
- z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g)
-
- z_p = self.flow(z, spec_mask, g=g)
- z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size)
-
- # o = self.dec(z_slice, g=g)
- o = self.dec(z_slice, g=g, f0=pitch_slice)
-
- return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
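- # Inference path: sample the prior from content features and F0, invert the
- # flow, and decode the full latent conditioned on speaker embedding and F0.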
- def infer(self, c, f0, g=None, mel=None, c_lengths=None):
- if c_lengths is None:
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
- g = self.emb_g(g).transpose(1,2)
-
- z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0))
- z = self.flow(z_p, c_mask, g=g, reverse=True)
-
- o = self.dec(z * c_mask, g=g, f0=f0)
-
- return o