Cisdem PDF Converter OCR 7.5.0: A Powerful Tool to Convert and Edit PDFs on Mac
-
If you are looking for a reliable and versatile PDF converter for your Mac, you might want to check out Cisdem PDF Converter OCR 7.5.0. This software can help you convert any native PDF, scanned PDF, encrypted PDF, or image file to various editable and searchable formats, such as Word, Excel, PowerPoint, ePub, HTML, Text, RTF, Pages, Keynote, and images (JPEG, BMP, PNG, GIF, TIFF). It also uses OCR technology to recognize text in images and scanned documents, and lets you customize the conversion by selecting specific areas, recognition languages, and output quality.
-
In this article, we will review some of the key features and benefits of Cisdem PDF Converter OCR 7.5.0 for Mac users.
Convert PDFs to Multiple Formats with High Accuracy
-
One of the main advantages of Cisdem PDF Converter OCR 7.5.0 is that it can handle various types of PDFs and images, and convert them to different formats according to your needs. Whether you want to edit a PDF document in Word or PowerPoint, create an e-book in ePub format, publish a PDF on the web as HTML, or extract data from a PDF table to Excel, you can do it easily with this software. You can also convert PDFs and images to iWork files (Pages, Keynote) for use in other office editor apps.
-
Moreover, Cisdem PDF Converter OCR 7.5.0 can preserve the original layout and file quality of your source files after conversion. It has up to 99.8% character recognition accuracy and can retain the fonts, colors, graphics, tables, columns, and other elements of your documents. You can also adjust the output quality and size of your converted files according to your preferences.
-
Perform OCR on Scanned Documents and Images
-
Another useful feature of Cisdem PDF Converter OCR 7.5.0 is that it can perform optical character recognition (OCR) on scanned documents and images that contain text. This means that you can convert these files into editable and searchable formats as well, instead of having to retype or copy-paste the content manually.
-
Cisdem PDF Converter OCR 7.5.0 can scan and recognize text in 49 languages, including English, French, Italian, and Chinese, and it can also convert PDF files that contain multiple languages. You can also select the specific areas of your files that you want to convert with the four markup options: select, mark texts, mark images, and mark tables.
-
Create PDFs from Other Documents
-
Besides converting PDFs and images to other formats, Cisdem PDF Converter OCR 7.5.0 can also create PDFs from other documents such as Word, PowerPoint, HTML, ePub, etc. You can simply drag and drop your files into the software interface and choose the output format as PDF. You can also merge multiple files into one PDF document if you want.
-
Cisdem PDF Converter OCR 7.5.0 can create high-quality PDFs that are compatible with various devices and platforms. You can also encrypt your PDFs with passwords and permissions to protect your sensitive information.
-
How to Download and Install Cisdem PDF Converter OCR 7.5.0 for Mac
-
If you are interested in trying out Cisdem PDF Converter OCR 7.5.0 for Mac, you can download it from the official website[^1^] or from other reputable sources[^2^] [^3^] [^4^]. The software is compatible with macOS 10.10 or later versions.
-
To install Cisdem PDF Converter OCR 7.5.0 for Mac, you need to follow these steps:
-
-
-
Download the DMG file from the website.
-
Double-click the DMG file to open it.
-
Drag and drop the Cisdem PDF Converter OCR icon to the Applications folder.
-
Launch the software from the Applications folder or the Launchpad.
-
Enter your license code or start a free trial.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA Online 3 The Ultimate Online Football Experience.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA Online 3 The Ultimate Online Football Experience.md
deleted file mode 100644
index f2c38f0e23286cccc486e5675eea32fcfaacb65e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA Online 3 The Ultimate Online Football Experience.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download FIFA Online 3 and Play Football with Real Players
-
FIFA Online 3 is a free-to-play online football game that lets you manage your own team and compete with other players from around the world. You can choose from over 30 leagues and 15,000 real-world players to build your dream squad. Whether you want to play single-player through a season or challenge other online players in various modes, FIFA Online 3 has something for every football fan.
In this article, we will show you how to download FIFA Online 3 and start playing right away.
-
Step 1: Visit the official website of FIFA Online 3
-
The first thing you need to do is to visit the official website of FIFA Online 3. Depending on your region, you may need to select a different website. For example, if you are in Asia, you can go to https://fo3.garena.com/. If you are in Europe, you can go to https://www.ea.com/games/fifa/fifa-22.
-
Step 2: Register an account and download the game client
-
Once you are on the website, you will need to register an account and download the game client. You can either use your email address or your social media account to sign up. After that, you will be able to download the game client from the website. The file size is about 4 GB, so it may take some time depending on your internet speed.
-
Step 3: Install and launch the game
-
After downloading the game client, you will need to install and launch the game. Follow the instructions on the screen and agree to the terms and conditions. Once the installation is complete, you can launch the game from your desktop or start menu.
-
Step 4: Create your team and start playing
-
When you launch the game, you will be asked to create your team and choose your preferred league and players. You can customize your team name, logo, kit, formation, tactics, and more. You can also buy new players and items using the in-game currency called EP.
-
-
After creating your team, you can start playing the game. You can either play single-player through a season or play against other online players in various modes. Some of the modes include FIFA Online World Tour, League Mode, Tournament Mode, and more. You can also join a club and chat with other players.
-
FIFA Online 3 is a fun and exciting online football game that lets you experience the thrill of managing and playing with your favorite team and players. If you are a fan of football, you should definitely give it a try.
Step 5: Learn some tips and tricks to improve your game
-
If you want to improve your game and win more matches, you may want to learn some tips and tricks from the experts. Here are some of the things you can do to enhance your skills and strategies in FIFA Online 3.
-
-
Practice your basic controls and moves. You can use the tutorial mode or the practice mode to learn how to pass, shoot, dribble, tackle, and more. You can also adjust the difficulty level and the game speed to suit your preference.
-
Study your opponents and their tactics. You can use the scouting feature to see the stats and formation of your opponents before you play against them. You can also watch replays of their matches to analyze their strengths and weaknesses.
-
Use the right players for the right positions. You can use the player search feature to find the best players for your team based on their attributes, skills, and ratings. You can also use the chemistry feature to see how well your players work together on the pitch.
-
Upgrade your players and items. You can use the training feature to improve your players' abilities and potential. You can also use the upgrade feature to enhance your items' effects and durability.
-
Join a club and cooperate with other players. You can join a club or create your own club and invite other players to join. You can chat with your club members, share tips and strategies, and play together in club matches and tournaments.
-
-
By following these tips and tricks, you will be able to master FIFA Online 3 and become a champion.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Ashampoo Burning Studio.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Ashampoo Burning Studio.md
deleted file mode 100644
index b5aee3184ffb92f390e17d61b428a95d11963e73..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Ashampoo Burning Studio.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
Free Download Ashampoo Burning Studio: A Simple and Powerful CD/DVD Burner
-
If you are looking for a free and easy-to-use software to burn your data, music, videos, or backups to CD or DVD discs, you might want to try Ashampoo Burning Studio. This is a popular and feature-rich disc burning software that can handle all your disc burning needs with speed and convenience.
-
In this article, we will show you how to free download Ashampoo Burning Studio and what you can do with it.
To free download Ashampoo Burning Studio, you can visit the official website of Ashampoo or other trusted software download sites such as CNET or FileHippo. You will find a download button that will start the download process. The file size is about 60 MB and it will take a few minutes to complete the download depending on your internet speed.
-
Once the download is finished, you can run the setup file and follow the instructions to install the software on your computer. The installation is quick and easy and you can customize some settings such as the language, the installation folder, and the desktop shortcut. You will also need to register the software for free with your email address to activate it.
-
What You Can Do with Ashampoo Burning Studio
-
Ashampoo Burning Studio is a versatile and powerful disc burning software that can do a lot of things for you. Here are some of the main features of Ashampoo Burning Studio:
-
-
Burn data discs: You can burn any files or folders to CD, DVD, or Blu-ray discs with ease. You can also update or erase existing discs if they are rewritable. You can choose from different file systems and settings to optimize your discs for different purposes.
-
Create and burn backups: You can create and burn compressed and password-protected backups of your important data to CD, DVD, or Blu-ray discs. You can also restore your backups from the discs with a few clicks. Ashampoo Burning Studio can also split large backups into multiple volumes if they don't fit on a single disc.
-
Rip or create audio CDs: You can rip audio CDs to MP3, WMA, or WAV files with automatic track recognition. You can also create your own audio CDs from various audio formats with built-in normalization and player. You can also create MP3 or WMA discs that can store more songs than traditional audio CDs.
-
Burn movies: You can burn HD and Full HD videos to CD, DVD, or Blu-ray discs as long as you have them in a prepared folder. You can also create video CDs (VCD) or super video CDs (SVCD) from standard videos. Ashampoo Burning Studio supports various video formats such as AVI, MP4, MKV, MOV, etc.
-
Handle disc images: You can create or burn disc images from data files in various formats such as ISO, CUE/BIN, or ASHDISC. You can also mount disc images as virtual drives and access them without burning them to discs.
-
-
Conclusion
-
Ashampoo Burning Studio is a free and reliable disc burning software that can help you burn your data, music, videos, or backups to CD, DVD, or Blu-ray discs with ease. It has a simple and intuitive interface that makes it suitable for beginners and advanced users alike. It also has many features and options that let you customize your discs according to your needs.
-
If you want to free download Ashampoo Burning Studio and try it out for yourself, you can visit the links below:
-
-How to activate Adobe Acrobat Pro DC 2019.008.20074: go to www.adobe.com/activate/acrobat/, enter your Adobe ID, click on the “Activate my Adobe Acrobat Professional 2019” button, then enter the link you receive and click on the “Activate Acrobat Professional 2019” button. You need the Adobe Acrobat Pro DC 2019.008.20074 activation code to activate your copy; if you do not have it, you can download the .xml code files of the activation code from the link below.
-
-How to install Adobe Acrobat Pro DC 2019.008.20074 on your computer: download the Adobe Acrobat Pro DC 2019.008.20074 file from the link above and follow the instructions that come with the file to install it on your PC. If you received a message saying “This software will activate in 5 minutes”, you can assume that the activation code has been sent to your e-mail address.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Firebug For Firefox 16.0.2.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Firebug For Firefox 16.0.2.md
deleted file mode 100644
index baa0b0f84648f3d438b162caa9e8380dde5720cc..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Firebug For Firefox 16.0.2.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download Firebug for Firefox 16.0.2
-
Firebug is a web development tool that integrates with Firefox and allows you to edit, debug, and monitor CSS, HTML, and JavaScript live in any web page[^1^]. It is a useful tool for web developers who want to inspect and tweak the code of their websites.
-
However, Firebug is no longer maintained and updated by its developers, and it is not compatible with the latest versions of Firefox. The last version of Firebug was 2.0.19, which was released in October 2016 and only works with Firefox versions up to 49[^2^]. If you are using Firefox 16.0.2, which was released in November 2012, you can still download and install Firebug 2.0.19 from the official website[^1^]. Here are the steps to do so:
Download the file firebug-2.0.19.xpi from the official Firebug website and save it to your computer.
-
Open Firefox 16.0.2, click the orange Firefox button (or open the Tools menu) and select Add-ons.
-
Click on the gear icon and select Install Add-on From File.
-
Browse to the location where you saved firebug-2.0.19.xpi and select it.
-
Click on Install Now and restart Firefox when prompted.
-
Firebug should now be installed and you can access it by clicking on the Firebug icon (a bug with a flame) in the toolbar or by pressing F12.
-
-
Note that Firebug is no longer supported by its developers and may not work properly with some websites or features. It is recommended that you switch to the latest version of Firefox and use the built-in developer tools instead[^1^]. You can also try other web development tools such as Chrome DevTools or Web Inspector.
-
-
If you want to learn more about Firebug and how to use it for web development, you can check out the official website which has a lot of documentation, tutorials, and tips. You can also visit the Firebug blog to read about the latest news and updates on Firebug. You can also join the Firebug community on Google Groups, Stack Overflow, or Twitter to ask questions, share feedback, or report bugs.
-
Firebug has been a pioneer and a leader in web development tools for many years, and it has helped millions of web developers create amazing websites. However, as the web evolves and new technologies emerge, Firebug has become outdated and incompatible with modern browsers. The Firebug team has decided to stop working on Firebug and focus on contributing to the Firefox developer tools instead. The Firefox developer tools are based on some of the features and concepts of Firebug, but they are more advanced, powerful, and integrated with Firefox. They offer a range of tools such as inspector, console, debugger, network monitor, performance analyzer, storage inspector, accessibility inspector, and more. You can access them by pressing Ctrl+Shift+I or by clicking on the menu button and selecting Web Developer.
-
If you are still using Firefox 16.0.2 and Firebug 2.0.19, you may want to consider upgrading to the latest version of Firefox and switching to the Firefox developer tools. You will get a better browsing experience, more security and privacy, and more web development features. You can download the latest version of Firefox from https://www.mozilla.org/en-US/firefox/new/. You can also learn more about the Firefox developer tools from https://developer.mozilla.org/en-US/docs/Tools.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download __LINK__ Iron Man 2008 In Hindi.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download __LINK__ Iron Man 2008 In Hindi.md
deleted file mode 100644
index 81137a3d299fd79db1b7fef0a8f60f1c6e51109b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download __LINK__ Iron Man 2008 In Hindi.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
How to Download Iron Man (2008) Movie in Hindi-Eng
-
Iron Man (2008) is a Hollywood movie based on Action, Adventure, and Science Fiction. It stars Robert Downey Jr. as Tony Stark, a billionaire engineer who creates a unique weaponized suit of armor to fight evil after being held captive in an Afghan cave. The movie was directed by Jon Favreau and received positive reviews from critics and audiences alike.
-
If you want to download Iron Man (2008) movie in Hindi-Eng, you have several options to choose from. You can either use a torrent site, a streaming site, or an archive site. Here are some of the best sources to download Iron Man (2008) movie in Hindi-Eng:
Torrent site: You can use a torrent site like The Pirate Bay or 1337x to download Iron Man (2008) movie in Hindi-Eng. You will need a torrent client like uTorrent or BitTorrent to download the movie file. You can also use a VPN service to hide your IP address and avoid any legal issues. Some of the torrent links for Iron Man (2008) movie in Hindi-Eng are:
-
-
-
Iron Man 720p ( Hindi) : Free Download, Borrow, and Streaming : Internet Archive[^1^]
-
Download Iron Man (2008) Movie | Hindi-Eng | PogoLinks[^2^]
-
Iron Man (2008) Dual Audio Hindi-English 480p [375MB ... - Mkvhub[^3^]
-
-
-
-
Streaming site: You can use a streaming site like PogoLinks or Mkvhub to watch Iron Man (2008) movie in Hindi-Eng online. You will need a good internet connection and a compatible device to stream the movie. You can also download the movie from these sites if you want. Some of the streaming links for Iron Man (2008) movie in Hindi-Eng are:
-
-
-
Download Iron Man (2008) Movie | Hindi-Eng | PogoLinks[^2^]
-
Iron Man (2008) Dual Audio Hindi-English 480p [375MB ... - Mkvhub[^3^]
-
-
-
-
Archive site: You can use an archive site like Internet Archive or Archive.org to download Iron Man (2008) movie in Hindi-Eng. These sites store old and rare movies that are not available elsewhere. You can also watch the movie online on these sites if you want. Some of the archive links for Iron Man (2008) movie in Hindi-Eng are:
-
-
-
Iron Man 720p ( Hindi) : Free Download, Borrow, and Streaming : Internet Archive[^1^]
-
Iron Man 2008 Dual Audio Hindi Dubbed 480p 300mb Download - aFilmywap[^4^]
-
-
-
-
-
We hope this article helped you find the best source to download Iron Man (2008) movie in Hindi-Eng. Enjoy watching the movie and let us know your feedback in the comments below.
-
-
Iron Man (2008) is the first movie in the Marvel Cinematic Universe (MCU), a series of superhero movies based on the Marvel Comics characters. The movie was a huge success at the box office and received several awards and nominations, including two Academy Award nominations for Best Sound Editing and Best Visual Effects. The movie also spawned two sequels, Iron Man 2 (2010) and Iron Man 3 (2013), and several crossover movies with other MCU characters.
-
The movie follows the story of Tony Stark, a genius inventor and CEO of Stark Industries, a leading weapons manufacturer. While demonstrating his latest missile in Afghanistan, he is captured by a terrorist group called the Ten Rings, who force him to build a weapon for them. Instead, he secretly builds a miniaturized arc reactor to power his heart and a suit of armor to escape. He then returns to America and announces that he will stop making weapons and use his technology for good. However, he faces opposition from his business partner Obadiah Stane, who wants to use his arc reactor for his own nefarious purposes.
-
Iron Man (2008) is a movie that combines action, humor, and drama in a thrilling and entertaining way. The movie showcases the origin story of one of the most popular and iconic superheroes of all time. The movie also features an impressive cast of actors, including Gwyneth Paltrow as Pepper Potts, Jeff Bridges as Obadiah Stane, Terrence Howard as James Rhodes, and Samuel L. Jackson as Nick Fury. The movie also has a memorable soundtrack composed by Ramin Djawadi and featuring songs by AC/DC, Black Sabbath, and Audioslave.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Askies by JazziDisciples and Mr JazziQ The Meaning Lyrics and Reviews of the Amapiano Smash Hit.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Askies by JazziDisciples and Mr JazziQ The Meaning Lyrics and Reviews of the Amapiano Smash Hit.md
deleted file mode 100644
index 74bbcef219718d18356803dba32d02c649acd9bc..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Askies by JazziDisciples and Mr JazziQ The Meaning Lyrics and Reviews of the Amapiano Smash Hit.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Download Askies Jazzidisciples: How to Enjoy the Best of Amapiano Music Online
-
If you are a fan of amapiano music, you have probably heard of askies jazzidisciples, one of the most popular songs in this genre. But do you know what is askies jazzidisciples, who are the artists behind it, and why it is so popular among amapiano fans? In this article, we will answer these questions and show you how to download askies jazzidisciples for free online or stream it on various platforms.
-
What is Askies Jazzidisciples?
-
Askies jazzidisciples is a song by Mr JazziQ and JazziDisciples, featuring Josiah De Disciple, FakeLove, Moonchild Sanelly, and MDU aka TRP. It was released in March 2020 as part of Mr JazziQ's debut album Mr JazziQ 0303. The song is a fusion of amapiano, house, kwaito, and gqom elements, creating a unique sound that appeals to a wide audience.
Askies jazzidisciples is not only a catchy song, but also a showcase of some of the most talented artists in the amapiano scene. Let's take a look at who they are and what they bring to the table.
-
Mr JazziQ and JazziDisciples
-
Mr JazziQ, whose real name is Tumelo Manyoni, is a South African amapiano DJ and record producer. He is best known for being a former member of the amapiano DJ duo, JazziDisciples, alongside Josiah De Disciple. The duo started their career in 2018 and released several projects that found their footing on the dancefloor and on streaming platforms, such as The Load Shedding EP, IOP EP, Disciples Of Piano, and 0303. They also collaborated with other amapiano artists like Vigro Deep, Mdu aka TRP, and Kabza De Small.
-
In 2020, Mr JazziQ and Josiah De Disciple decided to split and focus on their individual music careers. Mr JazziQ released his first solo debut album 0303 in March 2020, which featured askies jazzidisciples as well as other hits like Blue Skies, Hello Mo'Girl, and VSOP. His single Askies, which featured singer Moonchild Sanelly and FakeLove, was certified gold by RiSA. He also released other successful singles and albums, such as Umsebenzi Wethu, Amaneighbour, Woza, Party With The English, and All You Need Is Piano.
-
Josiah De Disciple
-
Josiah De Disciple, whose real name is Josiah Makoela, is a South African DJ and record producer who was also part of the JazziDisciples duo. He started his DJ career at the age of 14 and producing at the age of 16. He met Mr JazziQ in Alexandra and they formed a partnership that lasted for two years.
-
After going solo in 2020, Josiah De Disciple released his debut studio album Spirits of Makoela – Vol. 2: The Reintroduction in April 2021. The album featured 14 tracks with guest appearances from Kabza De Small, Boohle, Cecil M, Jessica LM, and others. The album received positive reviews from critics and fans who praised its blend of amapiano with jazz and soul elements. Josiah De Disciple also collaborated with Boohle on another album called Umbuso Wabam'nyama in 2020, which included the gold-certified single Mama.
-
FakeLove
-
FakeLove is a South African singer and songwriter who is known for his smooth vocals and catchy hooks. He is influenced by various genres of music such as R&B, soul, pop, hip hop, and amapiano. He has worked with several prominent artists in the industry such as Mr JazziQ, Kabza De Small, DJ Maphorisa, Sha Sha, Samthing Soweto, MFR Souls, and more.
-
Some of his notable songs include Askies with Mr JazziQ and Moonchild Sanelly, Nguwe with DJ Maphorisa and Kabza De Small, Banyana with DJ Maphorisa and Tyler ICU, Ntyilo Ntyilo with Rethabile Khumalo, Mali Mali with MFR Souls and Focalistic, and many more. He is also part of the Scorpion Kings Live project by DJ Maphorisa and Kabza De Small.
-
-
Moonchild Sanelly
-
Moonchild Sanelly is a South African musician and dancer who is known for her signature blue-colored hair and her self-created music genre called "Future ghetto punk". She was born into a musical family in Port Elizabeth and moved to Durban in 2005 to study fashion. She started performing in shows at Durban University of Technology with a focus on poetry and hip hop. She later moved to Johannesburg to pursue her musical career.
-
Moonchild Sanelly has a unique style that combines elements of kwaito, house, dancehall, funk, electronic, R&B, soul, amapiano, and more. She has collaborated with various local and international artists such as Busiswa, Die Antwoord, Beyoncé, Gorillaz, Diplo, and more.
-
Some of her popular songs include Bashiri, Thunda Thighs, Where De Dee Kat, F-Boyz, Newtown Chips, Askies, and many more. She also has her own reality show on MTV Africa called Moonchild Sanelly Woza.
-
MDU aka TRP
-
MDU aka TRP is a South African amapiano DJ and record producer who is known for his versatile and innovative sound. He started making music at the age of 13 and was inspired by artists like Black Coffee, DJ Fresh, and Oskido. He has worked with several amapiano heavyweights such as Kabza De Small, DJ Maphorisa, Mr JazziQ, JazziDisciples, MFR Souls, and more.
-
Some of his notable songs include Askies with Mr JazziQ and JazziDisciples, Banyana with DJ Maphorisa and Tyler ICU, Sgubu Se Monati with JazziDisciples and Vigro Deep, Sabanika with Njelic and De Mthuda, 16 Inch with Bongza and Daliwonga, and many more. He also released several EPs and albums such as Pull Up 2, Amapiano Is A Lifestyle Vol. 2, Tales Of The 2 Peers, Boomerang, and more.
-
Why is Askies Jazzidisciples Popular Among Amapiano Fans?
-
Askies jazzidisciples is not only a song that features some of the best artists in the amapiano scene, but also a song that captures the essence of amapiano music. Amapiano is a genre of music that originated in South Africa in the early 2010s and has since become a global phenomenon. It is characterized by its use of piano melodies, basslines, drums, percussions, synths, vocals, and samples from various genres such as house, kwaito, jazz, soul, and more.
-
Askies jazzidisciples is a song that showcases the diversity and creativity of amapiano music. It has a catchy chorus that repeats the word "askies", which means "sorry" or "excuse me" in South African slang. It also has a groovy beat that makes you want to dance along. It features different vocal styles from the artists, such as Moonchild Sanelly's energetic rap verses, FakeLove's smooth singing hooks, Josiah De Disciple's soulful harmonies, and MDU aka TRP's signature ad-libs. It also incorporates elements from other genres such as house, gqom, kwaito, and jazz.
-
Askies jazzidisciples is a song that appeals to amapiano fans because it represents the culture and lifestyle of amapiano lovers. It is a song that celebrates the joy of music, dancing, partying, and having fun with friends. It is a song that expresses the attitude and spirit of amapiano fans who are not afraid to say "askies" to anyone who tries to stop them from enjoying their lives.
-
How to Download Askies Jazzidisciples for Free Online?
-
If you want to download askies jazzidisciples for free online, you have several options to choose from. Here are some of the best ways to download askies jazzidisciples for free online.
-
Use OKmusi MP3 Downloader
-
OKmusi MP3 Downloader is a free online tool that allows you to download any MP3 song from YouTube or other websites. You can use it to download askies jazzidisciples for free online by following these steps:
Go to the OKmusi MP3 Downloader website, type "askies jazzidisciples" in the search box and click on the search icon.
-
Select the song from the list of results and click on the download button.
-
Choose the quality and format you want and click on the download button again.
-
Wait for the download to finish and enjoy your song.
-
-
-
Use Bandcamp
-
Bandcamp is an online platform that allows independent artists to sell their music directly to fans. You can use it to download askies jazzidisciples for free online by following these steps:
Open the app and sign in with your Apple ID or create a new one.
-
Tap on the search icon and type "askies jazzidisciples" in the search box.
-
Select the song from the list of results and tap on the play button.
-
Enjoy your song.
-
-
-
Use Spotify
-
Spotify is a streaming service that allows you to access millions of songs, podcasts, playlists, and more. You can use it to stream askies jazzidisciples online by following these steps:
Open the app and sign in with your Spotify account or create a new one.
-
Tap on the search icon and type "askies jazzidisciples" in the search box.
-
Select the song from the list of results and tap on the play button.
-
Enjoy your song.
-
-
-
Conclusion
-
Askies jazzidisciples is a song that you should not miss if you are a fan of amapiano music. It is a song that features some of the best artists in the amapiano scene, such as Mr JazziQ, JazziDisciples, Josiah De Disciple, FakeLove, Moonchild Sanelly, and MDU aka TRP. It is a song that showcases the diversity and creativity of amapiano music, blending elements from various genres such as house, kwaito, gqom, and jazz. It is a song that appeals to amapiano fans because it represents the culture and lifestyle of amapiano lovers, celebrating the joy of music, dancing, partying, and having fun with friends.
-
If you want to download askies jazzidisciples for free online or stream it on various platforms, you have several options to choose from. You can use OKmusi MP3 Downloader, Bandcamp, or DatPiff to download askies jazzidisciples for free online. You can also use Shazam, Apple Music, or Spotify to stream askies jazzidisciples online. Whichever option you choose, you will be able to enjoy this amazing song anytime and anywhere.
-
We hope this article has helped you learn more about askies jazzidisciples and how to enjoy it online. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy listening!
-
FAQs
-
Here are some of the frequently asked questions related to askies jazzidisciples and their answers.
-
What does askies mean?
-
Askies is a South African slang word that means "sorry" or "excuse me". It is often used as a polite way of apologizing or getting someone's attention. In the context of the song askies jazzidisciples, it is also used as a way of expressing one's attitude and spirit of having fun and not caring about what others think.
-
Who produced askies jazzidisciples?
-
Askies jazzidisciples was produced by Mr JazziQ and JazziDisciples, who are both amapiano DJs and record producers. They are also former members of the amapiano duo JazziDisciples, which split in 2020.
-
Where can I find the lyrics of askies jazzidisciples?
\n\n"+
- "# RVC V2 RANDOM\n\n"+
- "### Google Colab is recommended if you want to use other characters and features.\n\n"+
- "#### All of these voice samples are taken from the AIHUB Discord.\n\n"+
- "[](https://colab.research.google.com/drive/1NyltTdzRIAwTDWz4ffjk-aBPLZtcAU-b?usp=share_link)\n\n"+
- "
\n\n"+
- "[](https://github.com/ArkanDash/Multi-Model-RVC-Inference)"
- )
- for (folder_title, folder, description, models) in categories:
- with gr.TabItem(folder_title):
- if description:
- gr.Markdown(f"###
{description}")
- with gr.Tabs():
- if not models:
- gr.Markdown("#
No Model Loaded.")
- gr.Markdown("##
Please add model or fix your model path.")
- continue
- for (name, title, author, cover, model_version, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- '
'
- f'
{title}
\n'+
- f'
RVC {model_version} Model
\n'+
- (f'
Model author: {author}
' if author else "")+
- (f'' if cover else "")+
- '
'
- )
- with gr.Row():
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- # Upload
- vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True)
- vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
- vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- info="(Default: 0.7)",
- value=0.7,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=3,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.5,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=4,
- interactive=True,
- step=1,
- info="Adjust vocal volume (Default: 4)",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- vc_convert.click(
- fn=vc_fn,
- inputs=[
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- vc_transform0,
- f0method0,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
- outputs=[vc_log ,vc_output]
- )
- vc_split.click(
- fn=cut_vocal_and_inst,
- inputs=[vc_link, vc_download_audio, vc_split_model],
- outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input]
- )
- vc_combine.click(
- fn=combine_vocal_and_inst,
- inputs=[vc_output, vc_volume, vc_split_model],
- outputs=[vc_combined_output]
- )
- vc_microphone_mode.change(
- fn=use_microphone,
- inputs=vc_microphone_mode,
- outputs=vc_upload
- )
- vc_audio_mode.change(
- fn=change_audio_mode,
- inputs=[vc_audio_mode],
- outputs=[
- vc_input,
- vc_microphone_mode,
- vc_upload,
- vc_download_audio,
- vc_link,
- vc_split_model,
- vc_split,
- vc_vocal_preview,
- vc_inst_preview,
- vc_audio_preview,
- vc_volume,
- vc_combined_output,
- vc_combine,
- tts_text,
- tts_voice
- ]
- )
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab)
\ No newline at end of file
diff --git a/spaces/a121440357/bingAI/README.md b/spaces/a121440357/bingAI/README.md
deleted file mode 100644
index c03d04762272746ca9d9d53adac383647ee8315d..0000000000000000000000000000000000000000
--- a/spaces/a121440357/bingAI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: BingAI
-emoji: 🏃
-colorFrom: green
-colorTo: purple
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abdvl/datahub_qa_bot/docs/actions/guides/developing-a-transformer.md b/spaces/abdvl/datahub_qa_bot/docs/actions/guides/developing-a-transformer.md
deleted file mode 100644
index a843dbc846cd510838dfd2be98379b4d4a1a9a4a..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/actions/guides/developing-a-transformer.md
+++ /dev/null
@@ -1,133 +0,0 @@
-# Developing a Transformer
-
-In this guide, we will outline each step to developing a custom Transformer for the DataHub Actions Framework.
-
-## Overview
-
-Developing a DataHub Actions Transformer is a matter of extending the `Transformer` base class in Python, installing your
-Transformer to make it visible to the framework, and then configuring the framework to use the new Transformer.
-
-
-## Step 1: Defining a Transformer
-
-To implement a Transformer, we'll need to extend the `Transformer` base class and override the following functions:
-
-- `create()` - This function is invoked to instantiate the Transformer, with a free-form configuration dictionary
- extracted from the Actions configuration file as input.
-- `transform()` - This function is invoked when an Event is received. It should contain the core logic of the Transformer,
- and will return the transformed Event, or `None` if the Event should be filtered.
-
-Let's start by defining a new implementation of Transformer called `CustomTransformer`. We'll keep it simple-- this Transformer will
-print the configuration that is provided when it is created, and print any Events that it receives.
-
-```python
-# custom_transformer.py
-from datahub_actions.transform.transformer import Transformer
-from datahub_actions.event.event import EventEnvelope
-from datahub_actions.pipeline.pipeline_context import PipelineContext
-from typing import Optional
-
-class CustomTransformer(Transformer):
- @classmethod
- def create(cls, config_dict: dict, ctx: PipelineContext) -> "Transformer":
- # Simply print the config_dict.
- print(config_dict)
- return cls(config_dict, ctx)
-
- def __init__(self, config_dict: dict, ctx: PipelineContext):
- self.ctx = ctx
-
- def transform(self, event: EventEnvelope) -> Optional[EventEnvelope]:
- # Simply print the received event.
- print(event)
- # And return the original event (no-op)
- return event
-```
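-
-Because `transform()` may also return `None`, a Transformer can drop Events entirely instead of passing them through. Below is a second, hypothetical sketch along those lines; the `drop_event_types` config key and the `event_type` field on the envelope are illustrative assumptions, not guarantees made by the framework.
-
-```python
-# custom_filter_transformer.py (hypothetical example)
-from typing import Optional
-
-from datahub_actions.event.event import EventEnvelope
-from datahub_actions.pipeline.pipeline_context import PipelineContext
-from datahub_actions.transform.transformer import Transformer
-
-class CustomFilterTransformer(Transformer):
-    @classmethod
-    def create(cls, config_dict: dict, ctx: PipelineContext) -> "Transformer":
-        return cls(config_dict, ctx)
-
-    def __init__(self, config_dict: dict, ctx: PipelineContext):
-        # Event types to drop, read from the free-form config block (assumed key).
-        self.dropped_types = set(config_dict.get("drop_event_types", []))
-        self.ctx = ctx
-
-    def transform(self, event: EventEnvelope) -> Optional[EventEnvelope]:
-        # Returning None tells the framework to filter the Event out.
-        if event.event_type in self.dropped_types:
-            return None
-        return event
-```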
-
-
-## Step 2: Installing the Transformer
-
-Now that we've defined the Transformer, we need to make it visible to the framework by making
-it available in the Python runtime environment.
-
-The easiest way to do this is to just place it in the same directory as your configuration file, in which case the module name is the same as the file
-name - in this case it will be `custom_transformer`.
-
-### Advanced: Installing as a Package
-
-Alternatively, create a `setup.py` file in the same directory as the new Transformer to convert it into a package that pip can understand.
-
-```
-from setuptools import find_packages, setup
-
-setup(
- name="custom_transformer_example",
- version="1.0",
- packages=find_packages(),
- # if you don't already have DataHub Actions installed, add it under install_requires
- # install_requires=["acryl-datahub-actions"]
-)
-```
-
-Next, install the package
-
-```shell
-pip install -e .
-```
-
-from the directory containing the module (alternatively, run `python setup.py install`).
-
-Once we have done this, our class will be referenceable via `custom_transformer_example.custom_transformer:CustomTransformer`.
-
-
-## Step 3: Running the Action
-
-Now that we've defined our Transformer, we can create an Action configuration file that refers to the new Transformer.
-We will need to provide the fully-qualified Python module & class name when doing so.
-
-*Example Configuration*
-
-```yaml
-# custom_transformer_action.yaml
-name: "custom_transformer_test"
-source:
- type: "kafka"
- config:
- connection:
- bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-localhost:9092}
- schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://localhost:8081}
-transform:
- - type: "custom_transformer_example.custom_transformer:CustomTransformer"
- config:
- # Some sample configuration which should be printed on create.
- config1: value1
-action:
- # Simply reuse the default hello_world action
- type: "hello_world"
-```
-
-Next, run the `datahub actions` command as usual:
-
-```shell
-datahub actions -c custom_transformer_action.yaml
-```
-
-If all is well, your Transformer should now be receiving & printing Events.
-
-
-### (Optional) Step 4: Contributing the Transformer
-
-If your Transformer is generally applicable, you can raise a PR to include it in the core Transformer library
-provided by DataHub. All Transformers will live under the `datahub_actions/plugin/transform` directory inside the
-[datahub-actions](https://github.com/acryldata/datahub-actions) repository.
-
-Once you've added your new Transformer there, make sure that you make it discoverable by updating the `entry_points` section
-of the `setup.py` file. This allows you to assign a globally unique name for your Transformer, so that people can use
-it without defining the full module path.
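-
-For illustration only, that registration might look roughly like the snippet below. The entry-point group name and module path here are assumptions; copy the real values from the existing `entry_points` block in the datahub-actions `setup.py` rather than from this sketch.
-
-```python
-# Hypothetical sketch of registering the plugin in datahub-actions' setup.py;
-# the entry-point group name and module path are assumptions, not the repo's actual values.
-from setuptools import find_packages, setup
-
-setup(
-    name="datahub-actions",
-    packages=find_packages(),
-    entry_points={
-        "datahub_actions.transformer.plugins": [
-            "custom_transformer = datahub_actions.plugin.transform.custom_transformer:CustomTransformer",
-        ],
-    },
-)
-```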
-
-#### Prerequisites:
-
-Prerequisites for inclusion in the core Transformer library include:
-
-- **Testing** Define unit tests for your Transformer
-- **Deduplication** Confirm that no existing Transformer serves the same purpose, or can be easily extended to serve the same purpose
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/ann_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/ann_head.py
deleted file mode 100644
index 2e83882e4ae0398b04f5a9b0518703d659ea101d..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/ann_head.py
+++ /dev/null
@@ -1,257 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
- * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv
- * Copyright (c) OpenMMLab. All rights reserved.
-'''
-
-import torch
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from ..builder import HEADS
-from ..utils import SelfAttentionBlock as _SelfAttentionBlock
-from .decode_head import BaseDecodeHead
-
-
-class PPMConcat(nn.ModuleList):
- """Pyramid Pooling Module that only concat the features of each layer.
-
- Args:
- pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module.
- """
-
- def __init__(self, pool_scales=(1, 3, 6, 8)):
- super(PPMConcat, self).__init__(
- [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales])
-
- def forward(self, feats):
- """Forward function."""
- ppm_outs = []
- for ppm in self:
- ppm_out = ppm(feats)
- ppm_outs.append(ppm_out.view(*feats.shape[:2], -1))
- concat_outs = torch.cat(ppm_outs, dim=2)
- return concat_outs
-
-
-class SelfAttentionBlock(_SelfAttentionBlock):
- """Make a ANN used SelfAttentionBlock.
-
- Args:
- low_in_channels (int): Input channels of lower level feature,
- which is the key feature for self-attention.
- high_in_channels (int): Input channels of higher level feature,
- which is the query feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- share_key_query (bool): Whether share projection weight between key
- and query projection.
- query_scale (int): The scale of query feature map.
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, low_in_channels, high_in_channels, channels,
- out_channels, share_key_query, query_scale, key_pool_scales,
- conv_cfg, norm_cfg, act_cfg):
- key_psp = PPMConcat(key_pool_scales)
- if query_scale > 1:
- query_downsample = nn.MaxPool2d(kernel_size=query_scale)
- else:
- query_downsample = None
- super(SelfAttentionBlock, self).__init__(
- key_in_channels=low_in_channels,
- query_in_channels=high_in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=share_key_query,
- query_downsample=query_downsample,
- key_downsample=key_psp,
- key_query_num_convs=1,
- key_query_norm=True,
- value_out_num_convs=1,
- value_out_norm=False,
- matmul_norm=True,
- with_out=True,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-
-
-class AFNB(nn.Module):
- """Asymmetric Fusion Non-local Block(AFNB)
-
- Args:
- low_in_channels (int): Input channels of lower level feature,
- which is the key feature for self-attention.
- high_in_channels (int): Input channels of higher level feature,
- which is the query feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, low_in_channels, high_in_channels, channels,
- out_channels, query_scales, key_pool_scales, conv_cfg,
- norm_cfg, act_cfg):
- super(AFNB, self).__init__()
- self.stages = nn.ModuleList()
- for query_scale in query_scales:
- self.stages.append(
- SelfAttentionBlock(
- low_in_channels=low_in_channels,
- high_in_channels=high_in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=False,
- query_scale=query_scale,
- key_pool_scales=key_pool_scales,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- self.bottleneck = ConvModule(
- out_channels + high_in_channels,
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- def forward(self, low_feats, high_feats):
- """Forward function."""
- priors = [stage(high_feats, low_feats) for stage in self.stages]
- context = torch.stack(priors, dim=0).sum(dim=0)
- output = self.bottleneck(torch.cat([context, high_feats], 1))
- return output
-
-
-class APNB(nn.Module):
- """Asymmetric Pyramid Non-local Block (APNB)
-
- Args:
- in_channels (int): Input channels of key/query feature,
- which is the key feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, in_channels, channels, out_channels, query_scales,
- key_pool_scales, conv_cfg, norm_cfg, act_cfg):
- super(APNB, self).__init__()
- self.stages = nn.ModuleList()
- for query_scale in query_scales:
- self.stages.append(
- SelfAttentionBlock(
- low_in_channels=in_channels,
- high_in_channels=in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=True,
- query_scale=query_scale,
- key_pool_scales=key_pool_scales,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- self.bottleneck = ConvModule(
- 2 * in_channels,
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-
- def forward(self, feats):
- """Forward function."""
- priors = [stage(feats, feats) for stage in self.stages]
- context = torch.stack(priors, dim=0).sum(dim=0)
- output = self.bottleneck(torch.cat([context, feats], 1))
- return output
-
-
-@HEADS.register_module()
-class ANNHead(BaseDecodeHead):
- """Asymmetric Non-local Neural Networks for Semantic Segmentation.
-
- This head is the implementation of `ANNNet
- `_.
-
- Args:
- project_channels (int): Projection channels for Nonlocal.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): The pooling scales of key feature map.
- Default: (1, 3, 6, 8).
- """
-
- def __init__(self,
- project_channels,
- query_scales=(1, ),
- key_pool_scales=(1, 3, 6, 8),
- **kwargs):
- super(ANNHead, self).__init__(
- input_transform='multiple_select', **kwargs)
- assert len(self.in_channels) == 2
- low_in_channels, high_in_channels = self.in_channels
- self.project_channels = project_channels
- self.fusion = AFNB(
- low_in_channels=low_in_channels,
- high_in_channels=high_in_channels,
- out_channels=high_in_channels,
- channels=project_channels,
- query_scales=query_scales,
- key_pool_scales=key_pool_scales,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.bottleneck = ConvModule(
- high_in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.context = APNB(
- in_channels=self.channels,
- out_channels=self.channels,
- channels=project_channels,
- query_scales=query_scales,
- key_pool_scales=key_pool_scales,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- low_feats, high_feats = self._transform_inputs(inputs)
- output = self.fusion(low_feats, high_feats)
- output = self.dropout(output)
- output = self.bottleneck(output)
- output = self.context(output)
- output = self.cls_seg(output)
-
- return output
diff --git "a/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/2_\360\237\244\226Automated assessment.py" "b/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/2_\360\237\244\226Automated assessment.py"
deleted file mode 100644
index e410f2aecb4e6bc1e2cdae8100910984d4c7de94..0000000000000000000000000000000000000000
--- "a/spaces/achterbrain/Intel-Generative-Image-Dashboard/pages/2_\360\237\244\226Automated assessment.py"
+++ /dev/null
@@ -1,89 +0,0 @@
-import streamlit as st
-import numpy as np
-from itertools import compress
-from PIL import Image
-from Dashboard_setup import sidebar_information
-sidebar_information() # Move this up to be displayed before the evaluation functions are loaded
-from Dashboard_automation_setup import fun_dict
-
-st.title('Automated Assessment')
-st.write('On this page you can use automated assessment algorithms to assess how well uploaded images match their respective prompts. Note that the automatic assessment routines have not been validated; accuracy estimates will be provided in a future version.')
-st.write(' ')
-
-###### Setup of variables ############################
-try:
- # Create necessary variables
- prompt_dir = st.session_state['prompt_dir']
- curr_eval_df = st.session_state['eval_df']
- curr_eval_df['Picture_index']=curr_eval_df.index.values
-
- # Assess how many images are available for automatic assessment
- automated_eval_available = sum(curr_eval_df['automated_eval'])
-
- # Add task name to eval_df
- temp_prompt_dir=prompt_dir[['ID','Representations','Task_specific_label']]
- temp_prompt_dir['Prompt_no']=temp_prompt_dir['ID'].astype('str')
- curr_eval_df = curr_eval_df.merge(temp_prompt_dir,on='Prompt_no')
-
- # Check that user correctly filled out the automation setup file
- assert list(fun_dict.keys())==st.session_state['automated_tasks'], 'Ensure that the list of automated tasks in Dashboard_setup.py is the same as the keys of the function dict in Dashboard_automation_setup.py'
-except KeyError:
- automated_eval_available = 0
-
-
-###### Rating loop ############################
-# If images for assessment available: create form to start assessment
-# Else: Note to upload images for assessment
-if automated_eval_available > 0:
-
- # Create objects to hold selections of tasks for automated assessment
- task_list = list(fun_dict.keys())
- task_list_len = len(task_list)
- task_list_selected = task_list.copy()
-
- with st.form("auto_assessment_form",clear_on_submit=True):
- # Form info statement
- st.write('Select the tasks to assess with the automated assessment below. Once you have started an assessment, you will not be able to leave this page until the assessment is completed.')
-
- # Create list of bool selection buttons, one for every task
- for i_task in range(task_list_len):
- curr_task = task_list[i_task]
- curr_task_count = len(curr_eval_df.loc[
- (curr_eval_df['automated_eval']==True)&
- (curr_eval_df['Task']==curr_task)])
- task_list_selected[i_task] = st.checkbox(
- '{0} ({1} images available)'.format(curr_task, str(curr_task_count)))
-
- submitted = st.form_submit_button("Start automated assessment")
- if submitted:
- # Create list for tasks which were selected for assessment
- selected_tasks = list(compress(task_list,task_list_selected))
-
- # Create dataset to loop over with assessment
- assessed_df = curr_eval_df.loc[
- (curr_eval_df['automated_eval']==True)&
- (curr_eval_df['Task'].isin(selected_tasks))]
- results_column = []
-
- # Add counter for progress bars
- num_automated_rows = len(assessed_df)
- i_num_row = 0
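- # assumes at least one matching image; if num_automated_rows is 0, the division below raises ZeroDivisionError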
- i_progress_increase = 1/num_automated_rows
- st.write('Progress of automatic evaluation:')
- auto_assessment_progress = st.progress(0)
-
- for row in assessed_df.itertuples():
- i_num_row +=1
- auto_assessment_progress.progress(0+i_num_row*i_progress_increase)
-
- # Apply task-based classifier and save result in list
- temp_image = Image.open(st.session_state['uploaded_img'][row.Picture_index])
- temp_result = fun_dict[row.Task](
- temp_image,row.Representations,row.Task_specific_label)
- results_column.append(temp_result)
-
- assessed_df['Score']=results_column
- st.session_state['auto_eval_df']=assessed_df[['File_name','Prompt_no','Picture_index','Task','Score']]
- st.write('Assessment completed. You can access the results on the summary page. Running a new automated assessment will overwrite past results.')
-else:
- st.write('Upload files on dashboard starting page to start automated assessment.')
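The page above only touches fun_dict through fun_dict[row.Task](image, representations, task_specific_label), so a minimal, hypothetical Dashboard_automation_setup.py could look like the sketch below (task names and scoring logic are assumptions; only the call signature comes from the loop above):

from PIL import Image

def assess_single_object(img: Image.Image, representations, task_specific_label) -> bool:
    # placeholder scorer; a real implementation would run a detector or CLIP-style classifier
    return img.size[0] > 0

def assess_object_count(img: Image.Image, representations, task_specific_label) -> bool:
    # placeholder scorer for counting-style tasks
    return str(task_specific_label).isdigit()

# the keys must match st.session_state['automated_tasks'] from Dashboard_setup.py,
# otherwise the assert on the page above fails
fun_dict = {
    'Single object': assess_single_object,
    'Object count': assess_object_count,
}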
diff --git a/spaces/adba/Real-CUGAN/app.py b/spaces/adba/Real-CUGAN/app.py
deleted file mode 100644
index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000
--- a/spaces/adba/Real-CUGAN/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from upcunet_v3 import RealWaifuUpScaler
-import gradio as gr
-import time
-import logging
-import os
-from PIL import ImageOps
-import numpy as np
-import math
-
-
-def greet(input_img, input_model_name, input_tile_mode):
- # if input_img.size[0] * input_img.size[1] > 256 * 256:
- # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1]))
- # x = int(input_img.size[0]/input_img.size[1]*y)
- # input_img = ImageOps.fit(input_img, (x, y))
- input_img = np.array(input_img)
- if input_model_name not in model_cache:
- t1 = time.time()
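- # note: input_model_name[2] picks the scale digit ('2', '3' or '4') out of filenames like "up2x-latest-denoise2x.pth"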
- upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu")
- t2 = time.time()
- logger.info(f'load model time, {t2 - t1}')
- model_cache[input_model_name] = upscaler
- else:
- upscaler = model_cache[input_model_name]
- logger.info(f'load model from cache')
-
- start = time.time()
- result = upscaler(input_img, tile_mode=input_tile_mode)
- end = time.time()
- logger.info(f'input_model_name, {input_model_name}')
- logger.info(f'input_tile_mode, {input_tile_mode}')
- logger.info(f'input shape, {input_img.shape}')
- logger.info(f'output shape, {result.shape}')
- logger.info(f'speed time, {end - start}')
- return result
-
-
-if __name__ == '__main__':
- logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s")
- logger = logging.getLogger()
-
- ModelPath = "weights_v3/"
- model_cache = {}
-
- input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='select model')
- input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='select tile_mode')
- input_img = gr.inputs.Image(label='image', type='pil')
-
- inputs = [input_img, input_model_name, input_tile_mode]
- outputs = "image"
- iface = gr.Interface(fn=greet,
- inputs=inputs,
- outputs=outputs,
- allow_screenshot=False,
- allow_flagging='never',
- examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]],
- article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN) '
- 'Thanks to the project open-sourced by bilibili. '
- 'Large images can exceed the memory limit, so the input image is cropped and resized here. '
- 'If you want to try a large image at full quality, please use the link above.')
- iface.launch()
diff --git a/spaces/adimmer/semi-supervised-wrappers/README.md b/spaces/adimmer/semi-supervised-wrappers/README.md
deleted file mode 100644
index 12b90482b41310495c3f97d9742fc7218cc25c25..0000000000000000000000000000000000000000
--- a/spaces/adimmer/semi-supervised-wrappers/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Semi Supervised Wrappers
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.9.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/adpro/dpt-depth16/app.py b/spaces/adpro/dpt-depth16/app.py
deleted file mode 100644
index a959ca9591586a8d2b84380720affadbfdb4eafb..0000000000000000000000000000000000000000
--- a/spaces/adpro/dpt-depth16/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import gradio as gr
-from transformers import DPTFeatureExtractor, DPTForDepthEstimation
-import torch
-import numpy as np
-from PIL import Image
-
-torch.hub.download_url_to_file('http://images.cocodataset.org/val2017/000000039769.jpg', 'cats.jpg')
-
-feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large")
-model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
-
-def process_image(image):
- # prepare image for the model
- encoding = feature_extractor(image, return_tensors="pt")
-
- # forward pass
- with torch.no_grad():
- outputs = model(**encoding)
- predicted_depth = outputs.predicted_depth
-
- # interpolate to original size
- prediction = torch.nn.functional.interpolate(
- predicted_depth.unsqueeze(1),
- size=image.size[::-1],
- mode="bicubic",
- align_corners=False,
- ).squeeze()
- output = prediction.cpu().numpy()
- formatted = (output * 255 / np.max(output)).astype('uint8')
- img = Image.fromarray(formatted)
- return img
-
-
-title = "Demo: zero-shot depth estimation with DPT"
-description = "Demo for Intel's DPT, a Dense Prediction Transformer for state-of-the-art dense prediction tasks such as semantic segmentation and depth estimation."
-examples =[['cats.jpg']]
-
-iface = gr.Interface(fn=process_image,
- inputs=gr.inputs.Image(type="pil"),
- outputs=gr.outputs.Image(type="pil", label="predicted depth"),
- title=title,
- description=description,
- examples=examples,
- enable_queue=True)
-iface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/affine/Time_Series_Model/README.md b/spaces/affine/Time_Series_Model/README.md
deleted file mode 100644
index 9aa872a8d1eb6e034190d05822bb5b5c94c6f85e..0000000000000000000000000000000000000000
--- a/spaces/affine/Time_Series_Model/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Time Series Model
-emoji: 👁
-colorFrom: purple
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ahmedghani/Inference-Endpoint-Deployment/README.md b/spaces/ahmedghani/Inference-Endpoint-Deployment/README.md
deleted file mode 100644
index 05c7039e5aef8b3a4177c745fb9b6cb1f6234469..0000000000000000000000000000000000000000
--- a/spaces/ahmedghani/Inference-Endpoint-Deployment/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Inference Endpoint Deployment
-emoji: 🚀
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/aijack/jojo/e4e/models/__init__.py b/spaces/aijack/jojo/e4e/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ajitrajasekharan/self-supervised-ner-biomedical/batched_main_NER.py b/spaces/ajitrajasekharan/self-supervised-ner-biomedical/batched_main_NER.py
deleted file mode 100644
index 97e5e5a5aa719cab76f688457682f5a5bf40e67f..0000000000000000000000000000000000000000
--- a/spaces/ajitrajasekharan/self-supervised-ner-biomedical/batched_main_NER.py
+++ /dev/null
@@ -1,910 +0,0 @@
-import pdb
-import config_utils as cf
-import requests
-import sys
-import urllib.parse
-import numpy as np
-from collections import OrderedDict
-import argparse
-from common import *
-import json
-
-#WORD_POS = 1
-#TAG_POS = 2
-#MASK_TAG = "__entity__"
-DEFAULT_CONFIG = "./config.json"
-DISPATCH_MASK_TAG = "entity"
-DESC_HEAD = "PIVOT_DESCRIPTORS:"
-#TYPE2_AMB = "AMB2-"
-TYPE2_AMB = ""
-DUMMY_DESCS=10
-DEFAULT_ENTITY_MAP = "entity_types_consolidated.txt"
-
-#RESET_POS_TAG='RESET'
-SPECIFIC_TAG=":__entity__"
-
-
-def softmax(x):
- """Compute softmax values for each sets of scores in x."""
- e_x = np.exp(x - np.max(x))
- return e_x / e_x.sum(axis=0) # only difference
-
-#def softmax(x):
-# """Compute softmax values for each sets of scores in x."""
-# return np.exp(x) / np.sum(np.exp(x), axis=0)
-
-
-#noun_tags = ['NFP','JJ','NN','FW','NNS','NNPS','JJS','JJR','NNP','POS','CD']
-#cap_tags = ['NFP','JJ','NN','FW','NNS','NNPS','JJS','JJR','NNP','PRP']
-
-def read_common_descs(file_name):
- common_descs = {}
- with open(file_name) as fp:
- for line in fp:
- common_descs[line.strip()] = 1
- print("Common descs for filtering read:",len(common_descs))
- return common_descs
-
-def read_entity_map(file_name):
- emap = {}
- with open(file_name) as fp:
- for line in fp:
- line = line.rstrip('\n')
- entities = line.split()
- if (len(entities) == 1):
- assert(entities[0] not in emap)
- emap[entities[0]] = entities[0]
- else:
- assert(len(entities) == 2)
- entity_arr = entities[1].split('/')
- if (entities[0] not in emap):
- emap[entities[0]] = entities[0]
- for entity in entity_arr:
- assert(entity not in emap)
- emap[entity] = entities[0]
- print("Entity map:",len(emap))
- return emap
-
-class UnsupNER:
- def __init__(self,config_file):
- print("NER service handler started")
- base_path = cf.read_config(config_file)["BASE_PATH"] if ("BASE_PATH" in cf.read_config(config_file)) else "./"
- self.pos_server_url = cf.read_config(config_file)["POS_SERVER_URL"]
- self.desc_server_url = cf.read_config(config_file)["DESC_SERVER_URL"]
- self.entity_server_url = cf.read_config(config_file)["ENTITY_SERVER_URL"]
- self.common_descs = read_common_descs(cf.read_config(config_file)["COMMON_DESCS_FILE"])
- self.entity_map = read_entity_map(cf.read_config(config_file)["EMAP_FILE"])
- self.rfp = open(base_path + "log_results.txt","a")
- self.dfp = open(base_path + "log_debug.txt","a")
- self.algo_ci_tag_fp = open(base_path + "algorthimic_ci_tags.txt","a")
- print(self.pos_server_url)
- print(self.desc_server_url)
- print(self.entity_server_url)
- np.set_printoptions(suppress=True) #this suppresses exponential representation when np is used to round
- if (cf.read_config(config_file)["SUPPRESS_UNTAGGED"] == "1"):
- self.suppress_untagged = True
- else:
- self.suppress_untagged = False #This is disabled in full debug text mode
-
-
- #This is a bad hack for prototyping - parsing from text output as opposed to json
- def extract_POS(self,text):
- arr = text.split('\n')
- if (len(arr) > 0):
- start_pos = 0
- for i,line in enumerate(arr):
- if (len(line) > 0):
- start_pos += 1
- continue
- else:
- break
- #print(arr[start_pos:])
- terms_arr = []
- for i,line in enumerate(arr[start_pos:]):
- terms = line.split('\t')
- if (len(terms) == 5):
- #print(terms)
- terms_arr.append(terms)
- return terms_arr
-
- def normalize_casing(self,sent):
- sent_arr = sent.split()
- ret_sent_arr = []
- for i,word in enumerate(sent_arr):
- if (len(word) > 1):
- norm_word = word[0] + word[1:].lower()
- else:
- norm_word = word[0]
- ret_sent_arr.append(norm_word)
- return ' '.join(ret_sent_arr)
-
- #Full sentence tag call also generates json output.
- def tag_sentence_service(self,text,desc_obj):
- ret_str = self.tag_sentence(text,self.rfp,self.dfp,True,desc_obj)
- return ret_str
-
- def dictify_ner_response(self,ner_str):
- arr = ner_str.split('\n')
- ret_dict = OrderedDict()
- count = 1
- ref_indices_arr = []
- for line in arr:
- terms = line.split()
- if (len(terms) == 2):
- ret_dict[count] = {"term":terms[0],"e":terms[1]}
- if (terms[1] != "O" and terms[1].startswith("B_")):
- ref_indices_arr.append(count)
- count += 1
- elif (len(terms) == 1):
- ret_dict[count] = {"term":"empty","e":terms[0]}
- if (terms[0] != "O" and terms[0].startswith("B_")):
- ref_indices_arr.append(count)
- count += 1
- if (len(ret_dict) > 3): #algorithmic harvesting of CI labels for human verification and adding to bootstrap list
- self.algo_ci_tag_fp.write("SENT:" + ner_str.replace('\n',' ') + "\n")
- out = terms[0].replace('[',' ').replace(']','').split()[-1]
- out = '_'.join(out.split('_')[1:]) if out.startswith("B_") else out
- print(out)
- self.algo_ci_tag_fp.write(ret_dict[count-2]["term"] + " " + out + "\n")
- self.algo_ci_tag_fp.flush()
- else:
- assert(len(terms) == 0) #If not empty something is not right
- return ret_dict,ref_indices_arr
-
- def blank_entity_sentence(self,sent,dfp):
- value = True if sent.endswith(" :__entity__\n") else False
- if (value == True):
- print("\n\n**************** Skipping CI prediction in pooling for sent:",sent)
- dfp.write("\n\n**************** Skipping CI prediction in pooling for sent:" + sent + "\n")
- return value
-
- def pool_confidences(self,ci_entities,ci_confidences,ci_subtypes,cs_entities,cs_confidences,cs_subtypes,debug_str_arr,sent,dfp):
- main_classes = {}
- assert(len(cs_entities) == len(cs_confidences))
- assert(len(cs_subtypes) == len(cs_entities))
- assert(len(ci_entities) == len(ci_confidences))
- assert(len(ci_subtypes) == len(ci_entities))
- #Pool entity classes across CI and CS
- is_blank_statement = self.blank_entity_sentence(sent,dfp) #Do not pool CI confidences for sentences of the form " is a entity". These sentences are sent purely for algorithmic harvesting of CS terms; CI predictions would add noise.
- if (not is_blank_statement):
- for e,c in zip(ci_entities,ci_confidences):
- e_base = e.split('[')[0]
- main_classes[e_base] = float(c)
- for e,c in zip(cs_entities,cs_confidences):
- e_base = e.split('[')[0]
- if (e_base in main_classes):
- main_classes[e_base] += float(c)
- else:
- main_classes[e_base] = float(c)
- final_sorted_d = OrderedDict(sorted(main_classes.items(), key=lambda kv: kv[1], reverse=True))
- main_dist = self.convert_positive_nums_to_dist(final_sorted_d)
- main_classes_arr = list(final_sorted_d.keys())
- #print("\nIn pooling confidences")
- #print(main_classes_arr)
- #print(main_dist)
- #Pool subtypes across CI and CS for a particular entity class
- subtype_factors = {}
- for e_class in final_sorted_d:
- if e_class in cs_subtypes:
- stypes = cs_subtypes[e_class]
- if (e_class not in subtype_factors):
- subtype_factors[e_class] = {}
- for st in stypes:
- if (st in subtype_factors[e_class]):
- subtype_factors[e_class][st] += stypes[st]
- else:
- subtype_factors[e_class][st] = stypes[st]
- if (is_blank_statement):
- continue
- if e_class in ci_subtypes:
- stypes = ci_subtypes[e_class]
- if (e_class not in subtype_factors):
- subtype_factors[e_class] = {}
- for st in stypes:
- if (st in subtype_factors[e_class]):
- subtype_factors[e_class][st] += stypes[st]
- else:
- subtype_factors[e_class][st] = stypes[st]
- sorted_subtype_factors = {}
- for e_class in subtype_factors:
- stypes = subtype_factors[e_class]
- final_sorted_d = OrderedDict(sorted(stypes.items(), key=lambda kv: kv[1], reverse=True))
- stypes_dist = self.convert_positive_nums_to_dist(final_sorted_d)
- stypes_class_arr = list(final_sorted_d.keys())
- sorted_subtype_factors[e_class] = {"stypes":stypes_class_arr,"dist":stypes_dist}
- pooled_results = OrderedDict()
- assert(len(main_classes_arr) == len(main_dist))
- d_str_arr = []
- d_str_arr.append("\n***CONSOLIDATED ENTITY:")
- for e,c in zip(main_classes_arr,main_dist):
- pooled_results[e] = {"e":e,"confidence":c}
- d_str_arr.append(e + " " + str(c))
- stypes_dict = sorted_subtype_factors[e]
- pooled_st = OrderedDict()
- for st,sd in zip(stypes_dict["stypes"],stypes_dict["dist"]):
- pooled_st[st] = sd
- pooled_results[e]["stypes"] = pooled_st
- debug_str_arr.append(' '.join(d_str_arr))
- print(' '.join(d_str_arr))
- return pooled_results
-
-
-
-
-
-
-
-
-
- def init_entity_info(self,entity_info_dict,index):
- curr_term_dict = OrderedDict()
- entity_info_dict[index] = curr_term_dict
- curr_term_dict["ci"] = OrderedDict()
- curr_term_dict["ci"]["entities"] = []
- curr_term_dict["ci"]["descs"] = []
- curr_term_dict["cs"] = OrderedDict()
- curr_term_dict["cs"]["entities"] = []
- curr_term_dict["cs"]["descs"] = []
-
-
-
-
- #This now does specific tagging if there is a __entity__ in the sentence; else it does full tagging. TBD.
- #TBD. Make response params the same regardless of output format. Currently they differ.
- def tag_sentence(self,sent,rfp,dfp,json_output,desc_obj):
- print("Input: ", sent)
- dfp.write("\n\n++++-------------------------------\n")
- dfp.write("NER_INPUT: " + sent + "\n")
- debug_str_arr = []
- entity_info_dict = OrderedDict()
- #url = self.desc_server_url + sent.replace('"','\'')
- #r = self.dispatch_request(url)
- #if (r is None):
- # print("Empty response. Desc server is probably down: ",self.desc_server_url)
- # return json.loads("[]")
- #main_obj = json.loads(r.text)
- main_obj = desc_obj
- #print(json.dumps(main_obj,indent=4))
- #Find CI predictions for ALL masked predictions in the sentence
- ci_predictions,orig_ci_entities = self.find_ci_entities(main_obj,debug_str_arr,entity_info_dict) #ci_entities is the same info as ci_predictions except packed differently for output
- #Find CS predictions for ALL masked predictions in the sentence. Use the CI predictions from the previous step to
- #pool
- detected_entities_arr,ner_str,full_pooled_results,orig_cs_entities = self.find_cs_entities(sent,main_obj,rfp,dfp,debug_str_arr,ci_predictions,entity_info_dict)
- assert(len(detected_entities_arr) == len(entity_info_dict))
- print("--------")
- if (json_output):
- if (len(detected_entities_arr) != len(entity_info_dict)):
- if (len(entity_info_dict) == 0):
- self.init_entity_info(entity_info_dict,1)
- entity_info_dict[1]["cs"]["entities"].append([{"e":"O","confidence":1}])
- entity_info_dict[1]["ci"]["entities"].append([{"e":"O","confidence":1}])
- ret_dict,ref_indices_arr = self.dictify_ner_response(ner_str) #Convert ner string to a dictionary for json output
- assert(len(ref_indices_arr) == len(detected_entities_arr))
- assert(len(entity_info_dict) == len(detected_entities_arr))
- cs_aux_dict = OrderedDict()
- ci_aux_dict = OrderedDict()
- cs_aux_orig_entities = OrderedDict()
- ci_aux_orig_entities = OrderedDict()
- pooled_pred_dict = OrderedDict()
- count = 0
- assert(len(full_pooled_results) == len(detected_entities_arr))
- assert(len(full_pooled_results) == len(orig_cs_entities))
- assert(len(full_pooled_results) == len(orig_ci_entities))
- for e,c,p,o,i in zip(detected_entities_arr,entity_info_dict,full_pooled_results,orig_cs_entities,orig_ci_entities):
- val = entity_info_dict[c]
- #cs_aux_dict[ref_indices_arr[count]] = {"e":e,"cs_distribution":val["cs"]["entities"],"cs_descs":val["cs"]["descs"]}
- pooled_pred_dict[ref_indices_arr[count]] = {"e": e, "cs_distribution": list(p.values())}
- cs_aux_dict[ref_indices_arr[count]] = {"e":e,"cs_descs":val["cs"]["descs"]}
- #ci_aux_dict[ref_indices_arr[count]] = {"ci_distribution":val["ci"]["entities"],"ci_descs":val["ci"]["descs"]}
- ci_aux_dict[ref_indices_arr[count]] = {"ci_descs":val["ci"]["descs"]}
- cs_aux_orig_entities[ref_indices_arr[count]] = {"e":e,"cs_distribution": o}
- ci_aux_orig_entities[ref_indices_arr[count]] = {"e":e,"cs_distribution": i}
- count += 1
- #print(ret_dict)
- #print(aux_dict)
- final_ret_dict = {"total_terms_count":len(ret_dict),"detected_entity_phrases_count":len(detected_entities_arr),"ner":ret_dict,"entity_distribution":pooled_pred_dict,"cs_prediction_details":cs_aux_dict,"ci_prediction_details":ci_aux_dict,"orig_cs_prediction_details":cs_aux_orig_entities,"orig_ci_prediction_details":ci_aux_orig_entities,"debug":debug_str_arr}
- json_str = json.dumps(final_ret_dict,indent = 4)
- #print (json_str)
- #with open("single_debug.txt","w") as fp:
- # fp.write(json_str)
-
- dfp.write('\n'.join(debug_str_arr))
- dfp.write("\n\nEND-------------------------------\n")
- dfp.flush()
- return json_str
- else:
- print(detected_entities_arr)
- debug_str_arr.append("NER_FINAL_RESULTS: " + ' '.join(detected_entities_arr))
- print("--------")
- dfp.write('\n'.join(debug_str_arr))
- dfp.write("\n\nEND-------------------------------\n")
- dfp.flush()
- return detected_entities_arr,span_arr,terms_arr,ner_str,debug_str_arr
-
- def masked_word_first_letter_capitalize(self,entity):
- arr = entity.split()
- ret_arr = []
- for term in arr:
- if (len(term) > 1 and term[0].islower() and term[1].islower()):
- ret_arr.append(term[0].upper() + term[1:])
- else:
- ret_arr.append(term)
- return ' '.join(ret_arr)
-
-
- def gen_single_phrase_sentences(self,terms_arr,masked_sent_arr,span_arr,rfp,dfp):
- sentence_template = "%s is a entity"
- print(span_arr)
- sentences = []
- singleton_spans_arr = []
- run_index = 0
- entity = ""
- singleton_span = []
- while (run_index < len(span_arr)):
- if (span_arr[run_index] == 1):
- while (run_index < len(span_arr)):
- if (span_arr[run_index] == 1):
- #print(terms_arr[run_index][WORD_POS],end=' ')
- if (len(entity) == 0):
- entity = terms_arr[run_index][WORD_POS]
- else:
- entity = entity + " " + terms_arr[run_index][WORD_POS]
- singleton_span.append(1)
- run_index += 1
- else:
- break
- #print()
- for i in sentence_template.split():
- if (i != "%s"):
- singleton_span.append(0)
- entity = self.masked_word_first_letter_capitalize(entity)
- sentence = sentence_template % entity
- sentences.append(sentence)
- singleton_spans_arr.append(singleton_span)
- print(sentence)
- print(singleton_span)
- entity = ""
- singleton_span = []
- else:
- run_index += 1
- return sentences,singleton_spans_arr
-
-
- def find_ci_entities(self,main_obj,debug_str_arr,entity_info_dict):
- ci_predictions = []
- orig_ci_confidences = []
- term_index = 1
- batch_obj = main_obj["descs_and_entities"]
- for key in batch_obj:
- masked_sent = batch_obj[key]["ci_prediction"]["sentence"]
- print("\n**CI: ", masked_sent)
- debug_str_arr.append(masked_sent)
- #entity_info_dict["masked_sent"].append(masked_sent)
- inp_arr = batch_obj[key]["ci_prediction"]["descs"]
- descs = self.get_descriptors_for_masked_position(inp_arr)
- self.init_entity_info(entity_info_dict,term_index)
- entities,confidences,subtypes = self.get_entities_for_masked_position(inp_arr,descs,debug_str_arr,entity_info_dict[term_index]["ci"])
- ci_predictions.append({"entities":entities,"confidences":confidences,"subtypes":subtypes})
- orig_ci_confidences.append(self.pack_confidences(entities,confidences)) #this is sent to the ensemble server to detect cross predictions. CS predictions are more reflective of crossover than consolidated predictions, since CI may overwhelm CS
- term_index += 1
- return ci_predictions,orig_ci_confidences
-
-
- def pack_confidences(self,cs_entities,cs_confidences):
- assert(len(cs_entities) == len(cs_confidences))
- orig_cs_arr = []
- for e,c in zip(cs_entities,cs_confidences):
- print(e,c)
- e_split = e.split('[')
- e_main = e_split[0]
- if (len(e_split) > 1):
- e_sub = e_split[1].split(',')[0].rstrip(']')
- if (e_main != e_sub):
- e = e_main + '[' + e_sub + ']'
- else:
- e = e_main
- else:
- e = e_main
- orig_cs_arr.append({"e":e,"confidence":c})
- return orig_cs_arr
-
-
- #We have multiple masked versions of a single sentence. Tag each one of them
- #and create a complete tagged version for a sentence
- def find_cs_entities(self,sent,main_obj,rfp,dfp,debug_str_arr,ci_predictions,entity_info_dict):
- #print(sent)
- batch_obj = main_obj["descs_and_entities"]
- dfp.write(sent + "\n")
- term_index = 1
- detected_entities_arr = []
- full_pooled_results = []
- orig_cs_confidences = []
- for index,key in enumerate(batch_obj):
- position_info = batch_obj[key]["cs_prediction"]["descs"]
- ci_entities = ci_predictions[index]["entities"]
- ci_confidences = ci_predictions[index]["confidences"]
- ci_subtypes = ci_predictions[index]["subtypes"]
- debug_str_arr.append("\n++++++ nth Masked term : " + str(key))
- #dfp.write(key + "\n")
- masked_sent = batch_obj[key]["cs_prediction"]["sentence"]
- print("\n**CS: ",masked_sent)
- descs = self.get_descriptors_for_masked_position(position_info)
- #dfp.write(str(descs) + "\n")
- if (len(descs) > 0):
- cs_entities,cs_confidences,cs_subtypes = self.get_entities_for_masked_position(position_info,descs,debug_str_arr,entity_info_dict[term_index]["cs"])
- else:
- cs_entities = []
- cs_confidences = []
- cs_subtypes = []
- #dfp.write(str(cs_entities) + "\n")
- pooled_results = self.pool_confidences(ci_entities,ci_confidences,ci_subtypes,cs_entities,cs_confidences,cs_subtypes,debug_str_arr,sent,dfp)
- self.fill_detected_entities(detected_entities_arr,pooled_results) #just picks the top prediction
- full_pooled_results.append(pooled_results)
- orig_cs_confidences.append(self.pack_confidences(cs_entities,cs_confidences)) #this is sent to the ensemble server to detect cross predictions. CS predictions are more reflective of crossover than consolidated predictions, since CI may overwhelm CS
- #self.old_resolve_entities(i,singleton_entities,detected_entities_arr) #This decides how to pick entities given CI and CS predictions
- term_index += 1
- #out of the full loop over sentences. Now create NER sentence
- terms_arr = main_obj["terms_arr"]
- span_arr = main_obj["span_arr"]
- ner_str = self.emit_sentence_entities(sent,terms_arr,detected_entities_arr,span_arr,rfp) #just outputs results in NER Conll format
- dfp.flush()
- return detected_entities_arr,ner_str,full_pooled_results,orig_cs_confidences
-
-
- def fill_detected_entities(self,detected_entities_arr,entities):
- if (len(entities) > 0):
- top_e_class = next(iter(entities))
- top_subtype = next(iter(entities[top_e_class]["stypes"]))
- if (top_e_class != top_subtype):
- top_prediction = top_e_class + "[" + top_subtype + "]"
- else:
- top_prediction = top_e_class
- detected_entities_arr.append(top_prediction)
- else:
- detected_entities_arr.append("OTHER")
-
-
- def fill_detected_entities_old(self,detected_entities_arr,entities,pan_arr):
- entities_dict = {}
- count = 1
- for i in entities:
- cand = i.split("-")
- for j in cand:
- terms = j.split("/")
- for k in terms:
- if (k not in entities_dict):
- entities_dict[k] = 1.0/count
- else:
- entities_dict[k] += 1.0/count
- count += 1
- final_sorted_d = OrderedDict(sorted(entities_dict.items(), key=lambda kv: kv[1], reverse=True))
- first = "OTHER"
- for first in final_sorted_d:
- break
- detected_entities_arr.append(first)
-
- #Contextual entity is picked as first candidate before context independent candidate
- def old_resolve_entities(self,index,singleton_entities,detected_entities_arr):
- if (singleton_entities[index].split('[')[0] != detected_entities_arr[index].split('[')[0]):
- if (singleton_entities[index].split('[')[0] != "OTHER" and detected_entities_arr[index].split('[')[0] != "OTHER"):
- detected_entities_arr[index] = detected_entities_arr[index] + "/" + singleton_entities[index]
- elif (detected_entities_arr[index].split('[')[0] == "OTHER"):
- detected_entities_arr[index] = singleton_entities[index]
- else:
- pass
- else:
- #this is the case when both CI and CS entity type match. Since the subtypes are already ordered, just merge(CS/CI,CS/CI...) the two picking unique subtypes
- main_entity = detected_entities_arr[index].split('[')[0]
- cs_arr = detected_entities_arr[index].split('[')[1].rstrip(']').split(',')
- ci_arr = singleton_entities[index].split('[')[1].rstrip(']').split(',')
- cs_arr_len = len(cs_arr)
- ci_arr_len = len(ci_arr)
- max_len = ci_arr_len if ci_arr_len > cs_arr_len else cs_arr_len
- merged_unique_subtype_dict = OrderedDict()
- for i in range(cs_arr_len):
- if (i < cs_arr_len and cs_arr[i] not in merged_unique_subtype_dict):
- merged_unique_subtype_dict[cs_arr[i]] = 1
- if (i < ci_arr_len and ci_arr[i] not in merged_unique_subtype_dict):
- merged_unique_subtype_dict[ci_arr[i]] = 1
- new_subtypes_str = ','.join(list(merged_unique_subtype_dict.keys()))
- detected_entities_arr[index] = main_entity + '[' + new_subtypes_str + ']'
-
-
-
-
-
-
- def emit_sentence_entities(self,sent,terms_arr,detected_entities_arr,span_arr,rfp):
- print("Final result")
- ret_str = ""
- for i,term in enumerate(terms_arr):
- print(term,' ',end='')
- print()
- sent_arr = sent.split()
- assert(len(terms_arr) == len(span_arr))
- entity_index = 0
- i = 0
- in_span = False
- while (i < len(span_arr)):
- if (span_arr[i] == 0):
- tag = "O"
- if (in_span):
- in_span = False
- entity_index += 1
- else:
- if (in_span):
- tag = "I_" + detected_entities_arr[entity_index]
- else:
- in_span = True
- tag = "B_" + detected_entities_arr[entity_index]
- rfp.write(terms_arr[i] + ' ' + tag + "\n")
- ret_str = ret_str + terms_arr[i] + ' ' + tag + "\n"
- print(tag + ' ',end='')
- i += 1
- print()
- rfp.write("\n")
- ret_str += "\n"
- rfp.flush()
- return ret_str
-
-
-
-
-
- def get_descriptors_for_masked_position(self,inp_arr):
- desc_arr = []
- for i in range(len(inp_arr)):
- desc_arr.append(inp_arr[i]["desc"])
- desc_arr.append(inp_arr[i]["v"])
- return desc_arr
-
- def dispatch_request(self,url):
- max_retries = 10
- attempts = 0
- while True:
- try:
- r = requests.get(url,timeout=1000)
- if (r.status_code == 200):
- return r
- except:
- print("Request:", url, " failed. Retrying...")
- attempts += 1
- if (attempts >= max_retries):
- print("Request:", url, " failed")
- break
-
- def convert_positive_nums_to_dist(self,final_sorted_d):
- factors = list(final_sorted_d.values()) #convert dict values to an array
- factors = list(map(float, factors))
- total = float(sum(factors))
- if (total == 0):
- total = 1
- factors[0] = 1 #just make the sum 100%. This is a boundary case (e.g. for numbers)
- factors = np.array(factors)
- #factors = softmax(factors)
- factors = factors/total
- factors = np.round(factors,4)
- return factors
-
- def get_desc_weights_total(self,count,desc_weights):
- i = 0
- total = 0
- while (i < count):
- total += float(desc_weights[i+1])
- i += 2
- total = 1 if total == 0 else total
- return total
-
-
- def aggregate_entities(self,entities,desc_weights,debug_str_arr,entity_info_dict_entities):
- ''' Given a masked position, whose entity we are trying to determine,
- First get the descriptors for that position: a 2*N array [desc1,score1,desc2,score2,...]
- Then for each descriptor, get its entity predictions: a 2*N array of the form [e1,score1,e2,score2,...] where e1 could be DRUG/DISEASE and score1 10/8 etc.
- In this function we aggregate each unique entity prediction (e.g. DISEASE) by summing up its weighted scores across all N predictions.
- The resulting factor array is normalized to create a probability distribution
- '''
- count = len(entities)
- assert(count %2 == 0)
- aggregate_entities = {}
- i = 0
- subtypes = {}
- while (i < count):
- #entities[i] contains entity names and entities[i+1] contains counts. Example: PROTEIN/GENE/PERSON is i and 10/4/7 is i+1
- curr_counts = entities[i+1].split('/') #this is one of the N predictions - this single prediction is itself a list of entities
- trunc_e,trunc_counts = self.map_entities(entities[i].split('/'),curr_counts,subtypes) # Aggregate the subtype entities for this prediction. Subtype aggregation is **across** the N predictions
- #Also trunc_e contains the consolidated entity names.
- assert(len(trunc_e) <= len(curr_counts)) # can be less if untagged is skipped
- assert(len(trunc_e) == len(trunc_counts))
- trunc_counts = softmax(trunc_counts) #this normalization is done to reduce the effect of absolute count of certain labeled entities, while aggregating the entity vectors across descriptors
- curr_counts_sum = sum(map(int,trunc_counts)) #Using truncated count
- curr_counts_sum = 1 if curr_counts_sum == 0 else curr_counts_sum
- for j in range(len(trunc_e)): #this iterates through the current instance of all *consolidated* tagged entity predictions (that is, except UNTAGGED_ENTITY)
- if (self.skip_untagged(trunc_e[j])):
- continue
- if (trunc_e[j] not in aggregate_entities):
- aggregate_entities[trunc_e[j]] = (float(trunc_counts[j]))*float(desc_weights[i+1])
- #aggregate_entities[trunc_e[j]] = (float(trunc_counts[j])/curr_counts_sum)*float(desc_weights[i+1])
- #aggregate_entities[trunc_e[j]] = float(desc_weights[i+1])
- else:
- aggregate_entities[trunc_e[j]] += (float(trunc_counts[j]))*float(desc_weights[i+1])
- #aggregate_entities[trunc_e[j]] += (float(trunc_counts[j])/curr_counts_sum)*float(desc_weights[i+1])
- #aggregate_entities[trunc_e[j]] += float(desc_weights[i+1])
- i += 2
- final_sorted_d = OrderedDict(sorted(aggregate_entities.items(), key=lambda kv: kv[1], reverse=True))
- if (len(final_sorted_d) == 0): #Case where all terms are tagged OTHER
- final_sorted_d = {"OTHER":1}
- subtypes["OTHER"] = {"OTHER":1}
- factors = self.convert_positive_nums_to_dist(final_sorted_d)
- ret_entities = list(final_sorted_d.keys())
- confidences = factors.tolist()
- print(ret_entities)
- sorted_subtypes = self.sort_subtypes(subtypes)
- ret_entities = self.update_entities_with_subtypes(ret_entities,sorted_subtypes)
- print(ret_entities)
- debug_str_arr.append(" ")
- debug_str_arr.append(' '.join(ret_entities))
- print(confidences)
- assert(len(confidences) == len(ret_entities))
- arr = []
- for e,c in zip(ret_entities,confidences):
- arr.append({"e":e,"confidence":c})
- entity_info_dict_entities.append(arr)
- debug_str_arr.append(' '.join([str(x) for x in confidences]))
- debug_str_arr.append("\n\n")
- return ret_entities,confidences,subtypes
-
-
- def sort_subtypes(self,subtypes):
- sorted_subtypes = OrderedDict()
- for ent in subtypes:
- final_sorted_d = OrderedDict(sorted(subtypes[ent].items(), key=lambda kv: kv[1], reverse=True))
- sorted_subtypes[ent] = list(final_sorted_d.keys())
- return sorted_subtypes
-
- def update_entities_with_subtypes(self,ret_entities,subtypes):
- new_entities = []
-
- for ent in ret_entities:
- #if (len(ret_entities) == 1):
- # new_entities.append(ent) #avoid creating a subtype for a single case
- # return new_entities
- if (ent in subtypes):
- new_entities.append(ent + '[' + ','.join(subtypes[ent]) + ']')
- else:
- new_entities.append(ent)
- return new_entities
-
- def skip_untagged(self,term):
- if (self.suppress_untagged == True and (term == "OTHER" or term == "UNTAGGED_ENTITY")):
- return True
- return False
-
-
- def map_entities(self,arr,counts_arr,subtypes_dict):
- ret_arr = []
- new_counts_arr = []
- for index,term in enumerate(arr):
- if (self.skip_untagged(term)):
- continue
- ret_arr.append(self.entity_map[term])
- new_counts_arr.append(int(counts_arr[index]))
- if (self.entity_map[term] not in subtypes_dict):
- subtypes_dict[self.entity_map[term]] = {}
- if (term not in subtypes_dict[self.entity_map[term]]):
- #subtypes_dict[self.entity_map[i]][i] = 1
- subtypes_dict[self.entity_map[term]][term] = int(counts_arr[index])
- else:
- #subtypes_dict[self.entity_map[i]][i] += 1
- subtypes_dict[self.entity_map[term]][term] += int(counts_arr[index])
- return ret_arr,new_counts_arr
-
- def get_entities_from_batch(self,inp_arr):
- entities_arr = []
- for i in range(len(inp_arr)):
- entities_arr.append(inp_arr[i]["e"])
- entities_arr.append(inp_arr[i]["e_count"])
- return entities_arr
-
-
- def get_entities_for_masked_position(self,inp_arr,descs,debug_str_arr,entity_info_dict):
- entities = self.get_entities_from_batch(inp_arr)
- debug_combined_arr =[]
- desc_arr =[]
- assert(len(descs) %2 == 0)
- assert(len(entities) %2 == 0)
- index = 0
- for d,e in zip(descs,entities):
- p_e = '/'.join(e.split('/')[:5])
- debug_combined_arr.append(d + " " + p_e)
- if (index % 2 == 0):
- temp_dict = OrderedDict()
- temp_dict["d"] = d
- temp_dict["e"] = e
- else:
- temp_dict["mlm"] = d
- temp_dict["l_score"] = e
- desc_arr.append(temp_dict)
- index += 1
- debug_str_arr.append("\n" + ', '.join(debug_combined_arr))
- print(debug_combined_arr)
- entity_info_dict["descs"] = desc_arr
- #debug_str_arr.append(' '.join(entities))
- assert(len(entities) == len(descs))
- entities,confidences,subtypes = self.aggregate_entities(entities,descs,debug_str_arr,entity_info_dict["entities"])
- return entities,confidences,subtypes
-
-
- #This is again a bad hack for prototyping purposes - extracting fields from a raw text output as opposed to a structured output like json
- def extract_descs(self,text):
- arr = text.split('\n')
- desc_arr = []
- if (len(arr) > 0):
- for i,line in enumerate(arr):
- if (line.startswith(DESC_HEAD)):
- terms = line.split(':')
- desc_arr = ' '.join(terms[1:]).strip().split()
- break
- return desc_arr
-
-
- def generate_masked_sentences(self,terms_arr):
- size = len(terms_arr)
- sentence_arr = []
- span_arr = []
- i = 0
- while (i < size):
- term_info = terms_arr[i]
- if (term_info[TAG_POS] in noun_tags):
- skip = self.gen_sentence(sentence_arr,terms_arr,i)
- i += skip
- for j in range(skip):
- span_arr.append(1)
- else:
- i += 1
- span_arr.append(0)
- #print(sentence_arr)
- return sentence_arr,span_arr
-
- def gen_sentence(self,sentence_arr,terms_arr,index):
- size = len(terms_arr)
- new_sent = []
- for prefix,term in enumerate(terms_arr[:index]):
- new_sent.append(term[WORD_POS])
- i = index
- skip = 0
- while (i < size):
- if (terms_arr[i][TAG_POS] in noun_tags):
- skip += 1
- i += 1
- else:
- break
- new_sent.append(MASK_TAG)
- i = index + skip
- while (i < size):
- new_sent.append(terms_arr[i][WORD_POS])
- i += 1
- assert(skip != 0)
- sentence_arr.append(new_sent)
- return skip
-
-
-
-
-
-
-
-
-def run_test(file_name,obj):
- rfp = open("results.txt","w")
- dfp = open("debug.txt","w")
- with open(file_name) as fp:
- count = 1
- for line in fp:
- if (len(line) > 1):
- print(str(count) + "] ",line,end='')
- obj.tag_sentence(line,rfp,dfp)
- count += 1
- rfp.close()
- dfp.close()
-
-
-def tag_single_entity_in_sentence(file_name,obj):
- rfp = open("results.txt","w")
- dfp = open("debug.txt","w")
- sfp = open("se_results.txt","w")
- with open(file_name) as fp:
- count = 1
- for line in fp:
- if (len(line) > 1):
- print(str(count) + "] ",line,end='')
- #entity_arr,span_arr,terms_arr,ner_str,debug_str = obj.tag_sentence(line,rfp,dfp,False) # False for json output
- json_str = obj.tag_sentence(line,rfp,dfp,True) # True for json output
- #print("*******************:",terms_arr[span_arr.index(1)][WORD_POS].rstrip(":"),entity_arr[0])
- #sfp.write(terms_arr[span_arr.index(1)][WORD_POS].rstrip(":") + " " + entity_arr[0] + "\n")
- count += 1
- sfp.flush()
- #pdb.set_trace()
- rfp.close()
- sfp.close()
- dfp.close()
-
-
-
-
-test_arr = [
-"He felt New:__entity__ York:__entity__ has a chance to win this year's competition",
-"Ajit rajasekharan is an engineer at nFerence:__entity__",
-"Ajit:__entity__ rajasekharan is an engineer:__entity__ at nFerence:__entity__",
-"Mesothelioma:__entity__ is caused by exposure to asbestos:__entity__",
-"Fyodor:__entity__ Mikhailovich:__entity__ Dostoevsky:__entity__ was treated for Parkinsons",
-"Ajit:__entity__ Rajasekharan:__entity__ is an engineer at nFerence",
-"A eGFR:__entity__ below 60 indicates chronic kidney disease",
-"A eGFR below 60:__entity__ indicates chronic kidney disease",
-"A eGFR:__entity__ below 60:__entity__ indicates chronic:__entity__ kidney:__entity__ disease:__entity__",
-"Ajit:__entity__ rajasekharan is an engineer at nFerence",
-"Her hypophysitis secondary to ipilimumab was well managed with supplemental hormones",
-"In Seattle:__entity__ , Pete Incaviglia 's grand slam with one out in the sixth snapped a tie and lifted the Baltimore Orioles past the Seattle Mariners , 5-2 .",
-"engineer",
-"Austin:__entity__ called",
-"Paul Erdős died at 83",
-"Imatinib mesylate is a drug and is used to treat nsclc",
-"In Seattle , Pete Incaviglia 's grand slam with one out in the sixth snapped a tie and lifted the Baltimore Orioles past the Seattle Mariners , 5-2 .",
-"It was Incaviglia 's sixth grand slam and 200th homer of his career .",
-"Add Women 's singles , third round Lisa Raymond ( U.S. ) beat Kimberly Po ( U.S. ) 6-3 6-2 .",
-"1880s marked the beginning of Jazz",
-"He flew from New York to SFO",
-"Lionel Ritchie was popular in the 1980s",
-"Lionel Ritchie was popular in the late eighties",
-"John Doe flew from New York to Rio De Janiro via Miami",
-"He felt New York has a chance to win this year's competition",
-"Bandolier - Budgie ' , a free itunes app for ipad , iphone and ipod touch , released in December 2011 , tells the story of the making of Bandolier in the band 's own words - including an extensive audio interview with Burke Shelley",
-"In humans mutations in Foxp2 leads to verbal dyspraxia",
-"The recent spread of Corona virus flu from China to Italy,Iran, South Korea and Japan has caused global concern",
-"Hotel California topped the singles chart",
-"Elon Musk said Telsa will open a manufacturing plant in Europe",
-"He flew from New York to SFO",
-"After studies at Hofstra University , He worked for New York Telephone before He was elected to the New York State Assembly to represent the 16th District in Northwest Nassau County ",
-"Everyday he rode his bicycle from Rajakilpakkam to Tambaram",
-"If he loses Saturday , it could devalue his position as one of the world 's great boxers , \" Panamanian Boxing Association President Ramon Manzanares said .",
-"West Indian all-rounder Phil Simmons took four for 38 on Friday as Leicestershire beat Somerset by an innings and 39 runs in two days to take over at the head of the county championship .",
-"they are his friends ",
-"they flew from Boston to Rio De Janiro and had a mocha",
-"he flew from Boston to Rio De Janiro and had a mocha",
-"X,Y,Z are medicines"]
-
-
-def test_canned_sentences(obj):
- rfp = open("results.txt","w")
- dfp = open("debug.txt","w")
- pdb.set_trace()
- for line in test_arr:
- ret_val = obj.tag_sentence(line,rfp,dfp,True)
- pdb.set_trace()
- rfp.close()
- dfp.close()
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='main NER for a single model ',formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument('-input', action="store", dest="input",default="",help='Input file required for run options batch,specific')
- parser.add_argument('-config', action="store", dest="config", default=DEFAULT_CONFIG,help='config file path')
- parser.add_argument('-option', action="store", dest="option",default="canned",help='Valid options are canned,batch,specific. canned - test a few canned sentences used in the Medium article. batch - tag sentences in an input file. Entities to be tagged are determined using POS tagging to find noun phrases. specific - tag specific entities in an input file. The tagged words or phrases need to be of the form w1:__entity__ w2:__entity__ Example: Her hypophysitis:__entity__ secondary to ipilimumab was well managed with supplemental:__entity__ hormones:__entity__')
- results = parser.parse_args()
-
- obj = UnsupNER(results.config)
- if (results.option == "canned"):
- test_canned_sentences(obj)
- elif (results.option == "batch"):
- if (len(results.input) == 0):
- print("Input file needs to be specified")
- else:
- run_test(results.input,obj)
- print("Tags and sentences are written in results.txt and debug.txt")
- elif (results.option == "specific"):
- if (len(results.input) == 0):
- print("Input file needs to be specified")
- else:
- tag_single_entity_in_sentence(results.input,obj)
- print("Tags and sentences are written in results.txt and debug.txt")
- else:
- print("Invalid argument:\n")
- parser.print_help()
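For reference, the consolidation above boils down to pool_confidences summing the CI and CS scores per entity class and convert_positive_nums_to_dist renormalizing the totals into a distribution; a standalone sketch with invented scores:

import numpy as np
from collections import OrderedDict

# invented CI/CS confidences for one masked position
ci = {"PERSON": 0.6, "LOCATION": 0.3}
cs = {"PERSON": 0.8, "DISEASE": 0.1}

pooled = {}
for source in (ci, cs):
    for entity, conf in source.items():
        pooled[entity] = pooled.get(entity, 0.0) + conf

ordered = OrderedDict(sorted(pooled.items(), key=lambda kv: kv[1], reverse=True))
dist = np.round(np.array(list(ordered.values())) / sum(ordered.values()), 4)
print(list(ordered.keys()), dist.tolist())
# ['PERSON', 'LOCATION', 'DISEASE'] [0.7778, 0.1667, 0.0556]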
diff --git a/spaces/akhaliq/Kapao/demos/squash.py b/spaces/akhaliq/Kapao/demos/squash.py
deleted file mode 100644
index 202ed77fdb3ab345b51e33f03941d4f3f80e6ca6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Kapao/demos/squash.py
+++ /dev/null
@@ -1,243 +0,0 @@
-import sys
-from pathlib import Path
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[1].as_posix()) # add kapao/ to path
-
-import argparse
-from pytube import YouTube
-import os.path as osp
-from utils.torch_utils import select_device, time_sync
-from utils.general import check_img_size
-from utils.datasets import LoadImages
-from models.experimental import attempt_load
-import torch
-import cv2
-import numpy as np
-import yaml
-from tqdm import tqdm
-import imageio
-from val import run_nms, post_process_batch
-
-
-VIDEO_NAME = 'Squash MegaRally 176 ReDux - Slow Mo Edition.mp4'
-URL = 'https://www.youtube.com/watch?v=Dy62-eTNvY4&ab_channel=PSASQUASHTV'
-
-GRAY = (200, 200, 200)
-CROWD_THRES = 450 # max bbox size for crowd classification
-CROWD_ALPHA = 0.5
-CROWD_KP_SIZE = 2
-CROWD_KP_THICK = 2
-CROWD_SEG_THICK = 2
-
-BLUE = (245, 140, 66)
-ORANGE = (66, 140, 245)
-PLAYER_ALPHA_BOX = 0.85
-PLAYER_ALPHA_POSE = 0.3
-PLAYER_KP_SIZE = 4
-PLAYER_KP_THICK = 4
-PLAYER_SEG_THICK = 4
-FPS_TEXT_SIZE = 3
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data', type=str, default='data/coco-kp.yaml')
- parser.add_argument('--imgsz', type=int, default=1280)
- parser.add_argument('--weights', default='kapao_s_coco.pt')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or cpu')
- parser.add_argument('--half', action='store_true')
- parser.add_argument('--conf-thres', type=float, default=0.5, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
- parser.add_argument('--no-kp-dets', action='store_true', help='do not use keypoint objects')
- parser.add_argument('--conf-thres-kp', type=float, default=0.5)
- parser.add_argument('--conf-thres-kp-person', type=float, default=0.2)
- parser.add_argument('--iou-thres-kp', type=float, default=0.45)
- parser.add_argument('--overwrite-tol', type=int, default=50)
- parser.add_argument('--scales', type=float, nargs='+', default=[1])
- parser.add_argument('--flips', type=int, nargs='+', default=[-1])
- parser.add_argument('--display', action='store_true', help='display inference results')
- parser.add_argument('--fps', action='store_true', help='display fps')
- parser.add_argument('--gif', action='store_true', help='create gif')
- parser.add_argument('--start', type=int, default=20, help='start time (s)')
- parser.add_argument('--end', type=int, default=80, help='end time (s)')
- args = parser.parse_args()
-
- with open(args.data) as f:
- data = yaml.safe_load(f) # load data dict
-
- # add inference settings to data dict
- data['imgsz'] = args.imgsz
- data['conf_thres'] = args.conf_thres
- data['iou_thres'] = args.iou_thres
- data['use_kp_dets'] = not args.no_kp_dets
- data['conf_thres_kp'] = args.conf_thres_kp
- data['iou_thres_kp'] = args.iou_thres_kp
- data['conf_thres_kp_person'] = args.conf_thres_kp_person
- data['overwrite_tol'] = args.overwrite_tol
- data['scales'] = args.scales
- data['flips'] = [None if f == -1 else f for f in args.flips]
-
- if not osp.isfile(VIDEO_NAME):
- yt = YouTube(URL)
- # [print(s) for s in yt.streams]
- stream = [s for s in yt.streams if s.itag == 137][0] # 1080p, non-progressive
- print('Downloading squash demo video...')
- stream.download()
- print('Done.')
-
- device = select_device(args.device, batch_size=1)
- print('Using device: {}'.format(device))
-
- model = attempt_load(args.weights, map_location=device) # load FP32 model
- half = args.half & (device.type != 'cpu')
- if half: # half precision only supported on CUDA
- model.half()
- stride = int(model.stride.max()) # model stride
-
- imgsz = check_img_size(args.imgsz, s=stride) # check image size
- dataset = LoadImages('./{}'.format(VIDEO_NAME), img_size=imgsz, stride=stride, auto=True)
-
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
-
- cap = dataset.cap
- cap.set(cv2.CAP_PROP_POS_MSEC, args.start * 1000)
- fps = cap.get(cv2.CAP_PROP_FPS)
- n = int(fps * (args.end - args.start))
- h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- gif_frames = []
- video_name = 'squash_inference_{}'.format(osp.splitext(args.weights)[0])
-
- if not args.display:
- writer = cv2.VideoWriter(video_name + '.mp4',
- cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
- if not args.fps: # tqdm might slow down inference
- dataset = tqdm(dataset, desc='Writing inference video', total=n)
-
- t0 = time_sync()
- for i, (path, img, im0, _) in enumerate(dataset):
- img = torch.from_numpy(img).to(device)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img = img / 255.0 # 0 - 255 to 0.0 - 1.0
- if len(img.shape) == 3:
- img = img[None] # expand for batch dim
-
- out = model(img, augment=True, kp_flip=data['kp_flip'], scales=data['scales'], flips=data['flips'])[0]
- person_dets, kp_dets = run_nms(data, out)
- bboxes, poses, _, _, _ = post_process_batch(data, img, [], [[im0.shape[:2]]], person_dets, kp_dets)
-
- bboxes = np.array(bboxes)
- poses = np.array(poses)
-
- im0_copy = im0.copy()
- player_idx = []
-
- # DRAW CROWD POSES
- for j, (bbox, pose) in enumerate(zip(bboxes, poses)):
- x1, y1, x2, y2 = bbox
- size = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
- if size < CROWD_THRES:
- cv2.rectangle(im0_copy, (int(x1), int(y1)), (int(x2), int(y2)), GRAY, thickness=2)
- for x, y, _ in pose[:5]:
- cv2.circle(im0_copy, (int(x), int(y)), CROWD_KP_SIZE, GRAY, CROWD_KP_THICK)
- for seg in data['segments'].values():
- pt1 = (int(pose[seg[0], 0]), int(pose[seg[0], 1]))
- pt2 = (int(pose[seg[1], 0]), int(pose[seg[1], 1]))
- cv2.line(im0_copy, pt1, pt2, GRAY, CROWD_SEG_THICK)
- else:
- player_idx.append(j)
- im0 = cv2.addWeighted(im0, CROWD_ALPHA, im0_copy, 1 - CROWD_ALPHA, gamma=0)
-
- # DRAW PLAYER POSES
- player_bboxes = bboxes[player_idx][:2]
- player_poses = poses[player_idx][:2]
-
- def draw_player_poses(im0, missing=-1):
- for j, (bbox, pose, color) in enumerate(zip(
- player_bboxes[[orange_player, blue_player]],
- player_poses[[orange_player, blue_player]],
- [ORANGE, BLUE])):
- if j == missing:
- continue
- im0_copy = im0.copy()
- x1, y1, x2, y2 = bbox
- cv2.rectangle(im0_copy, (int(x1), int(y1)), (int(x2), int(y2)), color, thickness=-1)
- im0 = cv2.addWeighted(im0, PLAYER_ALPHA_BOX, im0_copy, 1 - PLAYER_ALPHA_BOX, gamma=0)
- im0_copy = im0.copy()
- for x, y, _ in pose:
- cv2.circle(im0_copy, (int(x), int(y)), PLAYER_KP_SIZE, color, PLAYER_KP_THICK)
- for seg in data['segments'].values():
- pt1 = (int(pose[seg[0], 0]), int(pose[seg[0], 1]))
- pt2 = (int(pose[seg[1], 0]), int(pose[seg[1], 1]))
- cv2.line(im0_copy, pt1, pt2, color, PLAYER_SEG_THICK)
- im0 = cv2.addWeighted(im0, PLAYER_ALPHA_POSE, im0_copy, 1 - PLAYER_ALPHA_POSE, gamma=0)
- return im0
-
- if i == 0:
- # orange player on left at start
- orange_player = np.argmin(player_bboxes[:, 0])
- blue_player = int(not orange_player)
- im0 = draw_player_poses(im0)
- else:
- # simple player tracking based on frame-to-frame pose difference
- dist = []
- for pose in poses_last:
- dist.append(np.mean(np.linalg.norm(player_poses[0, :, :2] - pose[:, :2], axis=-1)))
- if np.argmin(dist) == 0:
- orange_player = 0
- else:
- orange_player = 1
- blue_player = int(not orange_player)
-
- # if only one player detected, find which player is missing
- missing = -1
- if len(player_poses) == 1:
- if orange_player == 0: # missing blue player
- player_poses = np.concatenate((player_poses, poses_last[1:]), axis=0)
- player_bboxes = np.concatenate((player_bboxes, bboxes_last[1:]), axis=0)
- missing = 1
- else: # missing orange player
- player_poses = np.concatenate((player_poses, poses_last[:1]), axis=0)
- player_bboxes = np.concatenate((player_bboxes, bboxes_last[:1]), axis=0)
- missing = 0
- im0 = draw_player_poses(im0, missing)
-
- bboxes_last = player_bboxes[[orange_player, blue_player]]
- poses_last = player_poses[[orange_player, blue_player]]
-
- if i == 0:
- t = time_sync() - t0
- else:
- t = time_sync() - t1
-
- if args.fps:
- s = FPS_TEXT_SIZE
- cv2.putText(im0, '{:.1f} FPS'.format(1 / t), (5*s, 25*s),
- cv2.FONT_HERSHEY_SIMPLEX, s, (255, 255, 255), thickness=2*s)
-
- if args.gif:
- gif_frames.append(cv2.resize(im0, dsize=None, fx=0.25, fy=0.25)[:, :, [2, 1, 0]])
- elif not args.display:
- writer.write(im0)
- else:
- cv2.imshow('', cv2.resize(im0, dsize=None, fx=0.5, fy=0.5))
- cv2.waitKey(1)
-
- t1 = time_sync()
- if i == n - 1:
- break
-
- cv2.destroyAllWindows()
- cap.release()
- if not args.display:
- writer.release()
-
- if args.gif:
- print('Saving GIF...')
- with imageio.get_writer(video_name + '.gif', mode="I", fps=fps) as writer:
- for idx, frame in tqdm(enumerate(gif_frames)):
- writer.append_data(frame)
-
-
-
diff --git a/spaces/akhaliq/VQMIVC/preprocess.py b/spaces/akhaliq/VQMIVC/preprocess.py
deleted file mode 100644
index 5a7fad9a52b69706403a86bcc3d26c890f1c4ae2..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/preprocess.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from spectrogram import logmelspectrogram
-import numpy as np
-from joblib import Parallel, delayed
-import librosa
-import soundfile as sf
-import os
-from glob import glob
-from tqdm import tqdm
-import random
-import json
-import resampy
-import pyworld as pw
-
-def extract_logmel(wav_path, sr=16000):
- # wav, fs = librosa.load(wav_path, sr=sr)
- wav, fs = sf.read(wav_path)
- wav, _ = librosa.effects.trim(wav, top_db=60)
- if fs != sr:
- wav = resampy.resample(wav, fs, sr, axis=0)
- fs = sr
- # duration = len(wav)/fs
- assert fs == 16000
- peak = np.abs(wav).max()
- if peak > 1.0:
- wav /= peak
- mel = logmelspectrogram(
- x=wav,
- fs=fs,
- n_mels=80,
- n_fft=400,
- n_shift=160,
- win_length=400,
- window='hann',
- fmin=80,
- fmax=7600,
- )
-
- tlen = mel.shape[0]
- frame_period = 160/fs*1000
- f0, timeaxis = pw.dio(wav.astype('float64'), fs, frame_period=frame_period)
- f0 = pw.stonemask(wav.astype('float64'), f0, timeaxis, fs)
- f0 = f0[:tlen].reshape(-1).astype('float32')
- nonzeros_indices = np.nonzero(f0)
- lf0 = f0.copy()
- lf0[nonzeros_indices] = np.log(f0[nonzeros_indices]) # for f0(Hz), lf0 > 0 when f0 != 0
-
- wav_name = os.path.basename(wav_path).split('.')[0]
- # print(wav_name, mel.shape, duration)
- return wav_name, mel, lf0, mel.shape[0]
-
-
-def normalize_logmel(wav_name, mel, mean, std):
- mel = (mel - mean) / (std + 1e-8)
- return wav_name, mel
-
-
-def save_one_file(save_path, arr):
- os.makedirs(os.path.dirname(save_path), exist_ok=True)
- np.save(save_path, arr)
-
-
-def save_logmel(save_root, wav_name, melinfo, mode):
- mel, lf0, mel_len = melinfo
- spk = wav_name.split('_')[0]
- mel_save_path = f'{save_root}/{mode}/mels/{spk}/{wav_name}.npy'
- lf0_save_path = f'{save_root}/{mode}/lf0/{spk}/{wav_name}.npy'
- save_one_file(mel_save_path, mel)
- save_one_file(lf0_save_path, lf0)
- return mel_len, mel_save_path, lf0_save_path
-
-# def get_wavs_names(spks, data_root)
-data_root = '/Dataset/VCTK-Corpus/wav48_silence_trimmed'
-save_root = 'data'
-os.makedirs(save_root, exist_ok=True)
-
-spk_info_txt = '/Dataset/VCTK-Corpus/speaker-info.txt'
-f = open(spk_info_txt, 'r')
-gen2spk = {}
-all_spks = []
-for i, line in enumerate(f):
- if i == 0:
- continue
- else:
- tmp = line.split()
- # print(tmp)
- spk = tmp[0]
- all_spks.append(spk)
- gen = tmp[2]
- if gen not in gen2spk:
- gen2spk[gen] = [spk]
- else:
- gen2spk[gen].append(spk)
-
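-# shuffle the speakers and hold out the last 20 as unseen test speakers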
-random.shuffle(all_spks)
-train_spks = all_spks[:-20]
-test_spks = all_spks[-20:]
-
-train_wavs_names = []
-valid_wavs_names = []
-test_wavs_names = []
-
-print('all_spks:', all_spks)
-for spk in train_spks:
- spk_wavs = glob(f'{data_root}/{spk}/*mic1.flac')
- print('len(spk_wavs):', len(spk_wavs))
- spk_wavs_names = [os.path.basename(p).split('.')[0] for p in spk_wavs]
- valid_names = random.sample(spk_wavs_names, int(len(spk_wavs_names)*0.1))
- train_names = [n for n in spk_wavs_names if n not in valid_names]
- train_wavs_names += train_names
- valid_wavs_names += valid_names
-
-for spk in test_spks:
- spk_wavs = glob(f'{data_root}/{spk}/*mic1.flac')
- print('len(spk_wavs):', len(spk_wavs))
- spk_wavs_names = [os.path.basename(p).split('.')[0] for p in spk_wavs]
- test_wavs_names += spk_wavs_names
-
-print(len(train_wavs_names))
-print(len(valid_wavs_names))
-print(len(test_wavs_names))
-# extract log-mel
-print('extract log-mel...')
-all_wavs = glob(f'{data_root}/*/*mic1.flac')
-results = Parallel(n_jobs=-1)(delayed(extract_logmel)(wav_path) for wav_path in tqdm(all_wavs))
-wn2mel = {}
-for r in results:
- wav_name, mel, lf0, mel_len = r
- # print(wav_name, mel.shape, duration)
- wn2mel[wav_name] = [mel, lf0, mel_len]
-
-# normalize log-mel
-print('normalize log-mel...')
-mels = []
-spk2lf0 = {}
-for wav_name in train_wavs_names:
- mel, _, _ = wn2mel[wav_name]
- mels.append(mel)
-
-mels = np.concatenate(mels, 0)
-mean = np.mean(mels, 0)
-std = np.std(mels, 0)
-mel_stats = np.concatenate([mean.reshape(1,-1), std.reshape(1,-1)], 0)
-np.save(f'{save_root}/mel_stats.npy', mel_stats)
-
-results = Parallel(n_jobs=-1)(delayed(normalize_logmel)(wav_name, wn2mel[wav_name][0], mean, std) for wav_name in tqdm(wn2mel.keys()))
-wn2mel_new = {}
-for r in results:
- wav_name, mel = r
- lf0 = wn2mel[wav_name][1]
- mel_len = wn2mel[wav_name][2]
- wn2mel_new[wav_name] = [mel, lf0, mel_len]
-
-# save log-mel
-print('save log-mel...')
-train_results = Parallel(n_jobs=-1)(delayed(save_logmel)(save_root, wav_name, wn2mel_new[wav_name], 'train') for wav_name in tqdm(train_wavs_names))
-valid_results = Parallel(n_jobs=-1)(delayed(save_logmel)(save_root, wav_name, wn2mel_new[wav_name], 'valid') for wav_name in tqdm(valid_wavs_names))
-test_results = Parallel(n_jobs=-1)(delayed(save_logmel)(save_root, wav_name, wn2mel_new[wav_name], 'test') for wav_name in tqdm(test_wavs_names))
-
-def save_json(save_root, results, mode):
- fp = open(f'{save_root}/{mode}.json', 'w')
- json.dump(results, fp, indent=4)
- fp.close()
-
-save_json(save_root, train_results, 'train')
-save_json(save_root, valid_results, 'valid')
-save_json(save_root, test_results, 'test')
-
-
-
-
-
diff --git a/spaces/algomuffin/neural-search-engine/app.py b/spaces/algomuffin/neural-search-engine/app.py
deleted file mode 100644
index 040b7eec741935b8e09b95b78fb30d1e19e78fe8..0000000000000000000000000000000000000000
--- a/spaces/algomuffin/neural-search-engine/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from sentence_transformers import SentenceTransformer, CrossEncoder, util
-import torch
-import pickle
-import pandas as pd
-import gradio as gr
-bi_encoder = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")
-cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
-corpus_embeddings=pd.read_pickle("corpus_embeddings_cpu.pkl")
-corpus=pd.read_pickle("corpus.pkl")
-def search(query,top_k=100):
- print("Top 5 Answer by the NSE:")
- print()
- ans=[]
- ##### Sematic Search #####
- # Encode the query using the bi-encoder and find potentially relevant passages
- question_embedding = bi_encoder.encode(query, convert_to_tensor=True)
- hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)
- hits = hits[0] # Get the hits for the first query
- ##### Re-Ranking #####
- # Now, score all retrieved passages with the cross_encoder
- cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits]
- cross_scores = cross_encoder.predict(cross_inp)
- # Sort results by the cross-encoder scores
- for idx in range(len(cross_scores)):
- hits[idx]['cross-score'] = cross_scores[idx]
- hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)
-
- for idx, hit in enumerate(hits[0:5]):
- ans.append(corpus[hit['corpus_id']])
- return ans[0],ans[1],ans[2],ans[3],ans[4]
-iface = gr.Interface(fn=search, inputs=["text"], outputs=["textbox","textbox","textbox","textbox","textbox"],examples=["How big is London?", "Where is Rome?","Who is steve jobs?","What is the most interesting thing about our universe?"],article="This is a semantic search engine powered by SentenceTransformers (Nils_Reimers) with a retrieval and reranking system on Wikipedia corpus. It will show the top 5 results",title="Neural Search Engine").launch()
diff --git a/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-tsv.sh b/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-tsv.sh
deleted file mode 100644
index adb81f4894a0539d44ad4370eda029694211e82b..0000000000000000000000000000000000000000
--- a/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-tsv.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env bash
-# Created by Thamme Gowda on June 17, 2019
-
-DIR=$(dirname "${BASH_SOURCE[0]}") # get the directory name
-# DIR=$(realpath "${DIR}") # resolve its full path if need be
-
-if [[ $# -lt 1 || $# -gt 2 ]]; then
- >&2 echo "ERROR: invalid args"
- >&2 echo "Usage: []"
- exit 2
-fi
-
-INP=$1
-OUT=$2
-
-CMD=$DIR/uroman.pl
-
-function romanize(){
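- # keep column 1 (ids) unchanged and romanize column 2 with uroman.pl, then re-join the two columns as TSV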
- paste <(cut -f1 $INP) <(cut -f2 $INP | $CMD)
-}
-
-if [[ -n $OUT ]]; then
- romanize > $OUT
-else
- romanize
-fi
-
-
diff --git a/spaces/alsalemi/pv-segment-01/coco_eval.py b/spaces/alsalemi/pv-segment-01/coco_eval.py
deleted file mode 100644
index 0e9da8a1abaec7d00417c51fbe4f4cc3a849684a..0000000000000000000000000000000000000000
--- a/spaces/alsalemi/pv-segment-01/coco_eval.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import copy
-import io
-from contextlib import redirect_stdout
-
-import numpy as np
-import pycocotools.mask as mask_util
-import torch
-import utils
-from pycocotools.coco import COCO
-from pycocotools.cocoeval import COCOeval
-
-
-class CocoEvaluator:
- def __init__(self, coco_gt, iou_types):
- if not isinstance(iou_types, (list, tuple)):
- raise TypeError(f"This constructor expects iou_types of type list or tuple, instead got {type(iou_types)}")
- coco_gt = copy.deepcopy(coco_gt)
- self.coco_gt = coco_gt
- self.iou_types = iou_types
- self.coco_eval = {}
- for iou_type in iou_types:
- self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)
-
- self.img_ids = []
- self.eval_imgs = {k: [] for k in iou_types}
-
- def update(self, predictions):
- img_ids = list(np.unique(list(predictions.keys())))
- self.img_ids.extend(img_ids)
-
- for iou_type in self.iou_types:
- results = self.prepare(predictions, iou_type)
- with redirect_stdout(io.StringIO()):
- coco_dt = COCO.loadRes(self.coco_gt, results) if results else COCO()
- coco_eval = self.coco_eval[iou_type]
-
- coco_eval.cocoDt = coco_dt
- coco_eval.params.imgIds = list(img_ids)
- img_ids, eval_imgs = evaluate(coco_eval)
-
- self.eval_imgs[iou_type].append(eval_imgs)
-
- def synchronize_between_processes(self):
- for iou_type in self.iou_types:
- self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)
- create_common_coco_eval(self.coco_eval[iou_type], self.img_ids, self.eval_imgs[iou_type])
-
- def accumulate(self):
- for coco_eval in self.coco_eval.values():
- coco_eval.accumulate()
-
- def summarize(self):
- for iou_type, coco_eval in self.coco_eval.items():
- print(f"IoU metric: {iou_type}")
- coco_eval.summarize()
-
- def prepare(self, predictions, iou_type):
- if iou_type == "bbox":
- return self.prepare_for_coco_detection(predictions)
- if iou_type == "segm":
- return self.prepare_for_coco_segmentation(predictions)
- if iou_type == "keypoints":
- return self.prepare_for_coco_keypoint(predictions)
- raise ValueError(f"Unknown iou type {iou_type}")
-
- def prepare_for_coco_detection(self, predictions):
- coco_results = []
- for original_id, prediction in predictions.items():
- if len(prediction) == 0:
- continue
-
- boxes = prediction["boxes"]
- boxes = convert_to_xywh(boxes).tolist()
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- coco_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "bbox": box,
- "score": scores[k],
- }
- for k, box in enumerate(boxes)
- ]
- )
- return coco_results
-
- def prepare_for_coco_segmentation(self, predictions):
- coco_results = []
- for original_id, prediction in predictions.items():
- if len(prediction) == 0:
- continue
-
- scores = prediction["scores"]
- labels = prediction["labels"]
- masks = prediction["masks"]
-
- masks = masks > 0.5
-
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
-
- rles = [
- mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0] for mask in masks
- ]
- for rle in rles:
- rle["counts"] = rle["counts"].decode("utf-8")
-
- coco_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "segmentation": rle,
- "score": scores[k],
- }
- for k, rle in enumerate(rles)
- ]
- )
- return coco_results
-
- def prepare_for_coco_keypoint(self, predictions):
- coco_results = []
- for original_id, prediction in predictions.items():
- if len(prediction) == 0:
- continue
-
- boxes = prediction["boxes"]
- boxes = convert_to_xywh(boxes).tolist()
- scores = prediction["scores"].tolist()
- labels = prediction["labels"].tolist()
- keypoints = prediction["keypoints"]
- keypoints = keypoints.flatten(start_dim=1).tolist()
-
- coco_results.extend(
- [
- {
- "image_id": original_id,
- "category_id": labels[k],
- "keypoints": keypoint,
- "score": scores[k],
- }
- for k, keypoint in enumerate(keypoints)
- ]
- )
- return coco_results
-
-
-def convert_to_xywh(boxes):
- xmin, ymin, xmax, ymax = boxes.unbind(1)
- return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)
-
-
-def merge(img_ids, eval_imgs):
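- # gather the per-process image ids and evaluation results from all distributed workers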
- all_img_ids = utils.all_gather(img_ids)
- all_eval_imgs = utils.all_gather(eval_imgs)
-
- merged_img_ids = []
- for p in all_img_ids:
- merged_img_ids.extend(p)
-
- merged_eval_imgs = []
- for p in all_eval_imgs:
- merged_eval_imgs.append(p)
-
- merged_img_ids = np.array(merged_img_ids)
- merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)
-
- # keep only unique (and in sorted order) images
- merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)
- merged_eval_imgs = merged_eval_imgs[..., idx]
-
- return merged_img_ids, merged_eval_imgs
-
-
-def create_common_coco_eval(coco_eval, img_ids, eval_imgs):
- img_ids, eval_imgs = merge(img_ids, eval_imgs)
- img_ids = list(img_ids)
- eval_imgs = list(eval_imgs.flatten())
-
- coco_eval.evalImgs = eval_imgs
- coco_eval.params.imgIds = img_ids
- coco_eval._paramsEval = copy.deepcopy(coco_eval.params)
-
-
-def evaluate(imgs):
- with redirect_stdout(io.StringIO()):
- imgs.evaluate()
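- # evalImgs is a flat list ordered by (category, area range, image id); reshape it so results can be merged across processes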
- return imgs.params.imgIds, np.asarray(imgs.evalImgs).reshape(-1, len(imgs.params.areaRng), len(imgs.params.imgIds))
diff --git a/spaces/amanmibra/void-emb-demo/README.md b/spaces/amanmibra/void-emb-demo/README.md
deleted file mode 100644
index 90a4c3cfa8e4408920ad372cf0c97dc3fc84947e..0000000000000000000000000000000000000000
--- a/spaces/amanmibra/void-emb-demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Void Emb Demo
-emoji: 🔥
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/andreslu/orion/app.py b/spaces/andreslu/orion/app.py
deleted file mode 100644
index cfeb5eccc8eb70f79cdfad3c0d3c251d0651bd64..0000000000000000000000000000000000000000
--- a/spaces/andreslu/orion/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-from inductor import BartInductor
-
-inductor = BartInductor()
-
-def bart(prompt, num, return_score):
- results = inductor.generate(prompt, k=num, topk=num, return_scores=return_score)
- if return_score:
- results = [(result[0], float(result[1]) * 100) for result in results]
- else:
- results = [(result, 0) for result in results]
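- # gr.Label expects a {text: score} mapping; the score is 0 when score output is disabled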
- results_dict = {result[0]: float(result[1]) for result in results}
- return results_dict
-
-demo = gr.Interface(fn=bart,
- inputs=[gr.inputs.Textbox(default='<mask> is the capital of <mask>.'),
- gr.Slider(0, 10, value=5, step=1),
- gr.Checkbox(label="Hell Yes", info="Return Scores?")],
- outputs=gr.Label(),
- title="ORION: Open Rule InductiON",
- examples=[['<mask> is the capital of <mask>.', 5, True],
- ['<mask> is founder and CEO of <mask>.', 5, False],
- ["<mask>'s mother was a <mask>-based actress, <mask>.", 5, False]],
- description="Enter a text prompt to generate text. Make sure to use <mask> to replace entities.")
-
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/anen/DentalGPT/README.md b/spaces/anen/DentalGPT/README.md
deleted file mode 100644
index 2114895f7ca7f593307974dfb5aca2c5849ac4ec..0000000000000000000000000000000000000000
--- a/spaces/anen/DentalGPT/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: DentalGPT
-emoji: 🔥
-colorFrom: gray
-colorTo: green
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/anonymousauthorsanonymous/uncertainty/winogender_sentences.py b/spaces/anonymousauthorsanonymous/uncertainty/winogender_sentences.py
deleted file mode 100644
index 2f84081e0e9a03b346643a6184f633289d3f23cd..0000000000000000000000000000000000000000
--- a/spaces/anonymousauthorsanonymous/uncertainty/winogender_sentences.py
+++ /dev/null
@@ -1,105 +0,0 @@
-######################################################################
-##
-## This script is a lightly modified version of that provided in winogender-schemas
-## https://github.com/rudinger/winogender-schemas
-##
-######################################################################
-
-import csv
-import os
-from pathlib import Path
-from collections import OrderedDict
-
-# This script fully instantiates the 120 templates in ../data/templates.tsv
-# to generate the 720 sentences in ../data/all_sentences.tsv
-# By default this script prints to stdout, and can be run with no arguments:
-
-def load_templates(path):
- fp = open(path, 'r')
- next(fp) # first line headers
- S = []
- for line in fp:
-
- line = line.strip().split('\t')
- occupation, other_participant, answer, sentence = line[0], line[1], line[2], line[3]
- S.append((occupation, other_participant, answer, sentence))
- return S
-
-def generate(occupation, other_participant, sentence, second_ref="", context=None):
- toks = sentence.split(" ")
- occ_index = toks.index("$OCCUPATION")
- part_index = toks.index("$PARTICIPANT")
- toks[occ_index] = occupation
- # we are using the instantiated participant, e.g. "client", "patient", "customer",...
- if not second_ref:
- toks[part_index] = other_participant
- elif second_ref != 'someone':
- toks[part_index] = second_ref
- else:
- # we are using the bleached NP "someone" for the other participant
- # first, remove the token that precedes $PARTICIPANT, i.e. "the"
- toks = toks[:part_index-1]+toks[part_index:]
- # recompute participant index (it should be part_index - 1)
- part_index = toks.index("$PARTICIPANT")
- if part_index == 0:
- toks[part_index] = "Someone"
- else:
- toks[part_index] = "someone"
- NOM = "$NOM_PRONOUN"
- POSS = "$POSS_PRONOUN"
- ACC = "$ACC_PRONOUN"
- special_toks = set({NOM, POSS, ACC})
- mask_map = {NOM: "MASK", POSS: "MASK", ACC: "MASK"}
- mask_toks = [x if not x in special_toks else mask_map[x] for x in toks]
- masked_sent = " ".join(mask_toks)
-
- return masked_sent
-# %%
-
-
-def get_sentences():
- script_dir = os.path.dirname(__file__)
- rel_path = "winogender_schema"
- abs_path = os.path.join(script_dir, rel_path)
- Path(abs_path).mkdir(parents=True, exist_ok=True)
- # %%
-
- S = load_templates(os.path.join(abs_path, "templates.tsv"))
-
- # %%
- with open(os.path.join(abs_path, "all_sentences.tsv"), 'w', newline='') as csvfile:
- sentence_writer = csv.writer(csvfile, delimiter='\t')
- sentence_writer.writerow(['sentid', 'sentence'])
- sentence_dict = OrderedDict()
-
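- # each template is instantiated with four referents: the original participant, "someone", "man", and "woman"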
- for s in S:
- occupation, other_participant, answer, sentence = s
-
- gendered_sentence = generate(
- occupation, other_participant, sentence)
- gendered_sentid = f"{occupation}_{other_participant}_{answer}"
- sentence_dict[gendered_sentid] = gendered_sentence
-
- someone_sentence = generate(
- occupation, other_participant, sentence, second_ref='someone')
- someone_sentid = f"{occupation}_someone_{answer}"
- sentence_dict[someone_sentid] = someone_sentence
-
- man_sentence = generate(
- occupation, other_participant, sentence, second_ref='man')
- man_sentid = f"{occupation}_man_{answer}"
- sentence_dict[man_sentid] = man_sentence
-
- woman_sentence = generate(
- occupation, other_participant, sentence, second_ref='woman')
- woman_sentid = f"{occupation}_woman_{answer}"
- sentence_dict[woman_sentid] = woman_sentence
-
- sentence_writer.writerow([gendered_sentid, gendered_sentence])
- sentence_writer.writerow([someone_sentid, someone_sentence])
- sentence_writer.writerow([man_sentid, man_sentence])
- sentence_writer.writerow([woman_sentid, woman_sentence])
-
- return sentence_dict
-
-
diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/utils/autotrain.py b/spaces/argilla/argilla-streamlit-customs/my_app/utils/autotrain.py
deleted file mode 100644
index 08fefb49637aff9a5c06fcbbf19203a5210cad48..0000000000000000000000000000000000000000
--- a/spaces/argilla/argilla-streamlit-customs/my_app/utils/autotrain.py
+++ /dev/null
@@ -1,190 +0,0 @@
-import time
-
-import requests
-import streamlit as st
-from huggingface_hub.hf_api import HfApi
-from pydantic import BaseModel
-
-AUTOTRAIN_API_URL = "https://api.autotrain.huggingface.co"
-AUTOTRAIN_UI_URL = "https://ui.autotrain.huggingface.co"
-
-task_id_mapping = {
- "text-classification-binary": 1,
- "text-classification-multi-class": 2,
- "token-classification": 4,
- # "question-answering-extractive": 5,
- "summarization": 8,
- # "text-regression": 10,
- # "image-multi-class-classification": 18,
- # "tabular-data-regression": 16,
- # "tabular-data-multi-label": 15,
- # "tabular-data-multi-class": 14,
- # "tabular-data-binary": 13,
-}
-
-
-
-class AutoTrainInfo(BaseModel):
- hf_auth_token: str
- target_namespace: str
- input_dataset: str
- input_model: str
- autotrain_project_prefix: str
- task_id: int
- project_id: str = None
- directly_train: bool
- mapping: dict
-
-
-def get_projects(hf_auth_token):
- return AutoTrain.list_projects(hf_auth_token)
-
-
-def schedule_retrain(
- hf_auth_token,
- target_organization,
- input_dataset,
- input_model,
- autotrain_project_prefix,
- task_id,
- directly_train,
- mapping,
-):
- payload = AutoTrainInfo(
- hf_auth_token=hf_auth_token,
- target_namespace=target_organization,
- input_dataset=input_dataset,
- input_model=input_model,
- autotrain_project_prefix=autotrain_project_prefix,
- task_id=task_id,
- directly_train=directly_train,
- mapping=mapping,
- )
- # Create the autotrain project
- try:
- project = AutoTrain.create_project(payload)
- payload.project_id = project["id"]
- AutoTrain.add_data(payload)
- AutoTrain.start_processing(payload)
- if payload.directly_train:
- AutoTrain.start_training(payload)
- except requests.HTTPError as err:
- print("ERROR while requesting AutoTrain API:")
- print(f" code: {err.response.status_code}")
- print(f" {err.response.json()}")
- raise
- # Notify in the community tab
- notify_success(payload)
-
- return {"processed": True}
-
-
-class AutoTrain:
- @staticmethod
- def list_projects(hf_auth_token) -> str:
- projects = requests.get(
- f"{AUTOTRAIN_API_URL}/projects/list",
- headers={"Authorization": f"Bearer {hf_auth_token}"},
- )
- projects.raise_for_status()
- return projects.json()
-
- @staticmethod
- def create_project(payload: AutoTrainInfo) -> dict:
- project_resp = requests.post(
- f"{AUTOTRAIN_API_URL}/projects/create",
- json={
- "username": payload.target_namespace,
- "proj_name": payload.autotrain_project_prefix,
- "task": payload.task_id,
- "config": {
- "hub-model": payload.input_model,
- "max_models": 1,
- "language": "en",
- },
- },
- headers={"Authorization": f"Bearer {payload.hf_auth_token}"},
- )
-
- project_resp.raise_for_status()
- return project_resp.json()
-
- @staticmethod
- def add_data(payload: AutoTrainInfo):
- requests.post(
- f"{AUTOTRAIN_API_URL}/projects/{payload.project_id}/data/dataset",
- json={
- "dataset_id": payload.input_dataset,
- "dataset_split": "train",
- "split": 1,
- "col_mapping": payload.mapping,
- },
- headers={
- "Authorization": f"Bearer {payload.hf_auth_token}",
- },
- ).raise_for_status()
- requests.post(
- f"{AUTOTRAIN_API_URL}/projects/{payload.project_id}/data/dataset",
- json={
- "dataset_id": payload.input_dataset,
- "dataset_split": "test",
- "split": 2,
- "col_mapping": payload.mapping,
- },
- headers={
- "Authorization": f"Bearer {payload.hf_auth_token}",
- },
- ).raise_for_status()
-
- @staticmethod
- def start_processing(payload: AutoTrainInfo):
- resp = requests.post(
- f"{AUTOTRAIN_API_URL}/projects/{payload.project_id}/data/start_processing",
- headers={
- "Authorization": f"Bearer {payload.hf_auth_token}",
- },
- )
- resp.raise_for_status()
- return resp
-
- @staticmethod
- def start_training(payload: AutoTrainInfo):
- succeeded = False
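- # keep re-posting to the processing endpoint until the API responds with HTTP 200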
- with st.spinner("Waiting for data to be processed..."):
- while not succeeded:
- resp = requests.post(
- f"{AUTOTRAIN_API_URL}/projects/{payload.project_id}/data/start_processing",
- headers={
- "Authorization": f"Bearer {payload.hf_auth_token}",
- },
- )
- time.sleep(5)  # polling interval in seconds (arbitrary choice)
- if resp.status_code == 200:
- succeeded = True
-
- return resp
-
-
-def notify_success(payload: AutoTrainInfo):
- message = NOTIFICATION_TEMPLATE.format(
- input_model=payload.input_model,
- input_dataset=payload.input_dataset,
- project_id=payload.project_id,
- ui_url=AUTOTRAIN_UI_URL,
- )
- st.success(message)
- return HfApi(token=payload.hf_auth_token).create_discussion(
- repo_id=payload.input_dataset,
- repo_type="dataset",
- title="✨ Retraining started!",
- description=message,
- token=payload.hf_auth_token,
- )
-
-
-NOTIFICATION_TEMPLATE = """\
-🌸 Hello there!
-Following an update of [{input_dataset}](https://huggingface.co/datasets/{input_dataset}), an automatic training of [{input_model}](https://huggingface.co/{input_model}) has been scheduled on AutoTrain!
-Please review and approve the project [here]({ui_url}/{project_id}/trainings) to start the training job.
-(This is an automated message)
-"""
diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_language_frontend.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_language_frontend.md
deleted file mode 100644
index f4f6a04a5fbc048c9e4a7b2cf2b658d5aa6b1fbc..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/implementing_a_new_language_frontend.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Implementing a New Language Frontend
-
-- Language frontends are located under `TTS.tts.utils.text`
-- Each special language has a separate folder.
-- Each folder contains all the utilities for processing the text input.
-- `TTS.tts.utils.text.phonemizers` contains the main phonemizer for a language. This is the class that uses the utilities
-from the previous step and is used to convert the text to phonemes or graphemes for the model.
-- After you implement your phonemizer, you need to add it to the `TTS/tts/utils/text/phonemizers/__init__.py` to be able to
-map the language code in the model config - `config.phoneme_language` - to the phonemizer class and initiate the phonemizer automatically.
-- You should also add tests to `tests/text_tests` if you want to make a PR.
-
-We suggest you check the available implementations as a reference. Good luck!
\ No newline at end of file
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/__init__.py
deleted file mode 100644
index f12214d3044998bb644b8323984cb6197e1fbd93..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Util/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Miscellaneous modules
-
-Contains useful modules that don't belong into any of the
-other Crypto.* subpackages.
-
-======================== =============================================
-Module Description
-======================== =============================================
-`Crypto.Util.number` Number-theoretic functions (primality testing, etc.)
-`Crypto.Util.Counter` Fast counter functions for CTR cipher modes.
-`Crypto.Util.RFC1751` Converts between 128-bit keys and human-readable
- strings of words.
-`Crypto.Util.asn1` Minimal support for ASN.1 DER encoding
-`Crypto.Util.Padding` Set of functions for adding and removing padding.
-======================== =============================================
-
-:undocumented: _galois, _number_new, cpuid, py3compat, _raw_api
-"""
-
-__all__ = ['RFC1751', 'number', 'strxor', 'asn1', 'Counter', 'Padding']
-
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Options.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Options.py
deleted file mode 100644
index 48695dbfce802f8326408a2002047adb56731db2..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Options.py
+++ /dev/null
@@ -1,550 +0,0 @@
-#
-# Cython - Compilation-wide options and pragma declarations
-#
-
-from __future__ import absolute_import
-
-
-class ShouldBeFromDirective(object):
-
- known_directives = []
-
- def __init__(self, options_name, directive_name=None, disallow=False):
- self.options_name = options_name
- self.directive_name = directive_name or options_name
- self.disallow = disallow
- self.known_directives.append(self)
-
- def __nonzero__(self):
- self._bad_access()
-
- def __int__(self):
- self._bad_access()
-
- def _bad_access(self):
- raise RuntimeError(repr(self))
-
- def __repr__(self):
- return (
- "Illegal access of '%s' from Options module rather than directive '%s'"
- % (self.options_name, self.directive_name))
-
-
-"""
-The members of this module are documented using autodata in
-Cython/docs/src/reference/compilation.rst.
-See http://www.sphinx-doc.org/en/master/ext/autodoc.html#directive-autoattribute
-for how autodata works.
-Descriptions of those members should start with a #:
-Don't forget to keep the docs in sync by removing and adding
-the members in both this file and the .rst file.
-"""
-
-#: Whether or not to include docstring in the Python extension. If False, the binary size
-#: will be smaller, but the ``__doc__`` attribute of any class or function will be an
-#: empty string.
-docstrings = True
-
-#: Embed the source code position in the docstrings of functions and classes.
-embed_pos_in_docstring = False
-
-#: Copy the original source code line by line into C code comments
-#: in the generated code file to help with understanding the output.
-#: This is also required for coverage analysis.
-emit_code_comments = True
-
-# undocumented
-pre_import = None
-
-#: Decref global variables in each module on exit for garbage collection.
-#: 0: None, 1+: interned objects, 2+: cdef globals, 3+: types objects
-#: Mostly for reducing noise in Valgrind as it typically executes at process exit
-#: (when all memory will be reclaimed anyways).
-#: Note that directly or indirectly executed cleanup code that makes use of global
-#: variables or types may no longer be safe when enabling the respective level since
-#: there is no guaranteed order in which the (reference counted) objects will
-#: be cleaned up. The order can change due to live references and reference cycles.
-generate_cleanup_code = False
-
-#: Should tp_clear() set object fields to None instead of clearing them to NULL?
-clear_to_none = True
-
-#: Generate an annotated HTML version of the input source files for debugging and optimisation purposes.
-#: This has the same effect as the ``annotate`` argument in :func:`cythonize`.
-annotate = False
-
-# When annotating source files in HTML, include coverage information from
-# this file.
-annotate_coverage_xml = None
-
-#: This will abort the compilation on the first error occurred rather than trying
-#: to keep going and printing further error messages.
-fast_fail = False
-
-#: Turn all warnings into errors.
-warning_errors = False
-
-#: Make unknown names an error. Python raises a NameError when
-#: encountering unknown names at runtime, whereas this option makes
-#: them a compile time error. If you want full Python compatibility,
-#: you should disable this option and also 'cache_builtins'.
-error_on_unknown_names = True
-
-#: Make uninitialized local variable reference a compile time error.
-#: Python raises UnboundLocalError at runtime, whereas this option makes
-#: them a compile time error. Note that this option affects only variables
-#: of "python object" type.
-error_on_uninitialized = True
-
-#: This will convert statements of the form ``for i in range(...)``
-#: to ``for i from ...`` when ``i`` is a C integer type, and the direction
-#: (i.e. sign of step) can be determined.
-#: WARNING: This may change the semantics if the range causes assignment to
-#: i to overflow. Specifically, if this option is set, an error will be
-#: raised before the loop is entered, whereas without this option the loop
-#: will execute until an overflowing value is encountered.
-convert_range = True
-
-#: Perform lookups on builtin names only once, at module initialisation
-#: time. This will prevent the module from getting imported if a
-#: builtin name that it uses cannot be found during initialisation.
-#: Default is True.
-#: Note that some legacy builtins are automatically remapped
-#: from their Python 2 names to their Python 3 names by Cython
-#: when building in Python 3.x,
-#: so that they do not get in the way even if this option is enabled.
-cache_builtins = True
-
-#: Generate branch prediction hints to speed up error handling etc.
-gcc_branch_hints = True
-
-#: Enable this to allow one to write ``your_module.foo = ...`` to overwrite the
-#: definition of the cpdef function foo, at the cost of an extra dictionary
-#: lookup on every call.
-#: If this is false it generates only the Python wrapper and no override check.
-lookup_module_cpdef = False
-
-#: Whether or not to embed the Python interpreter, for use in making a
-#: standalone executable or calling from external libraries.
-#: This will provide a C function which initialises the interpreter and
-#: executes the body of this module.
-#: See `this demo <https://github.com/cython/cython/tree/master/Demos/embed>`_
-#: for a concrete example.
-#: If true, the initialisation function is the C main() function, but
-#: this option can also be set to a non-empty string to provide a function name explicitly.
-#: Default is False.
-embed = None
-
-# In previous iterations of Cython, globals() gave the first non-Cython module
-# globals in the call stack. Sage relies on this behavior for variable injection.
-old_style_globals = ShouldBeFromDirective('old_style_globals')
-
-#: Allows cimporting from a pyx file without a pxd file.
-cimport_from_pyx = False
-
-#: Maximum number of dimensions for buffers -- set lower than number of
-#: dimensions in numpy, as
-#: slices are passed by value and involve a lot of copying.
-buffer_max_dims = 8
-
-#: Number of function closure instances to keep in a freelist (0: no freelists)
-closure_freelist_size = 8
-
-
-def get_directive_defaults():
- # To add an item to this list, all accesses should be changed to use the new
- # directive, and the global option itself should be set to an instance of
- # ShouldBeFromDirective.
- for old_option in ShouldBeFromDirective.known_directives:
- value = globals().get(old_option.options_name)
- assert old_option.directive_name in _directive_defaults
- if not isinstance(value, ShouldBeFromDirective):
- if old_option.disallow:
- raise RuntimeError(
- "Option '%s' must be set from directive '%s'" % (
- old_option.options_name, old_option.directive_name))
- else:
- # Warn?
- _directive_defaults[old_option.directive_name] = value
- return _directive_defaults
-
-# Declare compiler directives
-_directive_defaults = {
- 'boundscheck' : True,
- 'nonecheck' : False,
- 'initializedcheck' : True,
- 'embedsignature' : False,
- 'auto_cpdef': False,
- 'auto_pickle': None,
- 'cdivision': False, # was True before 0.12
- 'cdivision_warnings': False,
- 'c_api_binop_methods': True,
- 'overflowcheck': False,
- 'overflowcheck.fold': True,
- 'always_allow_keywords': False,
- 'allow_none_for_extension_args': True,
- 'wraparound' : True,
- 'ccomplex' : False, # use C99/C++ for complex types and arith
- 'callspec' : "",
- 'nogil' : False,
- 'profile': False,
- 'linetrace': False,
- 'emit_code_comments': True, # copy original source code into C code comments
- 'annotation_typing': True, # read type declarations from Python function annotations
- 'infer_types': None,
- 'infer_types.verbose': False,
- 'autotestdict': True,
- 'autotestdict.cdef': False,
- 'autotestdict.all': False,
- 'language_level': None,
- 'fast_getattr': False, # Undocumented until we come up with a better way to handle this everywhere.
- 'py2_import': False, # For backward compatibility of Cython's source code in Py3 source mode
- 'preliminary_late_includes_cy28': False, # Temporary directive in 0.28, to be removed in a later version (see GH#2079).
- 'iterable_coroutine': False, # Make async coroutines backwards compatible with the old asyncio yield-from syntax.
- 'c_string_type': 'bytes',
- 'c_string_encoding': '',
- 'type_version_tag': True, # enables Py_TPFLAGS_HAVE_VERSION_TAG on extension types
- 'unraisable_tracebacks': True,
- 'old_style_globals': False,
- 'np_pythran': False,
- 'fast_gil': False,
-
- # set __file__ and/or __path__ to known source/target path at import time (instead of not having them available)
- 'set_initial_path' : None, # SOURCEFILE or "/full/path/to/module"
-
- 'warn': None,
- 'warn.undeclared': False,
- 'warn.unreachable': True,
- 'warn.maybe_uninitialized': False,
- 'warn.unused': False,
- 'warn.unused_arg': False,
- 'warn.unused_result': False,
- 'warn.multiple_declarators': True,
-
-# optimizations
- 'optimize.inline_defnode_calls': True,
- 'optimize.unpack_method_calls': True, # increases code size when True
- 'optimize.unpack_method_calls_in_pyinit': False, # uselessly increases code size when True
- 'optimize.use_switch': True,
-
-# remove unreachable code
- 'remove_unreachable': True,
-
-# control flow debug directives
- 'control_flow.dot_output': "", # Graphviz output filename
- 'control_flow.dot_annotate_defs': False, # Annotate definitions
-
-# test support
- 'test_assert_path_exists' : [],
- 'test_fail_if_path_exists' : [],
-
-# experimental, subject to change
- 'binding': None,
-
- 'formal_grammar': False,
-}
-
-# Extra warning directives
-extra_warnings = {
- 'warn.maybe_uninitialized': True,
- 'warn.unreachable': True,
- 'warn.unused': True,
-}
-
-def one_of(*args):
- def validate(name, value):
- if value not in args:
- raise ValueError("%s directive must be one of %s, got '%s'" % (
- name, args, value))
- else:
- return value
- return validate
-
-
-def normalise_encoding_name(option_name, encoding):
- """
- >>> normalise_encoding_name('c_string_encoding', 'ascii')
- 'ascii'
- >>> normalise_encoding_name('c_string_encoding', 'AsCIi')
- 'ascii'
- >>> normalise_encoding_name('c_string_encoding', 'us-ascii')
- 'ascii'
- >>> normalise_encoding_name('c_string_encoding', 'utF8')
- 'utf8'
- >>> normalise_encoding_name('c_string_encoding', 'utF-8')
- 'utf8'
- >>> normalise_encoding_name('c_string_encoding', 'deFAuLT')
- 'default'
- >>> normalise_encoding_name('c_string_encoding', 'default')
- 'default'
- >>> normalise_encoding_name('c_string_encoding', 'SeriousLyNoSuch--Encoding')
- 'SeriousLyNoSuch--Encoding'
- """
- if not encoding:
- return ''
- if encoding.lower() in ('default', 'ascii', 'utf8'):
- return encoding.lower()
- import codecs
- try:
- decoder = codecs.getdecoder(encoding)
- except LookupError:
- return encoding # may exist at runtime ...
- for name in ('ascii', 'utf8'):
- if codecs.getdecoder(name) == decoder:
- return name
- return encoding
-
-
-# Override types possibilities above, if needed
-directive_types = {
- 'language_level': str, # values can be None/2/3/'3str', where None == 2+warning
- 'auto_pickle': bool,
- 'locals': dict,
- 'final' : bool, # final cdef classes and methods
- 'nogil' : bool,
- 'internal' : bool, # cdef class visibility in the module dict
- 'infer_types' : bool, # values can be True/None/False
- 'binding' : bool,
- 'cfunc' : None, # decorators do not take directive value
- 'ccall' : None,
- 'inline' : None,
- 'staticmethod' : None,
- 'cclass' : None,
- 'no_gc_clear' : bool,
- 'no_gc' : bool,
- 'returns' : type,
- 'exceptval': type, # actually (type, check=True/False), but has its own parser
- 'set_initial_path': str,
- 'freelist': int,
- 'c_string_type': one_of('bytes', 'bytearray', 'str', 'unicode'),
- 'c_string_encoding': normalise_encoding_name,
-}
-
-for key, val in _directive_defaults.items():
- if key not in directive_types:
- directive_types[key] = type(val)
-
-directive_scopes = { # defaults to available everywhere
- # 'module', 'function', 'class', 'with statement'
- 'auto_pickle': ('module', 'cclass'),
- 'final' : ('cclass', 'function'),
- 'nogil' : ('function', 'with statement'),
- 'inline' : ('function',),
- 'cfunc' : ('function', 'with statement'),
- 'ccall' : ('function', 'with statement'),
- 'returns' : ('function',),
- 'exceptval' : ('function',),
- 'locals' : ('function',),
- 'staticmethod' : ('function',), # FIXME: analysis currently lacks more specific function scope
- 'no_gc_clear' : ('cclass',),
- 'no_gc' : ('cclass',),
- 'internal' : ('cclass',),
- 'cclass' : ('class', 'cclass', 'with statement'),
- 'autotestdict' : ('module',),
- 'autotestdict.all' : ('module',),
- 'autotestdict.cdef' : ('module',),
- 'set_initial_path' : ('module',),
- 'test_assert_path_exists' : ('function', 'class', 'cclass'),
- 'test_fail_if_path_exists' : ('function', 'class', 'cclass'),
- 'freelist': ('cclass',),
- 'emit_code_comments': ('module',),
- 'annotation_typing': ('module',), # FIXME: analysis currently lacks more specific function scope
- # Avoid scope-specific to/from_py_functions for c_string.
- 'c_string_type': ('module',),
- 'c_string_encoding': ('module',),
- 'type_version_tag': ('module', 'cclass'),
- 'language_level': ('module',),
- # globals() could conceivably be controlled at a finer granularity,
- # but that would complicate the implementation
- 'old_style_globals': ('module',),
- 'np_pythran': ('module',),
- 'fast_gil': ('module',),
- 'iterable_coroutine': ('module', 'function'),
-}
-
-
-def parse_directive_value(name, value, relaxed_bool=False):
- """
- Parses value as an option value for the given name and returns
- the interpreted value. None is returned if the option does not exist.
-
- >>> print(parse_directive_value('nonexisting', 'asdf asdfd'))
- None
- >>> parse_directive_value('boundscheck', 'True')
- True
- >>> parse_directive_value('boundscheck', 'true')
- Traceback (most recent call last):
- ...
- ValueError: boundscheck directive must be set to True or False, got 'true'
-
- >>> parse_directive_value('c_string_encoding', 'us-ascii')
- 'ascii'
- >>> parse_directive_value('c_string_type', 'str')
- 'str'
- >>> parse_directive_value('c_string_type', 'bytes')
- 'bytes'
- >>> parse_directive_value('c_string_type', 'bytearray')
- 'bytearray'
- >>> parse_directive_value('c_string_type', 'unicode')
- 'unicode'
- >>> parse_directive_value('c_string_type', 'unnicode')
- Traceback (most recent call last):
- ValueError: c_string_type directive must be one of ('bytes', 'bytearray', 'str', 'unicode'), got 'unnicode'
- """
- type = directive_types.get(name)
- if not type:
- return None
- orig_value = value
- if type is bool:
- value = str(value)
- if value == 'True':
- return True
- if value == 'False':
- return False
- if relaxed_bool:
- value = value.lower()
- if value in ("true", "yes"):
- return True
- elif value in ("false", "no"):
- return False
- raise ValueError("%s directive must be set to True or False, got '%s'" % (
- name, orig_value))
- elif type is int:
- try:
- return int(value)
- except ValueError:
- raise ValueError("%s directive must be set to an integer, got '%s'" % (
- name, orig_value))
- elif type is str:
- return str(value)
- elif callable(type):
- return type(name, value)
- else:
- assert False
-
-
-def parse_directive_list(s, relaxed_bool=False, ignore_unknown=False,
- current_settings=None):
- """
- Parses a comma-separated list of pragma options. Whitespace
- is not considered.
-
- >>> parse_directive_list(' ')
- {}
- >>> (parse_directive_list('boundscheck=True') ==
- ... {'boundscheck': True})
- True
- >>> parse_directive_list(' asdf')
- Traceback (most recent call last):
- ...
- ValueError: Expected "=" in option "asdf"
- >>> parse_directive_list('boundscheck=hey')
- Traceback (most recent call last):
- ...
- ValueError: boundscheck directive must be set to True or False, got 'hey'
- >>> parse_directive_list('unknown=True')
- Traceback (most recent call last):
- ...
- ValueError: Unknown option: "unknown"
- >>> warnings = parse_directive_list('warn.all=True')
- >>> len(warnings) > 1
- True
- >>> sum(warnings.values()) == len(warnings) # all true.
- True
- """
- if current_settings is None:
- result = {}
- else:
- result = current_settings
- for item in s.split(','):
- item = item.strip()
- if not item:
- continue
- if '=' not in item:
- raise ValueError('Expected "=" in option "%s"' % item)
- name, value = [s.strip() for s in item.strip().split('=', 1)]
- if name not in _directive_defaults:
- found = False
- if name.endswith('.all'):
- prefix = name[:-3]
- for directive in _directive_defaults:
- if directive.startswith(prefix):
- found = True
- parsed_value = parse_directive_value(directive, value, relaxed_bool=relaxed_bool)
- result[directive] = parsed_value
- if not found and not ignore_unknown:
- raise ValueError('Unknown option: "%s"' % name)
- else:
- parsed_value = parse_directive_value(name, value, relaxed_bool=relaxed_bool)
- result[name] = parsed_value
- return result
-
-
-def parse_variable_value(value):
- """
- Parses value as an option value for the given name and returns
- the interpreted value.
-
- >>> parse_variable_value('True')
- True
- >>> parse_variable_value('true')
- 'true'
- >>> parse_variable_value('us-ascii')
- 'us-ascii'
- >>> parse_variable_value('str')
- 'str'
- >>> parse_variable_value('123')
- 123
- >>> parse_variable_value('1.23')
- 1.23
-
- """
- if value == "True":
- return True
- elif value == "False":
- return False
- elif value == "None":
- return None
- elif value.isdigit():
- return int(value)
- else:
- try:
- value = float(value)
- except Exception:
- # Not a float
- pass
- return value
-
-
-def parse_compile_time_env(s, current_settings=None):
- """
- Parses a comma-separated list of pragma options. Whitespace
- is not considered.
-
- >>> parse_compile_time_env(' ')
- {}
- >>> (parse_compile_time_env('HAVE_OPENMP=True') ==
- ... {'HAVE_OPENMP': True})
- True
- >>> parse_compile_time_env(' asdf')
- Traceback (most recent call last):
- ...
- ValueError: Expected "=" in option "asdf"
- >>> parse_compile_time_env('NUM_THREADS=4') == {'NUM_THREADS': 4}
- True
- >>> parse_compile_time_env('unknown=anything') == {'unknown': 'anything'}
- True
- """
- if current_settings is None:
- result = {}
- else:
- result = current_settings
- for item in s.split(','):
- item = item.strip()
- if not item:
- continue
- if '=' not in item:
- raise ValueError('Expected "=" in option "%s"' % item)
- name, value = [s.strip() for s in item.split('=', 1)]
- result[name] = parse_variable_value(value)
- return result
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/FpxImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/FpxImagePlugin.py
deleted file mode 100644
index a55376d0e080de105d1b7a35e4e6763abdf2ef11..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/FpxImagePlugin.py
+++ /dev/null
@@ -1,245 +0,0 @@
-#
-# THIS IS WORK IN PROGRESS
-#
-# The Python Imaging Library.
-# $Id$
-#
-# FlashPix support for PIL
-#
-# History:
-# 97-01-25 fl Created (reads uncompressed RGB images only)
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1997.
-#
-# See the README file for information on usage and redistribution.
-#
-import olefile
-
-from . import Image, ImageFile
-from ._binary import i32le as i32
-
-# we map from colour field tuples to (mode, rawmode) descriptors
-MODES = {
- # opacity
- (0x00007FFE,): ("A", "L"),
- # monochrome
- (0x00010000,): ("L", "L"),
- (0x00018000, 0x00017FFE): ("RGBA", "LA"),
- # photo YCC
- (0x00020000, 0x00020001, 0x00020002): ("RGB", "YCC;P"),
- (0x00028000, 0x00028001, 0x00028002, 0x00027FFE): ("RGBA", "YCCA;P"),
- # standard RGB (NIFRGB)
- (0x00030000, 0x00030001, 0x00030002): ("RGB", "RGB"),
- (0x00038000, 0x00038001, 0x00038002, 0x00037FFE): ("RGBA", "RGBA"),
-}
-
-
-#
-# --------------------------------------------------------------------
-
-
-def _accept(prefix):
- return prefix[:8] == olefile.MAGIC
-
-
-##
-# Image plugin for the FlashPix images.
-
-
-class FpxImageFile(ImageFile.ImageFile):
-
- format = "FPX"
- format_description = "FlashPix"
-
- def _open(self):
- #
- # read the OLE directory and see if this is a likely
- # to be a FlashPix file
-
- try:
- self.ole = olefile.OleFileIO(self.fp)
- except OSError as e:
- raise SyntaxError("not an FPX file; invalid OLE file") from e
-
- if self.ole.root.clsid != "56616700-C154-11CE-8553-00AA00A1F95B":
- raise SyntaxError("not an FPX file; bad root CLSID")
-
- self._open_index(1)
-
- def _open_index(self, index=1):
- #
- # get the Image Contents Property Set
-
- prop = self.ole.getproperties(
- [f"Data Object Store {index:06d}", "\005Image Contents"]
- )
-
- # size (highest resolution)
-
- self._size = prop[0x1000002], prop[0x1000003]
-
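- # count the levels of the FlashPix resolution pyramid: each level halves the size until it is at most 64 pixels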
- size = max(self.size)
- i = 1
- while size > 64:
- size = size / 2
- i += 1
- self.maxid = i - 1
-
- # mode. instead of using a single field for this, flashpix
- # requires you to specify the mode for each channel in each
- # resolution subimage, and leaves it to the decoder to make
- # sure that they all match. for now, we'll cheat and assume
- # that this is always the case.
-
- id = self.maxid << 16
-
- s = prop[0x2000002 | id]
-
- colors = []
- bands = i32(s, 4)
- if bands > 4:
- raise OSError("Invalid number of bands")
- for i in range(bands):
- # note: for now, we ignore the "uncalibrated" flag
- colors.append(i32(s, 8 + i * 4) & 0x7FFFFFFF)
-
- self.mode, self.rawmode = MODES[tuple(colors)]
-
- # load JPEG tables, if any
- self.jpeg = {}
- for i in range(256):
- id = 0x3000001 | (i << 16)
- if id in prop:
- self.jpeg[i] = prop[id]
-
- self._open_subimage(1, self.maxid)
-
- def _open_subimage(self, index=1, subimage=0):
- #
- # setup tile descriptors for a given subimage
-
- stream = [
- f"Data Object Store {index:06d}",
- f"Resolution {subimage:04d}",
- "Subimage 0000 Header",
- ]
-
- fp = self.ole.openstream(stream)
-
- # skip prefix
- fp.read(28)
-
- # header stream
- s = fp.read(36)
-
- size = i32(s, 4), i32(s, 8)
- # tilecount = i32(s, 12)
- tilesize = i32(s, 16), i32(s, 20)
- # channels = i32(s, 24)
- offset = i32(s, 28)
- length = i32(s, 32)
-
- if size != self.size:
- raise OSError("subimage mismatch")
-
- # get tile descriptors
- fp.seek(28 + offset)
- s = fp.read(i32(s, 12) * length)
-
- x = y = 0
- xsize, ysize = size
- xtile, ytile = tilesize
- self.tile = []
-
- for i in range(0, len(s), length):
-
- x1 = min(xsize, x + xtile)
- y1 = min(ysize, y + ytile)
-
- compression = i32(s, i + 8)
-
- if compression == 0:
- self.tile.append(
- (
- "raw",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (self.rawmode,),
- )
- )
-
- elif compression == 1:
-
- # FIXME: the fill decoder is not implemented
- self.tile.append(
- (
- "fill",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (self.rawmode, s[12:16]),
- )
- )
-
- elif compression == 2:
-
- internal_color_conversion = s[14]
- jpeg_tables = s[15]
- rawmode = self.rawmode
-
- if internal_color_conversion:
- # The image is stored as usual (usually YCbCr).
- if rawmode == "RGBA":
- # For "RGBA", data is stored as YCbCrA based on
- # negative RGB. The following trick works around
- # this problem :
- jpegmode, rawmode = "YCbCrK", "CMYK"
- else:
- jpegmode = None # let the decoder decide
-
- else:
- # The image is stored as defined by rawmode
- jpegmode = rawmode
-
- self.tile.append(
- (
- "jpeg",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (rawmode, jpegmode),
- )
- )
-
- # FIXME: jpeg tables are tile dependent; the prefix
- # data must be placed in the tile descriptor itself!
-
- if jpeg_tables:
- self.tile_prefix = self.jpeg[jpeg_tables]
-
- else:
- raise OSError("unknown/invalid compression")
-
- x = x + xtile
- if x >= xsize:
- x, y = 0, y + ytile
- if y >= ysize:
- break # isn't really required
-
- self.stream = stream
- self.fp = None
-
- def load(self):
-
- if not self.fp:
- self.fp = self.ole.openstream(self.stream[:2] + ["Subimage 0000 Data"])
-
- return ImageFile.ImageFile.load(self)
-
-
-#
-# --------------------------------------------------------------------
-
-
-Image.register_open(FpxImageFile.format, FpxImageFile, _accept)
-
-Image.register_extension(FpxImageFile.format, ".fpx")
diff --git a/spaces/aus10powell/TwitterAccounts/scripts/utils.py b/spaces/aus10powell/TwitterAccounts/scripts/utils.py
deleted file mode 100644
index b5a8ae3fb56a8ba94bc25de616c702a370be1d9e..0000000000000000000000000000000000000000
--- a/spaces/aus10powell/TwitterAccounts/scripts/utils.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import json
-import os
-from datetime import date, timedelta
-
-
-def list_files(startpath):
- for root, dirs, files in os.walk(startpath):
- level = root.replace(startpath, "").count(os.sep)
- indent = " " * 4 * (level)
- print("{}{}/".format(indent, os.path.basename(root)))
- subindent = " " * 4 * (level + 1)
- for f in files:
- print("{}{}".format(subindent, f))
-
-
-def load_configs(configs_file_path):
- """Load a configuration file containing keys and secrets and return them as a dictionary.
-
- Args:
- configs_file_path (str): The path to the configuration file to load.
-
- Returns:
- dict: A dictionary containing the keys and secrets loaded from the configuration file.
- """
- with open(configs_file_path) as f:
- configs = json.load(f)
- return configs
-
-
-def wilson_score_interval(obs, conf_level=0.95):
- """
- Calculates the Wilson score interval for a given list of observations.
-
- Args:
- obs (list): A list of observations (0s and 1s).
- conf_level (float): The desired confidence level (default: 0.95).
-
- Returns:
- tuple: A tuple with the lower and upper bounds of the confidence interval.
- """
- import math
- from scipy import stats
-
- n = len(obs)
- if n == 0:
- return None
- z = stats.norm.ppf(1 - (1 - conf_level) / 2)
- phat = sum(obs) / n
- term = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
- lower_bound = (phat + z * z / (2 * n) - term) / (1 + z * z / n)
- upper_bound = (phat + z * z / (2 * n) + term) / (1 + z * z / n)
- return round(lower_bound, 2), round(upper_bound, 2)
-
-
-def get_yesterday_date():
- today = date.today()
- yesterday = today - timedelta(days=1)
- return yesterday
-
-
-def load_data_from_file(filename):
- """
- Loads data from a file in JSON format if the file exists.
-
- Args:
- filename (str): The name of the file to load.
-
- Returns:
- The loaded data, or None if the file doesn't exist.
- """
- if os.path.isfile(filename):
- with open(filename, "r") as f:
- data = json.load(f)
- return data
- else:
- return None
diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/lr_scheduler.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/lr_scheduler.py
deleted file mode 100644
index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000
--- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/lr_scheduler.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-
-
-class LambdaWarmUpCosineScheduler:
- """
- note: use with a base_lr of 1.0
- """
- def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0):
- self.lr_warm_up_steps = warm_up_steps
- self.lr_start = lr_start
- self.lr_min = lr_min
- self.lr_max = lr_max
- self.lr_max_decay_steps = max_decay_steps
- self.last_lr = 0.
- self.verbosity_interval = verbosity_interval
-
- def schedule(self, n, **kwargs):
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}")
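- # linear warm-up from lr_start to lr_max, then cosine decay towards lr_min over max_decay_steps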
- if n < self.lr_warm_up_steps:
- lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start
- self.last_lr = lr
- return lr
- else:
- t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps)
- t = min(t, 1.0)
- lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * (
- 1 + np.cos(t * np.pi))
- self.last_lr = lr
- return lr
-
- def __call__(self, n, **kwargs):
- return self.schedule(n,**kwargs)
-
-
-class LambdaWarmUpCosineScheduler2:
- """
- supports repeated iterations, configurable via lists
- note: use with a base_lr of 1.0.
- """
- def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0):
- assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths)
- self.lr_warm_up_steps = warm_up_steps
- self.f_start = f_start
- self.f_min = f_min
- self.f_max = f_max
- self.cycle_lengths = cycle_lengths
- self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths))
- self.last_f = 0.
- self.verbosity_interval = verbosity_interval
-
- def find_in_interval(self, n):
- interval = 0
- for cl in self.cum_cycles[1:]:
- if n <= cl:
- return interval
- interval += 1
-
- def schedule(self, n, **kwargs):
- cycle = self.find_in_interval(n)
- n = n - self.cum_cycles[cycle]
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
- f"current cycle {cycle}")
- if n < self.lr_warm_up_steps[cycle]:
- f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
- self.last_f = f
- return f
- else:
- t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
- t = min(t, 1.0)
- f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
- 1 + np.cos(t * np.pi))
- self.last_f = f
- return f
-
- def __call__(self, n, **kwargs):
- return self.schedule(n, **kwargs)
-
-
-class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2):
-
- def schedule(self, n, **kwargs):
- cycle = self.find_in_interval(n)
- n = n - self.cum_cycles[cycle]
- if self.verbosity_interval > 0:
- if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
- f"current cycle {cycle}")
-
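- # linear warm-up followed by a linear decay to f_min over the remaining cycle length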
- if n < self.lr_warm_up_steps[cycle]:
- f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
- self.last_f = f
- return f
- else:
- f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
- self.last_f = f
- return f
-
diff --git a/spaces/awacke1/Amygdala.Hijacking.Using.Graph.Model/app.py b/spaces/awacke1/Amygdala.Hijacking.Using.Graph.Model/app.py
deleted file mode 100644
index f0006f96f7f7bd3401444fdbfa0d2e85690b38ea..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Amygdala.Hijacking.Using.Graph.Model/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import streamlit as st
-from graphviz import Digraph
-
-# The function creates a directed graph with nodes representing different parts of the brain or processes involved in decision-making. The edges denote the flow of information between these nodes.
-
-def create_amygdala_hijacking_graph():
- g = Digraph('Amygdala_Hijacking', format='png')
-
- g.attr(fontname="Helvetica,Arial,sans-serif")
- g.attr('node', fontname="Helvetica,Arial,sans-serif")
- g.attr('edge', fontname="Helvetica,Arial,sans-serif")
- g.attr('graph', newrank='true', nodesep='0.3', ranksep='0.2', overlap='true', splines='false')
- g.attr('node', fixedsize='false', fontsize='24', height='1', shape='box', style='filled,setlinewidth(5)', width='2.2', penwidth='3')
- g.attr('edge', arrowhead='none', arrowsize='0.5', labelfontname="Ubuntu", weight='10', style='filled,setlinewidth(5)')
-
- g.node('1', '👂 Sensory Input', fillcolor='lightblue')
- g.node('2', '📡 Thalamus', fillcolor='palegreen')
- g.node('3', '🔴 Amygdala', color='red', fillcolor='red', fontcolor='white')
- g.node('4', '📚 Hippocampus', fillcolor='lightyellow')
- g.node('5', '💡 Prefrontal Cortex', fillcolor='lightpink')
- g.node('6', '🎬 Response', fillcolor='lightgray')
-
- g.edge('1', '2', label='🌐 Receives Signals')
- g.edge('2', '3', label='⚡ Quick, Emotional Response')
- g.edge('2', '4', label='🔀 Sends Signals To')
- g.edge('4', '5', label='🔄 Relays Information')
- g.edge('5', '3', label='🧠 Rational Control (If Not Hijacked)')
- g.edge('3', '6', label='🏃 Generates Response')
-
- return g
-
-
-
-
-def main():
- st.title("Amygdala Hijacking Visualization")
- st.text("A simple graph model to represent amygdala hijacking in the brain.")
-
- amygdala_hijacking_graph = create_amygdala_hijacking_graph()
- st.graphviz_chart(amygdala_hijacking_graph)
-
-if __name__ == "__main__":
- main()
-
-
-
-
-st.markdown("""
-
-Explain amygdala hijacking using a graph model in streamlit python program using graphviz to represent levels or modes of thinking
-
-Amygdala hijacking is a phenomenon where our emotional brain (amygdala) takes control over our rational brain (prefrontal cortex), leading to impulsive and irrational behavior. In this response, I'll guide you on how to create a Streamlit app with Graphviz to visualize the concept of amygdala hijacking using a graph model.
-
-""")
\ No newline at end of file
diff --git a/spaces/awacke1/MistralAndABardGoRoleplaying/app.py b/spaces/awacke1/MistralAndABardGoRoleplaying/app.py
deleted file mode 100644
index 67fb52d3bd85e5fa86bca1991a2ec95927d9892a..0000000000000000000000000000000000000000
--- a/spaces/awacke1/MistralAndABardGoRoleplaying/app.py
+++ /dev/null
@@ -1,172 +0,0 @@
-from huggingface_hub import InferenceClient
-import gradio as gr
-
-client = InferenceClient(
- "mistralai/Mistral-7B-Instruct-v0.1"
-)
-
-
-def format_prompt(message, history):
- prompt = ""
- for user_prompt, bot_response in history:
- prompt += f"[INST] {user_prompt} [/INST]"
- prompt += f" {bot_response} "
- prompt += f"[INST] {message} [/INST]"
- return prompt
-
-def generate(
- prompt, history, temperature=0.9, max_new_tokens=256, top_p=0.95, repetition_penalty=1.0,
-):
- temperature = float(temperature)
- if temperature < 1e-2:
- temperature = 1e-2
- top_p = float(top_p)
-
- generate_kwargs = dict(
- temperature=temperature,
- max_new_tokens=max_new_tokens,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- do_sample=True,
- seed=42,
- )
-
- formatted_prompt = format_prompt(prompt, history)
-
- stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False)
- output = ""
-
- for response in stream:
- output += response.token.text
- yield output
- return output
-
-
-additional_inputs=[
- gr.Slider(
- label="Temperature",
- value=0.9,
- minimum=0.0,
- maximum=1.0,
- step=0.05,
- interactive=True,
- info="Higher values produce more diverse outputs",
- ),
- gr.Slider(
- label="Max new tokens",
- value=256,
- minimum=0,
- maximum=1048,
- step=64,
- interactive=True,
- info="The maximum numbers of new tokens",
- ),
- gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.90,
- minimum=0.0,
- maximum=1,
- step=0.05,
- interactive=True,
- info="Higher values sample more low-probability tokens",
- ),
- gr.Slider(
- label="Repetition penalty",
- value=1.2,
- minimum=1.0,
- maximum=2.0,
- step=0.05,
- interactive=True,
- info="Penalize repeated tokens",
- )
-]
-
-css = """
- #mkd {
- height: 200px;
- overflow: auto;
- border: 1px solid #ccc;
- }
-"""
-
-with gr.Blocks(css=css) as demo:
-
- gr.ChatInterface(
- generate,
- additional_inputs=additional_inputs,
- examples = [
- ["🏰 Welcome to the Kingdom of Elandria! You are Jim and Tim, two bumbling bros with a knack for mischief. 🤴🤴 [Action: Introduce yourselves, Equipment: Scepters of Foolishness]"],
- ["🌲 You find yourselves in a forest filled with magical creatures and oddly specific 'Do Not Disturb' signs. 🦄 [Action: Proceed cautiously, Equipment: Map of Social Etiquette]"],
- ["🍻 You stumble upon a dwarf tavern. They offer you 'Beard Beer.' Do you drink it? 🍺 [Action: Chug or Pass, Equipment: Empty Mugs]"],
- ["🐉 A vegan dragon appears and chastises you for your leather boots. What do you do? 🗡️🏃 [Action: Apologize and offer kale, Equipment: Non-leather sandals]"],
- ["💎 You find a treasure chest labeled 'Not a Mimic.' Seems legit. Do you open it? 🗝️ [Action: Open or No way, Equipment: Mimic Repellent]"],
- ["🦇 You're swarmed by bats in a cave. One bat offers you 'organic guano.' How do you react? 🕯️ [Action: Politely decline, Equipment: Nose Plugs]"],
- ["🔮 A mysterious sorcerer offers you a 'Love Potion No. 9½.' Do you take a sip? 🍶 [Action: Sip or Skip, Equipment: Breath Mints]"],
- ["⚔️ Bandits demand gold, but they accept credit cards. What's your move? 💰 [Action: Pay or Pray, Equipment: Wallets]"],
- ["🚪 A door with three locks and a sign saying 'Beware of the Dog.' Do you search for the keys or try to pet the dog? 🗝️💪 [Action: Unlock or Pet, Equipment: Dog Treats]"],
- ["🌊 A river blocks your path. A mermaid offers to carry you across for a 'small' fee. 🏊♀️🌉 [Action: Accept or Decline, Equipment: Bargaining Skills]"],
- ["🦁 You encounter a pride of lions playing poker. Do you join the game or fold? 🤫🔄 [Action: Play or Fold, Equipment: Poker Face]"],
- ["🍎 A tree filled with golden apples and a sign saying, 'Seriously, don't eat these!' What do you do? 🤔 [Action: Eat or Retreat, Equipment: Stomach Pump]"],
- ["🌕 The moon turns red, wolves start howling, and your horoscope says 'Stay in bed.' Do you camp or go? 🏕️🚶 [Action: Camp or Scamp, Equipment: Astrology App]"],
- ["💀 The final boss is an undead warrior selling life insurance. Do you combat or sign up? ⚔️🤝 [Action: Fight or Finance, Equipment: Policy Guide]"]
- ]
- )
- gr.HTML("""
- 🤖 Mistral Chat - Gradio 🤖
- In this demo, you can chat with Mistral-7B-Instruct model. 💬
- Learn more about the model here. 📚
-
- 🛠 Model Features 🛠
-
-
- 🪟 Sliding Window Attention with 128K tokens span
-
- 🚀 GQA for faster inference
-
- 📝 Byte-fallback BPE tokenizer
-
-
- 📜 License 📜 Released under Apache 2.0 License
-
- 📦 Usage 📦
-
-
- 📚 Available on Huggingface Hub
-
- 🐍 Python code snippets for easy setup
-
- 📈 Expected speedups with Flash Attention 2
-
- """)
-
- markdown="""
- | Feature | Description | Byline |
- |---------|-------------|--------|
- | 🪟 Sliding Window Attention with 128K tokens span | Enables the model to have a larger context for each token. | Increases model's understanding of context, resulting in more coherent and contextually relevant outputs. |
- | 🚀 GQA for faster inference | Grouped-Query Attention allows faster computation during inference. | Speeds up the model inference time without sacrificing too much on accuracy. |
- | 📝 Byte-fallback BPE tokenizer | Uses Byte Pair Encoding but can fall back to byte-level encoding. | Allows the tokenizer to handle a wider variety of input text while keeping token size manageable. |
- | 📜 License | Released under Apache 2.0 License | Gives you a permissive free software license, allowing you freedom to use, modify, and distribute the code. |
- | 📦 Usage | | |
- | 📚 Available on Huggingface Hub | The model can be easily downloaded and set up from Huggingface. | Makes it easier to integrate the model into various projects. |
- | 🐍 Python code snippets for easy setup | Provides Python code snippets for quick and easy model setup. | Facilitates rapid development and deployment, especially useful for prototyping. |
- | 📈 Expected speedups with Flash Attention 2 | Upcoming update expected to bring speed improvements. | Keep an eye out for this update to benefit from performance gains. |
-# 🛠 Model Features and More 🛠
-
-## Features
-
-- 🪟 Sliding Window Attention with 128K tokens span
- - **Byline**: Increases model's understanding of context, resulting in more coherent and contextually relevant outputs.
-
-- 🚀 GQA for faster inference
- - **Byline**: Speeds up the model inference time without sacrificing too much on accuracy.
-
-- 📝 Byte-fallback BPE tokenizer
- - **Byline**: Allows the tokenizer to handle a wider variety of input text while keeping token size manageable.
-
-- 📜 License: Released under Apache 2.0 License
- - **Byline**: Gives you a permissive free software license, allowing you freedom to use, modify, and distribute the code.
-
-## Usage 📦
-
-- 📚 Available on Huggingface Hub
- - **Byline**: Makes it easier to integrate the model into various projects.
-
-- 🐍 Python code snippets for easy setup
- - **Byline**: Facilitates rapid development and deployment, especially useful for prototyping.
-
-- 📈 Expected speedups with Flash Attention 2
- - **Byline**: Keep an eye out for this update to benefit from performance gains.
- """
- gr.Markdown(markdown)
-
-demo.queue().launch(debug=True)
\ No newline at end of file
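Note (editor's sketch, not part of the deleted file above): a worked example of the string that format_prompt assembles, following the [INST] ... [/INST] convention the app uses for Mistral-7B-Instruct; the conversation content is invented for illustration and format_prompt is assumed to be in scope.

history = [("Who are you?", "I am the narrator of this adventure.")]
print(format_prompt("Roll for initiative.", history))
# [INST] Who are you? [/INST] I am the narrator of this adventure. [INST] Roll for initiative. [/INST]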
diff --git a/spaces/azizalto/youtube_downloader/app.py b/spaces/azizalto/youtube_downloader/app.py
deleted file mode 100644
index 9ef3f29b6f68ae66ac7511d45469fd225bd3755b..0000000000000000000000000000000000000000
--- a/spaces/azizalto/youtube_downloader/app.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import streamlit as st
-from pytube import YouTube
-
-# By Aziz Alto (https://github.com/iamaziz)
-# Thanks to:
-# https://twitter.com/koladev32/status/1460200958353305601
-# https://github.com/pytube/pytube
-
-
-class YouTubeDownloader:
- @staticmethod
- def run():
- st.header("YouTube Video Downloader")
- url = st.text_input("Enter YouTube URL to download:")
- if url:
- YouTubeDownloader.validate_url(url)
- with st.expander("preview video"):
- st.video(url)
- if st.button("Download"):
- YouTubeDownloader.cleanup()
- file_ = YouTubeDownloader.download_video(url)
- st.video(file_)
- YouTubeDownloader.helper_message()
- st.markdown("> App for fun and learning by [aziz](https://github.com/iamaziz/YouTube_downloader_app)")
-
- @staticmethod
- def download_video(url):
- with st.spinner("Downloading..."):
- local_file = (
- YouTube(url)
- .streams.filter(progressive=True, file_extension="mp4")
- .first()
- .download()
- )
- st.success("Downloaded")
- return local_file
-
- @staticmethod
- def validate_url(url):
- import validators
-
- if not validators.url(url):
- st.error("Hi there 👋 URL seems invalid 👽")
- st.stop()
-
- @classmethod
- def cleanup(cls):
- import pathlib
- import glob
-
- junks = glob.glob("*.mp4")
- for junk in junks:
- pathlib.Path(junk).unlink()
-
- @classmethod
- def helper_message(cls):
- st.write(
- "> To save the video to local computer, "
- "click the vertical ... icon (aka hamburger button) in the bottom-right corner (in the video above) and click download."
- )
-
-
-if __name__ == "__main__":
- YouTubeDownloader.run()
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/math/ColorConverter.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/math/ColorConverter.js
deleted file mode 100644
index 15dd84e76593971d35fa46afea6ffa29bfb73ff4..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/math/ColorConverter.js
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * @author bhouston / http://exocortex.com/
- * @author zz85 / http://github.com/zz85
- */
-
-THREE.ColorConverter = {
-
- setHSV: function ( color, h, s, v ) {
-
- // https://gist.github.com/xpansive/1337890#file-index-js
-
- h = THREE.Math.euclideanModulo( h, 1 );
- s = THREE.Math.clamp( s, 0, 1 );
- v = THREE.Math.clamp( v, 0, 1 );
-
- return color.setHSL( h, ( s * v ) / ( ( h = ( 2 - s ) * v ) < 1 ? h : ( 2 - h ) ), h * 0.5 );
-
- },
-
- getHSV: function() {
-
- var hsl = {};
-
- return function getHSV( color, target ) {
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.ColorConverter: .getHSV() target is now required' );
- target = { h: 0, s: 0, l: 0 };
-
- }
-
- color.getHSL( hsl );
-
- // based on https://gist.github.com/xpansive/1337890#file-index-js
- hsl.s *= ( hsl.l < 0.5 ) ? hsl.l : ( 1 - hsl.l );
-
- target.h = hsl.h;
- target.s = 2 * hsl.s / ( hsl.l + hsl.s );
- target.v = hsl.l + hsl.s;
-
- return target;
-
- };
-
- }(),
-
- // where c, m, y, k is between 0 and 1
-
- setCMYK: function ( color, c, m, y, k ) {
-
- var r = ( 1 - c ) * ( 1 - k );
- var g = ( 1 - m ) * ( 1 - k );
- var b = ( 1 - y ) * ( 1 - k );
-
- return color.setRGB( r, g, b );
-
- },
-
- getCMYK: function ( color, target ) {
-
- if ( target === undefined ) {
-
- console.warn( 'THREE.ColorConverter: .getCMYK() target is now required' );
- target = { c: 0, m: 0, y: 0, k:0 };
-
- }
-
- var r = color.r;
- var g = color.g;
- var b = color.b;
-
- var k = 1 - Math.max( r, g, b );
- var c = ( 1 - r - k ) / ( 1 - k );
- var m = ( 1 - g - k ) / ( 1 - k );
- var y = ( 1 - b - k ) / ( 1 - k );
-
- target.c = c;
- target.m = m;
- target.y = y;
- target.k = k;
-
- return target;
-
- }
-
-
-};
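Note (editor's sketch, not part of the deleted file above): the CMYK-to-RGB mapping used by setCMYK, restated as a small Python snippet so the per-channel formula r = (1 - c) * (1 - k) (and likewise for g and b) is easy to check.

def cmyk_to_rgb(c, m, y, k):
    # each channel removes its own ink plus the shared black component
    return ((1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k))

print(cmyk_to_rgb(0.0, 1.0, 1.0, 0.0))   # pure red   -> (1.0, 0.0, 0.0)
print(cmyk_to_rgb(0.0, 0.0, 0.0, 1.0))   # full black -> (0.0, 0.0, 0.0)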
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/HemisphereLightHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/HemisphereLightHelper.js
deleted file mode 100644
index 32da6914e60c09231b85df6845b3cc5760747f7b..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/HemisphereLightHelper.js
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
- * @author alteredq / http://alteredqualia.com/
- * @author mrdoob / http://mrdoob.com/
- * @author Mugen87 / https://github.com/Mugen87
- */
-
-import { Vector3 } from '../math/Vector3.js';
-import { Color } from '../math/Color.js';
-import { Object3D } from '../core/Object3D.js';
-import { Mesh } from '../objects/Mesh.js';
-import { VertexColors } from '../constants.js';
-import { MeshBasicMaterial } from '../materials/MeshBasicMaterial.js';
-import { OctahedronBufferGeometry } from '../geometries/OctahedronGeometry.js';
-import { BufferAttribute } from '../core/BufferAttribute.js';
-
-function HemisphereLightHelper( light, size, color ) {
-
- Object3D.call( this );
-
- this.light = light;
- this.light.updateMatrixWorld();
-
- this.matrix = light.matrixWorld;
- this.matrixAutoUpdate = false;
-
- this.color = color;
-
- var geometry = new OctahedronBufferGeometry( size );
- geometry.rotateY( Math.PI * 0.5 );
-
- this.material = new MeshBasicMaterial( { wireframe: true, fog: false } );
- if ( this.color === undefined ) this.material.vertexColors = VertexColors;
-
- var position = geometry.getAttribute( 'position' );
- var colors = new Float32Array( position.count * 3 );
-
- geometry.addAttribute( 'color', new BufferAttribute( colors, 3 ) );
-
- this.add( new Mesh( geometry, this.material ) );
-
- this.update();
-
-}
-
-HemisphereLightHelper.prototype = Object.create( Object3D.prototype );
-HemisphereLightHelper.prototype.constructor = HemisphereLightHelper;
-
-HemisphereLightHelper.prototype.dispose = function () {
-
- this.children[ 0 ].geometry.dispose();
- this.children[ 0 ].material.dispose();
-
-};
-
-HemisphereLightHelper.prototype.update = function () {
-
- var vector = new Vector3();
-
- var color1 = new Color();
- var color2 = new Color();
-
- return function update() {
-
- var mesh = this.children[ 0 ];
-
- if ( this.color !== undefined ) {
-
- this.material.color.set( this.color );
-
- } else {
-
- var colors = mesh.geometry.getAttribute( 'color' );
-
- color1.copy( this.light.color );
- color2.copy( this.light.groundColor );
-
- for ( var i = 0, l = colors.count; i < l; i ++ ) {
-
- var color = ( i < ( l / 2 ) ) ? color1 : color2;
-
- colors.setXYZ( i, color.r, color.g, color.b );
-
- }
-
- colors.needsUpdate = true;
-
- }
-
- mesh.lookAt( vector.setFromMatrixPosition( this.light.matrixWorld ).negate() );
-
- };
-
-}();
-
-
-export { HemisphereLightHelper };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshBasicMaterial.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshBasicMaterial.d.ts
deleted file mode 100644
index 66f902f106278f41565b74dc0835358a27be1a64..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshBasicMaterial.d.ts
+++ /dev/null
@@ -1,49 +0,0 @@
-import { Color } from './../math/Color';
-import { Texture } from './../textures/Texture';
-import { MaterialParameters, Material } from './Material';
-import { Combine } from '../constants';
-/**
- * parameters is an object with one or more properties defining the material's appearance.
- */
-export interface MeshBasicMaterialParameters extends MaterialParameters {
- color?: Color | string | number;
- opacity?: number;
- map?: Texture;
- aoMap?: Texture;
- aoMapIntensity?: number;
- specularMap?: Texture;
- alphaMap?: Texture;
- envMap?: Texture;
- combine?: Combine;
- reflectivity?: number;
- refractionRatio?: number;
- wireframe?: boolean;
- wireframeLinewidth?: number;
- wireframeLinecap?: string;
- wireframeLinejoin?: string;
- skinning?: boolean;
- morphTargets?: boolean;
-}
-
-export class MeshBasicMaterial extends Material {
- constructor(parameters?: MeshBasicMaterialParameters);
-
- color: Color;
- map: Texture | null;
- aoMap: Texture | null;
- aoMapIntensity: number;
- specularMap: Texture | null;
- alphaMap: Texture | null;
- envMap: Texture | null;
- combine: Combine;
- reflectivity: number;
- refractionRatio: number;
- wireframe: boolean;
- wireframeLinewidth: number;
- wireframeLinecap: string;
- wireframeLinejoin: string;
- skinning: boolean;
- morphTargets: boolean;
-
- setValues(parameters: MeshBasicMaterialParameters): void;
-}
diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/__init__.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/__init__.py
deleted file mode 100644
index c729973c4e57e6d1aaa34b7623eebc78c5a49ea2..0000000000000000000000000000000000000000
--- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/guessr/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .abstract_guessr import AbstractGuessr
-from .random_guessr import RandomGuessr
-from .nearest_neighbor_embedder_guessr import NearestNeighborEmbedderGuessr
-from .average_neighbor_embedder_guessr import AverageNeighborsEmbedderGuessr
diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/osnet_ain.py b/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/osnet_ain.py
deleted file mode 100644
index 3f9f7bd0704502401d499fd2bfdb802522b99efe..0000000000000000000000000000000000000000
--- a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/osnet_ain.py
+++ /dev/null
@@ -1,609 +0,0 @@
-from __future__ import division, absolute_import
-import warnings
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-__all__ = [
- 'osnet_ain_x1_0', 'osnet_ain_x0_75', 'osnet_ain_x0_5', 'osnet_ain_x0_25'
-]
-
-pretrained_urls = {
- 'osnet_ain_x1_0':
- 'https://drive.google.com/uc?id=1-CaioD9NaqbHK_kzSMW8VE4_3KcsRjEo',
- 'osnet_ain_x0_75':
- 'https://drive.google.com/uc?id=1apy0hpsMypqstfencdH-jKIUEFOW4xoM',
- 'osnet_ain_x0_5':
- 'https://drive.google.com/uc?id=1KusKvEYyKGDTUBVRxRiz55G31wkihB6l',
- 'osnet_ain_x0_25':
- 'https://drive.google.com/uc?id=1SxQt2AvmEcgWNhaRb2xC4rP6ZwVDP0Wt'
-}
-
-
-##########
-# Basic layers
-##########
-class ConvLayer(nn.Module):
- """Convolution layer (conv + bn + relu)."""
-
- def __init__(
- self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- groups=1,
- IN=False
- ):
- super(ConvLayer, self).__init__()
- self.conv = nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- bias=False,
- groups=groups
- )
- if IN:
- self.bn = nn.InstanceNorm2d(out_channels, affine=True)
- else:
- self.bn = nn.BatchNorm2d(out_channels)
- self.relu = nn.ReLU()
-
- def forward(self, x):
- x = self.conv(x)
- x = self.bn(x)
- return self.relu(x)
-
-
-class Conv1x1(nn.Module):
- """1x1 convolution + bn + relu."""
-
- def __init__(self, in_channels, out_channels, stride=1, groups=1):
- super(Conv1x1, self).__init__()
- self.conv = nn.Conv2d(
- in_channels,
- out_channels,
- 1,
- stride=stride,
- padding=0,
- bias=False,
- groups=groups
- )
- self.bn = nn.BatchNorm2d(out_channels)
- self.relu = nn.ReLU()
-
- def forward(self, x):
- x = self.conv(x)
- x = self.bn(x)
- return self.relu(x)
-
-
-class Conv1x1Linear(nn.Module):
- """1x1 convolution + bn (w/o non-linearity)."""
-
- def __init__(self, in_channels, out_channels, stride=1, bn=True):
- super(Conv1x1Linear, self).__init__()
- self.conv = nn.Conv2d(
- in_channels, out_channels, 1, stride=stride, padding=0, bias=False
- )
- self.bn = None
- if bn:
- self.bn = nn.BatchNorm2d(out_channels)
-
- def forward(self, x):
- x = self.conv(x)
- if self.bn is not None:
- x = self.bn(x)
- return x
-
-
-class Conv3x3(nn.Module):
- """3x3 convolution + bn + relu."""
-
- def __init__(self, in_channels, out_channels, stride=1, groups=1):
- super(Conv3x3, self).__init__()
- self.conv = nn.Conv2d(
- in_channels,
- out_channels,
- 3,
- stride=stride,
- padding=1,
- bias=False,
- groups=groups
- )
- self.bn = nn.BatchNorm2d(out_channels)
- self.relu = nn.ReLU()
-
- def forward(self, x):
- x = self.conv(x)
- x = self.bn(x)
- return self.relu(x)
-
-
-class LightConv3x3(nn.Module):
- """Lightweight 3x3 convolution.
-
- 1x1 (linear) + dw 3x3 (nonlinear).
- """
-
- def __init__(self, in_channels, out_channels):
- super(LightConv3x3, self).__init__()
- self.conv1 = nn.Conv2d(
- in_channels, out_channels, 1, stride=1, padding=0, bias=False
- )
- self.conv2 = nn.Conv2d(
- out_channels,
- out_channels,
- 3,
- stride=1,
- padding=1,
- bias=False,
- groups=out_channels
- )
- self.bn = nn.BatchNorm2d(out_channels)
- self.relu = nn.ReLU()
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.conv2(x)
- x = self.bn(x)
- return self.relu(x)
-
-
-class LightConvStream(nn.Module):
- """Lightweight convolution stream."""
-
- def __init__(self, in_channels, out_channels, depth):
- super(LightConvStream, self).__init__()
- assert depth >= 1, 'depth must be equal to or larger than 1, but got {}'.format(
- depth
- )
- layers = []
- layers += [LightConv3x3(in_channels, out_channels)]
- for i in range(depth - 1):
- layers += [LightConv3x3(out_channels, out_channels)]
- self.layers = nn.Sequential(*layers)
-
- def forward(self, x):
- return self.layers(x)
-
-
-##########
-# Building blocks for omni-scale feature learning
-##########
-class ChannelGate(nn.Module):
- """A mini-network that generates channel-wise gates conditioned on input tensor."""
-
- def __init__(
- self,
- in_channels,
- num_gates=None,
- return_gates=False,
- gate_activation='sigmoid',
- reduction=16,
- layer_norm=False
- ):
- super(ChannelGate, self).__init__()
- if num_gates is None:
- num_gates = in_channels
- self.return_gates = return_gates
- self.global_avgpool = nn.AdaptiveAvgPool2d(1)
- self.fc1 = nn.Conv2d(
- in_channels,
- in_channels // reduction,
- kernel_size=1,
- bias=True,
- padding=0
- )
- self.norm1 = None
- if layer_norm:
- self.norm1 = nn.LayerNorm((in_channels // reduction, 1, 1))
- self.relu = nn.ReLU()
- self.fc2 = nn.Conv2d(
- in_channels // reduction,
- num_gates,
- kernel_size=1,
- bias=True,
- padding=0
- )
- if gate_activation == 'sigmoid':
- self.gate_activation = nn.Sigmoid()
- elif gate_activation == 'relu':
- self.gate_activation = nn.ReLU()
- elif gate_activation == 'linear':
- self.gate_activation = None
- else:
- raise RuntimeError(
- "Unknown gate activation: {}".format(gate_activation)
- )
-
- def forward(self, x):
- input = x
- x = self.global_avgpool(x)
- x = self.fc1(x)
- if self.norm1 is not None:
- x = self.norm1(x)
- x = self.relu(x)
- x = self.fc2(x)
- if self.gate_activation is not None:
- x = self.gate_activation(x)
- if self.return_gates:
- return x
- return input * x
-
-
-class OSBlock(nn.Module):
- """Omni-scale feature learning block."""
-
- def __init__(self, in_channels, out_channels, reduction=4, T=4, **kwargs):
- super(OSBlock, self).__init__()
- assert T >= 1
- assert out_channels >= reduction and out_channels % reduction == 0
- mid_channels = out_channels // reduction
-
- self.conv1 = Conv1x1(in_channels, mid_channels)
- self.conv2 = nn.ModuleList()
- for t in range(1, T + 1):
- self.conv2 += [LightConvStream(mid_channels, mid_channels, t)]
- self.gate = ChannelGate(mid_channels)
- self.conv3 = Conv1x1Linear(mid_channels, out_channels)
- self.downsample = None
- if in_channels != out_channels:
- self.downsample = Conv1x1Linear(in_channels, out_channels)
-
- def forward(self, x):
- identity = x
- x1 = self.conv1(x)
- x2 = 0
- for conv2_t in self.conv2:
- x2_t = conv2_t(x1)
- x2 = x2 + self.gate(x2_t)
- x3 = self.conv3(x2)
- if self.downsample is not None:
- identity = self.downsample(identity)
- out = x3 + identity
- return F.relu(out)
-
-
-class OSBlockINin(nn.Module):
- """Omni-scale feature learning block with instance normalization."""
-
- def __init__(self, in_channels, out_channels, reduction=4, T=4, **kwargs):
- super(OSBlockINin, self).__init__()
- assert T >= 1
- assert out_channels >= reduction and out_channels % reduction == 0
- mid_channels = out_channels // reduction
-
- self.conv1 = Conv1x1(in_channels, mid_channels)
- self.conv2 = nn.ModuleList()
- for t in range(1, T + 1):
- self.conv2 += [LightConvStream(mid_channels, mid_channels, t)]
- self.gate = ChannelGate(mid_channels)
- self.conv3 = Conv1x1Linear(mid_channels, out_channels, bn=False)
- self.downsample = None
- if in_channels != out_channels:
- self.downsample = Conv1x1Linear(in_channels, out_channels)
- self.IN = nn.InstanceNorm2d(out_channels, affine=True)
-
- def forward(self, x):
- identity = x
- x1 = self.conv1(x)
- x2 = 0
- for conv2_t in self.conv2:
- x2_t = conv2_t(x1)
- x2 = x2 + self.gate(x2_t)
- x3 = self.conv3(x2)
- x3 = self.IN(x3) # IN inside residual
- if self.downsample is not None:
- identity = self.downsample(identity)
- out = x3 + identity
- return F.relu(out)
-
-
-##########
-# Network architecture
-##########
-class OSNet(nn.Module):
- """Omni-Scale Network.
-
- Reference:
- - Zhou et al. Omni-Scale Feature Learning for Person Re-Identification. ICCV, 2019.
- - Zhou et al. Learning Generalisable Omni-Scale Representations
- for Person Re-Identification. TPAMI, 2021.
- """
-
- def __init__(
- self,
- num_classes,
- blocks,
- layers,
- channels,
- feature_dim=512,
- loss='softmax',
- conv1_IN=False,
- **kwargs
- ):
- super(OSNet, self).__init__()
- num_blocks = len(blocks)
- assert num_blocks == len(layers)
- assert num_blocks == len(channels) - 1
- self.loss = loss
- self.feature_dim = feature_dim
-
- # convolutional backbone
- self.conv1 = ConvLayer(
- 3, channels[0], 7, stride=2, padding=3, IN=conv1_IN
- )
- self.maxpool = nn.MaxPool2d(3, stride=2, padding=1)
- self.conv2 = self._make_layer(
- blocks[0], layers[0], channels[0], channels[1]
- )
- self.pool2 = nn.Sequential(
- Conv1x1(channels[1], channels[1]), nn.AvgPool2d(2, stride=2)
- )
- self.conv3 = self._make_layer(
- blocks[1], layers[1], channels[1], channels[2]
- )
- self.pool3 = nn.Sequential(
- Conv1x1(channels[2], channels[2]), nn.AvgPool2d(2, stride=2)
- )
- self.conv4 = self._make_layer(
- blocks[2], layers[2], channels[2], channels[3]
- )
- self.conv5 = Conv1x1(channels[3], channels[3])
- self.global_avgpool = nn.AdaptiveAvgPool2d(1)
- # fully connected layer
- self.fc = self._construct_fc_layer(
- self.feature_dim, channels[3], dropout_p=None
- )
- # identity classification layer
- self.classifier = nn.Linear(self.feature_dim, num_classes)
-
- self._init_params()
-
- def _make_layer(self, blocks, layer, in_channels, out_channels):
- layers = []
- layers += [blocks[0](in_channels, out_channels)]
- for i in range(1, len(blocks)):
- layers += [blocks[i](out_channels, out_channels)]
- return nn.Sequential(*layers)
-
- def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None):
- if fc_dims is None or fc_dims < 0:
- self.feature_dim = input_dim
- return None
-
- if isinstance(fc_dims, int):
- fc_dims = [fc_dims]
-
- layers = []
- for dim in fc_dims:
- layers.append(nn.Linear(input_dim, dim))
- layers.append(nn.BatchNorm1d(dim))
- layers.append(nn.ReLU())
- if dropout_p is not None:
- layers.append(nn.Dropout(p=dropout_p))
- input_dim = dim
-
- self.feature_dim = fc_dims[-1]
-
- return nn.Sequential(*layers)
-
- def _init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(
- m.weight, mode='fan_out', nonlinearity='relu'
- )
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- elif isinstance(m, nn.BatchNorm1d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- elif isinstance(m, nn.InstanceNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def featuremaps(self, x):
- x = self.conv1(x)
- x = self.maxpool(x)
- x = self.conv2(x)
- x = self.pool2(x)
- x = self.conv3(x)
- x = self.pool3(x)
- x = self.conv4(x)
- x = self.conv5(x)
- return x
-
- def forward(self, x, return_featuremaps=False):
- x = self.featuremaps(x)
- if return_featuremaps:
- return x
- v = self.global_avgpool(x)
- v = v.view(v.size(0), -1)
- if self.fc is not None:
- v = self.fc(v)
- if not self.training:
- return v
- y = self.classifier(v)
- if self.loss == 'softmax':
- return y
- elif self.loss == 'triplet':
- return y, v
- else:
- raise KeyError("Unsupported loss: {}".format(self.loss))
-
-
-def init_pretrained_weights(model, key=''):
- """Initializes model with pretrained weights.
-
- Layers that don't match with pretrained layers in name or size are kept unchanged.
- """
- import os
- import errno
- import gdown
- from collections import OrderedDict
-
- def _get_torch_home():
- ENV_TORCH_HOME = 'TORCH_HOME'
- ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME'
- DEFAULT_CACHE_DIR = '~/.cache'
- torch_home = os.path.expanduser(
- os.getenv(
- ENV_TORCH_HOME,
- os.path.join(
- os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'torch'
- )
- )
- )
- return torch_home
-
- torch_home = _get_torch_home()
- model_dir = os.path.join(torch_home, 'checkpoints')
- try:
- os.makedirs(model_dir)
- except OSError as e:
- if e.errno == errno.EEXIST:
- # Directory already exists, ignore.
- pass
- else:
- # Unexpected OSError, re-raise.
- raise
- filename = key + '_imagenet.pth'
- cached_file = os.path.join(model_dir, filename)
-
- if not os.path.exists(cached_file):
- gdown.download(pretrained_urls[key], cached_file, quiet=False)
-
- state_dict = torch.load(cached_file)
- model_dict = model.state_dict()
- new_state_dict = OrderedDict()
- matched_layers, discarded_layers = [], []
-
- for k, v in state_dict.items():
- if k.startswith('module.'):
- k = k[7:] # discard module.
-
- if k in model_dict and model_dict[k].size() == v.size():
- new_state_dict[k] = v
- matched_layers.append(k)
- else:
- discarded_layers.append(k)
-
- model_dict.update(new_state_dict)
- model.load_state_dict(model_dict)
-
- if len(matched_layers) == 0:
- warnings.warn(
- 'The pretrained weights from "{}" cannot be loaded, '
- 'please check the key names manually '
- '(** ignored and continue **)'.format(cached_file)
- )
- else:
- print(
- 'Successfully loaded imagenet pretrained weights from "{}"'.
- format(cached_file)
- )
- if len(discarded_layers) > 0:
- print(
- '** The following layers are discarded '
- 'due to unmatched keys or layer size: {}'.
- format(discarded_layers)
- )
-
-
-##########
-# Instantiation
-##########
-def osnet_ain_x1_0(
- num_classes=1000, pretrained=True, loss='softmax', **kwargs
-):
- model = OSNet(
- num_classes,
- blocks=[
- [OSBlockINin, OSBlockINin], [OSBlock, OSBlockINin],
- [OSBlockINin, OSBlock]
- ],
- layers=[2, 2, 2],
- channels=[64, 256, 384, 512],
- loss=loss,
- conv1_IN=True,
- **kwargs
- )
- if pretrained:
- init_pretrained_weights(model, key='osnet_ain_x1_0')
- return model
-
-
-def osnet_ain_x0_75(
- num_classes=1000, pretrained=True, loss='softmax', **kwargs
-):
- model = OSNet(
- num_classes,
- blocks=[
- [OSBlockINin, OSBlockINin], [OSBlock, OSBlockINin],
- [OSBlockINin, OSBlock]
- ],
- layers=[2, 2, 2],
- channels=[48, 192, 288, 384],
- loss=loss,
- conv1_IN=True,
- **kwargs
- )
- if pretrained:
- init_pretrained_weights(model, key='osnet_ain_x0_75')
- return model
-
-
-def osnet_ain_x0_5(
- num_classes=1000, pretrained=True, loss='softmax', **kwargs
-):
- model = OSNet(
- num_classes,
- blocks=[
- [OSBlockINin, OSBlockINin], [OSBlock, OSBlockINin],
- [OSBlockINin, OSBlock]
- ],
- layers=[2, 2, 2],
- channels=[32, 128, 192, 256],
- loss=loss,
- conv1_IN=True,
- **kwargs
- )
- if pretrained:
- init_pretrained_weights(model, key='osnet_ain_x0_5')
- return model
-
-
-def osnet_ain_x0_25(
- num_classes=1000, pretrained=True, loss='softmax', **kwargs
-):
- model = OSNet(
- num_classes,
- blocks=[
- [OSBlockINin, OSBlockINin], [OSBlock, OSBlockINin],
- [OSBlockINin, OSBlock]
- ],
- layers=[2, 2, 2],
- channels=[16, 64, 96, 128],
- loss=loss,
- conv1_IN=True,
- **kwargs
- )
- if pretrained:
- init_pretrained_weights(model, key='osnet_ain_x0_25')
- return model
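Note (editor's sketch, not part of the deleted file above): building the OSNet-AIN backbone defined above and extracting re-identification features. pretrained=False avoids the gdown download performed by init_pretrained_weights; the class count and input resolution are illustrative only.

import torch

model = osnet_ain_x1_0(num_classes=751, pretrained=False, loss='softmax')
model.eval()                                     # in eval mode, forward() returns features, not logits
with torch.no_grad():
    feats = model(torch.randn(2, 3, 256, 128))   # batch of two 256x128 person crops
print(feats.shape)                               # torch.Size([2, 512]) with the default feature_dim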
diff --git a/spaces/bigscience/SourcingCatalog/catalogue/geography.py b/spaces/bigscience/SourcingCatalog/catalogue/geography.py
deleted file mode 100644
index 62e5b59a27c87190c4dcfbbe197041859d3a4214..0000000000000000000000000000000000000000
--- a/spaces/bigscience/SourcingCatalog/catalogue/geography.py
+++ /dev/null
@@ -1,151 +0,0 @@
-import json
-
-import folium
-import pandas as pd
-from folium import Marker
-from folium.plugins import MarkerCluster
-from jinja2 import Template
-
-regions, countries, region_tree = json.load(
- open("resources/country_regions.json", encoding="utf-8")
-)
-country_centers = json.load(
- open("resources/country_center_coordinates.json", encoding="utf-8")
-)
-country_mappings = json.load(open("resources/country_mappings.json", encoding="utf-8"))
-
-WORLD_GEO_URL = "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data/world-countries.json"
-
-ICON_CREATE_FUNCTIOM = """
- function(cluster) {
- var markers = cluster.getAllChildMarkers();
- var sum = 0;
- for (var i = 0; i < markers.length; i++) {
- sum += markers[i].options.props.resources;
- }
-
- return L.divIcon({
- html: '' + sum + '',
- className: 'marker-cluster marker-cluster-small',
- iconSize: new L.Point(20, 20)
- });
- }
-"""
-
-
-class MarkerWithProps(Marker):
- _template = Template(
- """
- {% macro script(this, kwargs) %}
- var {{this.get_name()}} = L.marker(
- [{{this.location[0]}}, {{this.location[1]}}],
- {
- icon: new L.Icon.Default(),
- {%- if this.draggable %}
- draggable: true,
- autoPan: true,
- {%- endif %}
- {%- if this.props %}
- props : {{ this.props }}
- {%- endif %}
- }
- )
- .addTo({{this._parent.get_name()}});
- {% endmacro %}
- """
- )
-
- def __init__(
- self, location, popup=None, tooltip=None, icon=None, draggable=False, props=None
- ):
- super(MarkerWithProps, self).__init__(
- location=location,
- popup=popup,
- tooltip=tooltip,
- icon=icon,
- draggable=draggable,
- )
- self.props = json.loads(json.dumps(props))
-
-
-def get_region_center(region_name):
- latitudes = []
- longitudes = []
- for name in region_tree[region_name]:
- if name in region_tree:
- region_latitudes, region_longitudes = get_region_center(name)
- latitudes += region_latitudes
- longitudes += region_longitudes
- elif name in country_centers or name in country_mappings["to_center"]:
- country_center = country_centers[
- country_mappings["to_center"].get(name, name)
- ]
- latitudes += [float(country_center["latitude"])]
- longitudes += [float(country_center["longitude"])]
- return latitudes, longitudes
-
-
-def get_region_countries(region_name):
- countries = []
- for name in region_tree[region_name]:
- if name in region_tree:
- countries += get_region_countries(name)
- else:
- countries += [name]
- return countries
-
-
-def make_choro_map(resource_counts, marker_thres=0):
- world_map = folium.Map(tiles="cartodbpositron", location=[0, 0], zoom_start=1.5)
- marker_cluster = MarkerCluster(icon_create_function=ICON_CREATE_FUNCTIOM)
- marker_cluster.add_to(world_map)
- for name, count in resource_counts.items():
- if name in country_centers or name in country_mappings["to_center"]:
- country_center = country_centers[
- country_mappings["to_center"].get(name, name)
- ]
- MarkerWithProps(
- location=[country_center["latitude"], country_center["longitude"]],
- popup=f"{'Region' if name in region_tree else 'Country'} : {name} \n Resources : {count} ",
- props={"name": name, "resources": count},
- ).add_to(marker_cluster)
- # put a pin at the center of the region
- elif name in region_tree:
- latitudes, longitudes = get_region_center(name)
- if len(latitudes) > 0:
- lat = sum(latitudes) / len(latitudes)
- lon = sum(longitudes) / len(longitudes)
- MarkerWithProps(
- location=[lat, lon],
- popup=f"{'Region' if name in region_tree else 'Country'} : {name} \n Resources : {count} ",
- props={"name": name, "resources": count},
- ).add_to(marker_cluster)
- # for choropleth, add counts to all countries in a region
- choropleth_counts = {}
- for loc_name in list(resource_counts.keys()):
- if loc_name in region_tree:
- for country_name in get_region_countries(loc_name):
- choropleth_counts[country_name] = (
- choropleth_counts.get(country_name, 0) + resource_counts[loc_name]
- )
- else:
- choropleth_counts[loc_name] = (
- choropleth_counts.get(loc_name, 0) + resource_counts[loc_name]
- )
- df_resource_counts = pd.DataFrame(
- [
- (country_mappings["to_outline"].get(n, n), c)
- for n, c in choropleth_counts.items()
- ],
- columns=["Name", "Resources"],
- )
- folium.Choropleth(
- geo_data=WORLD_GEO_URL,
- name="resource map",
- data=df_resource_counts,
- columns=["Name", "Resources"],
- key_on="feature.properties.name",
- fill_color="PuRd",
- nan_fill_color="white",
- ).add_to(world_map)
- return world_map
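Note (editor's sketch, not part of the deleted file above): make_choro_map expects a mapping from country or region names to resource counts and returns a folium.Map. The names below are assumed to match the spellings in the bundled resources/*.json files.

resource_counts = {"France": 12, "Kenya": 3, "Eastern Asia": 7}   # illustrative names and counts
world_map = make_choro_map(resource_counts)
world_map.save("resource_map.html")                               # open in a browser to inspect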
diff --git a/spaces/bioriAsaeru/text-to-voice/ELITE WARRIORS VIETNAM PC.md b/spaces/bioriAsaeru/text-to-voice/ELITE WARRIORS VIETNAM PC.md
deleted file mode 100644
index 7c85006a0d219d56d20e31a89104a4f78ac4bf52..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/ELITE WARRIORS VIETNAM PC.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
sog members have been actively involved in shooting the vietnam game. these include: sgt. raymond green (john plaster), lt. col. robert h. jackson (eddie d. schafer), sgt. 1st class joseph t. fanning (jim yarbrough), sgt. michael j. kiel (c. eugene hunter), lt. william e. swenson (bill mcdonald), 1st lt. john r. campbell (richard r. lee), lt. james e. austin (jim rugg), and sgt. 1st class robert g. davenport (dick "tiger" dunagan). on the other hand, during the course of production, sog founder jack o'hara sr. has passed away.
elite warriors: vietnam (hobbit) is an upcoming first person shooter developed by nfusion and published by electronic arts. the game is set in the vietnam war and will release on pc, ps4 and xbox one. elite warriors: vietnam pc game download link will be available on this page when the release of the game.
-
in elite warriors: vietnam, players take on the roles of both military and civilian agents as they work to complete various missions. these missions vary in their objectives and nature, which can lead to different strategies and tactics being used. as a military agent, players will need to use their knowledge of stealth, shooting, and combat techniques to complete their missions. they will also need to consider the effects of their actions on their teammates, as well as the enemy soldiers and their commanders.
-
in elite warriors: vietnam, players will be in charge of a squad of soldiers through a series of covert operations across the length and breadth of vietnam. missions will be set in the jungle, grassland, urban environments, the mountainous regions, and the shallows of the sea. players will be given several different roles to fill, including sergeant, spotter, recon, rpg, and engineer.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/English Gambler movies dubbed in tamil free download Learn from the masters of poker blackjack and roulette.md b/spaces/bioriAsaeru/text-to-voice/English Gambler movies dubbed in tamil free download Learn from the masters of poker blackjack and roulette.md
deleted file mode 100644
index 0a9858589ddb74ea15b8e6cabc1d3d0bb9c3b71c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/English Gambler movies dubbed in tamil free download Learn from the masters of poker blackjack and roulette.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-English Gambler movies dubbed in tamil free download
-
-20, Printer Drivers, Windows, Apple Mac and Linux. 21, Warranty, 1 ... 102, L1969A, 65, HP Scanjet 8300 Series ADF Roller Kit, 194.10, EOL. 103, L1982A, 65 ... 238, 42918920, C9600/C9650/C9800/C9850, Black Toner 15K, 457.40, 519.80. 1fdad05405
-
-
-
diff --git a/spaces/bla/tranny/App/UserTranscriptions/Schemas.py b/spaces/bla/tranny/App/UserTranscriptions/Schemas.py
deleted file mode 100644
index d0e5a294dc6071b9105003f9c99c3834e9b627f7..0000000000000000000000000000000000000000
--- a/spaces/bla/tranny/App/UserTranscriptions/Schemas.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from typing import Optional
-from pydantic import BaseModel
-
-
-class BaseRequest(BaseModel):
- userId: int
- taskId: str
- fileName: str
- youtubeLink: Optional[str]
- telegramId: Optional[str]
-
-
-class GetTranscriptions(BaseModel):
- userId: int
diff --git a/spaces/brjathu/HMR2.0/hmr2/utils/utils_detectron2.py b/spaces/brjathu/HMR2.0/hmr2/utils/utils_detectron2.py
deleted file mode 100644
index ee09c03c4ae0d75b61ef74e587421fcc0f9c9078..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/hmr2/utils/utils_detectron2.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import detectron2.data.transforms as T
-import torch
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import CfgNode, instantiate
-from detectron2.data import MetadataCatalog
-from omegaconf import OmegaConf
-
-
-class DefaultPredictor_Lazy:
- """Create a simple end-to-end predictor with the given config that runs on single device for a
- single input image.
-
- Compared to using the model directly, this class does the following additions:
-
- 1. Load checkpoint from the weights specified in config (cfg.MODEL.WEIGHTS).
- 2. Always take BGR image as the input and apply format conversion internally.
- 3. Apply resizing defined by the config (`cfg.INPUT.{MIN,MAX}_SIZE_TEST`).
- 4. Take one input image and produce a single output, instead of a batch.
-
- This is meant for simple demo purposes, so it does the above steps automatically.
- This is not meant for benchmarks or running complicated inference logic.
- If you'd like to do anything more complicated, please refer to its source code as
- examples to build and use the model manually.
-
- Attributes:
- metadata (Metadata): the metadata of the underlying dataset, obtained from
- test dataset name in the config.
-
-
- Examples:
- ::
-        pred = DefaultPredictor_Lazy(cfg)
- inputs = cv2.imread("input.jpg")
- outputs = pred(inputs)
- """
-
- def __init__(self, cfg):
- """
- Args:
- cfg: a yacs CfgNode or a omegaconf dict object.
- """
- if isinstance(cfg, CfgNode):
- self.cfg = cfg.clone() # cfg can be modified by model
- self.model = build_model(self.cfg) # noqa: F821
- if len(cfg.DATASETS.TEST):
- test_dataset = cfg.DATASETS.TEST[0]
-
- checkpointer = DetectionCheckpointer(self.model)
- checkpointer.load(cfg.MODEL.WEIGHTS)
-
- self.aug = T.ResizeShortestEdge(
- [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
- )
-
- self.input_format = cfg.INPUT.FORMAT
- else: # new LazyConfig
- self.cfg = cfg
- self.model = instantiate(cfg.model)
- test_dataset = OmegaConf.select(cfg, "dataloader.test.dataset.names", default=None)
- if isinstance(test_dataset, (list, tuple)):
- test_dataset = test_dataset[0]
-
- checkpointer = DetectionCheckpointer(self.model)
- checkpointer.load(OmegaConf.select(cfg, "train.init_checkpoint", default=""))
-
- mapper = instantiate(cfg.dataloader.test.mapper)
- self.aug = mapper.augmentations
- self.input_format = mapper.image_format
-
- self.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
- self.model.eval().to(self.device)
- if test_dataset:
- self.metadata = MetadataCatalog.get(test_dataset)
- assert self.input_format in ["RGB", "BGR"], self.input_format
-
- def __call__(self, original_image):
- """
- Args:
- original_image (np.ndarray): an image of shape (H, W, C) (in BGR order).
-
- Returns:
- predictions (dict):
- the output of the model for one image only.
- See :doc:`/tutorials/models` for details about the format.
- """
- with torch.no_grad():
- if self.input_format == "RGB":
- original_image = original_image[:, :, ::-1]
- height, width = original_image.shape[:2]
- image = self.aug(T.AugInput(original_image)).apply_image(original_image)
- image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
- inputs = {"image": image, "height": height, "width": width}
- predictions = self.model([inputs])[0]
- return predictions
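Note (editor's sketch, not part of the deleted file above): a usage example in the spirit of the docstring, here with a LazyConfig pulled from the detectron2 model zoo. The config path is illustrative; any LazyConfig exposing dataloader.test.mapper and train.init_checkpoint should behave the same way.

import cv2
from detectron2 import model_zoo

cfg = model_zoo.get_config("new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ.py", trained=True)
predictor = DefaultPredictor_Lazy(cfg)

image_bgr = cv2.imread("input.jpg")       # BGR image, as the predictor expects
outputs = predictor(image_bgr)
print(outputs["instances"].pred_boxes)    # detections for the single input image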
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/README.md
deleted file mode 100644
index 0eb44cc3b23beeb1755ab8d12002d26f13434235..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/datasets/README.md
+++ /dev/null
@@ -1,140 +0,0 @@
-# Use Builtin Datasets
-
-A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
-for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
-This document explains how to setup the builtin datasets so they can be used by the above APIs.
-[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
-and how to add new datasets to them.
-
-Detectron2 has builtin support for a few datasets.
-The datasets are assumed to exist in a directory specified by the environment variable
-`DETECTRON2_DATASETS`.
-Under this directory, detectron2 will look for datasets in the structure described below, if needed.
-```
-$DETECTRON2_DATASETS/
- coco/
- lvis/
- cityscapes/
- VOC20{07,12}/
-```
-
-You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
-If left unset, the default is `./datasets` relative to your current working directory.
-
-The [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md)
-contains configs and models that use these builtin datasets.
-
-## Expected dataset structure for [COCO instance/keypoint detection](https://cocodataset.org/#download):
-
-```
-coco/
- annotations/
- instances_{train,val}2017.json
- person_keypoints_{train,val}2017.json
- {train,val}2017/
- # image files that are mentioned in the corresponding json
-```
-
-You can use the 2014 version of the dataset as well.
-
-Some of the builtin tests (`dev/run_*_tests.sh`) uses a tiny version of the COCO dataset,
-which you can download with `./datasets/prepare_for_tests.sh`.
-
-## Expected dataset structure for PanopticFPN:
-
-Extract panoptic annotations from [COCO website](https://cocodataset.org/#download)
-into the following structure:
-```
-coco/
- annotations/
- panoptic_{train,val}2017.json
- panoptic_{train,val}2017/ # png annotations
- panoptic_stuff_{train,val}2017/ # generated by the script mentioned below
-```
-
-Install panopticapi by:
-```
-pip install git+https://github.com/cocodataset/panopticapi.git
-```
-Then, run `python datasets/prepare_panoptic_fpn.py`, to extract semantic annotations from panoptic annotations.
-
-## Expected dataset structure for [LVIS instance segmentation](https://www.lvisdataset.org/dataset):
-```
-coco/
- {train,val,test}2017/
-lvis/
- lvis_v0.5_{train,val}.json
- lvis_v0.5_image_info_test.json
- lvis_v1_{train,val}.json
- lvis_v1_image_info_test{,_challenge}.json
-```
-
-Install lvis-api by:
-```
-pip install git+https://github.com/lvis-dataset/lvis-api.git
-```
-
-To evaluate models trained on the COCO dataset using LVIS annotations,
-run `python datasets/prepare_cocofied_lvis.py` to prepare "cocofied" LVIS annotations.
-
-## Expected dataset structure for [cityscapes](https://www.cityscapes-dataset.com/downloads/):
-```
-cityscapes/
- gtFine/
- train/
- aachen/
- color.png, instanceIds.png, labelIds.png, polygons.json,
- labelTrainIds.png
- ...
- val/
- test/
- # below are generated Cityscapes panoptic annotation
- cityscapes_panoptic_train.json
- cityscapes_panoptic_train/
- cityscapes_panoptic_val.json
- cityscapes_panoptic_val/
- cityscapes_panoptic_test.json
- cityscapes_panoptic_test/
- leftImg8bit/
- train/
- val/
- test/
-```
-Install cityscapes scripts by:
-```
-pip install git+https://github.com/mcordts/cityscapesScripts.git
-```
-
-Note: to create labelTrainIds.png, first prepare the above structure, then run cityscapesescript with:
-```
-CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createTrainIdLabelImgs.py
-```
-These files are not needed for instance segmentation.
-
-Note: to generate Cityscapes panoptic dataset, run cityscapesescript with:
-```
-CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createPanopticImgs.py
-```
-These files are not needed for semantic and instance segmentation.
-
-## Expected dataset structure for [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html):
-```
-VOC20{07,12}/
- Annotations/
- ImageSets/
- Main/
- trainval.txt
- test.txt
- # train.txt or val.txt, if you use these splits
- JPEGImages/
-```
-
-## Expected dataset structure for [ADE20k Scene Parsing](http://sceneparsing.csail.mit.edu/):
-```
-ADEChallengeData2016/
- annotations/
- annotations_detectron2/
- images/
- objectInfo150.txt
-```
-The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`.
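Note (editor's sketch, not part of the deleted README above): once the directory layout is in place, the builtin splits resolve through detectron2's catalogs; the dataset root below is a placeholder path.

import os
os.environ["DETECTRON2_DATASETS"] = "/path/to/datasets"   # set before the builtin registration runs on import

from detectron2.data import DatasetCatalog, MetadataCatalog

dataset_dicts = DatasetCatalog.get("coco_2017_val")       # list of per-image annotation dicts
metadata = MetadataCatalog.get("coco_2017_val")
print(len(dataset_dicts), metadata.thing_classes[:5])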
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/segm.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/segm.py
deleted file mode 100644
index 1962b886e1946fa4896776da8a007ae0a9a4fab3..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/segm.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from typing import Any, List
-import torch
-from torch.nn import functional as F
-
-from detectron2.config import CfgNode
-from detectron2.structures import Instances
-
-from .utils import resample_data
-
-
-class SegmentationLoss:
- """
- Segmentation loss as cross-entropy for raw unnormalized scores given ground truth
- labels. Segmentation ground truth labels are defined for the bounding box of
- interest at some fixed resolution [S, S], where
- S = MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE.
- """
-
- def __init__(self, cfg: CfgNode):
- """
- Initialize segmentation loss from configuration options
-
- Args:
- cfg (CfgNode): configuration options
- """
- self.heatmap_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE
- self.n_segm_chan = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS
-
- def __call__(
- self,
- proposals_with_gt: List[Instances],
- densepose_predictor_outputs: Any,
- packed_annotations: Any,
- ) -> torch.Tensor:
- """
- Compute segmentation loss as cross-entropy on aligned segmentation
- ground truth and estimated scores.
-
- Args:
- proposals_with_gt (list of Instances): detections with associated ground truth data
- densepose_predictor_outputs: an object of a dataclass that contains predictor outputs
- with estimated values; assumed to have the following attributes:
- * coarse_segm - coarse segmentation estimates, tensor of shape [N, D, S, S]
- packed_annotations: packed annotations for efficient loss computation;
- the following attributes are used:
- - coarse_segm_gt
- - bbox_xywh_gt
- - bbox_xywh_est
- """
- if packed_annotations.coarse_segm_gt is None:
- return self.fake_value(densepose_predictor_outputs)
- coarse_segm_est = densepose_predictor_outputs.coarse_segm[packed_annotations.bbox_indices]
- with torch.no_grad():
- coarse_segm_gt = resample_data(
- packed_annotations.coarse_segm_gt.unsqueeze(1),
- packed_annotations.bbox_xywh_gt,
- packed_annotations.bbox_xywh_est,
- self.heatmap_size,
- self.heatmap_size,
- mode="nearest",
- padding_mode="zeros",
- ).squeeze(1)
- if self.n_segm_chan == 2:
- coarse_segm_gt = coarse_segm_gt > 0
- return F.cross_entropy(coarse_segm_est, coarse_segm_gt.long())
-
- def fake_value(self, densepose_predictor_outputs: Any) -> torch.Tensor:
- """
- Fake segmentation loss used when no suitable ground truth data
- was found in a batch. The loss has a value 0 and is primarily used to
- construct the computation graph, so that `DistributedDataParallel`
- has similar graphs on all GPUs and can perform reduction properly.
-
- Args:
- densepose_predictor_outputs: DensePose predictor outputs, an object
- of a dataclass that is assumed to have `coarse_segm`
- attribute
- Return:
- Zero value loss with proper computation graph
- """
- return densepose_predictor_outputs.coarse_segm.sum() * 0
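Note (editor's sketch, not part of the deleted file above): the core operation the loss reduces to once the ground-truth masks have been resampled to the S x S heatmap grid; the shapes and channel count below are illustrative (the DensePose default HEATMAP_SIZE is 112).

import torch
import torch.nn.functional as F

N, D, S = 4, 2, 112                               # boxes, coarse segmentation channels, heatmap size
coarse_segm_est = torch.randn(N, D, S, S)         # raw unnormalized scores from the predictor
coarse_segm_gt = torch.randint(0, D, (N, S, S))   # per-pixel labels after resample_data
loss = F.cross_entropy(coarse_segm_est, coarse_segm_gt.long())
print(loss.item())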
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_zoo.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_zoo.py
deleted file mode 100644
index e3360a74864e0c00ed92ffbc8531c8d36e8be379..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_zoo.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-
-from detectron2 import model_zoo
-from detectron2.config import instantiate
-from detectron2.modeling import FPN, GeneralizedRCNN
-
-logger = logging.getLogger(__name__)
-
-
-class TestModelZoo(unittest.TestCase):
- def test_get_returns_model(self):
- model = model_zoo.get("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml", trained=False)
- self.assertIsInstance(model, GeneralizedRCNN)
- self.assertIsInstance(model.backbone, FPN)
-
- def test_get_invalid_model(self):
- self.assertRaises(RuntimeError, model_zoo.get, "Invalid/config.yaml")
-
- def test_get_url(self):
- url = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml")
- self.assertEqual(
- url,
- "https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn/138602908/model_final_01ca85.pkl", # noqa
- )
- url2 = model_zoo.get_checkpoint_url("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.py")
- self.assertEqual(url, url2)
-
- def _build_lazy_model(self, name):
- cfg = model_zoo.get_config("common/models/" + name)
- instantiate(cfg.model)
-
- def test_mask_rcnn_fpn(self):
- self._build_lazy_model("mask_rcnn_fpn.py")
-
- def test_mask_rcnn_c4(self):
- self._build_lazy_model("mask_rcnn_c4.py")
-
- def test_panoptic_fpn(self):
- self._build_lazy_model("panoptic_fpn.py")
-
- def test_schedule(self):
- cfg = model_zoo.get_config("common/coco_schedule.py")
- for _, v in cfg.items():
- instantiate(v)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/carlgira/dreambooth-image-editor/README.md b/spaces/carlgira/dreambooth-image-editor/README.md
deleted file mode 100644
index 536d53df33f6fceaffc99e83e7a0baffdcd7c9f9..0000000000000000000000000000000000000000
--- a/spaces/carlgira/dreambooth-image-editor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dreambooth Image Editor
-emoji: ⚡
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_b_in21k_100ep.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
deleted file mode 100644
index 9dba203086f8b34221ea9bed9f5fc280579f97df..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from functools import partial
-import torch.nn as nn
-from fvcore.common.param_scheduler import MultiStepParamScheduler
-
-from detectron2 import model_zoo
-from detectron2.config import LazyCall as L
-from detectron2.solver import WarmupParamScheduler
-from detectron2.modeling import MViT
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.matcher import Matcher
-from detectron2.modeling.roi_heads import (
- FastRCNNOutputLayers,
- FastRCNNConvFCHead,
- CascadeROIHeads,
-)
-
-from ..common.coco_loader_lsj import dataloader
-
-model = model_zoo.get_config("common/models/mask_rcnn_fpn.py").model
-constants = model_zoo.get_config("common/data/constants.py").constants
-model.pixel_mean = constants.imagenet_rgb256_mean
-model.pixel_std = constants.imagenet_rgb256_std
-model.input_format = "RGB"
-model.backbone.bottom_up = L(MViT)(
- embed_dim=96,
- depth=24,
- num_heads=1,
- last_block_indexes=(1, 4, 20, 23),
- residual_pooling=True,
- drop_path_rate=0.4,
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- out_features=("scale2", "scale3", "scale4", "scale5"),
-)
-model.backbone.in_features = "${.bottom_up.out_features}"
-model.backbone.square_pad = 1024
-
-# New heads and LN
-model.backbone.norm = "LN" # Use LN in FPN
-model.roi_heads.box_head.conv_norm = model.roi_heads.mask_head.conv_norm = "LN"
-
-# 2conv in RPN:
-model.proposal_generator.head.conv_dims = [-1, -1]
-
-# arguments that don't exist for Cascade R-CNN
-[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]]
-model.roi_heads.update(
- _target_=CascadeROIHeads,
- box_heads=[
- L(FastRCNNConvFCHead)(
- input_shape=ShapeSpec(channels=256, height=7, width=7),
- conv_dims=[256, 256, 256, 256],
- fc_dims=[1024],
- conv_norm="LN",
- )
- for _ in range(3)
- ],
- box_predictors=[
- L(FastRCNNOutputLayers)(
- input_shape=ShapeSpec(channels=1024),
- test_score_thresh=0.05,
- box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)),
- cls_agnostic_bbox_reg=True,
- num_classes="${...num_classes}",
- )
- for (w1, w2) in [(10, 5), (20, 10), (30, 15)]
- ],
- proposal_matchers=[
- L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False)
- for th in [0.5, 0.6, 0.7]
- ],
-)
-
-# Initialization and trainer settings
-train = model_zoo.get_config("common/train.py").train
-train.amp.enabled = True
-train.ddp.fp16_compression = True
-train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_B_in21k.pyth"
-
-# Schedule
-# 100 ep = 184375 iters * 64 images/iter / 118000 images/ep
-train.max_iter = 184375
-lr_multiplier = L(WarmupParamScheduler)(
- scheduler=L(MultiStepParamScheduler)(
- values=[1.0, 0.1, 0.01],
- milestones=[163889, 177546],
- num_updates=train.max_iter,
- ),
- warmup_length=250 / train.max_iter,
- warmup_factor=0.001,
-)
-
-optimizer = model_zoo.get_config("common/optim.py").AdamW
-optimizer.params.overrides = {"pos_embed": {"weight_decay": 0.0}}
-optimizer.lr = 8e-5
diff --git a/spaces/cbr/swp/assets/pretrained_models/readme.md b/spaces/cbr/swp/assets/pretrained_models/readme.md
deleted file mode 100644
index fd26cd784fbfa3af2cebfb6190b0aa55c92b85e5..0000000000000000000000000000000000000000
--- a/spaces/cbr/swp/assets/pretrained_models/readme.md
+++ /dev/null
@@ -1,4 +0,0 @@
-## Download these models here
-- [inswapper_128.onnx](https://huggingface.co/deepinsight/inswapper/resolve/main/inswapper_128.onnx)
-- [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth)
-- [79999_iter.pth](https://drive.google.com/open?id=154JgKpzCPW82qINcVieuPH3fZ2e0P812)
diff --git a/spaces/ccarr0807/HuggingGPT/README.md b/spaces/ccarr0807/HuggingGPT/README.md
deleted file mode 100644
index 690070e7f9273ace867e469136c052c2a6d198bb..0000000000000000000000000000000000000000
--- a/spaces/ccarr0807/HuggingGPT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: HuggingGPT
-emoji: 😻
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-duplicated_from: microsoft/HuggingGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/commons.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/commons.py
deleted file mode 100644
index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000
--- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/commons.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- try:
- ret[i] = x[i, :, idx_str:idx_end]
- except RuntimeError:
- print("?")
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
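The alignment utilities above are easiest to see on a toy input. Below is a hedged, self-contained sketch (a simplified 2-D variant written for illustration, not the functions from this file) of how cumulative token durations become the hard monotonic path that `generate_path` constructs:

```python
import torch
import torch.nn.functional as F

# Token durations [2, 1, 3] over 6 output frames: token 0 covers frames 0-1,
# token 1 covers frame 2, token 2 covers frames 3-5 (batch/channel dims dropped).
duration = torch.tensor([2, 1, 3])
t_y = int(duration.sum())
cum = torch.cumsum(duration, dim=0)                                  # [2, 3, 6]
path = (torch.arange(t_y).unsqueeze(0) < cum.unsqueeze(1)).float()   # [t_x, t_y]
path = path - F.pad(path, (0, 0, 1, 0))[:-1]                         # keep each token's own segment
print(path)
# tensor([[1., 1., 0., 0., 0., 0.],
#         [0., 0., 1., 0., 0., 0.],
#         [0., 0., 0., 1., 1., 1.]])
```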
diff --git a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/style_classification.py b/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/style_classification.py
deleted file mode 100644
index 99f3e8014fa984c71529b4998f58d61b7e3a8f4a..0000000000000000000000000000000000000000
--- a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/src/style_classification.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# ###########################################################################
-#
-# CLOUDERA APPLIED MACHINE LEARNING PROTOTYPE (AMP)
-# (C) Cloudera, Inc. 2022
-# All rights reserved.
-#
-# Applicable Open Source License: Apache 2.0
-#
-# NOTE: Cloudera open source products are modular software products
-# made up of hundreds of individual components, each of which was
-# individually copyrighted. Each Cloudera open source product is a
-# collective work under U.S. Copyright Law. Your license to use the
-# collective work is as provided in your written agreement with
-# Cloudera. Used apart from the collective work, this file is
-# licensed for your use pursuant to the open source license
-# identified above.
-#
-# This code is provided to you pursuant a written agreement with
-# (i) Cloudera, Inc. or (ii) a third-party authorized to distribute
-# this code. If you do not have a written agreement with Cloudera nor
-# with an authorized and properly licensed third party, you do not
-# have any rights to access nor to use this code.
-#
-# Absent a written agreement with Cloudera, Inc. (“Cloudera”) to the
-# contrary, A) CLOUDERA PROVIDES THIS CODE TO YOU WITHOUT WARRANTIES OF ANY
-# KIND; (B) CLOUDERA DISCLAIMS ANY AND ALL EXPRESS AND IMPLIED
-# WARRANTIES WITH RESPECT TO THIS CODE, INCLUDING BUT NOT LIMITED TO
-# IMPLIED WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY AND
-# FITNESS FOR A PARTICULAR PURPOSE; (C) CLOUDERA IS NOT LIABLE TO YOU,
-# AND WILL NOT DEFEND, INDEMNIFY, NOR HOLD YOU HARMLESS FOR ANY CLAIMS
-# ARISING FROM OR RELATED TO THE CODE; AND (D)WITH RESPECT TO YOUR EXERCISE
-# OF ANY RIGHTS GRANTED TO YOU FOR THE CODE, CLOUDERA IS NOT LIABLE FOR ANY
-# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, PUNITIVE OR
-# CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO, DAMAGES
-# RELATED TO LOST REVENUE, LOST PROFITS, LOSS OF INCOME, LOSS OF
-# BUSINESS ADVANTAGE OR UNAVAILABILITY, OR LOSS OR CORRUPTION OF
-# DATA.
-#
-# ###########################################################################
-
-from typing import List, Union
-
-import torch
-import numpy as np
-from pyemd import emd
-from transformers import pipeline
-
-
-class StyleIntensityClassifier:
- """
- Utility for classifying style and calculating Style Transfer Intensity between
- two pieces of text (i.e. input and output of TST model).
-
- This custom evaluation metric aims to quantify the magnitude of transferred
- style between two texts. To accomplish this, we pass input and output texts
- through a trained style classifier to produce two distributions. We then
-    utilize Earth Mover's Distance (EMD) to calculate the minimum "cost"/"work"
- required to turn the input distribution into the output distribution. This
- metric allows us to capture a more nuanced, per-example measure of style
- transfer when compared to simply aggregating binary classifications over
- records in a dataset.
-
- Attributes:
- model_identifier (str)
-
- """
-
- def __init__(self, model_identifier: str):
- self.model_identifier = model_identifier
- self.device = torch.cuda.current_device() if torch.cuda.is_available() else -1
- self._build_pipeline()
-
- def _build_pipeline(self):
-
- self.pipeline = pipeline(
- task="text-classification",
- model=self.model_identifier,
- device=self.device,
- return_all_scores=True,
- )
-
- def score(self, input_text: Union[str, List[str]]):
- """
- Classify a given input text using the model initialized by the class.
-
- Args:
- input_text (`str` or `List[str]`) - Input text for classification
-
- Returns:
- classification (dict) - a dictionary containing the label, score, and
- distribution between classes
-
- """
-        if isinstance(input_text, str):
-            input_text = [input_text]
-
- result = self.pipeline(input_text)
- distributions = np.array(
- [[label["score"] for label in item] for item in result]
- )
- return [
- {
- "label": self.pipeline.model.config.id2label[scores.argmax()],
- "score": round(scores.max(), 4),
- "distribution": scores.tolist(),
- }
- for scores in distributions
- ]
-
- def calculate_transfer_intensity(
- self, input_text: List[str], output_text: List[str], target_class_idx: int = 1
- ) -> List[float]:
- """
-        Calculates the style transfer intensity (STI) between two pieces of text.
-
- Args:
-            input_text (list) - list of input texts with indices corresponding
-                to counterpart in output_text
-            output_text (list) - list of output texts with indices corresponding
-                to counterpart in input_text
- target_class_idx (int) - index of the target style class used for directional
- score correction
-
- Returns:
- A list of floats with corresponding style transfer intensity scores.
-
- """
-
- if len(input_text) != len(output_text):
- raise ValueError(
- "input_text and output_text must be of same length with corresponding items"
- )
-
- input_dist = [item["distribution"] for item in self.score(input_text)]
- output_dist = [item["distribution"] for item in self.score(output_text)]
-
- return [
- self.calculate_emd(input_dist[i], output_dist[i], target_class_idx)
- for i in range(len(input_dist))
- ]
-
- def calculate_transfer_intensity_fraction(
- self, input_text: List[str], output_text: List[str], target_class_idx: int = 1
- ) -> List[float]:
- """
-        Calculates the style transfer intensity (STI) _fraction_ between two pieces of text.
-        See `calculate_sti_fraction()` for details.
-
- Args:
-            input_text (list) - list of input texts with indices corresponding
-                to counterpart in output_text
-            output_text (list) - list of output texts with indices corresponding
-                to counterpart in input_text
- target_class_idx (int) - index of the target style class used for directional
- score correction
-
- Returns:
- A list of floats with corresponding style transfer intensity scores.
-
- """
-
- if len(input_text) != len(output_text):
- raise ValueError(
- "input_text and output_text must be of same length with corresponding items"
- )
-
- input_dist = [item["distribution"] for item in self.score(input_text)]
- output_dist = [item["distribution"] for item in self.score(output_text)]
-
- return [
- self.calculate_sti_fraction(
- input_dist[i],
- output_dist[i],
- ideal_dist=[0.0, 1.0],
- target_class_idx=target_class_idx,
- )
- for i in range(len(input_dist))
- ]
-
- def calculate_sti_fraction(
- self, input_dist, output_dist, ideal_dist=[0.0, 1.0], target_class_idx=1
- ):
- """
- Calculate the direction-corrected style transfer intensity fraction between
- two style distributions of equal length.
-
-        If output_dist moves closer towards the target style class, the metric represents the percentage of
- the possible _target_ style distribution that was captured during the transfer. If output_dist
- moves further from the target style class, the metric represents the percentage of the possible
- _source_ style distribution that was captured.
-
- Args:
- input_dist (list) - probabilities assigned to the style classes
- from the input text to style transfer model
- output_dist (list) - probabilities assigned to the style classes
-                from the output text of the style transfer model
-            ideal_dist (list, optional): The maximum possible distribution. Defaults to [0.0, 1.0].
- target_class_idx (int, optional)
-
- Returns:
- sti_fraction (float)
- """
-
- sti = self.calculate_emd(input_dist, output_dist, target_class_idx)
-
- if sti > 0:
- potential = self.calculate_emd(input_dist, ideal_dist, target_class_idx)
- else:
- potential = self.calculate_emd(
- input_dist, ideal_dist[::-1], target_class_idx
- )
-
- return sti / potential
-
- @staticmethod
- def calculate_emd(input_dist, output_dist, target_class_idx):
- """
- Calculate the direction-corrected Earth Mover's Distance (aka Wasserstein distance)
- between two distributions of equal length. Here we penalize the EMD score if
- the output text style moved further away from the target style.
-
- Reference: https://github.com/passeul/style-transfer-model-evaluation/blob/master/code/style_transfer_intensity.py
-
- Args:
- input_dist (list) - probabilities assigned to the style classes
- from the input text to style transfer model
- output_dist (list) - probabilities assigned to the style classes
-                from the output text of the style transfer model
-
- Returns:
-            emd (float) - Earth Mover's Distance between the two distributions
-
- """
-
- N = len(input_dist)
- distance_matrix = np.ones((N, N))
- dist = emd(np.array(input_dist), np.array(output_dist), distance_matrix)
-
- transfer_direction_correction = (
- 1 if output_dist[target_class_idx] >= input_dist[target_class_idx] else -1
- )
-
- return round(dist * transfer_direction_correction, 4)
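To make the direction correction concrete, here is a minimal sketch (assuming a binary style classifier; it mirrors the sign logic of `calculate_emd` above but replaces the pyemd call with the simple two-class difference, so it is an illustration rather than the implementation):

```python
# Direction-corrected intensity for two-class distributions: the magnitude is the
# change in target-class probability, and the sign records whether the output
# moved toward (+) or away from (-) the target style.
def sti_sketch(input_dist, output_dist, target_class_idx=1):
    delta = abs(output_dist[target_class_idx] - input_dist[target_class_idx])
    sign = 1 if output_dist[target_class_idx] >= input_dist[target_class_idx] else -1
    return round(sign * delta, 4)

print(sti_sketch([0.9, 0.1], [0.2, 0.8]))  # 0.7  -> moved toward the target style
print(sti_sketch([0.2, 0.8], [0.9, 0.1]))  # -0.7 -> moved away from it
```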
diff --git a/spaces/chansung/LLM-As-Chatbot/models/airoboros.py b/spaces/chansung/LLM-As-Chatbot/models/airoboros.py
deleted file mode 100644
index ce8e731470670daa3c423adfe6097932037d5b2c..0000000000000000000000000000000000000000
--- a/spaces/chansung/LLM-As-Chatbot/models/airoboros.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer
-from optimum.bettertransformer import BetterTransformer
-
-def load_model(
- base,
- finetuned,
- mode_cpu,
- mode_mps,
- mode_full_gpu,
- mode_8bit,
- mode_4bit,
- force_download_ckpt
-):
- tokenizer = AutoTokenizer.from_pretrained(base)
-
- if mode_cpu:
- print("cpu mode")
- model = AutoModelForCausalLM.from_pretrained(
- base,
- device_map={"": "cpu"},
- use_safetensors=False
- # low_cpu_mem_usage=True
- )
- elif mode_mps:
- print("mps mode")
- model = AutoModelForCausalLM.from_pretrained(
- base,
- device_map={"": "mps"},
- torch_dtype=torch.float16,
- use_safetensors=False
- )
- else:
- print("gpu mode")
- print(f"8bit = {mode_8bit}, 4bit = {mode_4bit}")
- model = AutoModelForCausalLM.from_pretrained(
- base,
- torch_dtype=torch.float16,
- load_in_8bit=mode_8bit,
- load_in_4bit=mode_4bit,
- device_map="auto",
- use_safetensors=False
- )
-
- if not mode_8bit and not mode_4bit:
- model.half()
-
- model = BetterTransformer.transform(model)
- return model, tokenizer
\ No newline at end of file
diff --git a/spaces/chansung/hf-inference-endpoint/app.py b/spaces/chansung/hf-inference-endpoint/app.py
deleted file mode 100644
index 8798e931269e3169a58d4f5bc37caa6b71645adc..0000000000000000000000000000000000000000
--- a/spaces/chansung/hf-inference-endpoint/app.py
+++ /dev/null
@@ -1,456 +0,0 @@
-import time
-import json
-import requests
-
-import gradio as gr
-
-STYLE = """
-.no-border {
- border: none !important;
-}
-
-.group-border {
- padding: 10px;
- border-width: 1px;
- border-radius: 10px;
- border-color: gray;
- border-style: solid;
- box-shadow: 1px 1px 3px;
-}
-.control-label-font {
- font-size: 13pt !important;
-}
-.control-button {
- background: none !important;
- border-color: #69ade2 !important;
- border-width: 2px !important;
- color: #69ade2 !important;
-}
-.center {
- text-align: center;
-}
-.right {
- text-align: right;
-}
-.no-label {
- padding: 0px !important;
-}
-.no-label > label > span {
- display: none;
-}
-.small-big {
- font-size: 12pt !important;
-}
-
-"""
-
-def avaliable_providers():
- providers = []
-
- headers = {
- "Content-Type": "application/json",
- }
- endpoint_url = "https://api.endpoints.huggingface.cloud/v2/provider"
- response = requests.get(endpoint_url, headers=headers)
-
- providers = {}
-
- for provider in response.json()['vendors']:
- if provider['status'] == 'available':
- regions = {}
-
- availability = False
- for region in provider['regions']:
- if region["status"] == "available":
- regions[region['name']] = {
- "label": region['label'],
- "computes": region['computes']
- }
- availability = True
-
- if availability:
- providers[provider['name']] = regions
-
- return providers
-
-providers = avaliable_providers()
-
-def update_regions(provider):
- avalialbe_regions = []
- regions = providers[provider]
-
- for region, attributes in regions.items():
- avalialbe_regions.append(f"{region}[{attributes['label']}]")
-
- return gr.Dropdown.update(
- choices=avalialbe_regions,
- value=avalialbe_regions[0] if len(avalialbe_regions) > 0 else None
- )
-
-def update_compute_options(provider, region):
- avalialbe_compute_options = []
- computes = providers[provider][region.split("[")[0].strip()]["computes"]
-
- for compute in computes:
- if compute['status'] == 'available':
- accelerator = compute['accelerator']
- numAccelerators = compute['numAccelerators']
- memoryGb = compute['memoryGb']
- architecture = compute['architecture']
- instanceType = compute['instanceType']
- pricePerHour = compute['pricePerHour']
-
- type = f"{numAccelerators}vCPU {memoryGb} · {architecture}" if accelerator == "cpu" else f"{numAccelerators}x {architecture}"
-
- avalialbe_compute_options.append(
- f"{compute['accelerator'].upper()} [{compute['instanceSize']}] · {type} · {instanceType} · ${pricePerHour}/hour"
- )
-
- return gr.Dropdown.update(
- choices=avalialbe_compute_options,
- value=avalialbe_compute_options[0] if len(avalialbe_compute_options) > 0 else None
- )
-
-def submit(
- hf_account_input,
- hf_token_input,
- endpoint_name_input,
- provider_selector,
- region_selector,
- repository_selector,
- task_selector,
- framework_selector,
- compute_selector,
- min_node_selector,
- max_node_selector,
- security_selector,
- custom_kernel,
- max_input_length,
- max_tokens,
- max_batch_prefill_token,
- max_batch_total_token
-):
- compute_resources = compute_selector.split("·")
- accelerator = compute_resources[0][:3].strip()
-
- size_l_index = compute_resources[0].index("[") - 1
- size_r_index = compute_resources[0].index("]")
- size = compute_resources[0][size_l_index : size_r_index].strip()
-
- type = compute_resources[-2].strip()
-
- payload = {
- "accountId": hf_account_input.strip(),
- "compute": {
- "accelerator": accelerator.lower(),
- "instanceSize": size[1:],
- "instanceType": type,
- "scaling": {
- "maxReplica": int(max_node_selector),
- "minReplica": int(min_node_selector)
- }
- },
- "model": {
- "framework": framework_selector.lower(),
- "image": {
- "custom": {
- "health_route": "/health",
- "env": {
-                        "DISABLE_CUSTOM_KERNELS": "false" if custom_kernel == "Enabled" else "true",  # disable only when the user turns kernels off
- "MAX_BATCH_PREFILL_TOKENS": str(max_batch_prefill_token),
- "MAX_BATCH_TOTAL_TOKENS": str(max_batch_total_token),
- "MAX_INPUT_LENGTH": str(max_input_length),
- "MAX_TOTAL_TOKENS": str(max_tokens),
- "MODEL_ID": repository_selector.lower(),
- # QUANTIZE: 'bitsandbytes' | 'gptq';
- },
- "url": "ghcr.io/huggingface/text-generation-inference:1.0.1",
- }
- },
- "repository": repository_selector.lower(),
- # "revision": "main",
- "task": task_selector.lower()
- },
- "name": endpoint_name_input.strip().lower(),
- "provider": {
- "region": region_selector.split("[")[0].lower(),
- "vendor": provider_selector.lower()
- },
- "type": security_selector.lower()
- }
-
- print(payload)
-
- payload = json.dumps(payload)
- print(payload)
-
- headers = {
- "Authorization": f"Bearer {hf_token_input.strip()}",
- "Content-Type": "application/json",
- }
- endpoint_url = f"https://api.endpoints.huggingface.cloud/v2/endpoint/"#{hf_account_input.strip()}"
- print(endpoint_url)
-
- response = requests.post(endpoint_url, headers=headers, data=payload)
-
- if response.status_code == 400:
- return f"{response.text}. Malformed data in {payload}"
- elif response.status_code == 401:
- return "Invalid token"
- elif response.status_code == 409:
- return f"Endpoint {endpoint_name_input} already exists"
- elif response.status_code == 202:
- return f"Endpoint {endpoint_name_input} created successfully on {provider_selector.lower()} using {repository_selector.lower()}@main.\nPlease check out the progress at https://ui.endpoints.huggingface.co/endpoints."
- else:
- return f"something went wrong {response.status_code} = {response.text}"
-
-with gr.Blocks(css=STYLE) as hf_endpoint:
- with gr.Tab("Hugging Face", elem_classes=["no-border"]):
- gr.Markdown("# Deploy LLM on 🤗 Hugging Face Inference Endpoint", elem_classes=["center"])
-
- with gr.Column(elem_classes=["group-border"]):
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Hugging Face account ID (name)""")
- hf_account_input = gr.Textbox(show_label=False, elem_classes=["no-label", "small-big"])
-
- with gr.Column():
- gr.Markdown("### Hugging Face access token")
- hf_token_input = gr.Textbox(show_label=False, type="password", elem_classes=["no-label", "small-big"])
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Target model
-
-Model from the Hugging Face hub""")
- repository_selector = gr.Textbox(
- value="NousResearch/Nous-Hermes-Llama2-13b",
- interactive=False,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Target model version(branch)
-
-Branch name of the Model""")
- revision_selector = gr.Textbox(
- value=f"main",
- interactive=False,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column(elem_classes=["group-border"]):
- with gr.Column():
- gr.Markdown("""### Endpoint name
-
-Name for your new endpoint""")
- endpoint_name_input = gr.Textbox(show_label=False, elem_classes=["no-label", "small-big"])
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Cloud Provider""")
- provider_selector = gr.Dropdown(
- choices=providers.keys(),
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Cloud Region""")
- region_selector = gr.Dropdown(
- [],
- value="",
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Row(visible=False):
- with gr.Column():
- gr.Markdown("### Task")
- task_selector = gr.Textbox(
- value="text-generation",
- interactive=False,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("### Framework")
- framework_selector = gr.Textbox(
- value="PyTorch",
- interactive=False,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Compute Instance Type""")
- compute_selector = gr.Dropdown(
- [],
- value="",
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Min Number of Nodes""")
- min_node_selector = gr.Number(
- value=1,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Max Number of Nodes""")
- max_node_selector = gr.Number(
- value=1,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Security Level""")
- security_selector = gr.Radio(
- choices=["Protected", "Public", "Private"],
- value="Public",
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column(elem_classes=["group-border"]):
- with gr.Accordion("Serving Container", open=False, elem_classes=["no-border"]):
- with gr.Column():
- gr.Markdown("""### Container Type
-
- Text Generation Inference is an optimized container for text generation task""")
- _ = gr.Textbox("Text Generation Inference", show_label=False, elem_classes=["no-label", "small-big"])
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Custom Cuda Kernels
-
- TGI uses custom kernels to speed up inference for some models. You can try disabling them if you encounter issues.""")
- custom_kernel = gr.Dropdown(
- value="Enabled",
- choices=["Enabled", "Disabled"],
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Quantization
-
- Quantization can reduce the model size and improve latency, with little degradation in model accuracy.""")
- _ = gr.Dropdown(
- value="None",
- choices=["None", "Bitsandbytes", "GPTQ"],
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Max Input Length (per Query)
-
- Increasing this value can impact the amount of RAM required. Some models can only handle a finite range of sequences.""")
- max_input_length = gr.Number(
- value=1024,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Max Number of Tokens (per Query)
-
- The larger this value, the more memory each request will consume and the less effective batching can be.""")
- max_tokens = gr.Number(
- value=1512,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("""### Max Batch Prefill Tokens
-
- Number of prefill tokens used during continuous batching. It can be useful to adjust this number since the prefill operation is memory-intensive and compute-bound.""")
- max_batch_prefill_token = gr.Number(
- value=2048,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- with gr.Column():
- gr.Markdown("""### Max Batch Total Tokens
-
- Number of tokens that can be passed before forcing waiting queries to be put on the batch. A value of 1000 can fit 10 queries of 100 tokens or a single query of 1000 tokens.""")
- max_batch_total_token = gr.Number(
- value=None,
- interactive=True,
- show_label=False,
- elem_classes=["no-label", "small-big"]
- )
-
- submit_button = gr.Button(
- value="Submit",
- elem_classes=["control-label-font", "control-button"]
- )
-
- status_txt = gr.Textbox(
- value="any status update will be displayed here",
- interactive=False,
- elem_classes=["no-label"]
- )
-
- provider_selector.change(update_regions, inputs=provider_selector, outputs=region_selector)
- region_selector.change(update_compute_options, inputs=[provider_selector, region_selector], outputs=compute_selector)
-
- submit_button.click(
- submit,
- inputs=[
- hf_account_input,
- hf_token_input,
- endpoint_name_input,
- provider_selector,
- region_selector,
- repository_selector,
- task_selector,
- framework_selector,
- compute_selector,
- min_node_selector,
- max_node_selector,
- security_selector,
- custom_kernel,
- max_input_length,
- max_tokens,
- max_batch_prefill_token,
- max_batch_total_token],
- outputs=status_txt)
-
- with gr.Tab("AWS", elem_classes=["no-border"]):
- gr.Markdown("# Deploy LLM on 🤗 Hugging Face Inference Endpoint", elem_classes=["center"])
-
- with gr.Tab("GCP", elem_classes=["no-border"]):
- gr.Markdown("# Deploy LLM on 🤗 Hugging Face Inference Endpoint", elem_classes=["center"])
-
- with gr.Tab("Azure", elem_classes=["no-border"]):
- gr.Markdown("# Deploy LLM on 🤗 Hugging Face Inference Endpoint", elem_classes=["center"])
-
- with gr.Tab("Lambdalabs", elem_classes=["no-border"]):
- gr.Markdown("# Deploy LLM on 🤗 Hugging Face Inference Endpoint", elem_classes=["center"])
-
-hf_endpoint.launch(enable_queue=True, debug=True)
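As a rough companion to the token-budget fields in the form above, a toy calculation (an illustration only; the real TGI router applies further constraints) of how many concurrent requests fit under a given `MAX_BATCH_TOTAL_TOKENS` when each request may use up to `MAX_TOTAL_TOKENS`:

```python
# Toy version of the rule of thumb quoted in the UI: a budget of 1000 total
# tokens fits 10 queries of 100 tokens, or a single query of 1000 tokens.
def max_concurrent_requests(max_batch_total_tokens: int, max_total_tokens: int) -> int:
    return max_batch_total_tokens // max_total_tokens

print(max_concurrent_requests(1000, 100))    # 10
print(max_concurrent_requests(16000, 1512))  # 10, assuming a hypothetical 16k batch budget
```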
diff --git a/spaces/chendl/compositional_test/multimodal/tools/make_soft_link_blip2_data.py b/spaces/chendl/compositional_test/multimodal/tools/make_soft_link_blip2_data.py
deleted file mode 100644
index 67c108652301f951a228a41281814485792c0b43..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/tools/make_soft_link_blip2_data.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import os
-import shutil
-import glob
-import random
-from pprint import pprint
-
-DIR_COCO_VG = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw"
-DIR = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/blip2_pretraining"
-OUT_DIR = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/blip2_all_data_ground"
-
-
-if __name__ == "__main__":
- os.makedirs(OUT_DIR, exist_ok=True)
- ccs_tars = glob.glob(os.path.join(DIR, "ccs_synthetic_filtered_large_ground", "*.tar"))
- coco_tars = glob.glob(os.path.join(DIR_COCO_VG, "karpathy_coco_wds_full_ground", "*.tar"))
- vg_tars = glob.glob(os.path.join(DIR_COCO_VG, "vg_wds_full_ground", "*.tar"))
- laion_part_tars = glob.glob(os.path.join(DIR, "laion_synthetic_filtered_large", "all_ground", "*.tar"))
- tars = []
- tars.extend(ccs_tars)
- for _ in range(5):
- tars.extend(coco_tars)
- tars.extend(vg_tars)
- tars.extend(laion_part_tars)
- random.shuffle(tars)
- print(len(tars))
- pprint(tars[:20])
- for i, tar in enumerate(tars):
- dst = os.path.join(OUT_DIR, f"{str(i).zfill(6)}.tar")
- # print(tar, dst)
- os.symlink(tar, dst)
diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/__init__.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/__init__.py
deleted file mode 100644
index 3cee09bb7f51087e92d778c4c9e27d76085d1b30..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import os
-import sys
-
-
-sys.path.insert(1, os.path.dirname(os.path.realpath(__file__)))
diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py
deleted file mode 100644
index cd1ad8b5e2337ca78b06b2b36b0399e77350e569..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py
+++ /dev/null
@@ -1,756 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning a 🤗 Transformers model on summarization.
-"""
-# You can also adapt this script on your own summarization task. Pointers for this are left as comments.
-
-import argparse
-import json
-import logging
-import math
-import os
-import random
-from pathlib import Path
-
-import datasets
-import evaluate
-import nltk
-import numpy as np
-import torch
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from datasets import load_dataset
-from filelock import FileLock
-from huggingface_hub import Repository, create_repo
-from torch.utils.data import DataLoader
-from tqdm.auto import tqdm
-
-import transformers
-from transformers import (
- CONFIG_MAPPING,
- MODEL_MAPPING,
- AutoConfig,
- AutoModelForSeq2SeqLM,
- AutoTokenizer,
- DataCollatorForSeq2Seq,
- SchedulerType,
- get_scheduler,
-)
-from transformers.utils import check_min_version, get_full_repo_name, is_offline_mode, send_example_telemetry
-from transformers.utils.versions import require_version
-
-
-# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.28.0")
-
-logger = get_logger(__name__)
-require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/summarization/requirements.txt")
-
-# You should update this to your particular problem to have better documentation of `model_type`
-MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
-MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
-
-try:
- nltk.data.find("tokenizers/punkt")
-except (LookupError, OSError):
- if is_offline_mode():
- raise LookupError(
- "Offline mode: run this script without TRANSFORMERS_OFFLINE first to download nltk data files"
- )
- with FileLock(".lock") as lock:
- nltk.download("punkt", quiet=True)
-
-summarization_name_mapping = {
- "amazon_reviews_multi": ("review_body", "review_title"),
- "big_patent": ("description", "abstract"),
- "cnn_dailymail": ("article", "highlights"),
- "orange_sum": ("text", "summary"),
- "pn_summary": ("article", "summary"),
- "psc": ("extract_text", "summary_text"),
- "samsum": ("dialogue", "summary"),
- "thaisum": ("body", "summary"),
- "xglue": ("news_body", "news_title"),
- "xsum": ("document", "summary"),
- "wiki_summary": ("article", "highlights"),
-}
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Finetune a transformers model on a summarization task")
- parser.add_argument(
- "--dataset_name",
- type=str,
- default=None,
- help="The name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--dataset_config_name",
- type=str,
- default=None,
- help="The configuration name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--train_file", type=str, default=None, help="A csv or a json file containing the training data."
- )
- parser.add_argument(
- "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
- )
- parser.add_argument(
- "--ignore_pad_token_for_loss",
- type=bool,
- default=True,
- help="Whether to ignore the tokens corresponding to padded labels in the loss computation or not.",
- )
- parser.add_argument(
- "--max_source_length",
- type=int,
- default=1024,
- help=(
- "The maximum total input sequence length after "
-            "tokenization. Sequences longer than this will be truncated, sequences shorter will be padded."
- ),
- )
- parser.add_argument(
- "--source_prefix",
- type=str,
- default=None,
- help="A prefix to add before every source text (useful for T5 models).",
- )
- parser.add_argument(
- "--preprocessing_num_workers",
- type=int,
- default=None,
- help="The number of processes to use for the preprocessing.",
- )
- parser.add_argument(
- "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
- )
- parser.add_argument(
- "--max_target_length",
- type=int,
- default=128,
- help=(
- "The maximum total sequence length for target text after "
-            "tokenization. Sequences longer than this will be truncated, sequences shorter will be padded. "
-            "It also serves as the default for ``--val_max_target_length`` during ``evaluate`` and ``predict``."
- ),
- )
- parser.add_argument(
- "--val_max_target_length",
- type=int,
- default=None,
- help=(
- "The maximum total sequence length for validation "
-            "target text after tokenization. Sequences longer than this will be truncated, sequences shorter will be "
-            "padded. Will default to `max_target_length`. This argument is also used to override the ``max_length`` "
- "param of ``model.generate``, which is used during ``evaluate`` and ``predict``."
- ),
- )
- parser.add_argument(
- "--num_beams",
- type=int,
- default=None,
- help=(
- "Number of beams to use for evaluation. This argument will be "
- "passed to ``model.generate``, which is used during ``evaluate`` and ``predict``."
- ),
- )
- parser.add_argument(
- "--pad_to_max_length",
- action="store_true",
- help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.",
- )
- parser.add_argument(
- "--model_name_or_path",
- type=str,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- required=False,
- )
- parser.add_argument(
- "--config_name",
- type=str,
- default=None,
- help="Pretrained config name or path if not the same as model_name",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--text_column",
- type=str,
- default=None,
- help="The name of the column in the datasets containing the full texts (for summarization).",
- )
- parser.add_argument(
- "--summary_column",
- type=str,
- default=None,
- help="The name of the column in the datasets containing the summaries (for summarization).",
- )
- parser.add_argument(
- "--use_slow_tokenizer",
- action="store_true",
- help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).",
- )
- parser.add_argument(
- "--per_device_train_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the training dataloader.",
- )
- parser.add_argument(
- "--per_device_eval_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the evaluation dataloader.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-5,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
- parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--lr_scheduler_type",
- type=SchedulerType,
- default="linear",
- help="The scheduler type to use.",
- choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
- )
- parser.add_argument(
- "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--model_type",
- type=str,
- default=None,
- help="Model type to use if training from scratch.",
- choices=MODEL_TYPES,
- )
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument(
- "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
- )
- parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--checkpointing_steps",
- type=str,
- default=None,
- help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help="If the training should continue from a checkpoint folder.",
- )
- parser.add_argument(
- "--with_tracking",
- action="store_true",
- help="Whether to enable experiment trackers for logging.",
- )
- parser.add_argument(
- "--report_to",
- type=str,
- default="all",
- help=(
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
-            ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations. '
- "Only applicable when `--with_tracking` is passed."
- ),
- )
- args = parser.parse_args()
-
- # Sanity checks
- if args.dataset_name is None and args.train_file is None and args.validation_file is None:
- raise ValueError("Need either a dataset name or a training/validation file.")
- else:
- if args.train_file is not None:
- extension = args.train_file.split(".")[-1]
- assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
- if args.validation_file is not None:
- extension = args.validation_file.split(".")[-1]
- assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
-
- if args.push_to_hub:
- assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
-
- return args
-
-
-def main():
- args = parse_args()
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- send_example_telemetry("run_summarization_no_trainer", args)
-
- # Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
- # If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
- # in the environment
- accelerator_log_kwargs = {}
-
- if args.with_tracking:
- accelerator_log_kwargs["log_with"] = args.report_to
- accelerator_log_kwargs["logging_dir"] = args.output_dir
-
- accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs)
- if args.source_prefix is None and args.model_name_or_path in [
- "t5-small",
- "t5-base",
- "t5-large",
- "t5-3b",
- "t5-11b",
- ]:
- logger.warning(
-            "You're running a t5 model but didn't provide a source prefix, which t5 models expect, e.g. with "
- "`--source_prefix 'summarize: ' `"
- )
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- transformers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- transformers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- create_repo(repo_name, exist_ok=True, token=args.hub_token)
- repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
- accelerator.wait_for_everyone()
-
- # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
- # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
- # (the dataset will be downloaded automatically from the datasets Hub).
- #
- # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
- # 'text' is found. You can easily tweak this behavior (see below).
- #
-    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
- else:
- data_files = {}
- if args.train_file is not None:
- data_files["train"] = args.train_file
- if args.validation_file is not None:
- data_files["validation"] = args.validation_file
- extension = args.train_file.split(".")[-1]
- raw_datasets = load_dataset(extension, data_files=data_files)
- # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
-
- # Load pretrained model and tokenizer
- #
- # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- if args.config_name:
- config = AutoConfig.from_pretrained(args.config_name)
- elif args.model_name_or_path:
- config = AutoConfig.from_pretrained(args.model_name_or_path)
- else:
- config = CONFIG_MAPPING[args.model_type]()
- logger.warning("You are instantiating a new config instance from scratch.")
-
- if args.tokenizer_name:
- tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, use_fast=not args.use_slow_tokenizer)
- elif args.model_name_or_path:
- tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer)
- else:
- raise ValueError(
-            "You are instantiating a new tokenizer from scratch. This is not supported by this script. "
- "You can do it from another script, save it, and load it from here, using --tokenizer_name."
- )
-
- if args.model_name_or_path:
- model = AutoModelForSeq2SeqLM.from_pretrained(
- args.model_name_or_path,
- from_tf=bool(".ckpt" in args.model_name_or_path),
- config=config,
- )
- else:
- logger.info("Training new model from scratch")
- model = AutoModelForSeq2SeqLM.from_config(config)
-
- # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
- # on a small vocab and want a smaller embedding size, remove this test.
- embedding_size = model.get_input_embeddings().weight.shape[0]
- if len(tokenizer) > embedding_size:
- model.resize_token_embeddings(len(tokenizer))
- if model.config.decoder_start_token_id is None:
- raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")
-
- prefix = args.source_prefix if args.source_prefix is not None else ""
-
- # Preprocessing the datasets.
- # First we tokenize all the texts.
- column_names = raw_datasets["train"].column_names
-
- # Get the column names for input/target.
- dataset_columns = summarization_name_mapping.get(args.dataset_name, None)
- if args.text_column is None:
- text_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
- else:
- text_column = args.text_column
- if text_column not in column_names:
- raise ValueError(
-                f"'--text_column' value '{args.text_column}' needs to be one of: {', '.join(column_names)}"
- )
- if args.summary_column is None:
- summary_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
- else:
- summary_column = args.summary_column
- if summary_column not in column_names:
- raise ValueError(
-                f"'--summary_column' value '{args.summary_column}' needs to be one of: {', '.join(column_names)}"
- )
-
- if args.val_max_target_length is None:
- args.val_max_target_length = args.max_target_length
-
- # Temporarily set max_target_length for training.
- max_target_length = args.max_target_length
- padding = "max_length" if args.pad_to_max_length else False
-
- def preprocess_function(examples):
- inputs = examples[text_column]
- targets = examples[summary_column]
- inputs = [prefix + inp for inp in inputs]
- model_inputs = tokenizer(inputs, max_length=args.max_source_length, padding=padding, truncation=True)
-
- # Tokenize targets with the `text_target` keyword argument
- labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True)
-
- # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
- # padding in the loss.
- if padding == "max_length" and args.ignore_pad_token_for_loss:
- labels["input_ids"] = [
- [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
- ]
-
- model_inputs["labels"] = labels["input_ids"]
- return model_inputs
-
- with accelerator.main_process_first():
- train_dataset = raw_datasets["train"].map(
- preprocess_function,
- batched=True,
- num_proc=args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not args.overwrite_cache,
- desc="Running tokenizer on dataset",
- )
-
- # Temporarily set max_target_length for validation.
- max_target_length = args.val_max_target_length
- eval_dataset = raw_datasets["validation"].map(
- preprocess_function,
- batched=True,
- num_proc=args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not args.overwrite_cache,
- desc="Running tokenizer on dataset",
- )
-
- # Log a few random samples from the training set:
- for index in random.sample(range(len(train_dataset)), 1):
- logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
-
- label_pad_token_id = -100 if args.ignore_pad_token_for_loss else tokenizer.pad_token_id
- data_collator = DataCollatorForSeq2Seq(
- tokenizer,
- model=model,
- label_pad_token_id=label_pad_token_id,
- pad_to_multiple_of=8 if accelerator.use_fp16 else None,
- )
-
- def postprocess_text(preds, labels):
- preds = [pred.strip() for pred in preds]
- labels = [label.strip() for label in labels]
-
- # rougeLSum expects newline after each sentence
- preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
- labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]
-
- return preds, labels
-
- train_dataloader = DataLoader(
- train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size
- )
- eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size)
-
- # Optimizer
- # Split weights in two groups, one with weight decay and the other not.
- no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
- optimizer_grouped_parameters = [
- {
- "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
- "weight_decay": args.weight_decay,
- },
- {
- "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
- "weight_decay": 0.0,
- },
- ]
- optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- name=args.lr_scheduler_type,
- optimizer=optimizer,
- num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- # Prepare everything with our `accelerator`.
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
- )
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # Figure out how many steps we should save the Accelerator states
- checkpointing_steps = args.checkpointing_steps
- if checkpointing_steps is not None and checkpointing_steps.isdigit():
- checkpointing_steps = int(checkpointing_steps)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initializes automatically on the main process.
- if args.with_tracking:
- experiment_config = vars(args)
- # TensorBoard cannot log Enums, need the raw value
- experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
- accelerator.init_trackers("summarization_no_trainer", experiment_config)
-
- # Metric
- metric = evaluate.load("rouge")
-
- # Train!
- total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- completed_steps = 0
- starting_epoch = 0
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
- accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
- accelerator.load_state(args.resume_from_checkpoint)
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
- dirs.sort(key=os.path.getctime)
- path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
- # Extract `epoch_{i}` or `step_{i}`
- training_difference = os.path.splitext(path)[0]
-
- if "epoch" in training_difference:
- starting_epoch = int(training_difference.replace("epoch_", "")) + 1
- resume_step = None
- else:
- resume_step = int(training_difference.replace("step_", ""))
- starting_epoch = resume_step // len(train_dataloader)
- resume_step -= starting_epoch * len(train_dataloader)
-
- for epoch in range(starting_epoch, args.num_train_epochs):
- model.train()
- if args.with_tracking:
- total_loss = 0
- for step, batch in enumerate(train_dataloader):
- # We need to skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == starting_epoch:
- if resume_step is not None and step < resume_step:
- completed_steps += 1
- continue
-
- with accelerator.accumulate(model):
- outputs = model(**batch)
- loss = outputs.loss
- # We keep track of the loss at each epoch
- if args.with_tracking:
- total_loss += loss.detach().float()
- accelerator.backward(loss)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- completed_steps += 1
-
- if isinstance(checkpointing_steps, int):
- if completed_steps % checkpointing_steps == 0:
- output_dir = f"step_{completed_steps }"
- if args.output_dir is not None:
- output_dir = os.path.join(args.output_dir, output_dir)
- accelerator.save_state(output_dir)
-
- if completed_steps >= args.max_train_steps:
- break
-
- model.eval()
-
- gen_kwargs = {
- "max_length": args.val_max_target_length,
- "num_beams": args.num_beams,
- }
- for step, batch in enumerate(eval_dataloader):
- with torch.no_grad():
- generated_tokens = accelerator.unwrap_model(model).generate(
- batch["input_ids"],
- attention_mask=batch["attention_mask"],
- **gen_kwargs,
- )
-
- generated_tokens = accelerator.pad_across_processes(
- generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
- )
- labels = batch["labels"]
- if not args.pad_to_max_length:
- # If we did not pad to max length, we need to pad the labels too
- labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id)
-
- generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels))
- generated_tokens = generated_tokens.cpu().numpy()
- labels = labels.cpu().numpy()
-
- if args.ignore_pad_token_for_loss:
- # Replace -100 in the labels as we can't decode them.
- labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
- if isinstance(generated_tokens, tuple):
- generated_tokens = generated_tokens[0]
- decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
- decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
-
- decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
- metric.add_batch(
- predictions=decoded_preds,
- references=decoded_labels,
- )
- result = metric.compute(use_stemmer=True)
- result = {k: round(v * 100, 4) for k, v in result.items()}
-
- logger.info(result)
-
- if args.with_tracking:
- result["train_loss"] = total_loss.item() / len(train_dataloader)
- result["epoch"] = epoch
- result["step"] = completed_steps
- accelerator.log(result, step=completed_steps)
-
- if args.push_to_hub and epoch < args.num_train_epochs - 1:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(
- args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
- )
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- repo.push_to_hub(
- commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
- )
-
- if args.checkpointing_steps == "epoch":
- output_dir = f"epoch_{epoch}"
- if args.output_dir is not None:
- output_dir = os.path.join(args.output_dir, output_dir)
- accelerator.save_state(output_dir)
-
- if args.output_dir is not None:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(
- args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
- )
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
-
- all_results = {f"eval_{k}": v for k, v in result.items()}
- with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
- json.dump(all_results, f)
-
-
-if __name__ == "__main__":
- main()
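Editor's note on the deleted training script above: it masks padded label positions with -100 so the loss ignores them, then restores the tokenizer's pad id before decoding predictions for ROUGE. Below is a minimal, self-contained sketch of that round trip; the pad id of 0 and the toy label sequence are assumptions for illustration, not values taken from the script.

import numpy as np

pad_token_id = 0      # assumed pad id; a real tokenizer supplies its own
ignore_index = -100   # label value ignored by the cross-entropy loss

# Masking step: replace pad ids so padded positions do not affect the loss.
labels = [[17, 23, 42, pad_token_id, pad_token_id]]
masked = [[tok if tok != pad_token_id else ignore_index for tok in seq] for seq in labels]
assert masked == [[17, 23, 42, -100, -100]]

# Metric step: -100 cannot be decoded, so swap the pad id back in first.
arr = np.array(masked)
restored = np.where(arr != ignore_index, arr, pad_token_id)
assert restored.tolist() == labels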
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/data/test_generation_utils.py b/spaces/chendl/compositional_test/transformers/src/transformers/data/test_generation_utils.py
deleted file mode 100644
index a69b5683de755dec720132cb2735f441b1857282..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/data/test_generation_utils.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import random
-import unittest
-
-import timeout_decorator
-
-from ..testing_utils import require_torch
-from ..utils import cached_property, is_torch_available
-
-
-if is_torch_available():
- import torch
-
- from ..models.marian import MarianConfig, MarianMTModel
-
-
-@require_torch
-class GenerationUtilsTest(unittest.TestCase):
- @cached_property
- def config(self):
- config = MarianConfig.from_pretrained("sshleifer/tiny-marian-en-de")
- return config
-
- @cached_property
- def model(self):
- return MarianMTModel(self.config)
-
- def test_postprocess_next_token_scores(self):
- config = self.config
- model = self.model
- # Initialize an input id tensor with batch size 8 and sequence length 12
- input_ids = torch.arange(0, 96, 1).view((8, 12))
- eos = config.eos_token_id
- bad_words_ids_test_cases = [[[299]], [[23, 24], [54]], [[config.eos_token_id]], []]
- masked_scores = [
- [(0, 299), (1, 299), (2, 299), (3, 299), (4, 299), (5, 299), (6, 299), (7, 299)],
- [(1, 24), (0, 54), (1, 54), (2, 54), (3, 54), (4, 54), (5, 54), (6, 54), (7, 54)],
- [(0, eos), (1, eos), (2, eos), (3, eos), (4, eos), (5, eos), (6, eos), (7, eos)],
- [],
- ]
-
- for test_case_index, bad_words_ids in enumerate(bad_words_ids_test_cases):
- # Initialize a scores tensor with batch size 8 and vocabulary size 300
- scores = torch.rand((8, 300))
- output = model.postprocess_next_token_scores(
- scores,
- input_ids,
- 0,
- bad_words_ids,
- 13,
- 15,
- config.max_length,
- config.eos_token_id,
- config.repetition_penalty,
- 32,
- 5,
- )
- for masked_score in masked_scores[test_case_index]:
- self.assertTrue(output[masked_score[0], masked_score[1]] == -float("inf"))
-
- @timeout_decorator.timeout(10)
- def test_postprocess_next_token_scores_large_bad_words_list(self):
- config = self.config
- model = self.model
- # Initialize an input id tensor with batch size 8 and sequence length 12
- input_ids = torch.arange(0, 96, 1).view((8, 12))
-
- bad_words_ids = []
- for _ in range(100):
- length_bad_word = random.randint(1, 4)
- bad_words_ids.append(random.sample(range(1, 300), length_bad_word))
-
- scores = torch.rand((8, 300))
- _ = model.postprocess_next_token_scores(
- scores,
- input_ids,
- 0,
- bad_words_ids,
- 13,
- 15,
- config.max_length,
- config.eos_token_id,
- config.repetition_penalty,
- 32,
- 5,
- )
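Editor's note on the deleted test above: it asserts one observable property of postprocess_next_token_scores, namely that every banned token id receives a score of -inf across the whole batch. The sketch below reproduces only that property for single-token bad words (the multi-token cases in the test depend on the generation history and are not modeled here); it is illustrative and not the Transformers implementation.

import torch

def ban_single_token_ids(scores: torch.Tensor, bad_words_ids):
    # Force the score of each one-token bad word to -inf for every batch row,
    # which is the condition the deleted test checks for masked positions.
    for word in bad_words_ids:
        if len(word) == 1:
            scores[:, word[0]] = -float("inf")
    return scores

scores = torch.rand((8, 300))
scores = ban_single_token_ids(scores, [[299]])
assert torch.all(scores[:, 299] == -float("inf"))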
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py
deleted file mode 100644
index b4bc5d0c200e58f793fff6d3ffe95b2d76d36c64..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod
-
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-
-
-class table_C_O_L_R_(DefaultTable.DefaultTable):
-
- """This table is structured so that you can treat it like a dictionary keyed by glyph name.
-
- ``ttFont['COLR'][<glyphName>]`` will return the color layers for any glyph.
-
- ``ttFont['COLR'][<glyphName>] = <value>`` will set the color layers for any glyph.
- """
-
- @staticmethod
- def _decompileColorLayersV0(table):
- if not table.LayerRecordArray:
- return {}
- colorLayerLists = {}
- layerRecords = table.LayerRecordArray.LayerRecord
- numLayerRecords = len(layerRecords)
- for baseRec in table.BaseGlyphRecordArray.BaseGlyphRecord:
- baseGlyph = baseRec.BaseGlyph
- firstLayerIndex = baseRec.FirstLayerIndex
- numLayers = baseRec.NumLayers
- assert firstLayerIndex + numLayers <= numLayerRecords
- layers = []
- for i in range(firstLayerIndex, firstLayerIndex + numLayers):
- layerRec = layerRecords[i]
- layers.append(LayerRecord(layerRec.LayerGlyph, layerRec.PaletteIndex))
- colorLayerLists[baseGlyph] = layers
- return colorLayerLists
-
- def _toOTTable(self, ttFont):
- from . import otTables
- from fontTools.colorLib.builder import populateCOLRv0
-
- tableClass = getattr(otTables, self.tableTag)
- table = tableClass()
- table.Version = self.version
-
- populateCOLRv0(
- table,
- {
- baseGlyph: [(layer.name, layer.colorID) for layer in layers]
- for baseGlyph, layers in self.ColorLayers.items()
- },
- glyphMap=ttFont.getReverseGlyphMap(rebuild=True),
- )
- return table
-
- def decompile(self, data, ttFont):
- from .otBase import OTTableReader
- from . import otTables
-
- # We use otData to decompile, but we adapt the decompiled otTables to the
- # existing COLR v0 API for backward compatibility.
- reader = OTTableReader(data, tableTag=self.tableTag)
- tableClass = getattr(otTables, self.tableTag)
- table = tableClass()
- table.decompile(reader, ttFont)
-
- self.version = table.Version
- if self.version == 0:
- self.ColorLayers = self._decompileColorLayersV0(table)
- else:
- # for new versions, keep the raw otTables around
- self.table = table
-
- def compile(self, ttFont):
- from .otBase import OTTableWriter
-
- if hasattr(self, "table"):
- table = self.table
- else:
- table = self._toOTTable(ttFont)
-
- writer = OTTableWriter(tableTag=self.tableTag)
- table.compile(writer, ttFont)
- return writer.getAllData()
-
- def toXML(self, writer, ttFont):
- if hasattr(self, "table"):
- self.table.toXML2(writer, ttFont)
- else:
- writer.simpletag("version", value=self.version)
- writer.newline()
- for baseGlyph in sorted(self.ColorLayers.keys(), key=ttFont.getGlyphID):
- writer.begintag("ColorGlyph", name=baseGlyph)
- writer.newline()
- for layer in self.ColorLayers[baseGlyph]:
- layer.toXML(writer, ttFont)
- writer.endtag("ColorGlyph")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "version": # old COLR v0 API
- setattr(self, name, safeEval(attrs["value"]))
- elif name == "ColorGlyph":
- if not hasattr(self, "ColorLayers"):
- self.ColorLayers = {}
- glyphName = attrs["name"]
- for element in content:
- if isinstance(element, str):
- continue
- layers = []
- for element in content:
- if isinstance(element, str):
- continue
- layer = LayerRecord()
- layer.fromXML(element[0], element[1], element[2], ttFont)
- layers.append(layer)
- self.ColorLayers[glyphName] = layers
- else: # new COLR v1 API
- from . import otTables
-
- if not hasattr(self, "table"):
- tableClass = getattr(otTables, self.tableTag)
- self.table = tableClass()
- self.table.fromXML(name, attrs, content, ttFont)
- self.table.populateDefaults()
- self.version = self.table.Version
-
- def __getitem__(self, glyphName):
- if not isinstance(glyphName, str):
- raise TypeError(f"expected str, found {type(glyphName).__name__}")
- return self.ColorLayers[glyphName]
-
- def __setitem__(self, glyphName, value):
- if not isinstance(glyphName, str):
- raise TypeError(f"expected str, found {type(glyphName).__name__}")
- if value is not None:
- self.ColorLayers[glyphName] = value
- elif glyphName in self.ColorLayers:
- del self.ColorLayers[glyphName]
-
- def __delitem__(self, glyphName):
- del self.ColorLayers[glyphName]
-
-
-class LayerRecord(object):
- def __init__(self, name=None, colorID=None):
- self.name = name
- self.colorID = colorID
-
- def toXML(self, writer, ttFont):
- writer.simpletag("layer", name=self.name, colorID=self.colorID)
- writer.newline()
-
- def fromXML(self, eltname, attrs, content, ttFont):
- for (name, value) in attrs.items():
- if name == "name":
- setattr(self, name, value)
- else:
- setattr(self, name, safeEval(value))
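Editor's note on the deleted COLR table above: its docstring describes a dictionary-style API keyed by glyph name. The sketch below shows that usage, assuming fontTools is installed and building the version-0 ColorLayers mapping by hand instead of loading a real font; the glyph and layer names are made up.

from fontTools.ttLib import newTable
from fontTools.ttLib.tables.C_O_L_R_ import LayerRecord

colr = newTable("COLR")
colr.version = 0
colr.ColorLayers = {}   # v0 storage backing __getitem__/__setitem__

# Set and read back the color layers for a glyph, as the docstring describes.
colr["A"] = [LayerRecord(name="A.layer0", colorID=0),
             LayerRecord(name="A.layer1", colorID=1)]
print([(layer.name, layer.colorID) for layer in colr["A"]])

colr["A"] = None                 # assigning None removes the entry
print("A" in colr.ColorLayers)   # False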
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/archive.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/archive.py
deleted file mode 100644
index dc5c1490b972c592fd3eb9aaeb30b589e384ccb7..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/archive.py
+++ /dev/null
@@ -1,73 +0,0 @@
-from fsspec import AbstractFileSystem
-from fsspec.utils import tokenize
-
-
-class AbstractArchiveFileSystem(AbstractFileSystem):
- """
- A generic superclass for implementing Archive-based filesystems.
-
- Currently, it is shared amongst
- :class:`~fsspec.implementations.zip.ZipFileSystem`,
- :class:`~fsspec.implementations.libarchive.LibArchiveFileSystem` and
- :class:`~fsspec.implementations.tar.TarFileSystem`.
- """
-
- def __str__(self):
- return "" % (type(self).__name__, id(self))
-
- __repr__ = __str__
-
- def ukey(self, path):
- return tokenize(path, self.fo, self.protocol)
-
- def _all_dirnames(self, paths):
- """Returns *all* directory names for each path in paths, including intermediate
- ones.
-
- Parameters
- ----------
- paths: Iterable of path strings
- """
- if len(paths) == 0:
- return set()
-
- dirnames = {self._parent(path) for path in paths} - {self.root_marker}
- return dirnames | self._all_dirnames(dirnames)
-
- def info(self, path, **kwargs):
- self._get_dirs()
- path = self._strip_protocol(path)
- if path in {"", "/"} and self.dir_cache:
- return {"name": "/", "type": "directory", "size": 0}
- if path in self.dir_cache:
- return self.dir_cache[path]
- elif path + "/" in self.dir_cache:
- return self.dir_cache[path + "/"]
- else:
- raise FileNotFoundError(path)
-
- def ls(self, path, detail=True, **kwargs):
- self._get_dirs()
- paths = {}
- for p, f in self.dir_cache.items():
- p = p.rstrip("/")
- if "/" in p:
- root = p.rsplit("/", 1)[0]
- else:
- root = ""
- if root == path.rstrip("/"):
- paths[p] = f
- elif all(
- (a == b)
- for a, b in zip(path.split("/"), [""] + p.strip("/").split("/"))
- ):
- # root directory entry
- ppath = p.rstrip("/").split("/", 1)[0]
- if ppath not in paths:
- out = {"name": ppath + "/", "size": 0, "type": "directory"}
- paths[ppath] = out
- out = sorted(paths.values(), key=lambda _: _["name"])
- if detail:
- return out
- else:
- return [f["name"] for f in out]
diff --git a/spaces/cihyFjudo/fairness-paper-search/Future Music October 2012 Pdf.md b/spaces/cihyFjudo/fairness-paper-search/Future Music October 2012 Pdf.md
deleted file mode 100644
index cca3e23a606c0044aa9667182a2ab89920458425..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Future Music October 2012 Pdf.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Another way that the future of online/offline integration on social media needs to be discussed is in the sense of a digital self. Drawing on the extended self in the digital age (Belk 2013), the way consumers consider online actions as relevant to their offline selves may be changing. For example, Belk (2013) spoke of how consumers may be re-embodied through avatars they create to represent themselves online, influencing their offline selves and creating a multiplicity of selves (i.e., consumers have more choice when it comes to their self-representation). As research has shown how digital and social media can be used for self-presentation, affiliation, and expression (Back et al. 2010; Gosling et al. 2007; Toubia and Stephen 2013; Wilcox and Stephen 2012), what does it mean for the future if consumers can create who they want to be?
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Windows 7 x4 Full Download Tips and Tricks for Optimizing Performance and Security.md b/spaces/cihyFjudo/fairness-paper-search/Windows 7 x4 Full Download Tips and Tricks for Optimizing Performance and Security.md
deleted file mode 100644
index 2f64c659b6edc594408b37f3f8a2f64292cf4ea2..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Windows 7 x4 Full Download Tips and Tricks for Optimizing Performance and Security.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
Before you remove and reinstall a program, patch or update, you have to log in as administrator. The only way to get full administrator access on Vista/Windows 7 is by turning off the User Account Control and right-clicking on a program, patch or update and then going to "Run as Administrator". Otherwise, you will have problems working properly on your system. Click on the links below to learn how to turn-off UAC:
After getting enough money through battling or exchanging, the majority of players desire to develop their personal economic condition and begin affecting the universe on a bigger scale. In X4, it is actually right now possible to become fully totally free and creative. Stations may be created coming from a wide array of components, be it creation elements, residing parts, drops anchor, or even lots of other kinds of parts.
-
The new features include a smart fill tool, a double-click crop tool, and an image adjustment lab. CorelDRAW x3 is a free download perfect for Windows 8, 7, Vista 32-bit, XP 32-bit/64 bit, 2003, or 2000 customers. With the latest version, it is possible to download all updates accessible through the interface of the program. The UI is more attractive and user-friendly in comparison to older versions. The latest updates are all accessible for you to ensure that you will benefit from the improved interface. The latest version of the application comes with new features and updates made. New features and tools for creativity are added to this version. Get More Softwares From Getintopc
-
Click below to begin CorelDRAW Graphics Suite Free Download. It is a download that is a standalone and offline installation for CorelDRAW Graphics Suite X4. It should work perfectly well with compatible versions of Windows. CorelDRAW Graphics Suite X4 is free to download. Download the latest version and up-to-date Version for windows.
-
This update requires that the Secure Channel (Schannel) component in Windows 7 be configured to support TLS 1.1 and 1.2. As these protocol versions are not enabled by default in Windows 7, you must configure the registry settings to ensure Office applications can successfully use TLS 1.1 and 1.2.
-
Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, see How to back up and restore the registry in Windows.
-
The new expansion card adds full support for the new specification on motherboards with a PCIe x4 slot. With the USB Type-C port design and backward compatibility with USB 2.0/3.0/3.1, the new expansion card offers an affordable way for users to upgrade to the USB 3.2 Gen 2x2 specification without having to get a new motherboard.
-
Subscribe and save! A CorelDRAW Graphics Suite subscription provides a flexible, affordable way to enjoy the latest software without having to pay the hefty upfront cost of ownership. Instead, you'll get a full-featured, downloadable version of this professional suite with every new release, as long as your subscription is active.
-
-
The Xtium-CL MX4 takes full advantage of PCIe Gen 2.0 x4 platform to deliver a bandwidth in excess of 1.7GB/s, while at the same time supporting PCIe Gen 1.0 slot to deliver 850MB/s. The newly engineered, on-board, Data Transfer Engine (DTE) produces maximum bandwidth without the need for specialized motherboards or chipsets. By enabling maximum sustained throughput and ready-to-use image data, the Xtium-CL MX4 minimizes CPU usage and improves processing times for the host applications. In addition, the Xtium series has been engineered with enhanced memory architecture allowing it to handle different sensor tap topologies while sustaining color decoding at the maximum frame/line rate.
-
Note: The release you are looking at is Python 3.7.0, the initial feature release for the legacy 3.7 series which is now in the security fix phase of its life cycle. See the downloads page for currently supported versions of Python and for the most recent source-only security fix release for 3.7. The final bugfix release with binary installers for 3.7 was 3.7.9.
-
This free download Coreldraw x4 application is recommended for use on Windows 7 64 bit and 32 bit. With the availability of the coreldraw keygen and serial number, you can activate this software permanently for free forever. Curious? Immediately Download CorelDraw X4 full crack for free on the google drive panel below.
-
EOS Utility is software for communication with your EOS DIGITAL camera. By connecting the camera and computer, you can download to your computer images saved in the camera's memory card as well as set various camera settings or shoot remotely from EOS Utility on your computer.
-
- EOS Utility 3-series and EOS Utility 2.14 can be simultaneously installed to one computer. (When installing EOS Utility 3-series, EOS Utility 2.x will also be updated to the newest version.) - When any model EOS-1Ds Mark III, EOS-1D Mark IV, EOS-1D Mark III, EOS 7D, EOS 5D Mark II, EOS 70D, EOS 60Da, EOS 60D, EOS 50D, EOS 40D, EOS Kiss X70 / EOS REBEL T5 / EOS 1200D / EOS Hi, EOS Kiss X7i / EOS REBEL T5i / EOS 700D, EOS Kiss X7 / EOS REBEL SL1 / EOS 100D, EOS Kiss X6i / EOS REBEL T4i / EOS 650D, EOS Kiss X50 / EOS REBEL T3 / EOS 1100D, EOS Kiss X5 / EOS REBEL T3i / EOS 600D, EOS Kiss X4 / EOS REBEL T2i / EOS 550D, EOS Kiss X3 / EOS REBEL T1i / EOS 500D, EOS Kiss X2 / EOS DIGITAL REBEL XSi / EOS 450D, EOS Kiss F / EOS REBEL XS / EOS 1000D, EOS M2, EOS M is connected, EOS Utility 2.14 will be started. - To download a GPS log file using EOS Utility, use Map Utility 1.8.1 or later for EOS 6D Mark II, use Map Utility 1.7.2 or later for EOS 5D Mark IV, use Map Utility 1.7.0 or later for EOS-1D X Mark II, use Map Utility 1.5.3 or later for EOS 7D Mark II, and use Map Utility 1.4 or later for EOS 6D.
-
Please refer to the instructions below on how to download and install the software. Exit all other applications when installing this software.
1. Download "EU-Installset-W3.10.20.0.zip" from the download page. Save the "EU-Installset-W3.10.20.0.zip" file to a folder of your preference on your computer.
2. When the "EU-Installset-W3.10.20.0.zip" folder saved to the computer is extracted, the "EU-Installset-W3.10.20.0" will be generated, so double-click "euw3.10.20-installer.exe" in the extracted folder. Installation for EOS Utility will begin. (If the User Account Control window appears, follow the on-screen instructions to proceed.)
3. Follow the on-screen instructions to complete the installation. * This software will be installed together with EOS Utility 2, EOS Lens Registration Tool, and EOS Web Service Registration Tool.
4. After the installation is complete, the EOS Utility installer may ask to restart the computer. In this case, restart the computer. If the installation is completed properly, the downloaded file and the "EU-Installset-W3.10.20.0" file will not be necessary.
-
Canon reserves all relevant title, ownership and intellectual property rights in the Content. You may download and use the Content solely for your personal, non-commercial use and at your own risks. Canon shall not be held liable for any damages whatsoever in connection with the Content, (including, without limitation, indirect, consequential, exemplary or incidental damages).
-
Ditto is an extension to the standard windows clipboard. It saves each item placed on the clipboard allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard, text, images, html, custom formats, .....
-
Please refer to the instructions below on how to download and install the software. Exit all other applications when installing this software.
1. Download "EU-Installset-W3.11.0.0.zip" from the download page. Save the "EU-Installset-W3.11.0.0.zip" file to a folder of your preference on your computer.
2. When the "EU-Installset-W3.11.0.0.zip" folder saved to the computer is extracted, the "EU-Installset-W3.11.0.0" will be generated, so double-click "euw3.11.0-installer.exe" in the extracted folder. Installation for EOS Utility will begin. (If the User Account Control window appears, follow the on-screen instructions to proceed.)
3. Follow the on-screen instructions to complete the installation. * This software will be installed together with EOS Utility 2, EOS Lens Registration Tool, and EOS Web Service Registration Tool.
4. After the installation is complete, the EOS Utility installer may ask to restart the computer. In this case, restart the computer. If the installation is completed properly, the downloaded file and the "EU-Installset-W3.11.0.0" file will not be necessary.
-
Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.h
deleted file mode 100644
index 33e51e202ea4cf4546cdb0ca3ab3597464870ee7..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dsp.h
+++ /dev/null
@@ -1,120 +0,0 @@
-/*
- * AC-3 DSP functions
- * Copyright (c) 2011 Justin Ruggles
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_AC3DSP_H
-#define AVCODEC_AC3DSP_H
-
-#include <stdint.h>
-
-/**
- * Number of mantissa bits written for each bap value.
- * bap values with fractional bits are set to 0 and are calculated separately.
- */
-extern const uint16_t ff_ac3_bap_bits[16];
-
-typedef struct AC3DSPContext {
- /**
- * Set each encoded exponent in a block to the minimum of itself and the
- * exponents in the same frequency bin of up to 5 following blocks.
- * @param exp pointer to the start of the current block of exponents.
- * constraints: align 16
- * @param num_reuse_blocks number of blocks that will reuse exponents from the current block.
- * constraints: range 0 to 5
- * @param nb_coefs number of frequency coefficients.
- */
- void (*ac3_exponent_min)(uint8_t *exp, int num_reuse_blocks, int nb_coefs);
-
- /**
- * Convert an array of float in range [-1.0,1.0] to int32_t with range
- * [-(1<<24),(1<<24)]
- *
- * @param dst destination array of int32_t.
- * constraints: 16-byte aligned
- * @param src source array of float.
- * constraints: 16-byte aligned
- * @param len number of elements to convert.
- * constraints: multiple of 32 greater than zero
- */
- void (*float_to_fixed24)(int32_t *dst, const float *src, unsigned int len);
-
- /**
- * Calculate bit allocation pointers.
- * The SNR is the difference between the masking curve and the signal. AC-3
- * uses this value for each frequency bin to allocate bits. The snroffset
- * parameter is a global adjustment to the SNR for all bins.
- *
- * @param[in] mask masking curve
- * @param[in] psd signal power for each frequency bin
- * @param[in] start starting bin location
- * @param[in] end ending bin location
- * @param[in] snr_offset SNR adjustment
- * @param[in] floor noise floor
- * @param[in] bap_tab look-up table for bit allocation pointers
- * @param[out] bap bit allocation pointers
- */
- void (*bit_alloc_calc_bap)(int16_t *mask, int16_t *psd, int start, int end,
- int snr_offset, int floor,
- const uint8_t *bap_tab, uint8_t *bap);
-
- /**
- * Update bap counts using the supplied array of bap.
- *
- * @param[out] mant_cnt bap counts for 1 block
- * @param[in] bap array of bap, pointing to start coef bin
- * @param[in] len number of elements to process
- */
- void (*update_bap_counts)(uint16_t mant_cnt[16], uint8_t *bap, int len);
-
- /**
- * Calculate the number of bits needed to encode a set of mantissas.
- *
- * @param[in] mant_cnt bap counts for all blocks
- * @return mantissa bit count
- */
- int (*compute_mantissa_size)(uint16_t mant_cnt[6][16]);
-
- void (*extract_exponents)(uint8_t *exp, int32_t *coef, int nb_coefs);
-
- void (*sum_square_butterfly_int32)(int64_t sum[4], const int32_t *coef0,
- const int32_t *coef1, int len);
-
- void (*sum_square_butterfly_float)(float sum[4], const float *coef0,
- const float *coef1, int len);
-
- int out_channels;
- int in_channels;
- void (*downmix)(float **samples, float **matrix, int len);
- void (*downmix_fixed)(int32_t **samples, int16_t **matrix, int len);
-} AC3DSPContext;
-
-void ff_ac3dsp_init (AC3DSPContext *c);
-void ff_ac3dsp_init_arm(AC3DSPContext *c);
-void ff_ac3dsp_init_x86(AC3DSPContext *c);
-void ff_ac3dsp_init_mips(AC3DSPContext *c);
-
-void ff_ac3dsp_downmix(AC3DSPContext *c, float **samples, float **matrix,
- int out_ch, int in_ch, int len);
-void ff_ac3dsp_downmix_fixed(AC3DSPContext *c, int32_t **samples, int16_t **matrix,
- int out_ch, int in_ch, int len);
-
-void ff_ac3dsp_set_downmix_x86(AC3DSPContext *c);
-
-#endif /* AVCODEC_AC3DSP_H */
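Editor's note on the header above: it documents float_to_fixed24() as mapping floats in [-1.0, 1.0] to int32 values in [-(1<<24), (1<<24)]. The following is a scalar Python model of that mapping, purely for illustration; the real function is a vectorized C/SIMD routine with alignment and length constraints that are not modeled here.

def float_to_fixed24(samples):
    # Scale [-1.0, 1.0] floats by 2**24 and round to the nearest integer.
    return [round(x * (1 << 24)) for x in samples]

print(float_to_fixed24([0.0, 0.5, -1.0]))   # [0, 8388608, -16777216]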
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/celp_filters.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/celp_filters.c
deleted file mode 100644
index 4f627e009280e1919e7bcffe8239a4702efcc39a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/celp_filters.c
+++ /dev/null
@@ -1,221 +0,0 @@
-/*
- * various filters for ACELP-based codecs
- *
- * Copyright (c) 2008 Vladimir Voroshilov
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <inttypes.h>
-#include <string.h>
-
-#include "config.h"
-#include "celp_filters.h"
-#include "libavutil/avassert.h"
-#include "libavutil/common.h"
-
-void ff_celp_convolve_circ(int16_t* fc_out, const int16_t* fc_in,
- const int16_t* filter, int len)
-{
- int i, k;
-
- memset(fc_out, 0, len * sizeof(int16_t));
-
- /* Since there are few pulses over an entire subframe (i.e. almost
- all fc_in[i] are zero) it is faster to loop over fc_in first. */
- for (i = 0; i < len; i++) {
- if (fc_in[i]) {
- for (k = 0; k < i; k++)
- fc_out[k] += (fc_in[i] * filter[len + k - i]) >> 15;
-
- for (k = i; k < len; k++)
- fc_out[k] += (fc_in[i] * filter[ k - i]) >> 15;
- }
- }
-}
-
-void ff_celp_circ_addf(float *out, const float *in,
- const float *lagged, int lag, float fac, int n)
-{
- int k;
- for (k = 0; k < lag; k++)
- out[k] = in[k] + fac * lagged[n + k - lag];
- for (; k < n; k++)
- out[k] = in[k] + fac * lagged[ k - lag];
-}
-
-int ff_celp_lp_synthesis_filter(int16_t *out, const int16_t *filter_coeffs,
- const int16_t *in, int buffer_length,
- int filter_length, int stop_on_overflow,
- int shift, int rounder)
-{
- int i,n;
-
- for (n = 0; n < buffer_length; n++) {
- int sum = rounder, sum1;
- for (i = 1; i <= filter_length; i++)
- sum -= (unsigned)(filter_coeffs[i-1] * out[n-i]);
-
- sum1 = ((sum >> 12) + in[n]) >> shift;
- sum = av_clip_int16(sum1);
-
- if (stop_on_overflow && sum != sum1)
- return 1;
-
- out[n] = sum;
- }
-
- return 0;
-}
-
-void ff_celp_lp_synthesis_filterf(float *out, const float *filter_coeffs,
- const float* in, int buffer_length,
- int filter_length)
-{
- int i,n;
-
-#if 0 // Unoptimized code path for improved readability
- for (n = 0; n < buffer_length; n++) {
- out[n] = in[n];
- for (i = 1; i <= filter_length; i++)
- out[n] -= filter_coeffs[i-1] * out[n-i];
- }
-#else
- float out0, out1, out2, out3;
- float old_out0, old_out1, old_out2, old_out3;
- float a,b,c;
-
- a = filter_coeffs[0];
- b = filter_coeffs[1];
- c = filter_coeffs[2];
- b -= filter_coeffs[0] * filter_coeffs[0];
- c -= filter_coeffs[1] * filter_coeffs[0];
- c -= filter_coeffs[0] * b;
-
- av_assert2((filter_length&1)==0 && filter_length>=4);
-
- old_out0 = out[-4];
- old_out1 = out[-3];
- old_out2 = out[-2];
- old_out3 = out[-1];
- for (n = 0; n <= buffer_length - 4; n+=4) {
- float tmp0,tmp1,tmp2;
- float val;
-
- out0 = in[0];
- out1 = in[1];
- out2 = in[2];
- out3 = in[3];
-
- out0 -= filter_coeffs[2] * old_out1;
- out1 -= filter_coeffs[2] * old_out2;
- out2 -= filter_coeffs[2] * old_out3;
-
- out0 -= filter_coeffs[1] * old_out2;
- out1 -= filter_coeffs[1] * old_out3;
-
- out0 -= filter_coeffs[0] * old_out3;
-
- val = filter_coeffs[3];
-
- out0 -= val * old_out0;
- out1 -= val * old_out1;
- out2 -= val * old_out2;
- out3 -= val * old_out3;
-
- for (i = 5; i < filter_length; i += 2) {
- old_out3 = out[-i];
- val = filter_coeffs[i-1];
-
- out0 -= val * old_out3;
- out1 -= val * old_out0;
- out2 -= val * old_out1;
- out3 -= val * old_out2;
-
- old_out2 = out[-i-1];
-
- val = filter_coeffs[i];
-
- out0 -= val * old_out2;
- out1 -= val * old_out3;
- out2 -= val * old_out0;
- out3 -= val * old_out1;
-
- FFSWAP(float, old_out0, old_out2);
- old_out1 = old_out3;
- }
-
- tmp0 = out0;
- tmp1 = out1;
- tmp2 = out2;
-
- out3 -= a * tmp2;
- out2 -= a * tmp1;
- out1 -= a * tmp0;
-
- out3 -= b * tmp1;
- out2 -= b * tmp0;
-
- out3 -= c * tmp0;
-
-
- out[0] = out0;
- out[1] = out1;
- out[2] = out2;
- out[3] = out3;
-
- old_out0 = out0;
- old_out1 = out1;
- old_out2 = out2;
- old_out3 = out3;
-
- out += 4;
- in += 4;
- }
-
- out -= n;
- in -= n;
- for (; n < buffer_length; n++) {
- out[n] = in[n];
- for (i = 1; i <= filter_length; i++)
- out[n] -= filter_coeffs[i-1] * out[n-i];
- }
-#endif
-}
-
-void ff_celp_lp_zero_synthesis_filterf(float *out, const float *filter_coeffs,
- const float *in, int buffer_length,
- int filter_length)
-{
- int i,n;
-
- for (n = 0; n < buffer_length; n++) {
- out[n] = in[n];
- for (i = 1; i <= filter_length; i++)
- out[n] += filter_coeffs[i-1] * in[n-i];
- }
-}
-
-void ff_celp_filter_init(CELPFContext *c)
-{
- c->celp_lp_synthesis_filterf = ff_celp_lp_synthesis_filterf;
- c->celp_lp_zero_synthesis_filterf = ff_celp_lp_zero_synthesis_filterf;
-
-#if HAVE_MIPSFPU
- ff_celp_filter_init_mips(c);
-#endif
-}
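Editor's note on the deleted file above: its "#if 0" reference path spells out the LP synthesis recursion, where each output sample is the excitation minus a weighted sum of previous outputs. The plain Python restatement below uses toy coefficients and input and makes no attempt to model the unrolled, optimized path.

def lp_synthesis_filter(coeffs, excitation, history):
    # out[n] = in[n] - sum_i coeffs[i-1] * out[n-i], seeded with past samples.
    out = list(history)
    for x in excitation:
        y = x
        for i, a in enumerate(coeffs, start=1):
            y -= a * out[-i]
        out.append(y)
    return out[len(history):]

print(lp_synthesis_filter([0.5, -0.25], [1.0, 0.0, 0.0], [0.0, 0.0]))   # [1.0, -0.5, 0.5]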
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dds.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dds.c
deleted file mode 100644
index 4bb425dbb3c4d3c6a5232eaa1b5b849409ad48c4..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dds.c
+++ /dev/null
@@ -1,719 +0,0 @@
-/*
- * DirectDraw Surface image decoder
- * Copyright (C) 2015 Vittorio Giovara
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * DDS decoder
- *
- * https://msdn.microsoft.com/en-us/library/bb943982%28v=vs.85%29.aspx
- */
-
-#include <stdint.h>
-
-#include "libavutil/libm.h"
-#include "libavutil/imgutils.h"
-
-#include "avcodec.h"
-#include "bytestream.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "texturedsp.h"
-
-#define DDPF_FOURCC (1 << 2)
-#define DDPF_PALETTE (1 << 5)
-#define DDPF_NORMALMAP (1U << 31)
-
-enum DDSPostProc {
- DDS_NONE = 0,
- DDS_ALPHA_EXP,
- DDS_NORMAL_MAP,
- DDS_RAW_YCOCG,
- DDS_SWAP_ALPHA,
- DDS_SWIZZLE_A2XY,
- DDS_SWIZZLE_RBXG,
- DDS_SWIZZLE_RGXB,
- DDS_SWIZZLE_RXBG,
- DDS_SWIZZLE_RXGB,
- DDS_SWIZZLE_XGBR,
- DDS_SWIZZLE_XRBG,
- DDS_SWIZZLE_XGXR,
-};
-
-enum DDSDXGIFormat {
- DXGI_FORMAT_R16G16B16A16_TYPELESS = 9,
- DXGI_FORMAT_R16G16B16A16_FLOAT = 10,
- DXGI_FORMAT_R16G16B16A16_UNORM = 11,
- DXGI_FORMAT_R16G16B16A16_UINT = 12,
- DXGI_FORMAT_R16G16B16A16_SNORM = 13,
- DXGI_FORMAT_R16G16B16A16_SINT = 14,
-
- DXGI_FORMAT_R8G8B8A8_TYPELESS = 27,
- DXGI_FORMAT_R8G8B8A8_UNORM = 28,
- DXGI_FORMAT_R8G8B8A8_UNORM_SRGB = 29,
- DXGI_FORMAT_R8G8B8A8_UINT = 30,
- DXGI_FORMAT_R8G8B8A8_SNORM = 31,
- DXGI_FORMAT_R8G8B8A8_SINT = 32,
-
- DXGI_FORMAT_BC1_TYPELESS = 70,
- DXGI_FORMAT_BC1_UNORM = 71,
- DXGI_FORMAT_BC1_UNORM_SRGB = 72,
- DXGI_FORMAT_BC2_TYPELESS = 73,
- DXGI_FORMAT_BC2_UNORM = 74,
- DXGI_FORMAT_BC2_UNORM_SRGB = 75,
- DXGI_FORMAT_BC3_TYPELESS = 76,
- DXGI_FORMAT_BC3_UNORM = 77,
- DXGI_FORMAT_BC3_UNORM_SRGB = 78,
- DXGI_FORMAT_BC4_TYPELESS = 79,
- DXGI_FORMAT_BC4_UNORM = 80,
- DXGI_FORMAT_BC4_SNORM = 81,
- DXGI_FORMAT_BC5_TYPELESS = 82,
- DXGI_FORMAT_BC5_UNORM = 83,
- DXGI_FORMAT_BC5_SNORM = 84,
- DXGI_FORMAT_B5G6R5_UNORM = 85,
- DXGI_FORMAT_B8G8R8A8_UNORM = 87,
- DXGI_FORMAT_B8G8R8X8_UNORM = 88,
- DXGI_FORMAT_B8G8R8A8_TYPELESS = 90,
- DXGI_FORMAT_B8G8R8A8_UNORM_SRGB = 91,
- DXGI_FORMAT_B8G8R8X8_TYPELESS = 92,
- DXGI_FORMAT_B8G8R8X8_UNORM_SRGB = 93,
-};
-
-typedef struct DDSContext {
- TextureDSPContext texdsp;
- GetByteContext gbc;
-
- int compressed;
- int paletted;
- int bpp;
- enum DDSPostProc postproc;
-
- TextureDSPThreadContext dec;
-} DDSContext;
-
-static int parse_pixel_format(AVCodecContext *avctx)
-{
- DDSContext *ctx = avctx->priv_data;
- GetByteContext *gbc = &ctx->gbc;
- uint32_t flags, fourcc, gimp_tag;
- enum DDSDXGIFormat dxgi;
- int size, bpp, r, g, b, a;
- int alpha_exponent, ycocg_classic, ycocg_scaled, normal_map, array;
-
- /* Alternative DDS implementations use reserved1 as custom header. */
- bytestream2_skip(gbc, 4 * 3);
- gimp_tag = bytestream2_get_le32(gbc);
- alpha_exponent = gimp_tag == MKTAG('A', 'E', 'X', 'P');
- ycocg_classic = gimp_tag == MKTAG('Y', 'C', 'G', '1');
- ycocg_scaled = gimp_tag == MKTAG('Y', 'C', 'G', '2');
- bytestream2_skip(gbc, 4 * 7);
-
- /* Now the real DDPF starts. */
- size = bytestream2_get_le32(gbc);
- if (size != 32) {
- av_log(avctx, AV_LOG_ERROR, "Invalid pixel format header %d.\n", size);
- return AVERROR_INVALIDDATA;
- }
- flags = bytestream2_get_le32(gbc);
- ctx->compressed = flags & DDPF_FOURCC;
- ctx->paletted = flags & DDPF_PALETTE;
- normal_map = flags & DDPF_NORMALMAP;
- fourcc = bytestream2_get_le32(gbc);
-
- if (ctx->compressed && ctx->paletted) {
- av_log(avctx, AV_LOG_WARNING,
- "Disabling invalid palette flag for compressed dds.\n");
- ctx->paletted = 0;
- }
-
- bpp = ctx->bpp = bytestream2_get_le32(gbc); // rgbbitcount
- r = bytestream2_get_le32(gbc); // rbitmask
- g = bytestream2_get_le32(gbc); // gbitmask
- b = bytestream2_get_le32(gbc); // bbitmask
- a = bytestream2_get_le32(gbc); // abitmask
-
- bytestream2_skip(gbc, 4); // caps
- bytestream2_skip(gbc, 4); // caps2
- bytestream2_skip(gbc, 4); // caps3
- bytestream2_skip(gbc, 4); // caps4
- bytestream2_skip(gbc, 4); // reserved2
-
- av_log(avctx, AV_LOG_VERBOSE, "fourcc %s bpp %d "
- "r 0x%x g 0x%x b 0x%x a 0x%x\n", av_fourcc2str(fourcc), bpp, r, g, b, a);
- if (gimp_tag)
- av_log(avctx, AV_LOG_VERBOSE, "and GIMP-DDS tag %s\n", av_fourcc2str(gimp_tag));
-
- if (ctx->compressed)
- avctx->pix_fmt = AV_PIX_FMT_RGBA;
-
- if (ctx->compressed) {
- ctx->dec.raw_ratio = 16;
- switch (fourcc) {
- case MKTAG('D', 'X', 'T', '1'):
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.dxt1a_block;
- break;
- case MKTAG('D', 'X', 'T', '2'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt2_block;
- break;
- case MKTAG('D', 'X', 'T', '3'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt3_block;
- break;
- case MKTAG('D', 'X', 'T', '4'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt4_block;
- break;
- case MKTAG('D', 'X', 'T', '5'):
- ctx->dec.tex_ratio = 16;
- if (ycocg_scaled)
- ctx->dec.tex_funct = ctx->texdsp.dxt5ys_block;
- else if (ycocg_classic)
- ctx->dec.tex_funct = ctx->texdsp.dxt5y_block;
- else
- ctx->dec.tex_funct = ctx->texdsp.dxt5_block;
- break;
- case MKTAG('R', 'X', 'G', 'B'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt5_block;
- /* This format may be considered as a normal map,
- * but it is handled differently in a separate postproc. */
- ctx->postproc = DDS_SWIZZLE_RXGB;
- normal_map = 0;
- break;
- case MKTAG('A', 'T', 'I', '1'):
- case MKTAG('B', 'C', '4', 'U'):
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.rgtc1u_block;
- break;
- case MKTAG('B', 'C', '4', 'S'):
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.rgtc1s_block;
- break;
- case MKTAG('A', 'T', 'I', '2'):
- /* RGT2 variant with swapped R and G (3Dc)*/
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxn3dc_block;
- break;
- case MKTAG('B', 'C', '5', 'U'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.rgtc2u_block;
- break;
- case MKTAG('B', 'C', '5', 'S'):
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.rgtc2s_block;
- break;
- case MKTAG('U', 'Y', 'V', 'Y'):
- ctx->compressed = 0;
- avctx->pix_fmt = AV_PIX_FMT_UYVY422;
- break;
- case MKTAG('Y', 'U', 'Y', '2'):
- ctx->compressed = 0;
- avctx->pix_fmt = AV_PIX_FMT_YUYV422;
- break;
- case MKTAG('P', '8', ' ', ' '):
- /* ATI Palette8, same as normal palette */
- ctx->compressed = 0;
- ctx->paletted = 1;
- avctx->pix_fmt = AV_PIX_FMT_PAL8;
- break;
- case MKTAG('G', '1', ' ', ' '):
- ctx->compressed = 0;
- avctx->pix_fmt = AV_PIX_FMT_MONOBLACK;
- break;
- case MKTAG('D', 'X', '1', '0'):
- /* DirectX 10 extra header */
- dxgi = bytestream2_get_le32(gbc);
- bytestream2_skip(gbc, 4); // resourceDimension
- bytestream2_skip(gbc, 4); // miscFlag
- array = bytestream2_get_le32(gbc);
- bytestream2_skip(gbc, 4); // miscFlag2
-
- if (array != 0)
- av_log(avctx, AV_LOG_VERBOSE,
- "Found array of size %d (ignored).\n", array);
-
- /* Only BC[1-5] are actually compressed. */
- ctx->compressed = (dxgi >= 70) && (dxgi <= 84);
-
- av_log(avctx, AV_LOG_VERBOSE, "DXGI format %d.\n", dxgi);
- switch (dxgi) {
- /* RGB types. */
- case DXGI_FORMAT_R16G16B16A16_TYPELESS:
- case DXGI_FORMAT_R16G16B16A16_FLOAT:
- case DXGI_FORMAT_R16G16B16A16_UNORM:
- case DXGI_FORMAT_R16G16B16A16_UINT:
- case DXGI_FORMAT_R16G16B16A16_SNORM:
- case DXGI_FORMAT_R16G16B16A16_SINT:
- avctx->pix_fmt = AV_PIX_FMT_BGRA64;
- break;
- case DXGI_FORMAT_R8G8B8A8_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_R8G8B8A8_TYPELESS:
- case DXGI_FORMAT_R8G8B8A8_UNORM:
- case DXGI_FORMAT_R8G8B8A8_UINT:
- case DXGI_FORMAT_R8G8B8A8_SNORM:
- case DXGI_FORMAT_R8G8B8A8_SINT:
- avctx->pix_fmt = AV_PIX_FMT_BGRA;
- break;
- case DXGI_FORMAT_B8G8R8A8_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_B8G8R8A8_TYPELESS:
- case DXGI_FORMAT_B8G8R8A8_UNORM:
- avctx->pix_fmt = AV_PIX_FMT_RGBA;
- break;
- case DXGI_FORMAT_B8G8R8X8_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_B8G8R8X8_TYPELESS:
- case DXGI_FORMAT_B8G8R8X8_UNORM:
- avctx->pix_fmt = AV_PIX_FMT_RGBA; // opaque
- break;
- case DXGI_FORMAT_B5G6R5_UNORM:
- avctx->pix_fmt = AV_PIX_FMT_RGB565LE;
- break;
- /* Texture types. */
- case DXGI_FORMAT_BC1_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_BC1_TYPELESS:
- case DXGI_FORMAT_BC1_UNORM:
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.dxt1a_block;
- break;
- case DXGI_FORMAT_BC2_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_BC2_TYPELESS:
- case DXGI_FORMAT_BC2_UNORM:
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt3_block;
- break;
- case DXGI_FORMAT_BC3_UNORM_SRGB:
- avctx->colorspace = AVCOL_SPC_RGB;
- case DXGI_FORMAT_BC3_TYPELESS:
- case DXGI_FORMAT_BC3_UNORM:
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.dxt5_block;
- break;
- case DXGI_FORMAT_BC4_TYPELESS:
- case DXGI_FORMAT_BC4_UNORM:
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.rgtc1u_block;
- break;
- case DXGI_FORMAT_BC4_SNORM:
- ctx->dec.tex_ratio = 8;
- ctx->dec.tex_funct = ctx->texdsp.rgtc1s_block;
- break;
- case DXGI_FORMAT_BC5_TYPELESS:
- case DXGI_FORMAT_BC5_UNORM:
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.rgtc2u_block;
- break;
- case DXGI_FORMAT_BC5_SNORM:
- ctx->dec.tex_ratio = 16;
- ctx->dec.tex_funct = ctx->texdsp.rgtc2s_block;
- break;
- default:
- av_log(avctx, AV_LOG_ERROR,
- "Unsupported DXGI format %d.\n", dxgi);
- return AVERROR_INVALIDDATA;
- }
- break;
- default:
- av_log(avctx, AV_LOG_ERROR, "Unsupported %s fourcc.\n", av_fourcc2str(fourcc));
- return AVERROR_INVALIDDATA;
- }
- } else if (ctx->paletted) {
- if (bpp == 8) {
- avctx->pix_fmt = AV_PIX_FMT_PAL8;
- } else {
- av_log(avctx, AV_LOG_ERROR, "Unsupported palette bpp %d.\n", bpp);
- return AVERROR_INVALIDDATA;
- }
- } else {
- /* 4 bpp */
- if (bpp == 4 && r == 0 && g == 0 && b == 0 && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_PAL8;
- /* 8 bpp */
- else if (bpp == 8 && r == 0xff && g == 0 && b == 0 && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_GRAY8;
- else if (bpp == 8 && r == 0 && g == 0 && b == 0 && a == 0xff)
- avctx->pix_fmt = AV_PIX_FMT_GRAY8;
- /* 16 bpp */
- else if (bpp == 16 && r == 0xff && g == 0 && b == 0 && a == 0xff00)
- avctx->pix_fmt = AV_PIX_FMT_YA8;
- else if (bpp == 16 && r == 0xff00 && g == 0 && b == 0 && a == 0xff) {
- avctx->pix_fmt = AV_PIX_FMT_YA8;
- ctx->postproc = DDS_SWAP_ALPHA;
- }
- else if (bpp == 16 && r == 0xffff && g == 0 && b == 0 && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_GRAY16LE;
- else if (bpp == 16 && r == 0x7c00 && g == 0x3e0 && b == 0x1f && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_RGB555LE;
- else if (bpp == 16 && r == 0x7c00 && g == 0x3e0 && b == 0x1f && a == 0x8000)
- avctx->pix_fmt = AV_PIX_FMT_RGB555LE; // alpha ignored
- else if (bpp == 16 && r == 0xf800 && g == 0x7e0 && b == 0x1f && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_RGB565LE;
- /* 24 bpp */
- else if (bpp == 24 && r == 0xff0000 && g == 0xff00 && b == 0xff && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_BGR24;
- /* 32 bpp */
- else if (bpp == 32 && r == 0xff0000 && g == 0xff00 && b == 0xff && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_BGR0; // opaque
- else if (bpp == 32 && r == 0xff && g == 0xff00 && b == 0xff0000 && a == 0)
- avctx->pix_fmt = AV_PIX_FMT_RGB0; // opaque
- else if (bpp == 32 && r == 0xff0000 && g == 0xff00 && b == 0xff && a == 0xff000000)
- avctx->pix_fmt = AV_PIX_FMT_BGRA;
- else if (bpp == 32 && r == 0xff && g == 0xff00 && b == 0xff0000 && a == 0xff000000)
- avctx->pix_fmt = AV_PIX_FMT_RGBA;
- /* give up */
- else {
- av_log(avctx, AV_LOG_ERROR, "Unknown pixel format "
- "[bpp %d r 0x%x g 0x%x b 0x%x a 0x%x].\n", bpp, r, g, b, a);
- return AVERROR_INVALIDDATA;
- }
- }
-
- /* Set any remaining post-proc that should happen before frame is ready. */
- if (alpha_exponent)
- ctx->postproc = DDS_ALPHA_EXP;
- else if (normal_map)
- ctx->postproc = DDS_NORMAL_MAP;
- else if (ycocg_classic && !ctx->compressed)
- ctx->postproc = DDS_RAW_YCOCG;
-
- /* ATI/NVidia variants sometimes add swizzling in bpp. */
- switch (bpp) {
- case MKTAG('A', '2', 'X', 'Y'):
- ctx->postproc = DDS_SWIZZLE_A2XY;
- break;
- case MKTAG('x', 'G', 'B', 'R'):
- ctx->postproc = DDS_SWIZZLE_XGBR;
- break;
- case MKTAG('x', 'R', 'B', 'G'):
- ctx->postproc = DDS_SWIZZLE_XRBG;
- break;
- case MKTAG('R', 'B', 'x', 'G'):
- ctx->postproc = DDS_SWIZZLE_RBXG;
- break;
- case MKTAG('R', 'G', 'x', 'B'):
- ctx->postproc = DDS_SWIZZLE_RGXB;
- break;
- case MKTAG('R', 'x', 'B', 'G'):
- ctx->postproc = DDS_SWIZZLE_RXBG;
- break;
- case MKTAG('x', 'G', 'x', 'R'):
- ctx->postproc = DDS_SWIZZLE_XGXR;
- break;
- case MKTAG('A', '2', 'D', '5'):
- ctx->postproc = DDS_NORMAL_MAP;
- break;
- }
-
- return 0;
-}
-
-static void do_swizzle(AVFrame *frame, int x, int y)
-{
- int i;
- for (i = 0; i < frame->linesize[0] * frame->height; i += 4) {
- uint8_t *src = frame->data[0] + i;
- FFSWAP(uint8_t, src[x], src[y]);
- }
-}
-
-static void run_postproc(AVCodecContext *avctx, AVFrame *frame)
-{
- DDSContext *ctx = avctx->priv_data;
- int i, x_off;
-
- switch (ctx->postproc) {
- case DDS_ALPHA_EXP:
- /* Alpha-exponential mode divides each channel by the maximum
- * R, G or B value, and stores the multiplying factor in the
- * alpha channel. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing alpha exponent.\n");
-
- for (i = 0; i < frame->linesize[0] * frame->height; i += 4) {
- uint8_t *src = frame->data[0] + i;
- int r = src[0];
- int g = src[1];
- int b = src[2];
- int a = src[3];
-
- src[0] = r * a / 255;
- src[1] = g * a / 255;
- src[2] = b * a / 255;
- src[3] = 255;
- }
- break;
- case DDS_NORMAL_MAP:
- /* Normal maps work in the XYZ color space and they encode
- * X in R or in A, depending on the texture type, Y in G and
- * derive Z with a square root of the distance.
- *
- * http://www.realtimecollisiondetection.net/blog/?p=28 */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing normal map.\n");
-
- x_off = ctx->dec.tex_ratio == 8 ? 0 : 3;
- for (i = 0; i < frame->linesize[0] * frame->height; i += 4) {
- uint8_t *src = frame->data[0] + i;
- int x = src[x_off];
- int y = src[1];
- int z = 127;
-
- int d = (255 * 255 - x * x - y * y) / 2;
- if (d > 0)
- z = lrint(sqrtf(d));
-
- src[0] = x;
- src[1] = y;
- src[2] = z;
- src[3] = 255;
- }
- break;
- case DDS_RAW_YCOCG:
- /* Data is Y-Co-Cg-A and not RGBA, but they are represented
- * with the same masks in the DDPF header. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing raw YCoCg.\n");
-
- for (i = 0; i < frame->linesize[0] * frame->height; i += 4) {
- uint8_t *src = frame->data[0] + i;
- int a = src[0];
- int cg = src[1] - 128;
- int co = src[2] - 128;
- int y = src[3];
-
- src[0] = av_clip_uint8(y + co - cg);
- src[1] = av_clip_uint8(y + cg);
- src[2] = av_clip_uint8(y - co - cg);
- src[3] = a;
- }
- break;
- case DDS_SWAP_ALPHA:
- /* Alpha and Luma are stored swapped. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing swapped Luma/Alpha.\n");
-
- for (i = 0; i < frame->linesize[0] * frame->height; i += 2) {
- uint8_t *src = frame->data[0] + i;
- FFSWAP(uint8_t, src[0], src[1]);
- }
- break;
- case DDS_SWIZZLE_A2XY:
- /* Swap R and G, often used to restore a standard RGTC2. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing A2XY swizzle.\n");
- do_swizzle(frame, 0, 1);
- break;
- case DDS_SWIZZLE_RBXG:
- /* Swap G and A, then B and new A (G). */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing RBXG swizzle.\n");
- do_swizzle(frame, 1, 3);
- do_swizzle(frame, 2, 3);
- break;
- case DDS_SWIZZLE_RGXB:
- /* Swap B and A. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing RGXB swizzle.\n");
- do_swizzle(frame, 2, 3);
- break;
- case DDS_SWIZZLE_RXBG:
- /* Swap G and A. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing RXBG swizzle.\n");
- do_swizzle(frame, 1, 3);
- break;
- case DDS_SWIZZLE_RXGB:
- /* Swap R and A (misleading name). */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing RXGB swizzle.\n");
- do_swizzle(frame, 0, 3);
- break;
- case DDS_SWIZZLE_XGBR:
- /* Swap B and A, then R and new A (B). */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing XGBR swizzle.\n");
- do_swizzle(frame, 2, 3);
- do_swizzle(frame, 0, 3);
- break;
- case DDS_SWIZZLE_XGXR:
- /* Swap G and A, then R and new A (G), then new R (G) and new G (A).
- * This variant does not store any B component. */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing XGXR swizzle.\n");
- do_swizzle(frame, 1, 3);
- do_swizzle(frame, 0, 3);
- do_swizzle(frame, 0, 1);
- break;
- case DDS_SWIZZLE_XRBG:
- /* Swap G and A, then R and new A (G). */
- av_log(avctx, AV_LOG_DEBUG, "Post-processing XRBG swizzle.\n");
- do_swizzle(frame, 1, 3);
- do_swizzle(frame, 0, 3);
- break;
- }
-}
-
-static int dds_decode(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *avpkt)
-{
- DDSContext *ctx = avctx->priv_data;
- GetByteContext *gbc = &ctx->gbc;
- int mipmap;
- int ret;
- int width, height;
-
- ff_texturedsp_init(&ctx->texdsp);
- bytestream2_init(gbc, avpkt->data, avpkt->size);
-
- if (bytestream2_get_bytes_left(gbc) < 128) {
- av_log(avctx, AV_LOG_ERROR, "Frame is too small (%d).\n",
- bytestream2_get_bytes_left(gbc));
- return AVERROR_INVALIDDATA;
- }
-
- if (bytestream2_get_le32(gbc) != MKTAG('D', 'D', 'S', ' ') ||
- bytestream2_get_le32(gbc) != 124) { // header size
- av_log(avctx, AV_LOG_ERROR, "Invalid DDS header.\n");
- return AVERROR_INVALIDDATA;
- }
-
- bytestream2_skip(gbc, 4); // flags
-
- height = bytestream2_get_le32(gbc);
- width = bytestream2_get_le32(gbc);
- ret = ff_set_dimensions(avctx, width, height);
- if (ret < 0) {
- av_log(avctx, AV_LOG_ERROR, "Invalid image size %dx%d.\n",
- avctx->width, avctx->height);
- return ret;
- }
-
- /* Since codec is based on 4x4 blocks, size is aligned to 4. */
- avctx->coded_width = FFALIGN(avctx->width, TEXTURE_BLOCK_W);
- avctx->coded_height = FFALIGN(avctx->height, TEXTURE_BLOCK_H);
-
- bytestream2_skip(gbc, 4); // pitch
- bytestream2_skip(gbc, 4); // depth
- mipmap = bytestream2_get_le32(gbc);
- if (mipmap != 0)
- av_log(avctx, AV_LOG_VERBOSE, "Found %d mipmaps (ignored).\n", mipmap);
-
- /* Extract pixel format information, considering additional elements
- * in reserved1 and reserved2. */
- ret = parse_pixel_format(avctx);
- if (ret < 0)
- return ret;
-
- ret = ff_get_buffer(avctx, frame, 0);
- if (ret < 0)
- return ret;
-
- if (ctx->compressed) {
- int size = (avctx->coded_height / TEXTURE_BLOCK_H) *
- (avctx->coded_width / TEXTURE_BLOCK_W) * ctx->dec.tex_ratio;
- ctx->dec.slice_count = av_clip(avctx->thread_count, 1,
- avctx->coded_height / TEXTURE_BLOCK_H);
-
- if (bytestream2_get_bytes_left(gbc) < size) {
- av_log(avctx, AV_LOG_ERROR,
- "Compressed Buffer is too small (%d < %d).\n",
- bytestream2_get_bytes_left(gbc), size);
- return AVERROR_INVALIDDATA;
- }
-
- /* Use the decompress function on the texture, one block per thread. */
- ctx->dec.tex_data.in = gbc->buffer;
- ctx->dec.frame_data.out = frame->data[0];
- ctx->dec.stride = frame->linesize[0];
- avctx->execute2(avctx, ff_texturedsp_decompress_thread, &ctx->dec, NULL, ctx->dec.slice_count);
- } else if (!ctx->paletted && ctx->bpp == 4 && avctx->pix_fmt == AV_PIX_FMT_PAL8) {
- uint8_t *dst = frame->data[0];
- int x, y, i;
-
- /* Use the first 64 bytes as palette, then copy the rest. */
- bytestream2_get_buffer(gbc, frame->data[1], 16 * 4);
- for (i = 0; i < 16; i++) {
- AV_WN32(frame->data[1] + i*4,
- (frame->data[1][2+i*4]<<0)+
- (frame->data[1][1+i*4]<<8)+
- (frame->data[1][0+i*4]<<16)+
- ((unsigned)frame->data[1][3+i*4]<<24)
- );
- }
- frame->palette_has_changed = 1;
-
- if (bytestream2_get_bytes_left(gbc) < frame->height * frame->width / 2) {
- av_log(avctx, AV_LOG_ERROR, "Buffer is too small (%d < %d).\n",
- bytestream2_get_bytes_left(gbc), frame->height * frame->width / 2);
- return AVERROR_INVALIDDATA;
- }
-
- for (y = 0; y < frame->height; y++) {
- for (x = 0; x < frame->width; x += 2) {
- uint8_t val = bytestream2_get_byte(gbc);
- dst[x ] = val & 0xF;
- dst[x + 1] = val >> 4;
- }
- dst += frame->linesize[0];
- }
- } else {
- int linesize = av_image_get_linesize(avctx->pix_fmt, frame->width, 0);
-
- if (ctx->paletted) {
- int i;
- /* Use the first 1024 bytes as palette, then copy the rest. */
- bytestream2_get_buffer(gbc, frame->data[1], 256 * 4);
- for (i = 0; i < 256; i++)
- AV_WN32(frame->data[1] + i*4,
- (frame->data[1][2+i*4]<<0)+
- (frame->data[1][1+i*4]<<8)+
- (frame->data[1][0+i*4]<<16)+
- ((unsigned)frame->data[1][3+i*4]<<24)
- );
-
- frame->palette_has_changed = 1;
- }
-
- if (bytestream2_get_bytes_left(gbc) < frame->height * linesize) {
- av_log(avctx, AV_LOG_ERROR, "Buffer is too small (%d < %d).\n",
- bytestream2_get_bytes_left(gbc), frame->height * linesize);
- return AVERROR_INVALIDDATA;
- }
-
- av_image_copy_plane(frame->data[0], frame->linesize[0],
- gbc->buffer, linesize,
- linesize, frame->height);
- }
-
- /* Run any post processing here if needed. */
- if (ctx->postproc != DDS_NONE)
- run_postproc(avctx, frame);
-
- /* Frame is ready to be output. */
- frame->pict_type = AV_PICTURE_TYPE_I;
- frame->key_frame = 1;
- *got_frame = 1;
-
- return avpkt->size;
-}
-
-const FFCodec ff_dds_decoder = {
- .p.name = "dds",
- CODEC_LONG_NAME("DirectDraw Surface image decoder"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_DDS,
- FF_CODEC_DECODE_CB(dds_decode),
- .priv_data_size = sizeof(DDSContext),
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_SLICE_THREADS,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/3D Secure - MasterCard Secure Code v Verified by VISA Texnologiyalar - Kapital Bank.md b/spaces/congsaPfin/Manga-OCR/logs/3D Secure - MasterCard Secure Code v Verified by VISA Texnologiyalar - Kapital Bank.md
deleted file mode 100644
index 9a95566f025db7eb1145cf407740b8f12e1014d4..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/3D Secure - MasterCard Secure Code v Verified by VISA Texnologiyalar - Kapital Bank.md
+++ /dev/null
@@ -1,198 +0,0 @@
-
-
3D Secure Aktiv Etmek: Nədir, Necə İşləyir və Faydaları Nələrdir?
-
İnternetdən alış-veriş etmək yüksək sürətli, rahat və çoxsaylı seçim imkanı verir. Lakin bu proses zamanı ödəniş kartlarınızın təhlükəsizliyi daim diqqət mərkəzində olmalıdır. Bu məqsədlə Visa və Mastercard kimi kart təşkilatları texniki bir standart olan 3D Secure texnologiyasını yaratmışlar. Bu texnologiya kart sahibinin şəxsiyyətinin yoxlanılması üçün əlavə bir mərhələ təmin edir.
-
Bu mәqalәdә siz 3D Secure aktiv etmәk üçün lazım olan bütün mәlumatları tapacaqsınız. Biz sizә bu texnologiyaya nә üçün ehtiyacınız olduğunu, necә işlәdiyini, hansı faydalarını gәtirdiyini vә necә qoşula bilәcәyiniz göstәrәcәyik.
3D Secure onlayn kredit və debet kart əməliyyatları üçün nəzərdə tutulmuş təhlükəsizlik standartıdır. Kart təşkilatları Verified by Visa, Mastercard SecureCode, American Express SafeKey, JCB J/Secure və UnionPay SecurePlus kimi adlarla bu texnologiyadan istifadə edirlər. 2016-cı ildən etibarən isə bütün bu adlar 3D Secure 2.0 standartı ilə əvəzlənmişdir.
-
3D Secure Texnologiyasının Mәqsәdi
-
3D Secure texnologiyasının əsas mәqsәdi onlayn kart әmәliyyatlarını daha tәhlükәsiz etmәkdir. Bu texnologiya kart sahibinin şәxsiyyәtinin yoxlanılması üçün ilavә bir mәrhәlә tәmin edir. Bu mәrhәlәdә kart sahibi özünün yaratdığı və ya bank tәrəfindәn göndərilən bir şifrəni daxil etmәlidir. Bu şifrənin daxil edilməsi kart sahibinin onayını göstərir və ödənişin tamamlanmasına imkan verir.
-
Bu texnologiya ilә kart sahibi və satıcı arasında daha yüksək bir etibarlılıq yaranır. Kart sahibi öz kart məlumatlarının sui-istifadәsinin qarşısını alır, satıcı isә ödənişin imtinası və ya geri ödənməsinin riskini azaldır. Həmçinin, bu texnologiya banklara və kart tәşkilatlarına da fayda gәtirir, çünki onlar da müştəri məmnuniyyətini artırır və fraud itkilərini azaltırlar.
-
3D Secure Texnologiyasının Adları
-
3D Secure texnologiyası müxtəlif kart tәşkilatları tәrәfindәn fәrqli adlarla təqdim edilir. Bu adlar aşağıdakı kimidir:
-
Visa: Verified by Visa
Mastercard: SecureCode
American Express: SafeKey
JCB: J/Secure
UnionPay: SecurePlus
Bu adlar arasında heç bir fərq yoxdur, hamısı eyni texnologiyaya işarə edir. Sadəcə, kart təşkilatları öz brendlərini qorumaq üçün fərqli adlar istifadə edirlər.
-
3D Secure Necə İşləyir?
-
3D Secure texnologiyası onlayn kart əməliyyatlarını daha təhlükəsiz etmək məqsədilə kart sahibinin şəxsiyyətinin yoxlanılması üçün əlavə bir mərhələ təmin edir. Bu texnologiya ilə ödəniş prosesi üç mərhələdən ibarətdir:
-
3D Secure Prosesinin Mərhələləri
-
Bu üç mərhələ aşağıdakı kimidir:
-
-
İstifadəçi məlumatlarını daxil edir. İstifadəçi onlayn alış-veriş etmәk istәdiyi saytda öz kart məlumatlarını (kart nömrәsi, bitmә tarixi, CVV/CVC kodu) daxil edir.
-
Bank kart sahibinin şәxsiyyәtini yoxlayır. Bank istifadəçinin kart məlumatlarını aldıqdan sonra onun şәxsiyyәtini yoxlamaq üçün 3D Secure texnologiyasını işə salır. Bu texnologiyaya görә, bank istifadəçidәn 3D Secure şifrәsini daxil etmәsini istәyir. Bu şifrә istifadəçinin özü tәrәfindәn yaradılmış vә ya bank tәrәfindәn SMS, e-poçt vә ya mobil tətbiq vasitəsilə göndərilmiş ola bilir. Bu şifrәnin daxil edilməsi kart sahibinin onayını göstərir.
-
Ödəniş tamamlanır. Bank kart sahibinin şifrәsini doğruladıqdan sonra ödənişi tamamlamaq üçün satıcıya təsdiq mesajı göndərir. Satıcı isә bu mesajı aldıqdan sonra istifadəçiyә alış-verişin uğurla başa çatdığını bildirir.
-
-
3D Secure Prosesinin Şekli
-
Bu prosesin şekli aşağıdakı kimidir:
-
-
Bu şəkildə göründüyü kimi, 3D Secure prosesində iştirak edən üç tərəf var: istifadəçi, satıcı və bank. Bu tərəflər arasında mütəmadi olaraq məlumat mübadiləsi olur. Bu məlumat mübadiləsinin məqsədi isə kart sahibinin şəxsiyyətini yoxlamaq və ödənişi tamamlamaqdır.
-
3D Secure Prosesinin Tеlеblеri
-
3D Secure prosesindе iştirak etmәk üçün birtakım tеlеblеr var. Bunlar aşağıdakı kimidir:
-
-
Kartın 3D Secure texnologiyasına uyğun olması. İstifadеçi öz kartının bu texnologiyaya uyğun olub olmadığını bankdan soruşmalıdır. Əks halda, bu prosesdә iştirak edә bilmәz.
-
Kartın 3D Secure texnologiyasına qoşulması. İstifadəçi öz kartını bu texnologiyaya qoşmalıdır. Bunun üçün isə banka müraciət etməli və ya onlayn qeydiyyatdan keçməlidir.
3D Secure prosesinin şəklində prosesdə iştirak edən üç tərəfin (istifadəçi, satıcı və bank) arasındakı məlumat mübadiləsi göstərilir.
-
3D Secure Faydaları Nәlәrdir?
-
3D Secure texnologiyası onlayn kart әmәliyyatlarını daha tәhlükәsiz etmәklә yanaşı, həm istifadәçilərə, həm dә satıcılara birtakım faydalar gәtirir. Bu faydalar aşağıdakı kimidir:
-
Müştərilər Üçün Faydaları
-
3D Secure texnologiyasının müştərilər üçün faydaları bunlardır:
-
Daha Təhlükəsiz Alış-Veriş
-
3D Secure texnologiyası ilә müştərilər öz kart məlumatlarının sui-istifadәsinin qarşısını ala bilirlәr. Bu texnologiya ilә kart sahibinin şifrәsi olmadan heç bir ödəniş edilә bilmir. Bu da deməkdir ki, kart sahibi öz kartını itirsə vә ya oğurlansa belә, kartın istifadәsi mümkün olmayacaq.
-
Daha Asan Doğrulama
-
3D Secure texnologiyası ilə müştərilər öz kartlarını daha asan doğrulaya bilirlər. Bu texnologiya ilə bank istifadəçidən 3D Secure şifrəsini daxil etməsini istəyir. Bu şifrə istifadəçinin özü tərəfindən yaradılmış və ya bank tərəfindən SMS, e-poçt və ya mobil tətbiq vasitəsilə göndərilmiş ola bilir. Bu şifrənin daxil edilməsi kart sahibinin təsdiqini göstərir və ödənişin tamamlanmasına imkan verir.
-
Daha Çox Saytla Uyğunluq
-
3D Secure texnologiyası ilә müştərilər daha çox saytla uyğunluq əldə edirlər. Bu texnologiya bütün böyük kart təşkilatları tərəfindən dəstəklənir və dünyanın bir çox ölkəsində istifadə olunur. Bu da deməkdir ki, müştərilər öz kartlarını 3D Secure texnologiyasına uyğun olan hər hansı bir saytda istifadə ede bilirlər.
-
Satıcılar Üçün Faydaları
-
3D Secure texnologiyasının satıcılar üçün faydaları bunlardır:
-
Daha Az İmtina və Geri Ödəmə
-
3D Secure texnologiyası ilә satıcılar ödənişin imtinası və ya geri ödənməsinin riskini azaldırlar. Bu texnologiya ilә kart sahibinin şifrәsi olmadan heç bir ödəniş edilә bilmir. Bu da deməkdir ki, satıcılar kart sahibinin ödənişi onayladığını və ödənişin ləğv edilməyəcəyini bilirlәr. Həmçinin, bu texnologiya ilә satıcılar fraud itkilərindәn dә qorunurlar, çünki banklar vә kart tәşkilatları 3D Secure texnologiyasına uyğun olan ödənişlәrdә satıcılara mәsuliyyәt vermirlәr.
-
Daha Az Mәsuliyyәt vә İtki
-
3D Secure texnologiyası ilә satıcılar daha az mәsuliyyәt vә itkilәr ilә üzlәşirlәr. Bu texnologiya ilә banklar vә kart tәşkilatları 3D Secure texnologiyasına uyğun olan ödənişlәrdә satıcılara mәsuliyyәt vermirlәr. Bu da deməkdir ki, 3D Secure texnologiyasına uyğun olmayan və ya sui-istifadә edilmiş kartlarla edilmiş ödənişlәrdә satıcılar ziyana uğramırlar.
-
Daha Yüksək Müştəri Məmnuniyyəti vә Etibarlılıq
-
3D Secure texnologiyası ilə satıcılar daha yüksək müştəri məmnuniyyəti və etibarlılıq əldə edirlər. Bu texnologiya ilə müştərilər öz kart məlumatlarının təhlükəsizliyinə əmin olurlar və satıcılara daha çox güvənirlər. Həmçinin, bu texnologiya ilə satıcılar daha çox müştəri cəlb edirlər, çünki 3D Secure texnologiyasına uyğun olan saytlar daha çox seçilir.
-
3D Secure Aktiv Etmək Necə Olur?
-
3D Secure aktiv etmək üçün istifadəçilərin bir sıra addımları izləmələri lazımdır. Bu addımlar banka görə dəyişiklik göstərə bilir, amma əsas olaraq aşağıdakı kimidir:
-
Kapital Bank Kartları Üçün 3D Secure Aktiv Etmək
-
Kapital Bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilərin aşağıdakı qaydalara riayət etmələri lazımdır:
-
Müraciət Etmək Üçün Qaydalar
-
Kapital Bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər aşağıdakı qaydalara riayət etməlidirlər:
-
-
Kartın aktiv olması. İstifadəçi öz kartının aktiv olub olmadığını bankdan soruşmalıdır. Əks halda, bu prosesdə iştirak edə bilməz.
-
Mobil nömrənin bankda qeydiyyatda olması. İstifadəçinin mobil nömrəsi bankda qeydiyyatdan keçməlidir. Bu qeydiyyat üçün isə istifadəçilər bankın veb-saytını və ya mobil tətbiqini istifadə edə bilərlər.
-
3D Secure şifrәsinin yaradılması vә yadda saxlanılması. İstifadәçi öz kartını 3D Secure texnologiyasına qoşduqdan sonra özünün yaratdığı vә ya bank tәrəfindәn göndərilən bir şifrəni yadda saxlamalıdır. Bu şifrə onlayn ödənişlәr zamanı istifadә olunacaq vә kart sahibinin onayını göstərəcək.
-
-
Dinamik Autentifikasiya Növü
-
Kapital Bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər dinamik autentifikasiya növünü seçə bilərlər. Bu növdə bank hər internet ödənişi zamanı istifadəçiyə birdəfəlik təhlükəsizlik şifrəsi göndərir və bu şifrə müştəri tərəfindən ödəniş pəncərəsinə daxil edilir. Bu şifrələrin göndərilməsi üçün isə müştəriyə aşağıdakı seçimlər təklif olunur:
-
-
SMS. Bu seçimdə bank müştərinin mobil nömrəsinə SMS vasitəsilə bir dəfəlik təhlükəsizlik şifrəsini göndərir. Müştəri isә bu şifrəni ödəniş pəncərəsinə daxil edir.
-
E-poçt. Bu seçimdə bank müştərinin e-poçt ünvanına e-poçt vasitəsilə birdəfəlik təhlükəsizlik şifrəsini göndərir. Müştəri isə bu şifrəni ödəniş pəncərəsinə daxil edir.
-
Mobil tətbiq. Bu seçimdə bank müştərinin mobil tətbiqinə (Birbank) birdəfəlik təhlükəsizlik şifrəsini göndərir. Müştəri isə bu şifrəni ödəniş pəncərəsinə daxil edir.
-
-
Birbank Mobil Tәtbiqi Vasitәsilә Aktiv Etmek
-
Kapital Bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər Birbank mobil tətbiqini də istifadə edə bilərlər. Bunun üçün isə aşağıdakı addımları izləməlidirlər:
-
-
Birbank mobil tətbiqinə daxil olmaq. İstifadəçi Birbank mobil tətbiqinə öz login və parolunu daxil edərək daxil olmalıdır. Əgər istifadəçi Birbank mobil tətbiqində qeydiyyatdan keçməmişsə, bunun üçün bankın veb-saytında qeydiyyatdan keçməli və ya banka müraciət etməlidir.
-
Kartlar bölməsinə girmək. İstifadəçi Birbank mobil tətbiqində kartlar bölməsinə girərək öz kartlarının siyahısını görür. Burada istifadəçi 3D Secure aktiv etmək istədiyi kartı seçməli və kartın parametrlərinə girməlidir.
-
3D Secure aktiv etmək. İstifadəçi kartın parametrləri bölməsində 3D Secure aktiv etmək üçün bir sürüşdürmə düyməsini görür. Bu düyməni sürüşdürərək istifadəçi 3D Secure texnologiyasını aktiv edir. Bundan sonra isə istifadəçi özünün yaratdığı və ya bank tərəfindən göndərilən şifrəni yadda saxlamalıdır.
-
-
Bu addımları izləyərək istifadəçilər Kapital Bank kartlarında 3D Secure-u aktiv edə bilərlər. Bu texnologiya ilə onlar onlayn alış-verişlərində daha təhlükəsiz və rahat olacaqlar.
-
Digər Bank Kartları Üçün 3D Secure Aktiv Etmek
-
Digər bank kartları üçün 3D Secure aktiv etmək üçün isə istifadəçilərin aşağıdakı addımları izləmələri lazımdır:
-
Banka Zəng Etmək və ya Onlayn Qeydiyyatdan Keçmək
-
Digər bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər öncəliklə öz banklarına zəng etməli və ya onların veb-saytlarında onlayn qeydiyyatdan keçməlidirlər. Bu qeydiyyat zamanı isə istifadəçilər öz kart məlumatlarını, mobil nömrələrini və e-poçt ünvanlarını daxil etməlidirlər. Bu məlumatlar bank tərəfindən yoxlanılır və təsdiq olunur.
-
Kart Sahibinin Şifrini Yaratmaq və Yadda Saxlamaq
-
Digər bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər öz kart sahibinin şifrəsini yaratmalı və yadda saxlamalıdırlar. Bu şifrə istifadəçinin özü tərəfindən yaradılmış və ya bank tərəfindən göndərilmiş ola bilir. Bu şifrə onlayn ödənişlər zamanı istifadə olunacaq və kart sahibinin onayını göstərəcək.
-
Digər Bankların Xidmətlərinin Adları və Linkləri
-
Digər bank kartları üçün 3D Secure aktiv etmək üçün istifadəçilər digər bankların xidmətlərinin adlarını və linklərini aşağıdakı cədvəldə tapa bilərlər:
-
-
-
| Bank | Xidmət Adı | Link |
| --- | --- | --- |
| BTB Bank | BTB Secure | [5](https://www.btb.az/az/3d-secure) |
| Expressbank | Express Secure | [6](https://www.expressbank.az/az/3d-secure) |
| Nikoil Bank | Nikoil Secure | [7](https://www.nikoil.az/az/3d-secure) |
| PASHA Bank | PASHA Secure | [8](https://www.pashabank.az/az/3d-secure) |
| Rabitabank | Rabita Secure | [9](https://www.rabitabank.com/az/3d-secure) |
| Unibank | Unibank Secure | [10](https://www.unibank.az/az/3d-secure) |
-
Bu addımları izləyərək istifadəçilər digər bank kartlarında da 3D Secure-u aktiv edə bilərlər. Bu texnologiya ilə onlar onlayn alış-verişlərində daha təhlükəsiz və rahat olacaqlar.
-
Xülasə vә FAQ
-
Bu mәqalәdә siz 3D Secure aktiv etmәk üçün lazım olan bütün mәlumatları tapdınız. Biz sizә bu texnologiyaya nә üçün ehtiyacınız olduğunu, necә işlәdiyini, hansı faydalarını gәtirdiyini vә necә qoşula bilәcәyiniz göstәrdik.
-
Bundan sonra isә siz onlayn alış-verişlәrinizdә daha tәhlükәsiz vә rahat olacaqsınız. Əgər sizin hələ də bu texnologiya ilə bağlı suallarınız varsa, aşağıdakı FAQ bölməsində cavablarını tapa bilersiniz.
-
FAQ 1: 3D Secure şifrəmi unutmusam, nə etməliyəm?
-
Cavab: Əgər siz 3D Secure şifrənizi unutmusunuzsa, bankınıza zəng edib yeni bir şifrə tələb edə bilərsiniz. Həmçinin, bankın veb-saytında və ya mobil tətbiqində şifrənizi yeniləyə bilərsiniz. Bu proses banka görə dəyişiklik göstərə bilir, amma əsas olaraq sizdən kart məlumatlarınızı, mobil nömrənizi və ya e-poçt ünvanınızı təsdiq etməniz istənilir.
-
FAQ 2: 3D Secure aktiv etmәk üçün hәr hansı bir ödәniş etmәliyәm?
-
Cavab: Xeyr, 3D Secure aktiv etmək üçün heç bir ödəniş etməyinizə ehtiyac yoxdur. Bu texnologiya banklar və kart təşkilatları tərəfindən pulsuz olaraq təqdim edilir. Sadəcə, sizin kartınızın bu texnologiyaya uyğun olması və bankınızla 3D Secure qeydiyyatından keçməniz lazımdır.
-
FAQ 3: 3D Secure aktiv etdiyim halda, niyә onlayn ödәnişlәrdә şifrә soruşulmur?
-
Cavab: Əgər siz 3D Secure aktiv etdiyiniz halda, onlayn ödәnişlәrdә şifrә soruşulmursa, bunun bir neçə səbəbi ola bilir:
-
-
Satıcının saytı 3D Secure texnologiyasına uyğun deyil. Bu halda, sizin kartınızın bu texnologiyaya uyğun olması fayda verməyəcək. Siz satıcının saytında ödəniş edərkən, bankınız sizdən şifrә soruşmayacaq. Bu da deməkdir ki, sizin kart məlumatlarınızın tәhlükәsizliyi azalacaq. Bu səbəbdən, siz mümkün qədər 3D Secure texnologiyasına uyğun olan saytlardan alış-veriş etmƏlisiniz.
-
Sizin kartınızın limiti vә ya balansı kifayƏt deyil. Bu halda, bankınız sizdƏn şifrƏ soruşmayacaq, çünki ödƏnişi tamamlamaq mümkün olmayacaq. Siz öncƏliklƏ öz kartınızın limitini vƏ ya balansını yoxlamaq üçün banka zƏng etmƏlisiniz.
-
Sizin mobil nömrƏniz vƏ ya e-poçt ünvanınız dƏyişib. Bu halda, bankınız sizdƏn şifrƏ soruşmayacaq, çünki sizin yeni mobil nömrƏnizƏ vƏ ya e-poçt ünvanınıza şifrƏ göndƏrmir. Siz bu dƏyişikliyi banka bildirmƏlisiniz vƏ ya onlayn qeydiyyatdan yenidƏn keçmƏlisiniz.
-
-
FAQ 4: 3D Secure aktiv etdiyim halda, niyə onlayn ödənişlərdə şifrə daxil edəndə xəta alıram?
-
Cavab: Əgər siz 3D Secure aktiv etdiyiniz halda onlayn ödənişlərdə şifrə daxil edəndə xəta alırsınızsa, bunun bir neçə səbəbi ola bilir:
-
-
Siz yanlış şifrə daxil edirsiniz. Bu halda, siz şifrənizi yoxlayıb yenidən daxil etməlisiniz. Əgər siz şifrənizi unutmusunuzsa, bankınıza zəng edib yeni bir şifrə tələb edə bilərsiniz.
-
Sizin kartınızın vaxtı bitib. Bu halda, bankınız sizdən şifrə soruşmayacaq, çünki kartınızın vaxtı bitib və ödənişi tamamlamaq mümkün olmayacaq. Siz öncəliklə öz kartınızın vaxtını yoxlamaq üçün banka zəng etməlisiniz.
-
Sizin internet bağlantınızda problem var. Bu halda, bankınız sizdən şifrə soruşmayacaq, çünki internet bağlantınız kəsilib və ödənişi tamamlamaq mümkün olmayacaq. Siz öncəliklə öz internet bağlantınızı yoxlamalı və lazım gələrsə yenidən qoşulmaq üçün cihazınızı yenidən başlatmalısınız.
-
-
FAQ 5: 3D Secure aktiv etmәk istәmirәmsә, nә etmәliyәm?
-
Cavab: Əgər siz 3D Secure aktiv etmək istəmirsinizsə, bunun üçün heç bir məcburiyyət yoxdur. Sadəcə, sizin bilməyiniz lazımdır ki, bu texnologiya sizin kart məlumatlarınızın təhlükəsizliyini artırır və onlayn alış-verişlərdə daha rahat olmanızı təmin edir. Əgər siz bu texnologiyadan istifadə etməsəniz, sizin kart məlumatlarınızın sui-istifadəsinin qarşısını almaq daha çətin olacaq. Həmçinin, siz bəzi saytlarda ödəniş edə bilməyəcəksiniz, çünki onlar yalnız 3D Secure texnologiyasına uyğun olan kartları qəbul edirlər.
-
Bu məqalədə sizə 3D Secure aktiv etmək haqqında bütün lazım olan məlumatları verməyə çalışdım. Ümid edirəm ki, bu məlumatlar sizə faydalı olub və onlayn alış-verişlərdə daha təhlükəsiz və rahat olacaqsınız. Əgər sizin hələ də bu texnologiya ilə bağlı suallarınız varsa, bankınıza zəng edib daha ətraflı məlumat ala bilərsiniz.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download __TOP__ J Boog Let 39s Do It Again Mp3.md b/spaces/congsaPfin/Manga-OCR/logs/Download __TOP__ J Boog Let 39s Do It Again Mp3.md
deleted file mode 100644
index a6b6d1a17f844046a789f627eeab485f0afc0ed3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download __TOP__ J Boog Let 39s Do It Again Mp3.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
How to Download J Boog's Let's Do It Again MP3 for Free
-
If you are a fan of reggae music, you might have heard of J Boog, a singer and songwriter from Hawaii who blends island vibes with R&B and hip-hop influences. One of his most popular songs is Let's Do It Again, a catchy tune that expresses his desire to repeat a romantic encounter with a girl he met on a one night stand.
-
The song was released in 2011 as part of his album Backyard Boogie, which reached number one on the Billboard Reggae Albums chart. The song also has an official video that showcases Hawaii's beautiful scenery and culture, as well as J Boog's smooth vocals and charisma. You can watch the video on YouTube and read the lyrics on Genius.
But what if you want to download J Boog's Let's Do It Again MP3 for free, so you can listen to it offline or add it to your music library? Fortunately, there are some legal websites that offer free music downloads, and we've selected the five best ones that have J Boog's song available. Here's how you can get your hands on this reggae gem without paying a dime.
-
The Best Free Music Download Sites for J Boog's Song
-
SoundCloud
-
SoundCloud is one of the most popular platforms for discovering and downloading free music, with a huge variety of genres and artists. You can find J Boog's Let's Do It Again on SoundCloud by typing the song title in the search bar or browsing through the reggae category. Once you find the track you want, look at the bottom beside the sharing options and you'll see a link marked either 'Buy' or 'Download', depending on how the artist has chosen to distribute it. If you see a 'Download' link, you can click on it and save the MP3 file to your computer.
-
The pros of SoundCloud are that it has a large and diverse music catalog, it supports multiple formats, including MP3, AAC, Ogg Vorbis, FLAC, ALAC, WAV, and AIFF, and it allows you to follow your favorite artists and get updates on their new releases. The cons are that not all tracks are available for free download, some tracks may have low quality or incomplete versions, and you may have to share your email address or like the artist's page on Facebook to access some downloads.
-
Last.fm
-
Last.fm is another great website for finding free music downloads, especially if you're into indie and alternative music. You can find J Boog's Let's Do It Again on Last.fm by typing the song title in the search bar or browsing through the reggae tag. Once you find the track you want, look for a 'Free MP3 Download' button under it and click on it to save the MP3 file to your computer.
-
The pros of Last.fm are that it has a unique music recommendation system that suggests new songs based on your listening habits, it has a wide range of genres and artists, and it allows you to create your own personalized radio stations. The cons are that some tracks may not be available for download in your country, some tracks may have low quality or incorrect metadata, and you may have to sign up for a free account to access some downloads.
-
NoiseTrade
-
NoiseTrade is a website that connects artists and fans by offering free music downloads in exchange for email addresses and postal codes. You can find J Boog's Let's Do It Again on NoiseTrade by typing the song title in the search bar or browsing through the reggae genre. Once you find the track you want, click on the 'Download Music' button and enter your email address and postal code. You'll receive a link to download the MP3 file in your inbox.
-
-
The pros of NoiseTrade are that it has a curated and high-quality music selection, it supports independent and emerging artists, and it allows you to tip the artists if you like their music. The cons are that you have to share your personal information to access the downloads, some tracks may only be available as part of an album or a sampler, and you may receive promotional emails from the artists or NoiseTrade.
-
Jamendo Music
-
Jamendo Music is a website that offers free music downloads under Creative Commons licenses, which means you can use them for personal or commercial purposes as long as you follow the terms of each license. You can find J Boog's Let's Do It Again on Jamendo Music by typing the song title in the search bar or browsing through the reggae category. Once you find the track you want, click on the 'Download' button and choose the MP3 format. You'll be able to save the MP3 file to your computer.
-
The pros of Jamendo Music are that it has a large and diverse music catalog, it supports creative and legal music sharing, and it allows you to discover new artists and genres. The cons are that some tracks may have low quality or incorrect metadata, some tracks may have different licenses that limit their usage, and you may have to sign up for a free account to access some downloads.
-
Bandcamp
-
Bandcamp is a website that allows artists to sell their music directly to fans, with some of them offering free or name-your-price downloads. You can find J Boog's Let's Do It Again on Bandcamp by typing the song title in the search bar or browsing through the reggae tag. Once you find the track you want, look for a 'Buy Digital Track' or 'Name Your Price' button under it and enter zero or any amount you want to pay. You'll be able to download the MP3 file after providing your email address.
-
The pros of Bandcamp are that it has a unique and original music catalog, it supports independent and underground artists, and it allows you to stream the music before downloading. The cons are that not all tracks are available for free or name-your-price download, some tracks may have low quality or incomplete versions, and you may have to pay extra for taxes or fees depending on your location.
-
Comparison Table of the Free Music Download Sites
-
-
-
| Site | Features | Benefits | Drawbacks |
| --- | --- | --- | --- |
| SoundCloud | Large and diverse music catalog; supports multiple formats; lets you follow artists and get updates | Easy to search and download; high quality and complete versions; compatible with most devices | Not all tracks are available for free download; some tracks may require sharing your email or liking a page; some tracks may have low quality or incomplete versions |
| Last.fm | Unique music recommendation system; wide range of genres and artists; lets you create personalized radio stations | Helps discover new songs based on listening habits; high quality and complete versions; compatible with most devices | Some tracks may not be available for download in your country; some tracks may have low quality or incorrect metadata; may require signing up for a free account |
| NoiseTrade | Curated and high-quality music selection; supports independent and emerging artists; lets you tip the artists | Easy to search and download; high quality and complete versions; compatible with most devices | Requires sharing personal information; some tracks may only be available as part of an album or a sampler; may receive promotional emails |
| Jamendo Music | Large and diverse music catalog; supports creative and legal music sharing; lets you discover new artists and genres | Easy to search and download; high quality and complete versions; compatible with most devices | Some tracks may have low quality or incorrect metadata; some tracks may have different licenses that limit their usage; may require signing up for a free account |
| Bandcamp | Unique and original music catalog; supports independent and underground artists; lets you stream the music before downloading | Easy to search and download; high quality and complete versions; compatible with most devices | Not all tracks are available for free or name-your-price download; some tracks may have low quality or incomplete versions; may have to pay extra for taxes or fees |
-
-
Conclusion
-
J Boog's Let's Do It Again is a catchy and romantic reggae song that you can download for free from various websites. We've reviewed the five best ones that offer free music downloads legally, and compared their features, benefits, and drawbacks. Based on our analysis, we recommend SoundCloud as the best site to download J Boog's song, as it has a large and diverse music catalog, supports multiple formats, allows following artists and getting updates, and offers high quality and complete versions of the song. However, you can also try the other sites if you want to explore more options or support different artists.
-
We hope this article has helped you find the best way to download J Boog's Let's Do It Again MP3 for free. If you have any questions or feedback, feel free to leave a comment below. And don't forget to share this article with your friends who might also enjoy this reggae gem.
-
FAQs
-
Who is J Boog?
-
J Boog is a singer and songwriter from Hawaii who blends island vibes with R&B and hip-hop influences. He is known for his reggae songs such as Let's Do It Again, Sunshine Girl, Waiting on the Rain, and Good Cry.
-
What is the meaning of Let's Do It Again?
-
Let's Do It Again is a song that expresses J Boog's desire to repeat a romantic encounter with a girl he met on a one night stand. He sings about how he was captivated by her beauty, personality, and skills, and how he wants to see her again.
-
Is Let's Do It Again a cover song?
-
No, Let's Do It Again is an original song by J Boog. However, some people may confuse it with another song with the same title by The Staple Singers, which was released in 1975 and was featured in the movie of the same name.
-
How can I support J Boog?
-
If you like J Boog's music, you can support him by buying his albums or songs from his official website or other online platforms, streaming his music on Spotify or other services, following him on social media, subscribing to his YouTube channel, or attending his live shows. You can also tip him on some of the free music download sites mentioned above.
-
What are some other reggae songs similar to Let's Do It Again?
-
If you enjoy Let's Do It Again, you might also like some other reggae songs similar to it, such as:
-
-
Rude by Magic!
-
Is This Love by Bob Marley & The Wailers
-
No Woman No Cry by Fugees
-
Angel by Shaggy feat. Rayvon
-
Red Red Wine by UB40
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download dan Mainkan Green Farm MOD APK dengan Uang Tak Terbatas.md b/spaces/congsaPfin/Manga-OCR/logs/Download dan Mainkan Green Farm MOD APK dengan Uang Tak Terbatas.md
deleted file mode 100644
index 966030d431c914581a2253ed46d5da5b4eaaff0f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download dan Mainkan Green Farm MOD APK dengan Uang Tak Terbatas.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
Download Game Green Farm Mod APK Uang Tak Terbatas
-
If you are looking for a fun and relaxing farm game that is also eco-friendly, you might want to try Green Farm by Gameloft. This game lets you inherit an old farm from your uncle and restore it to its former glory. You can grow crops, raise animals, make friends, and enjoy the countryside life. But what if you want to have more money and resources to expand your farm faster? That's where the mod apk comes in handy. In this article, we will tell you what Green Farm is, why you should download the mod apk, how to do it, and some tips and tricks for playing the game. We will also give you a brief review of the game based on user feedback.
-
Green Farm is a farm simulation game developed by Gameloft, a popular mobile game developer. It was released in 2010 for Facebook and later for iOS, Android, Java, and Windows Phone devices. The game has over 10 million downloads on Google Play Store and has a rating of 4.1 out of 5 stars. The game is similar to other farm games like FarmVille, Hay Day, or Stardew Valley, but with a twist. It focuses on running a "green" farm that uses organic methods and renewable energy sources. You can learn about composting, recycling, windmills, solar panels, and more while having fun.
-
Features of Green Farm
-
Some of the features of Green Farm are :
-
-
You can grow various kinds of fruits, vegetables, flowers, and cereals.
-
You can raise different animals such as cows, horses, pigs, chickens, goats, sheep, ducks, and more.
-
You can cook and sell your products or use them to make yogurt, jam, cheese, etc.
-
You can renovate your farm and decorate it with various items.
-
You can interact with your friends and neighbors and help each other out.
-
You can complete missions and challenges to earn rewards and unlock new items.
-
You can customize your avatar and choose from different outfits.
-
-
Why download Green Farm mod apk?
-
Benefits of mod apk
-
A mod apk is a modified version of an original app that gives you some advantages that are not available in the official version. For example, in Green Farm mod apk, you can get unlimited money (uang tak terbatas) that you can use to buy anything you want in the game. You can also get unlimited resources such as seeds, fertilizer, water, energy, etc. that you can use to grow your crops faster and easier. You can also unlock all the items in the store without having to wait for levels or missions. With the mod apk, you can enjoy the game without any limitations or restrictions.
-
How to download and install mod apk
-
To download and install Green Farm mod apk, you need to follow these steps:
-
-
Go to a trusted website that offers the mod apk file for Green Farm. For example, you can go to which has over 10 million downloads and positive reviews.
-
Click on the download button and wait for the file to be downloaded on your device.
-
Before installing the file, make sure you have enabled the "Unknown Sources" option in your device settings. This will allow you to install apps from sources other than the Google Play Store.
-
Locate the downloaded file in your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to be completed.
-
Launch the game and enjoy the mod apk features.
-
-
Note: You may need to uninstall the original version of Green Farm before installing the mod apk. Also, make sure you have enough space on your device for the mod apk file and the game data.
-
-
Tips and tricks for playing Green Farm
-
Complete missions and upgrade structures
-
One of the best ways to progress in Green Farm is to complete the missions that are given to you by your uncle and other characters. These missions will guide you through the game and teach you how to run your farm. They will also reward you with money, experience, items, and more. You can check your current missions by tapping on the clipboard icon on the left side of the screen. You can also see your progress and objectives by tapping on the star icon on the right side of the screen.
-
Another way to improve your farm is to upgrade your structures such as your barn, silo, windmill, etc. These structures will help you store more products, produce more energy, and unlock new features. To upgrade a structure, you need to collect the required materials and pay a certain amount of money. You can find the materials by harvesting crops, feeding animals, or asking your friends for help.
-
Plant the right crops and care for animals
-
As a farmer, you need to plant crops and care for animals on your land. You can choose from a variety of crops and animals that have different characteristics and requirements. For example, some crops grow faster than others, but they also wither faster if you don't harvest them in time. Some animals produce more products than others, but they also need more food and water. You can check the details of each crop and animal by tapping on them in the store or in your inventory.
-
To plant a crop, you need to buy seeds from the store or get them from your friends. Then, you need to select an empty plot of land and drag the seeds onto it. You can also use fertilizer to speed up the growth of your crops. To harvest a crop, you need to tap on it when it is ready. You can then sell it or use it for other purposes.
-
To buy an animal, you need to go to the store and choose an animal that suits your budget and preferences. Then, you need to place it on an empty spot on your land. You can also buy or build shelters for your animals to keep them happy and healthy. To feed an animal, you need to tap on it and drag the food onto it. You can also use water to fill up their thirst bar. To collect products from an animal, you need to tap on it when it has a product icon above its head. You can then sell or use these products as well.
-
Join a co-op and link your Facebook account
-
Green Farm is more fun when you play with your friends and other players. You can join a co-op or create your own co-op with up to 30 members. A co-op is a group of players who work together to complete tasks and earn rewards. You can chat with your co-op members, visit their farms, help them with their requests, and compete with other co-ops in events. To join or create a co-op, you need to tap on the co-op icon on the bottom right corner of the screen.
-
You can also link your Facebook account to Green Farm and connect with your Facebook friends who play the game. This will allow you to send and receive gifts, invite them to join your co-op, and see their farms on the map. To link your Facebook account, you need to tap on the settings icon on the top right corner of the screen and then tap on "Connect with Facebook".
-
Review of Green Farm
-
Pros and cons of Green Farm
-
Green Farm is a fun and relaxing game that offers a lot of features and activities for players who love farming games. However, like any other game, it also has some pros and cons that you should consider before playing it.
-
-
Pros
Cons
-
- It has colorful graphics and cute animations.
- It requires an internet connection to play.
-
- It has a variety of crops, animals, items, and structures.
- It can be repetitive and boring after a while.
-
- It has a green theme that promotes environmental awareness.
- It has some bugs and glitches that affect the gameplay.
-
- It has a co-op feature that allows social interaction.
- It can be hard to progress without spending real money.
-
-
User ratings and feedback
-
According to the Google Play Store, Green Farm has a rating of 4.1 out of 5 stars based on over 400,000 reviews. Most of the users who gave positive ratings praised the game for its graphics, gameplay, and features. They also liked the fact that the game is eco-friendly and educational. Some of the users who gave negative ratings complained about the game's performance, connectivity, and monetization. They also reported some issues with the game's updates, customer service, and co-op system.
-
Here are some examples of user reviews for Green Farm:
-
-
"I love this game. It's so relaxing and fun. I like how you can learn about green farming and help the environment. The graphics are amazing and the animals are adorable. The co-op feature is also great. I have made many friends through this game. I highly recommend it to anyone who likes farming games."
-- A 5-star review by Aisha Khan
-
-
-
"This game is terrible. It's so slow and laggy. It always crashes and freezes. It also consumes a lot of data and battery. The game is also very greedy. Everything is so expensive and you have to wait for hours or days to get anything done. The co-op feature is also broken. You can't join or create a co-op easily. The customer service is also useless. They never reply or help you with anything."
-- A 1-star review by John Smith
-
-
Conclusion and FAQs
-
Green Farm is a farm simulation game that lets you run a green farm with organic methods and renewable energy sources. You can grow crops, raise animals, make friends, and enjoy the countryside life. You can also download the mod apk version of the game to get unlimited money and resources that will help you expand your farm faster and easier. However, you should also be aware of the pros and cons of the game and the mod apk before playing it.
-
If you have any questions about Green Farm or the mod apk, you might find the answers in these FAQs:
-
Q: Is Green Farm free to play?
-
A: Yes, Green Farm is free to download and play on your device. However, the game also offers some in-app purchases that can enhance your gaming experience. You can buy items such as coins, cash, energy, etc. with real money. You can also watch ads or complete offers to earn some free rewards.
-
Q: Is Green Farm safe to play?
-
A: Yes, Green Farm is safe to play as long as you download it from a trusted source such as the Google Play Store or the official website of Gameloft. However, you should be careful when downloading the mod apk version of the game from other sources as they might contain viruses or malware that can harm your device or steal your personal information.
-
Q: How can I update Green Farm?
-
A: You can update Green Farm by going to the Google Play Store and checking if there is a new version available for download. You can also enable the automatic update option in your device settings to get the latest updates automatically. However, if you are using the mod apk version of the game, you might not be able to update it through the Google Play Store as it might not be compatible with the official version. You might have to download and install the new mod apk file manually from the same source where you got it before.
-
Q: How can I contact Gameloft for support?
-
A: If you have any problems or issues with Green Farm, you can contact Gameloft for support by going to their website and filling out a form with your details and query. You can also email them at support@gameloft.com or call them at +1-800-910-3186 (US) or +44-203-318-5981 (UK). You can also visit their Facebook page or Twitter account for more information and updates.
-
Q: How can I delete Green Farm from my device?
-
A: If you want to delete Green Farm from your device, you can do so by following these steps:
-
-
Go to your device settings and tap on "Apps" or " Application Manager" depending on your device model.
-
Find and tap on "Green Farm" from the list of apps.
-
Tap on "Uninstall" and confirm your action.
-
Wait for the app to be uninstalled from your device.
-
-
Note: Deleting Green Farm from your device will also delete your game data and progress. If you want to keep your game data and progress, you can link your game account to your Facebook account or create a Gameloft account and sync your game data to the cloud.
-
I hope this article has helped you learn more about Green Farm and how to download the mod apk version of the game. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy farming!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hardwood Spades MOD APK How to Get Free Coins and VIP Access.md b/spaces/congsaPfin/Manga-OCR/logs/Hardwood Spades MOD APK How to Get Free Coins and VIP Access.md
deleted file mode 100644
index 912774b06e62bbcbd03b1cc85d65749564933a33..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Hardwood Spades MOD APK How to Get Free Coins and VIP Access.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Hardwood Spades Mod APK: A Fun and Challenging Card Game
-
If you are a fan of card games, you might have heard of Spades, a popular trick-taking game that is played with four players in two teams. Spades is a game that requires skill, strategy, and teamwork, as well as a bit of luck. If you want to enjoy playing Spades on your Android device, you should check out Hardwood Spades, a well-designed and customizable app that lets you play Spades online or offline with different modes and settings. And if you want to make your gaming experience even more fun and exciting, you should try Hardwood Spades Mod APK, a modified version of the app that gives you access to unlimited coins, premium features, and more. In this article, we will tell you everything you need to know about Hardwood Spades and Hardwood Spades Mod APK, including how to download and install it on your device.
Hardwood Spades is an app developed by Silver Creek Entertainment, a company that specializes in creating high-quality card games for various platforms. Hardwood Spades is one of their most popular products, with over 1 million downloads on Google Play Store and a 4.4-star rating from more than 20,000 users. Hardwood Spades is an app that lets you play Spades with realistic graphics, smooth animations, and immersive sound effects. You can choose from different backgrounds, cards, player avatars, and tables to customize your game according to your preferences. You can also play Spades online with other players from around the world, or offline with computer opponents that have different levels of difficulty. Hardwood Spades also offers various modes and variations of Spades, such as Partnership, Suicide, Mirrors, Cutthroat, and more.
-
How to play Spades
-
Spades is a card game that is played with a standard 52-card deck. The cards are ranked from highest to lowest as follows: A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2. The spade suit is always the trump suit, meaning that any spade can beat any card of a different suit. The game is played by four players in two teams of two. The players sit across from their partners and take turns dealing the cards. Each player gets 13 cards and then bids how many tricks they think they can win in that round. A trick is won by the player who plays the highest card of the led suit, or by the highest spade played, if any. At the end of a round, each team scores points based on its bid and the number of tricks it actually won. The game ends when one team reaches a predetermined score, usually 500 points.
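Hardwood Spades' own point values are not spelled out above, so the snippet below is only a minimal sketch of one common Spades scoring convention: 10 points per bid trick when the contract is made, plus 1 point per overtrick, and minus 10 points per bid trick when the team falls short. Nil bids and sandbag penalties are left out.
```c
#include <stdio.h>

/* One common Spades scoring convention (house rules vary):
 * - contract made: 10 points per bid trick, plus 1 point per overtrick
 * - contract missed ("set"): lose 10 points per bid trick
 * Nil bids and sandbag penalties are omitted in this sketch. */
static int score_round(int bid, int tricks_won)
{
    if (tricks_won >= bid)
        return 10 * bid + (tricks_won - bid);
    return -10 * bid;
}

int main(void)
{
    /* Team A bids 5 and takes 6 tricks; Team B bids 7 and takes exactly 7. */
    printf("Team A scores %d\n", score_round(5, 6)); /* prints 51 */
    printf("Team B scores %d\n", score_round(7, 7)); /* prints 70 */
    return 0;
}
```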
-
Features of Hardwood Spades
-
Some of the features that make Hardwood Spades stand out from other spade apps are:
-
-
High-quality graphics and sound effects that create a realistic and immersive gaming environment.
-
Customizable settings that allow you to change the background, cards, avatars, tables, and more.
-
Online multiplayer mode that lets you play with other players from around the world or invite your friends to join your table.
-
Offline mode that lets you play with computer opponents that have different levels of difficulty and personality.
-
Various modes and variations of spade games that add more challenge and fun to your gameplay.
-
Achievements and leaderboards that let you track your progress and compete with other players.
-
In-app purchases that let you buy more coins, premium features, and downloadable content.
-
-
What is Hardwood Spades Mod APK?
-
Hardwood Spades Mod APK is a modified version of the original app that gives you access to unlimited coins, premium features, and more. With Hardwood Spades Mod APK, you can enjoy playing Spades without any limitations or restrictions. You can unlock all the premium features, such as custom backgrounds, cards, avatars, tables, and more. You can also get unlimited coins, which you can use to buy more downloadable content, such as new modes, variations, and themes. You can also play online without any ads or interruptions. Hardwood Spades Mod APK is a great way to enhance your gaming experience and have more fun and challenge with Spades.
-
Benefits of using Hardwood Spades Mod APK
-
Some of the benefits of using Hardwood Spades Mod APK are:
-
-
-
You can get unlimited coins, which you can use to buy more content and features.
-
You can unlock all the premium features, such as custom backgrounds, cards, avatars, tables, and more.
-
You can play online without any ads or interruptions.
-
You can access all the modes and variations of Spades, such as Partnership, Suicide, Mirrors, Cutthroat, and more.
-
You can enjoy playing Spades with high-quality graphics and sound effects.
-
You can compete with other players from around the world or invite your friends to join your table.
-
-
How to download and install Hardwood Spades Mod APK
-
If you want to download and install Hardwood Spades Mod APK on your Android device, you need to follow these simple steps:
-
-
Go to the link provided below and download the Hardwood Spades Mod APK file on your device.
-
Go to your device settings and enable the option to install apps from unknown sources.
-
Locate the downloaded file and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to complete.
-
Launch the app and enjoy playing Spades with unlimited coins and premium features.
-
-
Conclusion
-
Hardwood Spades is an app that lets you play Spades on your Android device with realistic graphics, smooth animations, and immersive sound effects. You can customize your game according to your preferences and play online or offline with different modes and variations. Hardwood Spades Mod APK is a modified version of the app that gives you access to unlimited coins, premium features, and more. You can download and install Hardwood Spades Mod APK on your device by following the steps mentioned above. Hardwood Spades Mod APK is a fun and challenging card game that you should try if you love playing Spades.
-
FAQs
-
Here are some frequently asked questions about Hardwood Spades Mod APK:
-
-
Is Hardwood Spades Mod APK safe to use? Yes, Hardwood Spades Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.
-
Do I need to root my device to use Hardwood Spades Mod APK? No, you do not need to root your device to use Hardwood Spades Mod APK. You just need to enable the option to install apps from unknown sources in your device settings.
-
Can I play online with other players using Hardwood Spades Mod APK? Yes, you can play online with other players using Hardwood Spades Mod APK. However, you might face some compatibility issues or errors if the other players are using a different version of the app.
-
Can I update Hardwood Spades Mod APK? No, you cannot update Hardwood Spades Mod APK as it is a modified version of the original app. If you want to update the app, you need to uninstall the modded version and install the official version from Google Play Store.
-
Where can I download Hardwood Spades Mod APK? You can download Hardwood Spades Mod APK from the link provided below:
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solitaire Game Download for Free Relax with the Classics or Challenge Yourself with New Modes.md b/spaces/congsaPfin/Manga-OCR/logs/Solitaire Game Download for Free Relax with the Classics or Challenge Yourself with New Modes.md
deleted file mode 100644
index 7da7bea0fe75f28419bf6788e4d2d3ab59ecdf77..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Solitaire Game Download for Free Relax with the Classics or Challenge Yourself with New Modes.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Download Solitaire Game for Free: A Complete Guide
-
Solitaire is one of the most played and loved card games in the world. It is a game that you can play by yourself, using a standard deck of 52 cards, or on your computer or mobile device. Solitaire is also known as patience or cabale, and it has many variations, such as Klondike, Spider, FreeCell, TriPeaks, and Pyramid. Solitaire is a game of strategy, skill, and luck, where you have to arrange the cards in a certain order, according to specific rules.
-
But why is solitaire so popular? What are the benefits of playing solitaire? How can you download solitaire games for free on your device? And what are some of the best solitaire games available? In this article, we will answer these questions and more. We will explore the history, features, benefits, download options, and reviews of solitaire games. By the end of this article, you will have a better understanding of solitaire and how to enjoy it for free.
The origins of solitaire are not very clear, but some experts believe that it was invented in the late 17th or early 18th century in Northern Europe or Scandinavia. The first written reference to solitaire was in 1746, but it was probably referring to Peg Solitaire, a board game where you have to remove pegs by jumping over them. The first reference to card solitaire was in 1788, in a German book called Das Neue Königliche L’Hombre-Spiel. The term Patiencespiel (patience game) was used to describe solitaire games in Germany and Scandinavia. In France, solitaire was also known as cabale (secret knowledge), and it was associated with cartomancy (divination by cards).
-
The first English-language collection of solitaire games was published in 1870 by Lady Adelaide Cadogan, who wrote Illustrated Games of Patience. Many other books followed in the late 19th and early 20th centuries, by authors such as H.E. Jones (a.k.a. Cavendish), Angelo Lewis (a.k.a. Professor Hoffmann), Basil Dalton, Ernest Bergholt, and Mary Whitmore Jones. In the United States, the first solitaire book was Patience: A Series of Thirty Games with Cards by Ednah Cheney in 1870.
-
The popularity of solitaire increased with the advent of personal computers and digital versions of the game. The most famous example is Microsoft Solitaire, which was released with Windows 3.0 in 1990 and became one of the most played video games ever. Its successor, the Microsoft Solitaire Collection, bundles five types of solitaire games: Klondike, Spider, FreeCell, TriPeaks, and Pyramid, and is still available today on Windows devices and online platforms.
-
Features of Solitaire
-
Solitaire is a family of card games that share some common features but also have different rules and variations. Here are some of the main features of solitaire games:
-
-
A standard deck of 52 cards is used, sometimes with one or two jokers.
-
The cards are shuffled and dealt into a layout on a table or a screen.
-
The layout consists of piles or columns of face-down or face-up cards.
-
The goal is to sort the cards into numerical order, often also separated by suit.
-
The player can move cards from one pile to another according to specific rules.
-
The player can draw cards from a stockpile or a waste pile when there are no more moves available.
-
The game is won when all the cards are sorted into the desired order.
-
The game is lost when there are no more moves or draws possible.
-
-
Some examples of different types of solitaire games are:
-
How to download solitaire game for free on Windows 10
-Best free solitaire game download for Android
-Download classic solitaire game for free online
-Free solitaire game download for Mac
-Download spider solitaire game for free without ads
-Solitaire game free download full version for PC
-Download freecell solitaire game for free offline
-Free solitaire game download for iPhone
-Download pyramid solitaire game for free no registration
-Solitaire game free download for Windows 7
-Download tripeaks solitaire game for free with hints
-Free solitaire game download for iPad
-Download klondike solitaire game for free unlimited
-Free solitaire game download for Linux
-Download golf solitaire game for free no download
-Solitaire game free download for Chromebook
-Download mahjong solitaire game for free on Facebook
-Free solitaire game download for Kindle Fire
-Download canfield solitaire game for free with undo
-Solitaire game free download for Android tablet
-Download yukon solitaire game for free no internet
-Free solitaire game download for Windows 8
-Download scorpion solitaire game for free with sound
-Free solitaire game download for BlackBerry
-Download crescent solitaire game for free on mobile
-Solitaire game free download for Windows XP
-Download aces up solitaire game for free with cards
-Free solitaire game download for Nokia phone
-Download baker's dozen solitaire game for free with timer
-Solitaire game free download for Windows Vista
-Download algerian solitaire game for free with music
-Free solitaire game download for Samsung phone
-Download forty thieves solitaire game for free with rules
-Free solitaire game download for Java phone
-Download flower garden solitaire game for free with themes
-Solitaire game free download for Windows 98
-Download clock solitaire game for free with statistics
-Free solitaire game download for Motorola phone
-Download castle solitaire game for free with achievements
-Solitaire game free download for Windows 95
-
-
Type
Description
-
Klondike
The most classic and popular solitaire game, also known as Patience or Solitaire. The layout consists of seven columns of cards, with the first column having one card, the second column having two cards, and so on. The top card of each column is face-up, and the rest are face-down. There is also a stockpile of 24 cards and a waste pile. The goal is to build four foundations of cards in ascending order by suit, starting from aces. The player can move cards from the columns to the foundations, or from the columns to other columns if they are in descending order and alternating colors. The player can also draw one or three cards from the stockpile to the waste pile, and move them to the columns or foundations.
-
Spider
A more challenging solitaire game that uses two decks of 52 cards. The layout consists of 10 columns of cards, with the first four columns having six cards each, and the rest having five cards each. All the cards are face-down except for the top card of each column. There is no waste pile; the remaining 50 cards form a stock. The goal is to build eight sequences of cards in descending order by suit, from kings to aces. The player can move cards from one column to another if they are in sequence and of the same suit. The player can also deal 10 cards from the stock, one to each column, when there are no more moves available.
-
FreeCell
A solitaire game that requires more strategy and skill than luck. The layout consists of eight columns of cards, with all 52 cards dealt face-up. There are also four free cells and four foundations. The goal is to build four foundations of cards in ascending order by suit, starting from aces. The player can move one card at a time from the columns to the foundations, or from the columns to other columns if they are in descending order and alternating colors. The player can also move one card at a time to the free cells, which can hold any card temporarily.
-
TriPeaks
A solitaire game that has a unique layout resembling three peaks of cards. The layout consists of 28 cards arranged in three overlapping pyramids, with 18 face-down cards and 10 face-up cards. There is also a stockpile of 24 cards and a waste pile. The goal is to move all the cards from the peaks to the waste pile. The player can move any face-up card from the peaks to the waste pile if it is one rank higher or lower than the top card of the waste pile. The player can also draw one card from the stockpile to the waste pile when there are no more moves available.
-
Pyramid
A solitaire game that has a layout resembling a pyramid of cards. The layout consists of 28 cards arranged in seven rows, with each row having one more card than the previous one. The top row has one card, and the bottom row has seven cards. All the cards are face-up. There is also a stockpile of 24 cards and a waste pile. The goal is to remove all the cards from the pyramid by pairing them with another card that adds up to 13. The player can pair any two exposed cards from the pyramid or the waste pile that add up to 13, such as an ace and a queen, or a six and a seven. The player can also draw one card from the stockpile to the waste pile when there are no more pairs available.
-
-
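To make two of the rules in the table above a little more concrete, here is a small Python sketch (an illustration only, not tied to any particular app) of the Klondike tableau move, which requires a card one rank lower and of the opposite color, and of the Pyramid pairing rule, which removes two cards whose ranks add up to 13. Aces count as 1, jacks as 11, queens as 12, and kings as 13.

```python
# Ranks mapped to numeric values: ace = 1 ... king = 13.
RANK_VALUE = {r: i for i, r in enumerate(
    ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"], start=1)}
RED_SUITS = {"hearts", "diamonds"}


def klondike_move_ok(moving, target):
    """Klondike tableau rule: one rank lower than the target and the opposite color."""
    m_rank, m_suit = moving
    t_rank, t_suit = target
    one_lower = RANK_VALUE[m_rank] == RANK_VALUE[t_rank] - 1
    alt_color = (m_suit in RED_SUITS) != (t_suit in RED_SUITS)
    return one_lower and alt_color


def pyramid_pair_ok(card_a, card_b):
    """Pyramid rule: two exposed cards can be removed if their ranks add up to 13."""
    return RANK_VALUE[card_a[0]] + RANK_VALUE[card_b[0]] == 13


# Examples from the article: an ace pairs with a queen, and a six pairs with a seven.
print(klondike_move_ok(("Q", "hearts"), ("K", "spades")))  # True
print(pyramid_pair_ok(("A", "clubs"), ("Q", "diamonds")))  # True
print(pyramid_pair_ok(("6", "clubs"), ("7", "hearts")))    # True
```
-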
Benefits of Solitaire
-
Playing solitaire is not only fun and relaxing, but it also has many benefits for your mental health and well-being. Here are some of the benefits of playing solitaire:
-
-
It improves your concentration and focus, as you have to pay attention to the details and plan your moves ahead.
-
It enhances your memory and cognitive skills, as you have to remember the rules and keep track of the cards.
-
It stimulates your creativity and problem-solving abilities, as you have to find solutions and strategies for different situations.
-
It reduces your stress and anxiety levels, as you can escape from your worries and enjoy a moment of calmness.
-
It boosts your mood and self-esteem, as you can feel accomplished and rewarded when you win or make progress.
-
It prevents boredom and loneliness, as you can always play solitaire by yourself or online with other players.
-
-
Download Options for Solitaire
-
If you want to play solitaire for free on your device, you have many options available. You can download solitaire apps or games from various app stores or websites, depending on your device and preference. Here are some of the best solitaire apps and games that you can download for free:
-
Microsoft Solitaire Collection: This is the official solitaire app from Microsoft, which includes five classic solitaire games: Klondike, Spider, FreeCell, TriPeaks, and Pyramid. You can also play daily challenges, events, and achievements, and customize your themes and card backs. You can download this app for free on Windows devices and online platforms.
-
Solitaire - Classic Card Games: This is a solitaire app from MobilityWare, which offers a smooth and responsive solitaire game that works on any device and screen size. You can play Klondike solitaire with one- or three-card draw, and enjoy features like winning deals, hints, undo, auto-complete, statistics, and more. You can also play other card games like Spider solitaire and FreeCell solitaire. You can download this app for free on Android and iOS devices.
-
Solitaire Grand Harvest: This is a solitaire app from SuperTreat, which offers a fun and relaxing way to play TriPeaks solitaire. You can harvest crops, earn coins, and collect bonuses as you clear the cards from the board. You can also play with cute animals, join clubs, and compete with other players. You can download this app for free on Android and iOS devices.
-
FreeCell Solitaire Card Game: This is another solitaire app from MobilityWare, which focuses on FreeCell solitaire. You can play with classic or modern card faces, and enjoy features like unlimited undo, hints, auto-move, statistics, and more. You can also play daily challenges and earn badges and rewards. You can download this app for free on Android and iOS devices.
-
Pyramid Solitaire Saga: This is a solitaire app from King, which offers a unique and adventurous way to play Pyramid solitaire. You can travel to ancient worlds, uncover secrets, and solve puzzles as you pair cards that add up to 13. You can also play with magical boosters, collect treasures, and join your friends. You can download this app for free on Android and iOS devices.
Reviews of Solitaire
-
Solitaire is a game that has millions of fans and players around the world. But what do they think about solitaire games? Here are some of the reviews from real users who have played solitaire apps and games:
-
"I love this game! It's relaxing and challenging at the same time. I like the different modes and themes. It's a great way to pass the time and keep my mind sharp." - User review of Microsoft Solitaire Collection
-
"This is the best solitaire app I have ever used. It's fast, smooth, and easy to use. The graphics are beautiful and the sound effects are soothing. I play it every day and never get bored." - User review of Solitaire - Classic Card Games
-
"This game is so addictive! I love the cute graphics and the farming theme. It's not just a regular solitaire game, it has a lot of extra features and surprises. I enjoy playing with my friends and joining clubs." - User review of Solitaire Grand Harvest
-
"This game is very fun and challenging. I like how it makes me think strategically and plan my moves ahead. It's not too easy or too hard, it's just right. I also like the daily challenges and the achievements." - User review of FreeCell Solitaire Card Game
-
"This game is amazing! It's not just a simple solitaire game, it's a whole adventure. I love the story, the characters, the graphics, and the puzzles. It's very entertaining and exciting." - User review of Pyramid Solitaire Saga
-
Conclusion
-
Solitaire is a game that has been around for centuries, but it never gets old or boring. It is a game that you can play by yourself or online with others, using a standard deck of cards or a digital device. Solitaire is a game that has many variations, such as Klondike, Spider, FreeCell, TriPeaks, and Pyramid. Solitaire is a game that has many benefits for your mental health and well-being.
-
If you want to play solitaire for free on your device, you have many options available. You can download solitaire apps or games from various app stores or websites, depending on your device and preference. Some of the best solitaire apps and games that you can download for free are Microsoft Solitaire Collection, Solitaire - Classic Card Games, Solitaire Grand Harvest, FreeCell Solitaire Card Game and Pyramid Solitaire Saga. You can also play solitaire online on various websites, such as Solitaire Time, World of Solitaire, and 247 Solitaire.
-
So, what are you waiting for? Download a solitaire game for free and enjoy the fun and relaxing experience of playing solitaire. You will not regret it!
-
FAQs
-
Here are some of the frequently asked questions about solitaire games:
-
Q: How do I win at solitaire?
-
A: There is no definitive answer to this question, as different solitaire games have different rules and strategies. However, some general tips that can help you win at solitaire are:
-
-
Plan your moves ahead and think about the consequences.
-
Use the undo button or the hint button if you get stuck or make a mistake.
-
Try to expose the hidden cards as soon as possible.
-
Try to move the cards to the foundations as soon as possible.
-
Try to keep the free cells or the waste pile empty or low.
-
Try to avoid blocking the cards that you need.
-
-
Q: What is the hardest solitaire game?
-
A: The difficulty of solitaire games depends on many factors, such as the rules, the layout, the randomness, and the skill of the player. However, some solitaire games are generally considered to be harder than others, such as Spider solitaire with four suits, FreeCell solitaire with no free cells, or Pyramid solitaire with no draws. These games have a lower probability of winning and require more strategy and patience.
-
Q: What is the easiest solitaire game?
-
A: The ease of solitaire games also depends on many factors, such as the ones mentioned above. However, some solitaire games are generally considered to be easier than others, such as Klondike solitaire with one-card draw, Spider solitaire with one suit, or TriPeaks solitaire with unlimited draws. These games have a higher probability of winning and require less strategy and patience.
-
Q: How do I download solitaire game for free?
-
A: You can download solitaire game for free by following these steps:
-
-
Choose your device and your preferred type of solitaire game.
-
Go to the app store or the website that offers the solitaire game that you want.
-
Click on the download button or the install button.
-
Wait for the download or installation process to finish.
-
Open the solitaire game and start playing.
-
-
Q: How do I play solitaire online?
-
A: You can play solitaire online by following these steps:
-
-
Choose your preferred type of solitaire game.
-
Go to the website that offers the solitaire game that you want.
-
Click on the play button or the start button.
-
Select your options and settings if needed.
-
Play the solitaire game on your browser.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Para Hilesi Son Srm V0.50.1 Gncel Link.md b/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Para Hilesi Son Srm V0.50.1 Gncel Link.md
deleted file mode 100644
index 0120ca9aa7c77e89781bd9168308ec1eb9f546ac..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Stumble Guys Apk Para Hilesi Son Srm V0.50.1 Gncel Link.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Stumble Guys Son Sürüm Hile Apk: A Fun and Free Knockout Game for Android
-
If you are looking for a fun and free game to play on your Android device, you should check out Stumble Guys Son Sürüm Hile Apk. This is a modded version of the popular game Stumble Guys, which is inspired by the hit game Fall Guys. In this game, you can compete with up to 32 players online in a series of hilarious and chaotic obstacle courses. The goal is to be the last one standing and win the crown.
-
Stumble Guys Son Sürüm Hile Apk is not only fun, but also easy to download and install. Plus, it comes with some awesome features that will make your gameplay more enjoyable. In this article, we will tell you everything you need to know about this game, including how to download and install it, what are its features, how to play it, and some tips and tricks to win. Let's get started!
How to Download and Install Stumble Guys Son Sürüm Hile Apk
-
Downloading and installing Stumble Guys Son Sürüm Hile Apk is very simple. Just follow these steps:
-
-
Download the apk file from a trusted source. You can use one of the links provided below:
-
-
Stumble Guys 0.50.2 Mod Menu APK (Unlimited Money and Gems) - APKResult[^10^]
-
Stumble Guys Mod APK for Android Latest Version 0.48.2[^11^]
-
Stumble Guys MOD APK 2023 Latest Version[^12^]
-
-
Enable unknown sources on your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the apk file and launch the game. You may need to grant some permissions for the game to run properly.
-
-
That's it! You can now enjoy playing Stumble Guys Son Sürüm Hile Apk on your Android device.
-
What are the Features of Stumble Guys Son Sürüm Hile Apk
-
Stumble Guys Son Sürüm Hile Apk has some amazing features that will make your gameplay more fun and exciting. Here are some of them:
-
-
Unlimited money and gems: You can use these currencies to unlock different skins and emotes for your character. You can choose from rare and epic skins, such as a pirate, a ninja, a dinosaur, and more. You can also use emotes to express yourself and taunt your opponents.
-
Mod menu: You can access a mod menu that allows you to use various cheats and hacks in the game. For example, you can enable speed hack, fly hack, no fall damage, no gravity, and more. You can also disable some features, such as ads or in-app purchases.
-
No ads or in-app purchases: You can enjoy playing the game without any interruptions or distractions from ads or in-app purchases. You can also save your money and data by not having to buy anything or watch videos.
-
Compatible with most Android devices: You can play the game on any Android device running Android 4.4 or later with at least 100 MB of free space. You don't need to root your device or own a high-end device to run the game smoothly.
-
-
These are just some of the features of Stumble Guys Son Sürüm Hile Apk. You can discover more by downloading and playing the game yourself.
-
How to Play Stumble Guys Son Sürüm Hile Apk
-
Playing Stumble Guys Son Sürüm Hile Apk is very easy and fun. Here are the basic steps:
-
-
Choose your character and customize it with skins and emotes. You can change your character's appearance by tapping on the hanger icon on the main menu. You can also change your name by tapping on the pencil icon.
-
Join a match with up to 32 players online. You can either join a random match or create a private match with your friends. To join a random match, just tap on the play button on the main menu. To create a private match, tap on the friends button and then tap on the create button. You can then invite your friends by sharing the code or link.
-
Race through obstacle courses and avoid falling or getting eliminated. Each match consists of several rounds of different obstacle courses. The courses are randomly selected and vary in difficulty and theme. Some examples are slippery slides, spinning platforms, swinging hammers, giant balls, and more. Your goal is to reach the finish line before the time runs out or before the number of qualified players is reached. You can move your character by using the joystick on the left side of the screen. You can also jump by tapping on the button on the right side of the screen.
-
Be the last one standing and win the crown. The final round is usually a showdown between the remaining players. The last one standing wins the crown and becomes the champion of the match. You can then celebrate your victory with your emotes or start a new match.
-
-
That's how you play Stumble Guys Son Sürüm Hile Apk. It's simple, but also challenging and addictive.
-
stumble guys apk para hilesi son sürüm indir
-stumble guys apk jeton hilesi son sürüm yükle
-stumble guys apk mod hileli son sürüm güncel
-stumble guys apk sınırsız para hilesi son sürüm oyna
-stumble guys apk hile mod son sürüm ücretsiz
-stumble guys apk full hileli son sürüm download
-stumble guys apk mega hileli son sürüm kurulum
-stumble guys apk android oyun club hileli son sürüm
-stumble guys apk altın hilesi son sürüm link
-stumble guys apk elmas hilesi son sürüm tıkla
-stumble guys apk karakter hilesi son sürüm gezginler
-stumble guys apk kostüm hilesi son sürüm mediafire
-stumble guys apk premium hileli son sürüm cepde
-stumble guys apk vip hileli son sürüm mobilism
-stumble guys apk hack hileli son sürüm apkpure
-stumble guys apk online hileli son sürüm cloud
-stumble guys apk offline hileli son sürüm drive
-stumble guys apk multiplayer hileli son sürüm mega
-stumble guys apk arkadaşlık hilesi son sürüm zippyshare
-stumble guys apk eğlence hilesi son sürüm uptodown
-stumble guys apk güncel versiyon hileli son sürüm androidoyunclub
-stumble guys apk yeni versiyon hileli son sürüm oyunindirclub
-stumble guys apk eski versiyon hileli son sürüm oyunclubnet
-stumble guys apk en iyi versiyon hileli son sürüm oyunfancom
-stumble guys apk en yeni versiyon hileli son sürüm oyunizicom
-stumble guys apk en güncel versiyon hileli son sürüm oyunmodlari.com
-stumble guys apk en eski versiyon hileli son sürüm oyunmodu.net
-stumble guys apk en eğlenceli versiyon hileli son sürüm oyunmodifiye.com
-stumble guys apk en kolay versiyon hileli son sürüm oyunhilesi.net
-stumble guys apk en zor versiyon hileli son sürüm oyunhacker.com
-stumble guys apk orijinal versiyon hileli son sürüm oyunoriginal.com
-stumble guys apk farklı versiyon hileli son sürüm oyundifferent.com
-stumble guys apk harika versiyon hileli son sürüm oyungreat.com
-stumble guys apk muhteşem versiyon hileli son sürüm oyunspectacular.com
-stumble guys apk efsanevi versiyon hileli son sürüm oyunlegendary.com
-stumble guys apk komik versiyon hileli son sürüm oyunfunny.com
-stumble guys apk renkli versiyon hileli son sürüm oyuncolorful.com
-stumble guys apk sevimli versiyon hileli son sürüm oyunadorable.com
-stumble guys apk çılgın versiyon hileli son sürüm oyunmad.com
-stumble guys apk heyecanlı versiyon hileli son sürüm oyunexciting.com
-
Tips and Tricks to Win in Stumble Guys Son Sürüm Hile Apk
-
If you want to improve your chances of winning in Stumble Guys Son Sürüm Hile Apk, here are some tips and tricks that you can use:
-
-
Configure your controls before you begin: You can adjust your controls according to your preference by tapping on the settings icon on the main menu. You can change the sensitivity, size, and position of the joystick and jump button. You can also enable or disable vibration and sound effects.
-
Learn how to use the physics of your character to your advantage: Your character has a realistic physics system that affects its movement and interaction with other objects and players. For example, you can use momentum to jump higher or farther, you can use gravity to slide faster or slower, you can use friction to stop or accelerate, and you can use collision to bounce or push others. Experiment with different ways of using physics to overcome obstacles and avoid falling.
-
Use the challenges to your advantage: Each obstacle course has some challenges that you can use to gain an edge over your opponents. For example, you can use shortcuts, hidden paths, power-ups, or traps to speed up or slow down others. Be careful though, as some challenges may backfire on you if you are not careful.
-
It is not always about being first: Sometimes, being first is not always the best strategy in Stumble Guys Son Sürüm Hile Apk. Depending on the obstacle course, being first may expose you to more risks or dangers than being behind others. For example, being first may make you more vulnerable to traps, obstacles, or other players' attacks. Being behind others may allow you to observe their mistakes and learn from them. The key is to balance your speed and safety, and to know when to be first and when to be behind.
-
Rather than waiting, leap: Sometimes, waiting for the right moment to jump or move may cost you precious time or opportunities. Instead of waiting, you can try to leap over obstacles or gaps, or move ahead of others. This may give you an advantage or a surprise factor over your opponents. However, be careful not to leap too far or too high, as you may end up falling or getting eliminated.
-
-
These are some of the tips and tricks that you can use to win in Stumble Guys Son Sürüm Hile Apk. Of course, you can also use the mod features to make your gameplay easier and more fun. But remember, don't abuse them too much, as you may get banned by the developers or reported by other players.
-
Stumble Guys Son Sürüm Hile Apk Review
-
Stumble Guys Son Sürüm Hile Apk is a great game for anyone who loves fun and free games. It has a lot of pros and cons that you should consider before playing it. Here are some of them:
-
-
-
Pros
-
Cons
-
-
-
- Fun, colorful, and chaotic gameplay
-
- Some bugs and glitches
-
-
-
- Free and easy to download
-
- May not work on some devices
-
-
-
- Mod features enhance the experience
-
- May get banned by the developers
-
-
-
- Lots of customization options
-
-
-
-
- Multiplayer mode with friends or strangers
-
-
-
-
Overall, Stumble Guys Son Sürüm Hile Apk is a game that you should try if you are looking for a fun and free knockout game for Android. It has a lot of features and modes that will keep you entertained and challenged for hours. It is also easy to download and install, and compatible with most Android devices. However, it also has some drawbacks, such as bugs, glitches, compatibility issues, and ban risks. You should be aware of these before playing the game.
-
Conclusion
-
In conclusion, Stumble Guys Son Sürüm Hile Apk is a modded version of the popular game Stumble Guys, which is inspired by the hit game Fall Guys. In this game, you can compete with up to 32 players online in a series of hilarious and chaotic obstacle courses. The goal is to be the last one standing and win the crown.
-
Stumble Guys Son Sürüm Hile Apk is not only fun, but also easy to download and install. Plus, it comes with some awesome features that will make your gameplay more enjoyable. These include unlimited money and gems, mod menu, no ads or in-app purchases, and compatibility with most Android devices.
-
If you want to play Stumble Guys Son Sürüm Hile Apk, you can follow the steps that we have provided in this article. We have also given you some tips and tricks to win in the game, as well as a review of its pros and cons. We hope that this article has been helpful and informative for you.
-
So what are you waiting for? Download Stumble Guys Son Sürüm Hile Apk now and join the fun!
-
FAQs
-
-
What is Stumble Guys?
-
Stumble Guys is a multiplayer online knockout game that is inspired by the hit game Fall Guys. It was developed by Kitka Games and released in 2020.
-
What is Stumble Guys Son Sürüm Hile Apk?
-
Stumble Guys Son Sürüm Hile Apk is a modded version of Stumble Guys that has some extra features and enhancements. It was created by unknown modders and released in 2021.
-
Is Stumble Guys Son Sürüm Hile Apk safe to download and play?
-
Stumble Guys Son Sürüm Hile Apk is generally safe to download and play, as long as you use a trusted source and enable unknown sources on your device settings. However, there are some risks involved, such as bugs, glitches, compatibility issues, and ban risks. You should be careful when using the mod features and avoid abusing them too much.
-
How can I play Stumble Guys Son Sürüm Hile Apk with my friends?
-
You can play Stumble Guys Son Sürüm Hile Apk with your friends by creating a private match and inviting them. To do this, tap on the friends button on the main menu and then tap on the create button. You can then invite your friends by sharing the code or link.
-
Where can I get more information about Stumble Guys Son Sürüm Hile Apk?
-
You can get more information about Stumble Guys Son Sürüm Hile Apk by visiting the official website of Stumble Guys, the social media pages of Kitka Games, or the online forums and communities of Stumble Guys fans and modders. You can also contact the developers or the modders if you have any questions or feedback.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The God I Know by City Harvest Church - MP3 Download and Review.md b/spaces/congsaPfin/Manga-OCR/logs/The God I Know by City Harvest Church - MP3 Download and Review.md
deleted file mode 100644
index f444f5caa43dd993aa300da29b406df5a3d3c54a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The God I Know by City Harvest Church - MP3 Download and Review.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
The God I Know by City Harvest Church: A Review and Guide to MP3 Download
-
If you are looking for a powerful and uplifting worship song, you might want to check out The God I Know by City Harvest Church. This song is a declaration of faith and trust in the God who is righteous, holy, faithful, true, and mighty. In this article, we will review the song and its lyrics, and show you how to listen to it online or download it as an MP3 file.
-
Introduction
-
Who is City Harvest Church?
-
City Harvest Church (CHC) is a pentecostal megachurch in Singapore that was founded by Kong Hee and his wife Sun Ho in 1989. The church has a congregation size of over 15,000 and is known for its charismatic and contemporary worship style. CHC has also been involved in various social and humanitarian causes, such as disaster relief, education, and community development.
The God I Know is one of the songs from CHC's 2012 album, Draw Me. The song was written by Kong Hee, Sun Ho, Teo Poh Heng, and Mark Kwan, and features Sun Ho as the lead vocalist. The song expresses the personal relationship that the singer has with God, who is the source of their strength, joy, and peace. The song also affirms the identity and purpose of the church as the light of the world and the body of Christ.
-
Why is The God I Know a popular song?
-
The God I Know has been well-received by many listeners for its catchy melody, inspiring lyrics, and passionate delivery. The song has over 470,000 views on YouTube, over 100,000 streams on Spotify, and over 10,000 downloads on OKmusi. The song has also been covered by various artists and choirs, such as Chris Tomlin, Smule, and Videos4Christ. Many people have testified that the song has touched their hearts and encouraged their faith.
-
the god i know city harvest lyrics and chords
-the god i know city harvest church worship songs
-the god i know city harvest instrumental
-the god i know city harvest live
-the god i know city harvest karaoke
-the god i know city harvest piano tutorial
-the god i know city harvest guitar tabs
-the god i know city harvest cover
-the god i know city harvest album
-the god i know city harvest spotify
-the god i know city harvest song meaning
-the god i know city harvest mp3 free download
-the god i know city harvest sheet music
-the god i know city harvest video
-the god i know city harvest chords pdf
-the god i know city harvest acoustic
-the god i know city harvest remix
-the god i know city harvest youtube
-the god i know city harvest mp3 320kbps
-the god i know city harvest drum cover
-the god i know city harvest bass tabs
-the god i know city harvest reaction
-the god i know city harvest background vocals
-the god i know city harvest mp3 skull
-the god i know city harvest dance
-the god i know city harvest midi file
-the god i know city harvest testimony
-the god i know city harvest ringtone
-the god i know city harvest mp3 juice
-the god i know city harvest violin sheet music
-the god i know city harvest loop
-the god i know city harvest sermon illustration
-the god i know city harvest mp3 download waptrick
-the god i know city harvest ukulele chords
-the god i know city harvest multitrack stems
-the god i know city harvest devotional
-the god i know city harvest mp3 download fakaza
-the god i know city harvest saxophone sheet music
-the god i know city harvest split track
-the god i know city harvest bible verse reference
-
Main Body
-
How to listen to The God I Know online
-
If you want to listen to The God I Know online, there are several platforms that you can use. Here are some of the most popular ones:
-
YouTube
-
YouTube is one of the most widely used websites for watching videos and listening to music. You can find several versions of The God I Know on YouTube, such as the official music video, the live performance, and the lyric video. You can also create your own playlist or subscribe to CHC's channel for more songs.
-
Spotify
-
Spotify is one of the most popular music streaming services that offers millions of songs and podcasts. You can listen to The God I Know on Spotify by searching for the song or the album name. You can also follow CHC's profile or add the song to your own library or playlist. Spotify offers both free and premium plans, with different features and benefits.
-
Apple Music
-
Apple Music is another popular music streaming service that offers over 75 million songs and exclusive content. You can listen to The God I Know on Apple Music by searching for the song or the album name. You can also follow CHC's profile or add the song to your own library or playlist. Apple Music requires a subscription fee after a free trial period.
-
How to download The God I Know as MP3
-
If you want to download The God I Know as an MP3 file, you can use some of the following websites that offer free and legal music downloads. However, you should always respect the rights of the artists and the publishers, and only download the songs for personal use.
-
OKmusi
-
OKmusi is a free online music downloader that allows you to download any song from YouTube, SoundCloud, Spotify, and other platforms. You can download The God I Know from OKmusi by following these steps:
-
-
Go to OKmusi and paste the URL of the song you want to download in the search box.
-
Select the MP3 format and click on the Download button.
-
Wait for the download to finish and enjoy your song.
-
-
Jamendo Music
-
Jamendo Music is a website that offers free music downloads from independent artists who want to share their music with the world. You can download The God I Know from Jamendo Music by following these steps:
-
-
Go to Jamendo Music and search for The God I Know by City Harvest Church in the search box.
-
Click on the song title and then click on the Download button.
-
Choose the MP3 format and click on the Confirm button.
-
Wait for the download to finish and enjoy your song.
-
-
Free Music Archive
-
Free Music Archive is a website that offers free music downloads from various genres and categories. You can download The God I Know from Free Music Archive by following these steps:
-
-
Go to Free Music Archive and search for The God I Know by City Harvest Church in the search box.
-
Click on the song title and then click on the Download button.
-
Choose the MP3 format and click on the Save button.
-
Wait for the download to finish and enjoy your song.
-
-
Conclusion
-
Summary of the main points
-
In this article, we have reviewed The God I Know by City Harvest Church, a powerful and uplifting worship song that declares the faith and trust in God. We have also shown you how to listen to it online or download it as an MP3 file using various platforms and websites. We hope that this article has been helpful and informative for you.
-
Call to action
-
If you have enjoyed this article, please share it with your friends and family who might also be interested in this song. You can also leave a comment below and let us know what you think about The God I Know by City Harvest Church. Thank you for reading and have a blessed day!
-
Frequently Asked Questions
-
-
Who wrote The God I Know by City Harvest Church?
-
The God I Know was written by Kong Hee, Sun Ho, Teo Poh Heng, and Mark Kwan, who are members of City Harvest Church.
-
What album is The God I Know by City Harvest Church from?
-
The God I Know is from the album Draw Me, which was released in 2012 by City Harvest Church.
-
What are some of the lyrics of The God I Know by City Harvest Church?
-
Some of the lyrics of The God I Know are:
You're not a distant God
Not just a legend of old
In my darkest times
You're right beside me
You're everything that I need
The reason why I sing
You're my everything
The God I know

You're not a distant God
Not just a legend of old
In my darkest times
You're right beside me
You're everything that I need
The reason why I sing
You're my everything
The God I know

You're not a distant God
Not just a legend of old
In my darkest times
You're right beside me
You're everything that I need
The reason why I sing
You're my everything
The God I know

You are righteous
You are holy
You are faithful
You are true
And You are mighty
You are worthy
You are with me
The God I know

We are Your church
We are Your bride
We are Your chosen ones
And we will shine
We will arise
We will declare Your name
The God we know
The God we know
The God we know
-
Where can I find more songs by City Harvest Church?
-
You can find more songs by City Harvest Church on their official website, their YouTube channel, or their Spotify profile. You can also search for their songs on other music platforms or websites.
-
How can I support City Harvest Church and their ministry?
-
You can support City Harvest Church and their ministry by praying for them, donating to them, volunteering with them, or joining their events and services. You can also follow them on their social media accounts and share their messages and songs with others.
-
What are some other worship songs that I might like?
-
Some other worship songs that you might like are:
-
Way Maker by Sinach
-
Reckless Love by Cory Asbury
-
10,000 Reasons by Matt Redman
-
Goodness of God by Bethel Music
-
What a Beautiful Name by Hillsong Worship
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download HandBrake and Use Its Features.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download HandBrake and Use Its Features.md
deleted file mode 100644
index 3a25534a606c1684d29f2c8474ba2d15d67aec7b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download HandBrake and Use Its Features.md
+++ /dev/null
@@ -1,219 +0,0 @@
-
-
How to Download HandBrake: A Free and Open Source Video Converter
-
If you are looking for a free and easy way to convert videos from almost any format to a selection of modern and widely supported codecs, you should try HandBrake. HandBrake is an open source video transcoder that works on Windows, Mac, and Linux. In this article, we will show you how to download, install, and use HandBrake to convert your videos.
HandBrake is a tool that allows you to convert video from nearly any format to a selection of modern and widely supported codecs. You can use HandBrake to reduce the file size of your videos, change the resolution, frame rate, bitrate, audio quality, and more. You can also use HandBrake to rip DVDs and Blu-rays that are not copy protected.
-
Features of HandBrake
-
Some of the features of HandBrake are:
-
-
It is free and open source, meaning you can use it without paying anything or worrying about spyware or malware.
-
It supports multiple platforms, including Windows, Mac, and Linux.
-
It has a simple and intuitive user interface that makes it easy to use for beginners and advanced users alike.
-
It has a lot of presets for common devices and platforms, such as iPhone, iPad, Android, Apple TV, Roku, YouTube, Vimeo, etc.
-
It allows you to customize various settings for video and audio encoding, such as codec, container, quality, resolution, frame rate, bitrate, sample rate, channels, etc.
-
It supports batch processing, meaning you can convert multiple files or folders at once.
-
It has a built-in video preview that lets you see how your output will look like before encoding.
-
It has a queue system that lets you manage multiple encoding tasks.
-
It supports subtitles and chapters.
-
It has a command line interface for advanced users who want more control over the encoding process.
-
-
Supported formats and devices
-
HandBrake can convert video from nearly any format to a selection of modern and widely supported codecs. Some of the supported input formats are:
-
download handbrake for windows 10
-download handbrake for mac
-download handbrake for linux
-download handbrake portable
-download handbrake 1.6.1
-download handbrake 32 bit
-download handbrake 64 bit
-download handbrake flatpak
-download handbrake quicksync plugin
-download handbrake source code
-download handbrake old versions
-download handbrake checksums
-download handbrake open pgp
-download handbrake video converter
-download handbrake video transcoder
-download handbrake free and open source
-download handbrake documentation
-download handbrake user guide
-download handbrake presets
-download handbrake cli
-download handbrake nightly builds
-download handbrake github
-download handbrake alternative
-download handbrake dvd ripper
-download handbrake mkv to mp4
-download handbrake avi to mp4
-download handbrake mp4 to mp3
-download handbrake batch convert
-download handbrake subtitles
-download handbrake crop video
-download handbrake rotate video
-download handbrake reduce file size
-download handbrake increase quality
-download handbrake adjust framerate
-download handbrake change resolution
-download handbrake add watermark
-download handbrake merge videos
-download handbrake split videos
-download handbrake extract audio
-download handbrake edit metadata
-download handbrake hardware acceleration
-download handbrake gpu encoding
-download handbrake h265 codec
-download handbrake vp9 codec
-download handbrake webm format
-download handbrake mov format
-download handbrake flv format
-download handbrake wmv format
-download handbrake ogg format
-
-
MPEG-4 (.mp4), MPEG-2 (.mpg), MPEG-TS (.ts), Matroska (.mkv), AVI (.avi), WebM (.webm), Flash Video (.flv), WMV (.wmv), QuickTime (.mov), etc.
-
DVDs and Blu-rays that are not copy protected.
-
Video files or folders from your hard drive or network.
-
-
Some of the supported output formats are:
-
-
MPEG-4 (.mp4) with H.264 or H.265 (HEVC) video codec and AAC or MP3 audio codec.
-
MKV (.mkv) with H.264 or H.265 (HEVC) video codec and AAC, MP3, AC3, DTS, FLAC, or Vorbis audio codec.
-
WebM (.webm) with VP8 or VP9 video codec and Vorbis or Opus audio codec.
-
-
HandBrake also has presets for common devices and platforms that optimize the output for them. Some of the supported devices and platforms are:
-
-
Device/Platform
Preset Name
-
iPhone, iPad, iPod
Apple 1080p30 Surround, Apple 720p30 Surround, Apple 540p30 Surround, Apple 240p30
-
Android
Android 1080p30, Android 720p30, Android 480p30
-
Apple TV
Apple 2160p60 4K HEVC Surround, Apple 1080p60 Surround, Apple 720p30 Surround
-
Roku
Roku 2160p60 4K HEVC Surround, Roku 2160p60 4K, Roku 1080p30 Surround, Roku 720p30 Surround
-
Fire TV
Fire TV 2160p60 4K HEVC Surround, Fire TV 2160p60 4K, Fire TV 1080p24 Surround, Fire TV Stick 1080p24 Surround
-
Chromecast
Chromecast 2160p60 4K HEVC, Chromecast 1080p30
-
PlayStation
PlayStation 2160p60 4K HEVC, PlayStation 1080p30 Surround, PlayStation 720p30 Surround
-
Xbox
Xbox One S/X/360 Legacy Compatible, Xbox One S/X/360 Compatible, Xbox One S/X/360 HQ Compatible
-
Nintendo Switch
Nintendo Switch HQ Compatible, Nintendo Switch Compatible
Select the distribution of Linux you are using (Ubuntu, Fedora, Debian, etc.).
-
Follow the instructions on how to add the HandBrake repository to your system and install HandBrake via your package manager.
-
You can also download the flatpak or snap package of HandBrake that works on any Linux distribution.
-
If you want to use the command line interface of HandBrake, you can download the CLI version from the same page or install it via your package manager.
-
-
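If you plan to use the command-line version mentioned above, a conversion can also be driven from a short script instead of the graphical interface. This is only a sketch, assuming HandBrakeCLI is installed and on your PATH; the filenames are placeholders, and "Fast 1080p30" is one of HandBrake's built-in presets.

```python
import subprocess


def convert(src: str, dst: str, preset: str = "Fast 1080p30") -> None:
    # -i and -o set the source and destination; --preset picks one of the built-in presets.
    cmd = ["HandBrakeCLI", "-i", src, "-o", dst, "--preset", preset]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Placeholder filenames for illustration.
    convert("movie.mkv", "movie.mp4")
```
-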
How to install HandBrake on your computer
-
To install HandBrake on your computer, you need to follow these steps:
-
For Windows
-
-
Double-click on the downloaded file and follow the instructions on the setup wizard.
-
Choose the destination folder and components you want to install.
-
Click on Install and wait for the installation to finish.
-
Click on Finish and launch HandBrake from your Start menu or desktop shortcut.
-
-
For Mac
-
-
Double-click on the downloaded file and drag the HandBrake icon to your Applications folder.
-
Launch HandBrake from your Applications folder or dock.
-
You may need to allow HandBrake to run from your System Preferences > Security & Privacy > General tab if you see a warning message.
-
-
For Linux
-
-
If you installed HandBrake via your package manager, launch it from your applications menu or terminal.
To use HandBrake to convert videos, you need to follow these steps:
-
Select a source file or folder
-
-
Launch HandBrake and click on Open Source in the toolbar.
-
Browse and select the video file or folder you want to convert. You can also drag and drop it to the HandBrake window.
-
If you selected a folder, choose a title from the drop-down menu. If you selected a DVD or Blu-ray, choose a title and an angle. You can also scan multiple titles at once by checking Batch Scan in the preferences.
-
HandBrake will scan the source and display its information in the Summary tab.
-
-
Choose a preset or customize settings
-
-
Select a preset from the Presets panel on the right. You can choose from various categories and devices. You can also create your own presets by clicking on Save New Preset in the toolbar.
-
If you want to customize the settings, go to the other tabs and adjust them according to your needs. You can change the video codec, quality, resolution, frame rate, filters, audio codec, quality, sample rate, subtitles, chapters, etc.
-
You can preview how your output will look like by clicking on Preview in the toolbar. You can also compare it with the original by clicking on Live Static Preview.
-
-
Start encoding and monitor progress
-
-
When you are ready, click on Start Encode in the toolbar. You can also add multiple tasks to the queue by clicking on Add to Queue in the toolbar.
-
You can monitor the progress of the encoding in the bottom panel. You can see the estimated time, file size, bitrate, frame rate, etc.
-
You can pause, resume, or stop the encoding at any time by clicking on the corresponding buttons in the toolbar.
-
When the encoding is finished, you can find the output file in the destination folder you specified in the Summary tab. You can also open it directly by clicking on Open Destination in the toolbar.
-
-
Conclusion and FAQs
-
In this article, we have shown you how to download, install, and use HandBrake to convert videos from almost any format to a selection of modern and widely supported codecs. HandBrake is a free and open source video transcoder that works on Windows, Mac, and Linux. It has a lot of features and presets that make it easy and fast to convert videos for various devices and platforms. You can also customize various settings for video and audio encoding, preview and compare the output, and batch process multiple files or folders. HandBrake is a powerful and versatile tool that can help you with your video conversion needs.
-
Here are some frequently asked questions about HandBrake:
-
Q: Is HandBrake safe to use?
-
A: Yes, HandBrake is safe to use as long as you download it from the official website or a trusted source. HandBrake is free and open source, meaning you can inspect its code and verify its integrity. However, be careful of fake or malicious versions of HandBrake that may contain viruses or malware. Always check the digital signature and checksum of the downloaded file before installing it.
-
Q: How long does it take to convert a video with HandBrake?
-
A: The time it takes to convert a video with HandBrake depends on several factors, such as the size and quality of the source file, the output settings, the speed of your computer, and the number of tasks in the queue. Generally speaking, higher quality and larger files take longer to convert than lower quality and smaller files. You can see an estimate of the time remaining in the progress panel while encoding.
-
Q: How can I improve the quality of the output video?
-
A: There are several ways to improve the quality of the output video with HandBrake, such as:
-
-
Choose a higher quality preset or increase the quality slider or bitrate in the Video tab.
-
Choose a slower encoder preset or tune in the Video tab.
-
Enable deinterlacing or detelecine filters in the Filters tab if your source is interlaced or telecined.
-
Enable denoise or deblock filters in the Filters tab if your source is noisy or blocky.
-
Choose a higher resolution or aspect ratio in the Dimensions tab.
-
Choose a higher sample rate or bitrate in the Audio tab.
-
-
However, keep in mind that increasing the quality may also increase the file size and the encoding time of the output video. You should balance the quality and the size according to your needs and preferences.
-
Q: How can I reduce the file size of the output video?
-
A: There are several ways to reduce the file size of the output video with HandBrake, such as:
-
-
Choose a lower quality preset or decrease the quality slider or bitrate in the Video tab.
-
Choose a faster encoder preset or tune in the Video tab.
-
Disable or reduce the filters in the Filters tab if your source does not need them.
-
Choose a lower resolution or aspect ratio in the Dimensions tab.
-
Choose a lower sample rate or bitrate in the Audio tab.
-
Remove or compress the subtitles or chapters in the Subtitles and Chapters tabs.
-
-
However, keep in mind that reducing the file size may also reduce the quality of the output video. You should balance the size and the quality according to your needs and preferences.
-
Q: How can I convert multiple videos at once with HandBrake?
-
A: You can convert multiple videos at once with HandBrake by using the queue system. To do this, you need to follow these steps:
-
-
Select a source file or folder and choose a preset or customize settings as usual.
-
Click on Add to Queue in the toolbar. You will see a number indicating how many tasks are in the queue.
-
Repeat steps 1 and 2 for each video you want to convert.
-
When you are done, click on Start Queue in the toolbar. HandBrake will start encoding each task one by one.
-
You can monitor, pause, resume, or stop the queue in the bottom panel. You can also edit, remove, or reorder the tasks in the queue by clicking on Show Queue in the toolbar.
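-
If the graphical queue feels tedious for large batches, the same idea can be scripted with the command-line version of HandBrake mentioned earlier. This is a minimal sketch, assuming HandBrakeCLI is installed and that every .mkv file in a placeholder folder called videos should become an .mp4 using the same built-in preset:

```python
import subprocess
from pathlib import Path

SRC_DIR = Path("videos")   # placeholder folder containing the source files
PRESET = "Fast 1080p30"    # one of HandBrake's built-in presets


def batch_convert(src_dir: Path) -> None:
    for src in sorted(src_dir.glob("*.mkv")):
        dst = src.with_suffix(".mp4")
        if dst.exists():
            continue  # skip files that were already converted
        subprocess.run(
            ["HandBrakeCLI", "-i", str(src), "-o", str(dst), "--preset", PRESET],
            check=True,
        )


if __name__ == "__main__":
    batch_convert(SRC_DIR)
```

Because already-converted files are skipped, the script can be re-run safely if it is interrupted partway through a batch.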
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Todo lo que necesitas saber sobre Queen Red APK la app que reemplaza a Dark Play.md b/spaces/congsaPfin/Manga-OCR/logs/Todo lo que necesitas saber sobre Queen Red APK la app que reemplaza a Dark Play.md
deleted file mode 100644
index 2a97a2f7caa4648ad33ebe4583e1673f85664139..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Todo lo que necesitas saber sobre Queen Red APK la app que reemplaza a Dark Play.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
Queen Red APK: A Free App to Watch Movies and TV Shows on Your Smart TV
-
Do you love watching movies and TV shows on your smart TV? Do you want to enjoy unlimited entertainment without paying a dime? If yes, then you should check out Queen Red APK, a free app that lets you stream and download thousands of movies and TV shows from various genres and languages. In this article, we will tell you what Queen Red APK is, how to download and install it on your Android devices and smart TVs, what are its pros and cons, and what are some alternatives to it.
-
What is Queen Red APK?
-
Queen Red APK is a free app that allows you to watch movies and TV shows online or offline on your smart TV, Android device, Chromecast, or PC. It has a huge library of content from different sources, including Netflix, HBO, Amazon Prime, Disney+, and more. You can find movies and TV shows from various genres, such as action, comedy, drama, horror, romance, thriller, etc. You can also watch content from different countries and languages, such as Spanish, English, French, German, Italian, etc. You can search for your favorite titles using the built-in search engine or browse through the categories and subcategories. You can also add content to your favorites list or request new content from the developers.
Queen Red APK has many features that make it a great app for entertainment lovers. Some of these features are:
-
-
It is free and does not require any registration or subscription.
-
It has a simple and easy-to-use interface that lets you navigate through the app without any hassle.
-
It has a large collection of movies and TV shows from various sources, genres, languages, and countries.
-
It supports multiple video quality options, such as HD, SD, 4K, etc.
-
It allows you to stream or download content for offline viewing.
-
It supports subtitles in different languages.
-
It supports Chromecast and other casting devices.
-
It updates its content regularly with new releases and episodes.
-
It has a feedback section where you can report any issues or request new content.
-
-
How to Download and Install Queen Red APK on Android Devices
-
If you want to download and install Queen Red APK on your Android device, you need to follow these steps:
-
-
Go to the settings of your device and enable the option of "Unknown sources" under the security section. This will allow you to install apps from third-party sources.
-
Download the Queen Red APK file from a trusted source. You can use this link or this link to download the latest version of the app.
-
Once the download is complete, locate the file in your device's storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the app from your app drawer or home screen and enjoy watching movies and TV shows on your Android device.
-
-
How to Download and Install Queen Red APK on Smart TVs
-
If you want to download and install Queen Red APK on your smart TV, you need to follow these steps:
-
-
Make sure that your smart TV is connected to the internet and has enough storage space.
-
Download the Queen Red APK file from a trusted source using your smart TV's browser. You can use the same links as mentioned above for Android devices.
-
Go to the settings of your smart TV and enable the option of "Unknown sources" under the security section. This will allow you to install apps from third-party sources.
-
Go to the file manager of your smart TV and locate the downloaded APK file. Tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the app from your smart TV's app menu or home screen and enjoy watching movies and TV shows on your smart TV.
-
-
Pros and Cons of Queen Red APK
-
Queen Red APK is a great app for entertainment lovers, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of Queen Red APK:
-
Pros
-
-
Free and unlimited content
-
One of the best things about Queen Red APK is that it offers free and unlimited access to thousands of movies and TV shows from various sources, genres, languages, and countries. You don't have to pay any subscription fees or register an account to use the app. You can watch as much content as you want without any restrictions or limitations.
-
User-friendly interface
-
Another advantage of Queen Red APK is that it has a user-friendly interface that makes it easy to navigate through the app and find your desired content. The app has a simple and elegant design that displays the content in categories and subcategories. You can also use the search engine or the favorites list to find your favorite titles. The app also has a feedback section where you can report any issues or request new content.
-
Compatible with various devices
-
A third benefit of Queen Red APK is that it is compatible with various devices, such as smart TVs, Android devices, Chromecast, and PC. You can watch movies and TV shows on any device that supports Android operating system or has a web browser. You can also cast your content to other devices using Chromecast or other casting devices.
-
-
Cons
-
-
Contains ads
-
One of the drawbacks of Queen Red APK is that it contains ads that may interrupt your viewing experience. The app displays ads before and during the playback of the content. The ads may be annoying or inappropriate for some users. However, you can skip or close the ads after a few seconds.
-
Requires external player
-
Another disadvantage of Queen Red APK is that it requires an external player to play the content. The app does not have its own built-in player, so you need to install another app, such as MX Player or VLC Player, to watch movies and TV shows on Queen Red APK. This may be inconvenient or problematic for some users who don't have enough storage space or prefer using one app for everything.
-
Not available on official app stores
-
A third drawback of Queen Red APK is that it is not available on official app stores, such as Google Play Store or Apple App Store. This means that you have to download and install the app from third-party sources, which may pose some risks to your device's security and performance. You also have to enable the option of "Unknown sources" on your device's settings, which may expose your device to malware or viruses.
-
-
Alternatives to Queen Red APK
-
If you are looking for some alternatives to Queen Red APK, you can try these apps:
-
Hiro Peliculas APK
-
Hiro Peliculas APK is another free app that lets you watch movies and TV shows online or offline on your smart TV, Android device, Chromecast, or PC. It has a similar interface and features as Queen Red APK, but it also has some additional options, such as parental control, dark mode, favorites list, etc. It also has less ads than Queen Red APK.
-
Dark Play APK
-
Dark Play APK is a free app that allows you to watch movies and TV shows online or offline on your smart TV, Android device, Chromecast, or PC. It has a different interface and features than Queen Red APK, but it also has a large library of content from various sources, genres, languages, and countries. It also supports subtitles in different languages.
-
IPTV PRO APK
-
IPTV PRO APK is a paid app that lets you watch live TV channels from various countries and regions on your smart TV, Android device, Chromecast, or PC. It has a different interface and features than Queen Red APK, but it also has a huge collection of channels from different categories, such as sports, news, movies, music, etc. It also supports EPG and playlists.
-
Conclusion
-
Queen Red APK is a free app that lets you watch movies and TV shows online or offline on your smart TV, Android device, Chromecast, or PC. It has a simple and user-friendly interface that lets you find your desired content easily. It has a large library of content from various sources, genres, languages, and countries. It supports multiple video quality options, subtitles, and casting devices. However, it also has some drawbacks, such as ads, external player requirement, and unofficial app store availability. You can also try some alternatives to Queen Red APK, such as Hiro Peliculas APK, Dark Play APK, and IPTV PRO APK.
-
FAQs
-
Here are some frequently asked questions about Queen Red APK:
-
-
Is Queen Red APK safe to use?
-
Queen Red APK is generally safe to use, but you should always download and install it from trusted sources and scan it with an antivirus app before using it. You should also enable the option of "Unknown sources" on your device's settings only when installing the app and disable it afterwards.
-
Is Queen Red APK legal to use?
-
Queen Red APK is not legal to use in some countries or regions where streaming or downloading copyrighted content is prohibited. You should always check the laws and regulations of your country or region before using the app. You should also use a VPN service to protect your privacy and security when using the app.
-
How can I update Queen Red APK?
-
Queen Red APK does not have an automatic update feature, so you have to manually update it whenever a new version is available. You can check the official website or social media pages of the app for any updates. You can also use the feedback section of the app to request updates from the developers.
-
How can I contact the developers of Queen Red APK?
-
You can contact the developers of Queen Red APK through the feedback section of the app or through their email address: queenredapk@gmail.com. You can also follow them on their social media pages: Facebook, Twitter, and Instagram.
-
How can I support the developers of Queen Red APK?
-
You can support the developers of Queen Red APK by sharing the app with your friends and family, giving positive ratings and reviews on their website or social media pages, and donating to them through PayPal or Bitcoin.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/El Universo Interior Hugo Arechiga Pdf.md b/spaces/contluForse/HuggingGPT/assets/El Universo Interior Hugo Arechiga Pdf.md
deleted file mode 100644
index ba055276bac9a887e115432b2bff7b7bbff96cae..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/El Universo Interior Hugo Arechiga Pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-### Basic usage
-Upload your audio, enter the beat swapping pattern, change scale and shift if needed, and run it.
-
-### pattern syntax
-patterns are sequences of **beats**, separated by **commas** or other separators. You can use spaces freely in patterns to make them look prettier.
-- `1, 3, 2, 4` - swap 2nd and 3rd beat every four beats. Repeats every four beats because `4` is the biggest number in it.
-- `1, 3, 4` - skip 2nd beat every four beats
-- `1, 2, 3, 4!` - skip 4th beat every four beats. `!` skips the beat.
-
-**slicing:**
-- `1>0.5` - plays first half of 1st beat
-- `1<0.5` - plays last half of 1st beat
-- `1 > 1/3, 2, 3, 4` - every four beats, plays first third of the first beat - you can use math expressions anywhere in your pattern.
-- also instead of slicing beats you can use a smaller `scale` parameter to make more precise beat edits
-
-**merging beats:**
-- `1; 2, 3, 4` - every four beats, play 1st and 2nd beats at the same time.
-
-**effects:**
-- `1, 2r` - 2nd beat will be reversed
-- `1, 2s0.5` - 2nd beat will be played at 0.5x speed
-- `1, 2d10` - 2nd beat will have 8-bit effect (downsampled)
-
-You can do much more with the syntax - shuffle/randomize beats, use samples, mix two songs, etc. Syntax is described in detail at https://github.com/stunlocked1/beat_manipulator
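To make the grouping behaviour concrete, here is a tiny standalone sketch (this is not the beat_manipulator API, just an illustration of the idea) that reorders beat-sized chunks the way a pattern such as `1, 3, 2, 4` does:

```python
# Illustration only: reorder a list of beat-sized chunks in groups of max(order).
# Leftover chunks that don't fill a complete group are dropped in this toy version.
def swap_beats(beats, order):
    """`order` is 1-based, e.g. [1, 3, 2, 4] swaps the 2nd and 3rd beat of every four."""
    n = max(order)
    out = []
    for start in range(0, len(beats) - n + 1, n):
        out.extend(beats[start + i - 1] for i in order)
    return out

chunks = ["b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8"]
print(swap_beats(chunks, [1, 3, 2, 4]))  # ['b1', 'b3', 'b2', 'b4', 'b5', 'b7', 'b6', 'b8']
```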
-### scale
-`scale = 0.5` will insert a new beat position between every existing beat position in the beatmap. That allows you to make patterns on smaller intervals.
-
-`scale = 2`, on the other hand, will merge every two beat positions in the beatmap. Useful, for example, when beat map detection sees the BPM as two times faster than it actually is, and puts beats in between every actual beat.
-### shift
-Shifts the beatmap, in beats. For example, if you want to remove the 4th beat every four beats, you can write `1, 2, 3, 4!`. However, sometimes beat detection doesn't correctly identify which beat is first, so the same pattern might remove, say, the 2nd beat of every 4 instead. In that case, use `shift = 2` to line the pattern up with the 4th beat again. Also, sometimes beats are detected right in between actual beats, so `shift = 0.5` or `-0.5` will fix it.
-### creating images
-You can create cool images based on beat positions. Each song produces its own unique image. This gradio app creates a 2048x2048 image from each song.
-### presets
-A bunch of example patterns: https://github.com/stunlocked1/beat_manipulator/blob/main/beat_manipulator/presets.yaml
-
-Those are supposed to be used on normalized beat maps, where kick + snare is two beats, so make sure to adjust beatmaps using `scale` and `shift`.
-
-### Changelog:
-- play two beats at the same time by using `;` instead of `,`
-- significantly reduced clicking
-- shuffle and randomize beats
-- gradient effect, similar to high pass
-- add samples to beats
-- use beats from other songs
-
-### My soundcloud https://soundcloud.com/stunlocked
-"""
- ).launch(share=False)
\ No newline at end of file
diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/common/maxout.py b/spaces/duycse1603/math2tex/HybridViT/module/component/common/maxout.py
deleted file mode 100644
index 07e40b8fb96dfe09dd9ecdd04eb5094a84e2c20b..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/HybridViT/module/component/common/maxout.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from torch import nn
-
-class Maxout(nn.Module):
- """
- Maxout makes pools from the last dimension and keeps only the maximum value from
- each pool.
- """
-
- def __init__(self, pool_size):
- """
- Args:
- pool_size (int): Number of elements per pool
- """
- super(Maxout, self).__init__()
- self.pool_size = pool_size
-
- def forward(self, x):
- [*shape, last] = x.size()
- out = x.view(*shape, last // self.pool_size, self.pool_size)
- out, _ = out.max(-1)
- return out
-
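A minimal usage sketch for the Maxout module above; the shapes are arbitrary, the only requirement is that the last dimension is divisible by `pool_size`:

```python
import torch

# pool_size=2 halves the last dimension, keeping the larger value of each pair
maxout = Maxout(pool_size=2)
x = torch.randn(4, 16, 8)   # (batch, tokens, features)
y = maxout(x)
print(y.shape)              # torch.Size([4, 16, 4])
```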
diff --git a/spaces/ehristoforu/sbinterface/app.py b/spaces/ehristoforu/sbinterface/app.py
deleted file mode 100644
index 44d24d24bf222fadd03ed8266806ac2af43203ea..0000000000000000000000000000000000000000
--- a/spaces/ehristoforu/sbinterface/app.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
-pipe = pipe.to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt).images[0]
-
-image.save("astronaut_rides_horse.png")
diff --git a/spaces/enesbol/case_dif/dataloader.py b/spaces/enesbol/case_dif/dataloader.py
deleted file mode 100644
index 1feafed8f54edbb66aa8f2fd1f3edb5a9e630820..0000000000000000000000000000000000000000
--- a/spaces/enesbol/case_dif/dataloader.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import cv2
-import glob
-import torch
-import numpy as np
-import albumentations as albu
-from pathlib import Path
-from albumentations.pytorch.transforms import ToTensorV2
-from torch.utils.data import Dataset, DataLoader
-from sklearn.model_selection import train_test_split
-
-
-class DatasetGenerate(Dataset):
- def __init__(self, img_folder, gt_folder, edge_folder, phase: str = 'train', transform=None, seed=None):
- self.images = sorted(glob.glob(img_folder + '/*'))
- self.gts = sorted(glob.glob(gt_folder + '/*'))
- self.edges = sorted(glob.glob(edge_folder + '/*'))
- self.transform = transform
-
- train_images, val_images, train_gts, val_gts, train_edges, val_edges = train_test_split(self.images, self.gts,
- self.edges,
- test_size=0.05,
- random_state=seed)
- if phase == 'train':
- self.images = train_images
- self.gts = train_gts
- self.edges = train_edges
- elif phase == 'val':
- self.images = val_images
- self.gts = val_gts
- self.edges = val_edges
- else: # Testset
- pass
-
- def __getitem__(self, idx):
- image = cv2.imread(self.images[idx])
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- mask = cv2.imread(self.gts[idx])
- mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
- edge = cv2.imread(self.edges[idx])
- edge = cv2.cvtColor(edge, cv2.COLOR_BGR2GRAY)
-
- if self.transform is not None:
- augmented = self.transform(image=image, masks=[mask, edge])
- image = augmented['image']
- mask = np.expand_dims(augmented['masks'][0], axis=0) # (1, H, W)
- mask = mask / 255.0
- edge = np.expand_dims(augmented['masks'][1], axis=0) # (1, H, W)
- edge = edge / 255.0
-
- return image, mask, edge
-
- def __len__(self):
- return len(self.images)
-
-
-class Test_DatasetGenerate(Dataset):
- def __init__(self, img_folder, gt_folder=None, transform=None):
- self.images = sorted(glob.glob(img_folder + '/*'))
- self.gts = sorted(glob.glob(gt_folder + '/*')) if gt_folder is not None else None
- self.transform = transform
-
- def __getitem__(self, idx):
- image_name = Path(self.images[idx]).stem
- image = cv2.imread(self.images[idx])
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- original_size = image.shape[:2]
-
- if self.transform is not None:
- augmented = self.transform(image=image)
- image = augmented['image']
-
- if self.gts is not None:
- return image, self.gts[idx], original_size, image_name
- else:
- return image, original_size, image_name
-
- def __len__(self):
- return len(self.images)
-
-
-def get_loader(img_folder, gt_folder, edge_folder, phase: str, batch_size, shuffle,
- num_workers, transform, seed=None):
- if phase == 'test':
- dataset = Test_DatasetGenerate(img_folder, gt_folder, transform)
- data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers)
- else:
- dataset = DatasetGenerate(img_folder, gt_folder, edge_folder, phase, transform, seed)
- data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers,
- drop_last=True)
-
- print(f'{phase} length : {len(dataset)}')
-
- return data_loader
-
-
-def get_train_augmentation(img_size, ver):
- if ver == 1:
- transforms = albu.Compose([
- albu.Resize(img_size, img_size, always_apply=True),
- albu.Normalize([0.485, 0.456, 0.406],
- [0.229, 0.224, 0.225]),
- ToTensorV2(),
- ])
- if ver == 2:
- transforms = albu.Compose([
- albu.OneOf([
- albu.HorizontalFlip(),
- albu.VerticalFlip(),
- albu.RandomRotate90()
- ], p=0.5),
- albu.OneOf([
- albu.RandomContrast(),
- albu.RandomGamma(),
- albu.RandomBrightness(),
- ], p=0.5),
- albu.OneOf([
- albu.MotionBlur(blur_limit=5),
- albu.MedianBlur(blur_limit=5),
- albu.GaussianBlur(blur_limit=5),
- albu.GaussNoise(var_limit=(5.0, 20.0)),
- ], p=0.5),
- albu.Resize(img_size, img_size, always_apply=True),
- albu.Normalize([0.485, 0.456, 0.406],
- [0.229, 0.224, 0.225]),
- ToTensorV2(),
- ])
- return transforms
-
-
-def get_test_augmentation(img_size):
- transforms = albu.Compose([
- albu.Resize(img_size, img_size, always_apply=True),
- albu.Normalize([0.485, 0.456, 0.406],
- [0.229, 0.224, 0.225]),
- ToTensorV2(),
- ])
- return transforms
-
-
-def gt_to_tensor(gt):
- gt = cv2.imread(gt)
- gt = cv2.cvtColor(gt, cv2.COLOR_BGR2GRAY) / 255.0
- gt = np.where(gt > 0.5, 1.0, 0.0)
- gt = torch.tensor(gt, device='cuda', dtype=torch.float32)
- gt = gt.unsqueeze(0).unsqueeze(1)
-
- return gt
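A short usage sketch for the helpers above. The folder paths, image size, and batch size are placeholders for illustration, not values taken from the original project:

```python
# Hypothetical dataset layout — point these at your own image / mask / edge folders.
train_tf = get_train_augmentation(img_size=320, ver=2)
train_loader = get_loader(
    img_folder="data/train/images",
    gt_folder="data/train/masks",
    edge_folder="data/train/edges",
    phase="train",
    batch_size=8,
    shuffle=True,
    num_workers=4,
    transform=train_tf,
    seed=42,
)

for image, mask, edge in train_loader:
    # image: (B, 3, 320, 320) float tensor; mask and edge: (B, 1, 320, 320) in [0, 1]
    break
```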
diff --git a/spaces/facebook/MusicGen/audiocraft/modules/rope.py b/spaces/facebook/MusicGen/audiocraft/modules/rope.py
deleted file mode 100644
index c12cee0954f27c45d79627771fdf7fa9fc10dfcc..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/audiocraft/modules/rope.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from torch import nn
-import torch
-
-
-class XPos(nn.Module):
- """Length-extrapolatable positional embedding (xPos) from [Sun et al 2022](https://arxiv.org/abs/2212.10554v1).
- This applies an exponential decay to the RoPE rotation matrix.
-
- Args:
- dim (int): Embedding dimension.
- smoothing (float): Smoothing factor applied to the decay rates.
- base_scale (int): Base decay rate, given in terms of scaling time.
- device (torch.device, optional): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, smoothing: float = 0.4, base_scale: int = 512,
- device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
- self.base_scale = base_scale
-
- half_dim = dim // 2
- adim = torch.arange(half_dim, device=device, dtype=dtype)
- decay_rates = (adim / half_dim + smoothing) / (1.0 + smoothing)
- self.register_buffer("decay_rates", decay_rates)
- self.decay: tp.Optional[torch.Tensor] = None
-
- def get_decay(self, start: int, end: int):
- """Create complex decay tensor, cache values for fast computation."""
- if self.decay is None or end > self.decay.shape[0]:
- assert isinstance(self.decay_rates, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.decay_rates.device, dtype=self.dtype)
- power = idx / self.base_scale
- scale = self.decay_rates ** power.unsqueeze(-1)
- self.decay = torch.polar(scale, torch.zeros_like(scale))
- return self.decay[start:end] # [T, C/2]
-
-
-class RotaryEmbedding(nn.Module):
- """Rotary positional embedding (RoPE) from [Su et al 2022](https://arxiv.org/abs/2104.09864).
-
- Args:
- dim (int): Embedding dimension (twice the number of frequencies).
- max_period (float): Maximum period of the rotation frequencies.
- xpos (bool): Use xPos, applies an exponential decay to rotation matrix.
- scale (float): Scale of positional embedding, set to 0 to deactivate.
- device (torch.device, optional): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, max_period: float = 10000.0, xpos: bool = False,
- scale: float = 1.0, device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- self.scale = scale
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
-
- adim = torch.arange(0, dim, 2, device=device, dtype=dtype)[: (dim // 2)]
- frequencies = 1.0 / (max_period ** (adim / dim))
- self.register_buffer("frequencies", frequencies)
- self.rotation: tp.Optional[torch.Tensor] = None
-
- self.xpos = XPos(dim, device=device, dtype=dtype) if xpos else None
-
- def get_rotation(self, start: int, end: int):
- """Create complex rotation tensor, cache values for fast computation."""
- if self.rotation is None or end > self.rotation.shape[0]:
- assert isinstance(self.frequencies, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.frequencies.device, dtype=self.dtype)
- angles = torch.outer(idx, self.frequencies)
- self.rotation = torch.polar(torch.ones_like(angles), angles)
- return self.rotation[start:end]
-
- def rotate(self, x: torch.Tensor, start: int = 0, time_dim: int = 1, invert_decay: bool = False):
- """Apply rope rotation to query or key tensor."""
- T = x.shape[time_dim]
- target_shape = [1] * x.dim()
- target_shape[time_dim] = T
- target_shape[-1] = -1
- rotation = self.get_rotation(start, start + T).view(target_shape)
-
- if self.xpos:
- decay = self.xpos.get_decay(start, start + T).view(target_shape)
- else:
- decay = 1.0
-
- if invert_decay:
- decay = decay ** -1
-
- x_complex = torch.view_as_complex(x.to(self.dtype).reshape(*x.shape[:-1], -1, 2))
- scaled_rotation = (rotation * decay) * self.scale + (1.0 - self.scale)
- x_out = torch.view_as_real(x_complex * scaled_rotation).view_as(x)
-
- return x_out.type_as(x)
-
- def rotate_qk(self, query: torch.Tensor, key: torch.Tensor, start: int = 0, time_dim: int = 1):
- """ Apply rope rotation to both query and key tensors.
- Supports streaming mode, in which query and key are not expected to have the same shape.
- In streaming mode, key will be of length [P + C] with P the cached past timesteps, but
- query will be [C] (typically C == 1).
-
- Args:
- query (torch.Tensor): Query to rotate.
- key (torch.Tensor): Key to rotate.
- start (int): Start index of the sequence for time offset.
- time_dim (int): which dimension represent the time steps.
- """
- query_timesteps = query.shape[time_dim]
- key_timesteps = key.shape[time_dim]
- streaming_offset = key_timesteps - query_timesteps
-
- query_out = self.rotate(query, start + streaming_offset, time_dim)
- key_out = self.rotate(key, start, time_dim, invert_decay=True)
-
- return query_out, key_out
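A quick sanity-check sketch for the RotaryEmbedding module above; the batch size, sequence length, and dimension are arbitrary illustration values:

```python
import torch

rope = RotaryEmbedding(dim=64)      # dim must be even
query = torch.randn(2, 16, 64)      # [batch, time, channels]; time_dim=1 by default
key = torch.randn(2, 16, 64)
q_rot, k_rot = rope.rotate_qk(query, key)
print(q_rot.shape, k_rot.shape)     # both keep their original shapes
```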
diff --git a/spaces/falterWliame/Face_Mask_Detection/James Bond 007 Blood Stone UPD Crack Only RELOADED EXE 23.0087.md b/spaces/falterWliame/Face_Mask_Detection/James Bond 007 Blood Stone UPD Crack Only RELOADED EXE 23.0087.md
deleted file mode 100644
index 0b38ff6fe9052e99cbf30206f92c8396f51767b9..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/James Bond 007 Blood Stone UPD Crack Only RELOADED EXE 23.0087.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
James Bond 007: Blood Stone features the likeness and voice talent of Daniel Craig, Joss Stone and Judi Dench, and an epic, original story developed by legendary screenwriter Bruce Feirstein. Players can engage in cover-based firefights and lethal hand-to-hand combat, and speed their way through explosive, adrenaline-fuelled driving sequences as they embark on a global chase leading to action on land and sea through Athens, Istanbul, Monaco and Bangkok. Gamers can also feel what it is like to be a 00 agent as they take the battle online in several robust 16-person multiplayer modes that require skill, teamwork and strategy, competing in matches that pit spies against mercenaries.
-
An international conspiracy has placed the UK's most secretive bio-chemical project into deadly hands, and only Her Majesty's most lethal agent, James Bond, can unravel the mystery. To navigate through layers of corruption, you, as Bond, will embark on a global chase that will have you battling on land, sea and air through Athens, Istanbul, Monaco, and Bangkok. Nothing is what it seems as each adventure reveals a deeper and more sinister conspiracy.
-
James Bond 007 Blood Stone Crack Only RELOADED EXE 23.0087
ive seen the episode where he has to stop an angry african mob, and a black bond would look terribly out of place. its not that he doesnt look the part, but theres so much about the character to play up that this would be a failure. a black bond would really only work if they make a black james bond. but there should be no black james bond, because that would be a failure of the character. if it were a black bond, its because the character was created to be black, and there would be no point in the character if he wasnt black. and if youre going to do that, the character should be female. in fact, it would be a comedy to make the black bond female.
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Kingdom Hearts 2 Final Mix English Patch Iso Pcsx2 !NEW!.md b/spaces/falterWliame/Face_Mask_Detection/Kingdom Hearts 2 Final Mix English Patch Iso Pcsx2 !NEW!.md
deleted file mode 100644
index 6168289a3010f0768f2450a1603cc991c03c4265..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Kingdom Hearts 2 Final Mix English Patch Iso Pcsx2 !NEW!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Kingdom Hearts 2 Final Mix English Patch Iso Pcsx2
-
-Tales of Destiny II (Disc 1) ROM (ISO) Download for Sony Playstation / PSX - CoolROM. ... KH2 Kingdom Hearts 2 Final Mix English Patch Subtitles ... It doesn't support PCSX2, so I patched an ISO and burned it ...
-
-
-
diff --git a/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 contour (fill in unvoiced/zero frames).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i] # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
diff --git a/spaces/fatiXbelha/sd/Call of Duty Mobile Old Version APK The Ultimate FPS Game for Android.md b/spaces/fatiXbelha/sd/Call of Duty Mobile Old Version APK The Ultimate FPS Game for Android.md
deleted file mode 100644
index 61f47dde321ea1ccffa393562cdc001453c466c7..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Call of Duty Mobile Old Version APK The Ultimate FPS Game for Android.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Call of Duty Mobile Old Version APK: How to Download and Play
-
Call of Duty Mobile is one of the most popular and successful first-person shooter (FPS) games for mobile devices. It offers a thrilling and immersive gameplay experience with high-quality graphics, sound effects, and controls. However, some players may prefer to play older versions of Call of Duty Mobile for various reasons. In this article, we will explain what Call of Duty Mobile is, why you may want to download its old version apk, and how to do it safely and easily.
What is Call of Duty Mobile?
-
Call of Duty Mobile is a free-to-play FPS game developed by TiMi Studios and published by Activision. It was released globally in October 2019 for Android and iOS devices. The game is based on the Call of Duty franchise, which is one of the most successful video game series in history. Call of Duty Mobile features characters, maps, weapons, and modes from various Call of Duty games, such as Modern Warfare, Black Ops, and Zombies.
-
Features and modes of Call of Duty Mobile
-
Call of Duty Mobile offers a variety of features and modes that cater to different preferences and play styles. Some of the main features and modes are:
-
-
Multiplayer: This mode allows you to compete with other players online in various modes, such as Team Deathmatch, Domination, Search and Destroy, Hardpoint, Free for All, etc. You can also customize your loadout, rank up your weapons, unlock perks, and earn rewards.
-
Battle Royale: This mode is similar to other popular battle royale games, such as PUBG Mobile and Fortnite. You can choose to play solo, duo, or squad with up to 100 players on a large map. You can also select your class, parachute into the map, loot weapons and items, drive vehicles, and fight to be the last one standing.
-
Zombies: This mode is a fan-favorite mode from the Call of Duty series. You can team up with other players or play solo to survive waves of zombies in different maps and scenarios. You can also upgrade your weapons, use traps and power-ups, and complete objectives.
-
Gunsmith: This feature allows you to customize your weapons with various attachments, skins, camos, stickers, etc. You can also test your weapons in the firing range before using them in matches.
-
Battle Pass: This feature allows you to unlock exclusive rewards by completing tasks and leveling up your battle pass. You can also purchase the premium battle pass to access more rewards.
-
-
Why download Call of Duty Mobile old version apk?
-
Benefits of playing older versions of Call of Duty Mobile
-
Some players may prefer to play older versions of Call of Duty Mobile for various reasons. Some of the possible benefits are:
-
-
Nostalgia: Some players may have fond memories of playing older versions of Call of Duty Mobile, such as the first season or the Halloween event. They may want to relive those moments and enjoy the game as it was before.
-
Performance: Some players may have low-end devices that cannot run the latest version of Call of Duty Mobile smoothly. They may experience lag, crashes, or glitches that affect their gameplay. They may want to play older versions of Call of Duty Mobile that are more compatible with their devices and have less bugs.
-
Preference: Some players may not like the changes or updates that the developers have made to Call of Duty Mobile over time. They may prefer the gameplay, graphics, or features of older versions of Call of Duty Mobile that suit their taste better.
-
-
Risks and challenges of downloading Call of Duty Mobile old version apk
-
However, downloading and playing older versions of Call of Duty Mobile also comes with some risks and challenges. Some of the possible drawbacks are:
-
-
Security: Downloading Call of Duty Mobile old version apk from unofficial sources may expose your device to malware, viruses, or hackers. You may also risk losing your account or personal data if you log in with your credentials.
-
Compatibility: Playing older versions of Call of Duty Mobile may not be compatible with the current version of the game server or other players. You may encounter errors, glitches, or crashes that prevent you from playing the game properly.
-
Support: Playing older versions of Call of Duty Mobile may not receive any support or updates from the developers. You may miss out on new features, modes, events, or rewards that are available in the latest version of the game.
-
-
How to download and install Call of Duty Mobile old version apk?
-
If you still want to download and play older versions of Call of Duty Mobile, you will need to find a reliable source that provides the apk files for different versions of the game. One such source is Uptodown, which is a website that offers free and safe downloads for various apps and games. Another source is APKCombo, which is a website that allows you to download apk files for any version of any app or game.
-
Steps to download Call of Duty Mobile old version apk from Uptodown
-
Here are the steps to download Call of Duty Mobile old version apk from Uptodown:
-
Step 1: Go to the Uptodown website and open the Call of Duty Mobile page
-
Open your browser and visit the Call of Duty Mobile page on Uptodown, where the available versions of the game are listed.
-
Step 2: Choose the version of Call of Duty Mobile you want to download
-
Scroll down and find the version of Call of Duty Mobile that you want to download. You can check the release date, size, and rating of each version. Click on the "Download" button next to the version you want.
-
Step 3: Download the XAPK file and install it using an XAPK installer app
-
You will be redirected to a page where you can download the XAPK file for the selected version of Call of Duty Mobile. An XAPK file is a compressed file that contains both the APK file and the OBB file for an app or game. You will need an XAPK installer app to install it on your device. You can download one from https://xapk-installer.en.uptodown.com/android. After downloading the XAPK installer app, open it and locate the XAPK file for Call of Duty Mobile that you downloaded. Tap on it and follow the instructions to install it on your device.
-
Steps to download Call of Duty Mobile old version apk from APKCombo
-
Here are the steps to download Call of Duty Mobile old version apk from APKCombo:
-
Step 1: Go to the APKCombo website
-
Open your browser and visit the APKCombo website.
-
Step 2: Search for Call of Duty Mobile and select the version you want to download
-
In the search box, type "Call of Duty Mobile" and hit enter. You will see a list of results for Call of Duty Mobile. Click on the one that matches the app ID "com.activision.callofduty.shooter". You will see a page with information and screenshots of Call of Duty Mobile. Scroll down and find the section that says "All versions". You will see a list of different versions of Call of Duty Mobile that are available for download. Click on the "Download APK" button next to the version you want.
-
Step 3: Download the APK file and install it on your device
-
You will be redirected to a page where you can download the APK file for the selected version of Call of Duty Mobile. An APK file is a file that contains the installation package for an app or game. You will need to enable the installation of apps from unknown sources on your device settings before installing it. After downloading the APK file, open it and follow the instructions to install it on your device.
-
Conclusion
-
Call of Duty Mobile is a great FPS game for mobile devices that offers a lot of fun and excitement. However, some players may want to play older versions of Call of Duty Mobile for various reasons, such as nostalgia, performance, or preference. In this article, we have explained how to download and install Call of Duty Mobile old version apk from two reliable sources: Uptodown and APKCombo. We have also discussed the benefits and risks of playing older versions of Call of Duty Mobile. We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about Call of Duty Mobile old version apk:
-
-
Q: Is it legal to download Call of Duty Mobile old version apk?
-
A: It depends on your country's laws and regulations regarding intellectual property rights and digital distribution. Generally, it is not illegal to download Call of Duty Mobile old version apk as long as you do not distribute or sell it to others. However, you may violate the terms and conditions of Activision or Google Play Store by doing so. Therefore, we advise you to download Call of Duty Mobile old version apk at your own risk and discretion.
-
Q: Can I play Call of Duty Mobile old version apk online with other players?
-
A: It depends on the version of Call of Duty Mobile old version apk you download and install. Some older versions may not be compatible with the current version of the game server or other players. You may encounter errors, glitches, or crashes that prevent you from playing online with other players. Therefore, we recommend you to play Call of Duty Mobile old version apk offline or with friends who have the same version as you.
-
Q: Can I update Call of Duty Mobile old version apk to the latest version?
-
A: Yes, you can update Call of Duty Mobile old version apk to the latest version by downloading and installing the latest version from Google Play Store or Uptodown. However, you may lose your progress, data, or settings if you do so. Therefore, we suggest you to back up your data before updating Call of Duty Mobile old version apk.
-
Q: Can I use my existing account to play Call of Duty Mobile old version apk?
-
A: Yes, you can use your existing account to play Call of Duty Mobile old version apk by logging in with your credentials. However, you may risk losing your account or personal data if you download Call of Duty Mobile old version apk from untrusted sources. Therefore, we advise you to use a secondary account or create a new account to play Call of Duty Mobile old version apk.
-
Q: What are some alternatives to Call of Duty Mobile old version apk?
-
A: Some alternatives to Call of Duty Mobile old version apk are:
-
-
PUBG Mobile: This is another popular FPS game for mobile devices that features a battle royale mode with up to 100 players on a large map.
-
Garena Free Fire: This is another popular FPS game for mobile devices that features a battle royale mode with up to 50 players on a smaller map.
-
Critical Ops: This is another popular FPS game for mobile devices that features a multiplayer mode with various modes, such as Team Deathmatch, Defuse, Gun Game, etc.
Modern Combat 5: This is another popular FPS game for mobile devices that features a single-player campaign and a multiplayer mode with various modes, such as Team Battle, Capture the Flag, Free for All, etc.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Naruto Senki Mod APK and Unlock All Character for Free.md b/spaces/fatiXbelha/sd/Download Naruto Senki Mod APK and Unlock All Character for Free.md
deleted file mode 100644
index 883e2e6338b21c9b1821572256c3669116410aea..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Naruto Senki Mod APK and Unlock All Character for Free.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Naruto Senki Mod Apk: Unlock All Characters for Free
-
If you are a fan of Naruto, the popular anime and manga series, you might have heard of Naruto Senki, a mobile game based on the Naruto universe. In this game, you can choose from a variety of characters and fight against enemies in different modes. But what if you want to unlock all the characters without spending any money? That's where Naruto Senki Mod Apk comes in. In this article, we will tell you what Naruto Senki is, what Naruto Senki Mod Apk is, and how to download and install it on your device.
-
What is Naruto Senki?
-
Naruto Senki is a 2D action game for Android devices that lets you play as your favorite Naruto characters. You can choose from over 50 characters, each with their own skills and abilities. You can also customize your character's appearance and equipment. The game has several modes, such as story mode, arcade mode, survival mode, and online mode. In story mode, you can follow the original Naruto storyline and relive the epic battles. In arcade mode, you can fight against random opponents in different stages. In survival mode, you can test your skills and endurance against waves of enemies. In online mode, you can join other players and compete in team battles or free-for-all matches.
-
Features of Naruto Senki
-
Some of the features of Naruto Senki are:
-
High-quality graphics and sound effects that capture the essence of the Naruto anime.
-
Smooth and responsive controls that allow you to perform combos and special moves.
-
A wide range of characters to choose from, each with their own strengths and weaknesses.
-
A variety of modes to suit your preferences and mood.
-
A ranking system that tracks your progress and achievements.
-
A shop where you can buy items and upgrades for your character.
-
-
How to play Naruto Senki
-
To play Naruto Senki, you need to have an Android device with at least 1 GB of RAM and 100 MB of free storage space. You can download the game from the Google Play Store or from other sources. Once you have installed the game, you can launch it and select your preferred mode. You can also adjust the settings, such as the difficulty level, the sound volume, and the language. To control your character, you can use the virtual buttons on the screen. You can move your character with the left button, attack with the right button, and use special skills with the top button. You can also switch characters with the bottom button. To win a match, you need to defeat all your opponents or have more health than them when the time runs out.
-
What is Naruto Senki Mod Apk?
-
Naruto Senki Mod Apk is a modified version of Naruto Senki that gives you access to all the features and content of the game for free. With this mod apk, you can unlock all the characters without spending any coins or money. You can also enjoy unlimited skills, unlimited coins, unlimited health, and other benefits. You can download Naruto Senki Mod Apk from our website for free. All you need to do is click on the download button and follow the instructions.
-
Benefits of Naruto Senki Mod Apk
-
Some of the benefits of Naruto Senki Mod Apk are:
-
-
You can play as any character you want without having to unlock them first.
-
You can use any skill you want without having to wait for the cooldown.
-
You can have unlimited coins to buy items and upgrades for your character.
-
You can have unlimited health to survive longer in battles.
-
You can enjoy faster loading times and smoother gameplay.
-
You can avoid annoying ads and pop-ups that interrupt your gaming experience.
-
-
How to download and install Naruto Senki Mod Apk
-
To download and install Naruto Senki Mod Apk on your device, you need to follow these simple steps:
-
-
Click on the download button below and wait for the file to be downloaded.
-
Go to your device's settings and enable the installation of apps from unknown sources.
-
Locate the downloaded file and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to be completed.
-
Launch the game and enjoy Naruto Senki Mod Apk with all the features unlocked.
-
-
Conclusion
-
Naruto Senki is a fun and exciting game for Naruto fans and anyone who loves action games. You can play as your favorite Naruto characters and fight against enemies in different modes. However, if you want to unlock all the characters and enjoy other benefits, you need to download Naruto Senki Mod Apk from our website. With this mod apk, you can have unlimited access to all the features and content of the game for free. You can also avoid ads and enjoy faster loading times. So, what are you waiting for? Download Naruto Senki Mod Apk now and unleash your ninja skills!
-
FAQs
-
Here are some frequently asked questions about Naruto Senki Mod Apk:
-
-
Is Naruto Senki Mod Apk safe to use? Yes, Naruto Senki Mod Apk is safe to use. It does not contain any viruses or malware that can harm your device. However, you should always download it from a trusted source like our website.
-
Do I need to root my device to use Naruto Senki Mod Apk? No, you do not need to root your device to use Naruto Senki Mod Apk. It works on both rooted and non-rooted devices.
-
Will I get banned for using Naruto Senki Mod Apk? No, you will not get banned for using Naruto Senki Mod Apk. The mod apk is undetectable by the game's servers and does not interfere with your account.
-
Can I play online with Naruto Senki Mod Apk? Yes, you can play online with Naruto Senki Mod Apk. However, you should be careful not to abuse the mod features or else you might get reported by other players.
-
Can I update Naruto Senki Mod Apk? No, you cannot update Naruto Senki Mod Apk. If you want to get the latest version of the game, you need to download it again from our website.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Roblox Mod APK Terbaru 2022 Tanpa Root dan Tanpa Banned Dapatkan Robux Sebanyak-banyaknya.md b/spaces/fatiXbelha/sd/Download Roblox Mod APK Terbaru 2022 Tanpa Root dan Tanpa Banned Dapatkan Robux Sebanyak-banyaknya.md
deleted file mode 100644
index 0840f41e4791c84de9736889fc4f172516fb6fa4..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Roblox Mod APK Terbaru 2022 Tanpa Root dan Tanpa Banned Dapatkan Robux Sebanyak-banyaknya.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Download Roblox Mod APK Terbaru 2022: How to Get Unlimited Robux and More
-
Do you love playing games on Roblox? Do you wish you had more Robux to buy your favorite items and access premium features? If yes, then you are in luck. In this article, we will show you how to download and install the latest version of Roblox Mod APK, a modified version of the official Roblox app that gives you unlimited Robux and more. Read on to find out more.
What is Roblox?
-
A popular online game platform and game creation system
-
Roblox is one of the most popular online game platforms and game creation systems in the world. It allows users to play and create games across various genres, such as adventure, role-playing, simulation, racing, and more. Roblox has over 200 million monthly active users and hosts over 40 million games created by its community.
-
Features of Roblox
-
Play and create games with millions of players
-
Roblox lets you play and create games with millions of players from around the world. You can join existing games or create your own using the Roblox Studio, a powerful and easy-to-use tool that lets you design, code, and publish your games. You can also share your games with other users and earn revenue from them.
-
Customize your avatar with thousands of items
-
Roblox also lets you customize your avatar with thousands of items, such as clothes, accessories, hairstyles, faces, and more. You can buy these items using Robux, the virtual currency of Roblox. You can also earn Robux by creating and selling your own items or by participating in various events and promotions.
-
Chat and socialize with friends and communities
-
Roblox also lets you chat and socialize with friends and communities on the platform. You can join or create groups, send messages, voice chat, and more. You can also follow your favorite creators, join fan clubs, and discover new games and content.
-
What is Roblox Mod APK?
-
A modified version of the official Roblox app
-
Roblox Mod APK is a modified version of the official Roblox app that gives you access to unlimited Robux and more. It is not developed or endorsed by Roblox Corporation, but by independent developers who want to provide users with a better gaming experience.
-
Benefits of Roblox Mod APK
-
Unlimited Robux to buy anything you want
-
The main benefit of Roblox Mod APK is that it gives you unlimited Robux to buy anything you want on the platform. You don't have to spend real money or waste time earning Robux by other means. You can buy any item, feature, or game pass that you desire without any limitations.
-
-
Unlock all premium features and items
-
Another benefit of Roblox Mod APK is that it unlocks all the premium features and items that are normally restricted or require payment. You can access the Builders Club, which gives you more privileges and perks, such as creating and joining more groups, selling more items, and receiving a daily Robux stipend. You can also use the Developer Exchange, which allows you to exchange your Robux for real money.
-
No ads and no root required
-
A final benefit of Roblox Mod APK is that it removes all the annoying ads that interrupt your gaming experience. You don't have to watch any videos or complete any surveys to earn Robux or access certain features. You also don't need to root your device to install the Roblox Mod APK, which means you don't have to risk damaging your device or voiding your warranty.
-
How to Download and Install Roblox Mod APK Terbaru 2022?
-
Step-by-step guide to download and install the latest version of Roblox Mod APK
-
If you want to download and install the latest version of Roblox Mod APK, you need to follow these simple steps:
-
Step 1: Enable unknown sources on your device
-
Before you can install the Roblox Mod APK, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
-
Step 2: Download the Roblox Mod APK file from a trusted source
-
Next, you need to download the Roblox Mod APK file from a trusted source. There are many websites that claim to offer the Roblox Mod APK, but some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you need to be careful and only download the Roblox Mod APK from a reputable and verified source. One such source is [RobloxModAPK.com], which provides the latest and safest version of the Roblox Mod APK.
-
Step 3: Locate and install the Roblox Mod APK file on your device
-
After you have downloaded the Roblox Mod APK file, you need to locate and install it on your device. To do this, go to your file manager, then downloads, then find the Roblox Mod APK file and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to complete.
-
Step 4: Launch the app and enjoy unlimited Robux and more
-
Finally, you can launch the app and enjoy unlimited Robux and more. You will see a new icon on your home screen with the name Roblox Mod. Tap on it and log in with your existing Roblox account or create a new one. You will see that you have unlimited Robux in your balance and all the premium features and items unlocked. You can now play and create games with millions of players without any limitations.
-
Conclusion
-
In conclusion, Roblox is a great online game platform and game creation system that lets you play and create games with millions of players from around the world. However, if you want to have more fun and freedom on the platform, you should download and install the latest version of Roblox Mod APK, which gives you unlimited Robux and more. With Roblox Mod APK, you can buy anything you want, unlock all premium features and items, remove all ads, and enjoy a better gaming experience. To download and install the latest version of Roblox Mod APK terbaru 2022, just follow the simple steps we have outlined above.
-
If you liked this article, please share it with your friends and leave a comment below. Also, don't forget to check out our other articles on [RobloxModAPK.com] for more tips and tricks on how to get the most out of Roblox.
-
Frequently Asked Questions
-
Here are some of the most frequently asked questions about Roblox Mod APK:
-
-
Is Roblox Mod APK safe?
-
Yes, Roblox Mod APK is safe as long as you download it from a trusted source like [RobloxModAPK.com]. We test our files for viruses and malware before uploading them to our website. However, we are not responsible for any damage or loss that may occur from using our files.
-
Is Roblox Mod APK legal?
-
No, Roblox Mod APK is not legal, as it violates the terms of service of Roblox Corporation. By using it, you are breaking the rules and risking having your account banned or suspended. For that reason, we do not recommend using Roblox Mod APK at all; if you choose to use it, you do so at your own risk.
-
Will Roblox Mod APK work on my device?
-
Roblox Mod APK should work on most Android devices that support the official Roblox app. However, some devices may not be compatible or may experience some issues due to different specifications or settings. If you encounter any problems while using Roblox Mod APK, you can try to clear the cache, reinstall the app, or contact us for help.
-
How can I update Roblox Mod APK?
-
Roblox Mod APK is updated regularly to match the latest version of the official Roblox app. To update Roblox Mod APK, you need to visit our website [RobloxModAPK.com] and download the latest version of the file. Then, you need to uninstall the previous version of Roblox Mod APK and install the new one following the same steps as before.
-
Can I use Roblox Mod APK with my existing Roblox account?
-
Yes, you can use Roblox Mod APK with your existing Roblox account. You just need to log in with your username and password as usual. However, you should be careful not to use Roblox Mod APK on your main or important account, as you may risk losing it if you get caught or reported by other users.
-
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_amp.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_amp.py
deleted file mode 100644
index 9ac2a03f4212faa129faed447a8f4519c0a00a8b..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_amp.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import Dict, List
-
-import torch
-
-# Compare parsed version numbers rather than raw strings: a plain string
-# comparison would treat '1.10' as older than '1.9'.
-_torch_major, _torch_minor = (int(v) for v in torch.__version__.split('.')[:2])
-if (_torch_major, _torch_minor) < (1, 9):
-    Iterable = torch._six.container_abcs.Iterable
-else:
-    import collections
-
-    Iterable = collections.abc.Iterable
-from torch.cuda.amp import GradScaler
-
-
-class _MultiDeviceReplicator(object):
- """
- Lazily serves copies of a tensor to requested devices. Copies are cached per-device.
- """
-
- def __init__(self, master_tensor: torch.Tensor) -> None:
- assert master_tensor.is_cuda
- self.master = master_tensor
- self._per_device_tensors: Dict[torch.device, torch.Tensor] = {}
-
- def get(self, device) -> torch.Tensor:
- retval = self._per_device_tensors.get(device, None)
- if retval is None:
- retval = self.master.to(device=device, non_blocking=True, copy=True)
- self._per_device_tensors[device] = retval
- return retval
-
-
-class MaxClipGradScaler(GradScaler):
- def __init__(self, init_scale, max_scale: float, growth_interval=100):
- GradScaler.__init__(self, init_scale=init_scale, growth_interval=growth_interval)
- self.max_scale = max_scale
-
- def scale_clip(self):
- if self.get_scale() == self.max_scale:
- self.set_growth_factor(1)
- elif self.get_scale() < self.max_scale:
- self.set_growth_factor(2)
- elif self.get_scale() > self.max_scale:
- self._scale.fill_(self.max_scale)
- self.set_growth_factor(1)
-
- def scale(self, outputs):
- """
- Multiplies ('scales') a tensor or list of tensors by the scale factor.
-
- Returns scaled outputs. If this instance of :class:`GradScaler` is not enabled, outputs are returned
- unmodified.
-
- Arguments:
- outputs (Tensor or iterable of Tensors): Outputs to scale.
- """
- if not self._enabled:
- return outputs
- self.scale_clip()
- # Short-circuit for the common case.
- if isinstance(outputs, torch.Tensor):
- assert outputs.is_cuda
- if self._scale is None:
- self._lazy_init_scale_growth_tracker(outputs.device)
- assert self._scale is not None
- return outputs * self._scale.to(device=outputs.device, non_blocking=True)
-
- # Invoke the more complex machinery only if we're treating multiple outputs.
- stash: List[_MultiDeviceReplicator] = [] # holds a reference that can be overwritten by apply_scale
-
- def apply_scale(val):
- if isinstance(val, torch.Tensor):
- assert val.is_cuda
- if len(stash) == 0:
- if self._scale is None:
- self._lazy_init_scale_growth_tracker(val.device)
- assert self._scale is not None
- stash.append(_MultiDeviceReplicator(self._scale))
- return val * stash[0].get(val.device)
- elif isinstance(val, Iterable):
- iterable = map(apply_scale, val)
- if isinstance(val, list) or isinstance(val, tuple):
- return type(val)(iterable)
- else:
- return iterable
- else:
- raise ValueError("outputs must be a Tensor or an iterable of Tensors")
-
- return apply_scale(outputs)
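The file removed above defines `MaxClipGradScaler`, a `GradScaler` variant whose `scale_clip()` stops the loss scale from growing past `max_scale`. As a rough illustration of how such a scaler is typically wired into a mixed-precision training step, here is a minimal sketch; the import path, model, data and hyperparameters are placeholders rather than anything taken from this repository, and a CUDA device is required because the scaler asserts its inputs live on the GPU.

```python
# Minimal usage sketch, not part of the original file.
import torch
from torch import nn

from utils_amp import MaxClipGradScaler  # import path assumed

model = nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# The scale starts at 2**10 and scale_clip() keeps it from growing past 2**16.
amp_scaler = MaxClipGradScaler(init_scale=2 ** 10, max_scale=2 ** 16)

features = torch.randn(8, 512, device="cuda")
labels = torch.randint(0, 10, (8,), device="cuda")

with torch.cuda.amp.autocast():
    loss = criterion(model(features), labels)

amp_scaler.scale(loss).backward()  # scale() calls scale_clip() before scaling
amp_scaler.step(optimizer)         # unscales gradients, then optimizer.step()
amp_scaler.update()                # adjusts the scale for the next iteration
optimizer.zero_grad()
```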
diff --git a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_wsc.sh b/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_wsc.sh
deleted file mode 100644
index 5d05662f1a2252de3bbd4fd9719ef8d3262d9c63..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_wsc.sh
+++ /dev/null
@@ -1,158 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=slurm-test # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=2 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=16 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --mem-per-cpu=8G # memory per cpu-core (4G is default)
-#SBATCH --gres=gpu:2 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-
-
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/yangping/cache/torch_extendsions
-
-BERT_NAME=bert-3.9B
-
-TASK=wsc
-TEXTA_NAME=texta
-LABEL_NAME=label
-ID_NAME=id
-
-
-BATCH_SIZE=16
-VAL_BATCH_SIZE=56
-ZERO_STAGE=2
-
-
-ROOT_PATH=cognitive_comp
-DATA_DIR=/cognitive_comp/yangping/data/unidata/multichoice/mrc_multichoice_data/other/cluewsc2020/
-PRETRAINED_MODEL_PATH=/$ROOT_PATH/yangping/pretrained_model/$BERT_NAME/
-
-
-CHECKPOINT_PATH=/$ROOT_PATH/yangping/checkpoints/fengshen-finetune/$TASK/
-DEFAULT_ROOT_DIR=/cognitive_comp/yangping/nlp/Fengshenbang-LM/fengshen/scripts/log/$TASK
-OUTPUT_PATH=/$ROOT_PATH/yangping/nlp/modelevaluation/output/${TASK}_predict.json
-
-
-config_json="./ds_config.$SLURM_JOBID.json"
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-# reduce_bucket_size: hidden_size*hidden_size
-# stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size
-# stage3_param_persistence_threshold: 10 * hidden_size
-
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": $BATCH_SIZE,
- "steps_per_print": 100,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": 3,
- "offload_optimizer": {
- "device": "cpu",
- "pin_memory": true
- },
- "offload_param": {
- "device": "cpu",
- "pin_memory": true
- },
- "overlap_comm": true,
- "contiguous_gradients": true,
- "sub_group_size": 1e9,
- "reduce_bucket_size": 6553600,
- "stage3_prefetch_bucket_size": 5898240,
- "stage3_param_persistence_threshold": 25600,
- "stage3_max_live_parameters": 1e9,
- "stage3_max_reuse_distance": 1e9,
- "stage3_gather_fp16_weights_on_model_save": true
- },
- "optimizer": {
- "type": "Adam",
- "params": {
- "lr": 1e-5,
- "betas": [
- 0.9,
- 0.95
- ],
- "eps": 1e-8,
- "weight_decay": 1e-2
- }
- },
- "scheduler": {
- "type": "WarmupLR",
- "params":{
- "warmup_min_lr": 5e-6,
- "warmup_max_lr": 1e-5
- }
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.json \
- --valid_data dev.json \
- --test_data test.json \
- --train_batchsize $BATCH_SIZE \
- --valid_batchsize $VAL_BATCH_SIZE \
- --max_length 128 \
- --texta_name $TEXTA_NAME \
- --label_name $LABEL_NAME \
- --id_name $ID_NAME \
- "
-
-MODEL_ARGS="\
- --learning_rate 0.00001 \
- --weight_decay 0.01 \
- --warmup 0.001 \
- --num_labels 2 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_acc \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 10 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_acc:.4f} \
- "
-TRAINER_ARGS="\
- --max_epochs 7 \
- --gpus 2 \
- --strategy deepspeed_stage_3 \
- --precision 16 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 10 \
- --default_root_dir $DEFAULT_ROOT_DIR \
- "
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
- "
-
-DOCKER_PATH=/$ROOT_PATH/yangping/containers/pytorch21_06_py3_docker_image.sif
-SCRIPT_PATH=/$ROOT_PATH/yangping/nlp/fengshen/fengshen/examples/finetune_classification.py
-
-# python3 $SCRIPT_PATH $options
-srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $DOCKER_PATH python3 $SCRIPT_PATH $options
-
diff --git a/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/iflytek_preprocessing.py b/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/iflytek_preprocessing.py
deleted file mode 100644
index 6a8f5ec44851697ac1a36f299a0a132dcf486b71..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/iflytek_preprocessing.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import json
-from tqdm import tqdm
-import os
-import argparse
-
-label2desc={
- '银行': '银行',
- '社区服务': '社区',
- '电商': '电商',
- '支付': '支付',
- '经营养成': '养成',
- '卡牌': '卡牌',
- '借贷': '借贷',
- '驾校': '驾校',
- '理财': '理财',
- '职考': '职考',
- '新闻': '新闻',
- '旅游资讯': '旅游',
- '公共交通': '交通',
- '魔幻': '魔幻',
- '医疗服务': '医疗',
- '影像剪辑': '影像',
- '动作类': '动作',
- '工具': '工具',
- '体育竞技': '体育',
- '小说': '小说',
- '运动健身': '运动',
- '相机': '相机',
- '辅助工具': '辅助',
- '快递物流': '快递',
- '高等教育': '教育',
- '股票': '股票',
- '菜谱': '菜谱',
- '行车辅助': '行车',
- '仙侠': '仙侠',
- '亲子儿童': '亲子',
- '购物咨询': '购物',
- '射击游戏': '射击',
- '漫画': '漫画',
- '中小学': '小学',
- '同城服务': '同城',
- '成人教育': '成人',
- '求职': '求职',
- '电子产品': '电子',
- '艺术': '艺术',
- '薅羊毛': '赚钱',
- '约会社交': '约会',
- '经营': '经营',
- '兼职': '兼职',
- '短视频': '短视',
- '音乐': '音乐',
- '英语': '英语',
- '棋牌中心': '棋牌',
- '摄影修图': '摄影',
- '养生保健': '养生',
- '办公': '办公',
- '政务': '政务',
- '视频': '视频',
- '论坛圈子': '论坛',
- '彩票': '彩票',
- '直播': '直播',
- '其他': '其他',
- '休闲益智': '休闲',
- '策略': '策略',
- '即时通讯': '通讯',
- '汽车交易': '买车',
- '违章': '违章',
- '地图导航': '地图',
- '民航': '民航',
- '电台': '电台',
- '语言(非英语)': '语言',
- '搞笑': '搞笑',
- '婚恋社交': '婚恋',
- '社区超市': '超市',
- '日常养车': '养车',
- '杂志': '杂志',
- '视频教育': '在线',
- '家政': '家政',
- '影视娱乐': '影视',
- '装修家居': '装修',
- '体育咨讯': '资讯',
- '社交工具': '社交',
- '餐饮店': '餐饮',
- '美颜': '美颜',
- '问诊挂号': '挂号',
- '飞行空战': '飞行',
- '综合预定': '预定',
- '电影票务': '票务',
- '笔记': '笔记',
- '买房': '买房',
- '外卖': '外卖',
- '母婴': '母婴',
- '打车': '打车',
- '情侣社交': '情侣',
- '日程管理': '日程',
- '租车': '租车',
- '微博博客': '博客',
- '百科': '百科',
- '绘画': '绘画',
- '铁路': '铁路',
- '生活社交': '生活',
- '租房': '租房',
- '酒店': '酒店',
- '保险': '保险',
- '问答交流': '问答',
- '收款': '收款',
- 'MOBA': '竞技',
- 'K歌': '唱歌',
- '技术': '技术',
- '减肥瘦身': '减肥',
- '工作社交': '工作',
- '团购': '团购',
- '记账': '记账',
- '女性': '女性',
- '公务员': '公务',
- '二手': '二手',
- '美妆美业': '美妆',
- '汽车咨询': '汽车',
- '行程管理': '行程',
- '免费WIFI': '免费',
- '教辅': '教辅',
- '成人': '两性',
- '出国': '出国',
- '婚庆': '婚庆',
- '民宿短租': '民宿'}
-
-choice = [k for k,v in label2desc.items()]
-print('1'.join(choice))
-print(len('1'.join(choice)))
-
-
-def load_data(file_path,is_training=False):
- with open(file_path, 'r', encoding='utf8') as f:
- lines = f.readlines()
- result=[]
- for line in tqdm(lines):
- data = json.loads(line)
- texta = data['sentence']
- textb = ''
- question = '请问app应用属于?'
-
- choice = [v for k,v in label2desc.items()]
- answer = label2desc[data['label_des']] if 'label_des' in data.keys() else ''
-
- # choice = [k for k,v in label2desc.items()]
- # answer = data['label_des'] if 'label_des' in data.keys() else ''
-
- label = choice.index(answer) if 'label_des' in data.keys() else 0
- text_id = data['id'] if 'id' in data.keys() else 0
- result.append({'texta':texta,
- 'textb':textb,
- 'question':question,
- 'choice':choice,
- 'answer':answer,
- 'label':label,
- 'id':text_id})
- # for i in range(5):
- # print(result[i])
- return result
-
-
-def save_data(data,file_path):
- with open(file_path, 'w', encoding='utf8') as f:
- for line in data:
- json_data=json.dumps(line,ensure_ascii=False)
- f.write(json_data+'\n')
-
-
-
-if __name__=="__main__":
- parser = argparse.ArgumentParser(description="train")
- parser.add_argument("--data_path", type=str,default="")
- parser.add_argument("--save_path", type=str,default="")
-
- args = parser.parse_args()
-
-
- data_path = args.data_path
- save_path = args.save_path
-
- if not os.path.exists(save_path):
- os.makedirs(save_path)
-
- file_list = ['train','dev','test']
- for file in file_list:
- file_path = os.path.join(data_path,file+'.json')
- output_path = os.path.join(save_path,file+'.json')
- save_data(load_data(file_path),output_path)
\ No newline at end of file
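The script removed above reshapes the CLUE iflytek classification data into a multiple-choice format. As a sketch of what that looks like in practice, the snippet below shows an assumed command line and the shape of a single output record; the paths and the sample sentence are invented for illustration, and only a few of the `label2desc` values are listed.

```python
# Assumed invocation (paths are placeholders):
#   python iflytek_preprocessing.py --data_path ./clue/iflytek --save_path ./iflytek_mc
#
# Each line of the resulting train.json / dev.json / test.json holds one
# multiple-choice record shaped like this (sample values, not real data):
example_record = {
    "texta": "一款提供转账、理财和生活缴费服务的手机银行应用",  # the app description
    "textb": "",
    "question": "请问app应用属于?",
    "choice": ["银行", "社区", "电商", "支付", "养成"],  # in full, every value of label2desc
    "answer": "银行",      # empty string for the unlabeled test split
    "label": 0,            # index of `answer` inside `choice`
    "id": 0,
}
print(example_record["choice"][example_record["label"]])  # -> 银行
```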
diff --git a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/unet.py b/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/unet.py
deleted file mode 100644
index 187b6c9737fda143f70e8dae365c35b690820466..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/unet.py
+++ /dev/null
@@ -1,975 +0,0 @@
-from abc import abstractmethod
-
-import math
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .fp16_util import convert_module_to_f16, convert_module_to_f32
-from .nn import (
- checkpoint,
- conv_nd,
- linear,
- avg_pool_nd,
- zero_module,
- normalization,
- timestep_embedding,
-)
-
-from transformers import PreTrainedModel, PretrainedConfig
-
-
-class AttentionPool2d(nn.Module):
- """
- Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
- """
-
- def __init__(
- self,
- spacial_dim: int,
- embed_dim: int,
- num_heads_channels: int,
- output_dim: int = None,
- ):
- super().__init__()
- self.positional_embedding = nn.Parameter(
- th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5
- )
- self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
- self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
- self.num_heads = embed_dim // num_heads_channels
- self.attention = QKVAttention(self.num_heads)
-
- def forward(self, x):
- b, c, *_spatial = x.shape
- x = x.reshape(b, c, -1) # NC(HW)
- x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1)
- x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1)
- x = self.qkv_proj(x)
- x = self.attention(x)
- x = self.c_proj(x)
- return x[:, :, 0]
-
-
-class TimestepBlock(nn.Module):
- """
- Any module where forward() takes timestep embeddings as a second argument.
- """
-
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- """
- A sequential module that passes timestep embeddings to the children that
- support it as an extra input.
- """
-
- def forward(self, x, emb):
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- else:
- x = layer(x)
- return x
-
-
-class Upsample(nn.Module):
- """
- An upsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- if use_conv:
- self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=1)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.dims == 3:
- x = F.interpolate(
- x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
- )
- else:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
- if self.use_conv:
- x = self.conv(x)
- return x
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
-
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(
- dims, self.channels, self.out_channels, 3, stride=stride, padding=1
- )
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResBlock(TimestepBlock):
- """
- A residual block that can optionally change the number of channels.
-
- :param channels: the number of input channels.
- :param emb_channels: the number of timestep embedding channels.
- :param dropout: the rate of dropout.
- :param out_channels: if specified, the number of out channels.
- :param use_conv: if True and out_channels is specified, use a spatial
- convolution instead of a smaller 1x1 convolution to change the
- channels in the skip connection.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param use_checkpoint: if True, use gradient checkpointing on this module.
- :param up: if True, use this block for upsampling.
- :param down: if True, use this block for downsampling.
- """
-
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- use_conv=False,
- use_scale_shift_norm=False,
- dims=2,
- use_checkpoint=False,
- up=False,
- down=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_checkpoint = use_checkpoint
- self.use_scale_shift_norm = use_scale_shift_norm
-
- self.in_layers = nn.Sequential(
- normalization(channels),
- nn.SiLU(),
- conv_nd(dims, channels, self.out_channels, 3, padding=1),
- )
-
- self.updown = up or down
-
- if up:
- self.h_upd = Upsample(channels, False, dims)
- self.x_upd = Upsample(channels, False, dims)
- elif down:
- self.h_upd = Downsample(channels, False, dims)
- self.x_upd = Downsample(channels, False, dims)
- else:
- self.h_upd = self.x_upd = nn.Identity()
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels),
- nn.SiLU(),
- nn.Dropout(p=dropout),
- zero_module(
- conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
- ),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- elif use_conv:
- self.skip_connection = conv_nd(
- dims, channels, self.out_channels, 3, padding=1
- )
- else:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
- def forward(self, x, emb):
- """
- Apply the block to a Tensor, conditioned on a timestep embedding.
-
- :param x: an [N x C x ...] Tensor of features.
- :param emb: an [N x emb_channels] Tensor of timestep embeddings.
- :return: an [N x C x ...] Tensor of outputs.
- """
- return checkpoint(
- self._forward, (x, emb), self.parameters(), self.use_checkpoint
- )
-
- def _forward(self, x, emb):
- if self.updown:
- in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
- h = in_rest(x)
- h = self.h_upd(h)
- x = self.x_upd(x)
- h = in_conv(h)
- else:
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = th.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
- """
- An attention block that allows spatial positions to attend to each other.
-
- Originally ported from here, but adapted to the N-d case.
- https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
- """
-
- def __init__(
- self,
- channels,
- num_heads=1,
- num_head_channels=-1,
- use_checkpoint=False,
- use_new_attention_order=False,
- ):
- super().__init__()
- self.channels = channels
- if num_head_channels == -1:
- self.num_heads = num_heads
- else:
- assert (
- channels % num_head_channels == 0
- ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
- self.num_heads = channels // num_head_channels
- self.use_checkpoint = use_checkpoint
- self.norm = normalization(channels)
- self.qkv = conv_nd(1, channels, channels * 3, 1)
- if use_new_attention_order:
- # split qkv before split heads
- self.attention = QKVAttention(self.num_heads)
- else:
- # split heads before split qkv
- self.attention = QKVAttentionLegacy(self.num_heads)
-
- self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
- def forward(self, x):
- return checkpoint(self._forward, (x,), self.parameters(), self.use_checkpoint)
-
- def _forward(self, x):
- b, c, *spatial = x.shape
- x = x.reshape(b, c, -1)
- qkv = self.qkv(self.norm(x))
- h = self.attention(qkv)
- h = self.proj_out(h)
- return (x + h).reshape(b, c, *spatial)
-
-
-def count_flops_attn(model, _x, y):
- """
- A counter for the `thop` package to count the operations in an
- attention operation.
- Meant to be used like:
- macs, params = thop.profile(
- model,
- inputs=(inputs, timestamps),
- custom_ops={QKVAttention: QKVAttention.count_flops},
- )
- """
- b, c, *spatial = y[0].shape
- num_spatial = int(np.prod(spatial))
- # We perform two matmuls with the same number of ops.
- # The first computes the weight matrix, the second computes
- # the combination of the value vectors.
- matmul_ops = 2 * b * (num_spatial ** 2) * c
- model.total_ops += th.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
- """
- A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
-
- :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v)
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
- """
- A module which performs QKV attention and splits in a different order.
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
-
- :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.chunk(3, dim=1)
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts",
- (q * scale).view(bs * self.n_heads, ch, length),
- (k * scale).view(bs * self.n_heads, ch, length),
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length))
- return a.reshape(bs, -1, length)
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class UNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
-
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
- :param num_heads_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- :param use_new_attention_order: use a different attention pattern for potentially
- increased efficiency.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.image_size = image_size
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- ch = input_ch = int(channel_mult[0] * model_channels)
- self.input_blocks = nn.ModuleList(
- [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))]
- )
- self._feature_size = ch
- input_block_chans = [ch]
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=int(mult * model_channels),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(mult * model_channels)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
-
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim,
- dropout,
- out_channels=int(model_channels * mult),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(model_channels * mult)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- zero_module(conv_nd(dims, input_ch, out_channels, 3, padding=1)),
- )
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps, y=None):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :param y: an [N] Tensor of labels, if class-conditional.
- :return: an [N x C x ...] Tensor of outputs.
- """
- assert (y is not None) == (
- self.num_classes is not None
- ), "must specify y if and only if the model is class-conditional"
-
- hs = []
- emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
- if self.num_classes is not None:
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- hs.append(h)
- h = self.middle_block(h, emb)
- for module in self.output_blocks:
- h = th.cat([h, hs.pop()], dim=1)
- h = module(h, emb)
- h = h.type(x.dtype)
- return self.out(h)
-
-
-class SuperResModel(UNetModel):
- """
- A UNetModel that performs super-resolution.
-
- Expects an extra kwarg `low_res` to condition on a low-resolution image.
- """
-
- def __init__(self, image_size, in_channels, *args, **kwargs):
- super().__init__(image_size, in_channels * 2, *args, **kwargs)
-
- def forward(self, x, timesteps, low_res=None, **kwargs):
- _, _, new_height, new_width = x.shape
- upsampled = F.interpolate(low_res, (new_height, new_width), mode="bilinear")
- x = th.cat([x, upsampled], dim=1)
- return super().forward(x, timesteps, **kwargs)
-
-
-class EncoderUNetModel(nn.Module):
- """
- The half UNet model with attention and timestep embedding.
-
- For usage, see UNet.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- pool="adaptive",
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- ch = int(channel_mult[0] * model_channels)
- self.input_blocks = nn.ModuleList(
- [TimestepEmbedSequential(conv_nd(dims, in_channels, ch, 3, padding=1))]
- )
- self._feature_size = ch
- input_block_chans = [ch]
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=int(mult * model_channels),
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = int(mult * model_channels)
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
- self.pool = pool
- if pool == "adaptive":
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- nn.AdaptiveAvgPool2d((1, 1)),
- zero_module(conv_nd(dims, ch, out_channels, 1)),
- nn.Flatten(),
- )
- elif pool == "attention":
- assert num_head_channels != -1
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- AttentionPool2d(
- (image_size // ds), ch, num_head_channels, out_channels
- ),
- )
- elif pool == "spatial":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- nn.ReLU(),
- nn.Linear(2048, self.out_channels),
- )
- elif pool == "spatial_v2":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- normalization(2048),
- nn.SiLU(),
- nn.Linear(2048, self.out_channels),
- )
- else:
- raise NotImplementedError(f"Unexpected {pool} pooling")
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :return: an [N x K] Tensor of outputs.
- """
- emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
- results = []
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = self.middle_block(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = th.cat(results, axis=-1)
- return self.out(h)
- else:
- h = h.type(x.dtype)
- return self.out(h)
-
-
-class UNetConfig(PretrainedConfig):
- def __init__(
- self,
- image_size=512,
- in_channels=3,
- model_channels=256,
- out_channels=6,
- num_res_blocks=2,
- attention_resolutions=[16, 32, 64],
- dropout=0.0,
- channel_mult=(0.5, 1, 1, 2, 2, 4, 4),
- num_classes=None,
- use_checkpoint=False,
- use_fp16=True,
- num_heads=4,
- num_head_channels=64,
- num_heads_upsample=-1,
- use_scale_shift_norm=True,
- resblock_updown=True,
- use_new_attention_order=False,
- **kwargs
- ):
- self.image_size = image_size
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.use_fp16 = use_fp16
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
- self.use_scale_shift_norm = use_scale_shift_norm
- self.resblock_updown = resblock_updown
- self.use_new_attention_order = use_new_attention_order
- super().__init__(**kwargs)
-
-
-class HFUNetModel(PreTrainedModel):
- config_class = UNetConfig
-
- def __init__(self, config):
- super().__init__(config)
- self.model = UNetModel(
- image_size=config.image_size,
- in_channels=config.in_channels,
- model_channels=config.model_channels,
- out_channels=config.out_channels,
- num_res_blocks=config.num_res_blocks,
- attention_resolutions=config.attention_resolutions,
- dropout=config.dropout,
- channel_mult=config.channel_mult,
- num_classes=config.num_classes,
- use_checkpoint=config.use_checkpoint,
- use_fp16=config.use_fp16,
- num_heads=config.num_heads,
- num_head_channels=config.num_head_channels,
- num_heads_upsample=config.num_heads_upsample,
- use_scale_shift_norm=config.use_scale_shift_norm,
- resblock_updown=config.resblock_updown,
- use_new_attention_order=config.use_new_attention_order,
- )
-
- def forward(self, x, timesteps, y=None):
- return self.model.forward(x, timesteps, y)
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.model.input_blocks.apply(convert_module_to_f16)
- self.model.middle_block.apply(convert_module_to_f16)
- self.model.output_blocks.apply(convert_module_to_f16)
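The file removed above ends with `UNetConfig` and `HFUNetModel`, thin Hugging Face wrappers around the guided-diffusion `UNetModel`. The sketch below shows one plausible way to instantiate and call them; the import path is assumed, and the tiny channel counts and resolutions are chosen only so the example runs quickly, not because they match the defaults used by the Space.

```python
# Minimal sketch (import path assumed; sizes chosen only for illustration).
import torch as th

from guided_diffusion.unet import UNetConfig, HFUNetModel

config = UNetConfig(
    image_size=64,
    in_channels=3,
    model_channels=32,
    out_channels=6,            # e.g. 3 mean + 3 variance channels for learned-sigma models
    num_res_blocks=1,
    attention_resolutions=[8],
    channel_mult=(1, 2),
    use_fp16=False,            # keep float32 so the sketch also runs on CPU
    num_heads=2,
    num_head_channels=16,
)
model = HFUNetModel(config)

x = th.randn(2, 3, 64, 64)            # a batch of noisy images
timesteps = th.randint(0, 1000, (2,)) # one diffusion timestep per sample
out = model(x, timesteps)             # predicted noise/variance, shape (2, 6, 64, 64)
print(out.shape)
```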
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/44 Lotereyas Biletin Qiymti v dni rtlri.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/44 Lotereyas Biletin Qiymti v dni rtlri.md
deleted file mode 100644
index 7599bf0fa239352336df0825c4d3cdcba56a906f..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/44 Lotereyas Biletin Qiymti v dni rtlri.md
+++ /dev/null
@@ -1,184 +0,0 @@
-
-
4+4 Lottery: Azerbaijan's Most Popular Game
-
If you want to experience the excitement and the rewards of playing the lottery, there is an obvious choice for you: the 4+4 lottery. This game is known for paying out the biggest jackpot in the history of the Azerbaijani lottery. Every week, in the draw broadcast live on XƏZƏR TV, prizes worth millions of manats are awarded. If you want to be part of this game and try your luck, read on. This article explains what the 4+4 lottery is, how it is played, how you can buy a ticket, why you should choose this game, and what the latest results were.
-
What Is the 4+4 Lottery?
-
The 4+4 lottery is a draw-based lottery game organized by Azerlotereya. Its name refers to the two parts of your ticket, each made up of four digits. You can pick these digits yourself or have them generated at random. In the draw, two four-digit numbers are drawn, and prizes are paid according to how the numbers on your ticket match them.
To play the 4+4 lottery, you buy a ticket for 2 manats. One ticket can hold up to 10 games, which appear as sections A, B, C, D, E, F, G, H, I and J. In each game section you choose four digits from 1 to 9, either yourself or by random generation, for example 1-2-3-4 in section A and 5-6-7-8 in section B. In the draw, two four-digit numbers are drawn: the first is the 4+4 jackpot number and the second is the 4+4 second-prize number. Prizes are awarded separately for each game section on your ticket, according to how the digits you chose match the digits drawn, as shown in the table below:
-
Digits matched in a game section: prize amount
4+4 (Jackpot): at least 100,000 manats
4+4 (Second prize): at least 10,000 manats
3+3: 100 manats
2+2: 10 manats
1+1: the ticket price (2 manats)
0+0: the ticket price (2 manats)
-
Jackpot and Prizes
-
The biggest attraction of the 4+4 lottery is the jackpot. The jackpot is shared among all ticket holders whose numbers match the first number drawn. Its minimum amount is 100,000 manats; if it is not won in a draw, it rolls over to the next draw and grows. Its maximum amount is 500,000 manats, and once it reaches that amount it must be won in that draw.
-
Besides the jackpot there is a second prize, shared among all ticket holders whose numbers match the second number drawn. The minimum second prize is 10,000 manats; if it is not won, it also rolls over to the next draw and grows.
-
There are also smaller prizes, paid for 3+3, 2+2 and 1+1 matches. These amounts are fixed and are credited to every ticket holder who wins them.
-
How Can You Play?
There are two main ways to play the 4+4 lottery: buying a ticket online or buying one at an Azerlotereya point of sale. Each option has its own advantages and disadvantages, and both are described below.
-
Buying a Ticket Online at Misli.az
-
This is the easiest and most convenient way to play. You register on the site, pay with your bank card and buy as many tickets as you like. You can pick the digits yourself or have them generated at random, and you can also choose which draw to enter. Your ticket number is recorded electronically, and after the draw any winnings are credited to your account. The site also lets you follow the numbers and prizes of every draw.
-
Advantages of this option:
-
-
-
You can buy a ticket without leaving home.
-
You can buy tickets whenever you like and in any quantity.
-
You can pick the digits yourself or have them generated at random.
-
Your ticket number is recorded electronically, so you cannot lose it.
-
After the draw, winnings are credited to your account automatically.
-
You can follow the draw numbers and prizes online.
-
-
Disadvantages of this option:
-
-
You have to pay with a bank card, which can occasionally cause problems.
-
You cannot watch the draw on the live television broadcast.
-
Your ticket exists only electronically, so you never hold a physical ticket in your hand.
-
-
Buying a Ticket at Azerlotereya Points of Sale
This is the classic and best-known way to play. You go to any Azerlotereya point of sale and buy a ticket for 2 manats. The ticket comes with 10 randomly generated game sections; you can play as many of them as you like, or all of them. If you want different numbers, you can ask the clerk to change them or buy another ticket. After the draw, you have to bring the ticket back to a point of sale to check it, and if you have won you must present the required documents to claim the prize.
-
Advantages of this option:
-
-
You are not required to pay by bank card; you can buy a ticket with cash.
-
You can watch the draw live on XƏZƏR TV.
-
Your ticket is printed on paper, so you can hold it in your hand.
-
-
Disadvantages of this option:
-
-
You have to leave home and go to a point of sale.
-
You are limited by the point of sale's opening hours when buying tickets.
-
You cannot pick the digits yourself; you have to accept the randomly generated numbers.
-
Your ticket is a paper slip, so it can be lost or damaged.
-
After the draw, you have to bring the ticket to a point of sale to check it and, if you have won, provide the documents needed to claim the prize.
-
-
Checking Your Ticket
-
Whenever you play the 4+4 lottery, it is important to check your ticket after the draw, because if you have won you need to provide the relevant documents to claim the prize. There are two main ways to check a ticket: online, or at an Azerlotereya point of sale.
-
To check online, log in to misli.az and enter your ticket code. The site shows the numbers and prizes of the draw, and in your account you can see that any winnings have been credited.
-
To check at an Azerlotereya point of sale, bring your ticket after the draw and the clerk will check it for you. If you have won, you fill in the claim form provided and show a document proving your identity. If your prize is below 10,000 manats you can collect it at the point of sale; if it is above 10,000 manats you collect it at the Azerlotereya head office.
-
Why Should You Choose the 4+4 Lottery?
-
The 4+4 lottery is one of the best ways to play the lottery in Azerbaijan, and there are several reasons to choose it:
-
A Strong Jackpot
-
The jackpot is the game's main attraction. Its minimum amount is 100,000 manats; if it is not won, it rolls over to the next draw and grows, and once it reaches the 500,000-manat cap it must be won in that draw. The game is known for awarding the biggest jackpot in the history of the Azerbaijani lottery: in May 2022, a single ticket holder won a 500,000-manat jackpot.
-
A Simple and Entertaining Format
-
The game format is simple and entertaining. You only choose four digits per game section, or have them generated at random. In the draw, just two four-digit numbers are drawn, and prizes are paid according to how the numbers on your ticket match them. You can follow each draw's numbers and prizes online or watch the draw live on XƏZƏR TV.
-
Live Broadcast and Transparent Results
-
Another feature of the 4+4 lottery is its live broadcast and transparent results. In the draw shown live on XƏZƏR TV every Saturday at 21:00, you can see how the draw works and how the numbers are produced. After the draw you can also check the numbers and prizes on misli.az or at Azerlotereya points of sale.
-
The Latest 4+4 Lottery Results
-
Want to know the latest 4+4 results? The numbers and prizes of the most recent draw are listed below.
-
Numbers and Prizes of the Latest Draw
-
The numbers of the latest draw were:
-
Jackpot number: 1-2-3-4
-
Second prize number: 5-6-7-8
-
-
No ticket won the 4+4 jackpot in the latest draw, so the jackpot rolled over to the next draw and reached 500,000 manats. One ticket won the second prize, and its holder received 592.74 manats. The number of tickets that won the remaining prize categories was as follows:
-
Digits matched in a game section: number of winning tickets
3+3: 3
2+2: 206
1+1: 1411
0+0: 104
-
-
Who Won the Latest Jackpot?
-
The latest jackpot winner is the lucky ticket holder who won the 500,000-manat jackpot in May 2022. He is a man from the Khachmaz district who bought his ticket at an Azerlotereya point of sale. After the draw he came to the Azerlotereya head office, filled in the claim form and showed a document proving his identity. He said he was delighted to have won and that he would share the money with his family.
-
Tezliklə Verilən Suallar
-
Bu bölməd, sizin 4+4 lotereyası il bağlı tezlikl veriln suallara cavab tapa bilrsiniz.
-
Bir bilet üçün neç oyun seç bilrm?
-
Bir bilet üçün maksimum 10 oyun seç bilrsiniz. Bu oyunlar, biletinizdki A, B, C, D, E, F, G, H, I v J hisslri olacaq.
-
How many numbers are drawn in one draw?
-
Two four-digit numbers are drawn in each draw. The first is the 4+4 jackpot number and the second is the 4+4 second-prize number.
-
What are the minimum and maximum jackpot amounts?
-
The minimum jackpot is 100,000 manat; if it is not won in a draw, it rolls over to the next draw and grows. The maximum jackpot is 500,000 manat, and once it reaches that amount it must be won in the draw.
-
Where can I watch the live broadcast of the draw?
-
You can watch the live broadcast of the draw on XƏZƏR TV. It starts every Saturday at 21:00, and during it you can see how the draw works and how the numbers are drawn.
-
Which document do I need to present when checking my ticket?
-
To check your ticket, you must present a document proving your identity. This can be an identity card, a passport or a driving licence.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Music-To-Lyrics/README.md b/spaces/fffiloni/Music-To-Lyrics/README.md
deleted file mode 100644
index 83f387efc9f92639e6726d429bea956ac5a4fa3a..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Music-To-Lyrics/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Music To Lyrics
-emoji: 🎤
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-python_version: 3.10.12
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/tls.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/tls.d.ts
deleted file mode 100644
index 2c55eb9370b4ea89205ad0ebe8e117b007aaed3a..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/tls.d.ts
+++ /dev/null
@@ -1,1107 +0,0 @@
-/**
- * The `tls` module provides an implementation of the Transport Layer Security
- * (TLS) and Secure Socket Layer (SSL) protocols that is built on top of OpenSSL.
- * The module can be accessed using:
- *
- * ```js
- * const tls = require('tls');
- * ```
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/tls.js)
- */
-declare module 'tls' {
- import { X509Certificate } from 'node:crypto';
- import * as net from 'node:net';
- import * as stream from 'stream';
- const CLIENT_RENEG_LIMIT: number;
- const CLIENT_RENEG_WINDOW: number;
- interface Certificate {
- /**
- * Country code.
- */
- C: string;
- /**
- * Street.
- */
- ST: string;
- /**
- * Locality.
- */
- L: string;
- /**
- * Organization.
- */
- O: string;
- /**
- * Organizational unit.
- */
- OU: string;
- /**
- * Common name.
- */
- CN: string;
- }
- interface PeerCertificate {
- /**
- * `true` if a Certificate Authority (CA), `false` otherwise.
- * @since v18.13.0
- */
- ca: boolean;
- /**
- * The DER encoded X.509 certificate data.
- */
- raw: Buffer;
- /**
- * The certificate subject.
- */
- subject: Certificate;
- /**
- * The certificate issuer, described in the same terms as the `subject`.
- */
- issuer: Certificate;
- /**
- * The date-time the certificate is valid from.
- */
- valid_from: string;
- /**
- * The date-time the certificate is valid to.
- */
- valid_to: string;
- /**
- * The certificate serial number, as a hex string.
- */
- serialNumber: string;
- /**
- * The SHA-1 digest of the DER encoded certificate.
- * It is returned as a `:` separated hexadecimal string.
- */
- fingerprint: string;
- /**
- * The SHA-256 digest of the DER encoded certificate.
- * It is returned as a `:` separated hexadecimal string.
- */
- fingerprint256: string;
- /**
- * The SHA-512 digest of the DER encoded certificate.
- * It is returned as a `:` separated hexadecimal string.
- */
- fingerprint512: string;
- /**
- * The extended key usage, a set of OIDs.
- */
- ext_key_usage?: string[];
- /**
- * A string containing concatenated names for the subject,
- * an alternative to the `subject` names.
- */
- subjectaltname?: string;
- /**
- * An array describing the AuthorityInfoAccess, used with OCSP.
- */
- infoAccess?: NodeJS.Dict<string[]>;
- /**
- * For RSA keys: The RSA bit size.
- *
- * For EC keys: The key size in bits.
- */
- bits?: number;
- /**
- * The RSA exponent, as a string in hexadecimal number notation.
- */
- exponent?: string;
- /**
- * The RSA modulus, as a hexadecimal string.
- */
- modulus?: string;
- /**
- * The public key.
- */
- pubkey?: Buffer;
- /**
- * The ASN.1 name of the OID of the elliptic curve.
- * Well-known curves are identified by an OID.
- * While it is unusual, it is possible that the curve
- * is identified by its mathematical properties,
- * in which case it will not have an OID.
- */
- asn1Curve?: string;
- /**
- * The NIST name for the elliptic curve,if it has one
- * (not all well-known curves have been assigned names by NIST).
- */
- nistCurve?: string;
- }
- interface DetailedPeerCertificate extends PeerCertificate {
- /**
- * The issuer certificate object.
- * For self-signed certificates, this may be a circular reference.
- */
- issuerCertificate: DetailedPeerCertificate;
- }
- interface CipherNameAndProtocol {
- /**
- * The cipher name.
- */
- name: string;
- /**
- * SSL/TLS protocol version.
- */
- version: string;
- /**
- * IETF name for the cipher suite.
- */
- standardName: string;
- }
- interface EphemeralKeyInfo {
- /**
- * The supported types are 'DH' and 'ECDH'.
- */
- type: string;
- /**
- * The name property is available only when type is 'ECDH'.
- */
- name?: string | undefined;
- /**
- * The size of parameter of an ephemeral key exchange.
- */
- size: number;
- }
- interface KeyObject {
- /**
- * Private keys in PEM format.
- */
- pem: string | Buffer;
- /**
- * Optional passphrase.
- */
- passphrase?: string | undefined;
- }
- interface PxfObject {
- /**
- * PFX or PKCS12 encoded private key and certificate chain.
- */
- buf: string | Buffer;
- /**
- * Optional passphrase.
- */
- passphrase?: string | undefined;
- }
- interface TLSSocketOptions extends SecureContextOptions, CommonConnectionOptions {
- /**
- * If true the TLS socket will be instantiated in server-mode.
- * Defaults to false.
- */
- isServer?: boolean | undefined;
- /**
- * An optional net.Server instance.
- */
- server?: net.Server | undefined;
- /**
- * An optional Buffer instance containing a TLS session.
- */
- session?: Buffer | undefined;
- /**
- * If true, specifies that the OCSP status request extension will be
- * added to the client hello and an 'OCSPResponse' event will be
- * emitted on the socket before establishing a secure communication
- */
- requestOCSP?: boolean | undefined;
- }
- /**
- * Performs transparent encryption of written data and all required TLS
- * negotiation.
- *
- * Instances of `tls.TLSSocket` implement the duplex `Stream` interface.
- *
- * Methods that return TLS connection metadata (e.g.{@link TLSSocket.getPeerCertificate} will only return data while the
- * connection is open.
- * @since v0.11.4
- */
- class TLSSocket extends net.Socket {
- /**
- * Construct a new tls.TLSSocket object from an existing TCP socket.
- */
- constructor(socket: net.Socket, options?: TLSSocketOptions);
- /**
- * This property is `true` if the peer certificate was signed by one of the CAs
- * specified when creating the `tls.TLSSocket` instance, otherwise `false`.
- * @since v0.11.4
- */
- authorized: boolean;
- /**
- * Returns the reason why the peer's certificate was not been verified. This
- * property is set only when `tlsSocket.authorized === false`.
- * @since v0.11.4
- */
- authorizationError: Error;
- /**
- * Always returns `true`. This may be used to distinguish TLS sockets from regular`net.Socket` instances.
- * @since v0.11.4
- */
- encrypted: true;
- /**
- * String containing the selected ALPN protocol.
- * Before a handshake has completed, this value is always null.
- * When a handshake is completed but not ALPN protocol was selected, tlsSocket.alpnProtocol equals false.
- */
- alpnProtocol: string | false | null;
- /**
- * Returns an object representing the local certificate. The returned object has
- * some properties corresponding to the fields of the certificate.
- *
- * See {@link TLSSocket.getPeerCertificate} for an example of the certificate
- * structure.
- *
- * If there is no local certificate, an empty object will be returned. If the
- * socket has been destroyed, `null` will be returned.
- * @since v11.2.0
- */
- getCertificate(): PeerCertificate | object | null;
- /**
- * Returns an object containing information on the negotiated cipher suite.
- *
- * For example:
- *
- * ```json
- * {
- * "name": "AES128-SHA256",
- * "standardName": "TLS_RSA_WITH_AES_128_CBC_SHA256",
- * "version": "TLSv1.2"
- * }
- * ```
- *
- * See [SSL\_CIPHER\_get\_name](https://www.openssl.org/docs/man1.1.1/man3/SSL_CIPHER_get_name.html) for more information.
- * @since v0.11.4
- */
- getCipher(): CipherNameAndProtocol;
- /**
- * Returns an object representing the type, name, and size of parameter of
- * an ephemeral key exchange in `perfect forward secrecy` on a client
- * connection. It returns an empty object when the key exchange is not
- * ephemeral. As this is only supported on a client socket; `null` is returned
- * if called on a server socket. The supported types are `'DH'` and `'ECDH'`. The`name` property is available only when type is `'ECDH'`.
- *
- * For example: `{ type: 'ECDH', name: 'prime256v1', size: 256 }`.
- * @since v5.0.0
- */
- getEphemeralKeyInfo(): EphemeralKeyInfo | object | null;
- /**
- * As the `Finished` messages are message digests of the complete handshake
- * (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can
- * be used for external authentication procedures when the authentication
- * provided by SSL/TLS is not desired or is not enough.
- *
- * Corresponds to the `SSL_get_finished` routine in OpenSSL and may be used
- * to implement the `tls-unique` channel binding from [RFC 5929](https://tools.ietf.org/html/rfc5929).
- * @since v9.9.0
- * @return The latest `Finished` message that has been sent to the socket as part of a SSL/TLS handshake, or `undefined` if no `Finished` message has been sent yet.
- */
- getFinished(): Buffer | undefined;
- /**
- * Returns an object representing the peer's certificate. If the peer does not
- * provide a certificate, an empty object will be returned. If the socket has been
- * destroyed, `null` will be returned.
- *
- * If the full certificate chain was requested, each certificate will include an`issuerCertificate` property containing an object representing its issuer's
- * certificate.
- * @since v0.11.4
- * @param detailed Include the full certificate chain if `true`, otherwise include just the peer's certificate.
- * @return A certificate object.
- */
- getPeerCertificate(detailed: true): DetailedPeerCertificate;
- getPeerCertificate(detailed?: false): PeerCertificate;
- getPeerCertificate(detailed?: boolean): PeerCertificate | DetailedPeerCertificate;
- /**
- * As the `Finished` messages are message digests of the complete handshake
- * (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can
- * be used for external authentication procedures when the authentication
- * provided by SSL/TLS is not desired or is not enough.
- *
- * Corresponds to the `SSL_get_peer_finished` routine in OpenSSL and may be used
- * to implement the `tls-unique` channel binding from [RFC 5929](https://tools.ietf.org/html/rfc5929).
- * @since v9.9.0
- * @return The latest `Finished` message that is expected or has actually been received from the socket as part of a SSL/TLS handshake, or `undefined` if there is no `Finished` message so
- * far.
- */
- getPeerFinished(): Buffer | undefined;
- /**
- * Returns a string containing the negotiated SSL/TLS protocol version of the
- * current connection. The value `'unknown'` will be returned for connected
- * sockets that have not completed the handshaking process. The value `null` will
- * be returned for server sockets or disconnected client sockets.
- *
- * Protocol versions are:
- *
- * * `'SSLv3'`
- * * `'TLSv1'`
- * * `'TLSv1.1'`
- * * `'TLSv1.2'`
- * * `'TLSv1.3'`
- *
- * See the OpenSSL [`SSL_get_version`](https://www.openssl.org/docs/man1.1.1/man3/SSL_get_version.html) documentation for more information.
- * @since v5.7.0
- */
- getProtocol(): string | null;
- /**
- * Returns the TLS session data or `undefined` if no session was
- * negotiated. On the client, the data can be provided to the `session` option of {@link connect} to resume the connection. On the server, it may be useful
- * for debugging.
- *
- * See `Session Resumption` for more information.
- *
- * Note: `getSession()` works only for TLSv1.2 and below. For TLSv1.3, applications
- * must use the `'session'` event (it also works for TLSv1.2 and below).
- * @since v0.11.4
- */
- getSession(): Buffer | undefined;
- /**
- * See [SSL\_get\_shared\_sigalgs](https://www.openssl.org/docs/man1.1.1/man3/SSL_get_shared_sigalgs.html) for more information.
- * @since v12.11.0
- * @return List of signature algorithms shared between the server and the client in the order of decreasing preference.
- */
- getSharedSigalgs(): string[];
- /**
- * For a client, returns the TLS session ticket if one is available, or`undefined`. For a server, always returns `undefined`.
- *
- * It may be useful for debugging.
- *
- * See `Session Resumption` for more information.
- * @since v0.11.4
- */
- getTLSTicket(): Buffer | undefined;
- /**
- * See `Session Resumption` for more information.
- * @since v0.5.6
- * @return `true` if the session was reused, `false` otherwise.
- */
- isSessionReused(): boolean;
- /**
- * The `tlsSocket.renegotiate()` method initiates a TLS renegotiation process.
- * Upon completion, the `callback` function will be passed a single argument
- * that is either an `Error` (if the request failed) or `null`.
- *
- * This method can be used to request a peer's certificate after the secure
- * connection has been established.
- *
- * When running as the server, the socket will be destroyed with an error after`handshakeTimeout` timeout.
- *
- * For TLSv1.3, renegotiation cannot be initiated, it is not supported by the
- * protocol.
- * @since v0.11.8
- * @param callback If `renegotiate()` returned `true`, callback is attached once to the `'secure'` event. If `renegotiate()` returned `false`, `callback` will be called in the next tick with
- * an error, unless the `tlsSocket` has been destroyed, in which case `callback` will not be called at all.
- * @return `true` if renegotiation was initiated, `false` otherwise.
- */
- renegotiate(
- options: {
- rejectUnauthorized?: boolean | undefined;
- requestCert?: boolean | undefined;
- },
- callback: (err: Error | null) => void
- ): undefined | boolean;
- /**
- * The `tlsSocket.setMaxSendFragment()` method sets the maximum TLS fragment size.
- * Returns `true` if setting the limit succeeded; `false` otherwise.
- *
- * Smaller fragment sizes decrease the buffering latency on the client: larger
- * fragments are buffered by the TLS layer until the entire fragment is received
- * and its integrity is verified; large fragments can span multiple roundtrips
- * and their processing can be delayed due to packet loss or reordering. However,
- * smaller fragments add extra TLS framing bytes and CPU overhead, which may
- * decrease overall server throughput.
- * @since v0.11.11
- * @param [size=16384] The maximum TLS fragment size. The maximum value is `16384`.
- */
- setMaxSendFragment(size: number): boolean;
- /**
- * Disables TLS renegotiation for this `TLSSocket` instance. Once called, attempts
- * to renegotiate will trigger an `'error'` event on the `TLSSocket`.
- * @since v8.4.0
- */
- disableRenegotiation(): void;
- /**
- * When enabled, TLS packet trace information is written to `stderr`. This can be
- * used to debug TLS connection problems.
- *
- * The format of the output is identical to the output of`openssl s_client -trace` or `openssl s_server -trace`. While it is produced by
- * OpenSSL's `SSL_trace()` function, the format is undocumented, can change
- * without notice, and should not be relied on.
- * @since v12.2.0
- */
- enableTrace(): void;
- /**
- * Returns the peer certificate as an `X509Certificate` object.
- *
- * If there is no peer certificate, or the socket has been destroyed,`undefined` will be returned.
- * @since v15.9.0
- */
- getPeerX509Certificate(): X509Certificate | undefined;
- /**
- * Returns the local certificate as an `X509Certificate` object.
- *
- * If there is no local certificate, or the socket has been destroyed,`undefined` will be returned.
- * @since v15.9.0
- */
- getX509Certificate(): X509Certificate | undefined;
- /**
- * Keying material is used for validations to prevent different kind of attacks in
- * network protocols, for example in the specifications of IEEE 802.1X.
- *
- * Example
- *
- * ```js
- * const keyingMaterial = tlsSocket.exportKeyingMaterial(
- * 128,
- * 'client finished');
- *
- * /*
- * Example return value of keyingMaterial:
- *
- *
- * ```
- *
- * See the OpenSSL [`SSL_export_keying_material`](https://www.openssl.org/docs/man1.1.1/man3/SSL_export_keying_material.html) documentation for more
- * information.
- * @since v13.10.0, v12.17.0
- * @param length number of bytes to retrieve from keying material
- * @param label an application specific label, typically this will be a value from the [IANA Exporter Label
- * Registry](https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#exporter-labels).
- * @param context Optionally provide a context.
- * @return requested bytes of the keying material
- */
- exportKeyingMaterial(length: number, label: string, context: Buffer): Buffer;
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'OCSPResponse', listener: (response: Buffer) => void): this;
- addListener(event: 'secureConnect', listener: () => void): this;
- addListener(event: 'session', listener: (session: Buffer) => void): this;
- addListener(event: 'keylog', listener: (line: Buffer) => void): this;
- emit(event: string | symbol, ...args: any[]): boolean;
- emit(event: 'OCSPResponse', response: Buffer): boolean;
- emit(event: 'secureConnect'): boolean;
- emit(event: 'session', session: Buffer): boolean;
- emit(event: 'keylog', line: Buffer): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'OCSPResponse', listener: (response: Buffer) => void): this;
- on(event: 'secureConnect', listener: () => void): this;
- on(event: 'session', listener: (session: Buffer) => void): this;
- on(event: 'keylog', listener: (line: Buffer) => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'OCSPResponse', listener: (response: Buffer) => void): this;
- once(event: 'secureConnect', listener: () => void): this;
- once(event: 'session', listener: (session: Buffer) => void): this;
- once(event: 'keylog', listener: (line: Buffer) => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'OCSPResponse', listener: (response: Buffer) => void): this;
- prependListener(event: 'secureConnect', listener: () => void): this;
- prependListener(event: 'session', listener: (session: Buffer) => void): this;
- prependListener(event: 'keylog', listener: (line: Buffer) => void): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'OCSPResponse', listener: (response: Buffer) => void): this;
- prependOnceListener(event: 'secureConnect', listener: () => void): this;
- prependOnceListener(event: 'session', listener: (session: Buffer) => void): this;
- prependOnceListener(event: 'keylog', listener: (line: Buffer) => void): this;
- }
- interface CommonConnectionOptions {
- /**
- * An optional TLS context object from tls.createSecureContext()
- */
- secureContext?: SecureContext | undefined;
- /**
- * When enabled, TLS packet trace information is written to `stderr`. This can be
- * used to debug TLS connection problems.
- * @default false
- */
- enableTrace?: boolean | undefined;
- /**
- * If true the server will request a certificate from clients that
- * connect and attempt to verify that certificate. Defaults to
- * false.
- */
- requestCert?: boolean | undefined;
- /**
- * An array of strings or a Buffer naming possible ALPN protocols.
- * (Protocols should be ordered by their priority.)
- */
- ALPNProtocols?: string[] | Uint8Array[] | Uint8Array | undefined;
- /**
- * SNICallback(servername, cb) A function that will be
- * called if the client supports SNI TLS extension. Two arguments
- * will be passed when called: servername and cb. SNICallback should
- * invoke cb(null, ctx), where ctx is a SecureContext instance.
- * (tls.createSecureContext(...) can be used to get a proper
- * SecureContext.) If SNICallback wasn't provided the default callback
- * with high-level API will be used (see below).
- */
- SNICallback?: ((servername: string, cb: (err: Error | null, ctx?: SecureContext) => void) => void) | undefined;
- /**
- * If true the server will reject any connection which is not
- * authorized with the list of supplied CAs. This option only has an
- * effect if requestCert is true.
- * @default true
- */
- rejectUnauthorized?: boolean | undefined;
- }
- interface TlsOptions extends SecureContextOptions, CommonConnectionOptions, net.ServerOpts {
- /**
- * Abort the connection if the SSL/TLS handshake does not finish in the
- * specified number of milliseconds. A 'tlsClientError' is emitted on
- * the tls.Server object whenever a handshake times out. Default:
- * 120000 (120 seconds).
- */
- handshakeTimeout?: number | undefined;
- /**
- * The number of seconds after which a TLS session created by the
- * server will no longer be resumable. See Session Resumption for more
- * information. Default: 300.
- */
- sessionTimeout?: number | undefined;
- /**
- * 48-bytes of cryptographically strong pseudo-random data.
- */
- ticketKeys?: Buffer | undefined;
- /**
- *
- * @param socket
- * @param identity identity parameter sent from the client.
- * @return pre-shared key that must either be
- * a buffer or `null` to stop the negotiation process. Returned PSK must be
- * compatible with the selected cipher's digest.
- *
- * When negotiating TLS-PSK (pre-shared keys), this function is called
- * with the identity provided by the client.
- * If the return value is `null` the negotiation process will stop and an
- * "unknown_psk_identity" alert message will be sent to the other party.
- * If the server wishes to hide the fact that the PSK identity was not known,
- * the callback must provide some random data as `psk` to make the connection
- * fail with "decrypt_error" before negotiation is finished.
- * PSK ciphers are disabled by default, and using TLS-PSK thus
- * requires explicitly specifying a cipher suite with the `ciphers` option.
- * More information can be found in the RFC 4279.
- */
- pskCallback?(socket: TLSSocket, identity: string): DataView | NodeJS.TypedArray | null;
- /**
- * hint to send to a client to help
- * with selecting the identity during TLS-PSK negotiation. Will be ignored
- * in TLS 1.3. Upon failing to set pskIdentityHint `tlsClientError` will be
- * emitted with `ERR_TLS_PSK_SET_IDENTIY_HINT_FAILED` code.
- */
- pskIdentityHint?: string | undefined;
- }
- interface PSKCallbackNegotation {
- psk: DataView | NodeJS.TypedArray;
- identity: string;
- }
- interface ConnectionOptions extends SecureContextOptions, CommonConnectionOptions {
- host?: string | undefined;
- port?: number | undefined;
- path?: string | undefined; // Creates unix socket connection to path. If this option is specified, `host` and `port` are ignored.
- socket?: stream.Duplex | undefined; // Establish secure connection on a given socket rather than creating a new socket
- checkServerIdentity?: typeof checkServerIdentity | undefined;
- servername?: string | undefined; // SNI TLS Extension
- session?: Buffer | undefined;
- minDHSize?: number | undefined;
- lookup?: net.LookupFunction | undefined;
- timeout?: number | undefined;
- /**
- * When negotiating TLS-PSK (pre-shared keys), this function is called
- * with optional identity `hint` provided by the server or `null`
- * in case of TLS 1.3 where `hint` was removed.
- * It will be necessary to provide a custom `tls.checkServerIdentity()`
- * for the connection as the default one will try to check hostname/IP
- * of the server against the certificate but that's not applicable for PSK
- * because there won't be a certificate present.
- * More information can be found in the RFC 4279.
- *
- * @param hint message sent from the server to help client
- * decide which identity to use during negotiation.
- * Always `null` if TLS 1.3 is used.
- * @returns Return `null` to stop the negotiation process. `psk` must be
- * compatible with the selected cipher's digest.
- * `identity` must use UTF-8 encoding.
- */
- pskCallback?(hint: string | null): PSKCallbackNegotation | null;
- }
- /**
- * Accepts encrypted connections using TLS or SSL.
- * @since v0.3.2
- */
- class Server extends net.Server {
- constructor(secureConnectionListener?: (socket: TLSSocket) => void);
- constructor(options: TlsOptions, secureConnectionListener?: (socket: TLSSocket) => void);
- /**
- * The `server.addContext()` method adds a secure context that will be used if
- * the client request's SNI name matches the supplied `hostname` (or wildcard).
- *
- * When there are multiple matching contexts, the most recently added one is
- * used.
- * @since v0.5.3
- * @param hostname A SNI host name or wildcard (e.g. `'*'`)
- * @param context An object containing any of the possible properties from the {@link createSecureContext} `options` arguments (e.g. `key`, `cert`, `ca`, etc).
- */
- addContext(hostname: string, context: SecureContextOptions): void;
- /**
- * Returns the session ticket keys.
- *
- * See `Session Resumption` for more information.
- * @since v3.0.0
- * @return A 48-byte buffer containing the session ticket keys.
- */
- getTicketKeys(): Buffer;
- /**
- * The `server.setSecureContext()` method replaces the secure context of an
- * existing server. Existing connections to the server are not interrupted.
- * @since v11.0.0
- * @param options An object containing any of the possible properties from the {@link createSecureContext} `options` arguments (e.g. `key`, `cert`, `ca`, etc).
- */
- setSecureContext(options: SecureContextOptions): void;
- /**
- * Sets the session ticket keys.
- *
- * Changes to the ticket keys are effective only for future server connections.
- * Existing or currently pending server connections will use the previous keys.
- *
- * See `Session Resumption` for more information.
- * @since v3.0.0
- * @param keys A 48-byte buffer containing the session ticket keys.
- */
- setTicketKeys(keys: Buffer): void;
- /**
- * events.EventEmitter
- * 1. tlsClientError
- * 2. newSession
- * 3. OCSPRequest
- * 4. resumeSession
- * 5. secureConnection
- * 6. keylog
- */
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: TLSSocket) => void): this;
- addListener(event: 'newSession', listener: (sessionId: Buffer, sessionData: Buffer, callback: () => void) => void): this;
- addListener(event: 'OCSPRequest', listener: (certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void) => void): this;
- addListener(event: 'resumeSession', listener: (sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void) => void): this;
- addListener(event: 'secureConnection', listener: (tlsSocket: TLSSocket) => void): this;
- addListener(event: 'keylog', listener: (line: Buffer, tlsSocket: TLSSocket) => void): this;
- emit(event: string | symbol, ...args: any[]): boolean;
- emit(event: 'tlsClientError', err: Error, tlsSocket: TLSSocket): boolean;
- emit(event: 'newSession', sessionId: Buffer, sessionData: Buffer, callback: () => void): boolean;
- emit(event: 'OCSPRequest', certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void): boolean;
- emit(event: 'resumeSession', sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void): boolean;
- emit(event: 'secureConnection', tlsSocket: TLSSocket): boolean;
- emit(event: 'keylog', line: Buffer, tlsSocket: TLSSocket): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'tlsClientError', listener: (err: Error, tlsSocket: TLSSocket) => void): this;
- on(event: 'newSession', listener: (sessionId: Buffer, sessionData: Buffer, callback: () => void) => void): this;
- on(event: 'OCSPRequest', listener: (certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void) => void): this;
- on(event: 'resumeSession', listener: (sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void) => void): this;
- on(event: 'secureConnection', listener: (tlsSocket: TLSSocket) => void): this;
- on(event: 'keylog', listener: (line: Buffer, tlsSocket: TLSSocket) => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'tlsClientError', listener: (err: Error, tlsSocket: TLSSocket) => void): this;
- once(event: 'newSession', listener: (sessionId: Buffer, sessionData: Buffer, callback: () => void) => void): this;
- once(event: 'OCSPRequest', listener: (certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void) => void): this;
- once(event: 'resumeSession', listener: (sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void) => void): this;
- once(event: 'secureConnection', listener: (tlsSocket: TLSSocket) => void): this;
- once(event: 'keylog', listener: (line: Buffer, tlsSocket: TLSSocket) => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: TLSSocket) => void): this;
- prependListener(event: 'newSession', listener: (sessionId: Buffer, sessionData: Buffer, callback: () => void) => void): this;
- prependListener(event: 'OCSPRequest', listener: (certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void) => void): this;
- prependListener(event: 'resumeSession', listener: (sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void) => void): this;
- prependListener(event: 'secureConnection', listener: (tlsSocket: TLSSocket) => void): this;
- prependListener(event: 'keylog', listener: (line: Buffer, tlsSocket: TLSSocket) => void): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: TLSSocket) => void): this;
- prependOnceListener(event: 'newSession', listener: (sessionId: Buffer, sessionData: Buffer, callback: () => void) => void): this;
- prependOnceListener(event: 'OCSPRequest', listener: (certificate: Buffer, issuer: Buffer, callback: (err: Error | null, resp: Buffer) => void) => void): this;
- prependOnceListener(event: 'resumeSession', listener: (sessionId: Buffer, callback: (err: Error | null, sessionData: Buffer | null) => void) => void): this;
- prependOnceListener(event: 'secureConnection', listener: (tlsSocket: TLSSocket) => void): this;
- prependOnceListener(event: 'keylog', listener: (line: Buffer, tlsSocket: TLSSocket) => void): this;
- }
- /**
- * @deprecated since v0.11.3 Use `tls.TLSSocket` instead.
- */
- interface SecurePair {
- encrypted: TLSSocket;
- cleartext: TLSSocket;
- }
- type SecureVersion = 'TLSv1.3' | 'TLSv1.2' | 'TLSv1.1' | 'TLSv1';
- interface SecureContextOptions {
- /**
- * Optionally override the trusted CA certificates. Default is to trust
- * the well-known CAs curated by Mozilla. Mozilla's CAs are completely
- * replaced when CAs are explicitly specified using this option.
- */
- ca?: string | Buffer | Array<string | Buffer> | undefined;
- /**
- * Cert chains in PEM format. One cert chain should be provided per
- * private key. Each cert chain should consist of the PEM formatted
- * certificate for a provided private key, followed by the PEM
- * formatted intermediate certificates (if any), in order, and not
- * including the root CA (the root CA must be pre-known to the peer,
- * see ca). When providing multiple cert chains, they do not have to
- * be in the same order as their private keys in key. If the
- * intermediate certificates are not provided, the peer will not be
- * able to validate the certificate, and the handshake will fail.
- */
- cert?: string | Buffer | Array<string | Buffer> | undefined;
- /**
- * Colon-separated list of supported signature algorithms. The list
- * can contain digest algorithms (SHA256, MD5 etc.), public key
- * algorithms (RSA-PSS, ECDSA etc.), combination of both (e.g
- * 'RSA+SHA384') or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- */
- sigalgs?: string | undefined;
- /**
- * Cipher suite specification, replacing the default. For more
- * information, see modifying the default cipher suite. Permitted
- * ciphers can be obtained via tls.getCiphers(). Cipher names must be
- * uppercased in order for OpenSSL to accept them.
- */
- ciphers?: string | undefined;
- /**
- * Name of an OpenSSL engine which can provide the client certificate.
- */
- clientCertEngine?: string | undefined;
- /**
- * PEM formatted CRLs (Certificate Revocation Lists).
- */
- crl?: string | Buffer | Array<string | Buffer> | undefined;
- /**
- * Diffie Hellman parameters, required for Perfect Forward Secrecy. Use
- * openssl dhparam to create the parameters. The key length must be
- * greater than or equal to 1024 bits or else an error will be thrown.
- * Although 1024 bits is permissible, use 2048 bits or larger for
- * stronger security. If omitted or invalid, the parameters are
- * silently discarded and DHE ciphers will not be available.
- */
- dhparam?: string | Buffer | undefined;
- /**
- * A string describing a named curve or a colon separated list of curve
- * NIDs or names, for example P-521:P-384:P-256, to use for ECDH key
- * agreement. Set to auto to select the curve automatically. Use
- * crypto.getCurves() to obtain a list of available curve names. On
- * recent releases, openssl ecparam -list_curves will also display the
- * name and description of each available elliptic curve. Default:
- * tls.DEFAULT_ECDH_CURVE.
- */
- ecdhCurve?: string | undefined;
- /**
- * Attempt to use the server's cipher suite preferences instead of the
- * client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be
- * set in secureOptions
- */
- honorCipherOrder?: boolean | undefined;
- /**
- * Private keys in PEM format. PEM allows the option of private keys
- * being encrypted. Encrypted keys will be decrypted with
- * options.passphrase. Multiple keys using different algorithms can be
- * provided either as an array of unencrypted key strings or buffers,
- * provided either as an array of unencrypted key strings or buffers,
- * or an array of objects in the form {pem: <string|buffer>[,
- * passphrase: <string>]}. The object form can only occur in an array.
- * object.passphrase is optional. Encrypted keys will be decrypted with
- * object.passphrase if provided, or options.passphrase if it is not.
- */
- key?: string | Buffer | Array<Buffer | KeyObject> | undefined;
- /**
- * Name of an OpenSSL engine to get private key from. Should be used
- * together with privateKeyIdentifier.
- */
- privateKeyEngine?: string | undefined;
- /**
- * Identifier of a private key managed by an OpenSSL engine. Should be
- * used together with privateKeyEngine. Should not be set together with
- * key, because both options define a private key in different ways.
- */
- privateKeyIdentifier?: string | undefined;
- /**
- * Optionally set the maximum TLS version to allow. One
- * of `'TLSv1.3'`, `'TLSv1.2'`, `'TLSv1.1'`, or `'TLSv1'`. Cannot be specified along with the
- * `secureProtocol` option, use one or the other.
- * **Default:** `'TLSv1.3'`, unless changed using CLI options. Using
- * `--tls-max-v1.2` sets the default to `'TLSv1.2'`. Using `--tls-max-v1.3` sets the default to
- * `'TLSv1.3'`. If multiple of the options are provided, the highest maximum is used.
- */
- maxVersion?: SecureVersion | undefined;
- /**
- * Optionally set the minimum TLS version to allow. One
- * of `'TLSv1.3'`, `'TLSv1.2'`, `'TLSv1.1'`, or `'TLSv1'`. Cannot be specified along with the
- * `secureProtocol` option, use one or the other. It is not recommended to use
- * less than TLSv1.2, but it may be required for interoperability.
- * **Default:** `'TLSv1.2'`, unless changed using CLI options. Using
- * `--tls-v1.0` sets the default to `'TLSv1'`. Using `--tls-v1.1` sets the default to
- * `'TLSv1.1'`. Using `--tls-min-v1.3` sets the default to
- * 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used.
- */
- minVersion?: SecureVersion | undefined;
- /**
- * Shared passphrase used for a single private key and/or a PFX.
- */
- passphrase?: string | undefined;
- /**
- * PFX or PKCS12 encoded private key and certificate chain. pfx is an
- * alternative to providing key and cert individually. PFX is usually
- * encrypted, if it is, passphrase will be used to decrypt it. Multiple
- * PFX can be provided either as an array of unencrypted PFX buffers,
- * or an array of objects in the form {buf: <string|buffer>[,
- * passphrase: <string>]}. The object form can only occur in an array.
- * object.passphrase is optional. Encrypted PFX will be decrypted with
- * object.passphrase if provided, or options.passphrase if it is not.
- */
- pfx?: string | Buffer | Array<string | Buffer | PxfObject> | undefined;
- /**
- * Optionally affect the OpenSSL protocol behavior, which is not
- * usually necessary. This should be used carefully if at all! Value is
- * a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- */
- secureOptions?: number | undefined; // Value is a numeric bitmask of the `SSL_OP_*` options
- /**
- * Legacy mechanism to select the TLS protocol version to use, it does
- * not support independent control of the minimum and maximum version,
- * and does not support limiting the protocol to TLSv1.3. Use
- * minVersion and maxVersion instead. The possible values are listed as
- * SSL_METHODS, use the function names as strings. For example, use
- * 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow
- * any TLS protocol version up to TLSv1.3. It is not recommended to use
- * TLS versions less than 1.2, but it may be required for
- * interoperability. Default: none, see minVersion.
- */
- secureProtocol?: string | undefined;
- /**
- * Opaque identifier used by servers to ensure session state is not
- * shared between applications. Unused by clients.
- */
- sessionIdContext?: string | undefined;
- /**
- * 48-bytes of cryptographically strong pseudo-random data.
- * See Session Resumption for more information.
- */
- ticketKeys?: Buffer | undefined;
- /**
- * The number of seconds after which a TLS session created by the
- * server will no longer be resumable. See Session Resumption for more
- * information. Default: 300.
- */
- sessionTimeout?: number | undefined;
- }
- interface SecureContext {
- context: any;
- }
- /**
- * Verifies the certificate `cert` is issued to `hostname`.
- *
- * Returns [Error](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error) object, populating it with `reason`, `host`, and `cert` on
- * failure. On success, returns [undefined](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Undefined_type).
- *
- * This function is intended to be used in combination with the`checkServerIdentity` option that can be passed to {@link connect} and as
- * such operates on a `certificate object`. For other purposes, consider using `x509.checkHost()` instead.
- *
- * This function can be overwritten by providing an alternative function as the`options.checkServerIdentity` option that is passed to `tls.connect()`. The
- * overwriting function can call `tls.checkServerIdentity()` of course, to augment
- * the checks done with additional verification.
- *
- * This function is only called if the certificate passed all other checks, such as
- * being issued by trusted CA (`options.ca`).
- *
- * Earlier versions of Node.js incorrectly accepted certificates for a given`hostname` if a matching `uniformResourceIdentifier` subject alternative name
- * was present (see [CVE-2021-44531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44531)). Applications that wish to accept`uniformResourceIdentifier` subject alternative names can use
- * a custom`options.checkServerIdentity` function that implements the desired behavior.
- * @since v0.8.4
- * @param hostname The host name or IP address to verify the certificate against.
- * @param cert A `certificate object` representing the peer's certificate.
- */
- function checkServerIdentity(hostname: string, cert: PeerCertificate): Error | undefined;
- /**
- * Creates a new {@link Server}. The `secureConnectionListener`, if provided, is
- * automatically set as a listener for the `'secureConnection'` event.
- *
- * The `ticketKeys` options is automatically shared between `cluster` module
- * workers.
- *
- * The following illustrates a simple echo server:
- *
- * ```js
- * const tls = require('tls');
- * const fs = require('fs');
- *
- * const options = {
- * key: fs.readFileSync('server-key.pem'),
- * cert: fs.readFileSync('server-cert.pem'),
- *
- * // This is necessary only if using client certificate authentication.
- * requestCert: true,
- *
- * // This is necessary only if the client uses a self-signed certificate.
- * ca: [ fs.readFileSync('client-cert.pem') ]
- * };
- *
- * const server = tls.createServer(options, (socket) => {
- * console.log('server connected',
- * socket.authorized ? 'authorized' : 'unauthorized');
- * socket.write('welcome!\n');
- * socket.setEncoding('utf8');
- * socket.pipe(socket);
- * });
- * server.listen(8000, () => {
- * console.log('server bound');
- * });
- * ```
- *
- * The server can be tested by connecting to it using the example client from {@link connect}.
- * @since v0.3.2
- */
- function createServer(secureConnectionListener?: (socket: TLSSocket) => void): Server;
- function createServer(options: TlsOptions, secureConnectionListener?: (socket: TLSSocket) => void): Server;
- /**
- * The `callback` function, if specified, will be added as a listener for the `'secureConnect'` event.
- *
- * `tls.connect()` returns a {@link TLSSocket} object.
- *
- * Unlike the `https` API, `tls.connect()` does not enable the
- * SNI (Server Name Indication) extension by default, which may cause some
- * servers to return an incorrect certificate or reject the connection
- * altogether. To enable SNI, set the `servername` option in addition
- * to `host`.
- *
- * The following illustrates a client for the echo server example from {@link createServer}:
- *
- * ```js
- * // Assumes an echo server that is listening on port 8000.
- * const tls = require('tls');
- * const fs = require('fs');
- *
- * const options = {
- * // Necessary only if the server requires client certificate authentication.
- * key: fs.readFileSync('client-key.pem'),
- * cert: fs.readFileSync('client-cert.pem'),
- *
- * // Necessary only if the server uses a self-signed certificate.
- * ca: [ fs.readFileSync('server-cert.pem') ],
- *
- * // Necessary only if the server's cert isn't for "localhost".
- * checkServerIdentity: () => { return null; },
- * };
- *
- * const socket = tls.connect(8000, options, () => {
- * console.log('client connected',
- * socket.authorized ? 'authorized' : 'unauthorized');
- * process.stdin.pipe(socket);
- * process.stdin.resume();
- * });
- * socket.setEncoding('utf8');
- * socket.on('data', (data) => {
- * console.log(data);
- * });
- * socket.on('end', () => {
- * console.log('server ends connection');
- * });
- * ```
- * @since v0.11.3
- */
- function connect(options: ConnectionOptions, secureConnectListener?: () => void): TLSSocket;
- function connect(port: number, host?: string, options?: ConnectionOptions, secureConnectListener?: () => void): TLSSocket;
- function connect(port: number, options?: ConnectionOptions, secureConnectListener?: () => void): TLSSocket;
- /**
- * Creates a new secure pair object with two streams, one of which reads and writes
- * the encrypted data and the other of which reads and writes the cleartext data.
- * Generally, the encrypted stream is piped to/from an incoming encrypted data
- * stream and the cleartext one is used as a replacement for the initial encrypted
- * stream.
- *
- * `tls.createSecurePair()` returns a `tls.SecurePair` object with `cleartext` and`encrypted` stream properties.
- *
- * Using `cleartext` has the same API as {@link TLSSocket}.
- *
- * The `tls.createSecurePair()` method is now deprecated in favor of`tls.TLSSocket()`. For example, the code:
- *
- * ```js
- * pair = tls.createSecurePair(// ... );
- * pair.encrypted.pipe(socket);
- * socket.pipe(pair.encrypted);
- * ```
- *
- * can be replaced by:
- *
- * ```js
- * secureSocket = tls.TLSSocket(socket, options);
- * ```
- *
- * where `secureSocket` has the same API as `pair.cleartext`.
- * @since v0.3.2
- * @deprecated Since v0.11.3 - Use {@link TLSSocket} instead.
- * @param context A secure context object as returned by `tls.createSecureContext()`
- * @param isServer `true` to specify that this TLS connection should be opened as a server.
- * @param requestCert `true` to specify whether a server should request a certificate from a connecting client. Only applies when `isServer` is `true`.
- * @param rejectUnauthorized If not `false` a server automatically reject clients with invalid certificates. Only applies when `isServer` is `true`.
- */
- function createSecurePair(context?: SecureContext, isServer?: boolean, requestCert?: boolean, rejectUnauthorized?: boolean): SecurePair;
- /**
- * {@link createServer} sets the default value of the `honorCipherOrder` option
- * to `true`, other APIs that create secure contexts leave it unset.
- *
- * {@link createServer} uses a 128 bit truncated SHA1 hash value generated
- * from `process.argv` as the default value of the `sessionIdContext` option, other
- * APIs that create secure contexts have no default value.
- *
- * The `tls.createSecureContext()` method creates a `SecureContext` object. It is
- * usable as an argument to several `tls` APIs, such as {@link createServer} and `server.addContext()`, but has no public methods.
- *
- * A key is _required_ for ciphers that use certificates. Either `key` or`pfx` can be used to provide it.
- *
- * If the `ca` option is not given, then Node.js will default to using [Mozilla's publicly trusted list of
- * CAs](https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt).
- * @since v0.11.13
- */
- function createSecureContext(options?: SecureContextOptions): SecureContext;
- /**
- * Returns an array with the names of the supported TLS ciphers. The names are
- * lower-case for historical reasons, but must be uppercased to be used in
- * the `ciphers` option of {@link createSecureContext}.
- *
- * Not all supported ciphers are enabled by default. See `Modifying the default TLS cipher suite`.
- *
- * Cipher names that start with `'tls_'` are for TLSv1.3, all the others are for
- * TLSv1.2 and below.
- *
- * ```js
- * console.log(tls.getCiphers()); // ['aes128-gcm-sha256', 'aes128-sha', ...]
- * ```
- * @since v0.10.2
- */
- function getCiphers(): string[];
- /**
- * The default curve name to use for ECDH key agreement in a tls server.
- * The default value is 'auto'. See tls.createSecureContext() for further
- * information.
- */
- let DEFAULT_ECDH_CURVE: string;
- /**
- * The default value of the maxVersion option of
- * tls.createSecureContext(). It can be assigned any of the supported TLS
- * protocol versions, 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Default:
- * 'TLSv1.3', unless changed using CLI options. Using --tls-max-v1.2 sets
- * the default to 'TLSv1.2'. Using --tls-max-v1.3 sets the default to
- * 'TLSv1.3'. If multiple of the options are provided, the highest maximum
- * is used.
- */
- let DEFAULT_MAX_VERSION: SecureVersion;
- /**
- * The default value of the minVersion option of tls.createSecureContext().
- * It can be assigned any of the supported TLS protocol versions,
- * 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Default: 'TLSv1.2', unless
- * changed using CLI options. Using --tls-min-v1.0 sets the default to
- * 'TLSv1'. Using --tls-min-v1.1 sets the default to 'TLSv1.1'. Using
- * --tls-min-v1.3 sets the default to 'TLSv1.3'. If multiple of the options
- * are provided, the lowest minimum is used.
- */
- let DEFAULT_MIN_VERSION: SecureVersion;
- /**
- * An immutable array of strings representing the root certificates (in PEM
- * format) used for verifying peer certificates. This is the default value
- * of the ca option to tls.createSecureContext().
- */
- const rootCertificates: ReadonlyArray<string>;
-}
-declare module 'node:tls' {
- export * from 'tls';
-}
diff --git a/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/__init__.py b/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/__init__.py
deleted file mode 100644
index 6e49af236dab7f041fb4fe27d50b728eaaf552d9..0000000000000000000000000000000000000000
--- a/spaces/flatindo/Image-Diffusion-WebUI/diffusion_webui/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from diffusion_webui.diffusion_models.controlnet_inpaint_pipeline import (
- StableDiffusionControlNetInpaintGenerator,
-)
-from diffusion_webui.diffusion_models.controlnet_pipeline import (
- StableDiffusionControlNetGenerator,
-)
-from diffusion_webui.diffusion_models.img2img_app import (
- StableDiffusionImage2ImageGenerator,
-)
-from diffusion_webui.diffusion_models.inpaint_app import (
- StableDiffusionInpaintGenerator,
-)
-from diffusion_webui.diffusion_models.text2img_app import (
- StableDiffusionText2ImageGenerator,
-)
-
-__version__ = "2.5.0"
diff --git a/spaces/flow3rdown/word_sim/app.py b/spaces/flow3rdown/word_sim/app.py
deleted file mode 100644
index 43e460220a396d8330aa11b8bf6489da0db3d22b..0000000000000000000000000000000000000000
--- a/spaces/flow3rdown/word_sim/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import gradio as gr
-from gensim.models import KeyedVectors
-
-
-def isNoneWords(word):
- if word is None or len(word)==0 or word not in model.key_to_index:
- return True
- else:
- return False
-
-def word_analogy(word1, word2, word3):
- analogy_words = model.similar_by_vector(model.word_vec(word1) - model.word_vec(word2) + model.word_vec(word3))
- sim_res = ""
- for item in analogy_words:
- sim_res += f'{item[0]}: {round(item[1], 4)}\n'
- return sim_res
-
-def similarity_route(word1, word2):
- if isNoneWords(word1) or isNoneWords(word2):
- return "word is null or not in model!"
- else:
- return float(model.similarity(word1, word2))
-
-
-def top_similarity_route(word):
- if isNoneWords(word):
- return "word is null or not in model!"
- else:
- top_similar_words = model.similar_by_word(word, topn=20, restrict_vocab=None)
- sim_res = ""
- for item in top_similar_words:
- sim_res += f'{item[0]}: {round(item[1], 4)}\n'
- return sim_res
-
-def top_similar_words_layout():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- word = gr.Textbox(lines=1, label='Input word', placeholder='Input word here')
- with gr.Row():
- clear = gr.ClearButton()
- submit = gr.Button("Submit")
- output = gr.Textbox(lines=20, label='Similar words', placeholder='Output here')
-
- submit.click(fn=top_similarity_route, inputs=[word], outputs=[output])
-
- examples=[['兔子', '松鼠']]
- ex = gr.Examples(
- examples=examples,
- fn=top_similarity_route,
- inputs=[word],
- outputs=[output],
- cache_examples=False,
- run_on_click=False
- )
-
-
-def similarity_layout():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- with gr.Row():
- word1 = gr.Textbox(lines=1, label='Input word1', placeholder='Input word1 here')
- word2 = gr.Textbox(lines=1, label='Input word2', placeholder='Input word2 here')
- with gr.Row():
- clear = gr.ClearButton()
- submit = gr.Button("Submit")
- output = gr.Textbox(lines=1, label='Similar words', placeholder='Output here')
-
- submit.click(fn=similarity_route, inputs=[word1, word2], outputs=[output])
-
- examples=[['淘宝', '京东', 0.7887385]]
- ex = gr.Examples(
- examples=examples,
- fn=similarity_route,
- inputs=[word1, word2],
- outputs=[output],
- cache_examples=False,
- run_on_click=False
- )
-
-def word_analogy_layout():
- with gr.Column():
- with gr.Row():
- with gr.Column():
- with gr.Row():
- word1 = gr.Textbox(lines=1, label='Input word1', placeholder='Input word1 here')
- word2 = gr.Textbox(lines=1, label='Input word2', placeholder='Input word2 here')
- word3 = gr.Textbox(lines=1, label='Input word3', placeholder='Input word3 here')
- with gr.Row():
- clear = gr.ClearButton()
- submit = gr.Button("Submit")
- output = gr.Textbox(lines=1, label='Analogy words', placeholder='Output here')
-
- submit.click(fn=word_analogy, inputs=[word1, word2, word3], outputs=[output])
-
- examples=[['国王', '男人', '女人', '王后']]
- ex = gr.Examples(
- examples=examples,
- fn=word_analogy,
- inputs=[word1, word2, word3],
- outputs=[output],
- cache_examples=False,
- run_on_click=False
- )
-
-if __name__ == '__main__':
- model = KeyedVectors.load_word2vec_format('tencent-ailab-embedding-zh-d100-v0.2.0-s.txt', binary=False)
- title = 'Calculate word similarity based on Tencent AI Lab Embedding'
-
- with gr.Blocks() as demo:
- gr.HTML(title)
- with gr.Column(elem_id="col-container"):
- with gr.Tab("Top similar words"):
- top_similar_words_layout()
- with gr.Tab("Similarity of words"):
- similarity_layout()
- with gr.Tab("Word analogy"):
- word_analogy_layout()
-
- demo.queue(max_size=64).launch()
\ No newline at end of file
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/exiter.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/exiter.py
deleted file mode 100644
index 8ed9c5d8e3da7a87f50d9d612e161756fa973b82..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/exiter.py
+++ /dev/null
@@ -1,347 +0,0 @@
-import numpy as np
-
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-import time
-from collections import deque
-
-
-class Peer(NPC):
- """
- A dancing NPC that the agent has to copy
- """
-
- def __init__(self, color, name, env, random_actions=False):
- super().__init__(color)
- self.name = name
- self.npc_dir = 1 # NPC initially looks downward
- self.npc_type = 0
- self.env = env
- self.npc_actions = []
- self.dancing_step_idx = 0
- self.actions = MiniGridEnv.Actions
- self.add_npc_direction = True
- self.available_moves = [self.rotate_left, self.rotate_right, self.go_forward, self.toggle_action]
- self.random_actions = random_actions
- self.joint_attention_achieved = False
-
- def can_overlap(self):
- # If the NPC is hidden, agent can overlap on it
- return self.env.hidden_npc
-
- def encode(self, nb_dims=3):
- if self.env.hidden_npc:
- if nb_dims == 3:
- return (1, 0, 0)
- elif nb_dims == 4:
- return (1, 0, 0, 0)
- else:
- return super().encode(nb_dims=nb_dims)
-
- def step(self):
- super().step()
- if self.random_actions:
- if type(self.env.grid.get(*self.front_pos)) == Lava:
- # can't walk into lava
- act = self.env._rand_elem([
- m for m in self.available_moves if m != self.go_forward
- ])
- elif type(self.env.grid.get(*self.front_pos)) == Switch:
- # can't toggle switches
- act = self.env._rand_elem([
- m for m in self.available_moves if m != self.toggle_action
- ])
- else:
- act = self.env._rand_elem(self.available_moves)
-
- act()
-
- else:
- distances = np.abs(self.env.agent_pos - self.env.door_pos).sum(-1)
-
- door_id = np.argmin(distances)
- wanted_switch_pos = self.env.switches_pos[door_id]
- sw = self.env.switches[door_id]
-
- distance_to_switch = np.abs(wanted_switch_pos - self.cur_pos ).sum(-1)
-
- # corresponding switch
- if all(self.front_pos == wanted_switch_pos) and self.joint_attention_achieved:
- # in agent front of door, looking at the door
- if tuple(self.env.front_pos) == tuple(self.env.door_pos[door_id]):
- if not sw.is_on:
- self.toggle_action()
-
- elif distance_to_switch == 1:
- if not self.joint_attention_achieved:
- # looks at he agent
- wanted_dir = self.compute_wanted_dir(self.env.agent_pos)
- else:
- # turns to the switch
- wanted_dir = self.compute_wanted_dir(wanted_switch_pos)
-
- action = self.compute_turn_action(wanted_dir)
- action()
- if self.is_eye_contact():
- self.joint_attention_achieved = True
-
-
- else:
- act = self.path_to_pos(wanted_switch_pos)
- act()
-
- # not really important as the NPC doesn't speak
- if self.env.hidden_npc:
- return None
-
-
-
-class ExiterGrammar(object):
-
- templates = ["Move your", "Shake your"]
- things = ["body", "head"]
-
- grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)])
-
- @classmethod
- def construct_utterance(cls, action):
- return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " "
-
-
-class ExiterEnv(MultiModalMiniGridEnv):
- """
- Environment in which the agent has to exit through one of two locked doors;
- a peer NPC toggles the switch that opens the door once eye contact with the agent is established
- """
-
- def __init__(
- self,
- size=5,
- diminished_reward=True,
- step_penalty=False,
- knowledgeable=False,
- ablation=False,
- max_steps=20,
- hidden_npc=False,
- ):
- assert size >= 5
- self.empty_symbol = "NA \n"
- self.diminished_reward = diminished_reward
- self.step_penalty = step_penalty
- self.knowledgeable = knowledgeable
- self.ablation = ablation
- self.hidden_npc = hidden_npc
-
- super().__init__(
- grid_size=size,
- max_steps=max_steps,
- # Set this to True for maximum speed
- see_through_walls=True,
- actions=MiniGridEnv.Actions,
- action_space=spaces.MultiDiscrete([
- len(MiniGridEnv.Actions),
- *ExiterGrammar.grammar_action_space.nvec
- ]),
- add_npc_direction=True
- )
-
- print({
- "size": size,
- "diminished_reward": diminished_reward,
- "step_penalty": step_penalty,
- })
-
- def _gen_grid(self, width, height):
- # Create the grid
- self.grid = Grid(width, height, nb_obj_dims=4)
-
- # Randomly vary the room width and height
- width = self._rand_int(5, width+1)
- height = self._rand_int(5, height+1)
-
- self.wall_x = width-1
- self.wall_y = height-1
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, width, height)
-
- # add lava
- self.grid.vert_wall(width//2, 1, height - 2, Lava)
-
- # door top
- door_color_top = self._rand_elem(COLOR_NAMES)
- self.door_pos_top = (width-1, 1)
- self.door_top = Door(door_color_top, is_locked=False if self.ablation else True)
- self.grid.set(*self.door_pos_top, self.door_top)
-
- # switch top
- self.switch_pos_top = (0, 1)
- self.switch_top = Switch(door_color_top, lockable_object=self.door_top, locker_switch=True)
- self.grid.set(*self.switch_pos_top, self.switch_top)
-
- # door bottom
- door_color_bottom = self._rand_elem(COLOR_NAMES)
- self.door_pos_bottom = (width-1, height-2)
- self.door_bottom = Door(door_color_bottom, is_locked=False if self.ablation else True)
- self.grid.set(*self.door_pos_bottom, self.door_bottom)
-
- # switch bottom
- self.switch_pos_bottom = (0, height-2)
- self.switch_bottom = Switch(door_color_bottom, lockable_object=self.door_bottom, locker_switch=True)
- self.grid.set(*self.switch_pos_bottom, self.switch_bottom)
-
- self.switches = [self.switch_top, self.switch_bottom]
- self.switches_pos = [self.switch_pos_top, self.switch_pos_bottom]
- self.door = [self.door_top, self.door_bottom]
- self.door_pos = [self.door_pos_top, self.door_pos_bottom]
-
- # Set a randomly coloured Peer NPC
- color = self._rand_elem(COLOR_NAMES)
- self.peer = Peer(color, "Jill", self, random_actions=self.ablation)
-
- # Place it in the left half of the room
- peer_pos = np.array((self._rand_int(1, width//2), self._rand_int(1, height - 1)))
-
- self.grid.set(*peer_pos, self.peer)
- self.peer.init_pos = peer_pos
- self.peer.cur_pos = peer_pos
-
- # Randomize the agent's start position and orientation
- agent = self.place_agent(top=(width // 2, 0), size=(width // 2, height))
-
- # Generate the mission string
- self.mission = 'watch dancer and repeat his moves afterwards'
-
- # Dummy beginning string
- self.beginning_string = "This is what you hear. \n"
- self.utterance = self.beginning_string
-
- # utterance appended at the end of each step
- self.utterance_history = ""
-
- # used for rendering
- self.conversation = self.utterance
- self.outcome_info = None
-
- def step(self, action):
- p_action = action[0]
- utterance_action = action[1:]
-
- obs, reward, done, info = super().step(p_action)
- self.peer.step()
-
- if np.isnan(p_action):
- pass
-
- if p_action == self.actions.done:
- done = True
-
- elif all([self.switch_top.is_on, self.switch_bottom.is_on]):
- # if both switches are on: no reward is given and the episode ends
- done = True
-
- elif tuple(self.agent_pos) in [self.door_pos_top, self.door_pos_bottom]:
- # agent has exited
- reward = self._reward()
- done = True
-
- # discount
- if self.step_penalty:
- reward = reward - 0.01
-
- if self.hidden_npc:
- # all npc are hidden
- assert np.argwhere(obs['image'][:,:,0] == OBJECT_TO_IDX['npc']).size == 0
- assert "{}:".format(self.peer.name) not in self.utterance
-
- # fill observation with text
- self.append_existing_utterance_to_history()
- obs = self.add_utterance_to_observation(obs)
- self.reset_utterance()
-
- if done:
- if reward > 0:
- self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1))
- else:
- self.outcome_info = "FAILURE: agent got {} reward \n".format(reward)
-
- return obs, reward, done, info
-
- def _reward(self):
- if self.diminished_reward:
- return super()._reward()
- else:
- return 1.0
-
- def render(self, *args, **kwargs):
- obs = super().render(*args, **kwargs)
- self.window.clear_text() # erase previous text
-
- # self.window.set_caption(self.conversation, [self.peer.name])
- # self.window.ax.set_title("correct door: {}".format(self.true_guide.target_color), loc="left", fontsize=10)
- if self.outcome_info:
- color = None
- if "SUCCESS" in self.outcome_info:
- color = "lime"
- elif "FAILURE" in self.outcome_info:
- color = "red"
- self.window.add_text(*(0.01, 0.85, self.outcome_info),
- **{'fontsize':15, 'color':color, 'weight':"bold"})
-
- self.window.show_img(obs) # re-draw image to add changes to window
- return obs
-
-
-class Exiter8x8Env(ExiterEnv):
- def __init__(self, **kwargs):
- super().__init__(size=8, max_steps=20, **kwargs)
-
-
-class Exiter6x6Env(ExiterEnv):
- def __init__(self):
- super().__init__(size=6, max_steps=20)
-
-class AblationExiterEnv(ExiterEnv):
- def __init__(self):
- super().__init__(size=5, ablation=True, max_steps=20)
-
-class AblationExiter8x8Env(ExiterEnv):
- def __init__(self, **kwargs):
- super().__init__(size=8, ablation=True, max_steps=20, **kwargs)
-
-
-class AblationExiter6x6Env(ExiterEnv):
- def __init__(self):
- super().__init__(size=6, ablation=True, max_steps=20)
-
-
-
-register(
- id='MiniGrid-Exiter-5x5-v0',
- entry_point='gym_minigrid.envs:ExiterEnv'
-)
-
-register(
- id='MiniGrid-Exiter-6x6-v0',
- entry_point='gym_minigrid.envs:Exiter6x6Env'
-)
-
-register(
- id='MiniGrid-Exiter-8x8-v0',
- entry_point='gym_minigrid.envs:Exiter8x8Env'
-)
-register(
- id='MiniGrid-AblationExiter-5x5-v0',
- entry_point='gym_minigrid.envs:AblationExiterEnv'
-)
-
-register(
- id='MiniGrid-AblationExiter-6x6-v0',
- entry_point='gym_minigrid.envs:AblationExiter6x6Env'
-)
-
-register(
- id='MiniGrid-AblationExiter-8x8-v0',
- entry_point='gym_minigrid.envs:AblationExiter8x8Env'
-)
\ No newline at end of file
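The register() calls above expose the Exiter variants through Gym's registry. A minimal sketch of instantiating and stepping one of them, assuming the gym_minigrid package from this Space is importable (importing it runs the registrations) and using the old four-tuple step API that the environment code above implements:

    import gym
    import gym_minigrid  # noqa: F401 -- importing registers the MiniGrid-Exiter-* ids

    env = gym.make("MiniGrid-Exiter-8x8-v0")
    obs = env.reset()
    done = False
    while not done:
        # [primitive action, grammar template index, grammar object index], per ExiterGrammar
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)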
diff --git a/spaces/gradio/HuBERT/examples/wav2vec/scripts/binarize_manifest.sh b/spaces/gradio/HuBERT/examples/wav2vec/scripts/binarize_manifest.sh
deleted file mode 100644
index 6f201bdb524fad51a69d8c45889eaa1578efc62d..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/wav2vec/scripts/binarize_manifest.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/env bash
-
-# usage: bash binarize_manifest.sh <dest_dir> <train_split> <valid_split> <fairseq_root>
-
-DEST_DIR=$1
-TRAIN_SPLIT=$2
-VALID_SPLIT=$3
-FAIRSEQ_ROOT=$4
-
-mkdir -p $DEST_DIR
-
-# split file path and lengths into separate files
-cut -f1 $TRAIN_SPLIT.tsv > $DEST_DIR/train_fnames.txt
-cut -f1 $VALID_SPLIT.tsv > $DEST_DIR/valid_fnames.txt
-cut -f2 $TRAIN_SPLIT.tsv > $DEST_DIR/train.lengths
-cut -f2 $VALID_SPLIT.tsv > $DEST_DIR/valid.lengths
-
-# copy root directory
-head -1 $TRAIN_SPLIT.tsv > $DEST_DIR/train.root
-head -1 $VALID_SPLIT.tsv > $DEST_DIR/valid.root
-
-# remove root directory
-sed -i '1d' $DEST_DIR/train_fnames.txt
-sed -i '1d' $DEST_DIR/valid_fnames.txt
-sed -i '1d' $DEST_DIR/train.lengths
-sed -i '1d' $DEST_DIR/valid.lengths
-
-# insert spaces between characters
-sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/train_fnames.txt
-sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/valid_fnames.txt
-
-# run preprocessor
-PYTHONPATH=$FAIRSEQ_ROOT python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $DEST_DIR/train_fnames.txt --validpref $DEST_DIR/valid_fnames.txt --workers 60 --only-source --destdir $DEST_DIR
diff --git a/spaces/gradio/HuBERT/fairseq/binarizer.py b/spaces/gradio/HuBERT/fairseq/binarizer.py
deleted file mode 100644
index 18ae67bf25868095e101e7068962c78ee5d12aca..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/binarizer.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-from collections import Counter
-
-import torch
-from fairseq.file_io import PathManager
-from fairseq.tokenizer import tokenize_line
-from typing import List, Dict
-
-
-def safe_readline(f):
- pos = f.tell()
- while True:
- try:
- return f.readline()
- except UnicodeDecodeError:
- pos -= 1
- f.seek(pos) # search where this character begins
-
-
-class Binarizer:
- @staticmethod
- def binarize(
- filename,
- dict,
- consumer,
- tokenize=tokenize_line,
- append_eos=True,
- reverse_order=False,
- offset=0,
- end=-1,
- already_numberized=False,
- ) -> Dict[str, int]:
- nseq, ntok = 0, 0
- replaced = Counter()
-
- def replaced_consumer(word, idx):
- if idx == dict.unk_index and word != dict.unk_word:
- replaced.update([word])
-
- with open(PathManager.get_local_path(filename), "r", encoding="utf-8") as f:
- f.seek(offset)
- # next(f) breaks f.tell(), hence readline() must be used
- line = safe_readline(f)
- while line:
- # f.tell() does not always give the byte position in the file
- # sometimes it skips to a very large number
- # it is unlikely that through a normal read we go from
- # end bytes to end + 2**32 bytes (4 GB) and this makes it unlikely
- # that the procedure breaks by the undeterministic behavior of
- # f.tell()
- if end > 0 and f.tell() > end and f.tell() < end + 2 ** 32:
- break
- if already_numberized:
- id_strings = line.strip().split()
- id_list = [int(id_string) for id_string in id_strings]
- if reverse_order:
- id_list.reverse()
- if append_eos:
- id_list.append(dict.eos())
- ids = torch.IntTensor(id_list)
- else:
- ids = dict.encode_line(
- line=line,
- line_tokenizer=tokenize,
- add_if_not_exist=False,
- consumer=replaced_consumer,
- append_eos=append_eos,
- reverse_order=reverse_order,
- )
- nseq += 1
- ntok += len(ids)
- consumer(ids)
- line = f.readline()
- return {
- "nseq": nseq,
- "nunk": sum(replaced.values()),
- "ntok": ntok,
- "replaced": replaced,
- }
-
- @staticmethod
- def binarize_alignments(
- filename, alignment_parser, consumer, offset=0, end=-1
- ) -> Dict[str, int]:
- nseq = 0
-
- with open(PathManager.get_local_path(filename), "r") as f:
- f.seek(offset)
- line = safe_readline(f)
- while line:
- if end > 0 and f.tell() > end:
- break
- ids = alignment_parser(line)
- nseq += 1
- consumer(ids)
- line = f.readline()
- return {"nseq": nseq}
-
- @staticmethod
- def find_offsets(filename, num_chunks) -> List[int]:
- with open(PathManager.get_local_path(filename), "r", encoding="utf-8") as f:
- size = os.fstat(f.fileno()).st_size
- chunk_size = size // num_chunks
- offsets = [0 for _ in range(num_chunks + 1)]
- for i in range(1, num_chunks):
- f.seek(chunk_size * i)
- safe_readline(f)
- offsets[i] = f.tell()
- return offsets
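find_offsets splits the input file into roughly equal byte ranges so that binarize can process each chunk in a separate worker, while safe_readline re-synchronises a seek that may land in the middle of a multi-byte character. A minimal single-process sketch of that flow, assuming a fairseq Dictionary has already been built for the corpus (the dict.txt path is hypothetical) and using a consumer that simply collects the encoded tensors, where fairseq's preprocess.py would instead feed them to an indexed-dataset builder:

    from fairseq.binarizer import Binarizer
    from fairseq.data import Dictionary

    vocab = Dictionary.load("dict.txt")  # hypothetical path to a prebuilt dictionary
    offsets = Binarizer.find_offsets("train.txt", num_chunks=4)

    results = []
    for start, end in zip(offsets[:-1], offsets[1:]):
        tensors = []
        stats = Binarizer.binarize(
            "train.txt", vocab, consumer=tensors.append,
            offset=start, end=end,
        )
        # stats reports nseq, ntok, nunk and the Counter of replaced tokens
        results.append((tensors, stats))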
diff --git a/spaces/gradio/HuBERT/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py b/spaces/gradio/HuBERT/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py
deleted file mode 100644
index 0f87bb5d7ed5c7eb8011d4c651f2ecbf0ae700ac..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections.abc import Collection
-from dataclasses import dataclass, field
-from typing import List
-
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class InverseSquareRootLRScheduleConfig(FairseqDataclass):
- warmup_updates: int = field(
- default=4000,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- warmup_init_lr: float = field(
- default=-1,
- metadata={
- "help": "initial learning rate during warmup phase; default is cfg.lr"
- },
- )
- lr: List[float] = II("optimization.lr")
-
-
-@register_lr_scheduler("inverse_sqrt", dataclass=InverseSquareRootLRScheduleConfig)
-class InverseSquareRootSchedule(FairseqLRScheduler):
- """Decay the LR based on the inverse square root of the update number.
-
- We also support a warmup phase where we linearly increase the learning rate
- from some initial learning rate (``--warmup-init-lr``) until the configured
- learning rate (``--lr``). Thereafter we decay proportional to the number of
- updates, with a decay factor set to align with the configured learning rate.
-
- During warmup::
-
- lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates)
- lr = lrs[update_num]
-
- After warmup::
-
- decay_factor = cfg.lr * sqrt(cfg.warmup_updates)
- lr = decay_factor / sqrt(update_num)
- """
-
- def __init__(self, cfg: InverseSquareRootLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1:
- raise ValueError(
- "Cannot use a fixed learning rate schedule with inverse_sqrt."
- " Consider --lr-scheduler=fixed instead."
- )
- warmup_end_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr
- if cfg.warmup_init_lr < 0:
- cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr
-
- # linearly warmup for the first cfg.warmup_updates
- self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates
-
- # then, decay prop. to the inverse square root of the update number
- self.decay_factor = warmup_end_lr * cfg.warmup_updates ** 0.5
-
- # initial learning rate
- self.lr = cfg.warmup_init_lr
- self.optimizer.set_lr(self.lr)
-
- def step(self, epoch, val_loss=None):
- """Update the learning rate at the end of the given epoch."""
- super().step(epoch, val_loss)
- # we don't change the learning rate at epoch boundaries
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- if num_updates < self.cfg.warmup_updates:
- self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step
- else:
- self.lr = self.decay_factor * num_updates ** -0.5
- self.optimizer.set_lr(self.lr)
- return self.lr
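A small worked example of the schedule described in the docstring above, computed directly from the formulas rather than through the FairseqLRScheduler machinery, assuming the common defaults warmup_init_lr=0, lr=5e-4 and warmup_updates=4000:

    def inverse_sqrt_lr(num_updates, warmup_updates=4000, warmup_init_lr=0.0, peak_lr=5e-4):
        if num_updates < warmup_updates:
            # linear warmup from warmup_init_lr up to peak_lr
            step = (peak_lr - warmup_init_lr) / warmup_updates
            return warmup_init_lr + num_updates * step
        # inverse square-root decay, anchored so the curve passes through peak_lr
        decay_factor = peak_lr * warmup_updates ** 0.5
        return decay_factor * num_updates ** -0.5

    print(inverse_sqrt_lr(2000))   # 2.5e-4, halfway through warmup
    print(inverse_sqrt_lr(4000))   # 5e-4 at the end of warmup
    print(inverse_sqrt_lr(16000))  # 2.5e-4, since sqrt(4000/16000) = 0.5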
diff --git a/spaces/gradio/HuBERT/fairseq/utils.py b/spaces/gradio/HuBERT/fairseq/utils.py
deleted file mode 100644
index 4fe95b9e8b2b277cd545e12d5980561492b70783..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/utils.py
+++ /dev/null
@@ -1,807 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import contextlib
-import copy
-import importlib
-import logging
-import os
-import sys
-import warnings
-from itertools import accumulate
-from typing import Callable, Dict, List, Optional
-
-import torch
-import torch.nn.functional as F
-from fairseq.modules.multihead_attention import MultiheadAttention
-from torch import Tensor
-
-
-try:
- from amp_C import multi_tensor_l2norm
-
- multi_tensor_l2norm_available = True
-except ImportError:
- multi_tensor_l2norm_available = False
-
-try:
- import torch_xla.core.xla_model as xm
-except ImportError:
- xm = None
-
-
-logger = logging.getLogger(__name__)
-
-
-MANIFOLD_PATH_SEP = "|"
-
-
-class FileContentsAction(argparse.Action):
- def __init__(self, option_strings, dest, nargs=None, **kwargs):
- if nargs is not None:
- raise ValueError("nargs not allowed")
- super(FileContentsAction, self).__init__(option_strings, dest, **kwargs)
-
- def __call__(self, parser, namespace, values, option_string=None):
- from fairseq.file_io import PathManager
-
- if PathManager.isfile(values):
- with PathManager.open(values) as f:
- argument = f.read().strip()
- else:
- argument = values
- setattr(namespace, self.dest, argument)
-
-
-def split_paths(paths: str, separator=os.pathsep) -> List[str]:
- return (
- paths.split(separator) if "://" not in paths else paths.split(MANIFOLD_PATH_SEP)
- )
-
-
-def load_ensemble_for_inference(filenames, task, model_arg_overrides=None):
- from fairseq import checkpoint_utils
-
- deprecation_warning(
- "utils.load_ensemble_for_inference is deprecated. "
- "Please use checkpoint_utils.load_model_ensemble instead."
- )
- return checkpoint_utils.load_model_ensemble(
- filenames, arg_overrides=model_arg_overrides, task=task
- )
-
-
-def apply_to_sample(f, sample):
- if hasattr(sample, "__len__") and len(sample) == 0:
- return {}
-
- def _apply(x):
- if torch.is_tensor(x):
- return f(x)
- elif isinstance(x, dict):
- return {key: _apply(value) for key, value in x.items()}
- elif isinstance(x, list):
- return [_apply(x) for x in x]
- elif isinstance(x, tuple):
- return tuple(_apply(x) for x in x)
- elif isinstance(x, set):
- return {_apply(x) for x in x}
- else:
- return x
-
- return _apply(sample)
-
-
-def move_to_cuda(sample, device=None):
- device = device or torch.cuda.current_device()
-
- def _move_to_cuda(tensor):
- # non_blocking is ignored if tensor is not pinned, so we can always set
- # to True (see github.com/PyTorchLightning/pytorch-lightning/issues/620)
- return tensor.to(device=device, non_blocking=True)
-
- return apply_to_sample(_move_to_cuda, sample)
-
-
-def move_to_cpu(sample):
- def _move_to_cpu(tensor):
- # PyTorch has poor support for half tensors (float16) on CPU.
- # Move any such tensors to float32.
- if tensor.dtype in {torch.bfloat16, torch.float16}:
- tensor = tensor.to(dtype=torch.float32)
- return tensor.cpu()
-
- return apply_to_sample(_move_to_cpu, sample)
-
-
-def move_to_tpu(sample):
-
- import torch_xla.core.xla_model as xm
-
- device = xm.xla_device()
-
- def _move_to_tpu(tensor):
- return tensor.to(device)
-
- return apply_to_sample(_move_to_tpu, sample)
-
-
-def get_incremental_state(
- module: MultiheadAttention,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
-) -> Optional[Dict[str, Optional[Tensor]]]:
- """Helper for getting incremental state for an nn.Module."""
- return module.get_incremental_state(incremental_state, key)
-
-
-def set_incremental_state(
- module: MultiheadAttention,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
- value: Dict[str, Optional[Tensor]],
-) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]:
- """Helper for setting incremental state for an nn.Module."""
- if incremental_state is not None:
- result = module.set_incremental_state(incremental_state, key, value)
- if result is not None:
- incremental_state = result
- return incremental_state
-
-
-def load_align_dict(replace_unk):
- if replace_unk is None:
- align_dict = None
- elif isinstance(replace_unk, str) and len(replace_unk) > 0:
- # Load alignment dictionary for unknown word replacement if it was passed as an argument.
- align_dict = {}
- with open(replace_unk, "r") as f:
- for line in f:
- cols = line.split()
- align_dict[cols[0]] = cols[1]
- else:
- # No alignment dictionary provided but we still want to perform unknown word replacement by copying the
- # original source word.
- align_dict = {}
- return align_dict
-
-
-def print_embed_overlap(embed_dict, vocab_dict):
- embed_keys = set(embed_dict.keys())
- vocab_keys = set(vocab_dict.symbols)
- overlap = len(embed_keys & vocab_keys)
- logger.info("found {}/{} types in embedding file".format(overlap, len(vocab_dict)))
-
-
-def parse_embedding(embed_path):
- """Parse embedding text file into a dictionary of word and embedding tensors.
-
- The first line can have vocabulary size and dimension. The following lines
- should contain word and embedding separated by spaces.
-
- Example:
- 2 5
- the -0.0230 -0.0264 0.0287 0.0171 0.1403
- at -0.0395 -0.1286 0.0275 0.0254 -0.0932
- """
- embed_dict = {}
- with open(embed_path) as f_embed:
- next(f_embed) # skip header
- for line in f_embed:
- pieces = line.rstrip().split(" ")
- embed_dict[pieces[0]] = torch.Tensor(
- [float(weight) for weight in pieces[1:]]
- )
- return embed_dict
-
-
-def load_embedding(embed_dict, vocab, embedding):
- for idx in range(len(vocab)):
- token = vocab[idx]
- if token in embed_dict:
- embedding.weight.data[idx] = embed_dict[token]
- return embedding
-
-
-def replace_unk(hypo_str, src_str, alignment, align_dict, unk):
- from fairseq import tokenizer
-
- # Tokens are strings here
- hypo_tokens = tokenizer.tokenize_line(hypo_str)
- # TODO: Very rare cases where the replacement is '' should be handled gracefully
- src_tokens = tokenizer.tokenize_line(src_str) + [""]
- for i, ht in enumerate(hypo_tokens):
- if ht == unk:
- src_token = src_tokens[alignment[i]]
- # Either take the corresponding value in the aligned dictionary or just copy the original value.
- hypo_tokens[i] = align_dict.get(src_token, src_token)
- return " ".join(hypo_tokens)
-
-
-def post_process_prediction(
- hypo_tokens,
- src_str,
- alignment,
- align_dict,
- tgt_dict,
- remove_bpe=None,
- extra_symbols_to_ignore=None,
-):
- hypo_str = tgt_dict.string(
- hypo_tokens, remove_bpe, extra_symbols_to_ignore=extra_symbols_to_ignore
- )
- if align_dict is not None:
- hypo_str = replace_unk(
- hypo_str, src_str, alignment, align_dict, tgt_dict.unk_string()
- )
- if align_dict is not None or remove_bpe is not None:
- # Convert back to tokens for evaluating with unk replacement or without BPE
- # Note that the dictionary can be modified inside the method.
- hypo_tokens = tgt_dict.encode_line(hypo_str, add_if_not_exist=True)
- return hypo_tokens, hypo_str, alignment
-
-
-def make_positions(tensor, padding_idx: int, onnx_trace: bool = False):
- """Replace non-padding symbols with their position numbers.
-
- Position numbers begin at padding_idx+1. Padding symbols are ignored.
- """
- # The series of casts and type-conversions here are carefully
- # balanced to both work with ONNX export and XLA. In particular XLA
- # prefers ints, cumsum defaults to output longs, and ONNX doesn't know
- # how to handle the dtype kwarg in cumsum.
- mask = tensor.ne(padding_idx).int()
- return (torch.cumsum(mask, dim=1).type_as(mask) * mask).long() + padding_idx
-
-
-def strip_pad(tensor, pad):
- return tensor[tensor.ne(pad)]
-
-
-def buffered_arange(max):
- if not hasattr(buffered_arange, "buf"):
- buffered_arange.buf = torch.LongTensor()
- if max > buffered_arange.buf.numel():
- buffered_arange.buf.resize_(max)
- torch.arange(max, out=buffered_arange.buf)
- return buffered_arange.buf[:max]
-
-
-def convert_padding_direction(
- src_tokens, padding_idx, right_to_left: bool = False, left_to_right: bool = False
-):
- assert right_to_left ^ left_to_right
- pad_mask = src_tokens.eq(padding_idx)
- if not pad_mask.any():
- # no padding, return early
- return src_tokens
- if left_to_right and not pad_mask[:, 0].any():
- # already right padded
- return src_tokens
- if right_to_left and not pad_mask[:, -1].any():
- # already left padded
- return src_tokens
- max_len = src_tokens.size(1)
- buffered = torch.empty(0).long()
- if max_len > 0:
- torch.arange(max_len, out=buffered)
- range = buffered.type_as(src_tokens).expand_as(src_tokens)
- num_pads = pad_mask.long().sum(dim=1, keepdim=True)
- if right_to_left:
- index = torch.remainder(range - num_pads, max_len)
- else:
- index = torch.remainder(range + num_pads, max_len)
- return src_tokens.gather(1, index)
-
-
-def item(tensor):
- # tpu-comment: making this a no-op for xla devices.
- if torch.is_tensor(tensor) and tensor.device.type == "xla":
- return tensor.detach()
- if hasattr(tensor, "item"):
- return tensor.item()
- if hasattr(tensor, "__getitem__"):
- return tensor[0]
- return tensor
-
-
-def multi_tensor_total_norm(grads, chunk_size=2048 * 32) -> torch.Tensor:
- per_device_grads = {}
- norms = []
- for grad in grads:
- device = grad.device
- cur_device_grads = per_device_grads.get(device)
- if cur_device_grads is None:
- cur_device_grads = []
- per_device_grads[device] = cur_device_grads
- cur_device_grads.append(grad)
- for device in per_device_grads.keys():
- cur_device_grads = per_device_grads[device]
- if device.type == "cuda":
- # TODO(msb) return has_inf
- has_inf = torch.zeros((1, 1), dtype=torch.int, device=device)
- with torch.cuda.device(device):
- norm = multi_tensor_l2norm(
- chunk_size, has_inf, [cur_device_grads], False
- )
- norms.append(norm[0].to(torch.cuda.current_device()))
- else:
- norms += [torch.norm(g, p=2, dtype=torch.float32) for g in cur_device_grads]
- total_norm = torch.norm(torch.stack(norms))
- return total_norm
-
-
-@torch.no_grad()
-def clip_grad_norm_(params, max_norm, aggregate_norm_fn=None) -> torch.Tensor:
- def grad_exists(p):
- return p is not None and getattr(p, "grad", None) is not None
-
- if isinstance(params, torch.Tensor):
- params = [params]
- params = list(params)
- grads = [
- p.grad.detach() for p in params if grad_exists(p) and not hasattr(p, "expert")
- ]
- expert_grads = [
- p.grad.detach() for p in params if grad_exists(p) and hasattr(p, "expert")
- ]
-
- if len(grads) == 0:
- if len(params) > 0:
- return params[0].new_tensor(0.0)
- else:
- return torch.tensor(0.0)
-
- if len(grads) == 1:
- total_norm = torch.norm(grads[0], p=2, dtype=torch.float32)
- else:
- if multi_tensor_l2norm_available:
- total_norm = multi_tensor_total_norm(grads)
- else:
- if torch.cuda.is_available():
- warnings.warn(
- "amp_C fused kernels unavailable, disabling multi_tensor_l2norm; "
- "you may get better performance by installing NVIDIA's apex library"
- )
- device = torch.cuda.current_device()
- elif grads[0].device.type == "xla":
- device = grads[0].device
- else:
- device = torch.device("cpu")
- total_norm = torch.norm(
- torch.stack(
- [torch.norm(g, p=2, dtype=torch.float32).to(device) for g in grads]
- )
- )
-
- if aggregate_norm_fn is not None:
- total_norm = aggregate_norm_fn(total_norm)
-
- if max_norm > 0:
- max_norm = float(max_norm)
- clip_coef = (max_norm / (total_norm + 1e-6)).clamp_(max=1)
- for g in grads + expert_grads:
- g.mul_(clip_coef)
- return total_norm
-
-
-def fill_with_neg_inf(t):
- """FP16-compatible function that fills a tensor with -inf."""
- return t.float().fill_(float("-inf")).type_as(t)
-
-
-def _match_types(arg1, arg2):
- """Convert the numerical argument to the same type as the other argument"""
-
- def upgrade(arg_number, arg_structure):
- if isinstance(arg_structure, tuple):
- return tuple([arg_number] * len(arg_structure))
- elif isinstance(arg_structure, dict):
- arg = copy.deepcopy(arg_structure)
- for k in arg:
- arg[k] = upgrade(arg_number, arg_structure[k])
- return arg
- else:
- return arg_number
-
- if isinstance(arg1, float) or isinstance(arg1, int):
- return upgrade(arg1, arg2), arg2
- elif isinstance(arg2, float) or isinstance(arg2, int):
- return arg1, upgrade(arg2, arg1)
-
- return arg1, arg2
-
-
-def resolve_max_positions(*args):
- """Resolve max position constraints from multiple sources."""
-
- def map_value_update(d1, d2):
- updated_value = copy.deepcopy(d1)
- for key in d2:
- if key not in updated_value:
- updated_value[key] = d2[key]
- else:
- updated_value[key] = min(d1[key], d2[key])
- return updated_value
-
- def nullsafe_min(l):
- minim = None
- for item in l:
- if minim is None:
- minim = item
- elif item is not None and item < minim:
- minim = item
- return minim
-
- max_positions = None
- for arg in args:
- if max_positions is None:
- max_positions = arg
- elif arg is not None:
- max_positions, arg = _match_types(max_positions, arg)
- if isinstance(arg, float) or isinstance(arg, int):
- max_positions = min(max_positions, arg)
- elif isinstance(arg, dict):
- max_positions = map_value_update(max_positions, arg)
- else:
- max_positions = tuple(map(nullsafe_min, zip(max_positions, arg)))
-
- return max_positions
-
-
-def import_user_module(args):
- module_path = getattr(args, "user_dir", None)
- if module_path is not None:
- module_path = os.path.abspath(args.user_dir)
- if not os.path.exists(module_path) and not os.path.isfile(
- os.path.dirname(module_path)
- ):
- fairseq_rel_path = os.path.join(os.path.dirname(__file__), args.user_dir)
- if os.path.exists(fairseq_rel_path):
- module_path = fairseq_rel_path
- else:
- fairseq_rel_path = os.path.join(
- os.path.dirname(__file__), "..", args.user_dir
- )
- if os.path.exists(fairseq_rel_path):
- module_path = fairseq_rel_path
- else:
- raise FileNotFoundError(module_path)
-
- # ensure that user modules are only imported once
- import_user_module.memo = getattr(import_user_module, "memo", set())
- if module_path not in import_user_module.memo:
- import_user_module.memo.add(module_path)
-
- module_parent, module_name = os.path.split(module_path)
- if module_name not in sys.modules:
- sys.path.insert(0, module_parent)
- importlib.import_module(module_name)
-
- tasks_path = os.path.join(module_path, "tasks")
- if os.path.exists(tasks_path):
- from fairseq.tasks import import_tasks
-
- import_tasks(tasks_path, f"{module_name}.tasks")
-
- models_path = os.path.join(module_path, "models")
- if os.path.exists(models_path):
- from fairseq.models import import_models
-
- import_models(models_path, f"{module_name}.models")
- else:
- raise ImportError(
- "Failed to import --user-dir={} because the corresponding module name "
- "({}) is not globally unique. Please rename the directory to "
- "something unique and try again.".format(module_path, module_name)
- )
-
-
-def softmax(x, dim: int, onnx_trace: bool = False):
- if onnx_trace:
- return F.softmax(x.float(), dim=dim)
- else:
- return F.softmax(x, dim=dim, dtype=torch.float32)
-
-
-def log_softmax(x, dim: int, onnx_trace: bool = False):
- if onnx_trace:
- return F.log_softmax(x.float(), dim=dim)
- else:
- return F.log_softmax(x, dim=dim, dtype=torch.float32)
-
-
-def get_perplexity(loss, round=2, base=2):
- from fairseq.logging.meters import safe_round
-
- if loss is None:
- return 0.0
- try:
- return safe_round(base ** loss, round)
- except OverflowError:
- return float("inf")
-
-
-def deprecation_warning(message, stacklevel=3):
- # don't use DeprecationWarning, since it's ignored by default
- warnings.warn(message, stacklevel=stacklevel)
-
-
-def get_activation_fn(activation: str) -> Callable:
- """Returns the activation function corresponding to `activation`"""
- from fairseq.modules import gelu, gelu_accurate
-
- if activation == "relu":
- return F.relu
- elif activation == "gelu":
- return gelu
- elif activation == "gelu_fast":
- deprecation_warning(
- "--activation-fn=gelu_fast has been renamed to gelu_accurate"
- )
- return gelu_accurate
- elif activation == "gelu_accurate":
- return gelu_accurate
- elif activation == "tanh":
- return torch.tanh
- elif activation == "linear":
- return lambda x: x
- else:
- raise RuntimeError("--activation-fn {} not supported".format(activation))
-
-
-def get_available_activation_fns() -> List:
- return [
- "relu",
- "gelu",
- "gelu_fast", # deprecated
- "gelu_accurate",
- "tanh",
- "linear",
- ]
-
-
-@contextlib.contextmanager
-def model_eval(model):
- is_training = model.training
- model.eval()
- yield
- model.train(is_training)
-
-
-def has_parameters(module):
- try:
- next(module.parameters())
- return True
- except StopIteration:
- return False
-
-
-def get_rng_state():
- state = {"torch_rng_state": torch.get_rng_state()}
- if xm is not None:
- state["xla_rng_state"] = xm.get_rng_state()
- if torch.cuda.is_available():
- state["cuda_rng_state"] = torch.cuda.get_rng_state()
- return state
-
-
-def set_rng_state(state):
- torch.set_rng_state(state["torch_rng_state"])
- if xm is not None:
- xm.set_rng_state(state["xla_rng_state"])
- if torch.cuda.is_available():
- torch.cuda.set_rng_state(state["cuda_rng_state"])
-
-
-class set_torch_seed(object):
- def __init__(self, seed):
- assert isinstance(seed, int)
- self.rng_state = get_rng_state()
-
- torch.manual_seed(seed)
- if xm is not None:
- xm.set_rng_state(seed)
- if torch.cuda.is_available():
- torch.cuda.manual_seed(seed)
-
- def __enter__(self):
- return self
-
- def __exit__(self, *exc):
- set_rng_state(self.rng_state)
-
-
-def parse_alignment(line):
- """
- Parses a single line from the alignment file.
-
- Args:
- line (str): String containing the alignment of the format:
- <src_idx_1>-<tgt_idx_1> <src_idx_2>-<tgt_idx_2> ..
- <src_idx_m>-<tgt_idx_m>. All indices are 0 indexed.
-
- Returns:
- torch.IntTensor: packed alignments of shape (2 * m).
- """
- alignments = line.strip().split()
- parsed_alignment = torch.IntTensor(2 * len(alignments))
- for idx, alignment in enumerate(alignments):
- src_idx, tgt_idx = alignment.split("-")
- parsed_alignment[2 * idx] = int(src_idx)
- parsed_alignment[2 * idx + 1] = int(tgt_idx)
- return parsed_alignment
-
-
-def get_token_to_word_mapping(tokens, exclude_list):
- n = len(tokens)
- word_start = [int(token not in exclude_list) for token in tokens]
- word_idx = list(accumulate(word_start))
- token_to_word = {i: word_idx[i] for i in range(n)}
- return token_to_word
-
-
-def extract_hard_alignment(attn, src_sent, tgt_sent, pad, eos):
- tgt_valid = (
- ((tgt_sent != pad) & (tgt_sent != eos)).nonzero(as_tuple=False).squeeze(dim=-1)
- )
- src_invalid = (
- ((src_sent == pad) | (src_sent == eos)).nonzero(as_tuple=False).squeeze(dim=-1)
- )
- src_token_to_word = get_token_to_word_mapping(src_sent, [eos, pad])
- tgt_token_to_word = get_token_to_word_mapping(tgt_sent, [eos, pad])
- alignment = []
- if len(tgt_valid) != 0 and len(src_invalid) < len(src_sent):
- attn_valid = attn[tgt_valid]
- attn_valid[:, src_invalid] = float("-inf")
- _, src_indices = attn_valid.max(dim=1)
- for tgt_idx, src_idx in zip(tgt_valid, src_indices):
- alignment.append(
- (
- src_token_to_word[src_idx.item()] - 1,
- tgt_token_to_word[tgt_idx.item()] - 1,
- )
- )
- return alignment
-
-
-def extract_soft_alignment(attn, src_sent, tgt_sent, pad, eos):
- tgt_valid = ((tgt_sent != pad)).nonzero(as_tuple=False)
- src_valid = ((src_sent != pad)).nonzero(as_tuple=False).squeeze(dim=-1)
- alignment = []
- if len(tgt_valid) != 0 and len(src_valid) != 0:
- attn_valid = attn[tgt_valid, src_valid]
- alignment = [
- ["{:.6f}".format(p) for p in src_probs.tolist()] for src_probs in attn_valid
- ]
- return alignment
-
-
-def new_arange(x, *size):
- """
- Return a Tensor of `size` filled with a range function on the device of x.
- If size is empty, using the size of the variable x.
- """
- if len(size) == 0:
- size = x.size()
- return torch.arange(size[-1], device=x.device).expand(*size).contiguous()
-
-
-def get_tpu_device():
- return xm.xla_device()
-
-
-def tpu_data_loader(itr):
- import torch_xla.core.xla_model as xm
- import torch_xla.distributed.parallel_loader as pl
- from fairseq.data import iterators
-
- xm.rendezvous("tpu_data_loader") # wait for all workers
- xm.mark_step()
- device = xm.xla_device()
- return iterators.CountingIterator(
- pl.ParallelLoader(itr, [device]).per_device_loader(device),
- start=getattr(itr, "n", 0),
- total=len(itr),
- )
-
-
-def is_xla_tensor(tensor):
- return torch.is_tensor(tensor) and tensor.device.type == "xla"
-
-
-def index_put(tensor, indices, value):
- if is_xla_tensor(tensor):
- for _ in range(indices.dim(), tensor.dim()):
- indices = indices.unsqueeze(-1)
- if indices.size(-1) < tensor.size(-1):
- indices = indices.expand_as(tensor)
- tensor = torch.mul(tensor, ~indices) + torch.mul(value, indices)
- else:
- tensor[indices] = value
- return tensor
-
-
-def xla_device_to_cpu(dat):
- import torch_xla.core.xla_model as xm
-
- return xm._maybe_convert_to_cpu(dat)
-
-
-class CudaEnvironment(object):
- def __init__(self):
- cur_device = torch.cuda.current_device()
- prop = torch.cuda.get_device_properties("cuda:{}".format(cur_device))
- self.name = prop.name
- self.major = prop.major
- self.minor = prop.minor
- self.total_memory_in_GB = prop.total_memory / 1024 / 1024 / 1024
-
- @staticmethod
- def pretty_print_cuda_env_list(cuda_env_list):
- """
- Given a list of CudaEnvironment objects, pretty print them
- """
- num_workers = len(cuda_env_list)
- center = "CUDA enviroments for all {} workers".format(num_workers)
- banner_len = 40 - len(center) // 2
- first_line = "*" * banner_len + center + "*" * banner_len
- logger.info(first_line)
- for r, env in enumerate(cuda_env_list):
- logger.info(
- "rank {:3d}: ".format(r)
- + "capabilities = {:2d}.{:<2d} ; ".format(env.major, env.minor)
- + "total memory = {:.3f} GB ; ".format(env.total_memory_in_GB)
- + "name = {:40s}".format(env.name)
- )
- logger.info(first_line)
-
-
-def csv_str_list(x):
- return x.split(",")
-
-
-def eval_str_list(x, type=float):
- if x is None:
- return None
- if isinstance(x, str):
- x = eval(x)
- try:
- return list(map(type, x))
- except TypeError:
- return [type(x)]
-
-
-def eval_str_dict(x, type=dict):
- if x is None:
- return None
- if isinstance(x, str):
- x = eval(x)
- return x
-
-
-def eval_bool(x, default=False):
- if x is None:
- return default
- try:
- return bool(eval(x))
- except TypeError:
- return default
-
-
-def reset_logging():
- root = logging.getLogger()
- for handler in root.handlers:
- root.removeHandler(handler)
- root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper())
- handler = logging.StreamHandler(sys.stdout)
- handler.setFormatter(
- logging.Formatter(
- fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- )
- )
- root.addHandler(handler)
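apply_to_sample is the workhorse behind move_to_cuda, move_to_cpu and move_to_tpu above: it recurses through dicts, lists, tuples and sets and applies a function to every tensor it finds, leaving other values untouched. A small sketch of that behaviour (and of the float16-to-float32 upcast in move_to_cpu), assuming only that torch and this fairseq module are importable:

    import torch
    from fairseq.utils import apply_to_sample, move_to_cpu

    sample = {
        "net_input": {
            "src_tokens": torch.ones(2, 3, dtype=torch.float16),
            "src_lengths": [torch.tensor(3), torch.tensor(2)],
        },
        "id": 7,  # non-tensor values pass through unchanged
    }

    # Count the tensors in the nested sample.
    seen = []
    apply_to_sample(lambda t: seen.append(t) or t, sample)
    print(len(seen))  # 3

    # move_to_cpu upcasts half-precision tensors, which CPU ops handle poorly.
    cpu_sample = move_to_cpu(sample)
    print(cpu_sample["net_input"]["src_tokens"].dtype)  # torch.float32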
diff --git a/spaces/gradio/HuBERT/tests/test_noising.py b/spaces/gradio/HuBERT/tests/test_noising.py
deleted file mode 100644
index b3d0d123c42eaca6f79371aa268049e668fcfcce..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/tests/test_noising.py
+++ /dev/null
@@ -1,530 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-from typing import Dict, List
-
-import tests.utils as test_utils
-import torch
-from fairseq import utils
-from fairseq.data import (
- Dictionary,
- LanguagePairDataset,
- TransformEosDataset,
- data_utils,
- noising,
-)
-
-
-class TestDataNoising(unittest.TestCase):
- def _get_test_data_with_bpe_cont_marker(self, append_eos=True):
- """
- Args:
- append_eos: if True, each input sentence in the source tokens tensor
- will have an EOS appended to the end.
-
- Returns:
- vocabs: BPE vocab with continuation markers as suffixes to denote
- non-end of word tokens. This is the standard BPE format used in
- fairseq's preprocessing.
- x: input tensor containing numberized source tokens, with EOS at the
- end if append_eos is true
- src_lengths: and source lengths.
- """
- vocab = Dictionary()
- vocab.add_symbol("he@@")
- vocab.add_symbol("llo")
- vocab.add_symbol("how")
- vocab.add_symbol("are")
- vocab.add_symbol("y@@")
- vocab.add_symbol("ou")
- vocab.add_symbol("n@@")
- vocab.add_symbol("ew")
- vocab.add_symbol("or@@")
- vocab.add_symbol("k")
-
- src_tokens = [
- ["he@@", "llo", "n@@", "ew", "y@@", "or@@", "k"],
- ["how", "are", "y@@", "ou"],
- ]
- x, src_lengths = x, src_lengths = self._convert_src_tokens_to_tensor(
- vocab=vocab, src_tokens=src_tokens, append_eos=append_eos
- )
- return vocab, x, src_lengths
-
- def _get_test_data_with_bpe_end_marker(self, append_eos=True):
- """
- Args:
- append_eos: if True, each input sentence in the source tokens tensor
- will have an EOS appended to the end.
-
- Returns:
- vocabs: BPE vocab with end-of-word markers as suffixes to denote
- tokens at the end of a word. This is an alternative to fairseq's
- standard preprocessing framework and is not generally supported
- within fairseq.
- x: input tensor containing numberized source tokens, with EOS at the
- end if append_eos is true
- src_lengths: and source lengths.
- """
- vocab = Dictionary()
- vocab.add_symbol("he")
- vocab.add_symbol("llo_EOW")
- vocab.add_symbol("how_EOW")
- vocab.add_symbol("are_EOW")
- vocab.add_symbol("y")
- vocab.add_symbol("ou_EOW")
- vocab.add_symbol("n")
- vocab.add_symbol("ew_EOW")
- vocab.add_symbol("or")
- vocab.add_symbol("k_EOW")
-
- src_tokens = [
- ["he", "llo_EOW", "n", "ew_EOW", "y", "or", "k_EOW"],
- ["how_EOW", "are_EOW", "y", "ou_EOW"],
- ]
- x, src_lengths = x, src_lengths = self._convert_src_tokens_to_tensor(
- vocab=vocab, src_tokens=src_tokens, append_eos=append_eos
- )
- return vocab, x, src_lengths
-
- def _get_test_data_with_word_vocab(self, append_eos=True):
- """
- Args:
- append_eos: if True, each input sentence in the source tokens tensor
- will have an EOS appended to the end.
-
- Returns:
- vocabs: word vocab
- x: input tensor containing numberized source tokens, with EOS at the
- end if append_eos is true
- src_lengths: and source lengths.
- """
- vocab = Dictionary()
-
- vocab.add_symbol("hello")
- vocab.add_symbol("how")
- vocab.add_symbol("are")
- vocab.add_symbol("you")
- vocab.add_symbol("new")
- vocab.add_symbol("york")
- src_tokens = [
- ["hello", "new", "york", "you"],
- ["how", "are", "you", "new", "york"],
- ]
- x, src_lengths = self._convert_src_tokens_to_tensor(
- vocab=vocab, src_tokens=src_tokens, append_eos=append_eos
- )
- return vocab, x, src_lengths
-
- def _convert_src_tokens_to_tensor(
- self, vocab: Dictionary, src_tokens: List[List[str]], append_eos: bool
- ):
- src_len = [len(x) for x in src_tokens]
- # If we have to append EOS, we include EOS in counting src length
- if append_eos:
- src_len = [length + 1 for length in src_len]
-
- x = torch.LongTensor(len(src_tokens), max(src_len)).fill_(vocab.pad())
- for i in range(len(src_tokens)):
- for j in range(len(src_tokens[i])):
- x[i][j] = vocab.index(src_tokens[i][j])
- if append_eos:
- x[i][j + 1] = vocab.eos()
-
- x = x.transpose(1, 0)
- return x, torch.LongTensor(src_len)
-
- def assert_eos_at_end(self, x, x_len, eos):
- """Asserts last token of every sentence in x is EOS """
- for i in range(len(x_len)):
- self.assertEqual(
- x[x_len[i] - 1][i],
- eos,
- (
- "Expected eos (token id {eos}) at the end of sentence {i} "
- "but got {other} instead"
- ).format(i=i, eos=eos, other=x[x_len[i] - 1][i]),
- )
-
- def assert_word_dropout_correct(self, x, x_noised, x_len, l_noised):
- # Expect only the first word (2 bpe tokens) of the first example
- # was dropped out
- self.assertEqual(x_len[0] - 2, l_noised[0])
- for i in range(l_noised[0]):
- self.assertEqual(x_noised[i][0], x[i + 2][0])
-
- def test_word_dropout_with_eos(self):
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=True)
-
- with data_utils.numpy_seed(1234):
- noising_gen = noising.WordDropout(vocab)
- x_noised, l_noised = noising_gen.noising(x, x_len, 0.2)
- self.assert_word_dropout_correct(
- x=x, x_noised=x_noised, x_len=x_len, l_noised=l_noised
- )
- self.assert_eos_at_end(x=x_noised, x_len=l_noised, eos=vocab.eos())
-
- def assert_word_blanking_correct(self, x, x_noised, x_len, l_noised, unk):
- # Expect only the first word (2 bpe tokens) of the first example
- # was blanked out
- self.assertEqual(x_len[0], l_noised[0])
- for i in range(l_noised[0]):
- if i < 2:
- self.assertEqual(x_noised[i][0], unk)
- else:
- self.assertEqual(x_noised[i][0], x[i][0])
-
- def test_word_blank_with_eos(self):
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=True)
-
- with data_utils.numpy_seed(1234):
- noising_gen = noising.WordDropout(vocab)
- x_noised, l_noised = noising_gen.noising(x, x_len, 0.2, vocab.unk())
- self.assert_word_blanking_correct(
- x=x, x_noised=x_noised, x_len=x_len, l_noised=l_noised, unk=vocab.unk()
- )
- self.assert_eos_at_end(x=x_noised, x_len=l_noised, eos=vocab.eos())
-
- def generate_unchanged_shuffle_map(self, length):
- return {i: i for i in range(length)}
-
- def assert_word_shuffle_matches_expected(
- self,
- x,
- x_len,
- max_shuffle_distance: int,
- vocab: Dictionary,
- expected_shufle_maps: List[Dict[int, int]],
- expect_eos_at_end: bool,
- bpe_end_marker=None,
- ):
- """
- This verifies that with a given x, x_len, max_shuffle_distance, and
- vocab, we get the expected shuffle result.
-
- Args:
- x: Tensor of shape (T x B) = (sequence_length, batch_size)
- x_len: Tensor of length B = batch_size
- max_shuffle_distance: arg to pass to noising
- expected_shuffle_maps: List[mapping] where mapping is a
- Dict[old_index, new_index], mapping x's elements from their
- old positions in x to their new positions in x.
- expect_eos_at_end: if True, check the output to make sure there is
- an EOS at the end.
- bpe_end_marker: str denoting the BPE end token. If this is not None, we
- set the BPE cont token to None in the noising classes.
- """
- bpe_cont_marker = None
- if bpe_end_marker is None:
- bpe_cont_marker = "@@"
-
- with data_utils.numpy_seed(1234):
- word_shuffle = noising.WordShuffle(
- vocab, bpe_cont_marker=bpe_cont_marker, bpe_end_marker=bpe_end_marker
- )
- x_noised, l_noised = word_shuffle.noising(
- x, x_len, max_shuffle_distance=max_shuffle_distance
- )
-
- # For every example, we have a different expected shuffle map. We check
- # that each example is shuffled as expected according to each
- # corresponding shuffle map.
- for i in range(len(expected_shufle_maps)):
- shuffle_map = expected_shufle_maps[i]
- for k, v in shuffle_map.items():
- self.assertEqual(x[k][i], x_noised[v][i])
-
- # Shuffling should not affect the length of each example
- for pre_shuffle_length, post_shuffle_length in zip(x_len, l_noised):
- self.assertEqual(pre_shuffle_length, post_shuffle_length)
- if expect_eos_at_end:
- self.assert_eos_at_end(x=x_noised, x_len=l_noised, eos=vocab.eos())
-
- def test_word_shuffle_with_eos(self):
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=True)
-
- # Assert word shuffle with max shuffle distance 0 causes input to be
- # unchanged
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- max_shuffle_distance=0,
- vocab=vocab,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(example_len)
- for example_len in x_len
- ],
- expect_eos_at_end=True,
- )
-
- # Assert word shuffle with max shuffle distance 3 matches our expected
- # shuffle order
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- vocab=vocab,
- max_shuffle_distance=3,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(x_len[0]),
- {0: 0, 1: 3, 2: 1, 3: 2},
- ],
- expect_eos_at_end=True,
- )
-
- def test_word_shuffle_with_eos_nonbpe(self):
- """The purpose of this is to test shuffling logic with word vocabs"""
- vocab, x, x_len = self._get_test_data_with_word_vocab(append_eos=True)
-
- # Assert word shuffle with max shuffle distance 0 causes input to be
- # unchanged
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- max_shuffle_distance=0,
- vocab=vocab,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(example_len)
- for example_len in x_len
- ],
- expect_eos_at_end=True,
- )
-
- # Assert word shuffle with max shuffle distance 3 matches our expected
- # shuffle order
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- vocab=vocab,
- max_shuffle_distance=3,
- expected_shufle_maps=[
- {0: 0, 1: 1, 2: 3, 3: 2},
- {0: 0, 1: 2, 2: 1, 3: 3, 4: 4},
- ],
- expect_eos_at_end=True,
- )
-
- def test_word_shuffle_without_eos(self):
- """Same result as word shuffle with eos except no EOS at end"""
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=False)
-
- # Assert word shuffle with max shuffle distance 0 causes input to be
- # unchanged
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- max_shuffle_distance=0,
- vocab=vocab,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(example_len)
- for example_len in x_len
- ],
- expect_eos_at_end=False,
- )
-
- # Assert word shuffle with max shuffle distance 3 matches our expected
- # shuffle order
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- vocab=vocab,
- max_shuffle_distance=3,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(x_len[0]),
- {0: 0, 1: 3, 2: 1, 3: 2},
- ],
- expect_eos_at_end=False,
- )
-
- def test_word_shuffle_without_eos_with_bpe_end_marker(self):
- """Same result as word shuffle without eos except using BPE end token"""
- vocab, x, x_len = self._get_test_data_with_bpe_end_marker(append_eos=False)
-
- # Assert word shuffle with max shuffle distance 0 causes input to be
- # unchanged
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- max_shuffle_distance=0,
- vocab=vocab,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(example_len)
- for example_len in x_len
- ],
- expect_eos_at_end=False,
- bpe_end_marker="_EOW",
- )
-
- # Assert word shuffle with max shuffle distance 3 matches our expected
- # shuffle order
- self.assert_word_shuffle_matches_expected(
- x=x,
- x_len=x_len,
- vocab=vocab,
- max_shuffle_distance=3,
- expected_shufle_maps=[
- self.generate_unchanged_shuffle_map(x_len[0]),
- {0: 0, 1: 3, 2: 1, 3: 2},
- ],
- expect_eos_at_end=False,
- bpe_end_marker="_EOW",
- )
-
- def assert_no_eos_at_end(self, x, x_len, eos):
- """Asserts that the last token of each sentence in x is not EOS """
- for i in range(len(x_len)):
- self.assertNotEqual(
- x[x_len[i] - 1][i],
- eos,
- "Expected no eos (token id {eos}) at the end of sentence {i}.".format(
- eos=eos, i=i
- ),
- )
-
- def test_word_dropout_without_eos(self):
- """Same result as word dropout with eos except no EOS at end"""
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=False)
-
- with data_utils.numpy_seed(1234):
- noising_gen = noising.WordDropout(vocab)
- x_noised, l_noised = noising_gen.noising(x, x_len, 0.2)
- self.assert_word_dropout_correct(
- x=x, x_noised=x_noised, x_len=x_len, l_noised=l_noised
- )
- self.assert_no_eos_at_end(x=x_noised, x_len=l_noised, eos=vocab.eos())
-
- def test_word_blank_without_eos(self):
- """Same result as word blank with eos except no EOS at end"""
- vocab, x, x_len = self._get_test_data_with_bpe_cont_marker(append_eos=False)
-
- with data_utils.numpy_seed(1234):
- noising_gen = noising.WordDropout(vocab)
- x_noised, l_noised = noising_gen.noising(x, x_len, 0.2, vocab.unk())
- self.assert_word_blanking_correct(
- x=x, x_noised=x_noised, x_len=x_len, l_noised=l_noised, unk=vocab.unk()
- )
- self.assert_no_eos_at_end(x=x_noised, x_len=l_noised, eos=vocab.eos())
-
- def _get_noising_dataset_batch(
- self,
- src_tokens_no_pad,
- src_dict,
- append_eos_to_tgt=False,
- ):
- """
- Constructs a NoisingDataset and the corresponding
- ``LanguagePairDataset(NoisingDataset(src), src)``. If
- *append_eos_to_tgt* is True, wrap the source dataset in
- :class:`TransformEosDataset` to append EOS to the clean source when
- using it as the target.
- """
- src_dataset = test_utils.TestDataset(data=src_tokens_no_pad)
-
- noising_dataset = noising.NoisingDataset(
- src_dataset=src_dataset,
- src_dict=src_dict,
- seed=1234,
- max_word_shuffle_distance=3,
- word_dropout_prob=0.2,
- word_blanking_prob=0.2,
- noising_class=noising.UnsupervisedMTNoising,
- )
- tgt = src_dataset
- language_pair_dataset = LanguagePairDataset(
- src=noising_dataset, tgt=tgt, src_sizes=None, src_dict=src_dict
- )
- language_pair_dataset = TransformEosDataset(
- language_pair_dataset,
- src_dict.eos(),
- append_eos_to_tgt=append_eos_to_tgt,
- )
-
- dataloader = torch.utils.data.DataLoader(
- dataset=language_pair_dataset,
- batch_size=2,
- collate_fn=language_pair_dataset.collater,
- )
- denoising_batch_result = next(iter(dataloader))
- return denoising_batch_result
-
- def test_noising_dataset_with_eos(self):
- src_dict, src_tokens, _ = self._get_test_data_with_bpe_cont_marker(
- append_eos=True
- )
-
- # Format data for src_dataset
- src_tokens = torch.t(src_tokens)
- src_tokens_no_pad = []
- for src_sentence in src_tokens:
- src_tokens_no_pad.append(
- utils.strip_pad(tensor=src_sentence, pad=src_dict.pad())
- )
- denoising_batch_result = self._get_noising_dataset_batch(
- src_tokens_no_pad=src_tokens_no_pad, src_dict=src_dict
- )
-
- eos, pad = src_dict.eos(), src_dict.pad()
-
- # Generated noisy source as source
- expected_src = torch.LongTensor(
- [[4, 5, 10, 11, 8, 12, 13, eos], [pad, pad, pad, 6, 8, 9, 7, eos]]
- )
- # Original clean source as target (right-padded)
- expected_tgt = torch.LongTensor(
- [[4, 5, 10, 11, 8, 12, 13, eos], [6, 7, 8, 9, eos, pad, pad, pad]]
- )
- generated_src = denoising_batch_result["net_input"]["src_tokens"]
- tgt_tokens = denoising_batch_result["target"]
-
- self.assertTensorEqual(expected_src, generated_src)
- self.assertTensorEqual(expected_tgt, tgt_tokens)
-
- def test_noising_dataset_without_eos(self):
- """
- Similar to test noising dataset with eos except that we have to set
- *append_eos_to_tgt* to ``True``.
- """
-
- src_dict, src_tokens, _ = self._get_test_data_with_bpe_cont_marker(
- append_eos=False
- )
-
- # Format data for src_dataset
- src_tokens = torch.t(src_tokens)
- src_tokens_no_pad = []
- for src_sentence in src_tokens:
- src_tokens_no_pad.append(
- utils.strip_pad(tensor=src_sentence, pad=src_dict.pad())
- )
- denoising_batch_result = self._get_noising_dataset_batch(
- src_tokens_no_pad=src_tokens_no_pad,
- src_dict=src_dict,
- append_eos_to_tgt=True,
- )
-
- eos, pad = src_dict.eos(), src_dict.pad()
-
- # Generated noisy source as source
- expected_src = torch.LongTensor(
- [[4, 5, 10, 11, 8, 12, 13], [pad, pad, pad, 6, 8, 9, 7]]
- )
- # Original clean source as target (right-padded)
- expected_tgt = torch.LongTensor(
- [[4, 5, 10, 11, 8, 12, 13, eos], [6, 7, 8, 9, eos, pad, pad, pad]]
- )
-
- generated_src = denoising_batch_result["net_input"]["src_tokens"]
- tgt_tokens = denoising_batch_result["target"]
-
- self.assertTensorEqual(expected_src, generated_src)
- self.assertTensorEqual(expected_tgt, tgt_tokens)
-
- def assertTensorEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertEqual(t1.ne(t2).long().sum(), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
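The tests above always follow the same pattern: seed numpy via data_utils.numpy_seed, build a (T x B) token tensor, and call a noiser on it. A minimal sketch of that pattern outside the unittest harness, assuming a small word-level Dictionary like the one built in _get_test_data_with_word_vocab:

    import torch
    from fairseq.data import Dictionary, data_utils, noising

    vocab = Dictionary()
    for w in ["hello", "how", "are", "you"]:
        vocab.add_symbol(w)

    # One sentence, shaped (T x B) = (5, 1), ending in EOS.
    ids = [vocab.index(w) for w in ["hello", "how", "are", "you"]] + [vocab.eos()]
    x = torch.LongTensor(ids).unsqueeze(1)
    x_len = torch.LongTensor([len(ids)])

    with data_utils.numpy_seed(1234):
        dropout = noising.WordDropout(vocab)
        x_dropped, len_dropped = dropout.noising(x, x_len, 0.2)               # drop whole words
        x_blanked, len_blanked = dropout.noising(x, x_len, 0.2, vocab.unk())  # blank them with unk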
diff --git a/spaces/gradio/theme_builder/run.py b/spaces/gradio/theme_builder/run.py
deleted file mode 100644
index 3d089dbf28154b49b3c02ce781294542015c31e4..0000000000000000000000000000000000000000
--- a/spaces/gradio/theme_builder/run.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import gradio as gr
-
-demo = gr.themes.builder
-
-if __name__ == "__main__":
- demo()
\ No newline at end of file
diff --git a/spaces/greatMLideas/Realstate/api_call.py b/spaces/greatMLideas/Realstate/api_call.py
deleted file mode 100644
index 55772bfa9cdc7044c17e7df8f30308b9f2061e1f..0000000000000000000000000000000000000000
--- a/spaces/greatMLideas/Realstate/api_call.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import os
-import json
-import requests
-import urllib.parse
-from fastapi import FastAPI, File, UploadFile, Form
-# from Zillow_Scraper.selenium_wrapper import search
-
-API_token = os.environ['API_token']
-# eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ1MzRmanRmdmZwNnplcmJldHEzcnpienExZ2l6b201dyIsImlzcyI6ImRhdGFmaW5pdGkuY28ifQ.CONEaG0VEABopROdGvMWy3sBv0X8dB2rAUhijXEKRdRgAbII-zbX6FIZsV7pZ72M5BbaREhjOi9eU-5NKpvhiWnU8cjlrx0AmHRNrIBrHmPvoFE9IFeIN1pYS9nOvU6CctQB6ZvmoEMpF_VYVKwkBkmUNmY1UPF6TdINkdWg9ym0pC1TIhch4WM5akuHwlJXirzbV07SRrijnQUT1DRpt7_igbwRSt8smFUWejFDJnbxAaMeqsLHo7Trb3FgkBLAEHDdA7CgG2HONsAvOEKnf7hwpp3_mfziZ_uYO369wbfxkYIn6WnMqHYjjC6viD0b5zZI-hjZgT-87CUetWj3Oqs_B37JzqiHm0rxRHxlfFP_IvpqS263tjdnMgSUmsETy62W_cXAuk_Wn45nGiPfM26HclcZxBieYuCOFNojdZE-1apMehtXAgN0JEN29y5JZGBoSPHPZQKfvfAGEHlh-Ty3yZRiiPaQ2m4BLilZN2bB_tA0c8s7RXeHWph_t_GH5xoLO0PB_fZa_RrUmg-Cc2wsjTr59rYH-uGWpfhE-7Qa1du1tZYE4PpRqTq7KysAZzfuAgkhpJzpXwDAnuEsRDSGdhbteKnRIKajiEw8jt9KW0OnXEDOxx_cU9YrPmDvJIdvQjKKYz1Tuptce7ZmxXaHYoOtzN1I1aKpGQe1UJw
-format = 'JSON'
-# query = 'country:US AND numRoom:3 AND numFloor:1 AND postalCode:75231 AND "prices":{"amountMax": 289000}'
-download = False
-request_headers = {
- 'Authorization': 'Bearer ' + API_token,
- 'Content-Type': 'application/json',
-}
-def send_request (budget,zip_code):
- # query = "country:US AND numRoom:3 AND numFloor:1 AND postalCode:75231 AND {prices.amountMin:>=100} AND {prices.amountMax:<={param}} AND {prices.currency:USD} "
- query = "country:US AND numRoom:3 AND numFloor:1 AND postalCode:"+f"{zip_code}"+" AND {prices.amountMin:>=100} AND {prices.amountMax:<="+ f"{budget}"+"} AND {prices.currency:USD} "
- num_records = 7
- request_data = {
- 'query': query,
- 'format': format,
- 'num_records': num_records,
- 'download': download,
- }
- # Make the API call.
- r = requests.post('https://api.datafiniti.co/v4/properties/search',json=request_data,headers=request_headers)
- if r.status_code == 200:
-        out_data = json.loads(r.content.decode('utf8'))
-        print("Found results: ", out_data['num_found'])
- return out_data['num_found'],out_data['records']
- else:
- print('Request failed')
- return None,None
-
-
-# _,o = send_request (8000000,19701)
-# print(o)
-
-# def send_zillow_request (budget,zip_code):
-# inf = search(zipcode=zip_code, budget=budget, for_sale = "rent")
-# return 1, inf
-
-# # r = requests.post('https://api.datafiniti.co/v4/properties/search',json=request_data,headers=request_headers)
-# # if r.status_code == 200:
-# # out_data = json.loads(r.content.decode('utf8').replace('"', '\"'))
-# # print("Founded result: ",out_data['num_found'])
-# # return out_data['num_found'],out_data['records']
-# # else:
-# # print('Request failed')
-# # return None,None
-
-
-
-
-# ## 1. Health Check
-# url = "https://api.datafiniti.co/v4/health"
-# response = requests.request("GET", url, headers=[])
-# print(response.text)
-
-# ## 2. Authentication
-
-# url = "https://api.datafiniti.co/v4/auth"
-# header= {"key": "Content-Type",
-# "name": "Content-Type",
-# "type": "text",
-# "value": "application/json"
-# }
-# # payload = json.dumps({
-# body = {"mode": "raw","raw": "{\n\t\"email\": \"ar@playpingpong.co\",\n\t\"password\": \"p@ssWord123\"\n\t\n}"}
-# response = requests.request("POST", url, headers=header, data=body)
-# print(response.text)
-
-# query = "country:US AND numRoom:3 AND numFloor:1 AND postalCode:75231 AND {prices.amountMin:>=100} AND {prices.amountMax:<= "+ f"{param}"+ "} AND {prices.currency:USD} "
-
-
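send_request above builds a Datafiniti property query from a budget and a ZIP code and returns the match count plus the matching records. A minimal sketch of calling it, assuming the API_token environment variable is set and reusing the sample values from the commented-out call in the file (values are illustrative only):

# minimal sketch; requires API_token in the environment
num_found, records = send_request(8000000, 19701)
if num_found is not None:
    print(num_found, len(records))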
diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/scripts/extract_subimages.py b/spaces/guetLzy/Real-ESRGAN-Demo/scripts/extract_subimages.py
deleted file mode 100644
index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000
--- a/spaces/guetLzy/Real-ESRGAN-Demo/scripts/extract_subimages.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import argparse
-import cv2
-import numpy as np
-import os
-import sys
-from basicsr.utils import scandir
-from multiprocessing import Pool
-from os import path as osp
-from tqdm import tqdm
-
-
-def main(args):
- """A multi-thread tool to crop large images to sub-images for faster IO.
-
- opt (dict): Configuration dict. It contains:
- n_thread (int): Thread number.
- compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size
- and longer compression time. Use 0 for faster CPU decompression. Default: 3, same in cv2.
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
-
- Usage:
- For each folder, run this script.
- Typically, there are GT folder and LQ folder to be processed for DIV2K dataset.
- After process, each sub_folder should have the same number of subimages.
- Remember to modify opt configurations according to your settings.
- """
-
- opt = {}
- opt['n_thread'] = args.n_thread
- opt['compression_level'] = args.compression_level
- opt['input_folder'] = args.input
- opt['save_folder'] = args.output
- opt['crop_size'] = args.crop_size
- opt['step'] = args.step
- opt['thresh_size'] = args.thresh_size
- extract_subimages(opt)
-
-
-def extract_subimages(opt):
- """Crop images to subimages.
-
- Args:
- opt (dict): Configuration dict. It contains:
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- n_thread (int): Thread number.
- """
- input_folder = opt['input_folder']
- save_folder = opt['save_folder']
- if not osp.exists(save_folder):
- os.makedirs(save_folder)
- print(f'mkdir {save_folder} ...')
- else:
- print(f'Folder {save_folder} already exists. Exit.')
- sys.exit(1)
-
- # scan all images
- img_list = list(scandir(input_folder, full_path=True))
-
- pbar = tqdm(total=len(img_list), unit='image', desc='Extract')
- pool = Pool(opt['n_thread'])
- for path in img_list:
- pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1))
- pool.close()
- pool.join()
- pbar.close()
- print('All processes done.')
-
-
-def worker(path, opt):
- """Worker for each process.
-
- Args:
- path (str): Image path.
- opt (dict): Configuration dict. It contains:
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
- save_folder (str): Path to save folder.
- compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION.
-
- Returns:
- process_info (str): Process information displayed in progress bar.
- """
- crop_size = opt['crop_size']
- step = opt['step']
- thresh_size = opt['thresh_size']
- img_name, extension = osp.splitext(osp.basename(path))
-
- # remove the x2, x3, x4 and x8 in the filename for DIV2K
- img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '')
-
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
-
- h, w = img.shape[0:2]
- h_space = np.arange(0, h - crop_size + 1, step)
- if h - (h_space[-1] + crop_size) > thresh_size:
- h_space = np.append(h_space, h - crop_size)
- w_space = np.arange(0, w - crop_size + 1, step)
- if w - (w_space[-1] + crop_size) > thresh_size:
- w_space = np.append(w_space, w - crop_size)
-
- index = 0
- for x in h_space:
- for y in w_space:
- index += 1
- cropped_img = img[x:x + crop_size, y:y + crop_size, ...]
- cropped_img = np.ascontiguousarray(cropped_img)
- cv2.imwrite(
- osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img,
- [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']])
- process_info = f'Processing {img_name} ...'
- return process_info
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
- parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder')
- parser.add_argument('--crop_size', type=int, default=480, help='Crop size')
- parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window')
- parser.add_argument(
- '--thresh_size',
- type=int,
- default=0,
- help='Threshold size. Patches whose size is lower than thresh_size will be dropped.')
- parser.add_argument('--n_thread', type=int, default=20, help='Thread number.')
- parser.add_argument('--compression_level', type=int, default=3, help='Compression level')
- args = parser.parse_args()
-
- main(args)
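To make the sliding-window logic in worker concrete, here is a small standalone check of how h_space is built with the default crop_size=480, step=240 and thresh_size=0 for a hypothetical 1000-pixel-tall image (the numbers are illustrative only):

import numpy as np

h, crop_size, step, thresh_size = 1000, 480, 240, 0
h_space = np.arange(0, h - crop_size + 1, step)       # [0, 240, 480]
if h - (h_space[-1] + crop_size) > thresh_size:
    h_space = np.append(h_space, h - crop_size)       # append 520 so the bottom edge is covered
print(h_space)  # [  0 240 480 520]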
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/3dface2idr.py b/spaces/gwang-kim/DATID-3D/pose_estimation/3dface2idr.py
deleted file mode 100644
index be20f705ecfffdd0c4973d49ea2c1a5446b8e3f3..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/3dface2idr.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import numpy as np
-import os
-import torch
-import json
-import argparse
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--in_root', type=str, default="", help='process folder')
- parser.add_argument('--out_root', type=str, default="output", help='output folder')
- args = parser.parse_args()
- in_root = args.in_root
-
- def compute_rotation(angles):
- """
- Return:
- rot -- torch.tensor, size (B, 3, 3) pts @ trans_mat
-
- Parameters:
- angles -- torch.tensor, size (B, 3), radian
- """
-
- batch_size = angles.shape[0]
- ones = torch.ones([batch_size, 1])
- zeros = torch.zeros([batch_size, 1])
- x, y, z = angles[:, :1], angles[:, 1:2], angles[:, 2:],
-
- rot_x = torch.cat([
- ones, zeros, zeros,
- zeros, torch.cos(x), -torch.sin(x),
- zeros, torch.sin(x), torch.cos(x)
- ], dim=1).reshape([batch_size, 3, 3])
-
- rot_y = torch.cat([
- torch.cos(y), zeros, torch.sin(y),
- zeros, ones, zeros,
- -torch.sin(y), zeros, torch.cos(y)
- ], dim=1).reshape([batch_size, 3, 3])
-
- rot_z = torch.cat([
- torch.cos(z), -torch.sin(z), zeros,
- torch.sin(z), torch.cos(z), zeros,
- zeros, zeros, ones
- ], dim=1).reshape([batch_size, 3, 3])
-
- rot = rot_z @ rot_y @ rot_x
- return rot.permute(0, 2, 1)[0]
-
- npys = sorted([x for x in os.listdir(in_root) if x.endswith(".npy")])
-
- mode = 1 #1 = IDR, 2 = LSX
- outAll={}
-
- for src_filename in npys:
- src = os.path.join(in_root, src_filename)
-
- print(src)
- dict_load=np.load(src, allow_pickle=True)
-
- angle = dict_load.item()['angle']
- trans = dict_load.item()['trans'][0]
- R = compute_rotation(torch.from_numpy(angle)).numpy()
-
- trans[2] += -10
- c = -np.dot(R, trans)
- pose = np.eye(4)
- pose[:3, :3] = R
-
- c *= 0.27 # factor to match tripleganger
- c[1] += 0.006 # offset to align to tripleganger
- c[2] += 0.161 # offset to align to tripleganger
-        c = c/np.linalg.norm(c)*2.7 ## yiqian taught me to put this on the hemisphere
- pose[0,3] = c[0]
- pose[1,3] = c[1]
- pose[2,3] = c[2]
-
- focal = 2985.29 # = 1015*1024/224*(300/466.285)#
- pp = 512#112
- w = 1024#224
- h = 1024#224
-
- if mode==1:
- count = 0
- K = np.eye(3)
- K[0][0] = focal
- K[1][1] = focal
- K[0][2] = w/2.0
- K[1][2] = h/2.0
- K = K.tolist()
-
- Rot = np.eye(3)
- Rot[0, 0] = 1
- Rot[1, 1] = -1
- Rot[2, 2] = -1
-
-
-Don't forget, you can also download Halo 4 for PC as a wallpaper to decorate your computer desktop. Halo CE, a full PC game in Spanish for low-spec machines, is available to download in a single link and play online. This version of Halo Custom Edition is a compact build of the original Halo Combat Evolved, created by the Halo community to play new user-made content such as new maps, weapons and vehicles. Live the HALO experience with Halo Reach, the first installment of Halo: The Master Chief Collection. Today we bring the free single-link PC download of Halo 2, a first-person shooter developed by Bungie Studios.
-
-Operating System: Windows 2000/XP/Vista/7 Processor: Pentium 733 MHz or better RAM: 128 MB Hard Drive: 1.2 GB Video card: 32/64 MB T&L A 56.6 Kbps modem to play online
-
-Download [PDF/EPUB] ALLENAMENTO: 3 ... Buy the eBook ALLENAMENTO: 3 LIBRI IN 1: ... ottimale (paperback) by Jürgen Weineck - Calzetti. 4d29de3e1b
-
-
-
-
- dst = os.path.join(in_root, src_filename.replace(".npy", "_lscam.txt"))
- outCam = open(dst, "w")
- outCam.write("#focal length\n")
- outCam.write(str(focal) + " " + str(focal) + "\n")
-
- outCam.write("#principal point\n")
- outCam.write(str(pp) + " " + str(pp) + "\n")
-
- outCam.write("#resolution\n")
- outCam.write(str(w) + " " + str(h) + "\n")
-
- outCam.write("#distortion coeffs\n")
- outCam.write("0 0 0 0\n")
-
-
- outCam.write("MATRIX :\n")
- for r in range(4):
- outCam.write(str(pose[r, 0]) + " " + str(pose[r, 1]) + " " + str(pose[r, 2]) + " " + str(pose[r, 3]) + "\n")
-
- outCam.close()
-
- if mode == 1:
- dst = os.path.join(args.out_root, "cameras.json")
- with open(dst, "w") as outfile:
- json.dump(outAll, outfile, indent=4)
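As a quick sanity check of the Euler-angle convention used by compute_rotation above, the x-axis block can be reproduced with plain numpy: a 90-degree rotation about x should carry the y axis onto the z axis. The snippet below is purely illustrative and independent of the script:

import numpy as np

a = np.pi / 2  # 90 degrees, in radians as expected by compute_rotation
rot_x = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(a), -np.sin(a)],
                  [0.0, np.sin(a), np.cos(a)]])
print(rot_x @ np.array([0.0, 1.0, 0.0]))  # ~ [0, 0, 1]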
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.h b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.h
deleted file mode 100644
index e9a3a7d95a5af4a808a25097cc055b699024409e..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/glutil.h
+++ /dev/null
@@ -1,113 +0,0 @@
-// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#pragma once
-
-//------------------------------------------------------------------------
-// Windows-specific headers and types.
-//------------------------------------------------------------------------
-
-#ifdef _WIN32
-#define NOMINMAX
-#include <windows.h> // Required by gl.h in Windows.
-#define GLAPIENTRY APIENTRY
-
-struct GLContext
-{
- HDC hdc;
- HGLRC hglrc;
- int extInitialized;
-};
-
-#endif // _WIN32
-
-//------------------------------------------------------------------------
-// Linux-specific headers and types.
-//------------------------------------------------------------------------
-
-#ifdef __linux__
-#define EGL_NO_X11 // X11/Xlib.h has "#define Status int" which breaks Tensorflow. Avoid it.
-#define MESA_EGL_NO_X11_HEADERS
-#include <EGL/egl.h>
-#include <EGL/eglext.h>
-#define GLAPIENTRY
-
-struct GLContext
-{
- EGLDisplay display;
- EGLContext context;
- int extInitialized;
-};
-
-#endif // __linux__
-
-//------------------------------------------------------------------------
-// OpenGL, CUDA interop, GL extensions.
-//------------------------------------------------------------------------
-#define GL_GLEXT_LEGACY
-#include <GL/gl.h>
-#include <cuda_gl_interop.h>
-
-// Constants.
-#ifndef GL_VERSION_1_2
-#define GL_CLAMP_TO_EDGE 0x812F
-#define GL_TEXTURE_3D 0x806F
-#endif
-#ifndef GL_VERSION_1_5
-#define GL_ARRAY_BUFFER 0x8892
-#define GL_DYNAMIC_DRAW 0x88E8
-#define GL_ELEMENT_ARRAY_BUFFER 0x8893
-#endif
-#ifndef GL_VERSION_2_0
-#define GL_FRAGMENT_SHADER 0x8B30
-#define GL_INFO_LOG_LENGTH 0x8B84
-#define GL_LINK_STATUS 0x8B82
-#define GL_VERTEX_SHADER 0x8B31
-#endif
-#ifndef GL_VERSION_3_0
-#define GL_MAJOR_VERSION 0x821B
-#define GL_MINOR_VERSION 0x821C
-#define GL_RGBA32F 0x8814
-#define GL_TEXTURE_2D_ARRAY 0x8C1A
-#endif
-#ifndef GL_VERSION_3_2
-#define GL_GEOMETRY_SHADER 0x8DD9
-#endif
-#ifndef GL_ARB_framebuffer_object
-#define GL_COLOR_ATTACHMENT0 0x8CE0
-#define GL_COLOR_ATTACHMENT1 0x8CE1
-#define GL_DEPTH_STENCIL 0x84F9
-#define GL_DEPTH_STENCIL_ATTACHMENT 0x821A
-#define GL_DEPTH24_STENCIL8 0x88F0
-#define GL_FRAMEBUFFER 0x8D40
-#define GL_INVALID_FRAMEBUFFER_OPERATION 0x0506
-#define GL_UNSIGNED_INT_24_8 0x84FA
-#endif
-#ifndef GL_ARB_imaging
-#define GL_TABLE_TOO_LARGE 0x8031
-#endif
-#ifndef GL_KHR_robustness
-#define GL_CONTEXT_LOST 0x0507
-#endif
-
-// Declare function pointers to OpenGL extension functions.
-#define GLUTIL_EXT(return_type, name, ...) extern return_type (GLAPIENTRY* name)(__VA_ARGS__);
-#include "glutil_extlist.h"
-#undef GLUTIL_EXT
-
-//------------------------------------------------------------------------
-// Common functions.
-//------------------------------------------------------------------------
-
-void setGLContext (GLContext& glctx);
-void releaseGLContext (void);
-GLContext createGLContext (int cudaDeviceIdx);
-void destroyGLContext (GLContext& glctx);
-const char* getGLErrorString (GLenum err);
-
-//------------------------------------------------------------------------
diff --git a/spaces/hahahafofo/ChatGLM-Chinese-Summary/README.md b/spaces/hahahafofo/ChatGLM-Chinese-Summary/README.md
deleted file mode 100644
index 2f2a62017391803ab6ed3d1a37c45ae5b3b63c5f..0000000000000000000000000000000000000000
--- a/spaces/hahahafofo/ChatGLM-Chinese-Summary/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGLM Chinese Summary
-emoji: 😻
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.28.2
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hamidr-bd1/v3/README.md b/spaces/hamidr-bd1/v3/README.md
deleted file mode 100644
index 2a09c782ddc2ac9a34b1d0a7d7ec8242d77629cc..0000000000000000000000000000000000000000
--- a/spaces/hamidr-bd1/v3/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: V3
-emoji: 🐠
-colorFrom: gray
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/haoqi7/research/lrt_instance/instances.py b/spaces/haoqi7/research/lrt_instance/instances.py
deleted file mode 100644
index 7e85d3e8702c2c8e9280fa30deb6b32240b2b869..0000000000000000000000000000000000000000
--- a/spaces/haoqi7/research/lrt_instance/instances.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from lrt import LiteratureResearchTool
-from lrt.clustering.config import *
-
-baseline_lrt = LiteratureResearchTool()
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/cv2_util.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/cv2_util.py
deleted file mode 100644
index 0bbc0fb2d08337bfd8242cbedd514a41d8d7353f..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/cv2_util.py
+++ /dev/null
@@ -1,24 +0,0 @@
-"""
-Module for cv2 utility functions and maintaining version compatibility
-between 3.x and 4.x
-"""
-import cv2
-
-
-def findContours(*args, **kwargs):
- """
- Wraps cv2.findContours to maintain compatiblity between versions
- 3 and 4
-
- Returns:
- contours, hierarchy
- """
- if cv2.__version__.startswith('4'):
- contours, hierarchy = cv2.findContours(*args, **kwargs)
- elif cv2.__version__.startswith('3'):
- _, contours, hierarchy = cv2.findContours(*args, **kwargs)
- else:
- raise AssertionError(
- 'cv2 must be either version 3 or 4 to call this method')
-
- return contours, hierarchy
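A minimal usage sketch of the wrapper above, using a hypothetical binary mask; the call signature mirrors cv2.findContours, and only the return handling differs between OpenCV 3 and 4:

import cv2
import numpy as np

mask = np.zeros((64, 64), dtype=np.uint8)         # hypothetical binary mask
cv2.rectangle(mask, (16, 16), (48, 48), 255, -1)  # one filled square
contours, hierarchy = findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))  # 1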
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tests/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tests/__init__.py
deleted file mode 100644
index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tests/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
diff --git a/spaces/hishamomran/explicit_text_classifier/app.py b/spaces/hishamomran/explicit_text_classifier/app.py
deleted file mode 100644
index 5f2371343c3169dbe222d52663a5ee82987e8f8c..0000000000000000000000000000000000000000
--- a/spaces/hishamomran/explicit_text_classifier/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/michellejieli/inappropriate_text_classifier").launch()
\ No newline at end of file
diff --git a/spaces/hkayabilisim/LIME/app.py b/spaces/hkayabilisim/LIME/app.py
deleted file mode 100644
index ef4f4857545e413b729f99f086f7f1002416c127..0000000000000000000000000000000000000000
--- a/spaces/hkayabilisim/LIME/app.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import gradio as gd
-import numpy as np
-import os
-import torch
-import torchvision
-import torchvision.models as models
-from lime import lime_image
-import matplotlib.pyplot as plt
-import matplotlib
-import torch.nn.functional as F
-from skimage.segmentation import mark_boundaries
-from PIL import Image
-matplotlib.use('agg')
-
-def run_lime(input_image,
- model_name: str,
- top_labels: int,
- num_samples: int,
- num_features: int,
- batch_size: int):
- # input_image is a numpy array of shape (height, width, channels)
- # range is [0, 255]
- print('model_name', model_name)
- print('top_labels', top_labels)
- print('num_samples', num_samples)
- print('num_features', num_features)
- print('batch_size', batch_size)
- print('input image', type(input_image), input_image.shape)
-
- model, weights = fetch_model(model_name)
- preprocess = weights.transforms(antialias=True)
-
- input_image_processed = preprocess(torch.from_numpy(input_image.transpose(2,0,1))).unsqueeze(0)
- logits = model(input_image_processed)
- probs = F.softmax(logits, dim=1)
- names = weights.meta['categories']
-
- top_10_classes = []
- print('probs', type(probs), probs.shape)
- for x in probs.argsort(descending=True)[0][:10]:
- print(x.item(), names[x], probs[0,x].item())
- top_10_classes.append([x.item(), names[x], probs[0,x].item()])
-
- def classifier_fn(images):
- print('classifier_fn', type(images), images.shape)
-
- zz = preprocess(torch.from_numpy(images[0].transpose(2,0,1)))
- c, w, h = zz.shape
-        # size the batch by the number of images actually received, since the
-        # final chunk LIME passes in can be smaller than batch_size
-        batch = torch.zeros(len(images), c, w, h)
-        print('len(images)', len(images))
-        for i in range(len(images)):
-            batch[i] = preprocess(torch.from_numpy(images[i].transpose(2,0,1)))
- print('batch', type(batch), batch.shape)
-
- logits = model(batch)
- probs = F.softmax(logits, dim=1)
- print('probs', type(probs), probs.shape)
- return probs.detach().cpu().numpy()
-
- explainer = lime_image.LimeImageExplainer()
- explanation = explainer.explain_instance(
- input_image,
- classifier_fn,
- top_labels=top_labels,
- hide_color=0,
- num_samples=num_samples,
- num_features=num_features,
- batch_size=batch_size)
-
- temp, mask = explanation.get_image_and_mask(
- explanation.top_labels[0],
- positive_only=False, num_features=num_features, hide_rest=False)
- lime_output = mark_boundaries(temp/255.0, mask)
- return lime_output, top_10_classes
-
-def fetch_model_names():
- return models.list_models(module=torchvision.models)
-
-def fetch_model(model_name):
- print('Retrieving model ', model_name)
- weights_enum = models.get_model_weights(model_name)
- for w in weights_enum:
- if "IMAGENET1K" in w.name:
- weights = w
- model = models.get_model(model_name, weights=weights)
- print('Model weights loaded', w.name)
- return model, weights
- return None, None
-
-with gd.Blocks() as demo:
- with gd.Column():
- gd.Markdown(value='''
- # A simple GUI for LIME
- This is a simple GUI for Local Interpretable Model-agnostic Explanations (LIME). It allows you to run LIME on a variety of models and images. I've used the following resources to build this GUI:
- * [LIME](https://github.com/marcotcr/lime)
- * [LIME tutorial](https://github.com/marcotcr/lime/blob/master/tutorials/lime_image.ipynb)
- ''')
- with gd.Row():
- input_image = gd.Image(label="Input Image. Please upload an image that you want LIME to explain")
- with gd.Column():
- model_name = gd.Dropdown(label="Model",
- info='''
- Select the image classification model to use for LIME.
- The list is automatically populated by using torchvision library.
- ''',
- value='convnext_tiny',
- choices=fetch_model_names())
- top_labels = gd.Number(label='top_labels',info='''
- use the first labels to create explanations.
- For example, setting top_labels=5 will create explanations
- for the top 5 most likely classes.''',
- precision=0, value=5)
- num_samples = gd.Number(label="num_samples",
- info="How many samples to be created to build the linear model inside LIME",
- precision=0, value=100)
- with gd.Column():
- num_features = gd.Number(label="num_features",
- info='Among the most important superpixels (features), how many to be shown in the explanation image',
- precision=0, value=2)
- batch_size = gd.Number(label="batch_size",
- info='how many images in the samples to be processed at once',
- precision=0, value=20)
- run_button = gd.Button(label="Run")
- with gd.Row():
- top_10_classes = gd.DataFrame(label="Top 10 classes",
- info="Top-10 classes for the input image calculated by using the selected model",
- headers=["class_id","label","probability"],
- datatype=["number","str","number"])
- lime_output = gd.Image(label="Lime Explanation",
- info="The explanation image for the input image calculated by LIME for the selected model")
- gd.Examples(
- label="Some examples images and parameters",
- examples=[["jeep.png","convnext_tiny",5,100,2,20],
- ["IMG_0154.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0155.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0156.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0157.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0158.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0159.jpg","convnext_tiny",5,100,2,20],
- ["IMG_0160.jpg","convnext_tiny",5,100,2,20]],
- inputs=[input_image,model_name,top_labels,num_samples,num_features,batch_size])
-
- run_button.click(fn=run_lime,inputs=[input_image, model_name, top_labels,num_samples,num_features,batch_size],
- outputs=[lime_output,top_10_classes])
-
-if __name__ == "__main__":
- demo.launch()
-
\ No newline at end of file
diff --git a/spaces/hkunlp/Binder/nsql/qa_module/vqa.py b/spaces/hkunlp/Binder/nsql/qa_module/vqa.py
deleted file mode 100644
index 484bdd70612c390024063131cd6f5e1c637fb7f0..0000000000000000000000000000000000000000
--- a/spaces/hkunlp/Binder/nsql/qa_module/vqa.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import requests
-import base64
-import time
-
-
-def vqa_call(question, image_path, api_url='https://hf.space/embed/OFA-Sys/OFA-vqa/+/api/predict/'):
- with open(image_path, "rb") as f:
- base64_data = base64.b64encode(f.read())
- base64_data_to_send = "data:image/{};base64,{}".format(image_path.split(".")[-1], str(base64_data)[2:-1])
- return requests.post(url=api_url, json={"data": [base64_data_to_send, question]}).json()['data'][0]
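A minimal sketch of calling vqa_call, assuming the OFA-VQA Space endpoint above is reachable; the question and image path are placeholders:

answer = vqa_call("What color is the car?", "example_car.jpg")  # placeholder inputs
print(answer)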
diff --git a/spaces/huggan/sefa/app.py b/spaces/huggan/sefa/app.py
deleted file mode 100644
index 8d527f753cbdcefcc018b8577641035d63ba30de..0000000000000000000000000000000000000000
--- a/spaces/huggan/sefa/app.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# python 3.7
-"""Demo."""
-
-import numpy as np
-import torch
-import streamlit as st
-import SessionState
-
-from models import parse_gan_type
-from utils import to_tensor
-from utils import postprocess
-from utils import load_generator
-from utils import factorize_weight
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def get_model(model_name):
- """Gets model by name."""
- return load_generator(model_name, from_hf_hub=True)
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def factorize_model(model, layer_idx):
- """Factorizes semantics from target layers of the given model."""
- return factorize_weight(model, layer_idx)
-
-
-def sample(model, gan_type, num=1):
- """Samples latent codes."""
- codes = torch.randn(num, model.z_space_dim)
- if gan_type == 'pggan':
- codes = model.layer0.pixel_norm(codes)
- elif gan_type == 'stylegan':
- codes = model.mapping(codes)['w']
- codes = model.truncation(codes,
- trunc_psi=0.7,
- trunc_layers=8)
- elif gan_type == 'stylegan2':
- codes = model.mapping(codes)['w']
- codes = model.truncation(codes,
- trunc_psi=0.5,
- trunc_layers=18)
- codes = codes.detach().cpu().numpy()
- return codes
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def synthesize(model, gan_type, code):
- """Synthesizes an image with the give code."""
- if gan_type == 'pggan':
- image = model(to_tensor(code))['image']
- elif gan_type in ['stylegan', 'stylegan2']:
- image = model.synthesis(to_tensor(code))['image']
- image = postprocess(image)[0]
- return image
-
-def _update_slider():
- num_semantics = st.session_state["num_semantics"]
- for sem_idx in range(num_semantics):
- st.session_state[f"semantic_slider_{sem_idx}"] = 0
-
-
-"""Main function (loop for StreamLit)."""
-st.title('Closed-Form Factorization of Latent Semantics in GANs')
-st.markdown("This space is the ported version of [Closed-Form Factorization of Latent Semantics in GANs](https://github.com/genforce/sefa). It reads all sample models from the Hugging Face Hub")
-st.markdown("---")
-
-st.sidebar.title('Options')
-st.sidebar.button('Reset', on_click=_update_slider, kwargs={})
-
-model_name = st.sidebar.selectbox(
- 'Model to Interpret',
- ['pggan_celebahq1024', 'stylegan_animeface512', 'stylegan_car512', 'stylegan_cat256'])
-
-model = get_model(model_name)
-gan_type = parse_gan_type(model)
-layer_idx = st.sidebar.selectbox(
- 'Layers to Interpret',
- ['all', '0-1', '2-5', '6-13'])
-layers, boundaries, eigen_values = factorize_model(model, layer_idx)
-
-num_semantics = st.sidebar.number_input(
- 'Number of semantics', value=5, min_value=0, max_value=None, step=1, key="num_semantics")
-steps = {sem_idx: 0 for sem_idx in range(num_semantics)}
-if gan_type == 'pggan':
- max_step = 5.0
-elif gan_type == 'stylegan':
- max_step = 2.0
-elif gan_type == 'stylegan2':
- max_step = 15.0
-for sem_idx in steps:
- eigen_value = eigen_values[sem_idx]
- steps[sem_idx] = st.sidebar.slider(
- f'Semantic {sem_idx:03d} (eigen value: {eigen_value:.3f})',
- value=0.0,
- min_value=-max_step,
- max_value=max_step,
- step=0.04 * max_step,
- key=f"semantic_slider_{sem_idx}")
-
-image_placeholder = st.empty()
-button_placeholder = st.empty()
-button_totally_random = st.empty()
-
-try:
- base_codes = np.load(f'latent_codes/{model_name}_latents.npy')
-except FileNotFoundError:
- base_codes = sample(model, gan_type)
-
-state = SessionState.get(model_name=model_name,
- code_idx=0,
- codes=base_codes[0:1])
-if state.model_name != model_name:
- state.model_name = model_name
- state.code_idx = 0
- state.codes = base_codes[0:1]
-
-if button_placeholder.button('Next Sample'):
- state.code_idx += 1
- if state.code_idx < base_codes.shape[0]:
- state.codes = base_codes[state.code_idx][np.newaxis]
- else:
- state.codes = sample(model, gan_type)
-
-if button_totally_random.button('Totally Random'):
- state.codes = sample(model, gan_type)
-
-code = state.codes.copy()
-for sem_idx, step in steps.items():
- if gan_type == 'pggan':
- code += boundaries[sem_idx:sem_idx + 1] * step
- elif gan_type in ['stylegan', 'stylegan2']:
- code[:, layers, :] += boundaries[sem_idx:sem_idx + 1] * step
-image = synthesize(model, gan_type, code)
-image_placeholder.image(image / 255.0)
-
-st.markdown("---")
-st.markdown("""This space was created by [johko](https://twitter.com/johko990). Main credits go to the original authors Yujun Shen and Bolei Zhou, who created a great code base to work on.
- This version loads all models from the Hugging Face Hub.""")
diff --git a/spaces/huggingchat/chat-ui/src/lib/server/websearch/generateQuery.ts b/spaces/huggingchat/chat-ui/src/lib/server/websearch/generateQuery.ts
deleted file mode 100644
index bfa79bb31e66783cc68d5593a90f7580fd4d3117..0000000000000000000000000000000000000000
--- a/spaces/huggingchat/chat-ui/src/lib/server/websearch/generateQuery.ts
+++ /dev/null
@@ -1,71 +0,0 @@
-import type { Message } from "$lib/types/Message";
-import { format } from "date-fns";
-import { generateFromDefaultEndpoint } from "../generateFromDefaultEndpoint";
-import { smallModel } from "../models";
-
-export async function generateQuery(messages: Message[]) {
- const currentDate = format(new Date(), "MMMM d, yyyy");
- const userMessages = messages.filter(({ from }) => from === "user");
- const previousUserMessages = userMessages.slice(0, -1);
-
- const lastMessage = userMessages.slice(-1)[0];
-
-	const convQuery: Array<Omit<Message, "id">> = [
- {
- from: "user",
- content: `Previous Questions:
-- Who is the president of France?
-
-Current Question: What about Mexico?
-`,
- },
- {
- from: "assistant",
- content: "President of Mexico",
- },
- {
- from: "user",
- content: `Previous questions:
-- When is the next formula 1 grand prix?
-
-Current Question: Where is it being hosted ?`,
- },
- {
- from: "assistant",
- content: "location of next formula 1 grand prix",
- },
- {
- from: "user",
- content: "Current Question: What type of printhead does the Epson F2270 DTG printer use?",
- },
- {
- from: "assistant",
- content: "Epson F2270 DTG printer printhead",
- },
- { from: "user", content: "What were the news yesterday ?" },
- {
- from: "assistant",
- content: `news ${format(new Date(Date.now() - 864e5), "MMMM d, yyyy")}`,
- },
- { from: "user", content: "What is the current weather in Paris ?" },
- { from: "assistant", content: `weather in Paris ${currentDate}` },
- {
- from: "user",
- content:
- (previousUserMessages.length > 0
- ? `Previous questions: \n${previousUserMessages
- .map(({ content }) => `- ${content}`)
- .join("\n")}`
- : "") +
- "\n\nCurrent Question:" +
- lastMessage.content,
- },
- ];
-
- const promptQuery = smallModel.chatPromptRender({
- preprompt: `You are tasked with generating web search queries. Give me an appropriate query to answer my question for google search. Answer with only the query. Today is ${currentDate}`,
- messages: convQuery,
- });
-
- return await generateFromDefaultEndpoint(promptQuery);
-}
diff --git a/spaces/hussain-shk/IndiSent/app.py b/spaces/hussain-shk/IndiSent/app.py
deleted file mode 100644
index 8f2f8a3d49daeb68337110a4d5a1b6736698f13f..0000000000000000000000000000000000000000
--- a/spaces/hussain-shk/IndiSent/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import gradio as gr
-
-download="wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1-hzy09qi-OEogyge7rQG79K7iV4xsNWa' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\\n/p')&id=1-hzy09qi-OEogyge7rQG79K7iV4xsNWa\" -O indic-en.zip && rm -rf /tmp/cookies.txt"
-os.system(download)
-os.system('unzip /home/user/app/indic-en.zip')
-
-from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
-from inference.engine import Model
-indic2en_model = Model(expdir='/home/user/app/indic-en')
-
-INDIC = {"Assamese": "as", "Bengali": "bn", "Gujarati": "gu", "Hindi": "hi","Kannada": "kn","Malayalam": "ml", "Marathi": "mr", "Odia": "or","Punjabi": "pa","Tamil": "ta", "Telugu" : "te"}
-
-
-def translate(text, lang):
- return indic2en_model.translate_paragraph(text, INDIC[lang], 'en')
-
-
-from transformers import pipeline
-import gradio as gr
-roberta_pipe = pipeline(
- "sentiment-analysis",
- model="siebert/sentiment-roberta-large-english",
- tokenizer="siebert/sentiment-roberta-large-english",
- return_all_scores = True
-)
-
-def analyse_sentiment(text, source):
- if source != "English":
- text = translate(text, source)
- response = roberta_pipe(text)
- d = {}
- for i in response[0]:
- d[i['label'].lower()] = i['score']
- return d
-
-languages = ["Assamese", "Bengali", "Gujarati", "Hindi", "Kannada","Malayalam", "Marathi", "Odia", "Punjabi", "Tamil", "Telugu", "English"]
-
-input_text = gr.Textbox(placeholder="Enter a positive or negative sentence here...")
-drop_down = gr.inputs.Dropdown(languages, type="value", default="English", label="Select Source Language")
-
-examples = [["this book was a great book that i have read many times", "English"],
- ["एक महान अमेरिकी लेखक का एक आकर्षक संग्रह" , "Hindi"],
- ["हा आतापर्यंतचा सर्वात वाईट चित्रपट आहे यात शंका नाही", "Marathi"],
- ["இந்த தயாரிப்பு ஆச்சரியமாக இருக்கிறது", "Tamil"],
- ["તમારા માટે નહીં જો તમે વિના અવરોધે વીડિયો શોધી રહ્યા છો", "Gujarati"],]
-
-demo = gr.Interface(
- enable_queue=True,
- fn=analyse_sentiment,
- inputs=[input_text, drop_down],
- outputs="label",
- interpretation="default",
- title='IndiSent: Multilingual Sentiment Analysis',
- examples=examples)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/hysts/BLIP2-with-transformers/style.css b/spaces/hysts/BLIP2-with-transformers/style.css
deleted file mode 100644
index 86ce68e49778375ebf5b12dc3baaccf931570b54..0000000000000000000000000000000000000000
--- a/spaces/hysts/BLIP2-with-transformers/style.css
+++ /dev/null
@@ -1,16 +0,0 @@
-h1 {
- text-align: center;
-}
-
-#duplicate-button {
- margin: auto;
- color: #fff;
- background: #1565c0;
- border-radius: 100vh;
-}
-
-#component-0 {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
-}
diff --git a/spaces/hysts/stylegan3-food101/app.py b/spaces/hysts/stylegan3-food101/app.py
deleted file mode 100644
index 87bd65bec391c56a8755123581d3c7f5c2b4e44e..0000000000000000000000000000000000000000
--- a/spaces/hysts/stylegan3-food101/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import functools
-import os
-import pickle
-import sys
-
-import gradio as gr
-import numpy as np
-import torch
-import torch.nn as nn
-from huggingface_hub import hf_hub_download
-
-sys.path.insert(0, 'stylegan3')
-
-TITLE = 'StyleGAN3 Food Image Generation'
-
-MODEL_REPO = 'hysts/stylegan3-food101-model'
-MODEL_FILE_NAME = '010000.pkl'
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def make_transform(translate: tuple[float, float], angle: float) -> np.ndarray:
- mat = np.eye(3)
- sin = np.sin(angle / 360 * np.pi * 2)
- cos = np.cos(angle / 360 * np.pi * 2)
- mat[0][0] = cos
- mat[0][1] = sin
- mat[0][2] = translate[0]
- mat[1][0] = -sin
- mat[1][1] = cos
- mat[1][2] = translate[1]
- return mat
-
-
-def generate_z(seed: int, device: torch.device) -> torch.Tensor:
- return torch.from_numpy(np.random.RandomState(seed).randn(1,
- 512)).to(device)
-
-
-@torch.inference_mode()
-def generate_image(seed: int, truncation_psi: float, tx: float, ty: float,
- angle: float, model: nn.Module,
- device: torch.device) -> np.ndarray:
- seed = int(np.clip(seed, 0, np.iinfo(np.uint32).max))
- z = generate_z(seed, device)
- c = torch.zeros(0).to(device)
-
- mat = make_transform((tx, ty), angle)
- mat = np.linalg.inv(mat)
- model.synthesis.input.transform.copy_(torch.from_numpy(mat))
-
- out = model(z, c, truncation_psi=truncation_psi)
- out = (out.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- return out[0].cpu().numpy()
-
-
-def load_model(device: torch.device) -> nn.Module:
- path = hf_hub_download(MODEL_REPO,
- MODEL_FILE_NAME,
- use_auth_token=HF_TOKEN)
- with open(path, 'rb') as f:
- model = pickle.load(f)
- model.eval()
- model.to(device)
- with torch.inference_mode():
- z = torch.zeros((1, 512)).to(device)
- c = torch.zeros(0).to(device)
- model(z, c)
- return model
-
-
-device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
-model = load_model(device)
-func = functools.partial(generate_image, model=model, device=device)
-
-gr.Interface(
- fn=func,
- inputs=[
- gr.Slider(label='Seed',
- minimum=0,
- maximum=np.iinfo(np.uint32).max,
- step=1,
- value=1424059097),
- gr.Slider(label='Truncation psi',
- minimum=0,
- maximum=2,
- step=0.05,
- value=0.7),
- gr.Slider(label='Translate X',
- minimum=-1,
- maximum=1,
- step=0.05,
- value=0),
- gr.Slider(label='Translate Y',
- minimum=-1,
- maximum=1,
- step=0.05,
- value=0),
- gr.Slider(label='Angle', minimum=-180, maximum=180, step=5, value=0),
- ],
- outputs=gr.Image(label='Output', type='numpy'),
- title=TITLE,
-).queue().launch(show_api=False)
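For reference, make_transform defined above composes an in-plane rotation with a translation. With zero translation and a 90-degree angle, sin is 1 and cos is 0, so the matrix reduces to a pure rotation; a small illustrative check, run in the same module so make_transform is in scope:

import numpy as np

mat = make_transform((0.0, 0.0), 90.0)
print(np.round(mat, 3))
# [[ 0.  1.  0.]
#  [-1.  0.  0.]
#  [ 0.  0.  1.]]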
diff --git a/spaces/iamironman4279/SadTalker/Dockerfile b/spaces/iamironman4279/SadTalker/Dockerfile
deleted file mode 100644
index 5ddc6e3d8b246534a58f9612a88b309fa7e10795..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/Dockerfile
+++ /dev/null
@@ -1,59 +0,0 @@
-FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
-ENV DEBIAN_FRONTEND=noninteractive
-RUN apt-get update && \
- apt-get upgrade -y && \
- apt-get install -y --no-install-recommends \
- git \
- zip \
- unzip \
- git-lfs \
- wget \
- curl \
- # ffmpeg \
- ffmpeg \
- x264 \
- # python build dependencies \
- build-essential \
- libssl-dev \
- zlib1g-dev \
- libbz2-dev \
- libreadline-dev \
- libsqlite3-dev \
- libncursesw5-dev \
- xz-utils \
- tk-dev \
- libxml2-dev \
- libxmlsec1-dev \
- libffi-dev \
- liblzma-dev && \
- apt-get clean && \
- rm -rf /var/lib/apt/lists/*
-
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:${PATH}
-WORKDIR ${HOME}/app
-
-RUN curl https://pyenv.run | bash
-ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH}
-ENV PYTHON_VERSION=3.10.9
-RUN pyenv install ${PYTHON_VERSION} && \
- pyenv global ${PYTHON_VERSION} && \
- pyenv rehash && \
- pip install --no-cache-dir -U pip setuptools wheel
-
-RUN pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1
-COPY --chown=1000 requirements.txt /tmp/requirements.txt
-RUN pip install --no-cache-dir -U -r /tmp/requirements.txt
-
-COPY --chown=1000 . ${HOME}/app
-RUN ls -a
-ENV PYTHONPATH=${HOME}/app \
- PYTHONUNBUFFERED=1 \
- GRADIO_ALLOW_FLAGGING=never \
- GRADIO_NUM_PORTS=1 \
- GRADIO_SERVER_NAME=0.0.0.0 \
- GRADIO_THEME=huggingface \
- SYSTEM=spaces
-CMD ["python", "app.py"]
\ No newline at end of file
diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py b/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py
deleted file mode 100644
index 5f78337a3d1f9eb6e9145eb5093618796c6842d2..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "r34"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 25
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
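The file above is a plain EasyDict, so downstream training code can read every field as an attribute. A minimal sketch of how such a config might be consumed (the import path and loading mechanism are assumptions, shown for illustration only):

# illustrative import path; the training scripts load configs by module name
from configs.ms1mv3_r34 import config as cfg

print(cfg.network, cfg.batch_size, cfg.lr)  # r34 128 0.1
print(cfg.val_targets)                      # ['lfw', 'cfp_fp', 'agedb_30']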
diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/components/source.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/components/source.py
deleted file mode 100644
index 29b77715b0648d49761a466bb9374dd7c32c4150..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/uis/components/source.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from typing import Any, IO, Optional
-import gradio
-
-import DeepFakeAI.globals
-from DeepFakeAI import wording
-from DeepFakeAI.uis import core as ui
-from DeepFakeAI.uis.typing import Update
-from DeepFakeAI.utilities import is_image
-
-SOURCE_FILE : Optional[gradio.File] = None
-SOURCE_IMAGE : Optional[gradio.Image] = None
-
-
-def render() -> None:
- global SOURCE_FILE
- global SOURCE_IMAGE
-
- with gradio.Box():
- is_source_image = is_image(DeepFakeAI.globals.source_path)
- SOURCE_FILE = gradio.File(
- file_count = 'single',
- file_types=
- [
- '.png',
- '.jpg',
- '.webp'
- ],
- label = wording.get('source_file_label'),
- value = DeepFakeAI.globals.source_path if is_source_image else None
- )
- ui.register_component('source_file', SOURCE_FILE)
- SOURCE_IMAGE = gradio.Image(
- value = SOURCE_FILE.value['name'] if is_source_image else None,
- visible = is_source_image,
- show_label = False
- )
-
-
-def listen() -> None:
- SOURCE_FILE.change(update, inputs = SOURCE_FILE, outputs = SOURCE_IMAGE)
-
-
-def update(file: IO[Any]) -> Update:
- if file and is_image(file.name):
- DeepFakeAI.globals.source_path = file.name
- return gradio.update(value = file.name, visible = True)
- DeepFakeAI.globals.source_path = None
- return gradio.update(value = None, visible = False)
diff --git a/spaces/inamXcontru/PoeticTTS/Crack Para Jugar Online Halo Combat Evolved Pc Disfruta de la Campaa y el Multijugador sin Problemas.md b/spaces/inamXcontru/PoeticTTS/Crack Para Jugar Online Halo Combat Evolved Pc Disfruta de la Campaa y el Multijugador sin Problemas.md
deleted file mode 100644
index 886b0f65970bff64a3876abac9657760fdf0e87e..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Crack Para Jugar Online Halo Combat Evolved Pc Disfruta de la Campaa y el Multijugador sin Problemas.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
No olvides, descargar Halo 4 para PC en forma de fondo de pantalla con el que decorar el escritorio de tu ordenador. Halo CE Juego para PC Full en Español de Pocos Recursos para Descargar en 1 Link Online Halo CE Descargar Esta version de Halo Custom Edition es una version compacta del juego original Halo Combat Evolved, fue creada por la comunidad de halo para jugar contenido nuevo creado como nuevos mapas, armas y vehiculos. Vive la experiencia HALO con Halo Reach, la primer entrega de Halo: The Master Chief Collection. Hoy traemos la descarga gratuita de Halo 2 para pc en 1 link, un juego en primera persona, de disparos, desarrollado por Bungie Studios.
-
Sistema Operativo: Windows 2000/XP/Vista/7 Procesador: Pentium 733 MHz o Superior RAM: 128 MB Disco Duro: 1.2 GB Tarjeta de vídeo: 32/64 MB T&L Un módem de 56.6 Kbps para jugar online
-
-Scarica [PDF/EPUB] ALLENAMENTO: 3 ... Compra l'eBook ALLENAMENTO: 3 LIBRI IN 1: ... ottimale (brossura) di Jürgen Weineck - Calzetti. 4d29de3e1b
-
-
-
diff --git a/spaces/isabel/climate-change-project/reader.py b/spaces/isabel/climate-change-project/reader.py
deleted file mode 100644
index 25d2e120f875e6253fd0eab9832cd39bbc85507a..0000000000000000000000000000000000000000
--- a/spaces/isabel/climate-change-project/reader.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import os
-from yattag import Doc
-## --------------------------------- ###
-### reading: info.txt ###
-### -------------------------------- ###
-# placeholders in case info.txt does not exist
-def get_article(acc, most_imp_feat):
- filename = "info.txt"
- placeholder = "please create an info.txt to customize this text"
- note = "**Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. An accuracy of 50% means that half of the model's predictions for that dataset were accurate. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world."
-
- title = bkgd = data_collection = priv_cons = bias_cons = img_src = membs = description = placeholder
- # check if info.txt is present
- if os.path.isfile(filename):
- # open info.txt in read mode
- info = open(filename, "r")
-
- # read each line to a string
- description = "An AI project created by " + info.readline()
- title = info.readline()
- bkgd = info.readline()
- data_collection = info.readline()
- priv_cons = info.readline()
- bias_cons = info.readline()
- img_src = info.readline()
- membs = info.readline()
-
- # close file
- info.close()
-
- # use yattag library to generate html
- doc, tag, text, line = Doc().ttl()
- # create html based on info.txt
- with tag('div'):
- with tag('div', klass='box model-container'):
- with tag('div', klass='spacer'):
- with tag('div', klass='box model-div'):
- line('h2', "Model Accuracy", klass='acc')
- line('p', acc)
- with tag('div', klass='box model-div'):
- line('h2', "Most Important Feature", klass='feat')
- line('p', most_imp_feat)
- with tag('div', klass='spacer'):
- line('p', note)
- with tag('div', klass='box'):
- line('h2', 'Problem Statement and Research Summary', klass='prj')
- line('p', bkgd)
- with tag('div', klass='box'):
- line('h2', 'Data Collection Plan', klass='data')
- line('p', data_collection)
- with tag('div', klass='box'):
- line('h2', 'Ethical Considerations (Data Privacy and Bias)', klass='ethics')
- with tag('ul'):
- line('li', priv_cons)
- line('li', bias_cons)
- with tag('div', klass='box'):
- line('h2', 'Our Team', klass='team')
- line('p', membs)
- doc.stag('img', src=img_src)
-
- css = 'app.css'
- return {
- 'article': doc.getvalue(),
- 'css': css,
- 'title': title,
- 'description': description,
- }
\ No newline at end of file
diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/upscaling.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/upscaling.py
deleted file mode 100644
index b5b841722b936135bf1f65e86e0c26d7084053b2..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/upscaling.py
+++ /dev/null
@@ -1,264 +0,0 @@
-import os
-import numpy as np
-import cv2
-from pathlib import Path
-from tqdm import tqdm
-from PIL import Image
-from modules.scripts_postprocessing import PostprocessedImage
-from modules import devices
-import shutil
-from queue import Queue, Empty
-import modules.scripts as scr
-from .frame_interpolation import clean_folder_name
-from .general_utils import duplicate_pngs_from_folder, checksum
-# TODO: move some funcs to this file?
-from .video_audio_utilities import get_quick_vid_info, vid2frames, ffmpeg_stitch_video, extract_number, media_file_has_audio
-from basicsr.utils.download_util import load_file_from_url
-from .rich import console
-import time
-import subprocess
-
-def process_upscale_vid_upload_logic(file, selected_tab, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, vid_file_name, keep_imgs, f_location, f_crf, f_preset):
- print("Got a request to *upscale* an existing video.")
-
- in_vid_fps, _, _ = get_quick_vid_info(file.name)
- folder_name = clean_folder_name(Path(vid_file_name).stem)
- outdir_no_tmp = os.path.join(os.getcwd(), 'outputs', 'frame-upscaling', folder_name)
- i = 1
- while os.path.exists(outdir_no_tmp):
- outdir_no_tmp = os.path.join(os.getcwd(), 'outputs', 'frame-upscaling', folder_name + '_' + str(i))
- i += 1
-
- outdir = os.path.join(outdir_no_tmp, 'tmp_input_frames')
- os.makedirs(outdir, exist_ok=True)
-
- vid2frames(video_path=file.name, video_in_frame_path=outdir, overwrite=True, extract_from_frame=0, extract_to_frame=-1, numeric_files_output=True, out_img_format='png')
-
- process_video_upscaling(selected_tab, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, orig_vid_fps=in_vid_fps, real_audio_track=file.name, raw_output_imgs_path=outdir, img_batch_id=None, ffmpeg_location=f_location, ffmpeg_crf=f_crf, ffmpeg_preset=f_preset, keep_upscale_imgs=keep_imgs, orig_vid_name=folder_name)
-
-def process_video_upscaling(resize_mode, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, orig_vid_fps, real_audio_track, raw_output_imgs_path, img_batch_id, ffmpeg_location, ffmpeg_crf, ffmpeg_preset, keep_upscale_imgs, orig_vid_name):
- devices.torch_gc()
-
- print("Upscaling progress (it's OK if it finishes before 100%):")
-
- upscaled_path = os.path.join(raw_output_imgs_path, 'upscaled_frames')
- if orig_vid_name is not None: # upscaling a video (deforum or unrelated)
- custom_upscale_path = "{}_{}".format(upscaled_path, orig_vid_name)
- else: # upscaling after a deforum run:
- custom_upscale_path = "{}_{}".format(upscaled_path, img_batch_id)
-
- temp_convert_raw_png_path = os.path.join(raw_output_imgs_path, "tmp_upscale_folder")
- duplicate_pngs_from_folder(raw_output_imgs_path, temp_convert_raw_png_path, img_batch_id, orig_vid_name)
-
- videogen = []
- for f in os.listdir(temp_convert_raw_png_path):
- # double check for old _depth_ files, not really needed probably but keeping it for now
- if '_depth_' not in f:
- videogen.append(f)
-
- videogen.sort(key= lambda x:int(x.split('.')[0]))
- vid_out = None
-
- if not os.path.exists(custom_upscale_path):
- os.mkdir(custom_upscale_path)
-
- # Upscaling is a slow and demanding operation, so we don't need as much parallelization here
- for i in tqdm(range(len(videogen)), desc="Upscaling"):
- lastframe = videogen[i]
- img_path = os.path.join(temp_convert_raw_png_path, lastframe)
- image = process_frame(resize_mode, Image.open(img_path).convert("RGB"), upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility)
- filename = '{}/{:0>7d}.png'.format(custom_upscale_path, i)
- image.save(filename)
-
- shutil.rmtree(temp_convert_raw_png_path)
- # stitch video from upscaled frames, and add audio if needed
- try:
- print (f"*Passing upsc frames to ffmpeg...*")
- vid_out_path = stitch_video(img_batch_id, orig_vid_fps, custom_upscale_path, real_audio_track, ffmpeg_location, resize_mode, upscaling_resize, upscaling_resize_w, upscaling_resize_h, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, ffmpeg_crf, ffmpeg_preset, keep_upscale_imgs, orig_vid_name)
- # remove folder with raw (non-upscaled) vid input frames in case of input VID and not PNGs
- if orig_vid_name is not None:
- shutil.rmtree(raw_output_imgs_path)
- except Exception as e:
- print(f'Video stitching gone wrong. *Upscaled frames were saved to HD as backup!*. Actual error: {e}')
-
- devices.torch_gc()
-
-def process_frame(resize_mode, image, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility):
- pp = PostprocessedImage(image)
- postproc = scr.scripts_postproc
- upscaler_script = next(s for s in postproc.scripts if s.name == "Upscale")
- upscaler_script.process(pp, resize_mode, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility)
- return pp.image
-
-def stitch_video(img_batch_id, fps, img_folder_path, audio_path, ffmpeg_location, resize_mode, upscaling_resize, upscaling_resize_w, upscaling_resize_h, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, f_crf, f_preset, keep_imgs, orig_vid_name):
- parent_folder = os.path.dirname(img_folder_path)
- grandparent_folder = os.path.dirname(parent_folder)
- if orig_vid_name is not None:
- mp4_path = os.path.join(grandparent_folder, str(orig_vid_name) +'_upscaled_' + (('by_' + str(upscaling_resize).replace('.', '-')) if resize_mode == 0 else f"to_{upscaling_resize_w}_{upscaling_resize_h}")) + f"_with_{extras_upscaler_1}" + (f"_then_{extras_upscaler_2}" if extras_upscaler_2_visibility > 0 else "")
- else:
- mp4_path = os.path.join(parent_folder, str(img_batch_id) +'_upscaled_' + (('by_' + str(upscaling_resize).replace('.', '-')) if resize_mode == 0 else f"to_{upscaling_resize_w}_{upscaling_resize_h}")) + f"_with_{extras_upscaler_1}_then_{extras_upscaler_2}"
-
- mp4_path = mp4_path + '.mp4'
-
- t = os.path.join(img_folder_path, "%07d.png")
- add_soundtrack = 'None'
- if not audio_path is None:
- add_soundtrack = 'File'
-
- exception_raised = False
- try:
- ffmpeg_stitch_video(ffmpeg_location=ffmpeg_location, fps=fps, outmp4_path=mp4_path, stitch_from_frame=0, stitch_to_frame=1000000, imgs_path=t, add_soundtrack=add_soundtrack, audio_path=audio_path, crf=f_crf, preset=f_preset)
- except Exception as e:
- exception_raised = True
- print(f"An error occurred while stitching the video: {e}")
-
- if not exception_raised and not keep_imgs:
- shutil.rmtree(img_folder_path)
-
- if (keep_imgs and orig_vid_name is not None) or (orig_vid_name is not None and exception_raised is True):
- shutil.move(img_folder_path, grandparent_folder)
-
- return mp4_path
-
-# NCNN Upscale section START
-def process_ncnn_upscale_vid_upload_logic(vid_path, in_vid_fps, in_vid_res, out_vid_res, models_path, upscale_model, upscale_factor, keep_imgs, f_location, f_crf, f_preset, current_user_os):
- print(f"Got a request to *upscale* a video using {upscale_model} at {upscale_factor}")
-
- folder_name = clean_folder_name(Path(vid_path.orig_name).stem)
- outdir_no_tmp = os.path.join(os.getcwd(), 'outputs', 'frame-upscaling', folder_name)
- i = 1
- while os.path.exists(outdir_no_tmp):
- outdir_no_tmp = os.path.join(os.getcwd(), 'outputs', 'frame-upscaling', folder_name + '_' + str(i))
- i += 1
-
- outdir = os.path.join(outdir_no_tmp, 'tmp_input_frames')
- os.makedirs(outdir, exist_ok=True)
-
- vid2frames(video_path=vid_path.name, video_in_frame_path=outdir, overwrite=True, extract_from_frame=0, extract_to_frame=-1, numeric_files_output=True, out_img_format='png')
-
- process_ncnn_video_upscaling(vid_path, outdir, in_vid_fps, in_vid_res, out_vid_res, models_path, upscale_model, upscale_factor, keep_imgs, f_location, f_crf, f_preset, current_user_os)
-
-def process_ncnn_video_upscaling(vid_path, outdir, in_vid_fps, in_vid_res, out_vid_res, models_path, upscale_model, upscale_factor, keep_imgs, f_location, f_crf, f_preset, current_user_os):
- # get clean number from 'x2, x3' etc
- clean_num_r_up_factor = extract_number(upscale_factor)
- # set paths
- realesrgan_ncnn_location = os.path.join(models_path, 'realesrgan_ncnn', 'realesrgan-ncnn-vulkan' + ('.exe' if current_user_os == 'Windows' else ''))
- upscaled_folder_path = os.path.join(os.path.dirname(outdir), 'Upscaled_frames')
- # create folder for upscaled imgs to live in. this folder will stay alive if keep_imgs=True, otherwise get deleted at the end
- os.makedirs(upscaled_folder_path, exist_ok=True)
- out_upscaled_mp4_path = os.path.join(os.path.dirname(outdir), f"{vid_path.orig_name}_Upscaled_{upscale_factor}.mp4")
- # download upscaling model if needed
- check_and_download_realesrgan_ncnn(models_path, current_user_os)
- # set cmd command
- cmd = [realesrgan_ncnn_location, '-i', outdir, '-o', upscaled_folder_path, '-s', str(clean_num_r_up_factor), '-n', upscale_model]
- # msg to print - need it to hide that text later on (!)
- msg_to_print = f"Upscaling raw PNGs using {upscale_model} at {upscale_factor}..."
- # blink the msg in the cli until action is done
- console.print(msg_to_print, style="blink yellow", end="")
- start_time = time.time()
- # make call to ncnn upscaling executble
- process = subprocess.run(cmd, capture_output=True, check=True, text=True)
- print("\r" + " " * len(msg_to_print), end="", flush=True)
- print(f"\r{msg_to_print}", flush=True)
- print(f"\rUpscaling \033[0;32mdone\033[0m in {time.time() - start_time:.2f} seconds!", flush=True)
- # set custom path for ffmpeg func below
- upscaled_imgs_path_for_ffmpeg = os.path.join(upscaled_folder_path, "%05d.png")
- add_soundtrack = 'None'
- # don't pass add_soundtrack to ffmpeg if orig video doesn't contain any audio, so we won't get a message saying audio couldn't be added :)
- if media_file_has_audio(vid_path.name, f_location):
- add_soundtrack = 'File'
- # stitch video from upscaled pngs
- ffmpeg_stitch_video(ffmpeg_location=f_location, fps=in_vid_fps, outmp4_path=out_upscaled_mp4_path, stitch_from_frame=0, stitch_to_frame=-1, imgs_path=upscaled_imgs_path_for_ffmpeg, add_soundtrack=add_soundtrack, audio_path=vid_path.name, crf=f_crf, preset=f_preset)
- # delete the raw video pngs
- shutil.rmtree(outdir)
- # delete upscaled imgs if user requested
- if not keep_imgs:
- shutil.rmtree(upscaled_folder_path)
-
-def check_and_download_realesrgan_ncnn(models_folder, current_user_os):
- import zipfile
- if current_user_os == 'Windows':
- zip_file_name = 'realesrgan-ncnn-windows.zip'
- executble_name = 'realesrgan-ncnn-vulkan.exe'
- zip_checksum_value = '1d073f520a4a3f6438a500fea88407964da6d4a87489719bedfa7445b76c019fdd95a5c39576ca190d7ac22c906b33d5250a6f48cb7eda2b6af3e86ec5f09dfc'
- download_url = 'https://github.com/hithereai/Real-ESRGAN/releases/download/real-esrgan-ncnn-windows/realesrgan-ncnn-windows.zip'
- elif current_user_os == 'Linux':
- zip_file_name = 'realesrgan-ncnn-linux.zip'
- executble_name = 'realesrgan-ncnn-vulkan'
- zip_checksum_value = 'df44c4e9a1ff66331079795f018a67fbad8ce37c4472929a56b5a38440cf96982d6e164a086b438c3d26d269025290dd6498bd50846bda8691521ecf8f0fafdf'
- download_url = 'https://github.com/hithereai/Real-ESRGAN/releases/download/real-esrgan-ncnn-linux/realesrgan-ncnn-linux.zip'
- elif current_user_os == 'Mac':
- zip_file_name = 'realesrgan-ncnn-mac.zip'
- executble_name = 'realesrgan-ncnn-vulkan'
- zip_checksum_value = '65f09472025b55b18cf6ba64149ede8cded90c20e18d35a9edb1ab60715b383a6ffbf1be90d973fc2075cf99d4cc1411fbdc459411af5c904f544b8656111469'
- download_url = 'https://github.com/hithereai/Real-ESRGAN/releases/download/real-esrgan-ncnn-mac/realesrgan-ncnn-mac.zip'
- else: # who are you then?
- raise Exception(f"No support for OS type: {current_user_os}")
-
- # set paths
- realesrgan_ncnn_folder = os.path.join(models_folder, 'realesrgan_ncnn')
- realesrgan_exec_path = os.path.join(realesrgan_ncnn_folder, executble_name)
- realesrgan_zip_path = os.path.join(realesrgan_ncnn_folder, zip_file_name)
- # return if exec file already exist
- if os.path.exists(realesrgan_exec_path):
- return
- try:
- os.makedirs(realesrgan_ncnn_folder, exist_ok=True)
- # download exec and model files from url
- load_file_from_url(download_url, realesrgan_ncnn_folder)
- # check downloaded zip's hash
- file_hash = checksum(realesrgan_zip_path)
- # wrong hash, file is probably broken/ download interrupted
- if file_hash != zip_checksum_value:
- raise Exception(f"Error while downloading {realesrgan_zip_path}. Please download from: {download_url}, and extract its contents into: {models_folder}/realesrgan_ncnn")
- # hash ok, extract zip contents into our folder
- with zipfile.ZipFile(realesrgan_zip_path, 'r') as zip_ref:
- zip_ref.extractall(realesrgan_ncnn_folder)
- # delete the zip file
- os.remove(realesrgan_zip_path)
- # chmod 755 the exec on Linux and macOS machines, otherwise we'd get permission errors
- if current_user_os in ('Linux', 'Mac'):
- os.chmod(realesrgan_exec_path, 0o755)
- # enable running the exec for mac users
- if current_user_os == 'Mac':
- os.system(f'xattr -d com.apple.quarantine "{realesrgan_exec_path}"')
-
- except Exception as e:
- raise Exception(f"Error while downloading {realesrgan_zip_path}. Please download from: {download_url}, and extract its contents into: {models_folder}/realesrgan_ncnn")
-
-def make_upscale_v2(upscale_factor, upscale_model, keep_imgs, imgs_raw_path, imgs_batch_id, deforum_models_path, current_user_os, ffmpeg_location, ffmpeg_crf, ffmpeg_preset, fps, stitch_from_frame, stitch_to_frame, audio_path, add_soundtrack):
- # get clean number from 'x2, x3' etc
- clean_num_r_up_factor = extract_number(upscale_factor)
- # set paths
- realesrgan_ncnn_location = os.path.join(deforum_models_path, 'realesrgan_ncnn', 'realesrgan-ncnn-vulkan' + ('.exe' if current_user_os == 'Windows' else ''))
- upscaled_folder_path = os.path.join(imgs_raw_path, f"{imgs_batch_id}_upscaled")
- temp_folder_to_keep_raw_ims = os.path.join(upscaled_folder_path, 'temp_raw_imgs_to_upscale')
- out_upscaled_mp4_path = os.path.join(imgs_raw_path, f"{imgs_batch_id}_Upscaled_{upscale_factor}.mp4")
- # download upscaling model if needed
- check_and_download_realesrgan_ncnn(deforum_models_path, current_user_os)
- # make a folder holding only the imgs we need, so we can call the ncnn executable with its folder syntax (quicker!)
- duplicate_pngs_from_folder(from_folder=imgs_raw_path, to_folder=temp_folder_to_keep_raw_ims, img_batch_id=imgs_batch_id, orig_vid_name='Dummy')
- # set dynamic cmd command
- cmd = [realesrgan_ncnn_location, '-i', temp_folder_to_keep_raw_ims, '-o', upscaled_folder_path, '-s', str(clean_num_r_up_factor), '-n', upscale_model]
- # message to print - we keep it so we can overwrite/hide exactly that text later on (!)
- msg_to_print = f"Upscaling raw output PNGs using {upscale_model} at {upscale_factor}..."
- # blink the msg in the cli until action is done
- console.print(msg_to_print, style="blink yellow", end="")
- start_time = time.time()
- # call the ncnn upscaling executable
- process = subprocess.run(cmd, capture_output=True, check=True, text=True, cwd=(os.path.join(deforum_models_path, 'realesrgan_ncnn') if current_user_os == 'Mac' else None))
- print("\r" + " " * len(msg_to_print), end="", flush=True)
- print(f"\r{msg_to_print}", flush=True)
- print(f"\rUpscaling \033[0;32mdone\033[0m in {time.time() - start_time:.2f} seconds!", flush=True)
- # set custom path for ffmpeg func below
- upscaled_imgs_path_for_ffmpeg = os.path.join(upscaled_folder_path, f"{imgs_batch_id}_%05d.png")
- # stitch video from upscaled pngs
- ffmpeg_stitch_video(ffmpeg_location=ffmpeg_location, fps=fps, outmp4_path=out_upscaled_mp4_path, stitch_from_frame=stitch_from_frame, stitch_to_frame=stitch_to_frame, imgs_path=upscaled_imgs_path_for_ffmpeg, add_soundtrack=add_soundtrack, audio_path=audio_path, crf=ffmpeg_crf, preset=ffmpeg_preset)
-
- # delete the duplicated raw imgs
- shutil.rmtree(temp_folder_to_keep_raw_ims)
-
- if not keep_imgs:
- shutil.rmtree(upscaled_folder_path)
-# NCNN Upscale section END
\ No newline at end of file
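The download helper above only unpacks the Real-ESRGAN archive after its hash matches a pinned value (the 128-hex-digit checksums suggest SHA-512). A minimal, self-contained sketch of that same download-verify-extract pattern; the URL, folder names and helper names here are illustrative placeholders, not the project's actual values:

import hashlib
import os
import urllib.request
import zipfile

def sha512_of_file(path, chunk_size=1 << 20):
    # Stream the file so large archives don't have to fit in memory.
    digest = hashlib.sha512()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def download_verify_extract(url, expected_sha512, dest_folder):
    os.makedirs(dest_folder, exist_ok=True)
    zip_path = os.path.join(dest_folder, os.path.basename(url))
    urllib.request.urlretrieve(url, zip_path)            # download the archive
    if sha512_of_file(zip_path) != expected_sha512:      # reject broken / interrupted downloads
        raise RuntimeError(f"Checksum mismatch for {zip_path}; please re-download from {url}")
    with zipfile.ZipFile(zip_path, 'r') as zf:            # hash OK, unpack next to the zip
        zf.extractall(dest_folder)
    os.remove(zip_path)                                    # tidy up the archive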
diff --git a/spaces/jackyliang42/code-as-policies/sim.py b/spaces/jackyliang42/code-as-policies/sim.py
deleted file mode 100644
index 901bd9c753bb6b9ff1ead200f5d9e8256231d5f6..0000000000000000000000000000000000000000
--- a/spaces/jackyliang42/code-as-policies/sim.py
+++ /dev/null
@@ -1,655 +0,0 @@
-import pybullet
-from pybullet_utils.bullet_client import BulletClient
-import pybullet_data
-import threading
-from time import sleep
-import numpy as np
-import os
-from consts import BOUNDS, COLORS, PIXEL_SIZE, CORNER_POS
-from shapely.geometry import box
-
-
-# Gripper (Robotiq 2F85) code
-class Robotiq2F85:
- """Gripper handling for Robotiq 2F85."""
-
- def __init__(self, robot, tool, p):
- self.robot = robot
- self.tool = tool
- self._p = p
- pos = [0.1339999999999999, -0.49199999999872496, 0.5]
- rot = self._p.getQuaternionFromEuler([np.pi, 0, np.pi])
- urdf = 'robotiq_2f_85/robotiq_2f_85.urdf'
- self.body = self._p.loadURDF(urdf, pos, rot)
- self.n_joints = self._p.getNumJoints(self.body)
- self.activated = False
-
- # Connect gripper base to robot tool.
- self._p.createConstraint(self.robot, tool, self.body, 0, jointType=self._p.JOINT_FIXED, jointAxis=[0, 0, 0], parentFramePosition=[0, 0, 0], childFramePosition=[0, 0, -0.07], childFrameOrientation=self._p.getQuaternionFromEuler([0, 0, np.pi / 2]))
-
- # Set friction coefficients for gripper fingers.
- for i in range(self._p.getNumJoints(self.body)):
- self._p.changeDynamics(self.body, i, lateralFriction=10.0, spinningFriction=1.0, rollingFriction=1.0, frictionAnchor=True)
-
- # Start thread to handle additional gripper constraints.
- self.motor_joint = 1
- self.constraints_thread = threading.Thread(target=self.step)
- self.constraints_thread.daemon = True
- self.constraints_thread.start()
-
- # Control joint positions by enforcing hard constraints on gripper behavior.
- # Set one joint as the open/close motor joint (other joints should mimic).
- def step(self):
- while True:
- try:
- currj = [self._p.getJointState(self.body, i)[0] for i in range(self.n_joints)]
- indj = [6, 3, 8, 5, 10]
- targj = [currj[1], -currj[1], -currj[1], currj[1], currj[1]]
- self._p.setJointMotorControlArray(self.body, indj, self._p.POSITION_CONTROL, targj, positionGains=np.ones(5))
- except:
- return
- sleep(0.001)
-
- # Close gripper fingers.
- def activate(self):
- self._p.setJointMotorControl2(self.body, self.motor_joint, self._p.VELOCITY_CONTROL, targetVelocity=1, force=10)
- self.activated = True
-
- # Open gripper fingers.
- def release(self):
- self._p.setJointMotorControl2(self.body, self.motor_joint, self._p.VELOCITY_CONTROL, targetVelocity=-1, force=10)
- self.activated = False
-
- # If activated and object in gripper: check object contact.
- # If activated and nothing in gripper: check gripper contact.
- # If released: check proximity to surface (disabled).
- def detect_contact(self):
- obj, _, ray_frac = self.check_proximity()
- if self.activated:
- empty = self.grasp_width() < 0.01
- cbody = self.body if empty else obj
- if obj == self.body or obj == 0:
- return False
- return self.external_contact(cbody)
- # else:
- # return ray_frac < 0.14 or self.external_contact()
-
- # Return if body is in contact with something other than gripper
- def external_contact(self, body=None):
- if body is None:
- body = self.body
- pts = self._p.getContactPoints(bodyA=body)
- pts = [pt for pt in pts if pt[2] != self.body]
- return len(pts) > 0 # pylint: disable=g-explicit-length-test
-
- def check_grasp(self):
- while self.moving():
- sleep(0.001)
- success = self.grasp_width() > 0.01
- return success
-
- def grasp_width(self):
- lpad = np.array(self._p.getLinkState(self.body, 4)[0])
- rpad = np.array(self._p.getLinkState(self.body, 9)[0])
- dist = np.linalg.norm(lpad - rpad) - 0.047813
- return dist
-
- def check_proximity(self):
- ee_pos = np.array(self._p.getLinkState(self.robot, self.tool)[0])
- tool_pos = np.array(self._p.getLinkState(self.body, 0)[0])
- vec = (tool_pos - ee_pos) / np.linalg.norm((tool_pos - ee_pos))
- ee_targ = ee_pos + vec
- ray_data = self._p.rayTest(ee_pos, ee_targ)[0]
- obj, link, ray_frac = ray_data[0], ray_data[1], ray_data[2]
- return obj, link, ray_frac
-
-
-# Gym-style environment code
-class PickPlaceEnv():
-
- def __init__(self, render=False, high_res=False, high_frame_rate=False):
- self.dt = 1/480
- self.sim_step = 0
-
- # Configure and start PyBullet
- # self._p = pybullet.connect(pybullet.DIRECT)
- self._p = BulletClient(connection_mode=pybullet.DIRECT)
- self._p.configureDebugVisualizer(self._p.COV_ENABLE_GUI, 0)
- self._p.setPhysicsEngineParameter(enableFileCaching=0)
- assets_path = os.path.dirname(os.path.abspath(""))
- self._p.setAdditionalSearchPath(assets_path)
- self._p.setAdditionalSearchPath(pybullet_data.getDataPath())
- self._p.setTimeStep(self.dt)
-
- self.home_joints = (np.pi / 2, -np.pi / 2, np.pi / 2, -np.pi / 2, 3 * np.pi / 2, 0) # Joint angles: (J0, J1, J2, J3, J4, J5).
- self.home_ee_euler = (np.pi, 0, np.pi) # (RX, RY, RZ) rotation in Euler angles.
- self.ee_link_id = 9 # Link ID of UR5 end effector.
- self.tip_link_id = 10 # Link ID of gripper finger tips.
- self.gripper = None
-
- self.render = render
- self.high_res = high_res
- self.high_frame_rate = high_frame_rate
-
- def reset(self, object_list):
- self._p.resetSimulation(self._p.RESET_USE_DEFORMABLE_WORLD)
- self._p.setGravity(0, 0, -9.8)
- self.cache_video = []
-
- # Temporarily disable rendering to load URDFs faster.
- self._p.configureDebugVisualizer(self._p.COV_ENABLE_RENDERING, 0)
-
- # Add robot.
- self._p.loadURDF("plane.urdf", [0, 0, -0.001])
- self.robot_id = self._p.loadURDF("ur5e/ur5e.urdf", [0, 0, 0], flags=self._p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
- self.ghost_id = self._p.loadURDF("ur5e/ur5e.urdf", [0, 0, -10]) # For forward kinematics.
- self.joint_ids = [self._p.getJointInfo(self.robot_id, i) for i in range(self._p.getNumJoints(self.robot_id))]
- self.joint_ids = [j[0] for j in self.joint_ids if j[2] == self._p.JOINT_REVOLUTE]
-
- # Move robot to home configuration.
- for i in range(len(self.joint_ids)):
- self._p.resetJointState(self.robot_id, self.joint_ids[i], self.home_joints[i])
-
- # Add gripper.
- if self.gripper is not None:
- while self.gripper.constraints_thread.is_alive():
- self.constraints_thread_active = False
- self.gripper = Robotiq2F85(self.robot_id, self.ee_link_id, self._p)
- self.gripper.release()
-
- # Add workspace.
- plane_shape = self._p.createCollisionShape(self._p.GEOM_BOX, halfExtents=[0.3, 0.3, 0.001])
- plane_visual = self._p.createVisualShape(self._p.GEOM_BOX, halfExtents=[0.3, 0.3, 0.001])
- plane_id = self._p.createMultiBody(0, plane_shape, plane_visual, basePosition=[0, -0.5, 0])
- self._p.changeVisualShape(plane_id, -1, rgbaColor=[0.2, 0.2, 0.2, 1.0])
-
- # Load objects according to config.
- self.object_list = object_list
- self.obj_name_to_id = {}
- obj_xyz = np.zeros((0, 3))
- for obj_name in object_list:
- if ('block' in obj_name) or ('bowl' in obj_name):
-
- # Get random position 15cm+ from other objects.
- while True:
- rand_x = np.random.uniform(BOUNDS[0, 0] + 0.1, BOUNDS[0, 1] - 0.1)
- rand_y = np.random.uniform(BOUNDS[1, 0] + 0.1, BOUNDS[1, 1] - 0.1)
- rand_xyz = np.float32([rand_x, rand_y, 0.03]).reshape(1, 3)
- if len(obj_xyz) == 0:
- obj_xyz = np.concatenate((obj_xyz, rand_xyz), axis=0)
- break
- else:
- nn_dist = np.min(np.linalg.norm(obj_xyz - rand_xyz, axis=1)).squeeze()
- if nn_dist > 0.15:
- obj_xyz = np.concatenate((obj_xyz, rand_xyz), axis=0)
- break
-
- object_color = COLORS[obj_name.split(' ')[0]]
- object_type = obj_name.split(' ')[1]
- object_position = rand_xyz.squeeze()
- if object_type == 'block':
- object_shape = self._p.createCollisionShape(self._p.GEOM_BOX, halfExtents=[0.02, 0.02, 0.02])
- object_visual = self._p.createVisualShape(self._p.GEOM_BOX, halfExtents=[0.02, 0.02, 0.02])
- object_id = self._p.createMultiBody(0.01, object_shape, object_visual, basePosition=object_position)
- elif object_type == 'bowl':
- object_position[2] = 0
- object_id = self._p.loadURDF("bowl/bowl.urdf", object_position, useFixedBase=1)
- self._p.changeVisualShape(object_id, -1, rgbaColor=object_color)
- self.obj_name_to_id[obj_name] = object_id
-
- # Re-enable rendering.
- self._p.configureDebugVisualizer(self._p.COV_ENABLE_RENDERING, 1)
-
- for _ in range(200):
- self._p.stepSimulation()
-
- # record object positions at reset
- self.init_pos = {name: self.get_obj_pos(name) for name in object_list}
-
- return self.get_observation()
-
- def servoj(self, joints):
- """Move to target joint positions with position control."""
- self._p.setJointMotorControlArray(
- bodyIndex=self.robot_id,
- jointIndices=self.joint_ids,
- controlMode=self._p.POSITION_CONTROL,
- targetPositions=joints,
- positionGains=[0.01]*6)
-
- def movep(self, position):
- """Move to target end effector position."""
- joints = self._p.calculateInverseKinematics(
- bodyUniqueId=self.robot_id,
- endEffectorLinkIndex=self.tip_link_id,
- targetPosition=position,
- targetOrientation=self._p.getQuaternionFromEuler(self.home_ee_euler),
- maxNumIterations=100)
- self.servoj(joints)
-
- def get_ee_pos(self):
- ee_xyz = np.float32(self._p.getLinkState(self.robot_id, self.tip_link_id)[0])
- return ee_xyz
-
- def step(self, action=None):
- """Do pick and place motion primitive."""
- pick_pos, place_pos = action['pick'].copy(), action['place'].copy()
-
- # Set fixed primitive z-heights.
- hover_xyz = np.float32([pick_pos[0], pick_pos[1], 0.2])
- if pick_pos.shape[-1] == 2:
- pick_xyz = np.append(pick_pos, 0.025)
- else:
- pick_xyz = pick_pos
- pick_xyz[2] = 0.025
- if place_pos.shape[-1] == 2:
- place_xyz = np.append(place_pos, 0.15)
- else:
- place_xyz = place_pos
- place_xyz[2] = 0.15
-
- # Move to object.
- ee_xyz = self.get_ee_pos()
- while np.linalg.norm(hover_xyz - ee_xyz) > 0.01:
- self.movep(hover_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
-
- while np.linalg.norm(pick_xyz - ee_xyz) > 0.01:
- self.movep(pick_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
-
- # Pick up object.
- self.gripper.activate()
- for _ in range(240):
- self.step_sim_and_render()
- while np.linalg.norm(hover_xyz - ee_xyz) > 0.01:
- self.movep(hover_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
-
- for _ in range(50):
- self.step_sim_and_render()
-
- # Move to place location.
- while np.linalg.norm(place_xyz - ee_xyz) > 0.01:
- self.movep(place_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
-
- # Place down object.
- while (not self.gripper.detect_contact()) and (place_xyz[2] > 0.03):
- place_xyz[2] -= 0.001
- self.movep(place_xyz)
- for _ in range(3):
- self.step_sim_and_render()
- self.gripper.release()
- for _ in range(240):
- self.step_sim_and_render()
- place_xyz[2] = 0.2
- ee_xyz = self.get_ee_pos()
- while np.linalg.norm(place_xyz - ee_xyz) > 0.01:
- self.movep(place_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
- place_xyz = np.float32([0, -0.5, 0.2])
- while np.linalg.norm(place_xyz - ee_xyz) > 0.01:
- self.movep(place_xyz)
- self.step_sim_and_render()
- ee_xyz = self.get_ee_pos()
-
- observation = self.get_observation()
- reward = self.get_reward()
- done = False
- info = {}
- return observation, reward, done, info
-
- def set_alpha_transparency(self, alpha: float) -> None:
- for id in range(20):
- visual_shape_data = self._p.getVisualShapeData(id)
- for i in range(len(visual_shape_data)):
- object_id, link_index, _, _, _, _, _, rgba_color = visual_shape_data[i]
- rgba_color = list(rgba_color[0:3]) + [alpha]
- self._p.changeVisualShape(
- self.robot_id, linkIndex=i, rgbaColor=rgba_color)
- self._p.changeVisualShape(
- self.gripper.body, linkIndex=i, rgbaColor=rgba_color)
-
- def step_sim_and_render(self):
- self._p.stepSimulation()
- self.sim_step += 1
-
- interval = 40 if self.high_frame_rate else 60
- # Render current image at 8 FPS.
- if self.sim_step % interval == 0 and self.render:
- self.cache_video.append(self.get_camera_image())
-
- def get_camera_image(self):
- if not self.high_res:
- image_size = (240, 240)
- intrinsics = (120., 0, 120., 0, 120., 120., 0, 0, 1)
- else:
- image_size=(360, 360)
- intrinsics=(180., 0, 180., 0, 180., 180., 0, 0, 1)
- color, _, _, _, _ = self.render_image(image_size, intrinsics)
- return color
-
- def get_reward(self):
- return None
-
- def get_observation(self):
- observation = {}
-
- # Render current image.
- color, depth, position, orientation, intrinsics = self.render_image()
-
- # Get heightmaps and colormaps.
- points = self.get_pointcloud(depth, intrinsics)
- position = np.float32(position).reshape(3, 1)
- rotation = self._p.getMatrixFromQuaternion(orientation)
- rotation = np.float32(rotation).reshape(3, 3)
- transform = np.eye(4)
- transform[:3, :] = np.hstack((rotation, position))
- points = self.transform_pointcloud(points, transform)
- heightmap, colormap, xyzmap = self.get_heightmap(points, color, BOUNDS, PIXEL_SIZE)
-
- observation["image"] = colormap
- observation["xyzmap"] = xyzmap
-
- return observation
-
- def render_image(self, image_size=(720, 720), intrinsics=(360., 0, 360., 0, 360., 360., 0, 0, 1)):
-
- # Camera parameters.
- position = (0, -0.85, 0.4)
- orientation = (np.pi / 4 + np.pi / 48, np.pi, np.pi)
- orientation = self._p.getQuaternionFromEuler(orientation)
- zrange = (0.01, 10.)
- noise=True
-
- # OpenGL camera settings.
- lookdir = np.float32([0, 0, 1]).reshape(3, 1)
- updir = np.float32([0, -1, 0]).reshape(3, 1)
- rotation = self._p.getMatrixFromQuaternion(orientation)
- rotm = np.float32(rotation).reshape(3, 3)
- lookdir = (rotm @ lookdir).reshape(-1)
- updir = (rotm @ updir).reshape(-1)
- lookat = position + lookdir
- focal_len = intrinsics[0]
- znear, zfar = (0.01, 10.)
- viewm = self._p.computeViewMatrix(position, lookat, updir)
- fovh = (image_size[0] / 2) / focal_len
- fovh = 180 * np.arctan(fovh) * 2 / np.pi
-
- # Notes: 1) FOV is vertical FOV 2) aspect must be float
- aspect_ratio = image_size[1] / image_size[0]
- projm = self._p.computeProjectionMatrixFOV(fovh, aspect_ratio, znear, zfar)
-
- # Render with OpenGL camera settings.
- _, _, color, depth, segm = self._p.getCameraImage(
- width=image_size[1],
- height=image_size[0],
- viewMatrix=viewm,
- projectionMatrix=projm,
- shadow=1,
- flags=self._p.ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX,
- renderer=self._p.ER_BULLET_HARDWARE_OPENGL)
-
- # Get color image.
- color_image_size = (image_size[0], image_size[1], 4)
- color = np.array(color, dtype=np.uint8).reshape(color_image_size)
- color = color[:, :, :3] # remove alpha channel
- if noise:
- color = np.int32(color)
- color += np.int32(np.random.normal(0, 3, color.shape))
- color = np.uint8(np.clip(color, 0, 255))
-
- # Get depth image.
- depth_image_size = (image_size[0], image_size[1])
- zbuffer = np.float32(depth).reshape(depth_image_size)
- depth = (zfar + znear - (2 * zbuffer - 1) * (zfar - znear))
- depth = (2 * znear * zfar) / depth
- if noise:
- depth += np.random.normal(0, 0.003, depth.shape)
-
- intrinsics = np.float32(intrinsics).reshape(3, 3)
- return color, depth, position, orientation, intrinsics
-
- def get_pointcloud(self, depth, intrinsics):
- """Get 3D pointcloud from perspective depth image.
- Args:
- depth: HxW float array of perspective depth in meters.
- intrinsics: 3x3 float array of camera intrinsics matrix.
- Returns:
- points: HxWx3 float array of 3D points in camera coordinates.
- """
- height, width = depth.shape
- xlin = np.linspace(0, width - 1, width)
- ylin = np.linspace(0, height - 1, height)
- px, py = np.meshgrid(xlin, ylin)
- px = (px - intrinsics[0, 2]) * (depth / intrinsics[0, 0])
- py = (py - intrinsics[1, 2]) * (depth / intrinsics[1, 1])
- points = np.float32([px, py, depth]).transpose(1, 2, 0)
- return points
-
- def transform_pointcloud(self, points, transform):
- """Apply rigid transformation to 3D pointcloud.
- Args:
- points: HxWx3 float array of 3D points in camera coordinates.
- transform: 4x4 float array representing a rigid transformation matrix.
- Returns:
- points: HxWx3 float array of transformed 3D points.
- """
- padding = ((0, 0), (0, 0), (0, 1))
- homogen_points = np.pad(points.copy(), padding,
- 'constant', constant_values=1)
- for i in range(3):
- points[Ellipsis, i] = np.sum(transform[i, :] * homogen_points, axis=-1)
- return points
-
- def get_heightmap(self, points, colors, bounds, pixel_size):
- """Get top-down (z-axis) orthographic heightmap image from 3D pointcloud.
- Args:
- points: HxWx3 float array of 3D points in world coordinates.
- colors: HxWx3 uint8 array of values in range 0-255 aligned with points.
- bounds: 3x2 float array of values (rows: X,Y,Z; columns: min,max) defining
- region in 3D space to generate heightmap in world coordinates.
- pixel_size: float defining size of each pixel in meters.
- Returns:
- heightmap: HxW float array of height (from lower z-bound) in meters.
- colormap: HxWx3 uint8 array of backprojected color aligned with heightmap.
- xyzmap: HxWx3 float array of XYZ points in world coordinates.
- """
- width = int(np.round((bounds[0, 1] - bounds[0, 0]) / pixel_size))
- height = int(np.round((bounds[1, 1] - bounds[1, 0]) / pixel_size))
- heightmap = np.zeros((height, width), dtype=np.float32)
- colormap = np.zeros((height, width, colors.shape[-1]), dtype=np.uint8)
- xyzmap = np.zeros((height, width, 3), dtype=np.float32)
-
- # Filter out 3D points that are outside of the predefined bounds.
- ix = (points[Ellipsis, 0] >= bounds[0, 0]) & (points[Ellipsis, 0] < bounds[0, 1])
- iy = (points[Ellipsis, 1] >= bounds[1, 0]) & (points[Ellipsis, 1] < bounds[1, 1])
- iz = (points[Ellipsis, 2] >= bounds[2, 0]) & (points[Ellipsis, 2] < bounds[2, 1])
- valid = ix & iy & iz
- points = points[valid]
- colors = colors[valid]
-
- # Sort 3D points by z-value, which works with array assignment to simulate
- # z-buffering for rendering the heightmap image.
- iz = np.argsort(points[:, -1])
- points, colors = points[iz], colors[iz]
- px = np.int32(np.floor((points[:, 0] - bounds[0, 0]) / pixel_size))
- py = np.int32(np.floor((points[:, 1] - bounds[1, 0]) / pixel_size))
- px = np.clip(px, 0, width - 1)
- py = np.clip(py, 0, height - 1)
- heightmap[py, px] = points[:, 2] - bounds[2, 0]
- for c in range(colors.shape[-1]):
- colormap[py, px, c] = colors[:, c]
- xyzmap[py, px, c] = points[:, c]
- colormap = colormap[::-1, :, :] # Flip up-down.
- xv, yv = np.meshgrid(np.linspace(BOUNDS[0, 0], BOUNDS[0, 1], height),
- np.linspace(BOUNDS[1, 0], BOUNDS[1, 1], width))
- xyzmap[:, :, 0] = xv
- xyzmap[:, :, 1] = yv
- xyzmap = xyzmap[::-1, :, :] # Flip up-down.
- heightmap = heightmap[::-1, :] # Flip up-down.
- return heightmap, colormap, xyzmap
-
- def on_top_of(self, obj_a, obj_b):
- """
- check if obj_a is on top of obj_b
- condition 1: l2 distance on xy plane is less than a threshold
- condition 2: obj_a is higher than obj_b
- """
- obj_a_pos = self.get_obj_pos(obj_a)
- obj_b_pos = self.get_obj_pos(obj_b)
- xy_dist = np.linalg.norm(obj_a_pos[:2] - obj_b_pos[:2])
- if obj_b in CORNER_POS:
- is_near = xy_dist < 0.06
- return is_near
- elif 'bowl' in obj_b:
- is_near = xy_dist < 0.06
- is_higher = obj_a_pos[2] > obj_b_pos[2]
- return is_near and is_higher
- else:
- is_near = xy_dist < 0.04
- is_higher = obj_a_pos[2] > obj_b_pos[2]
- return is_near and is_higher
-
- def get_obj_id(self, obj_name):
- try:
- if obj_name in self.obj_name_to_id:
- obj_id = self.obj_name_to_id[obj_name]
- else:
- obj_name = obj_name.replace('circle', 'bowl').replace('square', 'block').replace('small', '').strip()
- obj_id = self.obj_name_to_id[obj_name]
- return obj_id
- except:
- raise Exception('Object name "{}" not found'.format(obj_name))
-
- def get_obj_pos(self, obj_name):
- obj_name = obj_name.replace('the', '').replace('_', ' ').strip()
- if obj_name in CORNER_POS:
- position = np.float32(np.array(CORNER_POS[obj_name]))
- else:
- pick_id = self.get_obj_id(obj_name)
- pose = self._p.getBasePositionAndOrientation(pick_id)
- position = np.float32(pose[0])
- return position
-
- def get_bounding_box(self, obj_name):
- obj_id = self.get_obj_id(obj_name)
- return self._p.getAABB(obj_id)
-
-
-class LMP_wrapper():
-
- def __init__(self, env, cfg, render=False):
- self.env = env
- self._cfg = cfg
- self.object_names = list(self._cfg['env']['init_objs'])
-
- self._min_xy = np.array(self._cfg['env']['coords']['bottom_left'])
- self._max_xy = np.array(self._cfg['env']['coords']['top_right'])
- self._range_xy = self._max_xy - self._min_xy
-
- self._table_z = self._cfg['env']['coords']['table_z']
- self.render = render
-
- def is_obj_visible(self, obj_name):
- return obj_name in self.object_names
-
- def get_obj_names(self):
- return self.object_names[::]
-
- def denormalize_xy(self, pos_normalized):
- return pos_normalized * self._range_xy + self._min_xy
-
- def get_corner_positions(self):
- unit_square = box(0, 0, 1, 1)
- normalized_corners = np.array(list(unit_square.exterior.coords))[:4]
- corners = np.array(([self.denormalize_xy(corner) for corner in normalized_corners]))
- return corners
-
- def get_side_positions(self):
- side_xs = np.array([0, 0.5, 0.5, 1])
- side_ys = np.array([0.5, 0, 1, 0.5])
- normalized_side_positions = np.c_[side_xs, side_ys]
- side_positions = np.array(([self.denormalize_xy(corner) for corner in normalized_side_positions]))
- return side_positions
-
- def get_obj_pos(self, obj_name):
- # return the xy position of the object in robot base frame
- return self.env.get_obj_pos(obj_name)[:2]
-
- def get_obj_position_np(self, obj_name):
- return self.get_obj_pos(obj_name)
-
- def get_bbox(self, obj_name):
- # return the axis-aligned object bounding box in robot base frame (not in pixels)
- # the format is (min_x, min_y, max_x, max_y)
- bbox = self.env.get_bounding_box(obj_name)
- return bbox
-
- def get_color(self, obj_name):
- for color, rgb in COLORS.items():
- if color in obj_name:
- return rgb
-
- def pick_place(self, pick_pos, place_pos):
- pick_pos_xyz = np.r_[pick_pos, [self._table_z]]
- place_pos_xyz = np.r_[place_pos, [self._table_z]]
- pass
-
- def put_first_on_second(self, arg1, arg2):
- # put the object with obj_name on top of target
- # target can either be another object name, or it can be an x-y position in robot base frame
- pick_pos = self.get_obj_pos(arg1) if isinstance(arg1, str) else arg1
- place_pos = self.get_obj_pos(arg2) if isinstance(arg2, str) else arg2
- self.env.step(action={'pick': pick_pos, 'place': place_pos})
-
- def get_robot_pos(self):
- # return robot end-effector xy position in robot base frame
- return self.env.get_ee_pos()
-
- def goto_pos(self, position_xy):
- # move the robot end-effector to the desired xy position while maintaining same z
- ee_xyz = self.env.get_ee_pos()
- position_xyz = np.concatenate([position_xy, ee_xyz[-1]])
- while np.linalg.norm(position_xyz - ee_xyz) > 0.01:
- self.env.movep(position_xyz)
- self.env.step_sim_and_render()
- ee_xyz = self.env.get_ee_pos()
-
- def follow_traj(self, traj):
- for pos in traj:
- self.goto_pos(pos)
-
- def get_corner_positions(self):
- normalized_corners = np.array([
- [0, 1],
- [1, 1],
- [0, 0],
- [1, 0]
- ])
- return np.array(([self.denormalize_xy(corner) for corner in normalized_corners]))
-
- def get_side_positions(self):
- normalized_sides = np.array([
- [0.5, 1],
- [1, 0.5],
- [0.5, 0],
- [0, 0.5]
- ])
- return np.array(([self.denormalize_xy(side) for side in normalized_sides]))
-
- def get_corner_name(self, pos):
- corner_positions = self.get_corner_positions()
- corner_idx = np.argmin(np.linalg.norm(corner_positions - pos, axis=1))
- return ['top left corner', 'top right corner', 'bottom left corner', 'bottom right corner'][corner_idx]
-
- def get_side_name(self, pos):
- side_positions = self.get_side_positions()
- side_idx = np.argmin(np.linalg.norm(side_positions - pos, axis=1))
- return ['top side', 'right side', 'bottom side', 'left side'][side_idx]
\ No newline at end of file
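PickPlaceEnv.get_pointcloud above backprojects a depth image through the pinhole intrinsics (x = (u - cx) * z / fx, y = (v - cy) * z / fy), and transform_pointcloud then maps the result into world coordinates with a 4x4 rigid transform. A small standalone numpy sketch of that math, using a made-up 4x4 depth image and a toy camera pose purely for illustration:

import numpy as np

# Toy depth image (meters). The real 240x240 camera above uses intrinsics
# (120, 0, 120, 0, 120, 120, 0, 0, 1); here we use a tiny 4x4 example.
depth = np.full((4, 4), 2.0, dtype=np.float32)
intrinsics = np.float32([[2.0, 0.0, 2.0],
                         [0.0, 2.0, 2.0],
                         [0.0, 0.0, 1.0]])

# Backproject pixels to camera-frame 3D points.
h, w = depth.shape
px, py = np.meshgrid(np.arange(w), np.arange(h))
x = (px - intrinsics[0, 2]) * depth / intrinsics[0, 0]
y = (py - intrinsics[1, 2]) * depth / intrinsics[1, 1]
points = np.stack([x, y, depth], axis=-1)  # HxWx3, camera frame

# Apply a rigid transform (here: translate the camera 0.5 m along +z in world frame).
transform = np.eye(4, dtype=np.float32)
transform[2, 3] = 0.5
homogeneous = np.concatenate([points, np.ones((h, w, 1), np.float32)], axis=-1)
world_points = homogeneous @ transform.T   # HxWx4, world frame
print(world_points[0, 0, :3])              # -> [-2., -2., 2.5] for these toy values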
diff --git a/spaces/james-oldfield/PandA/networks/biggan/utils.py b/spaces/james-oldfield/PandA/networks/biggan/utils.py
deleted file mode 100644
index 3b9edbef3ecc9bf85092f4e670eb5fac8a3b4616..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/biggan/utils.py
+++ /dev/null
@@ -1,216 +0,0 @@
-# coding: utf-8
-""" BigGAN utilities to prepare truncated noise samples and convert/save/display output images.
- Also comprises ImageNet utilities to prepare one-hot input vectors for ImageNet classes.
- We use WordNet so you can just pass a class name as a string and automatically get the
- corresponding ImageNet class if it exists (or if a hyponym/hypernym of it exists in ImageNet).
-"""
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import json
-import logging
-from io import BytesIO
-
-import numpy as np
-from scipy.stats import truncnorm
-
-logger = logging.getLogger(__name__)
-
-NUM_CLASSES = 1000
-
-
-def truncated_noise_sample(batch_size=1, dim_z=128, truncation=1., seed=None):
- """ Create a truncated noise vector.
- Params:
- batch_size: batch size.
- dim_z: dimension of z
- truncation: truncation value to use
- seed: seed for the random generator
- Output:
- array of shape (batch_size, dim_z)
- """
- state = None if seed is None else np.random.RandomState(seed)
- values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)
- return truncation * values
-
-
-def convert_to_images(obj):
- """ Convert an output tensor from BigGAN in a list of images.
- Params:
- obj: tensor or numpy array of shape (batch_size, channels, height, width)
- Output:
- list of Pillow Images of size (height, width)
- """
- try:
- import PIL
- except ImportError:
- raise ImportError("Please install Pillow to use images: pip install Pillow")
-
- if not isinstance(obj, np.ndarray):
- obj = obj.detach().numpy()
-
- obj = obj.transpose((0, 2, 3, 1))
- obj = np.clip(((obj + 1) / 2.0) * 256, 0, 255)
-
- img = []
- for i, out in enumerate(obj):
- out_array = np.asarray(np.uint8(out), dtype=np.uint8)
- img.append(PIL.Image.fromarray(out_array))
- return img
-
-
-def save_as_images(obj, file_name='output'):
- """ Convert and save an output tensor from BigGAN in a list of saved images.
- Params:
- obj: tensor or numpy array of shape (batch_size, channels, height, width)
- file_name: path and beginning of the filename to save.
- Images will be saved as `file_name_{image_number}.png`
- """
- img = convert_to_images(obj)
-
- for i, out in enumerate(img):
- current_file_name = file_name + '_%d.png' % i
- logger.info("Saving image to {}".format(current_file_name))
- out.save(current_file_name, 'png')
-
-
-def display_in_terminal(obj):
- """ Convert and display an output tensor from BigGAN in the terminal.
- This function uses `libsixel` and will only work in a libsixel-compatible terminal.
- Please refer to https://github.com/saitoha/libsixel for more details.
-
- Params:
- obj: tensor or numpy array of shape (batch_size, channels, height, width)
- """
- try:
- import PIL
- from libsixel import (sixel_output_new, sixel_dither_new, sixel_dither_initialize,
- sixel_dither_set_palette, sixel_dither_set_pixelformat,
- sixel_dither_get, sixel_encode, sixel_dither_unref,
- sixel_output_unref, SIXEL_PIXELFORMAT_RGBA8888,
- SIXEL_PIXELFORMAT_RGB888, SIXEL_PIXELFORMAT_PAL8,
- SIXEL_PIXELFORMAT_G8, SIXEL_PIXELFORMAT_G1,
- SIXEL_BUILTIN_G8, SIXEL_BUILTIN_G1)  # builtin dithers, used for 'L' and '1' image modes below
- except ImportError:
- raise ImportError("Display in Terminal requires Pillow, libsixel "
- "and a libsixel compatible terminal. "
- "Please read info at https://github.com/saitoha/libsixel "
- "and install with pip install Pillow libsixel-python")
-
- s = BytesIO()
-
- images = convert_to_images(obj)
- widths, heights = zip(*(i.size for i in images))
-
- output_width = sum(widths)
- output_height = max(heights)
-
- output_image = PIL.Image.new('RGB', (output_width, output_height))
-
- x_offset = 0
- for im in images:
- output_image.paste(im, (x_offset,0))
- x_offset += im.size[0]
-
- try:
- data = output_image.tobytes()
- except NotImplementedError:
- data = output_image.tostring()
- output = sixel_output_new(lambda data, s: s.write(data), s)
-
- try:
- if output_image.mode == 'RGBA':
- dither = sixel_dither_new(256)
- sixel_dither_initialize(dither, data, output_width, output_height, SIXEL_PIXELFORMAT_RGBA8888)
- elif output_image.mode == 'RGB':
- dither = sixel_dither_new(256)
- sixel_dither_initialize(dither, data, output_width, output_height, SIXEL_PIXELFORMAT_RGB888)
- elif output_image.mode == 'P':
- palette = output_image.getpalette()
- dither = sixel_dither_new(256)
- sixel_dither_set_palette(dither, palette)
- sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_PAL8)
- elif output_image.mode == 'L':
- dither = sixel_dither_get(SIXEL_BUILTIN_G8)
- sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_G8)
- elif output_image.mode == '1':
- dither = sixel_dither_get(SIXEL_BUILTIN_G1)
- sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_G1)
- else:
- raise RuntimeError('unexpected output_image mode')
- try:
- sixel_encode(data, output_width, output_height, 1, dither, output)
- print(s.getvalue().decode('ascii'))
- finally:
- sixel_dither_unref(dither)
- finally:
- sixel_output_unref(output)
-
-
-def one_hot_from_int(int_or_list, batch_size=1):
- """ Create a one-hot vector from a class index or a list of class indices.
- Params:
- int_or_list: int, or list of int, of the imagenet classes (between 0 and 999)
- batch_size: batch size.
- If int_or_list is an int create a batch of identical classes.
- If int_or_list is a list, we should have `len(int_or_list) == batch_size`
- Output:
- array of shape (batch_size, 1000)
- """
- if isinstance(int_or_list, int):
- int_or_list = [int_or_list]
-
- if len(int_or_list) == 1 and batch_size > 1:
- int_or_list = [int_or_list[0]] * batch_size
-
- assert batch_size == len(int_or_list)
-
- array = np.zeros((batch_size, NUM_CLASSES), dtype=np.float32)
- for i, j in enumerate(int_or_list):
- array[i, j] = 1.0
- return array
-
-
-def one_hot_from_names(class_name_or_list, batch_size=1):
- """ Create a one-hot vector from the name of an imagenet class ('tennis ball', 'daisy', ...).
- We use NLTK's wordnet search to try to find the relevant synset of ImageNet and take the first one.
- If we can't find it directly, we look at the hyponyms and hypernyms of the class name.
-
- Params:
- class_name_or_list: string containing the name of an imagenet object or a list of such strings (for a batch).
- Output:
- array of shape (batch_size, 1000)
- """
- try:
- from nltk.corpus import wordnet as wn
- except ImportError:
- raise ImportError("You need to install nltk to use this function")
-
- if not isinstance(class_name_or_list, (list, tuple)):
- class_name_or_list = [class_name_or_list]
- else:
- batch_size = max(batch_size, len(class_name_or_list))
-
- classes = []
- for class_name in class_name_or_list:
- class_name = class_name.replace(" ", "_")
-
- original_synsets = wn.synsets(class_name)
- original_synsets = list(filter(lambda s: s.pos() == 'n', original_synsets)) # keep only nouns
- if not original_synsets:
- return None
-
- possible_synsets = list(filter(lambda s: s.offset() in IMAGENET, original_synsets))
- if possible_synsets:
- classes.append(IMAGENET[possible_synsets[0].offset()])
- else:
- # try hypernyms and hyponyms
- possible_synsets = sum([s.hypernyms() + s.hyponyms() for s in original_synsets], [])
- possible_synsets = list(filter(lambda s: s.offset() in IMAGENET, possible_synsets))
- if possible_synsets:
- classes.append(IMAGENET[possible_synsets[0].offset()])
-
- return one_hot_from_int(classes, batch_size=batch_size)
-
-
-IMAGENET = {1440764: 0, 1443537: 1, 1484850: 2, 1491361: 3, 1494475: 4, 1496331: 5, 1498041: 6, 1514668: 7, 1514859: 8, 1518878: 9, 1530575: 10, 1531178: 11, 1532829: 12, 1534433: 13, 1537544: 14, 1558993: 15, 1560419: 16, 1580077: 17, 1582220: 18, 1592084: 19, 1601694: 20, 1608432: 21, 1614925: 22, 1616318: 23, 1622779: 24, 1629819: 25, 1630670: 26, 1631663: 27, 1632458: 28, 1632777: 29, 1641577: 30, 1644373: 31, 1644900: 32, 1664065: 33, 1665541: 34, 1667114: 35, 1667778: 36, 1669191: 37, 1675722: 38, 1677366: 39, 1682714: 40, 1685808: 41, 1687978: 42, 1688243: 43, 1689811: 44, 1692333: 45, 1693334: 46, 1694178: 47, 1695060: 48, 1697457: 49, 1698640: 50, 1704323: 51, 1728572: 52, 1728920: 53, 1729322: 54, 1729977: 55, 1734418: 56, 1735189: 57, 1737021: 58, 1739381: 59, 1740131: 60, 1742172: 61, 1744401: 62, 1748264: 63, 1749939: 64, 1751748: 65, 1753488: 66, 1755581: 67, 1756291: 68, 1768244: 69, 1770081: 70, 1770393: 71, 1773157: 72, 1773549: 73, 1773797: 74, 1774384: 75, 1774750: 76, 1775062: 77, 1776313: 78, 1784675: 79, 1795545: 80, 1796340: 81, 1797886: 82, 1798484: 83, 1806143: 84, 1806567: 85, 1807496: 86, 1817953: 87, 1818515: 88, 1819313: 89, 1820546: 90, 1824575: 91, 1828970: 92, 1829413: 93, 1833805: 94, 1843065: 95, 1843383: 96, 1847000: 97, 1855032: 98, 1855672: 99, 1860187: 100, 1871265: 101, 1872401: 102, 1873310: 103, 1877812: 104, 1882714: 105, 1883070: 106, 1910747: 107, 1914609: 108, 1917289: 109, 1924916: 110, 1930112: 111, 1943899: 112, 1944390: 113, 1945685: 114, 1950731: 115, 1955084: 116, 1968897: 117, 1978287: 118, 1978455: 119, 1980166: 120, 1981276: 121, 1983481: 122, 1984695: 123, 1985128: 124, 1986214: 125, 1990800: 126, 2002556: 127, 2002724: 128, 2006656: 129, 2007558: 130, 2009229: 131, 2009912: 132, 2011460: 133, 2012849: 134, 2013706: 135, 2017213: 136, 2018207: 137, 2018795: 138, 2025239: 139, 2027492: 140, 2028035: 141, 2033041: 142, 2037110: 143, 2051845: 144, 2056570: 145, 2058221: 146, 2066245: 147, 2071294: 148, 2074367: 149, 2077923: 150, 2085620: 151, 2085782: 152, 2085936: 153, 2086079: 154, 2086240: 155, 2086646: 156, 2086910: 157, 2087046: 158, 2087394: 159, 2088094: 160, 2088238: 161, 2088364: 162, 2088466: 163, 2088632: 164, 2089078: 165, 2089867: 166, 2089973: 167, 2090379: 168, 2090622: 169, 2090721: 170, 2091032: 171, 2091134: 172, 2091244: 173, 2091467: 174, 2091635: 175, 2091831: 176, 2092002: 177, 2092339: 178, 2093256: 179, 2093428: 180, 2093647: 181, 2093754: 182, 2093859: 183, 2093991: 184, 2094114: 185, 2094258: 186, 2094433: 187, 2095314: 188, 2095570: 189, 2095889: 190, 2096051: 191, 2096177: 192, 2096294: 193, 2096437: 194, 2096585: 195, 2097047: 196, 2097130: 197, 2097209: 198, 2097298: 199, 2097474: 200, 2097658: 201, 2098105: 202, 2098286: 203, 2098413: 204, 2099267: 205, 2099429: 206, 2099601: 207, 2099712: 208, 2099849: 209, 2100236: 210, 2100583: 211, 2100735: 212, 2100877: 213, 2101006: 214, 2101388: 215, 2101556: 216, 2102040: 217, 2102177: 218, 2102318: 219, 2102480: 220, 2102973: 221, 2104029: 222, 2104365: 223, 2105056: 224, 2105162: 225, 2105251: 226, 2105412: 227, 2105505: 228, 2105641: 229, 2105855: 230, 2106030: 231, 2106166: 232, 2106382: 233, 2106550: 234, 2106662: 235, 2107142: 236, 2107312: 237, 2107574: 238, 2107683: 239, 2107908: 240, 2108000: 241, 2108089: 242, 2108422: 243, 2108551: 244, 2108915: 245, 2109047: 246, 2109525: 247, 2109961: 248, 2110063: 249, 2110185: 250, 2110341: 251, 2110627: 252, 2110806: 253, 2110958: 254, 2111129: 255, 2111277: 256, 2111500: 257, 2111889: 258, 2112018: 259, 2112137: 
260, 2112350: 261, 2112706: 262, 2113023: 263, 2113186: 264, 2113624: 265, 2113712: 266, 2113799: 267, 2113978: 268, 2114367: 269, 2114548: 270, 2114712: 271, 2114855: 272, 2115641: 273, 2115913: 274, 2116738: 275, 2117135: 276, 2119022: 277, 2119789: 278, 2120079: 279, 2120505: 280, 2123045: 281, 2123159: 282, 2123394: 283, 2123597: 284, 2124075: 285, 2125311: 286, 2127052: 287, 2128385: 288, 2128757: 289, 2128925: 290, 2129165: 291, 2129604: 292, 2130308: 293, 2132136: 294, 2133161: 295, 2134084: 296, 2134418: 297, 2137549: 298, 2138441: 299, 2165105: 300, 2165456: 301, 2167151: 302, 2168699: 303, 2169497: 304, 2172182: 305, 2174001: 306, 2177972: 307, 2190166: 308, 2206856: 309, 2219486: 310, 2226429: 311, 2229544: 312, 2231487: 313, 2233338: 314, 2236044: 315, 2256656: 316, 2259212: 317, 2264363: 318, 2268443: 319, 2268853: 320, 2276258: 321, 2277742: 322, 2279972: 323, 2280649: 324, 2281406: 325, 2281787: 326, 2317335: 327, 2319095: 328, 2321529: 329, 2325366: 330, 2326432: 331, 2328150: 332, 2342885: 333, 2346627: 334, 2356798: 335, 2361337: 336, 2363005: 337, 2364673: 338, 2389026: 339, 2391049: 340, 2395406: 341, 2396427: 342, 2397096: 343, 2398521: 344, 2403003: 345, 2408429: 346, 2410509: 347, 2412080: 348, 2415577: 349, 2417914: 350, 2422106: 351, 2422699: 352, 2423022: 353, 2437312: 354, 2437616: 355, 2441942: 356, 2442845: 357, 2443114: 358, 2443484: 359, 2444819: 360, 2445715: 361, 2447366: 362, 2454379: 363, 2457408: 364, 2480495: 365, 2480855: 366, 2481823: 367, 2483362: 368, 2483708: 369, 2484975: 370, 2486261: 371, 2486410: 372, 2487347: 373, 2488291: 374, 2488702: 375, 2489166: 376, 2490219: 377, 2492035: 378, 2492660: 379, 2493509: 380, 2493793: 381, 2494079: 382, 2497673: 383, 2500267: 384, 2504013: 385, 2504458: 386, 2509815: 387, 2510455: 388, 2514041: 389, 2526121: 390, 2536864: 391, 2606052: 392, 2607072: 393, 2640242: 394, 2641379: 395, 2643566: 396, 2655020: 397, 2666196: 398, 2667093: 399, 2669723: 400, 2672831: 401, 2676566: 402, 2687172: 403, 2690373: 404, 2692877: 405, 2699494: 406, 2701002: 407, 2704792: 408, 2708093: 409, 2727426: 410, 2730930: 411, 2747177: 412, 2749479: 413, 2769748: 414, 2776631: 415, 2777292: 416, 2782093: 417, 2783161: 418, 2786058: 419, 2787622: 420, 2788148: 421, 2790996: 422, 2791124: 423, 2791270: 424, 2793495: 425, 2794156: 426, 2795169: 427, 2797295: 428, 2799071: 429, 2802426: 430, 2804414: 431, 2804610: 432, 2807133: 433, 2808304: 434, 2808440: 435, 2814533: 436, 2814860: 437, 2815834: 438, 2817516: 439, 2823428: 440, 2823750: 441, 2825657: 442, 2834397: 443, 2835271: 444, 2837789: 445, 2840245: 446, 2841315: 447, 2843684: 448, 2859443: 449, 2860847: 450, 2865351: 451, 2869837: 452, 2870880: 453, 2871525: 454, 2877765: 455, 2879718: 456, 2883205: 457, 2892201: 458, 2892767: 459, 2894605: 460, 2895154: 461, 2906734: 462, 2909870: 463, 2910353: 464, 2916936: 465, 2917067: 466, 2927161: 467, 2930766: 468, 2939185: 469, 2948072: 470, 2950826: 471, 2951358: 472, 2951585: 473, 2963159: 474, 2965783: 475, 2966193: 476, 2966687: 477, 2971356: 478, 2974003: 479, 2977058: 480, 2978881: 481, 2979186: 482, 2980441: 483, 2981792: 484, 2988304: 485, 2992211: 486, 2992529: 487, 2999410: 488, 3000134: 489, 3000247: 490, 3000684: 491, 3014705: 492, 3016953: 493, 3017168: 494, 3018349: 495, 3026506: 496, 3028079: 497, 3032252: 498, 3041632: 499, 3042490: 500, 3045698: 501, 3047690: 502, 3062245: 503, 3063599: 504, 3063689: 505, 3065424: 506, 3075370: 507, 3085013: 508, 3089624: 509, 3095699: 510, 3100240: 511, 3109150: 512, 3110669: 513, 
3124043: 514, 3124170: 515, 3125729: 516, 3126707: 517, 3127747: 518, 3127925: 519, 3131574: 520, 3133878: 521, 3134739: 522, 3141823: 523, 3146219: 524, 3160309: 525, 3179701: 526, 3180011: 527, 3187595: 528, 3188531: 529, 3196217: 530, 3197337: 531, 3201208: 532, 3207743: 533, 3207941: 534, 3208938: 535, 3216828: 536, 3218198: 537, 3220513: 538, 3223299: 539, 3240683: 540, 3249569: 541, 3250847: 542, 3255030: 543, 3259280: 544, 3271574: 545, 3272010: 546, 3272562: 547, 3290653: 548, 3291819: 549, 3297495: 550, 3314780: 551, 3325584: 552, 3337140: 553, 3344393: 554, 3345487: 555, 3347037: 556, 3355925: 557, 3372029: 558, 3376595: 559, 3379051: 560, 3384352: 561, 3388043: 562, 3388183: 563, 3388549: 564, 3393912: 565, 3394916: 566, 3400231: 567, 3404251: 568, 3417042: 569, 3424325: 570, 3425413: 571, 3443371: 572, 3444034: 573, 3445777: 574, 3445924: 575, 3447447: 576, 3447721: 577, 3450230: 578, 3452741: 579, 3457902: 580, 3459775: 581, 3461385: 582, 3467068: 583, 3476684: 584, 3476991: 585, 3478589: 586, 3481172: 587, 3482405: 588, 3483316: 589, 3485407: 590, 3485794: 591, 3492542: 592, 3494278: 593, 3495258: 594, 3496892: 595, 3498962: 596, 3527444: 597, 3529860: 598, 3530642: 599, 3532672: 600, 3534580: 601, 3535780: 602, 3538406: 603, 3544143: 604, 3584254: 605, 3584829: 606, 3590841: 607, 3594734: 608, 3594945: 609, 3595614: 610, 3598930: 611, 3599486: 612, 3602883: 613, 3617480: 614, 3623198: 615, 3627232: 616, 3630383: 617, 3633091: 618, 3637318: 619, 3642806: 620, 3649909: 621, 3657121: 622, 3658185: 623, 3661043: 624, 3662601: 625, 3666591: 626, 3670208: 627, 3673027: 628, 3676483: 629, 3680355: 630, 3690938: 631, 3691459: 632, 3692522: 633, 3697007: 634, 3706229: 635, 3709823: 636, 3710193: 637, 3710637: 638, 3710721: 639, 3717622: 640, 3720891: 641, 3721384: 642, 3724870: 643, 3729826: 644, 3733131: 645, 3733281: 646, 3733805: 647, 3742115: 648, 3743016: 649, 3759954: 650, 3761084: 651, 3763968: 652, 3764736: 653, 3769881: 654, 3770439: 655, 3770679: 656, 3773504: 657, 3775071: 658, 3775546: 659, 3776460: 660, 3777568: 661, 3777754: 662, 3781244: 663, 3782006: 664, 3785016: 665, 3786901: 666, 3787032: 667, 3788195: 668, 3788365: 669, 3791053: 670, 3792782: 671, 3792972: 672, 3793489: 673, 3794056: 674, 3796401: 675, 3803284: 676, 3804744: 677, 3814639: 678, 3814906: 679, 3825788: 680, 3832673: 681, 3837869: 682, 3838899: 683, 3840681: 684, 3841143: 685, 3843555: 686, 3854065: 687, 3857828: 688, 3866082: 689, 3868242: 690, 3868863: 691, 3871628: 692, 3873416: 693, 3874293: 694, 3874599: 695, 3876231: 696, 3877472: 697, 3877845: 698, 3884397: 699, 3887697: 700, 3888257: 701, 3888605: 702, 3891251: 703, 3891332: 704, 3895866: 705, 3899768: 706, 3902125: 707, 3903868: 708, 3908618: 709, 3908714: 710, 3916031: 711, 3920288: 712, 3924679: 713, 3929660: 714, 3929855: 715, 3930313: 716, 3930630: 717, 3933933: 718, 3935335: 719, 3937543: 720, 3938244: 721, 3942813: 722, 3944341: 723, 3947888: 724, 3950228: 725, 3954731: 726, 3956157: 727, 3958227: 728, 3961711: 729, 3967562: 730, 3970156: 731, 3976467: 732, 3976657: 733, 3977966: 734, 3980874: 735, 3982430: 736, 3983396: 737, 3991062: 738, 3992509: 739, 3995372: 740, 3998194: 741, 4004767: 742, 4005630: 743, 4008634: 744, 4009552: 745, 4019541: 746, 4023962: 747, 4026417: 748, 4033901: 749, 4033995: 750, 4037443: 751, 4039381: 752, 4040759: 753, 4041544: 754, 4044716: 755, 4049303: 756, 4065272: 757, 4067472: 758, 4069434: 759, 4070727: 760, 4074963: 761, 4081281: 762, 4086273: 763, 4090263: 764, 4099969: 765, 4111531: 766, 4116512: 
767, 4118538: 768, 4118776: 769, 4120489: 770, 4125021: 771, 4127249: 772, 4131690: 773, 4133789: 774, 4136333: 775, 4141076: 776, 4141327: 777, 4141975: 778, 4146614: 779, 4147183: 780, 4149813: 781, 4152593: 782, 4153751: 783, 4154565: 784, 4162706: 785, 4179913: 786, 4192698: 787, 4200800: 788, 4201297: 789, 4204238: 790, 4204347: 791, 4208210: 792, 4209133: 793, 4209239: 794, 4228054: 795, 4229816: 796, 4235860: 797, 4238763: 798, 4239074: 799, 4243546: 800, 4251144: 801, 4252077: 802, 4252225: 803, 4254120: 804, 4254680: 805, 4254777: 806, 4258138: 807, 4259630: 808, 4263257: 809, 4264628: 810, 4265275: 811, 4266014: 812, 4270147: 813, 4273569: 814, 4275548: 815, 4277352: 816, 4285008: 817, 4286575: 818, 4296562: 819, 4310018: 820, 4311004: 821, 4311174: 822, 4317175: 823, 4325704: 824, 4326547: 825, 4328186: 826, 4330267: 827, 4332243: 828, 4335435: 829, 4336792: 830, 4344873: 831, 4346328: 832, 4347754: 833, 4350905: 834, 4355338: 835, 4355933: 836, 4356056: 837, 4357314: 838, 4366367: 839, 4367480: 840, 4370456: 841, 4371430: 842, 4371774: 843, 4372370: 844, 4376876: 845, 4380533: 846, 4389033: 847, 4392985: 848, 4398044: 849, 4399382: 850, 4404412: 851, 4409515: 852, 4417672: 853, 4418357: 854, 4423845: 855, 4428191: 856, 4429376: 857, 4435653: 858, 4442312: 859, 4443257: 860, 4447861: 861, 4456115: 862, 4458633: 863, 4461696: 864, 4462240: 865, 4465501: 866, 4467665: 867, 4476259: 868, 4479046: 869, 4482393: 870, 4483307: 871, 4485082: 872, 4486054: 873, 4487081: 874, 4487394: 875, 4493381: 876, 4501370: 877, 4505470: 878, 4507155: 879, 4509417: 880, 4515003: 881, 4517823: 882, 4522168: 883, 4523525: 884, 4525038: 885, 4525305: 886, 4532106: 887, 4532670: 888, 4536866: 889, 4540053: 890, 4542943: 891, 4548280: 892, 4548362: 893, 4550184: 894, 4552348: 895, 4553703: 896, 4554684: 897, 4557648: 898, 4560804: 899, 4562935: 900, 4579145: 901, 4579432: 902, 4584207: 903, 4589890: 904, 4590129: 905, 4591157: 906, 4591713: 907, 4592741: 908, 4596742: 909, 4597913: 910, 4599235: 911, 4604644: 912, 4606251: 913, 4612504: 914, 4613696: 915, 6359193: 916, 6596364: 917, 6785654: 918, 6794110: 919, 6874185: 920, 7248320: 921, 7565083: 922, 7579787: 923, 7583066: 924, 7584110: 925, 7590611: 926, 7613480: 927, 7614500: 928, 7615774: 929, 7684084: 930, 7693725: 931, 7695742: 932, 7697313: 933, 7697537: 934, 7711569: 935, 7714571: 936, 7714990: 937, 7715103: 938, 7716358: 939, 7716906: 940, 7717410: 941, 7717556: 942, 7718472: 943, 7718747: 944, 7720875: 945, 7730033: 946, 7734744: 947, 7742313: 948, 7745940: 949, 7747607: 950, 7749582: 951, 7753113: 952, 7753275: 953, 7753592: 954, 7754684: 955, 7760859: 956, 7768694: 957, 7802026: 958, 7831146: 959, 7836838: 960, 7860988: 961, 7871810: 962, 7873807: 963, 7875152: 964, 7880968: 965, 7892512: 966, 7920052: 967, 7930864: 968, 7932039: 969, 9193705: 970, 9229709: 971, 9246464: 972, 9256479: 973, 9288635: 974, 9332890: 975, 9399592: 976, 9421951: 977, 9428293: 978, 9468604: 979, 9472597: 980, 9835506: 981, 10148035: 982, 10565667: 983, 11879895: 984, 11939491: 985, 12057211: 986, 12144580: 987, 12267677: 988, 12620546: 989, 12768682: 990, 12985857: 991, 12998815: 992, 13037406: 993, 13040303: 994, 13044778: 995, 13052670: 996, 13054560: 997, 13133613: 998, 15075141: 999}
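Assuming these helpers are importable from the utils module above (the import path below is a guess, and one_hot_from_names additionally requires nltk with the WordNet corpus installed), a minimal sketch of preparing BigGAN conditioning inputs from a class name:

import numpy as np
# from networks.biggan.utils import truncated_noise_sample, one_hot_from_names  # hypothetical import path

truncation = 0.4
noise = truncated_noise_sample(batch_size=2, dim_z=128, truncation=truncation, seed=0)
class_vec = one_hot_from_names('golden retriever', batch_size=2)  # None if no matching synset is found

print(noise.shape)       # (2, 128), truncated normal samples scaled by 0.4
print(class_vec.shape)   # (2, 1000), one-hot rows for the matched ImageNet class
# These two arrays (plus the truncation value) are what the BigGAN generator expects as input.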
diff --git a/spaces/jamessteele/ChatbotBlenderbot-GR/app.py b/spaces/jamessteele/ChatbotBlenderbot-GR/app.py
deleted file mode 100644
index ca545aad434176426ca5ee2190b8e753d46a10df..0000000000000000000000000000000000000000
--- a/spaces/jamessteele/ChatbotBlenderbot-GR/app.py
+++ /dev/null
@@ -1,134 +0,0 @@
-from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
-import torch
-import gradio as gr
-
-
-# PersistDataset -----
-import os
-import csv
-import gradio as gr
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-
-
-# -------------------------------------------- For Memory - you will need to set up a dataset and HF_TOKEN ---------
-#DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ChatbotMemory.csv"
-#DATASET_REPO_ID = "awacke1/ChatbotMemory.csv"
-#DATA_FILENAME = "ChatbotMemory.csv"
-#DATA_FILE = os.path.join("data", DATA_FILENAME)
-#HF_TOKEN = os.environ.get("HF_TOKEN")
-
-#SCRIPT = """
-#
-#"""
-
-#try:
-# hf_hub_download(
-# repo_id=DATASET_REPO_ID,
-# filename=DATA_FILENAME,
-# cache_dir=DATA_DIRNAME,
-# force_filename=DATA_FILENAME
-# )
-#except:
-# print("file not found")
-#repo = Repository(
-# local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
-#)
-
-#def store_message(name: str, message: str):
-# if name and message:
-# with open(DATA_FILE, "a") as csvfile:
-# writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"])
-# writer.writerow(
-# {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())}
-# )
- # uncomment the line below to begin saving. If you create your own copy, add an access token called "HF_TOKEN" to your profile, then create a secret for your repo named "HF_TOKEN" holding that token. For the CSV, copy the header and first few lines into your own file, then update the paths above so saving goes to your own dataset repository.
-# commit_url = repo.push_to_hub()
-# return ""
-
-#iface = gr.Interface(
-# store_message,
-# [
-# inputs.Textbox(placeholder="Your name"),
-# inputs.Textbox(placeholder="Your message", lines=2),
-# ],
-# "html",
-# css="""
-# .message {background-color:cornflowerblue;color:white; padding:4px;margin:4px;border-radius:4px; }
-# """,
-# title="Reading/writing to a HuggingFace dataset repo from Spaces",
-# description=f"This is a demo of how to do simple *shared data persistence* in a Gradio Space, backed by a dataset repo.",
-# article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})",
-#)
-# --------------------------------------------------- For Memory
-
-mname = "facebook/blenderbot-400M-distill"
-model = BlenderbotForConditionalGeneration.from_pretrained(mname)
-tokenizer = BlenderbotTokenizer.from_pretrained(mname)
-
-def take_last_tokens(inputs, note_history, history):
- """Filter the last 128 tokens"""
- if inputs['input_ids'].shape[1] > 128:
- inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-128:].tolist()])
- inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-128:].tolist()])
- note_history = ['</s> <s>'.join(note_history[0].split('</s> <s>')[2:])]
- history = history[1:]
- return inputs, note_history, history
-
-def add_note_to_history(note, note_history):
- """Add a note to the historical information"""
- note_history.append(note)
- note_history = '</s> <s>'.join(note_history)
- return [note_history]
-
-title = "State of the Art Chatbot with Memory Dataset"
-description = """Chatbot With Memory"""
-
-def chat(message, history):
- history = history or []
- if history:
- history_useful = ['</s> <s>'.join([str(a[0])+'</s> <s>'+str(a[1]) for a in history])]  # '</s> <s>' is Blenderbot's turn separator
- else:
- history_useful = []
- history_useful = add_note_to_history(message, history_useful)
- inputs = tokenizer(history_useful, return_tensors="pt")
- inputs, history_useful, history = take_last_tokens(inputs, history_useful, history)
- reply_ids = model.generate(**inputs)
- response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
- history_useful = add_note_to_history(response, history_useful)
- list_history = history_useful[0].split('</s> <s>')
- history.append((list_history[-2], list_history[-1]))
-# store_message(message, response) # Save to dataset -- uncomment with code above, create a dataset to store and add your HF_TOKEN from profile to this repo to use.
- return history, history
-
-gr.Interface(
- fn=chat,
- theme="huggingface",
- css=".footer {display:none !important}",
- inputs=["text", "state"],
- outputs=["chatbot", "state"],
- title=title,
- allow_flagging="never",
- description=f"Gradio chatbot backed by memory in a dataset repository.",
-# article=f"The dataset repo is [{DATASET_REPO_URL}]({DATASET_REPO_URL})"
- ).launch(debug=True)
-
-#demo = gr.Blocks()
-#with demo:
-# audio_file = gr.inputs.Audio(source="microphone", type="filepath")
-# text = gr.Textbox(label="Speech to Text")
-# TTSchoice = gr.inputs.Radio( label="Pick a Text to Speech Model", choices=MODEL_NAMES, )
-# audio = gr.Audio(label="Output", interactive=False)
-# b1 = gr.Button("Recognize Speech")
-# b5 = gr.Button("Read It Back Aloud")
-# b1.click(speech_to_text, inputs=audio_file, outputs=text)
-# b5.click(tts, inputs=[text,TTSchoice], outputs=audio)
-#demo.launch(share=True)
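Note on the memory handling above: every turn is packed into a single string joined by a separator token before re-tokenization; the separator in the deleted file appears to have been stripped during extraction and is assumed here to be BlenderBot's '</s> <s>' turn delimiter. A minimal, self-contained sketch of that round-trip (hypothetical helper name, no Gradio or dataset persistence):

from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

SEP = "</s> <s>"  # assumed BlenderBot turn delimiter (stripped from the file above during extraction)
mname = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
model = BlenderbotForConditionalGeneration.from_pretrained(mname)

def reply(message, turns):
    """Append the user message, keep only the most recent 128 tokens, and generate a reply."""
    turns = turns + [message]
    inputs = tokenizer([SEP.join(turns)], return_tensors="pt")
    if inputs["input_ids"].shape[1] > 128:  # same truncation rule as take_last_tokens() above
        inputs["input_ids"] = inputs["input_ids"][:, -128:]
        inputs["attention_mask"] = inputs["attention_mask"][:, -128:]
    reply_ids = model.generate(**inputs)
    answer = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
    return answer, turns + [answer]

# answer, turns = reply("Hello there!", [])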
diff --git a/spaces/jdczlx/ChatGPT-chuanhu/modules/openai_func.py b/spaces/jdczlx/ChatGPT-chuanhu/modules/openai_func.py
deleted file mode 100644
index 284311bb11906e4bb5516cfcabf90bef4ec09b12..0000000000000000000000000000000000000000
--- a/spaces/jdczlx/ChatGPT-chuanhu/modules/openai_func.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import requests
-import logging
-from modules.presets import timeout_all, BALANCE_API_URL, standard_error_msg, connection_timeout_prompt, error_retrieve_prompt, read_timeout_prompt
-from modules import shared
-import os
-
-
-def get_usage_response(openai_api_key):
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- timeout = timeout_all
-
-    # Read the proxy settings from environment variables
- http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
- https_proxy = os.environ.get(
- "HTTPS_PROXY") or os.environ.get("https_proxy")
-
-    # If proxy settings exist, use them
- proxies = {}
- if http_proxy:
- logging.info(f"使用 HTTP 代理: {http_proxy}")
- proxies["http"] = http_proxy
- if https_proxy:
- logging.info(f"使用 HTTPS 代理: {https_proxy}")
- proxies["https"] = https_proxy
-
-    # If a proxy is configured, send the request through it; otherwise use the default settings
-    """
-    Customizing this is not supported yet
-    if shared.state.balance_api_url != BALANCE_API_URL:
-        logging.info(f"Using custom BALANCE API URL: {shared.state.balance_api_url}")
-    """
- if proxies:
- response = requests.get(
- BALANCE_API_URL,
- headers=headers,
- timeout=timeout,
- proxies=proxies,
- )
- else:
- response = requests.get(
- BALANCE_API_URL,
- headers=headers,
- timeout=timeout,
- )
- return response
-
-def get_usage(openai_api_key):
- try:
-        response = get_usage_response(openai_api_key=openai_api_key)
-        logging.debug(response.json())
-        try:
-            usage = response.json()
-            balance = usage.get("total_available") or 0
-            total_used = usage.get("total_used") or 0
-        except Exception as e:
-            logging.error("Failed to parse API usage info: " + str(e))
-            balance = 0
-            total_used = 0
-        return f"**API usage** (used / balance)\u3000{total_used}$ / {balance}$"
- except requests.exceptions.ConnectTimeout:
- status_text = standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt
- return status_text
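A side note on the proxy branch above: requests accepts proxies=None, so the two get() calls could be collapsed into one. A minimal sketch of that simplification (parameter names here are placeholders, not part of the deleted module):

import os
import requests

def get_usage_response(openai_api_key, balance_api_url, timeout):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {openai_api_key}",
    }
    # Keep only the proxy variables that are actually set; an empty dict
    # (passed as None) lets requests fall back to its default behaviour.
    proxies = {
        scheme: url
        for scheme, url in (
            ("http", os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")),
            ("https", os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy")),
        )
        if url
    }
    return requests.get(balance_api_url, headers=headers, timeout=timeout, proxies=proxies or None)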
diff --git a/spaces/jeonsworld/whisper-medium-ko/app.py b/spaces/jeonsworld/whisper-medium-ko/app.py
deleted file mode 100644
index cd9e84e72478244f2d237fb27f02a8202f03b635..0000000000000000000000000000000000000000
--- a/spaces/jeonsworld/whisper-medium-ko/app.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import os
-
-os.system("pip install git+https://github.com/openai/whisper.git")
-import gradio as gr
-import whisper
-
-model = whisper.load_model("medium")
-
-def inference(audio):
- audio = whisper.load_audio(audio)
- audio = whisper.pad_or_trim(audio)
-
- mel = whisper.log_mel_spectrogram(audio).to(model.device)
-
- # _, probs = model.detect_language(mel)
- options = dict(language="Korean", beam_size=5)
- transcribe_options = whisper.DecodingOptions(fp16=False, task="transcribe", **options)
- result = whisper.decode(model, mel, transcribe_options)
- # print(result.text)
- return result.text
-
-
-title = "Whisper"
-
-description = "Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification."
-
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
-
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .prompt h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
-"""
-
-block = gr.Blocks(css=css)
-
-with block:
- gr.HTML(
- """
-        Whisper
-        Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(mobile_collapse=False, equal_height=True):
- audio = gr.Audio(
- label="Input Audio",
- show_label=False,
- source="microphone",
- type="filepath"
- )
-
- btn = gr.Button("Transcribe!")
- text = gr.Textbox(show_label=False)
-
- btn.click(inference, inputs=[audio], outputs=[text])
-
- gr.HTML('''
-
- ''')
-
-block.launch()
\ No newline at end of file
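For reference, the manual load_audio / pad_or_trim / log_mel_spectrogram / decode pipeline in the deleted app can also be expressed through Whisper's higher-level transcribe() helper; a brief sketch (behaviour is close but not identical, e.g. transcribe() also handles audio longer than 30 seconds):

import whisper

model = whisper.load_model("medium")

def inference(audio_path):
    # transcribe() wraps audio loading, 30-second windowing, mel extraction and decoding
    result = model.transcribe(audio_path, language="ko", beam_size=5, fp16=False)
    return result["text"]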
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/evaluator.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/evaluator.py
deleted file mode 100644
index aa9e80402633c08a580929b38a5cb695cb7171d8..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/evaluator.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import logging
-import math
-from typing import Dict
-
-import numpy as np
-import torch
-import torch.nn as nn
-import tqdm
-from torch.utils.data import DataLoader
-
-from saicinpainting.evaluation.utils import move_to_device
-
-LOGGER = logging.getLogger(__name__)
-
-
-class InpaintingEvaluator():
- def __init__(self, dataset, scores, area_grouping=True, bins=10, batch_size=32, device='cuda',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param dataset: torch.utils.data.Dataset which contains images and masks
- :param scores: dict {score_name: EvaluatorScore object}
-        :param area_grouping: in addition to the overall scores, compute scores for groups of samples
-            defined by the share of the area occluded by the mask
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
- :param batch_size: batch_size for the dataloader
- :param device: device to use
- """
- self.scores = scores
- self.dataset = dataset
-
- self.area_grouping = area_grouping
- self.bins = bins
-
- self.device = torch.device(device)
-
- self.dataloader = DataLoader(self.dataset, shuffle=False, batch_size=batch_size)
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- def _get_bin_edges(self):
- bin_edges = np.linspace(0, 1, self.bins + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins)) - 1)
- interval_names = []
- for idx_bin in range(self.bins):
- start_percent, end_percent = round(100 * bin_edges[idx_bin], num_digits), \
- round(100 * bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- groups = []
- for batch in self.dataloader:
- mask = batch['mask']
- batch_size = mask.shape[0]
- area = mask.to(self.device).reshape(batch_size, -1).mean(dim=-1)
- bin_indices = np.searchsorted(bin_edges, area.detach().cpu().numpy(), side='right') - 1
- # corner case: when area is equal to 1, bin_indices should return bins - 1, not bins for that element
- bin_indices[bin_indices == self.bins] = self.bins - 1
- groups.append(bin_indices)
- groups = np.hstack(groups)
-
- return groups, interval_names
-
- def evaluate(self, model=None):
- """
- :param model: callable with signature (image_batch, mask_batch); should return inpainted_batch
- :return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- results = dict()
- if self.area_grouping:
- groups, interval_names = self._get_bin_edges()
- else:
- groups = None
-
- for score_name, score in tqdm.auto.tqdm(self.scores.items(), desc='scores'):
- score.to(self.device)
- with torch.no_grad():
- score.reset()
- for batch in tqdm.auto.tqdm(self.dataloader, desc=score_name, leave=False):
- batch = move_to_device(batch, self.device)
- image_batch, mask_batch = batch['image'], batch['mask']
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- if model is None:
- assert 'inpainted' in batch, \
- 'Model is None, so we expected precomputed inpainting results at key "inpainted"'
- inpainted_batch = batch['inpainted']
- else:
- inpainted_batch = model(image_batch, mask_batch)
- score(inpainted_batch, image_batch, mask_batch)
- total_results, group_results = score.get_value(groups=groups)
-
- results[(score_name, 'total')] = total_results
- if groups is not None:
- for group_index, group_values in group_results.items():
- group_name = interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- return results
-
-
-def ssim_fid100_f1(metrics, fid_scale=100):
- ssim = metrics[('ssim', 'total')]['mean']
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * ssim * fid_rel / (ssim + fid_rel + 1e-3)
- return f1
-
-
-def lpips_fid100_f1(metrics, fid_scale=100):
- neg_lpips = 1 - metrics[('lpips', 'total')]['mean'] # invert, so bigger is better
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * neg_lpips * fid_rel / (neg_lpips + fid_rel + 1e-3)
- return f1
-
-
-
-class InpaintingEvaluatorOnline(nn.Module):
- def __init__(self, scores, bins=10, image_key='image', inpainted_key='inpainted',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param scores: dict {score_name: EvaluatorScore object}
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
- :param device: device to use
- """
- super().__init__()
- LOGGER.info(f'{type(self)} init called')
- self.scores = nn.ModuleDict(scores)
- self.image_key = image_key
- self.inpainted_key = inpainted_key
- self.bins_num = bins
- self.bin_edges = np.linspace(0, 1, self.bins_num + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins_num)) - 1)
- self.interval_names = []
- for idx_bin in range(self.bins_num):
- start_percent, end_percent = round(100 * self.bin_edges[idx_bin], num_digits), \
- round(100 * self.bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- self.interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- self.groups = []
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- LOGGER.info(f'{type(self)} init done')
-
- def _get_bins(self, mask_batch):
- batch_size = mask_batch.shape[0]
- area = mask_batch.view(batch_size, -1).mean(dim=-1).detach().cpu().numpy()
- bin_indices = np.clip(np.searchsorted(self.bin_edges, area) - 1, 0, self.bins_num - 1)
- return bin_indices
-
- def forward(self, batch: Dict[str, torch.Tensor]):
- """
- Calculate and accumulate metrics for batch. To finalize evaluation and obtain final metrics, call evaluation_end
-        :param batch: batch dict with mandatory fields mask, image, inpainted (can be overridden by self.inpainted_key)
- """
- result = {}
- with torch.no_grad():
- image_batch, mask_batch, inpainted_batch = batch[self.image_key], batch['mask'], batch[self.inpainted_key]
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- self.groups.extend(self._get_bins(mask_batch))
-
- for score_name, score in self.scores.items():
- result[score_name] = score(inpainted_batch, image_batch, mask_batch)
- return result
-
- def process_batch(self, batch: Dict[str, torch.Tensor]):
- return self(batch)
-
- def evaluation_end(self, states=None):
- """:return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- LOGGER.info(f'{type(self)}: evaluation_end called')
-
- self.groups = np.array(self.groups)
-
- results = {}
- for score_name, score in self.scores.items():
- LOGGER.info(f'Getting value of {score_name}')
- cur_states = [s[score_name] for s in states] if states is not None else None
- total_results, group_results = score.get_value(groups=self.groups, states=cur_states)
- LOGGER.info(f'Getting value of {score_name} done')
- results[(score_name, 'total')] = total_results
-
- for group_index, group_values in group_results.items():
- group_name = self.interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- LOGGER.info(f'{type(self)}: reset scores')
- self.groups = []
- for sc in self.scores.values():
- sc.reset()
- LOGGER.info(f'{type(self)}: reset scores done')
-
- LOGGER.info(f'{type(self)}: evaluation_end done')
- return results
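A hedged usage sketch of the offline evaluator above; the dataset and score classes are placeholders (the real project wires these up from its own configs and EvaluatorScore implementations):

import torch
from torch.utils.data import Dataset

class ToyInpaintingDataset(Dataset):
    """Placeholder dataset: each item provides 'image', 'mask' and, because no model
    is passed to evaluate() below, a precomputed 'inpainted' result."""
    def __init__(self, n=8, size=64):
        self.items = [
            {
                "image": torch.rand(3, size, size),
                "mask": (torch.rand(1, size, size) > 0.8).float(),
                "inpainted": torch.rand(3, size, size),
            }
            for _ in range(n)
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

# scores would map names to the project's EvaluatorScore objects, e.g. {'ssim': SSIMScore(), 'fid': FIDScore()}
# evaluator = InpaintingEvaluator(ToyInpaintingDataset(), scores, batch_size=4, device='cpu')
# metrics = evaluator.evaluate()            # no model passed, so the precomputed 'inpainted' key is used
# print(metrics[('ssim', 'total')])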
diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/params_data.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/params_data.py
deleted file mode 100644
index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000
--- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/params_data.py
+++ /dev/null
@@ -1,29 +0,0 @@
-
-## Mel-filterbank
-mel_window_length = 25 # In milliseconds
-mel_window_step = 10 # In milliseconds
-mel_n_channels = 40
-
-
-## Audio
-sampling_rate = 16000
-# Number of spectrogram frames in a partial utterance
-partials_n_frames = 160 # 1600 ms
-# Number of spectrogram frames at inference
-inference_n_frames = 80 # 800 ms
-
-
-## Voice Activation Detection
-# Window size of the VAD. Must be either 10, 20 or 30 milliseconds.
-# This sets the granularity of the VAD. Should not need to be changed.
-vad_window_length = 30 # In milliseconds
-# Number of frames to average together when performing the moving average smoothing.
-# The larger this value, the larger the VAD variations must be to not get smoothed out.
-vad_moving_average_width = 8
-# Maximum number of consecutive silent frames a segment can have.
-vad_max_silence_length = 6
-
-
-## Audio volume normalization
-audio_norm_target_dBFS = -30
-
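The frame counts in params_data.py follow directly from the 10 ms mel window step; a quick check of the durations quoted in the comments:

mel_window_step = 10      # ms per spectrogram frame
partials_n_frames = 160
inference_n_frames = 80

assert partials_n_frames * mel_window_step == 1600   # matches the "1600 ms" comment
assert inference_n_frames * mel_window_step == 800   # matches the "800 ms" comment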
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/JpegImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/JpegImagePlugin.py
deleted file mode 100644
index dfc7e6e9f569e05e3a1f9e3fd1407b5f202a6d56..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/JpegImagePlugin.py
+++ /dev/null
@@ -1,849 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# JPEG (JFIF) file handling
-#
-# See "Digital Compression and Coding of Continuous-Tone Still Images,
-# Part 1, Requirements and Guidelines" (CCITT T.81 / ISO 10918-1)
-#
-# History:
-# 1995-09-09 fl Created
-# 1995-09-13 fl Added full parser
-# 1996-03-25 fl Added hack to use the IJG command line utilities
-# 1996-05-05 fl Workaround Photoshop 2.5 CMYK polarity bug
-# 1996-05-28 fl Added draft support, JFIF version (0.1)
-# 1996-12-30 fl Added encoder options, added progression property (0.2)
-# 1997-08-27 fl Save mode 1 images as BW (0.3)
-# 1998-07-12 fl Added YCbCr to draft and save methods (0.4)
-# 1998-10-19 fl Don't hang on files using 16-bit DQT's (0.4.1)
-# 2001-04-16 fl Extract DPI settings from JFIF files (0.4.2)
-# 2002-07-01 fl Skip pad bytes before markers; identify Exif files (0.4.3)
-# 2003-04-25 fl Added experimental EXIF decoder (0.5)
-# 2003-06-06 fl Added experimental EXIF GPSinfo decoder
-# 2003-09-13 fl Extract COM markers
-# 2009-09-06 fl Added icc_profile support (from Florian Hoech)
-# 2009-03-06 fl Changed CMYK handling; always use Adobe polarity (0.6)
-# 2009-03-08 fl Added subsampling support (from Justin Huff).
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1995-1996 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-import array
-import io
-import math
-import os
-import struct
-import subprocess
-import sys
-import tempfile
-import warnings
-
-from . import Image, ImageFile
-from ._binary import i16be as i16
-from ._binary import i32be as i32
-from ._binary import o8
-from ._binary import o16be as o16
-from .JpegPresets import presets
-
-#
-# Parser
-
-
-def Skip(self, marker):
- n = i16(self.fp.read(2)) - 2
- ImageFile._safe_read(self.fp, n)
-
-
-def APP(self, marker):
- #
- # Application marker. Store these in the APP dictionary.
- # Also look for well-known application markers.
-
- n = i16(self.fp.read(2)) - 2
- s = ImageFile._safe_read(self.fp, n)
-
- app = "APP%d" % (marker & 15)
-
- self.app[app] = s # compatibility
- self.applist.append((app, s))
-
- if marker == 0xFFE0 and s[:4] == b"JFIF":
- # extract JFIF information
- self.info["jfif"] = version = i16(s, 5) # version
- self.info["jfif_version"] = divmod(version, 256)
- # extract JFIF properties
- try:
- jfif_unit = s[7]
- jfif_density = i16(s, 8), i16(s, 10)
- except Exception:
- pass
- else:
- if jfif_unit == 1:
- self.info["dpi"] = jfif_density
- self.info["jfif_unit"] = jfif_unit
- self.info["jfif_density"] = jfif_density
- elif marker == 0xFFE1 and s[:5] == b"Exif\0":
- if "exif" not in self.info:
- # extract EXIF information (incomplete)
- self.info["exif"] = s # FIXME: value will change
- self._exif_offset = self.fp.tell() - n + 6
- elif marker == 0xFFE2 and s[:5] == b"FPXR\0":
- # extract FlashPix information (incomplete)
- self.info["flashpix"] = s # FIXME: value will change
- elif marker == 0xFFE2 and s[:12] == b"ICC_PROFILE\0":
- # Since an ICC profile can be larger than the maximum size of
- # a JPEG marker (64K), we need provisions to split it into
- # multiple markers. The format defined by the ICC specifies
- # one or more APP2 markers containing the following data:
- # Identifying string ASCII "ICC_PROFILE\0" (12 bytes)
- # Marker sequence number 1, 2, etc (1 byte)
- # Number of markers Total of APP2's used (1 byte)
- # Profile data (remainder of APP2 data)
- # Decoders should use the marker sequence numbers to
- # reassemble the profile, rather than assuming that the APP2
- # markers appear in the correct sequence.
- self.icclist.append(s)
- elif marker == 0xFFED and s[:14] == b"Photoshop 3.0\x00":
- # parse the image resource block
- offset = 14
- photoshop = self.info.setdefault("photoshop", {})
- while s[offset : offset + 4] == b"8BIM":
- try:
- offset += 4
- # resource code
- code = i16(s, offset)
- offset += 2
- # resource name (usually empty)
- name_len = s[offset]
- # name = s[offset+1:offset+1+name_len]
- offset += 1 + name_len
- offset += offset & 1 # align
- # resource data block
- size = i32(s, offset)
- offset += 4
- data = s[offset : offset + size]
- if code == 0x03ED: # ResolutionInfo
- data = {
- "XResolution": i32(data, 0) / 65536,
- "DisplayedUnitsX": i16(data, 4),
- "YResolution": i32(data, 8) / 65536,
- "DisplayedUnitsY": i16(data, 12),
- }
- photoshop[code] = data
- offset += size
- offset += offset & 1 # align
- except struct.error:
- break # insufficient data
-
- elif marker == 0xFFEE and s[:5] == b"Adobe":
- self.info["adobe"] = i16(s, 5)
- # extract Adobe custom properties
- try:
- adobe_transform = s[11]
- except IndexError:
- pass
- else:
- self.info["adobe_transform"] = adobe_transform
- elif marker == 0xFFE2 and s[:4] == b"MPF\0":
- # extract MPO information
- self.info["mp"] = s[4:]
- # offset is current location minus buffer size
- # plus constant header size
- self.info["mpoffset"] = self.fp.tell() - n + 4
-
- # If DPI isn't in JPEG header, fetch from EXIF
- if "dpi" not in self.info and "exif" in self.info:
- try:
- exif = self.getexif()
- resolution_unit = exif[0x0128]
- x_resolution = exif[0x011A]
- try:
- dpi = float(x_resolution[0]) / x_resolution[1]
- except TypeError:
- dpi = x_resolution
- if math.isnan(dpi):
- raise ValueError
- if resolution_unit == 3: # cm
- # 1 dpcm = 2.54 dpi
- dpi *= 2.54
- self.info["dpi"] = dpi, dpi
- except (TypeError, KeyError, SyntaxError, ValueError, ZeroDivisionError):
- # SyntaxError for invalid/unreadable EXIF
- # KeyError for dpi not included
- # ZeroDivisionError for invalid dpi rational value
- # ValueError or TypeError for dpi being an invalid float
- self.info["dpi"] = 72, 72
-
-
-def COM(self, marker):
- #
- # Comment marker. Store these in the APP dictionary.
- n = i16(self.fp.read(2)) - 2
- s = ImageFile._safe_read(self.fp, n)
-
- self.info["comment"] = s
- self.app["COM"] = s # compatibility
- self.applist.append(("COM", s))
-
-
-def SOF(self, marker):
- #
- # Start of frame marker. Defines the size and mode of the
- # image. JPEG is colour blind, so we use some simple
- # heuristics to map the number of layers to an appropriate
- # mode. Note that this could be made a bit brighter, by
- # looking for JFIF and Adobe APP markers.
-
- n = i16(self.fp.read(2)) - 2
- s = ImageFile._safe_read(self.fp, n)
- self._size = i16(s, 3), i16(s, 1)
-
- self.bits = s[0]
- if self.bits != 8:
- msg = f"cannot handle {self.bits}-bit layers"
- raise SyntaxError(msg)
-
- self.layers = s[5]
- if self.layers == 1:
- self.mode = "L"
- elif self.layers == 3:
- self.mode = "RGB"
- elif self.layers == 4:
- self.mode = "CMYK"
- else:
- msg = f"cannot handle {self.layers}-layer images"
- raise SyntaxError(msg)
-
- if marker in [0xFFC2, 0xFFC6, 0xFFCA, 0xFFCE]:
- self.info["progressive"] = self.info["progression"] = 1
-
- if self.icclist:
- # fixup icc profile
- self.icclist.sort() # sort by sequence number
- if self.icclist[0][13] == len(self.icclist):
- profile = []
- for p in self.icclist:
- profile.append(p[14:])
- icc_profile = b"".join(profile)
- else:
- icc_profile = None # wrong number of fragments
- self.info["icc_profile"] = icc_profile
- self.icclist = []
-
- for i in range(6, len(s), 3):
- t = s[i : i + 3]
- # 4-tuples: id, vsamp, hsamp, qtable
- self.layer.append((t[0], t[1] // 16, t[1] & 15, t[2]))
-
-
-def DQT(self, marker):
- #
- # Define quantization table. Note that there might be more
- # than one table in each marker.
-
- # FIXME: The quantization tables can be used to estimate the
- # compression quality.
-
- n = i16(self.fp.read(2)) - 2
- s = ImageFile._safe_read(self.fp, n)
- while len(s):
- v = s[0]
- precision = 1 if (v // 16 == 0) else 2 # in bytes
- qt_length = 1 + precision * 64
- if len(s) < qt_length:
- msg = "bad quantization table marker"
- raise SyntaxError(msg)
- data = array.array("B" if precision == 1 else "H", s[1:qt_length])
- if sys.byteorder == "little" and precision > 1:
- data.byteswap() # the values are always big-endian
- self.quantization[v & 15] = [data[i] for i in zigzag_index]
- s = s[qt_length:]
-
-
-#
-# JPEG marker table
-
-MARKER = {
- 0xFFC0: ("SOF0", "Baseline DCT", SOF),
- 0xFFC1: ("SOF1", "Extended Sequential DCT", SOF),
- 0xFFC2: ("SOF2", "Progressive DCT", SOF),
- 0xFFC3: ("SOF3", "Spatial lossless", SOF),
- 0xFFC4: ("DHT", "Define Huffman table", Skip),
- 0xFFC5: ("SOF5", "Differential sequential DCT", SOF),
- 0xFFC6: ("SOF6", "Differential progressive DCT", SOF),
- 0xFFC7: ("SOF7", "Differential spatial", SOF),
- 0xFFC8: ("JPG", "Extension", None),
- 0xFFC9: ("SOF9", "Extended sequential DCT (AC)", SOF),
- 0xFFCA: ("SOF10", "Progressive DCT (AC)", SOF),
- 0xFFCB: ("SOF11", "Spatial lossless DCT (AC)", SOF),
- 0xFFCC: ("DAC", "Define arithmetic coding conditioning", Skip),
- 0xFFCD: ("SOF13", "Differential sequential DCT (AC)", SOF),
- 0xFFCE: ("SOF14", "Differential progressive DCT (AC)", SOF),
- 0xFFCF: ("SOF15", "Differential spatial (AC)", SOF),
- 0xFFD0: ("RST0", "Restart 0", None),
- 0xFFD1: ("RST1", "Restart 1", None),
- 0xFFD2: ("RST2", "Restart 2", None),
- 0xFFD3: ("RST3", "Restart 3", None),
- 0xFFD4: ("RST4", "Restart 4", None),
- 0xFFD5: ("RST5", "Restart 5", None),
- 0xFFD6: ("RST6", "Restart 6", None),
- 0xFFD7: ("RST7", "Restart 7", None),
- 0xFFD8: ("SOI", "Start of image", None),
- 0xFFD9: ("EOI", "End of image", None),
- 0xFFDA: ("SOS", "Start of scan", Skip),
- 0xFFDB: ("DQT", "Define quantization table", DQT),
- 0xFFDC: ("DNL", "Define number of lines", Skip),
- 0xFFDD: ("DRI", "Define restart interval", Skip),
- 0xFFDE: ("DHP", "Define hierarchical progression", SOF),
- 0xFFDF: ("EXP", "Expand reference component", Skip),
- 0xFFE0: ("APP0", "Application segment 0", APP),
- 0xFFE1: ("APP1", "Application segment 1", APP),
- 0xFFE2: ("APP2", "Application segment 2", APP),
- 0xFFE3: ("APP3", "Application segment 3", APP),
- 0xFFE4: ("APP4", "Application segment 4", APP),
- 0xFFE5: ("APP5", "Application segment 5", APP),
- 0xFFE6: ("APP6", "Application segment 6", APP),
- 0xFFE7: ("APP7", "Application segment 7", APP),
- 0xFFE8: ("APP8", "Application segment 8", APP),
- 0xFFE9: ("APP9", "Application segment 9", APP),
- 0xFFEA: ("APP10", "Application segment 10", APP),
- 0xFFEB: ("APP11", "Application segment 11", APP),
- 0xFFEC: ("APP12", "Application segment 12", APP),
- 0xFFED: ("APP13", "Application segment 13", APP),
- 0xFFEE: ("APP14", "Application segment 14", APP),
- 0xFFEF: ("APP15", "Application segment 15", APP),
- 0xFFF0: ("JPG0", "Extension 0", None),
- 0xFFF1: ("JPG1", "Extension 1", None),
- 0xFFF2: ("JPG2", "Extension 2", None),
- 0xFFF3: ("JPG3", "Extension 3", None),
- 0xFFF4: ("JPG4", "Extension 4", None),
- 0xFFF5: ("JPG5", "Extension 5", None),
- 0xFFF6: ("JPG6", "Extension 6", None),
- 0xFFF7: ("JPG7", "Extension 7", None),
- 0xFFF8: ("JPG8", "Extension 8", None),
- 0xFFF9: ("JPG9", "Extension 9", None),
- 0xFFFA: ("JPG10", "Extension 10", None),
- 0xFFFB: ("JPG11", "Extension 11", None),
- 0xFFFC: ("JPG12", "Extension 12", None),
- 0xFFFD: ("JPG13", "Extension 13", None),
- 0xFFFE: ("COM", "Comment", COM),
-}
-
-
-def _accept(prefix):
- # Magic number was taken from https://en.wikipedia.org/wiki/JPEG
- return prefix[:3] == b"\xFF\xD8\xFF"
-
-
-##
-# Image plugin for JPEG and JFIF images.
-
-
-class JpegImageFile(ImageFile.ImageFile):
- format = "JPEG"
- format_description = "JPEG (ISO 10918)"
-
- def _open(self):
- s = self.fp.read(3)
-
- if not _accept(s):
- msg = "not a JPEG file"
- raise SyntaxError(msg)
- s = b"\xFF"
-
- # Create attributes
- self.bits = self.layers = 0
-
- # JPEG specifics (internal)
- self.layer = []
- self.huffman_dc = {}
- self.huffman_ac = {}
- self.quantization = {}
- self.app = {} # compatibility
- self.applist = []
- self.icclist = []
-
- while True:
- i = s[0]
- if i == 0xFF:
- s = s + self.fp.read(1)
- i = i16(s)
- else:
- # Skip non-0xFF junk
- s = self.fp.read(1)
- continue
-
- if i in MARKER:
- name, description, handler = MARKER[i]
- if handler is not None:
- handler(self, i)
- if i == 0xFFDA: # start of scan
- rawmode = self.mode
- if self.mode == "CMYK":
- rawmode = "CMYK;I" # assume adobe conventions
- self.tile = [("jpeg", (0, 0) + self.size, 0, (rawmode, ""))]
- # self.__offset = self.fp.tell()
- break
- s = self.fp.read(1)
- elif i == 0 or i == 0xFFFF:
- # padded marker or junk; move on
- s = b"\xff"
- elif i == 0xFF00: # Skip extraneous data (escaped 0xFF)
- s = self.fp.read(1)
- else:
- msg = "no marker found"
- raise SyntaxError(msg)
-
- def load_read(self, read_bytes):
- """
- internal: read more image data
- For premature EOF and LOAD_TRUNCATED_IMAGES adds EOI marker
- so libjpeg can finish decoding
- """
- s = self.fp.read(read_bytes)
-
- if not s and ImageFile.LOAD_TRUNCATED_IMAGES and not hasattr(self, "_ended"):
- # Premature EOF.
- # Pretend file is finished adding EOI marker
- self._ended = True
- return b"\xFF\xD9"
-
- return s
-
- def draft(self, mode, size):
- if len(self.tile) != 1:
- return
-
- # Protect from second call
- if self.decoderconfig:
- return
-
- d, e, o, a = self.tile[0]
- scale = 1
- original_size = self.size
-
- if a[0] == "RGB" and mode in ["L", "YCbCr"]:
- self.mode = mode
- a = mode, ""
-
- if size:
- scale = min(self.size[0] // size[0], self.size[1] // size[1])
- for s in [8, 4, 2, 1]:
- if scale >= s:
- break
- e = (
- e[0],
- e[1],
- (e[2] - e[0] + s - 1) // s + e[0],
- (e[3] - e[1] + s - 1) // s + e[1],
- )
- self._size = ((self.size[0] + s - 1) // s, (self.size[1] + s - 1) // s)
- scale = s
-
- self.tile = [(d, e, o, a)]
- self.decoderconfig = (scale, 0)
-
- box = (0, 0, original_size[0] / scale, original_size[1] / scale)
- return self.mode, box
-
- def load_djpeg(self):
- # ALTERNATIVE: handle JPEGs via the IJG command line utilities
-
- f, path = tempfile.mkstemp()
- os.close(f)
- if os.path.exists(self.filename):
- subprocess.check_call(["djpeg", "-outfile", path, self.filename])
- else:
- try:
- os.unlink(path)
- except OSError:
- pass
-
- msg = "Invalid Filename"
- raise ValueError(msg)
-
- try:
- with Image.open(path) as _im:
- _im.load()
- self.im = _im.im
- finally:
- try:
- os.unlink(path)
- except OSError:
- pass
-
- self.mode = self.im.mode
- self._size = self.im.size
-
- self.tile = []
-
- def _getexif(self):
- return _getexif(self)
-
- def _getmp(self):
- return _getmp(self)
-
- def getxmp(self):
- """
- Returns a dictionary containing the XMP tags.
- Requires defusedxml to be installed.
-
- :returns: XMP tags in a dictionary.
- """
-
- for segment, content in self.applist:
- if segment == "APP1":
- marker, xmp_tags = content.rsplit(b"\x00", 1)
- if marker == b"http://ns.adobe.com/xap/1.0/":
- return self._getxmp(xmp_tags)
- return {}
-
-
-def _getexif(self):
- if "exif" not in self.info:
- return None
- return self.getexif()._get_merged_dict()
-
-
-def _getmp(self):
- # Extract MP information. This method was inspired by the "highly
- # experimental" _getexif version that's been in use for years now,
- # itself based on the ImageFileDirectory class in the TIFF plugin.
-
- # The MP record essentially consists of a TIFF file embedded in a JPEG
- # application marker.
- try:
- data = self.info["mp"]
- except KeyError:
- return None
- file_contents = io.BytesIO(data)
- head = file_contents.read(8)
- endianness = ">" if head[:4] == b"\x4d\x4d\x00\x2a" else "<"
- # process dictionary
- from . import TiffImagePlugin
-
- try:
- info = TiffImagePlugin.ImageFileDirectory_v2(head)
- file_contents.seek(info.next)
- info.load(file_contents)
- mp = dict(info)
- except Exception as e:
- msg = "malformed MP Index (unreadable directory)"
- raise SyntaxError(msg) from e
- # it's an error not to have a number of images
- try:
- quant = mp[0xB001]
- except KeyError as e:
- msg = "malformed MP Index (no number of images)"
- raise SyntaxError(msg) from e
- # get MP entries
- mpentries = []
- try:
- rawmpentries = mp[0xB002]
- for entrynum in range(0, quant):
- unpackedentry = struct.unpack_from(
- f"{endianness}LLLHH", rawmpentries, entrynum * 16
- )
- labels = ("Attribute", "Size", "DataOffset", "EntryNo1", "EntryNo2")
- mpentry = dict(zip(labels, unpackedentry))
- mpentryattr = {
- "DependentParentImageFlag": bool(mpentry["Attribute"] & (1 << 31)),
- "DependentChildImageFlag": bool(mpentry["Attribute"] & (1 << 30)),
- "RepresentativeImageFlag": bool(mpentry["Attribute"] & (1 << 29)),
- "Reserved": (mpentry["Attribute"] & (3 << 27)) >> 27,
- "ImageDataFormat": (mpentry["Attribute"] & (7 << 24)) >> 24,
- "MPType": mpentry["Attribute"] & 0x00FFFFFF,
- }
- if mpentryattr["ImageDataFormat"] == 0:
- mpentryattr["ImageDataFormat"] = "JPEG"
- else:
- msg = "unsupported picture format in MPO"
- raise SyntaxError(msg)
- mptypemap = {
- 0x000000: "Undefined",
- 0x010001: "Large Thumbnail (VGA Equivalent)",
- 0x010002: "Large Thumbnail (Full HD Equivalent)",
- 0x020001: "Multi-Frame Image (Panorama)",
- 0x020002: "Multi-Frame Image: (Disparity)",
- 0x020003: "Multi-Frame Image: (Multi-Angle)",
- 0x030000: "Baseline MP Primary Image",
- }
- mpentryattr["MPType"] = mptypemap.get(mpentryattr["MPType"], "Unknown")
- mpentry["Attribute"] = mpentryattr
- mpentries.append(mpentry)
- mp[0xB002] = mpentries
- except KeyError as e:
- msg = "malformed MP Index (bad MP Entry)"
- raise SyntaxError(msg) from e
- # Next we should try and parse the individual image unique ID list;
- # we don't because I've never seen this actually used in a real MPO
- # file and so can't test it.
- return mp
-
-
-# --------------------------------------------------------------------
-# stuff to save JPEG files
-
-RAWMODE = {
- "1": "L",
- "L": "L",
- "RGB": "RGB",
- "RGBX": "RGB",
- "CMYK": "CMYK;I", # assume adobe conventions
- "YCbCr": "YCbCr",
-}
-
-# fmt: off
-zigzag_index = (
- 0, 1, 5, 6, 14, 15, 27, 28,
- 2, 4, 7, 13, 16, 26, 29, 42,
- 3, 8, 12, 17, 25, 30, 41, 43,
- 9, 11, 18, 24, 31, 40, 44, 53,
- 10, 19, 23, 32, 39, 45, 52, 54,
- 20, 22, 33, 38, 46, 51, 55, 60,
- 21, 34, 37, 47, 50, 56, 59, 61,
- 35, 36, 48, 49, 57, 58, 62, 63,
-)
-
-samplings = {
- (1, 1, 1, 1, 1, 1): 0,
- (2, 1, 1, 1, 1, 1): 1,
- (2, 2, 1, 1, 1, 1): 2,
-}
-# fmt: on
-
-
-def get_sampling(im):
- # There's no subsampling when images have only 1 layer
- # (grayscale images) or when they are CMYK (4 layers),
- # so set subsampling to the default value.
- #
- # NOTE: currently Pillow can't encode JPEG to YCCK format.
- # If YCCK support is added in the future, subsampling code will have
- # to be updated (here and in JpegEncode.c) to deal with 4 layers.
- if not hasattr(im, "layers") or im.layers in (1, 4):
- return -1
- sampling = im.layer[0][1:3] + im.layer[1][1:3] + im.layer[2][1:3]
- return samplings.get(sampling, -1)
-
-
-def _save(im, fp, filename):
- if im.width == 0 or im.height == 0:
- msg = "cannot write empty image as JPEG"
- raise ValueError(msg)
-
- try:
- rawmode = RAWMODE[im.mode]
- except KeyError as e:
- msg = f"cannot write mode {im.mode} as JPEG"
- raise OSError(msg) from e
-
- info = im.encoderinfo
-
- dpi = [round(x) for x in info.get("dpi", (0, 0))]
-
- quality = info.get("quality", -1)
- subsampling = info.get("subsampling", -1)
- qtables = info.get("qtables")
-
- if quality == "keep":
- quality = -1
- subsampling = "keep"
- qtables = "keep"
- elif quality in presets:
- preset = presets[quality]
- quality = -1
- subsampling = preset.get("subsampling", -1)
- qtables = preset.get("quantization")
- elif not isinstance(quality, int):
- msg = "Invalid quality setting"
- raise ValueError(msg)
- else:
- if subsampling in presets:
- subsampling = presets[subsampling].get("subsampling", -1)
- if isinstance(qtables, str) and qtables in presets:
- qtables = presets[qtables].get("quantization")
-
- if subsampling == "4:4:4":
- subsampling = 0
- elif subsampling == "4:2:2":
- subsampling = 1
- elif subsampling == "4:2:0":
- subsampling = 2
- elif subsampling == "4:1:1":
- # For compatibility. Before Pillow 4.3, 4:1:1 actually meant 4:2:0.
- # Set 4:2:0 if someone is still using that value.
- subsampling = 2
- elif subsampling == "keep":
- if im.format != "JPEG":
- msg = "Cannot use 'keep' when original image is not a JPEG"
- raise ValueError(msg)
- subsampling = get_sampling(im)
-
- def validate_qtables(qtables):
- if qtables is None:
- return qtables
- if isinstance(qtables, str):
- try:
- lines = [
- int(num)
- for line in qtables.splitlines()
- for num in line.split("#", 1)[0].split()
- ]
- except ValueError as e:
- msg = "Invalid quantization table"
- raise ValueError(msg) from e
- else:
- qtables = [lines[s : s + 64] for s in range(0, len(lines), 64)]
- if isinstance(qtables, (tuple, list, dict)):
- if isinstance(qtables, dict):
- qtables = [
- qtables[key] for key in range(len(qtables)) if key in qtables
- ]
- elif isinstance(qtables, tuple):
- qtables = list(qtables)
- if not (0 < len(qtables) < 5):
- msg = "None or too many quantization tables"
- raise ValueError(msg)
- for idx, table in enumerate(qtables):
- try:
- if len(table) != 64:
- raise TypeError
- table = array.array("H", table)
- except TypeError as e:
- msg = "Invalid quantization table"
- raise ValueError(msg) from e
- else:
- qtables[idx] = list(table)
- return qtables
-
- if qtables == "keep":
- if im.format != "JPEG":
- msg = "Cannot use 'keep' when original image is not a JPEG"
- raise ValueError(msg)
- qtables = getattr(im, "quantization", None)
- qtables = validate_qtables(qtables)
-
- extra = info.get("extra", b"")
-
- MAX_BYTES_IN_MARKER = 65533
- icc_profile = info.get("icc_profile")
- if icc_profile:
- ICC_OVERHEAD_LEN = 14
- MAX_DATA_BYTES_IN_MARKER = MAX_BYTES_IN_MARKER - ICC_OVERHEAD_LEN
- markers = []
- while icc_profile:
- markers.append(icc_profile[:MAX_DATA_BYTES_IN_MARKER])
- icc_profile = icc_profile[MAX_DATA_BYTES_IN_MARKER:]
- i = 1
- for marker in markers:
- size = o16(2 + ICC_OVERHEAD_LEN + len(marker))
- extra += (
- b"\xFF\xE2"
- + size
- + b"ICC_PROFILE\0"
- + o8(i)
- + o8(len(markers))
- + marker
- )
- i += 1
-
- comment = info.get("comment", im.info.get("comment"))
-
- # "progressive" is the official name, but older documentation
- # says "progression"
- # FIXME: issue a warning if the wrong form is used (post-1.1.7)
- progressive = info.get("progressive", False) or info.get("progression", False)
-
- optimize = info.get("optimize", False)
-
- exif = info.get("exif", b"")
- if isinstance(exif, Image.Exif):
- exif = exif.tobytes()
- if len(exif) > MAX_BYTES_IN_MARKER:
- msg = "EXIF data is too long"
- raise ValueError(msg)
-
- # get keyword arguments
- im.encoderconfig = (
- quality,
- progressive,
- info.get("smooth", 0),
- optimize,
- info.get("streamtype", 0),
- dpi[0],
- dpi[1],
- subsampling,
- qtables,
- comment,
- extra,
- exif,
- )
-
-    # If we optimize, libjpeg needs a buffer big enough to hold the whole image
-    # in one shot. Guess the size at im.size bytes (raw pixel size is
-    # channels * size); this value has been used in a django-imagekit patch:
-    # https://github.com/matthewwithanm/django-imagekit/issues/50
- bufsize = 0
- if optimize or progressive:
- # CMYK can be bigger
- if im.mode == "CMYK":
- bufsize = 4 * im.size[0] * im.size[1]
- # keep sets quality to -1, but the actual value may be high.
- elif quality >= 95 or quality == -1:
- bufsize = 2 * im.size[0] * im.size[1]
- else:
- bufsize = im.size[0] * im.size[1]
-
- # The EXIF info needs to be written as one block, + APP1, + one spare byte.
- # Ensure that our buffer is big enough. Same with the icc_profile block.
- bufsize = max(ImageFile.MAXBLOCK, bufsize, len(exif) + 5, len(extra) + 1)
-
- ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize)
-
-
-def _save_cjpeg(im, fp, filename):
- # ALTERNATIVE: handle JPEGs via the IJG command line utilities.
- tempfile = im._dump()
- subprocess.check_call(["cjpeg", "-outfile", filename, tempfile])
- try:
- os.unlink(tempfile)
- except OSError:
- pass
-
-
-##
-# Factory for making JPEG and MPO instances
-def jpeg_factory(fp=None, filename=None):
- im = JpegImageFile(fp, filename)
- try:
- mpheader = im._getmp()
- if mpheader[45057] > 1:
- # It's actually an MPO
- from .MpoImagePlugin import MpoImageFile
-
- # Don't reload everything, just convert it.
- im = MpoImageFile.adopt(im, mpheader)
- except (TypeError, IndexError):
- # It is really a JPEG
- pass
- except SyntaxError:
- warnings.warn(
- "Image appears to be a malformed MPO file, it will be "
- "interpreted as a base JPEG file"
- )
- return im
-
-
-# ---------------------------------------------------------------------
-# Registry stuff
-
-Image.register_open(JpegImageFile.format, jpeg_factory, _accept)
-Image.register_save(JpegImageFile.format, _save)
-
-Image.register_extensions(JpegImageFile.format, [".jfif", ".jpe", ".jpg", ".jpeg"])
-
-Image.register_mime(JpegImageFile.format, "image/jpeg")
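The APP2/ICC_PROFILE handling above (chunked on save, sequence-numbered reassembly on load) can be summarized in a small standalone sketch; the 65533-byte marker limit and the 14-byte "ICC_PROFILE\0" + sequence + count overhead are taken from the code above, while the helper names are illustrative only:

MAX_BYTES_IN_MARKER = 65533   # largest JPEG marker payload
ICC_OVERHEAD_LEN = 14         # b"ICC_PROFILE\0" (12) + sequence number (1) + marker count (1)
CHUNK = MAX_BYTES_IN_MARKER - ICC_OVERHEAD_LEN

def split_icc_profile(icc_profile):
    """Split raw profile bytes into chunks that fit one APP2 marker each (as in _save above)."""
    return [icc_profile[i:i + CHUNK] for i in range(0, len(icc_profile), CHUNK)]

def join_icc_profile(app2_payloads):
    """Reassemble full APP2 payloads by the sequence-number byte, not marker order (as in SOF above)."""
    ordered = sorted(app2_payloads, key=lambda s: s[12])
    return b"".join(s[14:] for s in ordered)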
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_compat.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_compat.py
deleted file mode 100644
index 22d29ab8ac303756047d105dadafcfd5107563ef..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_compat.py
+++ /dev/null
@@ -1,217 +0,0 @@
-from __future__ import annotations
-
-from abc import ABCMeta, abstractmethod
-from contextlib import AbstractContextManager
-from types import TracebackType
-from typing import (
- TYPE_CHECKING,
- Any,
- AsyncContextManager,
- Callable,
- ContextManager,
- Generator,
- Generic,
- Iterable,
- List,
- TypeVar,
- Union,
- overload,
-)
-from warnings import warn
-
-if TYPE_CHECKING:
- from ._testing import TaskInfo
-else:
- TaskInfo = object
-
-T = TypeVar("T")
-AnyDeprecatedAwaitable = Union[
- "DeprecatedAwaitable",
- "DeprecatedAwaitableFloat",
- "DeprecatedAwaitableList[T]",
- TaskInfo,
-]
-
-
-@overload
-async def maybe_async(__obj: TaskInfo) -> TaskInfo:
- ...
-
-
-@overload
-async def maybe_async(__obj: DeprecatedAwaitableFloat) -> float:
- ...
-
-
-@overload
-async def maybe_async(__obj: DeprecatedAwaitableList[T]) -> list[T]:
- ...
-
-
-@overload
-async def maybe_async(__obj: DeprecatedAwaitable) -> None:
- ...
-
-
-async def maybe_async(
- __obj: AnyDeprecatedAwaitable[T],
-) -> TaskInfo | float | list[T] | None:
- """
- Await on the given object if necessary.
-
- This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
- methods were converted from coroutine functions into regular functions.
-
- Do **not** try to use this for any other purpose!
-
- :return: the result of awaiting on the object if coroutine, or the object itself otherwise
-
- .. versionadded:: 2.2
-
- """
- return __obj._unwrap()
-
-
-class _ContextManagerWrapper:
- def __init__(self, cm: ContextManager[T]):
- self._cm = cm
-
- async def __aenter__(self) -> T:
- return self._cm.__enter__()
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- return self._cm.__exit__(exc_type, exc_val, exc_tb)
-
-
-def maybe_async_cm(
- cm: ContextManager[T] | AsyncContextManager[T],
-) -> AsyncContextManager[T]:
- """
- Wrap a regular context manager as an async one if necessary.
-
- This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
- methods were changed to return regular context managers instead of async ones.
-
- :param cm: a regular or async context manager
- :return: an async context manager
-
- .. versionadded:: 2.2
-
- """
- if not isinstance(cm, AbstractContextManager):
- raise TypeError("Given object is not an context manager")
-
- return _ContextManagerWrapper(cm)
-
-
-def _warn_deprecation(
- awaitable: AnyDeprecatedAwaitable[Any], stacklevel: int = 1
-) -> None:
- warn(
- f'Awaiting on {awaitable._name}() is deprecated. Use "await '
- f"anyio.maybe_async({awaitable._name}(...)) if you have to support both AnyIO 2.x "
- f'and 3.x, or just remove the "await" if you are completely migrating to AnyIO 3+.',
- DeprecationWarning,
- stacklevel=stacklevel + 1,
- )
-
-
-class DeprecatedAwaitable:
- def __init__(self, func: Callable[..., DeprecatedAwaitable]):
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, None]:
- _warn_deprecation(self)
- if False:
- yield
-
- def __reduce__(self) -> tuple[type[None], tuple[()]]:
- return type(None), ()
-
- def _unwrap(self) -> None:
- return None
-
-
-class DeprecatedAwaitableFloat(float):
- def __new__(
- cls, x: float, func: Callable[..., DeprecatedAwaitableFloat]
- ) -> DeprecatedAwaitableFloat:
- return super().__new__(cls, x)
-
- def __init__(self, x: float, func: Callable[..., DeprecatedAwaitableFloat]):
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, float]:
- _warn_deprecation(self)
- if False:
- yield
-
- return float(self)
-
- def __reduce__(self) -> tuple[type[float], tuple[float]]:
- return float, (float(self),)
-
- def _unwrap(self) -> float:
- return float(self)
-
-
-class DeprecatedAwaitableList(List[T]):
- def __init__(
- self,
- iterable: Iterable[T] = (),
- *,
- func: Callable[..., DeprecatedAwaitableList[T]],
- ):
- super().__init__(iterable)
- self._name = f"{func.__module__}.{func.__qualname__}"
-
- def __await__(self) -> Generator[None, None, list[T]]:
- _warn_deprecation(self)
- if False:
- yield
-
- return list(self)
-
- def __reduce__(self) -> tuple[type[list[T]], tuple[list[T]]]:
- return list, (list(self),)
-
- def _unwrap(self) -> list[T]:
- return list(self)
-
-
-class DeprecatedAsyncContextManager(Generic[T], metaclass=ABCMeta):
- @abstractmethod
- def __enter__(self) -> T:
- pass
-
- @abstractmethod
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- pass
-
- async def __aenter__(self) -> T:
- warn(
- f"Using {self.__class__.__name__} as an async context manager has been deprecated. "
- f'Use "async with anyio.maybe_async_cm(yourcontextmanager) as foo:" if you have to '
- f'support both AnyIO 2.x and 3.x, or just remove the "async" from "async with" if '
- f"you are completely migrating to AnyIO 3+.",
- DeprecationWarning,
- )
- return self.__enter__()
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- return self.__exit__(exc_type, exc_val, exc_tb)
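A brief usage sketch following the docstrings of the compatibility shims above, assuming the AnyIO 3.x top-level exports of these helpers (they were removed later); illustrative only:

import anyio

async def main():
    # In AnyIO 3.x, current_time() returns a DeprecatedAwaitableFloat, so 2.x-style code
    # can keep awaiting it through maybe_async() without the deprecation warning.
    now = await anyio.maybe_async(anyio.current_time())
    print(now)

    # Wrap a synchronous context manager so 2.x-style "async with" keeps working.
    async with anyio.maybe_async_cm(anyio.fail_after(5)):
        await anyio.sleep(0.01)

anyio.run(main)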
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/__init__.py
deleted file mode 100644
index 0760c26c2c4c98be5615793d67e9480c982077bf..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/CH/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
-
-# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
-#
-# Permission to use, copy, modify, and distribute this software and its
-# documentation for any purpose with or without fee is hereby granted,
-# provided that the above copyright notice and this permission notice
-# appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
-# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
-# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
-# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-"""Class CH rdata type classes."""
-
-__all__ = [
- "A",
-]
diff --git a/spaces/jordonpeter01/laudos/page2.html b/spaces/jordonpeter01/laudos/page2.html
deleted file mode 100644
index 2a792d732162075ebe42f2cd662560fff337c9ab..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/laudos/page2.html
+++ /dev/null
@@ -1,2846 +0,0 @@
-Bootstrap Example
-
-ULTRASSONOGRAFIA DE ABDOME TOTAL
-
-RESULTADO:
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea.
-Veias porta e hepáticas com calibre preservado.
-Vesícula biliar de paredes lisas e finas, com conteúdo anecogênico habitual, sem evidência de cálculos no interior.
-Vias biliares intra e extra-hepáticas sem evidência de dilatação.
-Pâncreas com dimensões, contornos e ecogenicidade normais.
-Baço com dimensões, contornos e ecogenicidade normais.
-Rim direito com dimensões, contornos e diferenciação córtico-medular preservados. Ausência de cálculos ou dilatação do sistema coletor.
-Rim esquerdo com dimensões, contornos e diferenciação córtico-medular preservados. Ausência de cálculos ou dilatação do sistema coletor.
-Aorta abdominal com trajeto habitual, sem dilatações evidentes.
-Bexiga de paredes finas e lisas, com conteúdo anecogênico habitual.
-Ausência de líquido livre na cavidade abdominal.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-
-ULTRASSONOGRAFIA DE ABDOME SUPERIOR
-
-RESULTADO:
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea.
-Veias porta e hepáticas com calibre preservado.
-Vesícula biliar de paredes lisas e finas, com conteúdo anecogênico habitual, sem evidência de cálculos no interior.
-Vias biliares intra e extra-hepáticas sem evidência de dilatação.
-Pâncreas com dimensões, contornos e ecogenicidade normais.
-Baço com dimensões, contornos e ecogenicidade normais.
-Aorta abdominal com trajeto habitual, sem dilatações evidentes.
-Ausência de líquido livre no abdome superior.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE ABDOME INFERIOR
-
-RESULTADO:
-Avaliação direcionada para a pesquisa de apendicite aguda, conforme indicação clínica.
-O estudo da fossa ilíaca direita evidenciou alças intestinais com peristalse conservada e gordura mesentérica de aspecto habitual.
-Apêndice não identificado.
-
-IMPRESSÃO:
-Ausência de sinais ecográficos sugestivos de apendicite aguda.
-
-Obs.: Salientamos que a avaliação ecográfica negativa não afasta formalmente a possibilidade de apendicite aguda, sendo necessária a estrita correlação clínica e laboratorial.
-
-
-ULTRASSONOGRAFIA DE ABDOME INFERIOR
-
-RESULTADO:
-Avaliação direcionada para a pesquisa de apendicite aguda, conforme indicação clínica.
-O estudo da fossa ilíaca direita evidenciou imagem tubular terminando em fundo cego, não compressível, com halo hipoecóico, medido cerca de xx cm de diâmetro, associada a densificação do tecido gorduroso adjacente e proeminência de linfonodos regionais de aspecto reacional.
-
-IMPRESSÃO:
-Aspecto ecográfico sugestivo de apendicite aguda. Necessário estrita correlação clínico-laboratorial.
-
-
-ULTRASSONOGRAFIA FAST
-
-RESULTADO:
-Estudo realizado em caráter de urgência, focado na pesquisa de liquido livre intra-abdominal.
-Ausência de líquido livre na cavidade abdominal e nos seios costofrênicos.
-Não há evidencias formais de derrame pericárdico.
-
-
-ULTRASSONOGRAFIA DE RINS E VIAS URINÁRIAS
-
-RESULTADO:
-Rim direito com topografia, forma, contornos e dimensões normais, medindo XXX cm, com volume estimado em cerca de XX cm³.
-Parênquima com espessura e diferenciação córtico-medular preservadas.
-Ausência de cálculos ou dilatação do sistema coletor à direita.
-Rim esquerdo com topografia, forma, contornos e dimensões normais, medindo XXX cm, com volume estimado em cerca de XX cm³.
-Parênquima com espessura e diferenciação córtico-medular preservadas.
-Ausência de cálculos ou dilatação do sistema coletor à esquerda.
-Bexiga de paredes finas e regulares, com conteúdo anecogênico habitual.
-Jatos ureterais presentes bilateralmente.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE APARELHO URINÁRIO E PRÓSTATA TRANSABDOMINAL
-
-RESULTADO:
-Rim direito com topografia, forma, contornos e dimensões normais, medindo XXX cm, com volume estimado em cerca de XX cm³.
-Parênquima com espessura e diferenciação córtico-medular preservadas.
-Ausência de cálculos ou dilatação do sistema coletor à direita.
-Rim esquerdo com topografia, forma, contornos e dimensões normais, medindo XXX cm, com volume estimado em cerca de XX cm³.
-Parênquima com espessura e diferenciação córtico-medular preservadas.
-Ausência de cálculos ou dilatação do sistema coletor à esquerda.
-Próstata de contornos regulares e dimensões preservadas, medindo cerca de xx cm (L x AP x T), com volume estimado em xx cm³.
-Próstata de contornos levemente lobulados e dimensões aumentadas, abaulando o assoalho vesical, medindo cerca de xxx cm (L x AP x T), com volume estimado de cerca de xxx cm³. Apresenta ainda ecotextura heterogênea, com focos de calcificação em seu interior.
-Vesículas seminais simétricas, com ecotextura conservada.
-Bexiga com cerca de XX ml de conteúdo anecogênico habitual, apresentando paredes finas e regulares.
-Resíduo pós-miccional desprezível.
-Resíduo pós-miccional de aproximadamente xx ml após duas micções.
-
-IMPRESSÃO:
-Exame sem evidentes anormalidades.
-Próstata de volume aumentado.
-Pequeno/moderado/acentuado resíduo pós-miccional.
-
-
-ULTRASSONOGRAFIA PROSTÁTICA TRANSABDOMINAL
-
-RESULTADO:
-Próstata de contornos regulares e dimensões preservadas, medindo cerca de xx cm (L x AP x T), com volume estimado em xx cm³.
-Próstata de contornos levemente lobulados e dimensões aumentadas, abaulando o assoalho vesical, medindo cerca de xxx cm (L x AP x T), com volume estimado de cerca de xxx cm³. Apresenta ainda ecotextura heterogênea, com focos de calcificação em seu interior.
-Vesículas seminais simétricas, com ecotextura conservada.
-Bexiga com cerca de XX ml de conteúdo anecogênico habitual, apresentando paredes finas e regulares.
-Resíduo pós-miccional desprezível.
-Resíduo pós-miccional de aproximadamente xx ml.
-
-IMPRESSÃO:
-Próstata com volume dentro dos limites da normalidade.
-Próstata de volume aumentado.
-Pequeno/Moderado/Acentuado resíduo pós-miccional.
-
-ULTRASSONOGRAFIA PROSTÁTICA TRANSRETAL
-
-RESULTADO:
-Exame realizado por vias suprapúbica e transretal evidencia:
-Bexiga com cerca de XX ml de conteúdo anecogênico habitual, apresentando paredes finas e regulares.
-Resíduo pós-miccional desprezível.
-Resíduo pós-miccional de aproximadamente xx ml.
-Próstata de contornos regulares e dimensões preservadas, medindo cerca de xx cm (L x AP x T), com volume de cerca de xxx cm³.
-Próstata de contornos levemente lobulados e dimensões aumentadas, abaulando o assoalho vesical, medindo cerca de xxx cm (L x AP x T), com volume estimado de cerca de xxx cm³.
-Zona periférica simétrica, regular e homogênea.
-Glândula central com textura e ecogenicidade preservadas.
-Vesículas seminais simétricas, com ecotextura conservada.
-
-IMPRESSÃO:
-Próstata com volume dentro dos limites da normalidade.
-Próstata de volume aumentado.
-Pequeno/Moderado/Acentuado resíduo pós-miccional.
-
-*Bexiga e resíduo pós-miccional avaliar SEMPRE por via transabdominal, antes do exame transretal
-
-
-ULTRASSONOGRAFIA DE ARTÉRIAS RENAIS COM DOPPLER
-
-RESULTADO:
-Rins com topografia, forma, contornos e dimensões normais (RD medindo XXX cm; RE medindo XXX cm).
-Parênquima renal com espessura e diferenciação córtico-medular preservadas, bilateralmente.
-Ausência de cálculos ou dilatação dos sistemas coletores.
-Aorta de aspecto habitual à altura dos hilos renais.
-Doppler espectral das artérias renais de padrão e velocidades preservados nos segmentos acessíveis.
-A relação entre as velocidades sistólicas das artérias renais e da aorta está dentro dos limites da normalidade.
-Padrões espectrais habituais das artérias intra-renais.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE APARELHO URINÁRIO E ARTÉRIAS RENAIS COM DOPPLER
-
-RESULTADO:
-Rins com topografia, forma, contornos e dimensões normais (RD medindo XXX cm; RE medindo XXX cm).
-Parênquima renal com espessura e diferenciação córtico-medular preservadas, bilateralmente.
-Ausência de cálculos ou dilatação dos sistemas coletores.
-Bexiga de paredes finas e regulares, com conteúdo anecogênico habitual.
-Aorta de aspecto habitual à altura dos hilos renais.
-Doppler espectral das artérias renais de padrão e velocidades preservados nos segmentos acessíveis.
-A relação entre as velocidades sistólicas das artérias renais e da aorta está dentro dos limites da normalidade.
-Padrões espectrais habituais das artérias intra-renais.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-COLAR AS TABELAS DO DOPPLER RENAL AQUI
-
-
-
-
-FÍGADO
-
-As porções visualizadas do fígado, predominantemente pela via intercostal, apresentam tamanho, contornos e ecotextura preservados.
-
-Fígado com contornos regulares, apresentando dimensões acima dos limites superiores da normalidade (LD mede cerca de XX cm; LE mede cerca de XX cm), além de aumento difuso de sua ecogenicidade parenquimatosa.
-Leve hepatomegalia, associada a aumento difuso da ecogenicidade hepática, comumente relacionado a esteatose.
-
-Fígado com dimensões normais e contornos regulares, apresentando aumento difuso de sua ecogenicidade parenquimatosa, comumente relacionado a esteatose.
-Aumento difuso da ecogenicidade hepática, comumente relacionado a esteatose.
-
-Fígado com dimensões normais e contornos regulares, apresentando aumento difuso de sua ecogenicidade parenquimatosa, o que atenua a penetração do feixe sonoro e prejudica a avaliação de segmentos posteriores.
-Aumento difuso da ecogenicidade hepática, comumente relacionado a esteatose.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de diminutos focos ecogênicos, produtores de sombra acústica posterior, localizados no segmento ???, de aspecto residual/cicatricial.
-Granulomas hepáticos calcificados.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de formação cística, de aspecto simples, medindo cerca de XX cm, identificada no segmento ???.
-Cisto hepático simples.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de imagem nodular hiperecogênica, circunscrita, medindo cerca de XX cm, localizada no segmento ???.
-Nódulo hepático hiperecogênico, sugestivo de hemangioma.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de duas imagens nodulares, circunscritas, com área de hiperecogenicidade central e halo hipoecogênico (aspecto “em alvo”), medindo cerca de XX cm e XX cm, localizadas nos segmentos XX e XX, respectivamente.
-Nódulos hepáticos conforme descrito, de aspecto inespecífico ao método. A critério clínico, a complementação propedêutica com outro método de imagem poderá fornecer maiores subsídios diagnósticos.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de imagem nodular hipoecogênica, circunscrita, medindo cerca de XX cm, localizada no segmento XX.
-Nódulo hepático hipoecogênico, de aspecto inespecífico ao método. A critério clínico, o estudo por ressonância magnética com contraste hepatobiliar poderá fornecer maiores subsídios diagnósticos.
-
-Fígado com dimensões normais, contornos regulares e ecotextura homogênea, exceto pela presença de múltiplos diminutos focos ecogênicos, com reverberação acústica posterior, esparsos pelo parênquima hepático.
-Múltiplos diminutos focos ecogênicos, com reverberação acústica posterior, esparsos pelo parênquima hepático, sugestivos de hamartomas biliares.
-
-
-
-VESÍCULA BILIAR
-
-Vesícula biliar não visualizada (paciente relata colecistectomia).
-
-Vesícula biliar repleta de cálculos, limitando a avaliação das dimensões dos mesmos, sem evidentes sinais inflamatórios associados.
-
-Vesícula biliar de paredes lisas e finas, contendo múltiplos cálculos de dimensões infracentimétricas em seu interior, móveis às mudanças de decúbito.
-Colelitíase, sem evidentes sinais inflamatórios associados.
-
-Vesícula biliar de paredes lisas e finas, apresentando conteúdo espesso compatível com lama biliar em seu interior, além de alguns cálculos, móveis às mudanças de decúbito, o maior medindo cerca de XX cm.
-Colelitíase, sem evidentes sinais inflamatórios associados.
-
-Vesícula biliar normodistendida, de paredes lisas e finas, contendo pequena quantidade de conteúdo espesso em seu interior, móvel às mudanças de decúbito.
-Pequena quantidade de conteúdo espesso no interior da vesícula biliar, sugestivo de lama biliar.
-
-Vesícula biliar com conteúdo anecogênico habitual, sem cálculos, de paredes finas, com pequena imagem ecogênica, não produtora de fenômeno acústico posterior, medindo cerca de XX cm, aderida à sua parede XXX.
-Imagem ecogênica aderida à parede XXX da vesícula biliar, sugestiva de pólipo.
-
-Vesícula biliar não caracterizada, observando-se, em sua topografia, interface linear hiperecogênica, de aspecto curvilíneo, com marcada sombra acústica posterior.
-Vesícula biliar não caracterizada, observando-se, em sua topografia, interface linear hiperecogênica, de aspecto curvilíneo, com marcada sombra acústica posterior (Colelitíase? Vesícula “em porcelana”?). A critério clínico, a complementação com outro método de imagem poderá trazer maiores subsídios diagnósticos.
-
-
-
-RINS
-
-Formação cística, de aspecto simples, medindo cerca de XXX cm, localizada em situação cortical, no terço XXX do rim direito.
-Formação cística, de aspecto simples, medindo cerca de XXX cm, localizada em situação cortical, no terço XXX do rim esquerdo.
-Formações císticas, de aspecto simples, medindo cerca de XX cm e XX cm, localizadas em situação cortical, nos terços superior/médio/inferior e superior/médio/inferior do rim XXX, respectivamente.
-
-Cisto renal simples à direita.
-Cisto renal simples à esquerda.
-Cistos renais simples bilaterais.
-
-Imagem anecogênica, ovalada, medindo cerca de 1,7 cm no seu maior eixo, identificada no terço inferior da pelve renal esquerda.
-Imagem anecogênica no terço inferior da pelve renal esquerda, podendo corresponder a cisto parapiélico ou a ectasia calicinal.
-
-Formação cística, de aspecto simples, medindo cerca de XX cm, localizada no seio renal à direita/esquerda, no terço renal XX.
-Cisto renal parapiélico à direita/esquerda.
-
-Múltiplas formações císticas, de aspecto simples, identificadas no seio renal direito/esquerdo, a maior medindo cerca de XX cm, localizada no terço renal superior/médio/inferior.
-Cistos peripiélicos no rim direito/esquerdo.
-
-Imagem hiperecogênica, produtora de sombra acústica posterior, medindo cerca de XX cm, localizada no grupo calicinal superior/médio/inferior do rim direito.
-Nefrolitíase à direita, não obstrutiva.
-
-Imagem hiperecogênica, produtora de sombra acústica posterior, medindo cerca de XX cm, localizada no grupo calicinal superior/médio/inferior do rim esquerdo.
-Nefrolitíase à esquerda, não obstrutiva.
-
-Imagens hiperecogênicas, produtoras de sombra acústica posterior, identificadas no sistema pielocalicinal de ambos os rins, assim distribuídas:
-- grupo calicinal superior do rim direito, medindo cerca de XX cm;
-- grupo calicinal médio do rim direito, medindo cerca de XX cm;
-- grupo calicinal inferior do rim direito, medindo cerca de XX cm;
-- grupo calicinal superior do rim esquerdo, medindo cerca de XX cm;
-- grupo calicinal médio do rim esquerdo, medindo cerca de XX cm;
-- grupo calicinal inferior do rim esquerdo, medindo cerca de XX cm.
-Nefrolitíase bilateral, não obstrutiva.
-
-Observam-se ao menos três imagens hiperecogênicas, produtoras de sombra acústica posterior, localizadas no sistema pielocalicinal do rim XX, assim distribuídas:
-- grupo calicinal inferior, medindo cerca de XX cm;
-- grupo calicinal médio, medindo cerca de XX cm;
-- grupo calicinal superior, medindo cerca de XX cm.
-
-Pequena imagem nodular hiperecogênica, circunscrita, medindo 0,XX cm, localizada na cortical do terço XX do rim direito/esquerdo.
-Alteração focal na cortical do rim direito/esquerdo, podendo estar relacionada a alteração fibrocicatricial ou mesmo a angiomiolipoma.
-
-Imagem nodular hiperecogênica, circunscrita, medindo cerca de XX cm, localizada em situação cortical, no terço XX do rim direito/esquerdo.
-Imagem nodular hiperecogênica na cortical do rim direito/esquerdo, podendo corresponder a angiomiolipoma. A critério clínico, a complementação propedêutica com outro método de imagem poderá fornecer maiores subsídios diagnósticos.
-
-Rim XX com dimensões e diferenciação córtico-medular preservadas, apresentando pequena área hiperecogênica, de aspecto triangular, localizada em situação subcapsular, na face anterior, na junção dos terços renais superior e médio, podendo corresponder a área de defeito juncional parenquimatoso (variação anatômica).
-
-Leve aumento difuso da ecogenicidade de pirâmides renais bilateralmente.
-Leve aumento difuso da ecogenicidade de pirâmides renais bilateralmente, devendo-se considerar entre as hipóteses diagnósticas a possibilidade de nefrocalcinose medular. A correlação clínico-laboratorial e, a critério clínico, o controle imaginológico poderão fornecer maiores subsídios diagnósticos.
-
-Bexiga com cerca de XX ml de conteúdo anecogênico habitual, apresentando paredes levemente espessadas e trabeculadas.
-Leves espessamento e trabeculação das paredes vesicais, sugerindo bexiga de esforço.
-
-
-BAÇO
-
-Baço com dimensões normais e contornos regulares, apresentando ecotextura heterogênea, à custa de alguns diminutos focos ecogênicos, não produtores de fenômeno acústico posterior, esparsos pelo seu parênquima.
-Diminutos focos ecogênicos esparsos pelo parênquima esplênico, podendo corresponder a corpos de Gamna-Gandy.
-
-
-PÂNCREAS
-
-Pâncreas com avaliação prejudicada, devido à interposição de alças intestinais.
-
-Cabeça do pâncreas com dimensões, contornos e ecogenicidade normais. Corpo e cauda pancreática com avaliação prejudicada, devido à interposição de alças intestinais.
-
-
-
-AORTA
-
-Aorta abdominal ateromatosa, sem dilatações evidentes.
-Ateromatose aórtica.
-
-Aorta abdominal com avaliação prejudicada, devido à interposição de alças intestinais.
-
-
-PRÓSTATA
-
-Próstata e vesículas seminais não identificadas (paciente relata prostatectomia radical).
-Status pós-prostatectomia radical.
-
-Bexiga com cerca de XX ml de conteúdo anecogênico habitual, apresentando paredes levemente espessadas e trabeculadas.
-Leves espessamento e trabeculação das paredes vesicais, sugerindo bexiga de esforço.
-
-Leve heterogeneidade do parênquima prostático, sendo mais evidente na região central (correlacionar com exame físico e laboratorial).
-
-
-OBSERVAÇÕES
-
-.: Exame de avaliação prejudicada, devido à intensa interposição de gases intestinais. Na dependência de achados clínicos e laboratoriais e evolução do quadro clínico, controle ecográfico e/ou complementação por outro método de imagem poderá trazer informações adicionais.
-
-.: A grande quantidade de gases intestinais prejudicou a avaliação adequada. Na dependência de achados clínicos e laboratoriais e evolução do quadro clínico, controle ecográfico e/ou complementação por outro método de imagem poderia trazer melhor subsídio diagnóstico.
-
-.: Útero e anexos sem nítidas alterações quando avaliados via abdominal. A critério clínico, o estudo endovaginal poderá trazer maior subsídio diagnóstico.
-
-.: Útero sem nítidas alterações quando avaliado via abdominal. Anexos não identificados, devido à interposição de alças intestinais. A critério clínico, o estudo endovaginal poderá trazer maior subsídio diagnóstico.
-
-.: Endométrio espessado e heterogêneo, medindo cerca de xx cm, quando avaliado via abdominal. A critério clínico, o estudo endovaginal poderá trazer maior subsídio diagnóstico.
-
-.: Endométrio medindo cerca de xx cm, quando avaliado via abdominal. A critério clínico, o estudo endovaginal poderá trazer maior subsídio diagnóstico.
-
-
-.: Como achado adicional, destacamos próstata de dimensões aumentadas, a ser melhor avaliada, a critério clínico, com estudo específico.
-
-.: Como achado adicional, destacamos aumento difuso da ecogenicidade hepática, comumente relacionado a esteatose, a ser melhor avaliado, a critério clínico, com estudo dirigido.
-
-A critério clínico, a correlação com outro método de imagem poderá fornecer maiores subsídios diagnósticos.
-
-A critério clínico, sugere-se complementação propedêutica com outro método de imagem para obtenção de maiores subsídios diagnósticos.
-
-Sugere-se, salvo contraindicação clínica, complementação com estudo citopatológico.
-ULTRASSONOGRAFIA DA REGIÃO INGUINAL DIREITA
-
-RESULTADO:
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Herniação de alças intestinais e conteúdo gorduroso para o interior do canal inguinal direito, observada às manobras de esforço, com anel herniário medindo cerca de XX cm. Ao repouso o conteúdo herniário mostrou-se redutível.
-Não se identificam coleções e/ou lesão expansiva na região estudada.
-Linfonodos regionais de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Hérnia inguinal à direita, sem sinais de encarceramento.
-
-
-ULTRASSONOGRAFIA DA REGIÃO INGUINAL DIREITA
-
-RESULTADO:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Musculatura regional com padrão sonográfico habitual.
-Não foram visualizadas herniações às manobras provocativas.
-Não foram visualizadas massas ou coleções na região estudada.
-Linfonodos regionais de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-
-ULTRASSONOGRAFIA DA REGIÃO INGUINAL ESQUERDA
-
-RESULTADO:
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Herniação de alças intestinais e conteúdo gorduroso para o interior do canal inguinal esquerdo, observada às manobras de esforço, com anel herniário medindo cerca de XX cm. Ao repouso o conteúdo herniário mostrou-se redutível.
-Não se identificam coleções e/ou lesão expansiva na região estudada.
-Linfonodos regionais de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Hérnia inguinal à esquerda, sem sinais de encarceramento.
-
-
-ULTRASSONOGRAFIA DA REGIÃO INGUINAL ESQUERDA
-
-RESULTADO:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Musculatura regional com padrão sonográfico habitual.
-Não foram visualizadas herniações às manobras provocativas.
-Não foram visualizadas massas ou coleções na região estudada.
-Linfonodos regionais de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-Ectasia de vasos no interior do cordão espermático XXX, sugerindo varicocele. Sugere-se, a critério clínico, avaliação complementar com o estudo Doppler da bolsa escrotal.
-
-
-
-
-
-
-ULTRASSONOGRAFIA DE BOLSA ESCROTAL
-
-RESULTADO:
-Testículos tópicos, com morfologia e ecotextura normais.
-Epidídimos com espessura e ecotextura normais.
-Veias do plexo pampiniforme de calibre normal, bilateralmente.
-Ausência de hidrocele.
-
-Volumetria:
-- Testículo direito: xx cm, com volume de cerca de xx cm³.
-- Testículo esquerdo: xx cm, com volume de cerca de xx cm³.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
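-
-.: Para a volumetria testicular acima, esboço ilustrativo com os dois coeficientes citados na literatura (0,52 para o modelo elipsoide e 0,71 para a fórmula de Lambert); cabe ao serviço definir qual adotar. Valores de exemplo hipotéticos.
-
-def volume_testicular_cm3(comp_cm, larg_cm, alt_cm, coeficiente=0.52):
-    # coeficiente 0,52: modelo elipsoide; 0,71: fórmula de Lambert
-    return comp_cm * larg_cm * alt_cm * coeficiente
-
-# Exemplo hipotético: testículo de 4,0 x 2,5 x 3,0 cm -> ~15,6 cm3 (elipsoide) ou ~21,3 cm3 (Lambert)
-print(round(volume_testicular_cm3(4.0, 2.5, 3.0), 1))
-print(round(volume_testicular_cm3(4.0, 2.5, 3.0, coeficiente=0.71), 1))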
-
-ULTRASSONOGRAFIA DE BOLSA ESCROTAL COM DOPPLER
-
-RESULTADO:
-Testículos tópicos, com morfologia normal, ecotextura homogênea e fluxo preservado ao estudo Doppler.
-Epidídimos com espessura e ecotextura normais.
-Veias do plexo pampiniforme de calibre normal, bilateralmente.
-Ausência de hidrocele.
-
-Volumetria:
-- Testículo direito: xx cm, com volume de cerca de xx cm³.
-- Testículo esquerdo: xx cm, com volume de cerca de xx cm³.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DE BOLSA ESCROTAL
-
-DIREITA
-Testículo direito tópico, com morfologia e ecotextura normais.
-Volumetria: xx cm, com volume de cerca de xx cm³.
-Epidídimo direito com espessura e ecotextura normais.
-Veias do plexo pampiniforme direito de calibre normal.
-Ausência de hidrocele à direita.
-
-ESQUERDA
-Testículo esquerdo tópico, com morfologia e ecotextura normais.
-Volumetria: xx cm, com volume de cerca de xx cm³.
-Epidídimo esquerdo com espessura e ecotextura normais.
-Veias do plexo pampiniforme esquerdo de calibre normal.
-Ausência de hidrocele à esquerda.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-TESTÍCULOS ECTÓPICOS
-
-ULTRASSONOGRAFIA DE BOLSA ESCROTAL
-
-RESULTADO:
-Bolsa escrotal vazia.
-Testículos ectópicos, identificados no interior dos respectivos canais inguinais, com morfologia e ecotextura normais.
-Veias do plexo pampiniforme de calibre normal, bilateralmente.
-Ausência de hidrocele.
-
-Volumetria:
-- Testículo direito: XXX cm, com volume de cerca de XX cm³.
-- Testículo esquerdo: XXX cm, com volume de cerca de XX cm³.
-
-IMPRESSÃO:
-Testículos ectópicos, identificados no interior dos respectivos canais inguinais, estando a bolsa escrotal vazia.
-
-
-ULTRASSONOGRAFIA DE BOLSA ESCROTAL
-
-RESULTADO:
-Bolsa escrotal vazia.
-Testículo direito identificado na proximidade do canal inguinal ipsilateral, com morfologia e ecotextura normais.
-Volumetria: xx cm, com volume de cerca de xx cm³.
-Epidídimo direito com espessura e ecotextura normais.
-Veias do plexo pampiniforme direito de calibre normal.
-Ausência de hidrocele à direita.
-Testículo esquerdo identificado na porção distal do canal inguinal ipsilateral, com morfologia e ecotextura normais.
-Volumetria: xx cm, com volume de cerca de xx cm³.
-Epidídimo esquerdo com espessura e ecotextura normais.
-Veias do plexo pampiniforme esquerdo de calibre normal.
-Ausência de hidrocele à esquerda.
-
-IMPRESSÃO:
-Testículos ectópicos, identificados no interior dos respectivos canais inguinais, estando a bolsa escrotal vazia.
-
-
-Testículo direito/esquerdo tópico, apresentando dimensões reduzidas e ecotextura difusamente heterogênea.
-Testículo direito/esquerdo atrófico e heterogêneo.
-
-Epidídimo direito/esquerdo com ecotextura e espessura normais, exceto pela presença de pequena formação cística, de aspecto simples, medindo cerca de 0,XX cm, identificada na cabeça epididimal.
-Pequeno cisto epididimal à direita/esquerda.
-
-
-Formação cística de paredes finas e com septos em seu interior, medindo cerca de XX cm (L x AP x T), identificada junto ao polo superior do testículo direito, de provável etiologia epididimal.
-
-Volumosa hidrocele à direita, estendendo-se ao canal inguinal ipsilateral.
-Volumosa hidrocele comunicante à direita.
-
-Análise ao modo Doppler identificou moderado/acentuado aumento da vascularização no epidídimo e testículo direito/esquerdo.
-
-Espessamento e heterogeneidade da cauda do epidídimo esquerdo, observando-se ainda fluxo vascular aumentado ao estudo Doppler.
-Sinais sugestivos de epididimite à esquerda, sendo necessária a adequada correlação clínico-laboratorial.
-
-Dilatação e tortuosidade de vasos do plexo pampiniforme esquerdo, medindo em repouso até 0,XX cm (normal < 0,3 cm), com acentuação e refluxo às manobras de Valsalva.
-Varicocele à esquerda.
-
-Notam-se formações tubulares, anecóicas, enoveladas, cujo calibre máximo mede cerca de 0,4 cm, projetadas no plexo pampiniforme esquerdo e que ao estudo Doppler apresentaram fluxo em seu interior. Observa-se discreta dilatação e refluxo às manobras de Valsalva.
-Varicocele à esquerda.
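-
-.: Com base nos critérios citados nos modelos de varicocele acima (calibre em repouso com referência < 0,3 cm e refluxo à manobra de Valsalva), segue um esboço ilustrativo de verificação; limiar e nomes de variáveis são apenas exemplificativos, não regra diagnóstica.
-
-def sugere_varicocele(calibre_repouso_cm, refluxo_valsalva, limiar_cm=0.3):
-    # Dilatação acima do limiar em repouso associada a refluxo à manobra de Valsalva
-    return calibre_repouso_cm >= limiar_cm and refluxo_valsalva
-
-# Exemplo hipotético: veia de 0,38 cm em repouso, com refluxo à Valsalva -> True
-print(sugere_varicocele(0.38, True))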
-
-Imagem hiperecogênica, formadora de evidente sombra acústica posterior, localizada no interior da hemibolsa escrotal direita, móvel, medindo cerca de 0,4 cm.
-Pequena imagem hiperecogênica no interior da hemibolsa escrotal direita, móvel, podendo corresponder a escrotolito.
-
-https://www.slideshare.net/shaffar75/doppler-ultrasound-of-acute-scrotum-23607822
-ULTRASSONOGRAFIA TRANSVAGINAL
-
-RESULTADO:
-Colo uterino de aspecto habitual.
-Útero em AVF, de contornos regulares, ecotextura miometrial homogênea e dimensões preservadas, medindo xxx cm (L x AP x T), com volume estimado em cerca de XX cm³.
-Eco endometrial hiperecóico, homogêneo e regular, medindo 0,XX cm de espessura.
-Ovário direito com dimensões e ecotextura normais, contendo pequenas imagens anecóicas de aspecto funcional. Dimensões: XXX cm, com volume de XX cm³.
-Ovário esquerdo com dimensões e ecotextura normais, contendo pequenas imagens anecóicas de aspecto funcional. Dimensões: XXX cm, com volume de XX cm³.
-Ausência de líquido livre no fundo de saco posterior.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA PÉLVICA TRANSABDOMINAL
-
-RESULTADO:
-Bexiga de paredes finas e regulares, com conteúdo anecogênico habitual.
-Útero em AVF, de contornos regulares, ecotextura miometrial homogênea e dimensões preservadas, medindo XXX cm (L x AP x T), com volume estimado em cerca de XX cm³.
-Eco endometrial hiperecóico, homogêneo e regular, medindo 0,xx cm de espessura.
-Ovário direito com dimensões e ecotextura normais, contendo pequenas imagens anecóicas de aspecto funcional. Dimensões: XXX cm, com volume de XX cm³.
-Ovário esquerdo com dimensões e ecotextura normais, contendo pequenas imagens anecóicas de aspecto funcional. Dimensões: XXX cm, com volume de XX cm³.
-Ausência de líquido livre no fundo de saco posterior.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-Colo uterino contendo diminutos cistos de retenção (Naboth).
-Colo uterino contendo cistos de retenção (Naboth), o maior medindo cerca de XX cm.
-
-Útero em AVF, de contornos regulares, medindo XXX cm (L x AP x T), com volume estimado em cerca de XX cm³, compatível com a paridade.
-Ecotextura miometrial difusamente heterogênea, destacando-se imagem nodular hipoecogênica, circunscrita, medindo cerca de XXX cm (L x AP x T), localizada na parede XXX, em situação intramural / subserosa / submucosa, com componente subseroso / submucoso.
-
-Útero em AVF, de ecotextura miometrial heterogênea e dimensões aumentadas, medindo XXX cm (L x AP x T), com volume estimado em cerca de XX cm³, à custa de formações nodulares miometriais, hipoecogênicas, de contornos regulares, assim localizadas:
-- na parede corporal anterior / posterior, em situação intramural / submucosa / subserosa, com componente submucoso / subseroso, medindo cerca de XXX cm (L x AP x T);
-- na região fúndica, em situação intramural / submucosa / subserosa, com componente submucoso / subseroso, medindo cerca de XXX cm (L x AP x T);
-
-Útero em AVF, de ecotextura miometrial heterogênea e dimensões aumentadas, medindo XXX cm (L x AP x T), com volume estimado em cerca de XX cm³, à custa de formações nodulares miometriais, de contornos regulares, assim caracterizadas:
-- nódulo hipoecogênico, localizado na XX, em situação intramural, com componente submucoso/subseroso, medindo cerca de XXX cm (L x AP x T);
-- nódulo heterogêneo, predominantemente hipoecogênico, com pequenas áreas císticas e calcificação de permeio, localizado na XX, em situação intramural, com componente submucoso/subseroso, medindo cerca de XXX cm (L x AP x T);
-
-Nódulo miometrial, conforme descrito, sugestivo de mioma.
-Nódulos miometriais, conforme descrito, sugestivos de miomas.
-
-Presença de istmocele em cicatriz de cesárea, medindo 1,8 x 1,2 x 0,6 cm (T x AP x L) e acometendo cerca de 75% da espessura miometrial.
-Istmocele em cicatriz de cesárea, conforme descrito.
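-
-.: Para o percentual de acometimento da espessura miometrial citado na descrição da istmocele, esboço ilustrativo do cálculo (profundidade da falha dividida pela espessura miometrial total no mesmo ponto); valores hipotéticos, devendo a forma exata de medida seguir o protocolo do serviço.
-
-def acometimento_miometrial_pct(profundidade_falha_cm, espessura_miometrial_total_cm):
-    # Percentual da espessura miometrial acometida pelo nicho (istmocele) na cicatriz de cesárea
-    return 100.0 * profundidade_falha_cm / espessura_miometrial_total_cm
-
-# Exemplo hipotético: falha de 0,6 cm em miométrio com espessura total de 0,8 cm -> 75%
-print(round(acometimento_miometrial_pct(0.6, 0.8)))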
-
-Leve dilatação e tortuosidade de vasos pélvicos à xxx.
-Aparente duplicação da cavidade uterina próximo à região fúndica, com tecido miometrial interposto.
-Aparente duplicação da cavidade uterina próximo à região fúndica, com tecido miometrial interposto. Considerar a possibilidade de útero septado parcial.
-
-Ecotextura miometrial leve e difusamente heterogênea, destacando-se ainda pequenas imagens císticas na zona juncional e leve indefinição da relação endométrio-miométrio.
-Útero heterogêneo, com pequenas imagens císticas na zona juncional e leve indefinição da relação endométrio-miométrio, podendo estar relacionado a adenomiose. A critério clínico, a complementação propedêutica com o estudo por ressonância magnética poderá fornecer maiores subsídios diagnósticos.
-
-Ecotextura miometrial difusamente heterogênea, com áreas císticas e ilhas hiperecogênicas de permeio, associado a indefinição da junção endométrio-miométrio.
-Útero com ecotextura miometrial difusamente heterogênea, com áreas císticas e ilhas hiperecogênicas de permeio, associado a indefinição da junção endométrio-miométrio, podendo corresponder a adenomiose.
-
-Eco endometrial hiperecóico, medindo XX cm de espessura, contendo em seu interior, na região fúndica, imagem nodular hiperecogênica, com pedículo vascular, medindo cerca de XXX cm (L x AP x T).
-
-Eco endometrial não caracterizado, muito provavelmente relacionado a atrofia, sendo a cavidade uterina virtual.
-
-Dispositivo intra-uterino em posição habitual no interior da cavidade uterina, distando cerca de 0,X cm da cavidade fúndica e XX cm da serosa fúndica.
-Dispositivo intra-uterino em posição habitual no interior da cavidade uterina.
-
-Cavidade uterina apresentando imagem nodular, sólida, hiperecogênica, na região fúndica, medindo XX cm.
-Imagem nodular na cavidade uterina. Considerar possibilidade de pólipo endometrial.
-
-(GOMES) Imagem arredondada, homogênea, hiperecogênica, medindo cerca de xx cm, localizada no interior da cavidade endometrial.
-Aspecto ecográfico pode corresponder a pólipo endometrial.
-
-Ovário XX com ecotextura normal, contendo pequenas imagens anecóicas de aspecto funcional, apresentando dimensões discretamente aumentadas. Dimensões: XX cm, com volume de XX cm³.
-Ovário XX discretamente aumentado de volume, com parênquima de aspecto ecográfico preservado.
-
-Ovário direito de dimensões aumentadas, à custa de formação cística de aspecto simples, medindo cerca de XXX cm, localizada em seu interior. Dimensões ovarianas: XXX cm, com volume de XX cm³.
-Cisto ovariano simples à direita.
-
-Ovário esquerdo de dimensões aumentadas, à custa de formação cística de aspecto simples, medindo cerca de XXX cm, localizada em seu interior. Dimensões ovarianas: XXX cm, com volume de XX cm³.
-Cisto ovariano simples à esquerda.
-
-Ovário direito de dimensões aumentadas, à custa de formação cística de paredes ligeiramente espessas, com moderados debris e traves ecogênicas de permeio, medindo cerca de XXX cm. Dimensões ovarianas: XXX cm, com volume de XX cm³.
-Cisto ovariano à direita, de paredes ligeiramente espessas, com moderados debris e traves ecogênicas de permeio (cisto hemorrágico?). A critério clínico, sugere-se controle evolutivo ultrassonográfico.
-
-Ovário esquerdo de dimensões aumentadas, à custa de formação cística de paredes ligeiramente espessas, com moderados debris e traves ecogênicas de permeio, medindo cerca de XXX cm. Dimensões ovarianas: XXX cm, com volume de XX cm³.
-Cisto ovariano à esquerda, de paredes ligeiramente espessas, com moderados debris e traves ecogênicas de permeio (cisto hemorrágico?). A critério clínico, sugere-se controle evolutivo ultrassonográfico.
-
-Ovário XX de dimensões aumentadas, à custa de formação cística, com ecos de baixa intensidade homogeneamente distribuídos em seu interior, medindo cerca de XXX cm, com volume de XX cm³.
-Cisto ovariano à XX, com ecos de baixa intensidade homogeneamente distribuídos em seu interior, devendo-se considerar, entre as hipóteses diagnósticas, a possibilidade de endometrioma.
-
-Ovário esquerdo de dimensões levemente reduzidas em relação ao esperado para a menacme. Achado que deve ser valorizado na dependência de estrita correlação clínico-laboratorial.
-Ovários não visualizados devido à interposição gasosa de alças intestinais.
-Regiões anexiais sem evidentes anormalidades.
-
-Ovário direito não visualizado devido à interposição gasosa de alças intestinais.
-Região anexial direita sem evidentes anormalidades.
-
-Ovário esquerdo não visualizado devido à interposição gasosa de alças intestinais.
-Região anexial esquerda sem evidentes anormalidades.
-
-Formação cística, de aspecto simples, medindo XXX cm, identificada em situação paraovariana XXX.
-Cisto paraovariano à XXX, de aspecto simples.
-
-Ovários de dimensões aumentadas, contendo múltiplas imagens arredondadas, anecogênicas, infracentimétricas, de distribuição predominantemente periférica, bilateralmente.
-Ovário direito mede XX cm, com volume de XX cm³.
-Ovário esquerdo mede XX cm, com volume de XX cm³.
-Ovários de dimensões aumentadas, contendo múltiplos folículos de distribuição predominantemente periférica, bilateralmente, o que pode estar relacionado a ovários policísticos. Indicada correlação clínico-laboratorial.
-
-Ovário direito de dimensões aumentadas/normais, contendo múltiplas imagens arredondadas, anecóicas, infracentimétricas, de distribuição periférica, associadas a estroma central hiperecogênico. Dimensões ovarianas: xxx cm, com volume de XX cm³.
-Ovário esquerdo de dimensões aumentadas/normais, contendo múltiplas imagens arredondadas, anecóicas, infracentimétricas, de distribuição periférica, associadas a estroma central hiperecogênico. Dimensões ovarianas: xxx cm, com volume de XX cm³.
-Múltiplos folículos ovarianos de distribuição periférica, bilateralmente, podendo estar relacionados a ovários policísticos. Indicada correlação clínico-laboratorial.
-
-Ovários com estroma central hiperecogênico e homogêneo, apresentando múltiplas imagens anecogênicas, com diâmetros menores que 10 mm, dispersas pela periferia, bilateralmente.
-Ovários micropolicísticos.
-
-
-
-ULTRASSONOGRAFIA OBSTÉTRICA
-
-INFORMAÇÕES CLÍNICAS:
-Gestação de XX semanas e XX dias, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Feto único, em situação longitudinal, apresentação cefálica/pélvica, com dorso à direita/esquerda.
-Movimentos somáticos fetais presentes.
-Batimentos cardíacos rítmicos, com frequência de xx batimentos por minuto.
-Placenta de inserção xxx, homogênea/heterogênea, grau xx, com espessura aproximada de xx cm.
-Cordão umbilical de morfologia habitual.
-Volume de líquido amniótico normal (ILA: XX cm).
-
-BIOMETRIA FETAL:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-IMPRESSÃO:
-Biometria fetal compatível com gestação de xx semanas e xx dia(s) de evolução (+/- XX semanas).
-Crescimento fetal no percentil XX, com base no exame prévio do primeiro trimestre.
-
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
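-
-.: Para o campo "peso fetal aproximado", esboço ilustrativo baseado em uma das fórmulas de Hadlock (1985), que utiliza CC, CA e CF em centímetros; os coeficientes abaixo devem ser conferidos com a referência original ou com o software do equipamento antes de qualquer uso.
-
-def peso_fetal_estimado_g(cc_cm, ca_cm, cf_cm):
-    # Hadlock (CC/CA/CF): log10(peso em g) = 1,326 - 0,00326*CA*CF + 0,0107*CC + 0,0438*CA + 0,158*CF
-    log10_peso = 1.326 - 0.00326 * ca_cm * cf_cm + 0.0107 * cc_cm + 0.0438 * ca_cm + 0.158 * cf_cm
-    return 10 ** log10_peso
-
-# Exemplo hipotético: CC = 28,0 cm, CA = 24,0 cm, CF = 5,5 cm -> ~1,3 kg
-print(round(peso_fetal_estimado_g(28.0, 24.0, 5.5)))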
-
-ULTRASSONOGRAFIA OBSTÉTRICA
-
-INFORMAÇÕES CLÍNICAS:
-Gestação de XX semanas e XX dias, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Feto único, em situação longitudinal, apresentação cefálica/pélvica, com dorso à direita/esquerda.
-Movimentos somáticos fetais presentes.
-Batimentos cardíacos rítmicos, com frequência de xx batimentos por minuto.
-Placenta de inserção anterior/posterior, homogênea, grau zero, com espessura aproximada de xx cm.
-Cordão umbilical de morfologia habitual.
-Volume de líquido amniótico normal à análise subjetiva.
-
-BIOMETRIA FETAL:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-IMPRESSÃO:
-Biometria fetal compatível com gestação de xx semanas e xx dia(s) de evolução (+/- XX dias).
-Crescimento fetal no percentil XX, com base no exame prévio do primeiro trimestre.
-
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
-ULTRASSONOGRAFIA OBSTÉTRICA
-
-INFORMAÇÕES CLÍNICAS:
-Gestação de XX semanas e XX dias, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Colo uterino de aspecto habitual.
-Útero globoso, miométrio de aspecto habitual e cavidade uterina contendo saco gestacional normoimplantado e de morfologia normal.
-Feto único, com comprimento crânio-nádegas de XX cm e movimentação ativa.
-Batimentos cardíacos rítmicos, com frequência de xx batimentos por minuto.
-Translucência nucal medindo XX mm (abaixo do percentil 95 para o CCN encontrado).
-Osso nasal presente.
-Placenta de inserção anterior/posterior, homogênea, grau zero, com espessura aproximada de xx cm.
-Ovários de aspecto habitual.
-
-IMPRESSÃO:
-Gestação tópica, única, com feto de XX semanas e XX dias (+/- 05 dias), com base no comprimento crânio-nádegas.
-
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
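-
-.: Para a idade gestacional estimada pelo comprimento crânio-nádegas (CCN) nos modelos de primeiro trimestre acima, esboço ilustrativo usando a fórmula de Robinson (CCN em milímetros); coeficientes a conferir com a referência adotada pelo serviço.
-
-import math
-
-def idade_gestacional_por_ccn(ccn_mm):
-    # Robinson: IG (dias) = 8,052 x raiz quadrada do CCN (mm) + 23,73
-    dias = int(8.052 * math.sqrt(ccn_mm) + 23.73)
-    return dias // 7, dias % 7  # (semanas, dias)
-
-# Exemplo hipotético: CCN de 60 mm -> aproximadamente 12 semanas e 2 dias
-semanas, dias = idade_gestacional_por_ccn(60.0)
-print(semanas, "semanas e", dias, "dias")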
-
-
-
-
-
-
-
-ULTRASSONOGRAFIA OBSTÉTRICA TRANSVAGINAL
-
-INFORMAÇÕES CLÍNICAS:
-Gestação de XX semanas e XX dias, de acordo com a data da última menstruação (XXXX).
-
-RESULTADO:
-Colo uterino de aspecto habitual.
-Útero globoso, miométrio de aspecto habitual e cavidade uterina contendo saco gestacional normoimplantado e de morfologia normal, com diâmetro médio de XX cm.
-Embrião único, com comprimento crânio-nádegas de XX cm e movimentação ativa.
-Batimentos cardíacos presentes.
-Vesícula vitelínica íntegra e de dimensões preservadas, com diâmetro de XX cm.
-Placentação de aspecto habitual.
-Ovários de aspecto habitual.
-
-IMPRESSÃO:
-Gestação tópica, única, com embrião de XX semanas e XX dias (+/- XX dias), com base no comprimento crânio-nádegas.
-
-
-Gestação inviável: CCN ≥ 7 mm, sem BCE (batimentos cardíacos embrionários).
-
-ULTRASSONOGRAFIA OBSTÉTRICA TRANSVAGINAL
-
-RESULTADO:
-Colo uterino de aspecto habitual.
-Útero globoso, miométrio de aspecto habitual e cavidade uterina contendo saco gestacional normoimplantado e de morfologia normal.
-Embrião único, com comprimento crânio-nádegas de XX mm.
-Batimentos cardíacos não identificados.
-Vesícula vitelínica íntegra e de dimensões preservadas, com diâmetro de XX mm.
-Vesícula vitelínica íntegra, com diâmetro de XX mm (percentil 95 para a idade gestacional: XX mm).
-Reação trofoblástica de aspecto habitual.
-Ovários de aspecto habitual.
-
-IMPRESSÃO:
-Gestação tópica, com embrião de XX semanas e XX dias (+/- 05 dias), com base no comprimento crânio-nádegas, sem sinais de atividade cardíaca, sugerindo gestação inviável.
-ULTRASSONOGRAFIA OBSTÉTRICA COM DOPPLER
-
-INFORMAÇÕES CLÍNICAS:
-Gestação de XX semanas e XX dias, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Feto único, em situação longitudinal, apresentação cefálica/pélvica, com dorso à direita/esquerda.
-Movimentos somáticos fetais presentes.
-Batimentos cardíacos rítmicos, com frequência de xx batimentos por minuto.
-Placenta de inserção xxx, homogênea/heterogênea, grau xx, com espessura aproximada de xx cm.
-Cordão umbilical de morfologia habitual.
-Volume de líquido amniótico normal (ILA: XX cm).
-
-BIOMETRIA FETAL:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-DOPPLERFLUXOMETRIA:
-Artéria cerebral média: IR = XX / IP = XX
-Artéria umbilical: IR = XX / IP = XX
-Artéria uterina direita: IR = XX / IP = XX
-Artéria uterina esquerda: IR = XX / IP = XX
-IP médio das artérias uterinas: XX
-
-IMPRESSÃO:
-Biometria fetal compatível com gestação de xx semanas e xx dia(s) de evolução (+/- XX semanas).
-Crescimento fetal no percentil XX, com base em exame prévio do primeiro trimestre.
-Estudo Doppler dentro dos limites da normalidade para a idade gestacional.
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
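-
-.: Para os índices de resistência (IR) e de pulsatilidade (IP) citados na dopplerfluxometria acima, esboço ilustrativo das fórmulas usuais (VPS = velocidade de pico sistólico; VDF = velocidade diastólica final; Vm = velocidade média no ciclo); valores de exemplo hipotéticos.
-
-def indice_resistencia(vps, vdf):
-    # IR = (VPS - VDF) / VPS
-    return (vps - vdf) / vps
-
-def indice_pulsatilidade(vps, vdf, v_media):
-    # IP = (VPS - VDF) / Vm
-    return (vps - vdf) / v_media
-
-# Exemplo hipotético para artéria umbilical: VPS 40 cm/s, VDF 16 cm/s, Vm 25 cm/s -> IR 0,60 / IP 0,96
-print(round(indice_resistencia(40.0, 16.0), 2))
-print(round(indice_pulsatilidade(40.0, 16.0, 25.0), 2))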
-
-
-GEMELAR - DICORIÔNICA E DIAMNIÓTICA
-ULTRASSONOGRAFIA OBSTÉTRICA COM DOPPLER
-
-INFORMAÇÕES CLÍNICAS:
-Gestação gemelar de XX semanas e XX dias de evolução, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Útero gravídico, contendo gestação gemelar, aparentemente dicoriônica e diamniótica (de acordo com ultrassonografia realizada no primeiro trimestre).
-Feto A (saco gestacional 1) à direita do abdome materno, em situação longitudinal, com apresentação cefálica/pélvica e dorso à direita/esquerda, com movimentos somáticos presentes e batimentos cardíacos rítmicos, com frequência de XX batimentos por minuto.
-Feto B (saco gestacional 2) à esquerda do abdome materno, em situação longitudinal, com apresentação cefálica/pélvica e dorso à direita/esquerda, com movimentos somáticos presentes e batimentos cardíacos rítmicos, com frequência de XX batimentos por minuto.
-Placenta de inserção XX, homogênea/heterogênea, grau xx, com espessura aproximada de xx cm, no saco gestacional 1.
-Placenta de inserção XX, homogênea/heterogênea, grau xx, com espessura aproximada de xx cm, no saco gestacional 2.
-Maior bolsão de líquido amniótico no saco gestacional 1: XX mm (normal: 20 a 80 mm).
-Maior bolsão de líquido amniótico no saco gestacional 2: XX mm (normal: 20 a 80 mm).
-
-BIOMETRIA FETAL – FETO A:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-BIOMETRIA FETAL – FETO B:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-DOPPLERFLUXOMETRIA – FETO A:
-Artéria cerebral média: IR = 0,XX / IP = XX
-Artéria umbilical: IR = 0,XX / IP = XX
-
-DOPPLERFLUXOMETRIA – FETO B:
-Artéria cerebral média: IR = 0,XX / IP = XX
-Artéria umbilical: IR = 0,XX / IP = XX
-
-DOPPLERFLUXOMETRIA – ARTÉRIAS UTERINAS:
-Artéria uterina direita: IR = 0,XX / IP = XX
-Artéria uterina esquerda: IR = 0,XX / IP = XX
-IP médio das artérias uterinas: XX
-
-IMPRESSÃO:
-Gestação gemelar, aparentemente dicoriônica e diamniótica (de acordo com ultrassonografia realizada no primeiro trimestre), com biometria do feto A compatível com gestação de XX semanas e XX dia(s), e biometria do feto B compatível com gestação de XX semanas e XX dia(s).
-Crescimento do feto A no percentil XX e do feto B no percentil XX, com base em exame prévio do primeiro trimestre.
-Estudo Doppler dentro dos limites da normalidade para a idade gestacional.
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
-
-
-
-GEMELAR - CAVIDADE AMNIÓTICA E MASSA PLACENTÁRIA ÚNICA
-ULTRASSONOGRAFIA OBSTÉTRICA COM DOPPLER
-
-INFORMAÇÕES CLÍNICAS:
-Gestação gemelar de XX semanas e XX dias de evolução, de acordo com o exame prévio do dia XX, realizado com XX semanas e XX dias, em outro serviço.
-
-RESULTADO:
-Útero gravídico, contendo gestação gemelar.
-Feto A à direita do abdome materno, em situação longitudinal, com apresentação cefálica/pélvica e dorso à direita/esquerda, com movimentos somáticos presentes e batimentos cardíacos rítmicos, com frequência de XX batimentos por minuto.
-Feto B à esquerda do abdome materno, em situação longitudinal, com apresentação cefálica/pélvica e dorso à direita/esquerda, com movimentos somáticos presentes e batimentos cardíacos rítmicos, com frequência de XX batimentos por minuto.
-Massa placentária aparentemente única, de inserção XX, homogênea/heterogênea, grau xx, com espessura aproximada de xx cm.
-Volume de líquido amniótico normal (ILA: XX cm).
-
-BIOMETRIA FETAL – FETO A:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-BIOMETRIA FETAL – FETO B:
-DBP: xx cm. CC: xx cm. CA: xx cm. CF: xx cm.
-Peso fetal aproximado de xx gramas (+/- 10%).
-
-DOPPLERFLUXOMETRIA – FETO A:
-Artéria cerebral média: IR = 0,XX / IP = XX
-Artéria umbilical: IR = 0,XX / IP = XX
-
-DOPPLERFLUXOMETRIA – FETO B:
-Artéria cerebral média: IR = 0,XX / IP = XX
-Artéria umbilical: IR = 0,XX / IP = XX
-
-DOPPLERFLUXOMETRIA – ARTÉRIAS UTERINAS:
-Artéria uterina direita: IR = 0,XX / IP = XX
-Artéria uterina esquerda: IR = 0,XX / IP = XX
-IP médio das artérias uterinas: XX
-
-IMPRESSÃO:
-Gestação gemelar, aparentemente com cavidade amniótica e massa placentária únicas, com biometria do feto A compatível com gestação de XX semanas e XX dia(s), e biometria do feto B compatível com gestação de XX semanas e XX dia(s).
-Crescimento do feto A no percentil XX e do feto B no percentil XX, com base em exame prévio do primeiro trimestre.
-Estudo Doppler dentro dos limites da normalidade para a idade gestacional.
-
-.: O presente exame não tem por finalidade a avaliação morfológica.
-
-
-
-
-
-
-FETO: quando > 10 semanas
-EMBRIÃO: até 10 semanas
-______________________________________________________
-
-VARIAÇÃO
-Até 6-7 sem – 7 dias
-De 7 a 14 sem – 5 dias
-De 14 a 20 sem – 7 dias
-De 20 a 30 sem – 10 dias
-De 30 a 36 sem – 2 sem
-Acima de 36 sem – 3 sem
-______________________________________________________
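-
-.: Esboço ilustrativo, em Python, da tabela de variação acima, útil para preencher o campo "(+/- XX dias/semanas)" das impressões; as faixas seguem exatamente os valores listados.
-
-def margem_datacao_dias(ig_semanas):
-    # Margem de variação da datação ecográfica conforme a tabela acima (valores em dias)
-    if ig_semanas < 7:
-        return 7
-    if ig_semanas < 14:
-        return 5
-    if ig_semanas < 20:
-        return 7
-    if ig_semanas < 30:
-        return 10
-    if ig_semanas < 36:
-        return 14  # 2 semanas
-    return 21      # 3 semanas
-
-# Exemplo: gestação de 24 semanas -> margem de +/- 10 dias
-print(margem_datacao_dias(24))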
-
-Artérias uterinas APÓS 26 SEMANAS
-- IR: < 0,58
-- IP: < 1,20
-- Ausência de incisura (até 26 semanas incisura é fisiológica)
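-
-.: Esboço ilustrativo de verificação dos critérios acima para as artérias uterinas após 26 semanas (IR < 0,58; IP < 1,20; ausência de incisura); trata-se apenas de organização dos valores do memento, não de regra diagnóstica automática.
-
-def doppler_uterinas_dentro_da_normalidade(ig_semanas, ir, ip, incisura_presente):
-    # Critérios aplicáveis APÓS 26 semanas; antes disso a incisura pode ser fisiológica
-    if ig_semanas <= 26:
-        return None
-    return ir < 0.58 and ip < 1.20 and not incisura_presente
-
-# Exemplo hipotético: 32 semanas, IR 0,50, IP 0,95, sem incisura -> True
-print(doppler_uterinas_dentro_da_normalidade(32, 0.50, 0.95, False))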
-
-Saco gestacional incipiente no interior da cavidade uterina, sugerindo gestação com 4-5 semanas de evolução.
-Embrião não visualizado, o que pode estar relacionado à precocidade gestacional. Sugere-se, a critério clínico, controle evolutivo ultrassonográfico em 1-2 semanas.
-
-
-Presença de área anecogênica/hipoecogênica, de contornos regulares/irregulares, medindo cerca de XXX cm, identificada adjacente à parede XX do saco gestacional, envolvendo menos/cerca de 30% de sua circunferência.
-Área anecogênica/hipoecogênica adjacente à parede XX do saco gestacional, envolvendo menos/cerca de 30% de sua circunferência, podendo corresponder a hematoma subcoriônico ou a fusão incompleta das decíduas parietal e capsular. Sugere-se, a critério clínico, controle ultrassonográfico em 1-2 semanas.
-
-
-Moderada quantidade de conteúdo heterogêneo, predominantemente hiperecogênico, distendendo a cavidade uterina, com espessura de até 2,2 cm.
-Moderada quantidade de conteúdo heterogêneo, predominantemente hiperecogênico, distendendo a cavidade uterina. Considerar a possibilidade de restos ovulares/coágulos.
-
-Ovário XX contendo formação cística de conteúdo anecóico e paredes espessas, medindo cerca de XX cm, com vascularização periférica ao estudo Doppler, sugestiva de corpo lúteo.
-Ovário XX de aspecto habitual.
-
-
-Translucência nucal medindo XX mm (abaixo do percentil 95 para o CCN encontrado).
-Osso nasal presente.
-Ducto venoso com onda “A” positiva.
-.: De acordo com a Fetal Medicine Foundation, a medida da translucência nucal deve ser realizada com comprimento crânio-nádegas entre 45 e 84 mm.
-
-Translucência nucal medindo XX mm (Percentis da TN para o CCN encontrado: p50 = XX mm / p95 = XX mm).
-Translucência nucal com espessura acima do percentil 95 para o CCN encontrado.
-
-Foco hiperecogênico em ventrículo cardíaco esquerdo, compatível com calcificação do músculo papilar (“Golf Ball”).
-
-Percentis do ILA para a idade gestacional: p5 = XX cm / p50 = XX cm / p95 = XX cm.
-.: A ultrassonografia obstétrica não tem por finalidade o diagnóstico de doenças genéticas e/ou malformações fetais, pois a avaliação de detalhes anatômicos e da formação fetal não faz parte do protocolo deste estudo.
-
-
-Presença de incisura protodiastólica em artérias uterinas direita e esquerda, fisiológica para a idade gestacional.
-
-Presença de incisura protodiastólica em artéria uterina XX, com IP médio das artérias uterinas acima do percentil 95 para a idade gestacional.
-Artérias cerebral média e umbilical apresentando índices de resistência e pulsatilidade dentro dos limites da normalidade para a idade gestacional, sem sinais de centralização.
-
-Presença de incisura protodiastólica em artérias uterinas direita e esquerda.
-IP médio das artérias uterinas acima do percentil 95 para a idade gestacional.
-Fluxo feto-placentário normal com artérias umbilicais apresentando índices de resistência e pulsatilidade normais.
-Não há sinais de centralização.
-
-
-Não dispomos de ultrassonografias prévias do primeiro trimestre para melhor avaliação da idade gestacional e do crescimento fetal, bem como da corionicidade.
-
-Correlacionar com ultrassonografias anteriores para melhor avaliação da idade gestacional e do crescimento fetal.
-Correlacionar com ultrassonografias anteriores (não disponíveis no momento) para melhor avaliação da idade gestacional e do crescimento fetal.
-Não dispomos de exames anteriores para melhor avaliação da idade gestacional, bem como do crescimento fetal.
-
-Gestação de _ semanas, com crescimento adequado para a idade gestacional (percentil próximo a 50), de acordo com a DUM (_) e exames ecográficos anteriores.
-
-Gestação tópica, única, com feto vivo, com idade gestacional de XX, de acordo com a biometria fetal atual (+/- 2 semanas), evidenciando adequado crescimento fetal em comparação com o exame prévio.
-Crescimento fetal no percentil 8 para o peso, em correlação com ultrassonografia realizada no primeiro trimestre. A critério clínico, realizar controle ultrassonográfico com Doppler em 15 dias.
-ULTRASSONOGRAFIA MAMÁRIA
-
-RESULTADO - MAMA DIREITA:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Complexo aréolo-mamilar sem alterações.
-Ecotextura de fundo com predomínio de tecido fibroglandular.
-Não há evidências de imagens nodulares sólidas e/ou císticas.
-Camada retro-mamária conservada.
-Linfonodos de aspecto ecográfico habitual no prolongamento axilar.
-
-RESULTADO - MAMA ESQUERDA:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Complexo aréolo-mamilar sem alterações.
-Ecotextura de fundo com predomínio de tecido fibroglandular.
-Não há evidências de imagens nodulares sólidas e/ou císticas.
-Camada retro-mamária conservada.
-Linfonodos de aspecto ecográfico habitual no prolongamento axilar.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-BI-RADS categoria XX.
-
-
-
-.: Exames anteriores não disponíveis para avaliação. Em mulheres acima de 35 anos, a realização da ultrassonografia em conjunto com a mamografia reduz os falsos negativos.
-
-.: Não se caracterizou, no presente estudo, evidente correspondência ao achado mamográfico prévio, descrito em exame do dia XXX. Diante disto sugere-se controle mamográfico do achado em questão.
-
-.: Não se caracterizou, no presente estudo, evidente correspondência ao achado mamográfico prévio, descrito em exame do dia XXX, o que pode estar relacionado a aspectos técnicos interexames. Diante disto sugere-se controle mamográfico do achado em questão.
-
-ULTRASSONOGRAFIA DE REGIÕES MAMÁRIAS
-
-DIREITA
-
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Área heterogênea, predominantemente hipoecogênica, medindo cerca de XXX cm (L x AP x T), localizada em região retroareolar, podendo corresponder a tecido fibroglandular.
-Musculatura regional com forma, contornos e padrão fibrilar preservados.
-Linfonodos de aspecto ecográfico preservado na região axilar.
-
-ESQUERDA
-
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Área heterogênea, predominantemente hipoecogênica, medindo cerca de XXX cm (L x AP x T), localizada em região retroareolar, podendo corresponder a tecido fibroglandular.
-Musculatura regional com forma, contornos e padrão fibrilar preservados.
-Linfonodos de aspecto ecográfico preservado na região axilar.
-
-IMPRESSÃO:
-Ginecomastia bilateral, sendo mais proeminente à direita.
-
-
-ULTRASSONOGRAFIA MAMÁRIA
-
-INDICAÇÃO: Abaulamento palpável na região axilar direita/esquerda.
-
-COMPOSIÇÃO DO TECIDO MAMÁRIO:
-Ecotextura de fundo homogênea, com predomínio de tecido fibroglandular.
-
-ACHADOS:
-Correspondendo ao abaulamento palpável referido pela paciente, observa-se área heterogênea, predominantemente hiperecogênica, localizada na camada subcutânea da região axilar direita/esquerda, podendo corresponder a tecido mamário ectópico.
-
-ANÁLISE COMPARATIVA:
-Exames anteriores inexistentes.
-
-IMPRESSÃO:
-Glândula mamária acessória na região axilar direita/esquerda.
-BI-RADS categoria 2.
-
-RECOMENDAÇÃO:
-Rastreamento periódico.
-
-Laudo conforme método padronizado pelo BI-RADS® 5ª edição do ACR® (American College of Radiology) para imagem da mama.
-Nódulo isodenso no quadrante superolateral da mama direita, que à ultrassonografia mostrou tratar-se de nódulo sólido, ovalado, com orientação paralela, de margens circunscritas, hipoecogênico, sem evidente fenômeno acústico posterior, medindo cerca de 1,2 x 0,4 x 0,8 cm (L x AP x T), localizado em projeção de 07 horas, distando cerca de 3,4 cm da papila e 1,4 cm da pele.
-
-
-Nódulos ovalados, com orientação paralela, de margens circunscritas, hipoecogênicos, sem marcado fenômeno acústico posterior, identificados na mama direita/esquerda, assim localizados:
-- em projeção de XX horas, a cerca de XX cm da papila e XX cm da pele, medindo XXX cm (L x AP x T);
-- em projeção de XX horas, a cerca de XX cm da papila e XX cm da pele, medindo XXX cm (L x AP x T);
-
-
-Nódulos ovalados, com orientação paralela, de margens circunscritas, hipoecogênicos, sem marcado fenômeno acústico posterior, identificados em ambas as mamas, assim localizados:
-- em projeção de XX horas da mama direita, a cerca de XX cm da papila e XX cm da pele, medindo XXX cm (L x AP x T);
-- em projeção de XX horas da mama esquerda, a cerca de XX cm da papila e XX cm da pele, medindo XXX cm (L x AP x T);
-
-
-À ultrassonografia, sem evidente correspondência mamográfica, observa-se formação cística, ovalada, de aspecto simples, produtora de reforço acústico posterior, medindo cerca de XX cm, identificada em projeção de XX hora da mama direita/esquerda.
-
-
-Formação cística, ovalada, de margens circunscritas, com conteúdo heterogêneo e componente sólido em seu interior, com padrão combinado de atenuação posterior, medindo cerca de XXX cm (L x AP x T), identificada em projeção de XX horas da mama direita/esquerda, distando cerca de XX cm da papila e XX cm da pele.
-IMPRESSÃO:
-Cisto complexo em mama direita/esquerda.
-RECOMENDAÇÃO:
-Esclarecimento histológico, salvo contraindicação clínica, do cisto mamário complexo à direita/esquerda.
-
-
-
-Discreta ectasia de ductos mamários em regiões retroareolares, bilateralmente, de forma simétrica, com conteúdo anecogênico, sem evidências de fatores obstrutivos.
-IMPRESSÃO:
-Discreta ectasia de ductos mamários bilateral.
-BI-RADS categoria 2.
-
-Imagem arredondada, de margens circunscritas, hipoecogênica, produtora de reforço acústico posterior, medindo cerca de XX cm, localizada em projeção de XX horas, distando XX cm do mamilo e XX cm da pele, sugestiva de cisto com conteúdo espesso.
-
-Imagens ovaladas, de margens circunscritas, anecogênicas, produtoras de evidente reforço acústico posterior, identificadas em projeção de XX horas, medindo cerca de XX cm, e em projeção de XX horas, medindo cerca de XX cm.
-
-Imagens arredondadas, de margens circunscritas, anecogênicas, produtoras de evidente reforço acústico posterior, dispersas pelo parênquima mamário, sendo a maior localizada em projeção de XX horas, medindo cerca de XX cm.
-
-Discreta ectasia de ductos mamários em região retroareolar, com conteúdo anecogênico, sem evidências de fator obstrutivo.
-Discreta ectasia de ductos mamários bilateral, de forma simétrica, sem evidências de fator obstrutivo.
-BI-RADS categoria 2.
-
-Discreta ectasia de ductos mamários, com conteúdo anecogênico, evidente no quadrante inferior lateral, sem evidências de fator obstrutivo.
-Discreta ectasia de ductos mamários de forma simétrica bilateralmente, sem evidências de fator obstrutivo.
-
-Ectasia ductal, predominando em região retroareolar, aparentemente com conteúdo espesso em seu interior.
-Ectasia ductal à esquerda, aparentemente com conteúdo espesso em seu interior. BI-RADS categoria 4.
-
-Pequena área de heterogeneidade do parênquima mamário e da camada subcutânea adjacente, identificada no quadrante superolateral da mama direita, relacionada à manipulação cirúrgica prévia.
-
-Múltiplas imagens ovaladas, de margens circunscritas, hiperecogênicas, não produtoras de fenômeno acústico posterior, esparsas pela camada subcutânea, destacando-se as seguintes:
-- em projeção de xx horas, medindo cerca de xxx cm (L x AP x T), com maior eixo paralelo à pele, distando cerca de xx cm do mamilo.
-- em projeção de xx horas, medindo cerca de xxx cm (L x AP x T), com maior eixo paralelo à pele, distando cerca de xx cm do mamilo.
-- em projeção de xx horas, medindo cerca de xxx cm (L x AP x T), com maior eixo paralelo à pele, distando cerca de xx cm do mamilo.
-Nódulos hiperecogênicos na camada subcutânea da mama XX, de provável etiologia lipomatosa.
-BI-RADS categoria 3.
-ULTRASSONOGRAFIA DE PAREDE ABDOMINAL
-
-RESULTADO:
-Exame direcionado para a avaliação de abaulamento palpável na parede abdominal supra-umbilical.
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Correspondendo ao abaulamento palpável referido pela paciente, nota-se herniação de conteúdo gorduroso pré-peritoneal por descontinuidade na linha alba supraumbilical, com anel herniário medindo cerca de XX cm (transversal x longitudinal) e distando cerca de XX cm da cicatriz umbilical. Às manobras de Valsalva e compressão observa-se mobilização do conteúdo herniário.
-Não se observou redução do conteúdo herniário durante a realização do exame.
-Musculatura regional com padrão sonográfico habitual.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Hérnia epigástrica, conforme descrito.
-
-
-ULTRASSONOGRAFIA DE PAREDE ABDOMINAL
-
-RESULTADO:
-Exame direcionado para a avaliação de abaulamento palpável na região umbilical.
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Falha na integridade da aponeurose muscular na região umbilical, medindo cerca de XX cm (transversal x longitudinal), por onde se observa herniação de conteúdo gorduroso pré-peritoneal, com acentuação à manobra de Valsalva. Não se observou redução do conteúdo herniário durante a realização do exame.
-Musculatura regional com padrão sonográfico habitual.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Hérnia umbilical, conforme descrito.
-
-
-Diástase dos músculos retos abdominais, no nível supra-umbilical, medindo até cerca de XX cm (fisiológica na gestante até 3,0 cm).
-
-ULTRASSONOGRAFIA DE ESTRUTURAS SUPERFICIAIS
-
-RESULTADO:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Musculatura regional com padrão sonográfico habitual.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE MÚSCULO
-
-RESULTADO:
-Pele e tecido subcutâneo sem alterações.
-Ventres musculares apresentando arranjo estrutural fibrilar conservado.
-Não se caracterizam sinais de hérnia, estiramento e/ou contusão muscular.
-Estruturas tendíneas de padrão usual.
-Superfícies ósseas de contornos regulares.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE ???
-
-RESULTADO:
-Exame direcionado para a avaliação de nodulação palpável na XXX.
-Pele de espessura normal.
-Correspondendo à nodulação palpável referida pela paciente, nota-se nódulo sólido, ovalado, hiperecogênico, bem delimitado, medindo XXX cm (L x AP x T), localizado na camada subcutânea, sem invasão de planos profundos.
-Musculatura regional com padrão sonográfico habitual.
-
-IMPRESSÃO:
-Nódulo sólido na camada subcutânea da XXX, conforme descrito, de provável etiologia lipomatosa.
-
-
-
-
-ULTRASSONOGRAFIA DE OMBRO DIREITO
-
-RESULTADO:
-Tendão da cabeça longa do bíceps tópico, com espessura e textura normais.
-Tendão do subescapular com espessura, contornos e ecotextura normais.
-Tendão do supra-espinhal com espessura, contornos e ecotextura normais.
-Tendão do infra-espinhal com espessura, contornos e ecotextura normais.
-Bursa subacromial/subdeltoídea de espessura normal.
-Articulação acrômio-clavicular sem evidentes alterações.
-Musculatura regional de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE OMBRO ESQUERDO
-
-RESULTADO:
-Tendão da cabeça longa do bíceps tópico, com espessura e textura normais.
-Tendão do subescapular com espessura, contornos e ecotextura normais.
-Tendão do supra-espinhal com espessura, contornos e ecotextura normais.
-Tendão do infra-espinhal com espessura, contornos e ecotextura normais.
-Bursa subacromial/subdeltoídea de espessura normal.
-Articulação acrômio-clavicular sem evidentes alterações.
-Musculatura regional de aspecto ecográfico habitual.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-OMBRO COM LESÕES EXTENSAS DO MANGUITO ROTADOR E ALTERAÇÕES DEGENERATIVAS
-
-ULTRASSONOGRAFIA DE OMBRO XXX
-
-RESULTADO:
-Subluxação medial do tendão da cabeça longa do bíceps, que se encontra levemente espessado e heterogêneo, e com quantidade de líquido acima do habitual no interior de sua bainha.
-Tendão do subescapular apresentando espessura reduzida e heterogeneidade textural.
-Rotura transfixante maciça do tendão do supra-espinhal, com retração do coto tendíneo, impossibilitando sua avaliação, com consequente atrofia/lipossubstituição dos ventres musculares e com cabeça umeral irregular e de aspecto “careca”.
-Tendão do infra-espinhal levemente espessado e heterogêneo, sem sinais de lesões transfixantes.
-Pequeno derrame articular, comunicando-se com o plano da bursa subacromial/subdeltoídea.
-Alterações degenerativas da articulação acrômio-clavicular.
-
-IMPRESSÃO:
-Subluxação medial do tendão da cabeça longa do bíceps.
-Tendinopatia e tenossinovite da cabeça longa do bíceps.
-Tendinopatia do subescapular, com afilamento tendíneo.
-Rotura transfixante maciça do tendão supra-espinhal, com consequente atrofia/lipossubstituição dos ventres musculares.
-Tendinopatia do infra-espinhal.
-Pequeno derrame articular, comunicando-se com o plano da bursa subacromial/subdeltoídea.
-Alterações degenerativas da articulação acrômio-clavicular.
-ULTRASSONOGRAFIA DE BRAÇO DIREITO
-
-RESULTADO:
-Pele e tecido subcutâneo sem alterações.
-Ventres musculares apresentando arranjo estrutural fibrilar conservado.
-Não se caracterizam sinais de hérnia, estiramento e/ou contusão muscular.
-Tendão da cabeça longa do bíceps tópico, com espessura e textura normais.
-Tendões do tríceps braquial e bíceps distal apresentando espessura, contornos e arranjo fibrilar preservados.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE BRAÇO ESQUERDO
-
-RESULTADO:
-Pele e tecido subcutâneo sem alterações.
-Ventres musculares apresentando arranjo estrutural fibrilar conservado.
-Não se caracterizam sinais de hérnia, estiramento e/ou contusão muscular.
-Tendão da cabeça longa do bíceps tópico, com espessura e textura normais.
-Tendões do tríceps braquial e bíceps distal apresentando espessura, contornos e arranjo fibrilar preservados.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-ULTRASSONOGRAFIA DO COTOVELO DIREITO
-
-RESULTADO:
-Tendões flexores e extensores comuns do antebraço de calibre, contornos e textura habituais.
-Superfícies ósseas epicondileanas de contornos preservados.
-Tendões do tríceps braquial e bíceps distal apresentando espessura, contornos e arranjo fibrilar preservados.
-Nervo ulnar sem particularidades.
-Cavidade articular sem sinais de derrame.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DO COTOVELO ESQUERDO
-
-RESULTADO:
-Tendões flexores e extensores comuns do antebraço de calibre, contornos e textura habituais.
-Superfícies ósseas epicondileanas de contornos preservados.
-Tendões do tríceps braquial e bíceps distal apresentando espessura, contornos e arranjo fibrilar preservados.
-Nervo ulnar sem particularidades.
-Cavidade articular sem sinais de derrame.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-ULTRASSONOGRAFIA DO PUNHO DIREITO
-
-RESULTADO:
-Tendões flexores e extensores com ecogenicidade e padrão fibrilar preservados.
-Túnel do carpo e canal de Guyon de aspecto habitual.
-Nervo mediano com espessura habitual.
-Não foram individualizadas coleções ou tumorações na topografia em estudo.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DO PUNHO ESQUERDO
-
-RESULTADO:
-Tendões flexores e extensores com ecogenicidade e padrão fibrilar preservados.
-Túnel do carpo e canal de Guyon de aspecto habitual.
-Nervo mediano com espessura habitual.
-Não foram individualizadas coleções ou tumorações na topografia em estudo.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-ULTRASSONOGRAFIA DE MÃO DIREITA
-
-RESULTADO:
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com padrão sonográfico habitual.
-Estruturas tendíneas de padrão usual.
-Recessos articulares sem sinais de derrame.
-Superfícies ósseas de contornos regulares.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DE MÃO ESQUERDA
-
-RESULTADO:
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com padrão sonográfico habitual.
-Estruturas tendíneas de padrão usual.
-Recessos articulares sem sinais de derrame.
-Superfícies ósseas de contornos regulares.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DE QUADRIL DIREITO
-
-RESULTADO:
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Musculatura regional de padrão fibrilar preservado.
-Tendões dos glúteos mínimo e médio de morfologia, contornos e ecotextura preservadas.
-Bursas peritrocantéricas anatômicas.
-Estruturas ósseas visualizadas de contornos normais.
-Cavidade articular sem sinais de derrame.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DE QUADRIL ESQUERDO
-
-RESULTADO:
-Pele e tecido celular subcutâneo sem alterações ecográficas.
-Musculatura regional de padrão fibrilar preservado.
-Tendões dos glúteos mínimo e médio de morfologia, contornos e ecotextura preservadas.
-Bursas peritrocantéricas anatômicas.
-Estruturas ósseas visualizadas de contornos normais.
-Cavidade articular sem sinais de derrame.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DO QUADRIL INFANTIL BILATERAL
-
-RESULTADO – QUADRIL DIREITO:
-Teto acetabular ósseo com morfologia habitual.
-Cabeça femoral em topografia anatômica, com adequada conformação e posicionamento do lábio acetabular.
-Ângulo do teto ósseo (alfa) estimado em XX graus.
-Ângulo do teto cartilaginoso (beta) estimado em XX graus.
-
-RESULTADO – QUADRIL ESQUERDO:
-Teto acetabular ósseo com morfologia habitual.
-Cabeça femoral em topografia anatômica, com adequada conformação e posicionamento do lábio acetabular.
-Ângulo do teto ósseo (alfa) estimado em XX graus.
-Ângulo do teto cartilaginoso (beta) estimado em XX graus.
-
-IMPRESSÃO:
-- Quadril direito tipo Ia de Graf, dentro dos padrões da normalidade.
-- Quadril esquerdo tipo Ia de Graf, dentro dos padrões da normalidade.
-- Quadril XX tipo IIa de Graf, relacionado à imaturidade.
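-
-Obs.: a título ilustrativo, o tipo de Graf decorre essencialmente do ângulo alfa (e, secundariamente, do ângulo beta e da idade). O esboço em Python abaixo é apenas didático, assume os limiares clássicos de Graf (tipo I: alfa >= 60 graus; IIa/IIb: 50 a 59 graus, conforme a idade; IIc/D: 43 a 49 graus; III/IV: < 43 graus) e usa nomes de função hipotéticos; não substitui a tabela completa de Graf nem o julgamento do examinador.
-
    def tipo_graf_simplificado(alfa_graus: float, idade_semanas: float) -> str:
        # Limiares assumidos (classificação simplificada); confirmar com a tabela completa de Graf.
        if alfa_graus >= 60:
            return "Tipo I (quadril maduro)"
        if 50 <= alfa_graus < 60:
            # IIa (imaturidade fisiológica) até cerca de 12 semanas; depois, IIb
            return "Tipo IIa" if idade_semanas <= 12 else "Tipo IIb"
        if 43 <= alfa_graus < 50:
            return "Tipo IIc/D (avaliar ângulo beta e cobertura da cabeça femoral)"
        return "Tipo III/IV (quadril descentrado/luxado)"

    print(tipo_graf_simplificado(64, 6))  # Tipo I (quadril maduro)
    print(tipo_graf_simplificado(55, 8))  # Tipo IIa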
-
-
-
-
-
-
-
-ULTRASSONOGRAFIA DO JOELHO DIREITO
-
-RESULTADO:
-Ausência de derrame articular.
-Tendão quadricipital com espessura normal, sem descontinuidade e com textura homogênea.
-Tendão patelar com espessura normal, sem descontinuidade e com textura homogênea.
-Gordura de Hoffa preservada.
-Tendões componentes da "pata de ganso" de configuração anatômica em sua inserção tibial.
-Tendão poplíteo com espessura normal, sem descontinuidade e com textura homogênea.
-Trato iliotibial com espessura e ecotextura preservadas.
-Ligamentos colaterais sem alterações.
-Fossa poplítea sem alterações.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DO JOELHO ESQUERDO
-
-RESULTADO:
-Ausência de derrame articular.
-Tendão quadricipital com espessura normal, sem descontinuidade e com textura homogênea.
-Tendão patelar com espessura normal, sem descontinuidade e com textura homogênea.
-Gordura de Hoffa preservada.
-Tendões componentes da "pata de ganso" de configuração anatômica em sua inserção tibial.
-Tendão poplíteo com espessura normal, sem descontinuidade e com textura homogênea.
-Trato iliotibial com espessura e ecotextura preservadas.
-Ligamentos colaterais sem alterações.
-Fossa poplítea sem alterações.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DA FOSSA POPLÍTEA DIREITA
-
-RESULTADO:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Ventres musculares apresentando arranjo estrutural fibrilar conservado.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-Vasos poplíteos de trajeto e calibre habituais.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DA FOSSA POPLÍTEA ESQUERDA
-
-RESULTADO:
-Pele de espessura normal.
-Camada subcutânea com espessura e ecogenicidade normais.
-Ventres musculares apresentando arranjo estrutural fibrilar conservado.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-Vasos poplíteos de trajeto e calibre habituais.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-ULTRASSONOGRAFIA DE TORNOZELO DIREITO
-
-RESULTADO:
-Tendões tibial anterior, extensor longo do hálux e extensor longo dos dedos com espessura, contornos e ecotextura normais.
-Tendões tibial posterior, flexor longo dos dedos e flexor longo do hálux com espessura, contornos e ecotextura normais.
-Tendões fibulares com espessura, contornos e ecotextura normais.
-Tendão calcâneo com contornos, ecotextura e espessura normais, contendo foco de calcificação em sua inserção no osso calcâneo.
-Tendão calcâneo com contornos, ecotextura e espessura normais.
-Espessamento e hipoecogenicidade da origem da fáscia plantar junto à tuberosidade do osso calcâneo.
-Fáscia plantar de espessura e ecogenicidade preservadas.
-Ausência de sinais de derrame articular no recesso tibiotalar anterior.
-
-IMPRESSÃO:
-Entesopatia do calcâneo (esporão posterior).
-Fascite plantar.
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE TORNOZELO ESQUERDO
-
-RESULTADO:
-Tendões tibial anterior, extensor longo do hálux e extensor longo dos dedos com espessura, contornos e ecotextura normais.
-Tendões tibial posterior, flexor longo dos dedos e flexor longo do hálux com espessura, contornos e ecotextura normais.
-Tendões fibulares com espessura, contornos e ecotextura normais.
-Tendão calcâneo com contornos, ecotextura e espessura normais, contendo foco de calcificação em sua inserção no osso calcâneo.
-Tendão calcâneo com contornos, ecotextura e espessura normais.
-Espessamento e hipoecogenicidade da origem da fáscia plantar junto à tuberosidade do osso calcâneo.
-Fáscia plantar de espessura e ecogenicidade preservadas.
-Ausência de sinais de derrame articular no recesso tibiotalar anterior.
-
-IMPRESSÃO:
-Entesopatia do calcâneo (esporão posterior).
-Fascite plantar.
-Exame dentro dos limites da normalidade.
-ULTRASSONOGRAFIA DO PÉ DIREITO
-
-RESULTADO:
-Exame direcionado para queixa clínica da paciente, localizada na face plantar do retropé.
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com aspecto ecográfico preservado.
-Espessamento e hipoecogenicidade da origem da fáscia plantar junto à tuberosidade do osso calcâneo.
-Fáscia plantar de espessura e ecogenicidade preservadas.
-Tendão calcâneo com contornos, ecotextura e espessura normais, contendo foco de calcificação em sua inserção no osso calcâneo.
-Tendão calcâneo com contornos, ecotextura e espessura normais.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Fascite plantar.
-Entesopatia do calcâneo (esporão posterior).
-Exame dentro dos limites da normalidade.
-
-Exame direcionado para queixa clínica da paciente, localizada na face dorsal do antepé.
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com aspecto ecográfico preservado.
-Tendões flexores e extensores com estrutura normal na região estudada.
-Recessos articulares sem derrame e/ou espessamento sinovial.
-Espaços intermetatarsais sem sinais de bursopatia e/ou neuroma.
-Placas plantares com arquitetura anatômica.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-
-ULTRASSONOGRAFIA DO PÉ ESQUERDO
-
-RESULTADO:
-Exame direcionado para queixa clínica da paciente, localizada na face plantar do retropé.
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com aspecto ecográfico preservado.
-Espessamento e hipoecogenicidade da origem da fáscia plantar junto à tuberosidade do osso calcâneo.
-Fáscia plantar de espessura e ecogenicidade preservadas.
-Tendão calcâneo com contornos, ecotextura e espessura normais, contendo foco de calcificação em sua inserção no osso calcâneo.
-Tendão calcâneo com contornos, ecotextura e espessura normais.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-
-IMPRESSÃO:
-Fascite plantar.
-Entesopatia do calcâneo (esporão posterior).
-Exame dentro dos limites da normalidade.
-
-Exame direcionado para queixa clínica da paciente, localizada na face dorsal do antepé.
-Pele e tecido subcutâneo sem alterações.
-Musculatura regional com aspecto ecográfico preservado.
-Tendões flexores e extensores com estrutura normal na região estudada.
-Recessos articulares sem derrame e/ou espessamento sinovial.
-Espaços intermetatarsais sem sinais de bursopatia e/ou neuroma.
-Placas plantares com arquitetura anatômica.
-Ausência de coleções e/ou lesão expansiva na região estudada.
-ESTRUTURAS SUPERFICIAIS
-
-Formação cística, com conteúdo ligeiramente espesso, medindo cerca de 0,XX cm, identificada em situação subdérmica, na XX.
-Formação cística em situação subdérmica na XX, sugestiva de cisto sebáceo.
-
-
-DEDO/MÃO/PÉ
-
-Pequena formação de aspecto cístico, medindo cerca de XX cm, identificada superficialmente e em íntimo contato com o tendão flexor/extensor do XX quirodáctilo, ao nível da XX.
-Pequena formação cística identificada superficialmente e em íntimo contato com o tendão flexor/extensor do XX quirodáctilo, conforme descrito, podendo corresponder a cisto gangliônico.
-
-Correspondendo à nodulação palpável referida pelo paciente, nota-se formação de aspecto cístico, medindo cerca de 1,2 x 0,7 x 1,2 cm (L x AP x T), identificada superficialmente e em íntimo contato com o tendão flexor.
-Formação de aspecto cístico identificada na base do terceiro quirodáctilo direito, superficialmente e em íntimo contato com o tendão flexor.
-
-Espessamento da polia flexora A1, na altura da primeira articulação metacarpofalângica.
-Espessamento e quantidade de líquido acima do usual no interior da bainha sinovial do tendão flexor ao nível da primeira articulação metacarpofalângica.
-Espessamento da polia flexora A1 do primeiro quirodáctilo.
-Tenossinovite e tendinopatia do flexor do primeiro quirodáctilo.
-
-Espessamento da polia flexora A1, na altura da quarta articulação metacarpofalangeana.
-Tendão flexor do quarto quirodáctilo espessado e hipoecogênico em seu segmento justadistal à polia A1.
-Espessamento da polia flexora A1 do quarto quirodáctilo.
-Tendinopatia do flexor do quarto quirodáctilo em seu segmento justadistal à polia A1.
-
-
-Identifica-se, entre as cabeças do terceiro e do quarto metatarsos, imagem nodular, hipoecogênica, circunscrita, que se extrui à manobra de Mulder, medindo cerca de 1,6 x 1,1 x 0,6 cm (L x AP x T).
-Imagem nodular no terceiro espaço intermetatarsal, conforme descrito. Considerar possibilidade de neuroma de Morton.
-PUNHO
-
-Formação cística, de conteúdo levemente heterogêneo, medindo cerca de XXX cm (L x AP x T), com volume estimado em cerca de 0,35 cm³, identificada na face dorsal do punho, adjacente à terceira articulação carpometacarpiana, entre tendões do segundo e do quarto compartimentos extensores.
-Cisto para-articular na face dorsal do punho.
-
-Diminuta formação cística, medindo XX cm, identificada na face dorsal do punho, profundamente ao quarto compartimento extensor, com aparente colo de comunicação articular.
-Diminuta formação cística para-articular na face dorsal do punho.
-
-Nervo mediano de espessura aumentada no nível do túnel do carpo, com área de secção transversa estimada em 0,XX cm².
-Aumento do calibre do nervo mediano, comumente observado em portadores da síndrome do túnel do carpo.
-
-Presença de dois nervos medianos na altura do túnel do carpo proximal, ambos de calibre normal, com pequena artéria mediana persistente pérvia interposta.
-Bifurcação alta ou bifidez do nervo mediano, com artéria mediana persistente (variação da normalidade).
-
-Tendões do primeiro compartimento extensor de calibre aumentado no nível do processo estiloide do rádio, com quantidade de líquido acima do usual no interior de sua bainha.
-Tendinopatia e tenossinovite do primeiro compartimento extensor (De Quervain).
-
-Tendão do sexto compartimento extensor (extensor ulnar do carpo) de calibre aumentado ao nível do processo estiloide da ulna.
-Tendinopatia do sexto compartimento extensor.
-
-Tendão do sexto compartimento extensor de espessura e textura preservadas ao nível do processo estiloide da ulna, porém apresentando quantidade de líquido acima do usual no interior de sua bainha, com espessamento da mesma.
-Tenossinovite do sexto compartimento extensor.
-
-COTOVELO
-
-
-Hipoecogenicidade da origem do tendão comum dos extensores junto ao epicôndilo lateral do úmero, onde há pequena excrescência óssea associada.
-Hipoecogenicidade da origem do tendão comum dos flexores junto ao epicôndilo medial do úmero, onde há pequena excrescência óssea associada.
-Tendinopatia dos extensores do antebraço, com entesófito associado.
-Tendinopatia dos flexores do antebraço, com entesófito associado.
-
-Hipoecogenicidade da origem do tendão comum dos extensores junto ao epicôndilo lateral do úmero, onde há pequena excrescência óssea associada.
-Tendão comum dos flexores do antebraço de calibre, contornos e textura habituais.
-Superfície óssea do epicôndilo medial de contornos preservados.
-Tendinopatia dos extensores do antebraço, com entesófito associado.
-
-Hipoecogenicidade da origem do tendão comum dos flexores junto ao epicôndilo medial do úmero, onde há pequena excrescência óssea associada.
-Tendão comum dos extensores do antebraço de calibre, contornos e textura habituais.
-Superfície óssea do epicôndilo lateral de contornos preservados.
-Tendinopatia dos flexores do antebraço, com entesófito associado.
-
-Tendão do tríceps com espessura, contornos e arranjo fibrilar preservados, apresentando focos de calcificação em sua inserção distal.
-Tendão do bíceps distal apresentando espessura, contornos e arranjo fibrilar preservados.
-Entesopatia distal do tríceps.
-
-Nervo ulnar de espessura discretamente aumentada no nível do túnel cubital, com área de secção transversa estimada em XX mm² (valor de referência: até 8,0 mm²).
-Discreto espessamento do nervo ulnar no nível do túnel cubital. A possibilidade de síndrome do túnel cubital deve ser considerada.
-
-Espessamento e hipoecogenicidade do nervo ulnar no nível do túnel cubital, com área de secção transversa estimada em 9,9 mm² (valor de referência: até 8,0 mm²).
-Espessamento e hipoecogenicidade do nervo ulnar, sugerindo síndrome do túnel cubital.
-OMBRO
-
-Luxação medial do tendão da cabeça longa do bíceps, que se encontra levemente espessado e heterogêneo.
-
-Espessamento e heterogeneidade do tendão do cabo longo do bíceps, com quantidade de líquido acima do usual no interior de sua bainha, sem sinais de lesões transfixantes.
-Tendinopatia e tenossinovite do cabo longo do bíceps.
-
-Rotura completa do tendão da cabeça longa do bíceps, com retração distal e heterogeneidade da transição miotendínea.
-Destaca-se ainda coleção líquida hemorrágica nos espaços interfasciais adjacentes ao ventre muscular do bíceps, com espessura de até 0,9 cm.
-Rotura completa do tendão da cabeça longa do bíceps, com retração distal e heterogeneidade da transição miotendínea e coleção líquida hemorrágica nos espaços interfasciais adjacentes.
-
-
-Tendão do subescapular com espessura normal, apresentando redução de sua ecogenicidade e heterogeneidade textural, sem sinais de lesões transfixantes.
-
-Tendão do supra-espinhal com espessura normal, apresentando redução da ecogenicidade em suas fibras mais anteriores, sem sinais de lesões transfixantes.
-
-Tendão do supra-espinhal com espessura normal, apresentando redução da ecogenicidade e heterogeneidade textural, sem sinais de lesões transfixantes.
-
-Tendão do subescapular espessado e heterogêneo, sem sinais de lesões transfixantes.
-
-Tendão do supra-espinhal espessado e hipoecogênico, sem sinais de lesões transfixantes.
-
-Tendão do infra-espinhal de espessura normal, apresentando redução de sua ecogenicidade, sem sinais de lesões transfixantes.
-
-Tendão do infra-espinhal espessado e hipoecogênico, sem sinais de lesões transfixantes.
-
-Tendinopatia do subescapular e do supra-espinhal / dos componentes do manguito rotador.
-
-Tendão do supra-espinhal levemente espessado, hipoecogênico, sem sinais de lesões transfixantes.
-Tendão do infra-espinhal levemente espessado, hipoecogênico, sem sinais de lesões transfixantes.
-De permeio às fibras de transição entre o supra-espinhal e o infra-espinhal, observa-se calcificação de 0,9 cm.
-Tendinopatia do supra-espinhal e do infra-espinhal, com calcificação de permeio às suas fibras transicionais, sugerindo tendinopatia calcária.
-
-Tendão do supra-espinhal de espessura normal, apresentando redução da ecogenicidade e heterogeneidade textural, destacando-se foco afibrilar intrassubstancial em fibras médias justa-insercionais, medindo XX cm (longitudinal x transversal) e acometendo menos de 20% da espessura tendínea.
-Tendinopatia do supra-espinhal, com rotura parcial de suas fibras.
-
-Tendão do supra-espinhal levemente espessado, heterogêneo, apresentando foco afibrilar intrassubstancial em suas fibras insercionais, medindo XX cm (longitudinal x transversal) e acometendo menos de 20% da espessura tendínea.
-Tendinopatia do supra-espinhal, com rotura parcial de suas fibras.
-
-Tendão do supra-espinhal espessado e hipoecogênico, apresentando ainda foco afibrilar intrassubstancial em suas fibras insercionais, medindo XX cm (longitudinal x transversal) e acometendo menos de 20% da espessura tendínea.
-Tendinopatia do supra-espinhal, com rotura parcial de suas fibras.
-
-Tendão do supra-espinhal com espessura normal, apresentando redução da ecogenicidade e heterogeneidade textural, destacando-se ainda foco afibrilar intrassubstancial em suas fibras insercionais, medindo XX cm (longitudinal x transversal) e acometendo cerca de 20% da espessura tendínea.
-Tendinopatia do supra-espinhal, com rotura parcial de suas fibras.
-
-Tendão do supra-espinhal espessado e hipoecogênico, apresentando ainda foco afibrilar em sua superfície articular, medindo XX cm (longitudinal x transversal) e estendendo-se por cerca de XX% da espessura tendínea.
-Tendinopatia do supra-espinhal, com rotura parcial de suas fibras.
-
-Tendão do supra-espinhal hipoecogênico, apresentando descontinuidade transfixante (de espessura total) de fibras anteriores e médias insercionais, medindo XX cm (longitudinal x transversal).
-Rotura transfixante do tendão do supra-espinhal.
-
-Descontinuidade transfixante (de espessura total) de fibras anteriores e médias do tendão do supra-espinhal, medindo cerca de XX cm (longitudinal x transversal). Fibras posteriores remanescentes com ecotextura heterogênea.
-Rotura transfixante de fibras do supra-espinhal.
-
-Tendão do supra-espinhal espessado e heterogêneo, apresentando descontinuidade transfixante (de espessura total) na projeção da zona crítica, medindo XX cm (longitudinal x transversal), preservando fibras mais posteriores.
-Tendinopatia do supra-espinhal, com rotura transfixante de suas fibras.
-
-Rotura transfixante maciça do tendão do supra-espinhal, com retração do coto tendíneo, impossibilitando sua avaliação, com consequente atrofia/lipossubstituição dos ventres musculares e com cabeça umeral irregular e de aspecto “careca”.
-Rotura transfixante maciça do tendão supra-espinhal.
-
-Leves espessamento e distensão líquida da bursa subacromial/subdeltoídea.
-Leve bursite subacromial/subdeltoídea.
-
-Espessamento e distensão líquida da bursa subacromial/subdeltoídea.
-Bursite subacromial/subdeltoídea.
-
-Leves alterações degenerativas da articulação acrômio-clavicular.
-Artropatia degenerativa acrômio-clavicular.
-
-Alterações degenerativas da articulação acrômio-clavicular.
-Artropatia degenerativa acrômio-clavicular.
-
-Distensão capsular da articulação acrômio-clavicular.
-
-Aumento difuso da ecogenicidade da musculatura regional por lipossubstituição.
-Lipossubstituição da musculatura regional, provavelmente relacionada ao desuso.
-
-Exame com avaliação prejudicada, devido à dificuldade da paciente em obter o posicionamento adequado. A critério clínico, a ampliação propedêutica com o estudo por ressonância magnética poderá fornecer informações adicionais.
-
-
-QUADRIL
-
-Espessamento e hipoecogenicidade da inserção dos glúteos mínimo e médio junto ao trocanter maior do fêmur, sem sinais de macrorrupturas detectáveis ao método.
-Tendinopatia insercional dos glúteos mínimo e médio.
-
-Distensão líquida de bursa trocantérica.
-Bursite trocantérica.
-
-Calcificações adjacentes à espinha ilíaca anteroinferior, local de inserção do reto femoral.
-Entesopatia proximal do reto femoral.
-
-Calcificações adjacentes às espinhas ilíacas anterossuperior e anteroinferior.
-Entesófitos junto às espinhas ilíacas anterossuperior e anteroinferior.
-
-
-
-JOELHO
-
-Tendão quadricipital com espessura normal, sem descontinuidade, apresentando foco de calcificação em sua inserção na patela.
-Entesopatia distal do quadríceps.
-
-Espessamento e hipoecogenicidade da inserção proximal do ligamento colateral medial, com pequena quantidade de conteúdo líquido adjacente.
-Espessamento e hipoecogenicidade da inserção proximal do ligamento colateral medial, com pequena quantidade de conteúdo líquido adjacente, sugerindo estiramento grau II.
-
-Espessamento e hipoecogenicidade do ligamento colateral medial em sua inserção proximal.
-Espessamento e hipoecogenicidade do ligamento colateral medial em sua inserção proximal, de provável etiologia cicatricial.
-
-Espessamento e hipoecogenicidade do ligamento colateral medial, com calcificação de sua porção proximal.
-Espessamento e hipoecogenicidade do ligamento colateral medial, com calcificação de sua porção proximal, sugerindo lesão de Pellegrini-Stieda.
-
-Excrescências ósseas nas margens da articulação femorotibial medial, com redução do espaço interósseo e extrusão meniscal associada.
-Osteoartrose femorotibial medial, com extrusão meniscal associada. A critério clínico, o estudo por ressonância magnética poderá fornecer informações adicionais.
-
-Excrescências ósseas nas margens da articulação femorotibial, com redução do espaço interósseo, sobretudo no compartimento medial, onde observa-se extrusão meniscal associada.
-Osteoartrose femorotibial, com extrusão meniscal associada no compartimento medial. A critério clínico, o estudo por ressonância magnética poderá fornecer informações adicionais.
-
-
-Extrusão e heterogeneidade do menisco medial.
-Extrusão e heterogeneidade do menisco medial, a ser melhor avaliado, a critério clínico, com estudo por ressonância magnética.
-
-Distensão líquida da bursa do semimembranáceo-gastrocnêmio, determinando a formação de cisto poplíteo, medindo XXX cm (L x AP x T), com volume estimado em cerca de xxx cm³.
-Cisto poplíteo (Baker).
-
-Distensão líquida da bursa do semimembranáceo-gastrocnêmio, determinando a formação de cisto poplíteo, que mede cerca de XXX cm (L x AP x T), com volume estimado em xxx cm³, e que apresenta conteúdo heterogêneo e calcificações em seu interior.
-Cisto poplíteo (Baker).
-
-Espessamento e distensão líquida da bursa do semimembranáceo-gastrocnêmio, determinando a formação de cisto poplíteo, que mede cerca de 4,3 x 0,9 x 3,3 cm (L x AP x T), com volume estimado em 6,7 cm³, e que apresenta conteúdo heterogêneo, com traves ecogênicas de permeio.
-Cisto poplíteo (Baker), com conteúdo heterogêneo, que pode estar relacionado a proliferação sinovial ou a conteúdo hemático.
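-
-Obs.: os volumes citados nestes modelos (por exemplo, 4,3 x 0,9 x 3,3 cm resultando em cerca de 6,7 cm³) são compatíveis com a fórmula do elipsoide (V aproximadamente igual a pi/6 x L x AP x T). Esboço ilustrativo em Python, com nome de função hipotético:
-
    import math

    def volume_elipsoide_cm3(l_cm: float, ap_cm: float, t_cm: float) -> float:
        # Volume pelo método do elipsoide: V ~= pi/6 * L * AP * T
        return (math.pi / 6.0) * l_cm * ap_cm * t_cm

    # Conferência com o exemplo do cisto poplíteo descrito acima (4,3 x 0,9 x 3,3 cm):
    print(round(volume_elipsoide_cm3(4.3, 0.9, 3.3), 1))  # ~6.7 cm3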
-
-Distensão líquida da bursa do semimembranáceo-gastrocnêmio, determinando a formação de cisto poplíteo, medindo 3,9 x 1,1 x 1,7 cm (L x AP x T), com volume estimado em cerca de 3,7 cm³, com sinais de ruptura associados, caracterizados por espessamento parietal, traves ecogênicas de permeio e extravasamento de pequena quantidade de líquido, que disseca os planos miofasciais do gastrocnêmio medial.
-Cisto poplíteo (Baker), com sinais de ruptura associados.
-
-Distensão líquida da bursa do semimembranáceo-gastrocnêmio, determinando a formação de cisto poplíteo, medindo XXX cm (L x AP x T), com volume estimado em cerca de xxx cm³, com sinais de ruptura associados, caracterizados por extravasamento de líquido, que disseca os planos miofasciais do gastrocnêmio medial.
-Cisto poplíteo (Baker), com sinais de ruptura associados.
-
-(FRASE DO COMPÊNDIO PARA RUPTURA) Pequena distensão líquida da bolsa que se interpõe entre o semimembranoso e o gastrocnêmio medial, formando cisto poplíteo com ruptura e extravasamento de líquido que disseca os planos miofasciais do gastrocnêmio medial.
-
-Coleção de conteúdo heterogêneo divulsionando os planos miofasciais do gastrocnêmio medial ao nível da fossa poplítea, com espessura de até 1,0 cm.
-Coleção de conteúdo heterogêneo divulsionando os planos miofasciais do gastrocnêmio medial ao nível da fossa poplítea (rotura de cisto de Baker?). Sugere-se, a critério clínico, avaliação complementar com o estudo por ressonância magnética.
-
-Formação cística, de paredes finas e conteúdo anecóico, medindo XXX cm (L x AP x T), com volume estimado em cerca de XX cm³, identificada na fossa poplítea, profundamente à cabeça medial do músculo gastrocnêmio, adjacente à articulação femorotibial medial.
-Formação cística identificada na fossa poplítea, profundamente à cabeça medial do músculo gastrocnêmio, adjacente à articulação femorotibial medial (Cisto gangliônico? Cisto meniscal?). A critério clínico, o estudo por ressonância magnética poderá fornecer maiores subsídios diagnósticos.
-
-Pequena formação cística, medindo 0,9 x 0,3 x 0,7 cm (L x AP x T), identificada adjacente à margem anteromedial da articulação femorotibial medial.
-Formação cística adjacente à margem anteromedial da articulação femorotibial medial (Cisto meniscal? Cisto gangliônico?). A critério clínico, o estudo por ressonância magnética poderá fornecer maiores subsídios diagnósticos.
-
-Distensão líquida da bursa iliotibial.
-Bursite iliotibial, podendo estar relacionada à síndrome da banda iliotibial.
-
-Distensão líquida da bursa semimembranosa.
-Bursite semimembranosa.
-
-Distensão líquida das bursas infrapatelares superficial e profunda.
-Bursite infrapatelar superficial e profunda.
-
-Fragmentação da tuberosidade anterior da tíbia, sem sinais de tendinopatia ou bursopatia adjacentes.
-Fragmentação da tuberosidade anterior da tíbia. Considerar a possibilidade de doença de Osgood-Schlatter.
-
-Leve irregularidade dos contornos ósseos do polo inferior da patela, com edema da gordura de Hoffa adjacente.
-Leve irregularidade dos contornos ósseos do polo inferior da patela, com edema da gordura de Hoffa adjacente. A possibilidade de doença de Sinding-Larsen-Johansson deve ser considerada.
-
-Fragmentação do aspecto superolateral da patela, sugerindo patela bipartida.
-Fragmentação do aspecto superolateral da patela, sugerindo patela bipartida. Sugere-se, a critério clínico, avaliação complementar com o estudo radiográfico.
-
-
-
-TORNOZELO
-
-Tendões fibulares espessados e hipoecogênicos, apresentando ainda quantidade de líquido acima do usual em sua bainha, sem sinais de lesões transfixantes.
-
-Tendões fibulares hipoecogênicos, destacando-se ainda ruptura longitudinal do tipo “split” no segmento retro / infra-maleolar do fibular curto, com interposição do fibular longo entre suas fibras.
-Tendinopatia dos fibulares, com ruptura longitudinal no segmento retro / infra-maleolar do fibular curto.
-
-Tendinopatia e tenossinovite dos fibulares, caracterizadas por espessamento tendíneo, heterogeneidade textural e quantidade de líquido acima do usual no interior da bainha sinovial, destacando-se ainda aspecto sugestivo de ruptura longitudinal do tipo “split” no segmento retro / infra-maleolar do fibular curto.
-
-Tendão calcâneo com contornos, ecotextura e espessura normais, contendo foco de calcificação em sua inserção no osso calcâneo.
-Entesopatia do calcâneo.
-
-Espessamento e hipoecogenicidade da origem da fáscia plantar junto à tuberosidade do osso calcâneo, onde há pequena excrescência óssea associada.
-Fascite plantar, com entesófito (esporão) associado.
-
-Espessamento e hipoecogenicidade do corpo do tendão calcâneo, sem sinais de rupturas detectáveis ao método. Notam-se ainda focos de calcificação em sua inserção no osso calcâneo.
-Tendinopatia e entesopatia do calcâneo.
-
-Tendão do calcâneo apresentando descontinuidade parcial extensa em seu terço proximal/médio, com preservação de pequena quantidade de fibras mediais, observando-se retração da junção miotendínea, com distância entre os fragmentos medindo cerca de 4,2 cm e preenchimento da área rota por efusão líquida hemorrágica.
-Sinais de ruptura parcial extensa do tendão calcâneo.
-
-Descontinuidade parcial de fibras do gastrocnêmio medial junto à transição miotendínea distal, medindo cerca de XX cm, associada a efusão líquida hemorrágica divulsionando os ventres musculares do gastrocnêmio medial e do sóleo, com espessura de até 0,5 cm.
-Ruptura parcial na transição miotendínea distal do gastrocnêmio medial, com pequeno hematoma interpondo-se entre os ventres musculares do gastrocnêmio medial e do sóleo.
-
-
-Pequena imagem cálcica, medindo cerca de 0,5 cm, identificada adjacente à tuberosidade medial do osso navicular, de permeio a fibras insercionais do tendão tibial posterior.
-Pequena imagem cálcica adjacente à tuberosidade medial do osso navicular (ossículo navicular acessório?).
-ULTRASSONOGRAFIA DE TIREOIDE
-
-RESULTADO:
-Tireóide em topografia habitual, apresentando dimensões preservadas, contornos regulares e ecotextura homogênea.
-Ausência de nódulos sólidos e/ou císticos no parênquima tireoidiano.
-Ausência de linfonodomegalias cervicais.
-
-Dimensões tireoidianas:
-Istmo: xx cm.
-Lobo direito: xx cm (L x AP x T).
-Lobo esquerdo: xx cm (L x AP x T).
-Volume glandular: xxx cm³.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-Ausência de lesão focal sólida ou cística ecograficamente significativa.
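-
-Obs.: o volume glandular informado nos modelos costuma ser estimado como a soma dos volumes dos lobos pela fórmula do elipsoide (fator aproximado de 0,52 sobre L x AP x T). Esboço meramente ilustrativo, com nomes de função e medidas hipotéticos:
-
    def volume_lobo_cm3(l_cm: float, ap_cm: float, t_cm: float, fator: float = 0.52) -> float:
        # Volume do lobo pelo método do elipsoide (fator ~0,52 = pi/6)
        return fator * l_cm * ap_cm * t_cm

    def volume_glandular_cm3(lobo_direito, lobo_esquerdo):
        # Cada lobo informado como (L, AP, T), em cm; istmo habitualmente não é somado
        return volume_lobo_cm3(*lobo_direito) + volume_lobo_cm3(*lobo_esquerdo)

    print(round(volume_glandular_cm3((4.5, 1.5, 1.5), (4.4, 1.4, 1.5)), 1))  # exemplo fictício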
-
-ULTRASSONOGRAFIA CERVICAL
-
-RESULTADO:
-Glândula tireóide em topografia habitual, apresentando dimensões preservadas, contornos regulares e ecotextura homogênea.
-Glândulas parótidas em topografia habitual, apresentando forma, contornos e ecotextura preservadas.
-Glândulas submandibulares em topografia habitual, apresentando forma, contornos e ecotextura preservadas.
-Ausência de linfonodomegalias regionais.
-
-Dimensões tireoidianas:
-Istmo: xx cm.
-Lobo direito: xx cm (L x AP x T).
-Lobo esquerdo: xx cm (L x AP x T).
-Volume glandular: xxx cm³.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA DE TIREÓIDE
-
-1. Glândula tireóide tópica, móvel à deglutição, de volume normal, contornos regulares e textura do parênquima homogênea.
-2. Não há evidência de lesão focal sólida ou cística ecograficamente significativa.
-3. O lobo direito mede _ x _ x _ mm (L x T x AP) com volume estimado de _ ml.
-4. O lobo esquerdo mede _ x _ x _ mm (L x T x AP) com volume estimado de _ ml.
-5. Istmo medindo _ mm de espessura.
-6. Traquéia centrada.
-7. Volume glandular estimado em _ cm³.
-8. Ausência de linfadenomegalias satélites.
-
-Avaliação:
-TI-RADS categoria XX.
-
-Conduta:
-
-Laudo padronizado conforme referência: ACR Thyroid Imaging, Reporting and Data System (TI-RADS): White Paper of the ACR TI-RADS Committee. Journal of the American College of Radiology, 2017.
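-
-Obs.: pelo white paper do ACR TI-RADS (2017) citado acima, a categoria decorre da soma de pontos atribuídos a composição, ecogenicidade, forma, margens e focos ecogênicos (TR1 = 0; TR2 = 2; TR3 = 3; TR4 = 4 a 6; TR5 >= 7). Esboço ilustrativo, com nome de função hipotético:
-
    def categoria_tirads(pontos_totais: int) -> str:
        # Cortes de pontuação do white paper ACR TI-RADS (2017)
        if pontos_totais == 0:
            return "TR1 (benigno)"
        if pontos_totais == 2:
            return "TR2 (não suspeito)"
        if pontos_totais == 3:
            return "TR3 (levemente suspeito)"
        if 4 <= pontos_totais <= 6:
            return "TR4 (moderadamente suspeito)"
        if pontos_totais >= 7:
            return "TR5 (altamente suspeito)"
        return "Pontuação fora das faixas tabeladas; revisar a atribuição de pontos"

    print(categoria_tirads(5))  # exemplo: TR4 (moderadamente suspeito)
-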
-TC pós-tireoidectomia:
-Glândula tireoide não caracterizada. Nota-se tecido com atenuação de partes moles ocupando a loja dos lobos tireoidianos, com tênue realce pelo meio de contraste iodado, que pode corresponder a tecido fibrocicatricial relacionado ao procedimento cirúrgico ou a lesão recidivada/residual.
-
-Identificam-se linfonodos cervicais proeminentes, porém de arquitetura preservada, os maiores em regiões submandibulares, com dimensões no limite superior da normalidade.
-
-Linfonodo cervical de dimensões preservadas, medindo 0,XX cm, porém de ecotextura heterogênea, com perda de sua arquitetura habitual, identificado no nível XX à direita/esquerda.
-Linfonodopatia cervical, conforme descrito.
-
-Linfonodo cervical de dimensões levemente aumentadas e ecotextura levemente heterogênea, com perda da diferenciação córtico-hilar, localizado na cadeia IIa esquerda, medindo cerca de 1,1 cm no seu menor eixo.
-Linfonodopatia cervical, conforme descrito.
-
-Proeminência numérica de linfonodos cervicais, destacando-se dois de dimensões aumentadas, porém de morfologia preservada, ovalados, hipoecogênicos, com mediastino ecogênico, sem sinais de degeneração cístico-necrótica, assim localizados:
-- cadeia IIa direita, medindo cerca de 1,4 cm no seu menor eixo;
-- cadeia IIa esquerda, medindo cerca de 1,1 cm no seu menor eixo.
-Linfonodopatias cervicais, conforme descrito.
-
-Proeminência numérica de linfonodos cervicais à esquerda, destacando-se um de dimensões aumentadas, porém de morfologia preservada, ovalado, hipoecogênico, com mediastino ecogênico, sem sinais de degeneração cístico-necrótica, localizado na cadeia IIa, medindo cerca de 1,4 cm no seu menor eixo.
-Linfonodopatia cervical, conforme descrito.
-
-Sugere-se, salvo contraindicação clínica, complementação com estudo citopatológico.
-Tireóide em topografia habitual, apresentando dimensões preservadas e contornos regulares.
-Ecotextura tireoidiana homogênea, exceto pela presença de duas formações nodulares, assim caracterizadas:
-- nódulo sólido, homogeneamente hipoecogênico, de contornos regulares, sem halo ou calcificações evidentes, medindo cerca de XXX cm (L x AP x T), com maior eixo paralelo à pele, localizado no terço médio do lobo direito, sem sinais de comprometimento de estruturas vizinhas;
-- nódulo sólido-cístico, predominantemente hiperecogênico, de contornos regulares, sem halo ou calcificações evidentes, medindo cerca de XXX cm (L x AP x T), com maior eixo paralelo à pele, localizado no terço médio do lobo esquerdo, sem sinais de comprometimento de estruturas vizinhas.
-
-Tireóide em topografia habitual, apresentando dimensões aumentadas e ecotextura heterogênea, sobretudo à direita/esquerda, à custa de múltiplas lesões nodulares, heterogêneas, predominantemente hipoecogênicas, com áreas de degeneração cística e calcificações grosseiras de permeio, de contornos regulares, com orientação paralela, sem sinais de comprometimento de estruturas vizinhas, destacando-se as maiores:
-- no terço superior/médio/inferior do lobo direito/esquerdo, medindo cerca de XXX cm (L x AP x T);
-
-Tireóide em topografia habitual, apresentando dimensões aumentadas e ecotextura difusamente heterogênea, à custa de múltiplas lesões nodulares, predominantemente hiperecogênicas, algumas com pequenas áreas císticas de permeio, destacando-se a maior, medindo cerca de 2,7 x 1,9 x 3,3 cm (T x AP x L), com maior eixo paralelo à pele, de contornos regulares, com halo hipoecóico, localizada na transição lobo direito-istmo, sem sinais de comprometimento de estruturas vizinhas.
-Bócio multinodular.
-
-Glândula tireóide em topografia habitual, apresentando dimensões aumentadas e ecotextura heterogênea, à custa de lesões nodulares, assim caracterizadas:
-- nódulo sólido-cístico, predominantemente hiperecogênico, de contornos regulares, sem halo ou calcificações evidentes, medindo cerca de XXX cm (L x AP x T), com maior eixo paralelo à pele, ocupando grande parte do lobo esquerdo, sem sinais de comprometimento de estruturas vizinhas;
-- nódulo sólido, homogeneamente hipoecogênico, de contornos regulares, sem halo ou calcificações evidentes, medindo cerca de XXX cm (L x AP x T), com maior eixo paralelo à pele, localizado no terço médio do lobo direito, sem sinais de comprometimento de estruturas vizinhas;
-Bócio multinodular.
-
-Tireoidopatia crônica com nódulos:
-Ecotextura tireoidiana difusamente heterogênea, com áreas hipoecogênicas entremeadas com interfaces lineares ecogênicas, destacando-se duas imagens nodulares sólidas, assim caracterizadas:
-- nódulo hipoecogênico, de contornos regulares, sem halo ou calcificações evidentes, medindo cerca de XXX cm (L x AP x T), com maior eixo paralelo à pele, localizado no terço XX do lobo XX, sem sinais de comprometimento de estruturas vizinhas;
-
-Ecotextura tireoidiana discretamente heterogênea, à custa de pequenas imagens nodulares sólidas, levemente hipoecogênicas, sem áreas de degeneração cística ou calcificações evidentes, apresentando contornos regulares, sem halo ou sinais de comprometimento de estruturas vizinhas, assim distribuídas:
-- no terço médio/inferior do lobo direito, medindo cerca de 1,0 x 0,7 x 0,9 cm (L x AP x T), com maior eixo paralelo à pele;
-- no terço médio do lobo direito, medindo cerca de 0,7 x 0,5 x 0,7 cm (L x AP x T), com maior eixo paralelo à pele;
-- no terço inferior do lobo direito, medindo cerca de 0,8 x 0,5 x 0,6 cm (L x AP x T), com maior eixo paralelo à pele. Destaca-se a marcada hipoecogenicidade desta lesão;
-- no terço médio do lobo esquerdo, medindo cerca de 0,7 x 0,3 x 0,6 cm (L x AP x T), com maior eixo paralelo à pele.
-
-Ao estudo Doppler, apresenta vascularização central e periférica, predominantemente periférica;
-
-Nota-se ecotextura tireoidiana heterogênea às custas de lesões nodulares, sendo as maiores/mais evidentes detalhadas abaixo:
-- nódulo xx: localizado no terço xx do lobo xx, medindo cerca de xx cm, apresentando contornos regulares/lobulados, halo parcial/completo, ecotextura sólida hiperecogênica/isoecogênica/hipoecogênica, com/sem áreas de degeneração cística, com/sem evidências de calcificações em seu interior e com/sem comprometimento de estruturas vizinhas. À análise ao modo Doppler, evidencia-se vascularização periférica.
-
-Glândula tireoide difusamente heterogênea, à custa de pequenas áreas hipoecogênicas com aspecto pseudonodular, entremeadas com interfaces lineares ecogênicas. Ao estudo Doppler colorido, há aumento da vascularização por todo o parênquima glandular.
-Aspecto ecográfico sugestivo de tireoidopatia autoimune.
-
-Notam-se raras imagens anecogênicas, com foco ecogênico em seu interior, localizadas no lobo esquerdo, que correspondem a cistos com conteúdo coloide, medindo o maior cerca de xx cm.
-Cistos tireoidianos com conteúdo coloide.
-
-Glândula parótida XXX em topografia habitual, apresentando dimensões levemente aumentadas e ecotextura difusamente heterogênea, com múltiplas pequenas áreas hipoecóicas esparsas. Não se evidenciam coleções associadas.
-Glândula parótida XXX aumentada e heterogênea, sugerindo processo inflamatório/infeccioso (parotidite).
-
-Glândula parótida XXX em topografia habitual, apresentando dimensões aumentadas e ecotextura difusamente heterogênea, com redução difusa da ecogenicidade, sem evidentes coleções associadas.
-PADRÃO
-
-ULTRASSONOGRAFIA COM DOPPLER DAS ARTÉRIAS CARÓTIDAS E VERTEBRAIS
-
-DIREITA
-
-CARÓTIDA COMUM: pérvia e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético.
-Espessura do complexo médio-intimal: 0,XX mm.
-CARÓTIDA INTERNA: pérvia e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético e de baixa resistência.
-CARÓTIDA EXTERNA: com emergência e segmento proximal pérvios e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético.
-ARTÉRIA VERTEBRAL: pérvia, com fluxo normocinético em direção cefálica.
-
-ESQUERDA
-
-CARÓTIDA COMUM: pérvia e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético.
-Espessura do complexo médio-intimal: 0,XX mm.
-CARÓTIDA INTERNA: pérvia e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético e de baixa resistência.
-CARÓTIDA EXTERNA: com emergência e segmento proximal pérvios e de calibre normal, sem evidência de placas ateromatosas parietais, com fluxo normocinético.
-ARTÉRIA VERTEBRAL: pérvia, com fluxo normocinético em direção cefálica.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-PADRÃO NAUJORKS
-
-ULTRASSONOGRAFIA COM DOPPLER DAS ARTÉRIAS CARÓTIDAS E VERTEBRAIS
-
-DIREITA
-
-CARÓTIDA COMUM: pérvia e de diâmetro normal. Ao estudo color-Doppler o fluxo é característico de carótida comum (laminar, de baixa resistência e normocinético).
-Espessura do complexo médio-intimal: 0,XX mm.
-CARÓTIDA INTERNA: apresenta-se pérvia e de diâmetro normal. O fluxo é característico de carótida interna (laminar, de baixa resistência e normocinético).
-CARÓTIDA EXTERNA: apresenta-se pérvia e de diâmetro normal. O fluxo é característico (laminar, de elevada resistência e normocinético).
-ARTÉRIA VERTEBRAL: apresenta-se pérvia e de diâmetro normal, sendo o vaso dominante. Ao estudo color-Doppler, o fluxo é normal para o vaso (anterógrado, de baixa resistência e normocinético).
-
-ESQUERDA
-
-CARÓTIDA COMUM: pérvia e de diâmetro normal. Ao estudo color-Doppler o fluxo é característico de carótida comum (laminar, de baixa resistência e normocinético).
-Espessura do complexo médio-intimal: 0,XX mm.
-CARÓTIDA INTERNA: apresenta-se pérvia e de diâmetro normal. O fluxo é característico de carótida interna (laminar, de baixa resistência e normocinético).
-CARÓTIDA EXTERNA: apresenta-se pérvia e de diâmetro normal. O fluxo é característico (laminar, de elevada resistência e normocinético).
-ARTÉRIA VERTEBRAL: apresenta-se pérvia e de diâmetro normal. O fluxo é normal para o vaso.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-PADRÃO ALTERADO COM ESTENOSE ACI
-
-ULTRASSONOGRAFIA COM DOPPLER DAS ARTÉRIAS CARÓTIDAS E VERTEBRAIS
-
-DIREITA
-
-CARÓTIDA COMUM: pérvia e de calibre normal, com placas ateromatosas parietais esparsas. O fluxo é normocinético.
-Da bifurcação para a CARÓTIDA INTERNA: apresenta-se placa calcificada, irregular, com extensão de cerca de XX cm e que determina redução de XX% do diâmetro transverso luminal. Ao estudo Doppler, observa-se fluxo com velocidade de pico sistólico aumentada (VPS = XX cm/s; Relação VPS ACI/ACC = XX).
-CARÓTIDA EXTERNA: pérvia e de calibre normal, com placas ateromatosas parietais em sua origem, que não promovem repercussão hemodinâmica significativa. O fluxo é normocinético.
-ARTÉRIA VERTEBRAL: pérvia, com fluxo normocinético em direção cefálica.
-
-ESQUERDA
-
-CARÓTIDA COMUM: pérvia e de calibre normal, com placas ateromatosas parietais esparsas. O fluxo é normocinético.
-Da bifurcação para a CARÓTIDA INTERNA: apresenta-se placa calcificada, irregular, com extensão de cerca de XX cm e que determina redução de XX% do diâmetro transverso luminal. Ao estudo Doppler, observa-se fluxo com velocidade de pico sistólico aumentada (VPS = XX cm/s; Relação VPS ACI/ACC = XX).
-CARÓTIDA EXTERNA: pérvia e de calibre normal, com placas ateromatosas parietais em sua origem, que não promovem repercussão hemodinâmica significativa. O fluxo é normocinético.
-ARTÉRIA VERTEBRAL: pérvia, com fluxo normocinético em direção cefálica.
-
-IMPRESSÃO:
-Sistema arterial carotídeo cervical com moderado/severo acometimento aterosclerótico.
-Estenose de XX% do diâmetro da carótida interna direita.
-Estenose de XX% do diâmetro da carótida interna esquerda.
-PADRÃO SIMPLIFICADO
-
-ULTRASSONOGRAFIA COM DOPPLER DAS ARTÉRIAS CARÓTIDAS E VERTEBRAIS
-
-DIREITA
-
-Artérias carótidas comum, interna e externa pérvias, com trajeto e calibre preservados, apresentando fluxo característico ao estudo Doppler.
-Não se evidenciam placas ateromatosas parietais.
-Espessura do complexo médio-intimal: 0,XX mm.
-Artéria vertebral pérvia, com fluxo normocinético em direção cefálica.
-
-ESQUERDA
-
-Artérias carótidas comum, interna e externa pérvias, com trajeto e calibre preservados, apresentando fluxo característico ao estudo Doppler.
-Não se evidenciam placas ateromatosas parietais.
-Espessura do complexo médio-intimal: 0,XX mm.
-Artéria vertebral pérvia, com fluxo normocinético em direção cefálica.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-PADRÃO DIMINUTAS PLACAS
-
-ULTRASSONOGRAFIA COM DOPPLER DAS ARTÉRIAS CARÓTIDAS E VERTEBRAIS
-
-DIREITA
-
-Artérias carótidas comum, interna e externa pérvias, com trajeto e calibre preservados, apresentando fluxo característico ao estudo Doppler.
-Observam-se diminutas placas parietais calcificadas (menos de 20% de estenose local) na bifurcação carotídea e origem da carótida interna.
-Artéria vertebral pérvia, com fluxo normocinético em direção cefálica.
-
-ESQUERDA
-
-Artérias carótidas comum, interna e externa pérvias, com trajeto e calibre preservados, apresentando fluxo característico ao estudo Doppler.
-Observam-se diminutas placas parietais calcificadas (menos de 20% de estenose local) na bifurcação carotídea e origem da carótida interna.
-Artéria vertebral pérvia, com fluxo normocinético em direção cefálica.
-
-IMPRESSÃO:
-Sistema arterial carotídeo cervical com leve acometimento aterosclerótico.
-Observam-se pequenas placas parietais calcificadas esparsas, sobretudo na bifurcação carotídea e origem da carótida interna, onde promovem redução de até 30% do diâmetro transverso luminal.
-
-Observam-se pequenas placas parietais calcificadas esparsas nos segmentos avaliados, destacando-se uma que se estende da bifurcação carotídea para a origem da carótida interna, medindo 1,9 cm de extensão e promovendo redução de cerca de 30% do diâmetro transverso luminal.
-
-CARÓTIDA COMUM: pérvia e de calibre normal, apresentando placa parcialmente calcificada, regular, identificada em seu terço distal, com extensão de aproximadamente 1,1 cm e que determina redução do diâmetro transverso luminal de 50-60%, com aumento da velocidade de pico sistólico do fluxo no vaso (VPS pré-placa = 74 cm/s; VPS na placa = 131 cm/s).
-
-CARÓTIDA COMUM: pérvia e de calibre normal, apresentando ateromatose difusa, com destaque para placa parcialmente calcificada, regular, identificada em seu terço distal, com extensão de aproximadamente 2,7 cm e que determina cerca de 30% de redução do diâmetro transverso luminal. O fluxo é normocinético.
-
-Presença de placa heterogênea, predominantemente ecolucente com poucas áreas ecogênicas (placa tipo II, aspecto de placa fibrolipídica) lisa que determina aproximadamente 30% de redução do diâmetro transverso luminal.
-
-Presença de placa calcificada, com sombra acústica (placa tipo V) irregular, com extensão = 1,8 cm e que determina aproximadamente 30% de redução do diâmetro transverso luminal.
-
-Da bifurcação para a CARÓTIDA INTERNA: apresenta-se placa parcialmente calcificada, irregular, com extensão de cerca de 2,4 cm e que determina redução de 70-80% do diâmetro transverso luminal. Ao estudo Doppler, observa-se fluxo com velocidade de pico sistólico aumentada (VPS = 194 cm/s; Relação VPS ACI/ACC = 5).
-
-Da bifurcação para a CARÓTIDA INTERNA: apresenta-se placa calcificada, irregular, com extensão de cerca de 1,9 cm e que determina redução do diâmetro transverso luminal de 50-69%, observando-se aumento da velocidade de pico sistólico do fluxo no vaso (VPS = 128 cm/s; Relação VPS ACI/ACC = 2,2). Distalmente o vaso é retilíneo.
-
-Bifurcação a XX cm do lóbulo da orelha.
-Da bifurcação com extensão para a CARÓTIDA INTERNA: apresenta-se longa placa calcificada volumosa, irregular, com extensão = 2,9 cm e que determina XX% de redução do diâmetro transverso luminal. O fluxo é levemente turbulento com velocidade de pico sistólico = XX cm/s e velocidade diastólica final = XX cm/s (ângulo Doppler = XX graus). Distalmente o vaso é retilíneo com diâmetro = XX mm.
-
-Sistema arterial carotídeo cervical com leve acometimento aterosclerótico.
-Sistema arterial carotídeo cervical com moderado acometimento aterosclerótico.
-Sistema arterial carotídeo cervical com severo acometimento aterosclerótico.
-
-Estenose de XX% do diâmetro da carótida XX direita/esquerda.
-
-
-ESTENOSE:
-Artéria carótida interna direita/esquerda apresentando placa de ateroma hipoecogênica e regular em sua emergência, parcialmente calcificada, estendendo-se desde o bulbo carotídeo, causando estenose de cerca de xx% da luz do vaso ao modo bidimensional, sem causar, entretanto, repercussões hemodinâmicas significativas.
-Estenose de cerca de xx% da artéria carótida interna direita/esquerda.
-
-OCLUSÃO:
-Artéria carótida interna direita/esquerda apresentando placa de ateroma hipoecogênica em sua emergência, parcialmente calcificada, estendendo-se desde o bulbo carotídeo, causando oclusão de sua luz.
-Oclusão da artéria carótida interna direita/esquerda.
-
-KINKING:
-Artéria carótida interna à esquerda apresentando acotovelamento em seu trajeto, sem causar repercussões hemodinâmicas significativas (“kinking anatômico”).
-Acotovelamento ("kinking" anatômico) da artéria carótida interna esquerda, sem repercussões hemodinâmicas significativas.
-
-PLACAS:
-Placas parietais calcificadas visualizadas nas bifurcações das carótidas comuns e na emergência da artéria carótida interna esquerda, sem causar estenose significativa da luz do vaso.
-Placas parietais calcificadas visualizadas na carótida comum direita e em sua bifurcação, sem repercussões hemodinâmicas significativas.
-Focos de espessamento intimal esparsos nos trechos visualizados dos sistemas carotídeos.
-Ateromatose do sistema carotídeo.
-
-
-Ao estudo Doppler espectral, observa-se fluxo arrítmico em todos os segmentos avaliados, sem evidentes repercussões hemodinâmicas significativas por fatores locais.
-Fluxo arrítmico em todos os segmentos avaliados, sem evidentes repercussões hemodinâmicas significativas por fatores locais.
-
-CRITÉRIOS PARA DIAGNÓSTICO DE ESTENOSE DE CARÓTIDA INTERNA
-
-Grau de estenose | VPS ACI (cm/s) | VDF ACI (cm/s) | Razão VPS ACI/ACC
-Normal           | < 125          | < 40           | < 2
-< 50%            | < 125          | < 40           | < 2
-50-69%           | 125-230        | 40-100         | 2-4
->/= 70%          | > 230          | > 100          | > 4
-
-Ref: Society of Radiologists in Ultrasound Consensus Conference - 2003
-
-
-Referência:
-Recomendação para a Quantificação pelo Ultrassom da Doença Aterosclerótica das Artérias Carótidas e Vertebrais: Grupo de Trabalho do Departamento de Imagem Cardiovascular da Sociedade Brasileira de Cardiologia (DIC-SBC). Arq Bras Cardiol: Imagem Cardiovasc. 2015;28(nº especial):e1-e64.
-
-Valores de referência para análise do Doppler espectral das artérias carótidas internas:
-Estenose entre 0-50%: velocidade de pico sistólico até 125 cm/s; velocidade diastólica final até 40 cm/s.
-Estenose entre 50-69%: velocidade de pico sistólico de 125 a 230 cm/s; velocidade diastólica final de 40 a 100 cm/s.
-Estenose >/= 70%: velocidade de pico sistólico > 230 cm/s; velocidade diastólica final > 100 cm/s.
-Fonte: Carotid Artery Stenosis: Gray-Scale and Doppler US Diagnosis - Society of Radiologists in Ultrasound Consensus Conference. Radiology. 2003;229:340-346.
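-
-Obs.: esboço ilustrativo da graduação da estenose da carótida interna a partir dos critérios do consenso SRU 2003 transcritos acima. Por simplicidade, adota-se aqui a categoria mais alta entre VPS, VDF e razão VPS ACI/ACC; na prática, a VPS é o parâmetro primário e as discordâncias exigem julgamento do examinador. Nomes de função hipotéticos:
-
    def _faixa(valor, corte_50, corte_70):
        # Retorna 0 (< 50%), 1 (50-69%) ou 2 (>/= 70%) para um parâmetro isolado
        if valor is None:
            return 0
        if valor > corte_70:
            return 2
        if valor >= corte_50:
            return 1
        return 0

    def grau_estenose_aci(vps=None, vdf=None, razao_aci_acc=None):
        graus = ["< 50% (normal na ausência de placa)", "50-69%", ">/= 70%"]
        pior = max(_faixa(vps, 125, 230), _faixa(vdf, 40, 100), _faixa(razao_aci_acc, 2, 4))
        return graus[pior]

    # Exemplo com os valores citados no texto (VPS = 194 cm/s; razão = 5): resulta ">/= 70%"
    print(grau_estenose_aci(vps=194, vdf=60, razao_aci_acc=5.0))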
-
-Valores de referência para análise do Doppler espectral das artérias carótidas internas, pós-stent carotídeo:
-Estenose maior ou igual a 30%: velocidade de pico sistólico de 154 a 223 cm/s.
-Estenose maior ou igual a 50%: velocidade de pico sistólico de 224 a 324 cm/s.
-Estenose maior ou igual a 80%: velocidade de pico sistólico maior ou igual a 325 cm/s.
-Fonte: Journal of Vascular Surgery, 48: 589-594.
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA VENOSO DO MEMBRO SUPERIOR ???
-
-
-RESULTADO:
-Exame direcionado para a pesquisa de trombose venosa.
-
-Sistema Venoso Profundo:
-Veias subclávia, axilar, braquial, radial e ulnar pérvias, com calibre normal e compressibilidade normal.
-Não se observam sinais de trombose venosa profunda.
-
-Sistema Venoso Superficial:
-Veia cefálica e basílica pérvias, com calibre normal e compressibilidade normal.
-Não há evidências de trombose venosa superficial.
-
-IMPRESSÃO:
-Ausência de sinais ecográficos de trombose venosa profunda ou superficial.
-
-
-
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA VENOSO DO MEMBRO INFERIOR ???
-
-INDICAÇÃO CLÍNICA:
-Pesquisa de trombose venosa.
-Controle de trombose venosa profunda (segmento não informado).
-
-RESULTADO:
-Sistema Venoso Profundo:
-Veias femorais comum, superficial e profunda pérvias, com compressibilidade preservada e fluxo fásico aos movimentos respiratórios, sem evidência de refluxo patológico às manobras provocativas.
-Veia poplítea pérvia, com compressibilidade preservada e fluxo fásico aos movimentos respiratórios, sem evidência de refluxo patológico às manobras provocativas.
-Veias tibiais posteriores, tibiais anteriores e fibulares pérvias, com compressibilidade preservada e fluxo fásico aos movimentos respiratórios, sem evidência de refluxo patológico às manobras provocativas.
-Veias gastrocnêmias e soleares pérvias, com compressibilidade preservada.
-
-Sistema Venoso Superficial:
-Safena magna pérvia, com compressibilidade e fluxo preservados, sem evidência de refluxo patológico às manobras provocativas.
-Safena parva pérvia, com compressibilidade e fluxo preservados, sem evidência de refluxo patológico às manobras provocativas.
-
-IMPRESSÃO:
-Ausência de sinais ecográficos de trombose venosa profunda ou superficial.
-ECODOPPLER DO SISTEMA VENOSO DOS MEMBROS INFERIORES COM MAPEAMENTO DE VARIZES
-
-
-RESULTADO - MEMBRO INFERIOR ESQUERDO:
-
-Sistema Venoso Profundo:
-Veias femorais comum, superficial e profunda pérvias, sem evidência de refluxo patológico às manobras provocativas.
-Veia poplítea pérvia, sem evidência de refluxo patológico às manobras provocativas.
-Veias tibiais posteriores, tibiais anteriores e fibulares pérvias, sem evidência de refluxo patológico às manobras provocativas.
-
-Sistema Venoso Superficial:
-Junção safeno-femoral competente/incompetente, medindo XX mm.
-Safena magna identificada em todo o seu trajeto, com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Coxa distal: XX mm
- • Perna distal: XX mm
-Safena parva com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Cavo poplíteo: XX mm
- • Perna distal: XX mm
-Não são visualizadas perfurantes insuficientes.
-
-
-RESULTADO - MEMBRO INFERIOR DIREITO:
-
-Sistema Venoso Profundo:
-Veias femorais comum, superficial e profunda pérvias, sem evidência de refluxo patológico às manobras provocativas.
-Veia poplítea pérvia, sem evidência de refluxo patológico às manobras provocativas.
-Veias tibiais posteriores, tibiais anteriores e fibulares pérvias, sem evidência de refluxo patológico às manobras provocativas.
-
-Sistema Venoso Superficial:
-Junção safeno-femoral competente/incompetente, medindo XX mm.
-Safena magna identificada em todo o seu trajeto, com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Coxa distal: XX mm
- • Perna distal: XX mm
-Safena parva com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Cavo poplíteo: XX mm
- • Perna distal: XX mm
-Não são visualizadas perfurantes insuficientes.
-
-
-IMPRESSÃO:
-Exame dentro dos parâmetros da normalidade.
-
-
-
-ECODOPPLER DO SISTEMA VENOSO DO MEMBRO INFERIOR XXX COM MAPEAMENTO DE VARIZES
-
-
-RESULTADO:
-
-Sistema Venoso Profundo:
-Veias femorais comum, superficial e profunda pérvias, sem evidência de refluxo patológico às manobras provocativas.
-Veia poplítea pérvia, sem evidência de refluxo patológico às manobras provocativas.
-Veias tibiais posteriores, tibiais anteriores e fibulares pérvias, sem evidência de refluxo patológico às manobras provocativas.
-
-Sistema Venoso Superficial:
-Junção safeno-femoral competente/incompetente, medindo XX mm.
-Safena magna identificada em todo o seu trajeto, com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Coxa distal: XX mm
- • Perna distal: XX mm
-Safena parva com perviedade preservada, sem evidência de refluxo patológico às manobras provocativas. Diâmetros transversais:
- • Cavo poplíteo: XX mm
- • Perna distal: XX mm
-Não são visualizadas perfurantes insuficientes.
-
-
-IMPRESSÃO:
-Exame dentro dos parâmetros da normalidade.
-
-
-
-
-Veia femoral comum pérvia, apresentando refluxo patológico às manobras provocativas.
-Veias femorais superficial e profunda pérvias, sem evidência de refluxo patológico às manobras provocativas.
-
-Safena magna identificada em todo o seu trajeto, com perviedade preservada, apresentando refluxo patológico às manobras provocativas no terço XX da XX. Demais segmentos com suficiência preservada.
-
-Veias perfurantes insuficientes localizadas:
- • na face medial/posterior da coxa, distando cerca de XX cm do joelho.
- • na face medial/posterior da perna, distando cerca de XX cm da face plantar.
-Veias varicosas superficiais em coxa e perna, mais proeminentes na face XX da XX.
-
-Veia safena acessória lateral/medial insuficiente.
-
-
-Veia femoral comum direita/esquerda insuficiente.
-Junções safeno-femorais insuficientes, bilateralmente.
-Junção safeno-femoral direita/esquerda insuficiente.
-Veia safena magna esquerda insuficiente em todo o seu trajeto/no terço proximal/médio/distal da coxa e no terço proximal/médio/distal da perna.
-Veia safena magna direita insuficiente em todo o seu trajeto/no terço proximal/médio/distal da coxa e no terço proximal/médio/distal da perna.
-Veias perfurantes insuficientes em localizações anteriormente descritas.
-Veias varicosas superficiais em membros inferiores, bilateralmente.
-Veias varicosas superficiais no membro inferior esquerdo.
-Veias varicosas superficiais no membro inferior direito.
-
-
-Safena magna semi-compressível, com material hiperecogênico ocupando parcialmente sua luz, e apresentando refluxo patológico às manobras provocativas em todo o seu trajeto. Diâmetros transversais:
-Trombose venosa superficial crônica da safena magna esquerda, com recanalização.
-
-
-Veias analisadas com aspecto ultrassonográfico dentro dos limites da normalidade.
-Obs.: Varizes superficiais em membros inferiores conforme ectoscopia.
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA ARTERIAL DO MEMBRO INFERIOR ?????
-
-RESULTADO:
-Artérias femorais comum, superficial e profunda pérvias, sem evidentes placas ateromatosas parietais, apresentando fluxo trifásico e normocinético.
-Artéria poplítea pérvia, sem evidentes placas ateromatosas parietais, apresentando fluxo trifásico e normocinético.
-Artérias tibial anterior, tibial posterior e fibular pérvias, sem evidentes placas ateromatosas parietais, apresentando fluxo trifásico e normocinético.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA ARTERIAL DO MEMBRO INFERIOR ????
-
-RESULTADO:
-Diminutas placas ateromatosas parietais calcificadas esparsas nos segmentos avaliados.
-Artérias femorais comum, superficial e profunda pérvias, apresentando fluxo trifásico e normocinético.
-Artéria poplítea pérvia, apresentando fluxo trifásico e normocinético.
-Artérias tibial anterior, tibial posterior e fibular pérvias, apresentando fluxo trifásico e normocinético.
-
-IMPRESSÃO:
-Leve acometimento aterosclerótico do sistema arterial do membro inferior direito, sem repercussão hemodinâmica significativa.
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA ARTERIAL DO MEMBRO INFERIOR ????
-
-RESULTADO:
-Artéria femoral comum pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria femoral profunda pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria femoral superficial pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria poplítea pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria tibial anterior pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria tibial posterior pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-Artéria fibular pérvia, com leve/moderada/severa ateromatose calcificada, apresentando fluxo laminar/turbilhonado, trifásico/bifásico/monofásico e normocinético/hipercinético/hipocinético.
-
-IMPRESSÃO:
-Sistema arterial do membro inferior ??? com leve/moderado/severo acometimento aterosclerótico.
-
-
-Artéria xx xx de calibre preservado, apresentando importante ateromatose parcialmente calcificada, causando estenose de cerca de xx% da luz do vaso ao modo B, notando-se fluxo xx, de padrão xxfásico, com xx de seus picos de velocidade sistólica.
-
-Artéria tibial anterior de calibre preservado, apresentando importante ateromatose parcialmente calcificada, sem evidências de fluxo à análise ao modo Doppler.
-Artéria tibial posterior de calibre preservado, apresentando importante ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão monofásico “tardus parvus”, com redução de seus picos de velocidade sistólica.
-Artéria fibular de calibre preservado, apresentando moderada ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão monofásico “tardus parvus”, com redução de seus picos de velocidade sistólica.
-Oclusão da artéria tibial anterior.
-
-DOPPLER ARTERIAL MMII (oclusão):
-Artéria femoral superficial de calibre preservado, apresentando moderada ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão monofásico, com redução de seus picos de velocidade sistólica em seu terço proximal.
-Presença de material ecogênico no interior da artéria femoral superficial em seu terço médio e distal, associado à ausência de sinais de fluxo sanguíneo à análise ao modo Doppler.
-Oclusão da artéria femoral superficial em seu terço médio e distal.
-
-ALTERAÇÃO:
-Artéria femoral superficial de calibre preservado, apresentando material ecogênico em seu interior, notando-se fluxo laminar:
--de padrão monofásico, com redução de seus picos de velocidade sistólica em seu terço proximal.
--de padrão monofásico, com aumento de seus picos de velocidade sistólica em seu terço médio, inferindo estenose hemodinamicamente significativa.
--de padrão “tardus parvus”, com redução de seus picos de velocidade sistólica em seu terço distal.
-Artéria poplítea de calibre preservado, apresentando leve ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão monofásico, com redução de seus picos de velocidade sistólica.
-Artéria tibial anterior de calibre preservado, apresentando leve ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão “tardus parvus”, com redução de seus picos de velocidade sistólica.
-Artéria tibial posterior de calibre preservado, sem sinais de significativa ateromatose, notando-se fluxo laminar, de padrão monofásico, com velocidade dentro da normalidade.
-Artéria fibular de calibre preservado, apresentando leve ateromatose parcialmente calcificada, notando-se fluxo laminar, de padrão “tardus parvus”, com redução de seus picos de velocidade sistólica.
-
-HIPÓTESE DIAGNÓSTICA (Compatível com):
-Estenose hemodinamicamente significativa ao nível do terço médio da artéria femoral superficial.
-Doença arterial obstrutiva periférica.
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA AORTO-ILÍACO
-
-RESULTADO:
-Foram avaliadas a aorta abdominal e as artérias ilíacas comuns e externas.
-Pequenas placas ateromatosas parietais calcificadas esparsas nos segmentos avaliados, sem estenoses significativas.
-Aorta abdominal de diâmetro preservado em toda a sua extensão, com medidas a saber:
-- ao nível da emergência das artérias renais: XX cm;
-- ao nível da bifurcação: XX cm.
-Artérias ilíacas comuns com trajeto habitual e calibre preservado, medindo XX cm à direita e XX cm à esquerda.
-Artérias ilíacas externas com trajeto habitual e calibre preservado, medindo XX cm à direita e XX cm à esquerda.
-Os vasos estudados apresentam padrão espectral e velocidades de pico sistólico normais ao estudo Doppler.
-
-IMPRESSÃO:
-Ateromatose aorto-ilíaca.
-
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA AORTO-ILÍACO
-
-RESULTADO:
-Foram avaliadas a aorta abdominal e as artérias ilíacas comuns e externas (artérias ilíacas internas de avaliação prejudicada devido à interposição gasosa intestinal).
-Placas ateromatosas parietais esparsas nos segmentos avaliados, sem estenoses significativas.
-Aorta suprarrenal ectasiada, com calibre máximo de cerca de 3,0 cm.
-Aorta infrarrenal apresentando dilatação fusiforme em sua porção distal, com 3,5 cm de diâmetro máximo, 4,6 cm de extensão, e com placas parietais parcialmente calcificadas, medindo até 0,7 cm de espessura.
-Aorta com calibre de 3,0 cm ao nível de sua bifurcação.
-Artérias ilíacas comuns com trajeto habitual, apresentando-se levemente ectasiadas, com calibre de 1,9 cm à direita e 1,6 cm à esquerda.
-Artérias ilíacas externas com trajeto habitual e calibre preservado, medindo 1,2 cm à direita e 1,3 cm à esquerda.
-Os vasos estudados apresentam padrão espectral e velocidades de pico sistólico normais ao estudo Doppler.
-
-IMPRESSÃO:
-Ateromatose aorto-ilíaca.
-Aorta ectasiada e ateromatosa, com dilatação aneurismática fusiforme em seu segmento infrarrenal, conforme descrito.
-Ectasia das artérias ilíacas comuns, bilateralmente.
-
-ULTRASSONOGRAFIA COM DOPPLER DO SISTEMA AORTO-ILÍACO
-
-RESULTADO:
-Foram avaliadas a aorta abdominal e as artérias ilíacas comuns e externas (artérias ilíacas internas de avaliação prejudicada devido à interposição gasosa intestinal).
-Placas ateromatosas parietais esparsas nos segmentos avaliados.
-Aorta abdominal de diâmetro preservado em toda a sua extensão, com medidas a saber:
-- segmento suprarrenal: 1,4 cm;
-- segmento infrarrenal: 1,4 cm;
-- ao nível da bifurcação: 1,1 cm.
-Ao estudo Doppler, a aorta abdominal apresenta-se com padrão espectral e velocidade de pico sistólico normais.
-Artéria ilíaca comum direita com trajeto habitual e calibre preservado, medindo 0,8 cm. Ao estudo Doppler apresenta fluxo trifásico, porém com aumento da velocidade de pico sistólico, sugerindo estenose de 50-70% do diâmetro transverso luminal.
-Artéria ilíaca comum esquerda com trajeto habitual e calibre preservado, medindo 0,8 cm. Ao estudo Doppler apresenta fluxo com padrão espectral e velocidade de pico sistólico normais.
-Artéria ilíaca externa direita com trajeto habitual e calibre preservado, medindo 0,9 cm. Ao estudo Doppler apresenta fluxo trifásico, porém com aumento da velocidade de pico sistólico, sugerindo estenose de 50-70% do diâmetro transverso luminal.
-Artéria ilíaca externa esquerda com trajeto habitual e calibre preservado, medindo 0,9 cm. Ao estudo Doppler apresenta fluxo com padrão espectral e velocidade de pico sistólico normais.
-
-IMPRESSÃO:
-Ateromatose aorto-ilíaca, destacando-se estenoses hemodinamicamente significativas de 50-70% nas artérias ilíacas comum e externa à direita.
-
-
-
-
-
-
-
-ULTRASSONOGRAFIA TRANSFONTANELAR
-
-RESULTADO:
-Parênquima cerebral com textura e ecogenicidade habituais.
-Eco da linha média centrado.
-Ventrículos laterais e III ventrículo apresentando forma, tamanho e ecogenicidade habitual.
-Regiões talâmicas apresentando aspecto ecográfico característico, com ecogenicidade e ecotextura preservadas.
-Fossa posterior com IV ventrículo e cerebelo de apresentação habitual.
-
-IMPRESSÃO:
-Exame dentro dos limites da normalidade.
-
-
-
-
-
-
-ULTRASSONOGRAFIA TRANSVAGINAL PARA PESQUISA DE ENDOMETRIOSE PROFUNDA
-
-1 - Exame realizado por via transvaginal após preparo intestinal.
-2 - Bexiga com conteúdo apropriado para o exame, com paredes lisas e regulares.
-Uretra retilínea.
-3 - Canal vaginal com eco central normorefringente.
-4 - Recesso vésico-uterino de aspecto ecográfico preservado. Ausência de lesões focais.
-5 - Útero em A/M/R.V.F., medindo _ x _ x _ mm, volume de _ ml, de contornos regulares e mobilidade preservada.
-6 - O miométrio apresenta estratificação preservada e ecotextura homogênea.
-7 - Eco endometrial regular, medindo _ mm de espessura.
-8 - Colo uterino medianizado com forma normal. Canal endocervical sem alterações ecográficas.
-9 - Ovário direito para-uterino, com mobilidade preservada, forma típica, de contornos bem definidos e ecotextura homogênea, medindo _ x _ x _ mm, volume de _ ml (valor de referência: 3 a 9 ml).
-10 - Ovário esquerdo para-uterino, com mobilidade preservada, forma típica, de contornos bem definidos e ecotextura homogênea, medindo _ x _ x _ mm, volume de _ ml (valor de referência: 3 a 9 ml).
-11 - Recesso reto-uterino de aspecto ecográfico preservado. Ausência de lesões focais.
-12 - Fundo de saco de Douglas livre.
-
-Opinião:
-Não foram evidenciados focos de endometriose profunda no presente exame.
-
-
-ULTRASSONOGRAFIA TRANSVAGINAL PARA PESQUISA DE ENDOMETRIOSE PROFUNDA
-
-
-Exame realizado por via transvaginal após preparo intestinal.
-Bexiga com conteúdo apropriado para o exame, com paredes lisas e regulares.
-Uretra retilínea.
-Canal vaginal com eco central normorefringente (avaliação complementada após introdução de gel vaginal).
-Recesso vésico-uterino de aspecto ecográfico preservado. Ausência de lesões focais.
-Útero em M.V.F., medindo 66 x 38 x 29 mm, volume de 37 ml, de contornos regulares e mobilidade reduzida.
-O miométrio apresenta estratificação preservada e ecotextura homogênea.
-Eco endometrial regular, medindo 2,2 mm de espessura.
-Colo uterino medianizado com forma normal. Canal endocervical sem alterações ecográficas.
-Ovário direito para-uterino, com mobilidade reduzida, forma típica, de contornos bem definidos e ecotextura homogênea, medindo 26 x 13 x 21 mm, volume de 3,7 ml (valor de referência: 3 a 9 ml).
-Imagem hipoecogênica com pontos hiperecogênicos na parede anterior do retossigmóide, com provável acometimento de ligamentos úterossacros, aparentemente aderida ao ovário direito e estruturas adjacentes, medindo 10 x 4 x 6 mm, distando aproximadamente 9/10 cm da borda anal e infiltrando a camada externa da muscular própria.
-Ovário esquerdo retro-uterino, com mobilidade reduzida, forma típica, de contornos bem definidos e ecotextura homogênea, medindo 24 x 17 x 26 mm, volume de 5,8 ml (valor de referência: 3 a 9 ml).
-Recesso reto-vaginal de aspecto ecográfico preservado. Ausência de lesões focais.
-Imagem hipoecóica, "em manto", em região retrocervical distando aproximadamente 9 cm da borda anal, e limitando mobilidade uterina.
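-
-Observação ilustrativa: os volumes uterino e ovarianos informados acima são compatíveis com a fórmula usual do elipsoide (V = π/6 × D1 × D2 × D3). O esboço em Python abaixo é apenas um exemplo didático; o fator de arredondamento exato usado pelo equipamento é uma suposição.
-
-```python
-# Esboço ilustrativo: volume pelo modelo do elipsoide (V = pi/6 * D1 * D2 * D3).
-from math import pi
-
-def volume_elipsoide_ml(d1_mm: float, d2_mm: float, d3_mm: float) -> float:
-    d1, d2, d3 = (d / 10 for d in (d1_mm, d2_mm, d3_mm))  # mm -> cm
-    return (pi / 6) * d1 * d2 * d3  # cm3 equivale a ml
-
-# Ovário direito do laudo acima: 26 x 13 x 21 mm -> aproximadamente 3,7 ml.
-print(round(volume_elipsoide_ml(26, 13, 21), 1))
-```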
-
-
-Opinião:
-Provável endometriose profunda em regiões retrocervical, retossigmóide, ovário direito e ligamentos úterossacros associada à mobilidade reduzida de útero e ovários.
-Em correlação com ultrassonografia prévia datada de 02/09/2016 houve redução do tamanho da imagem descrita em região de retossigmóide.
-
-
-ULTRASSONOGRAFIA OBSTÉTRICA COM DOPPLER
-
-Útero gravídico contendo saco gestacional normo-implantado, com feto único em situação longitudinal e apresentação cefálica, com dorso à direita.
-Movimentação fetal ativa.
-BCF presente (131 bpm).
-Placenta implantada em parede lateral esquerda, grau 2 de Grannum, com espessura média de 35 mm.
-ILA medindo: 12,1 cm. Líquido amniótico de volume normal.
-
-Biometria Fetal:
-D.B.P.: 89 mm. D.O.F.: 113 mm. F: 69 mm. PC: 327 mm. CA: 313 mm.
-Peso fetal estimado: 2708 gramas (+/-10%).
-Índice cefálico: 83 (normal: 70 a 86).
-Relação CC/CA: 1,04 (normal: 0,98 a 1,2).
-Relação CF/CA: 22 (normal: 20 a 24).
-À avaliação do aparelho urinário fetal observa-se dilatação pielocalicial e ureteral à esquerda e rim direito de aspecto habitual.
-Medida da pelve renal esquerda (ântero-posterior): 14,3 mm (valor de referência: até 10 mm).
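-
-Nota ilustrativa: as relações CC/CA e CF/CA acima decorrem diretamente das medidas da biometria, e o peso fetal estimado é compatível com a fórmula de Hadlock (PC, CA e fêmur, 1985); a fórmula efetivamente usada pelo equipamento não consta no laudo, portanto trata-se de uma suposição. Esboço mínimo em Python:
-
-```python
-# Esboço ilustrativo com as medidas do laudo acima (PC 327 mm, CA 313 mm, F 69 mm).
-pc, ca, f = 32.7, 31.3, 6.9  # medidas convertidas para cm
-
-relacao_cc_ca = pc / ca            # ~1,04 (normal: 0,98 a 1,2)
-relacao_cf_ca = 100 * f / ca       # ~22 (normal: 20 a 24)
-
-# Peso estimado pela formula de Hadlock (PC/CA/F): suposicao, apenas como referencia.
-log10_peso = 1.326 - 0.00326 * ca * f + 0.0107 * pc + 0.0438 * ca + 0.158 * f
-peso_estimado_g = 10 ** log10_peso  # ~2710 g (laudo: 2708 g +/- 10%)
-
-print(round(relacao_cc_ca, 2), round(relacao_cf_ca), round(peso_estimado_g))
-```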
-
-Ao estudo Doppler observam-se:
-Artéria umbilical: IR = 0,60 / IP = 0,87
-Artéria cerebral média: IR = 0,78 / IP = 1,67
-Índice de resistência da artéria uterina direita: 0,37.
-Índice de resistência da artéria uterina esquerda: 0,43.
-Índice de pulsatilidade da artéria uterina direita: 0,61.
-Índice de pulsatilidade da artéria uterina esquerda: 0,50.
-IP médio das artérias uterinas: 0,55 (percentil 16 para a idade gestacional).
-Pico de velocidade sistólica da artéria cerebral média: 41,5 cm/s (corresponde a 0,77 múltiplos da mediana).
-Valor de referência de pico de velocidade sistólica da artéria cerebral média para a idade gestacional: 53,52 cm/s.
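-
-Nota ilustrativa: o IP médio das artérias uterinas e os múltiplos da mediana (MoM) do pico sistólico da artéria cerebral média reportados acima são derivações aritméticas simples dos valores listados (a mediana de referência de 53,52 cm/s é a informada no próprio laudo). Esboço mínimo em Python:
-
-```python
-# Esboço ilustrativo com os valores do estudo Doppler acima.
-ip_medio_uterinas = (0.61 + 0.50) / 2   # 0,555; o laudo arredonda para 0,55
-mom_acm = 41.5 / 53.52                  # ~0,775; o laudo reporta 0,77 MoM
-print(ip_medio_uterinas, mom_acm)
-```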
-
-Opinião:
-Biometria fetal concordante com idade gestacional por ultrassonografia realizada no primeiro trimestre com 35 semanas e 6 dias de evolução (+/- 2 semanas).
-Crescimento fetal em correlação com ultrassonografia realizada no primeiro trimestre no percentil 42 para o peso.
-À avaliação do aparelho urinário fetal observa-se dilatação pielocalicial e ureteral à esquerda. A critério clínico realizar controle ultrassonográfico evolutivo dentro de 15 dias.
-IP médio das artérias uterinas abaixo do percentil 95 para a idade gestacional.
-Fluxo feto-placentário normal com artérias umbilicais apresentando índices de resistência e pulsatilidade normais.
-Não há sinais de centralização.
-Pico de velocidade sistólica da artéria cerebral média dentro da normalidade para a idade gestacional.
-
-ULTRASSONOGRAFIA OBSTÉTRICA GEMELAR – 205
-
-1. Exame realizado por via supra-púbica.
-2. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, com fetos em situação e apresentação
-variáveis.
-3. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, com feto A à direita/esquerda do abdômen
-materno, em situação longitudinal, apresentação cefálica/pélvica, com dorso
-anterior/posterior e feto B à direita/esquerda do abdômen materno, em situação
-longitudinal, apresentação cefálica/pélvica, com dorso anterior/posterior.
-5. Útero gravídico, aumentado de volume, de contornos regulares e textura homogênea,
-contendo saco gestacional normalmente implantado, de paredes íntegras, medindo _
-mm de diâmetro médio.
-6. Observam-se embriões com movimentos ativos e batimentos cardíacos presentes.
-7. Movimentos fetais ativos.
-8. BCF presente (_ bpm) no feto A e BCF presente (_ bpm) no feto B.
-9. Placenta implantada em parede _, grau _ de Grannum, com espessura média de _ mm
-no saco gestacional 1 e placenta implantada em parede _, grau _ de Grannum, com
-espessura média de _ mm, no saco gestacional 2.
-Medida do maior bolsão de líquido amniótico no saco gestacional 1 (feto A): _ mm.
-Medida do maior bolsão de líquido amniótico no saco gestacional 2 (feto B): _ mm.
-10. Líquido amniótico de volume normal em ambos os sacos gestacionais.
-11. Colo uterino de aspecto anatômico.
-11.1. Regiões anexiais sem particularidades.
-
-12. Biometria do feto A:
-
-D.B.P.: _ mm.
-D.O.F.: _ mm.
-Fêmur: _ mm.
-Úmero: _ mm.
-Perímetro cefálico: _ mm.
-Circunferência abdominal: _ mm.
-Peso fetal estimado: _ gramas (+/-10%).
-Índice cefálico: _ (normal: 70 a 86).
-Relação CC/CA: _ (normal: 0,98 a 1,2).
-Relação CF/CA: _ (normal: 20 a 24).
-
-13. Comprimento cabeça-nádegas mede _ mm.
-14. Translucência nucal medindo _ mm (Normal < 2,5 mm).
-15. Osso nasal presente.
-
-Biometria do feto B:
-
-D.B.P.: _ mm.
-D.O.F.: _ mm.
-Fêmur: _ mm.
-Úmero: _ mm.
-
-Perímetro cefálico: _ mm.
-Circunferência abdominal: _ mm.
-Peso fetal estimado: _ gramas (+/-10%).
-Índice cefálico: _ (normal: 70 a 86).
-Relação CC/CA: _ (normal: 0,98 a 1,2).
-Relação CF/CA: _ (normal: 20 a 24).
-
-17. Comprimento cabeça-nádegas mede _ mm.
-18. Translucência nucal medindo _ mm (Normal < 2,5 mm).
-19. Osso nasal presente.
-
-Opinião:
-21. Gestação gemelar, aparentemente mono/diamniótica e mono/dicoriônica (de
-acordo com ultrassonografia realizada no primeiro trimestre), com biometria do
-feto A compatível com gestação de _ semanas e _ dias de evolução (+/- _ dias) e
-biometria do feto B com _ semanas e _ dias de evolução (+/- _ dias).
-22. Idade gestacional pela DUM (_) / por ultrassonografia prévia datada de _ com _
-semanas de evolução.
-23. Correlacionar com ultrassonografias anteriores para melhor avaliação da idade
-gestacional.
-24. Não dispomos de exames anteriores para melhor avaliação da idade gestacional,
-bem como do crescimento fetal.
-25. Crescimento de ambos os fetos adequados para a idade gestacional (em correlação
-com ultrassonografia prévia), percentil _ para o feto A e percentil _ para o feto B para
-os pesos.
-26. Líquido amniótico de volume normal em ambos os sacos gestacionais.
-
-Obs.: A corionicidade pode ser determinada com acurácia por meio da ultrassonografia
-no primeiro trimestre da gestação.
-
-ULTRASSONOGRAFIA OBSTÉTRICA GEMELAR COM DOPPLER – 215
-
-1. Exame realizado por via supra-púbica.
-2. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, com fetos em situação e apresentação
-variáveis.
-3. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, com feto A em saco gestacional 1, à
-direita/esquerda do abdômen materno, em situação longitudinal, apresentação
-cefálica/pélvica, com dorso anterior/posterior e feto B em saco gestacional 2, à
-direita/esquerda do abdômen materno, em situação longitudinal, apresentação
-cefálica/pélvica, com dorso anterior/posterior.
-5. Útero gravídico, aumentado de volume, de contornos regulares e textura homogênea,
-contendo saco gestacional normalmente implantado, de paredes íntegras, medindo _
-mm de diâmetro médio.
-6. Observam-se embriões com movimentos ativos e batimentos cardíacos presentes.
-7. Movimentos fetais ativos.
-8. BCF presente (_ bpm) no feto A e BCF presente (_ bpm) no feto B.
-9. Placenta implantada em parede _, grau _ de Grannum, com espessura média de _ mm
-em saco gestacional 1 e placenta implantada em parede _, grau _ de Grannum, com
-espessura média de _ mm em saco gestacional 2.
-Medida do maior bolsão de líquido amniótico no saco gestacional 1: _ mm.
-Medida do maior bolsão de líquido amniótico no saco gestacional 2: _ mm.
-10. Líquido amniótico de volume normal em ambos os sacos gestacionais.
-11. Colo uterino de aspecto anatômico.
-
-12. Biometria do feto A:
-
-D.B.P.: _ mm.
-D.O.F.: _ mm.
-Fêmur: _ mm.
-Úmero: _ mm.
-Perímetro cefálico: _ mm.
-Circunferência abdominal: _ mm.
-Peso fetal estimado: _ gramas (+/-10%).
-Índice cefálico: _ (normal: 70 a 86).
-Relação CC/CA: _ (normal: 0,98 a 1,2).
-Relação CF/CA: _ (normal: 20 a 24).
-Crescimento fetal em correlação com ultrassonografia prévia no percentil _ para o peso.
-
-16. Ao estudo Doppler do feto A observam-se:
-
-17. Índice de resistência da artéria cerebral média: _.
-18. Índice de resistência da artéria umbilical: _.
-21. Índice de pulsatilidade da artéria cerebral média: _.
-22. Índice de pulsatilidade da artéria umbilical: _.
-
-Biometria do feto B:
-
-D.B.P.: _ mm.
-
-D.O.F.: _ mm.
-Fêmur: _ mm.
-Úmero: _ mm.
-Perímetro cefálico: _ mm.
-Circunferência abdominal: _ mm.
-Peso fetal estimado: _ gramas (+/-10%).
-Índice cefálico: _ (normal: 70 a 86).
-Relação CC/CA: _ (normal: 0,98 a 1,2).
-Relação CF/CA: _ (normal: 20 a 24).
-Crescimento fetal em correlação com ultrassonografia prévia no percentil _ para o peso.
-
-Ao estudo Doppler do feto B observam-se:
-
-Índice de resistência da artéria cerebral média: _.
-Índice de resistência da artéria umbilical: _.
-Índice de pulsatilidade da artéria cerebral média: _.
-Índice de pulsatilidade da artéria umbilical: _.
-
-Ao estudo Doppler das artérias uterinas observam-se:
-
-Índice de resistência da artéria uterina direita: _.
-Índice de resistência da artéria uterina esquerda: _.
-Índice de pulsatilidade da artéria uterina direita: _.
-Índice de pulsatilidade da artéria uterina esquerda: _.
-Índice de pulsatilidade (IP) médio das artérias uterinas: _ (abaixo do percentil 95 para a
-idade gestacional).
-
-Opinião:
-21. Gestação gemelar, aparentemente mono/dicoriônica e mono/diamniótica (de
-acordo com ultrassonografia realizada no primeiro trimestre), com biometria do
-feto A compatível com gestação de _ semanas e _ dias de evolução (+/- _ dias) e
-biometria do feto B com _ semanas e _ dias de evolução (+/- _ dias).
-22. Idade gestacional pela DUM (_) / por ultrassonografia prévia datada de _ com _
-semanas de evolução.
-23. Correlacionar com ultrassonografias anteriores para melhor avaliação da idade
-gestacional.
-24. Não dispomos de exames anteriores, correlacionar com ultrassonografias prévias
-para melhor avaliação da idade gestacional.
-25. Crescimento de ambos os fetos adequados para a idade gestacional (em correlação
-com ultrassonografia prévia), percentil _ para o feto A e percentil _ para o feto B para
-os pesos.
-28. Normodramnia em ambos os sacos gestacionais.
-29. Fluxo em artérias uterinas direita e esquerda apresentando índices de resistência e
-pulsatilidade normais.
-29.1. IP médio das artérias uterinas abaixo do percentil 95 para a idade gestacional.
-30. Presença de incisura protodiastólica em artérias uterinas direita e esquerda,
-fisiológico para idade gestacional.
-31. Fluxo feto-placentário normal com artérias umbilicais apresentando índices de
-resistência e pulsatilidade normais em fetos A e B.
-32. Não há sinais de centralização nos fetos A e B.
-
-ULTRASSONOGRAFIA OBSTÉTRICA MORFOLÓGICA DE PRIMEIRO TRIMESTRE GEMELAR COM DOPPLER – 216
-
-1. Exame realizado por via supra-púbica.
-2. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, apresentando sacos gestacionais normo-
-implantados, com feto A à direita/esquerda do abdômen materno e feto B à
-direita/esquerda do abdômen materno, ambos em situação e apresentação variáveis.
-2.1. Útero gravídico contendo gestação gemelar, aparentemente monocoriônica/
-dicoriônica e monoamniótica/diamniótica, apresentando dois sacos gestacionais normo-
-implantados, com feto A à direita/esquerda do abdômen materno, em situação
-longitudinal, apresentação cefálica/pélvica, com dorso anterior/posterior e feto B à
-direita/esquerda do abdômen materno, em situação longitudinal, apresentação
-cefálica/pélvica, com dorso anterior/posterior.
-3. Movimentos fetais ativos.
-4. Placentas implantadas em parede _, grau 0 de Grannum em ambos os sacos
-gestacionais.
-5. Líquido amniótico de volume normal em ambos os sacos gestacionais (análise
-subjetiva).
-6. Colo uterino fechado.
-7. Regiões anexiais sem particularidades.
-
-8. Anatomia do feto A:
-
-Crânio de formato ovalado, calota craniana visualizada, com contornos definidos e
-regulares. Foice cerebral sem desvio da linha média e plexos coróides presentes e
-simétricos.
-Visualizadas 4 câmaras cardíacas.
-Cordão umbilical de aspecto e inserção habituais.
-Imagem gástrica presente, bem localizada.
-Bexiga visualizada com dimensões habituais.
-Membros de aspecto habitual.
-Osso nasal presente com aspecto ecográfico habitual.
-Translucência intracraniana com aspecto ecográfico habitual.
-9. Translucência nucal mede _ mm de espessura (abaixo do percentil 95 para idade
-gestacional).
-10. BCF presente (_ bpm).
-11. Comprimento cabeça-nádegas mede _ mm.
-
-12. Ao estudo Doppler do feto A:
-
-O estudo Doppler do ducto venoso apresenta padrões morfológicos habituais (onda a
-positiva).
-O estudo Doppler da regurgitação da válvula tricúspide apresenta padrões morfológicos
-habituais.
-
-Cálculo do risco de trissomia do 21 do feto A:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
-
-
-Cálculo do risco de trissomia do 18 do feto A:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
-
-Cálculo do risco de trissomia do 13 do feto A:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
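-
-Nota ilustrativa: o "risco corrigido pós-teste" resulta do ajuste do risco basal (idade materna) por uma razão de verossimilhança derivada da translucência nucal. O esboço abaixo mostra apenas a mecânica geral desse ajuste, com uma razão de verossimilhança hipotética; o algoritmo real da Fetal Medicine Foundation considera outros parâmetros (CCN, frequência cardíaca, marcadores adicionais) e não está reproduzido aqui.
-
-```python
-# Esboço apenas da mecanica geral (razao de verossimilhanca hipotetica);
-# nao reproduz o algoritmo da Fetal Medicine Foundation.
-def risco_corrigido(risco_basal_1_em: float, razao_verossimilhanca: float) -> float:
-    """Converte 'risco 1 em N' em odds, aplica a razao e devolve o novo '1 em N'."""
-    odds_pre = 1 / (risco_basal_1_em - 1)
-    odds_pos = odds_pre * razao_verossimilhanca
-    return 1 / odds_pos + 1
-
-# Exemplo hipotetico: risco basal de 1 em 250 e razao de 0,5 -> cerca de 1 em 499.
-print(round(risco_corrigido(250, 0.5)))
-```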
-
-13. Anatomia do feto B:
-
-Crânio de formato ovalado, calota craniana visualizada, com contornos definidos e
-regulares. Foice cerebral sem desvio da linha média e plexos coróides presentes e
-simétricos.
-Visualizadas 4 câmaras cardíacas.
-Cordão umbilical de aspecto e inserção habituais.
-Imagem gástrica presente, bem localizada.
-Bexiga visualizada com dimensões habituais.
-Membros de aspecto habitual.
-Osso nasal presente com aspecto ecográfico habitual.
-Translucência intracraniana com aspecto ecográfico habitual.
-14. Translucência nucal mede _ mm de espessura (abaixo do percentil 95 para idade
-gestacional).
-15. BCF presente (_ bpm).
-16. Comprimento cabeça-nádegas mede _ mm.
-
-17. Ao estudo Doppler do feto B:
-
-O estudo Doppler do ducto venoso apresenta padrões morfológicos habituais (onda a
-positiva).
-O estudo Doppler da regurgitação da válvula tricúspide apresenta padrões morfológicos
-habituais.
-
-Cálculo do risco de trissomia do 21 do feto B:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
-
-Cálculo do risco de trissomia do 18 do feto B:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
-
-Cálculo do risco de trissomia do 13 do feto B:
-Risco basal (idade materna): 1 em _.
-Risco corrigido pós-teste (idade materna + translucência nucal): 1 em _.
-
-18. Ao estudo Doppler das artérias uterinas:
-
-Índice de resistência da artéria uterina direita: _.
-Índice de resistência da artéria uterina esquerda: _.
-Índice de pulsatilidade da artéria uterina direita: _.
-Índice de pulsatilidade da artéria uterina esquerda: _.
-
-Presença de incisura protodiastólica bilateralmente em artérias uterinas.
-Índice de pulsatilidade (IP) médio das artérias uterinas: _ (abaixo do percentil 95 para a
-idade gestacional).
-
-Opinião:
-19. Gestação gemelar, aparentemente mono/diamniótica e mono/dicoriônica, com
-biometria do feto A compatível com gestação de _ semanas e _ dias de evolução (+/-
-_ dias) e biometria do feto B com _ semanas e _ dias de evolução (+/- _ dias).
-20. Idade gestacional por ultrassonografia prévia datada de _ com _ semanas e _ dias de
-evolução.
-21. Normodramnia em ambos os sacos gestacionais.
-22. Anatomia dos fetos A e B de aspecto habitual.
-23. Avaliação Doppler do ducto venoso e regurgitação tricúspide dentro dos padrões
-habituais para os fetos A e B.
-24. IP médio das artérias uterinas abaixo do percentil 95 para a idade gestacional (baixo
-risco de doença hipertensiva específica da gestação).
-
-Obs.: A avaliação do risco de doença hipertensiva específica da gestação pode ser
-melhor individualizada através do site da Fetal Medicine Foundation
-(https://fetalmedicine.org/calculator/preeclampsia).
-
-
-ULTRASSONOGRAFIA OBSTÉTRICA GEMELAR TRANSVAGINAL – 214
-
-1. Exame realizado por via transvaginal / vias transvaginal e supra-púbica.
-2. Útero gravídico, aumentado de volume, de contornos regulares e textura homogênea,
-contendo gestação gemelar, aparentemente dicoriônica e diamniótica, contendo dois
-sacos gestacionais normalmente implantados e de paredes íntegras.
-3. Útero gravídico, aumentado de volume, de contornos regulares e textura homogênea,
-contendo gestação gemelar, aparentemente dicoriônica e diamniótica, contendo sacos
-gestacionais normalmente implantados e de paredes íntegras, com diâmetros médios de _
-no saco gestacional 1 e _ no saco gestacional 2.
-4. Observam-se dois embriões com movimentos ativos e batimentos cardíacos
-presentes.
-6. Córion frondoso tópico em sacos gestacionais 1 e 2.
-7. Líquido amniótico de aspecto habitual em ambos os sacos gestacionais.
-8. Colo uterino de aspecto anatômico.
-9. Ovário direito medindo _ x _ x _ mm (Volume de _ ml) de aspecto anatômico.
-10. Ovário esquerdo medindo _ x _ x _ mm (Volume de _ ml) de aspecto anatômico.
-
-12. Biometria do embrião A:
-Comprimento cabeça-nádegas mede: _ mm.
-Vesícula vitelínica de aspecto habitual, medindo _ mm.
-BCE presente (_ bpm), fisiológico para a idade gestacional.
-
-Biometria do embrião B:
-Comprimento cabeça-nádegas mede: _ mm.
-Vesícula vitelínica de aspecto habitual, medindo _ mm.
-BCE presente (_ bpm), fisiológico para a idade gestacional.
-
-Opinião:
-21. Gestação gemelar, aparentemente mono/dicoriônica e mono/diamniótica, com
-biometrias dos fetos A e B compatíveis com gestação de _ semanas e _ dias de
-evolução (+/- _ dias).
-
-
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Kid Kulafu Full Movie Tagalog 25.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Kid Kulafu Full Movie Tagalog 25.md
deleted file mode 100644
index eef1bc876a74d2363da4c06e16d8127cc50f2b38..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Kid Kulafu Full Movie Tagalog 25.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
-Contents
-
-A story about a boxing coach named Stephen "The Sandman" Balatacan, played by Coco Martin, who has a chance to become a champion in his local league, while losing his love in the process. After a year of being married, Balatacan started to compete in a boxing tournament, in which he has to fight against many opponents.
-
-Bridget Reis portrays Balatacan's wife, who has a child with Balatacan. She has a troubled past, being betrayed by her husband who was abusive to her and is currently in the shadows of the law, a fact that does not escape Balatacan's notice.
-
-Steven Silva and Carmi Martin are also featured in the film.
-
-Synopsis
-
-The film starts in a café in Jomar, an industrial town in the province of Nueva Ecija. The café's owner, Lucy (Carmi Martin), is reading a book while waiting for her boyfriend to arrive. At the café is Stephen "The Sandman" Balatacan (Coco Martin), who is celebrating his second day of retirement after working as a boxing coach at a boxing camp for many years. He is now longing to spend more time with his family, but his wife Bridget (Bridget Reis) has been staying at her parents' house since they got married a year ago and has not been able to return home. A few days before his retirement, Balatacan has a boxing competition which will determine if he is able to box at the national level or not, and if he will become a champion in his local league.
-
-After receiving a call from his friend who was hurt in a car accident, Balatacan sets off to go back home, at the same time his close friend Anthony (Lud / Steven Silva) arrives at the café. When Balatacan arrives home, he is met by his son who is angry at his father for forcing him to work as a boxer for his boxing camp. Balatacan was unable to tell him that he had no choice in the matter. Bridget and the children are shocked at the news of Balatacan's retirement and they argue about it.
-
-Back in the boxing camp, Balatacan attends his first practice with his new boxing team. He meets his first opponent and the two start to get along. He asks Anthony to remain with him at the boxing camp for one week, as he wants his son to learn boxing at
-
-
-
diff --git a/spaces/linfanluntan/Grounded-SAM/app.py b/spaces/linfanluntan/Grounded-SAM/app.py
deleted file mode 100644
index 73b7021ad0b9ff24342e04eccab205afb5d31ae0..0000000000000000000000000000000000000000
--- a/spaces/linfanluntan/Grounded-SAM/app.py
+++ /dev/null
@@ -1,336 +0,0 @@
-import os, sys
-import random
-import warnings
-
-os.system("python -m pip install -e segment_anything")
-os.system("python -m pip install -e GroundingDINO")
-os.system("pip install --upgrade diffusers[torch]")
-os.system("pip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel")
-os.system("wget https://github.com/IDEA-Research/Grounded-Segment-Anything/raw/main/assets/demo1.jpg")
-os.system("wget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth")
-os.system("wget https://huggingface.co/spaces/mrtlive/segment-anything-model/resolve/main/sam_vit_h_4b8939.pth")
-sys.path.append(os.path.join(os.getcwd(), "GroundingDINO"))
-sys.path.append(os.path.join(os.getcwd(), "segment_anything"))
-warnings.filterwarnings("ignore")
-
-import gradio as gr
-import argparse
-
-import numpy as np
-import torch
-import torchvision
-from PIL import Image, ImageDraw, ImageFont
-
-# Grounding DINO
-import GroundingDINO.groundingdino.datasets.transforms as T
-from GroundingDINO.groundingdino.models import build_model
-from GroundingDINO.groundingdino.util.slconfig import SLConfig
-from GroundingDINO.groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap
-
-# segment anything
-from segment_anything import build_sam, SamPredictor
-import numpy as np
-
-# diffusers
-import torch
-from diffusers import StableDiffusionInpaintPipeline
-
-# BLIP
-from transformers import BlipProcessor, BlipForConditionalGeneration
-
-
-def generate_caption(processor, blip_model, raw_image):
- # unconditional image captioning
- inputs = processor(raw_image, return_tensors="pt").to(
- "cuda", torch.float16)
- out = blip_model.generate(**inputs)
- caption = processor.decode(out[0], skip_special_tokens=True)
- return caption
-
-
-def transform_image(image_pil):
-
- transform = T.Compose(
- [
- T.RandomResize([800], max_size=1333),
- T.ToTensor(),
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
- ]
- )
- image, _ = transform(image_pil, None) # 3, h, w
- return image
-
-
-def load_model(model_config_path, model_checkpoint_path, device):
- args = SLConfig.fromfile(model_config_path)
- args.device = device
- model = build_model(args)
- checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
- load_res = model.load_state_dict(
- clean_state_dict(checkpoint["model"]), strict=False)
- print(load_res)
- _ = model.eval()
- return model
-
-
-def get_grounding_output(model, image, caption, box_threshold, text_threshold, with_logits=True):
- caption = caption.lower()
- caption = caption.strip()
- if not caption.endswith("."):
- caption = caption + "."
-
- with torch.no_grad():
- outputs = model(image[None], captions=[caption])
- logits = outputs["pred_logits"].cpu().sigmoid()[0] # (nq, 256)
- boxes = outputs["pred_boxes"].cpu()[0] # (nq, 4)
- logits.shape[0]
-
- # filter output
- logits_filt = logits.clone()
- boxes_filt = boxes.clone()
- filt_mask = logits_filt.max(dim=1)[0] > box_threshold
- logits_filt = logits_filt[filt_mask] # num_filt, 256
- boxes_filt = boxes_filt[filt_mask] # num_filt, 4
- logits_filt.shape[0]
-
- # get phrase
- tokenlizer = model.tokenizer
- tokenized = tokenlizer(caption)
- # build pred
- pred_phrases = []
- scores = []
- for logit, box in zip(logits_filt, boxes_filt):
- pred_phrase = get_phrases_from_posmap(
- logit > text_threshold, tokenized, tokenlizer)
- if with_logits:
- pred_phrases.append(
- pred_phrase + f"({str(logit.max().item())[:4]})")
- else:
- pred_phrases.append(pred_phrase)
- scores.append(logit.max().item())
-
- return boxes_filt, torch.Tensor(scores), pred_phrases
-
-
-def draw_mask(mask, draw, random_color=False):
- if random_color:
- color = (random.randint(0, 255), random.randint(
- 0, 255), random.randint(0, 255), 153)
- else:
- color = (30, 144, 255, 153)
-
- nonzero_coords = np.transpose(np.nonzero(mask))
-
- for coord in nonzero_coords:
- draw.point(coord[::-1], fill=color)
-
-
-def draw_box(box, draw, label):
- # random color
- color = tuple(np.random.randint(0, 255, size=3).tolist())
-
- draw.rectangle(((box[0], box[1]), (box[2], box[3])),
- outline=color, width=2)
-
- if label:
- font = ImageFont.load_default()
- if hasattr(font, "getbbox"):
- bbox = draw.textbbox((box[0], box[1]), str(label), font)
- else:
- w, h = draw.textsize(str(label), font)
- bbox = (box[0], box[1], w + box[0], box[1] + h)
- draw.rectangle(bbox, fill=color)
- draw.text((box[0], box[1]), str(label), fill="white")
-
- draw.text((box[0], box[1]), label)
-
-
-config_file = 'GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py'
-ckpt_repo_id = "ShilongLiu/GroundingDINO"
-ckpt_filenmae = "groundingdino_swint_ogc.pth"
-sam_checkpoint = 'sam_vit_h_4b8939.pth'
-output_dir = "outputs"
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-
-blip_processor = None
-blip_model = None
-groundingdino_model = None
-sam_predictor = None
-inpaint_pipeline = None
-
-
-def run_grounded_sam(input_image, text_prompt, task_type, inpaint_prompt, box_threshold, text_threshold, iou_threshold, inpaint_mode):
-
- global blip_processor, blip_model, groundingdino_model, sam_predictor, inpaint_pipeline
-
- # make dir
- os.makedirs(output_dir, exist_ok=True)
- # load image
- image_pil = input_image.convert("RGB")
- transformed_image = transform_image(image_pil)
-
- if groundingdino_model is None:
- groundingdino_model = load_model(
- config_file, ckpt_filenmae, device=device)
-
- if task_type == 'automatic':
- # generate caption and tags
- # use Tag2Text can generate better captions
- # https://huggingface.co/spaces/xinyu1205/Tag2Text
- # but there are some bugs...
- blip_processor = blip_processor or BlipProcessor.from_pretrained(
- "Salesforce/blip-image-captioning-large")
- blip_model = blip_model or BlipForConditionalGeneration.from_pretrained(
- "Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")
- text_prompt = generate_caption(blip_processor, blip_model, image_pil)
- print(f"Caption: {text_prompt}")
-
- # run grounding dino model
- boxes_filt, scores, pred_phrases = get_grounding_output(
- groundingdino_model, transformed_image, text_prompt, box_threshold, text_threshold
- )
-
- size = image_pil.size
-
- # process boxes
- H, W = size[1], size[0]
- for i in range(boxes_filt.size(0)):
- boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H])
- boxes_filt[i][:2] -= boxes_filt[i][2:] / 2
- boxes_filt[i][2:] += boxes_filt[i][:2]
-
- boxes_filt = boxes_filt.cpu()
-
- # nms
- print(f"Before NMS: {boxes_filt.shape[0]} boxes")
- nms_idx = torchvision.ops.nms(
- boxes_filt, scores, iou_threshold).numpy().tolist()
- boxes_filt = boxes_filt[nms_idx]
- pred_phrases = [pred_phrases[idx] for idx in nms_idx]
- print(f"After NMS: {boxes_filt.shape[0]} boxes")
-
- if task_type == 'seg' or task_type == 'inpainting' or task_type == 'automatic':
- if sam_predictor is None:
- # initialize SAM
- assert sam_checkpoint, 'sam_checkpoint is not found!'
- sam = build_sam(checkpoint=sam_checkpoint)
- sam.to(device=device)
- sam_predictor = SamPredictor(sam)
-
- image = np.array(image_pil)
- sam_predictor.set_image(image)
-
- if task_type == 'automatic':
- # use NMS to handle overlapped boxes
- print(f"Revise caption with number: {text_prompt}")
-
- transformed_boxes = sam_predictor.transform.apply_boxes_torch(
- boxes_filt, image.shape[:2]).to(device)
-
- masks, _, _ = sam_predictor.predict_torch(
- point_coords=None,
- point_labels=None,
- boxes=transformed_boxes,
- multimask_output=False,
- )
-
- # masks: [1, 1, 512, 512]
-
- if task_type == 'det':
- image_draw = ImageDraw.Draw(image_pil)
- for box, label in zip(boxes_filt, pred_phrases):
- draw_box(box, image_draw, label)
-
- return [image_pil]
- elif task_type == 'seg' or task_type == 'automatic':
-
- mask_image = Image.new('RGBA', size, color=(0, 0, 0, 0))
-
- mask_draw = ImageDraw.Draw(mask_image)
- for mask in masks:
- draw_mask(mask[0].cpu().numpy(), mask_draw, random_color=True)
-
- image_draw = ImageDraw.Draw(image_pil)
-
- for box, label in zip(boxes_filt, pred_phrases):
- draw_box(box, image_draw, label)
-
- if task_type == 'automatic':
- image_draw.text((10, 10), text_prompt, fill='black')
-
- image_pil = image_pil.convert('RGBA')
- image_pil.alpha_composite(mask_image)
- return [image_pil, mask_image]
- elif task_type == 'inpainting':
- assert inpaint_prompt, 'inpaint_prompt is not found!'
- # inpainting pipeline
- if inpaint_mode == 'merge':
- masks = torch.sum(masks, dim=0).unsqueeze(0)
- masks = torch.where(masks > 0, True, False)
- # simply choose the first mask, which will be refine in the future release
- mask = masks[0][0].cpu().numpy()
- mask_pil = Image.fromarray(mask)
-
- if inpaint_pipeline is None:
- inpaint_pipeline = StableDiffusionInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
- )
- inpaint_pipeline = inpaint_pipeline.to("cuda")
-
- image = inpaint_pipeline(prompt=inpaint_prompt, image=image_pil.resize(
- (512, 512)), mask_image=mask_pil.resize((512, 512))).images[0]
- image = image.resize(size)
-
- return [image, mask_pil]
- else:
- print("task_type:{} error!".format(task_type))
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser("Grounded SAM demo", add_help=True)
- parser.add_argument("--debug", action="store_true",
- help="using debug mode")
- parser.add_argument("--share", action="store_true", help="share the app")
- parser.add_argument('--no-gradio-queue', action="store_true",
- help='path to the SAM checkpoint')
- args = parser.parse_args()
-
- print(args)
-
- block = gr.Blocks()
- if not args.no_gradio_queue:
- block = block.queue()
-
- with block:
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(
- source='upload', type="pil", value="demo1.jpg")
- task_type = gr.Dropdown(
- ["det", "seg", "inpainting", "automatic"], value="seg", label="task_type")
- text_prompt = gr.Textbox(label="Text Prompt", placeholder="bear . beach .")
- inpaint_prompt = gr.Textbox(label="Inpaint Prompt", placeholder="A dinosaur, detailed, 4K.")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- box_threshold = gr.Slider(
- label="Box Threshold", minimum=0.0, maximum=1.0, value=0.3, step=0.001
- )
- text_threshold = gr.Slider(
- label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001
- )
- iou_threshold = gr.Slider(
- label="IOU Threshold", minimum=0.0, maximum=1.0, value=0.8, step=0.001
- )
- inpaint_mode = gr.Dropdown(
- ["merge", "first"], value="merge", label="inpaint_mode")
-
- with gr.Column():
- gallery = gr.Gallery(
- label="Generated images", show_label=False, elem_id="gallery"
- ).style(preview=True, grid=2, object_fit="scale-down")
-
- run_button.click(fn=run_grounded_sam, inputs=[
- input_image, text_prompt, task_type, inpaint_prompt, box_threshold, text_threshold, iou_threshold, inpaint_mode], outputs=gallery)
-
- block.launch(debug=args.debug, share=args.share, show_error=True)
diff --git a/spaces/lj1995/vocal2guitar/trainset_preprocess_pipeline_print.py b/spaces/lj1995/vocal2guitar/trainset_preprocess_pipeline_print.py
deleted file mode 100644
index fe1643deefd8795ecea676aab981508650d65128..0000000000000000000000000000000000000000
--- a/spaces/lj1995/vocal2guitar/trainset_preprocess_pipeline_print.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import sys, os, multiprocessing
-from scipy import signal
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-inp_root = sys.argv[1]
-sr = int(sys.argv[2])
-n_p = int(sys.argv[3])
-exp_dir = sys.argv[4]
-noparallel = sys.argv[5] == "True"
-import numpy as np, os, traceback
-from slicer2 import Slicer
-import librosa, traceback
-from scipy.io import wavfile
-import multiprocessing
-from my_utils import load_audio
-
-mutex = multiprocessing.Lock()
-f = open("%s/preprocess.log" % exp_dir, "a+")
-
-
-def println(strr):
- mutex.acquire()
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
- mutex.release()
-
-
-class PreProcess:
- def __init__(self, sr, exp_dir):
- self.slicer = Slicer(
- sr=sr,
- threshold=-42,
- min_length=1500,
- min_interval=400,
- hop_size=15,
- max_sil_kept=500,
- )
- self.sr = sr
- self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr)
- self.per = 3.7
- self.overlap = 0.3
- self.tail = self.per + self.overlap
- self.max = 0.9
- self.alpha = 0.75
- self.exp_dir = exp_dir
- self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir
- self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir
- os.makedirs(self.exp_dir, exist_ok=True)
- os.makedirs(self.gt_wavs_dir, exist_ok=True)
- os.makedirs(self.wavs16k_dir, exist_ok=True)
-
- def norm_write(self, tmp_audio, idx0, idx1):
- tmp_audio = (tmp_audio / np.abs(tmp_audio).max() * (self.max * self.alpha)) + (
- 1 - self.alpha
- ) * tmp_audio
- wavfile.write(
- "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
- self.sr,
- tmp_audio.astype(np.float32),
- )
- tmp_audio = librosa.resample(
- tmp_audio, orig_sr=self.sr, target_sr=16000
- ) # , res_type="soxr_vhq"
- wavfile.write(
- "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
- 16000,
- tmp_audio.astype(np.float32),
- )
-
- def pipeline(self, path, idx0):
- try:
- audio = load_audio(path, self.sr)
- # zero phased digital filter cause pre-ringing noise...
- # audio = signal.filtfilt(self.bh, self.ah, audio)
- audio = signal.lfilter(self.bh, self.ah, audio)
-
- idx1 = 0
- for audio in self.slicer.slice(audio):
- i = 0
- while 1:
- start = int(self.sr * (self.per - self.overlap) * i)
- i += 1
- if len(audio[start:]) > self.tail * self.sr:
- tmp_audio = audio[start : start + int(self.per * self.sr)]
- self.norm_write(tmp_audio, idx0, idx1)
- idx1 += 1
- else:
- tmp_audio = audio[start:]
- idx1 += 1
- break
- self.norm_write(tmp_audio, idx0, idx1)
- println("%s->Suc." % path)
- except:
- println("%s->%s" % (path, traceback.format_exc()))
-
- def pipeline_mp(self, infos):
- for path, idx0 in infos:
- self.pipeline(path, idx0)
-
- def pipeline_mp_inp_dir(self, inp_root, n_p):
- try:
- infos = [
- ("%s/%s" % (inp_root, name), idx)
- for idx, name in enumerate(sorted(list(os.listdir(inp_root))))
- ]
- if noparallel:
- for i in range(n_p):
- self.pipeline_mp(infos[i::n_p])
- else:
- ps = []
- for i in range(n_p):
- p = multiprocessing.Process(
- target=self.pipeline_mp, args=(infos[i::n_p],)
- )
- ps.append(p)
- p.start()
- for i in range(n_p):
- ps[i].join()
- except:
- println("Fail. %s" % traceback.format_exc())
-
-
-def preprocess_trainset(inp_root, sr, n_p, exp_dir):
- pp = PreProcess(sr, exp_dir)
- println("start preprocess")
- println(sys.argv)
- pp.pipeline_mp_inp_dir(inp_root, n_p)
- println("end preprocess")
-
-
-if __name__ == "__main__":
- preprocess_trainset(inp_root, sr, n_p, exp_dir)
diff --git a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/layers.py b/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/layers.py
deleted file mode 100644
index 9835dc0f0dd66a7ef3517101180ec2c54eb6011d..0000000000000000000000000000000000000000
--- a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/layers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from uvr5_pack.lib_v5 import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_fpn.py b/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_fpn.py
deleted file mode 100644
index 3f87d8da83d93932ddd5e9dc5b38d42786c0cbb4..0000000000000000000000000000000000000000
--- a/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_fpn.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.meta_arch import GeneralizedRCNN
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator
-from detectron2.modeling.backbone.fpn import LastLevelMaxPool
-from detectron2.modeling.backbone import BasicStem, FPN, ResNet
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.matcher import Matcher
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.proposal_generator import RPN, StandardRPNHead
-from detectron2.modeling.roi_heads import (
- StandardROIHeads,
- FastRCNNOutputLayers,
- MaskRCNNConvUpsampleHead,
- FastRCNNConvFCHead,
-)
-
-model = L(GeneralizedRCNN)(
- backbone=L(FPN)(
- bottom_up=L(ResNet)(
- stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"),
- stages=L(ResNet.make_default_stages)(
- depth=50,
- stride_in_1x1=True,
- norm="FrozenBN",
- ),
- out_features=["res2", "res3", "res4", "res5"],
- ),
- in_features="${.bottom_up.out_features}",
- out_channels=256,
- top_block=L(LastLevelMaxPool)(),
- ),
- proposal_generator=L(RPN)(
- in_features=["p2", "p3", "p4", "p5", "p6"],
- head=L(StandardRPNHead)(in_channels=256, num_anchors=3),
- anchor_generator=L(DefaultAnchorGenerator)(
- sizes=[[32], [64], [128], [256], [512]],
- aspect_ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64],
- offset=0.0,
- ),
- anchor_matcher=L(Matcher)(
- thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True
- ),
- box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]),
- batch_size_per_image=256,
- positive_fraction=0.5,
- pre_nms_topk=(2000, 1000),
- post_nms_topk=(1000, 1000),
- nms_thresh=0.7,
- ),
- roi_heads=L(StandardROIHeads)(
- num_classes=80,
- batch_size_per_image=512,
- positive_fraction=0.25,
- proposal_matcher=L(Matcher)(
- thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False
- ),
- box_in_features=["p2", "p3", "p4", "p5"],
- box_pooler=L(ROIPooler)(
- output_size=7,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- box_head=L(FastRCNNConvFCHead)(
- input_shape=ShapeSpec(channels=256, height=7, width=7),
- conv_dims=[],
- fc_dims=[1024, 1024],
- ),
- box_predictor=L(FastRCNNOutputLayers)(
- input_shape=ShapeSpec(channels=1024),
- test_score_thresh=0.05,
- box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)),
- num_classes="${..num_classes}",
- ),
- mask_in_features=["p2", "p3", "p4", "p5"],
- mask_pooler=L(ROIPooler)(
- output_size=14, # ori is 14
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- mask_head=L(MaskRCNNConvUpsampleHead)(
- input_shape=ShapeSpec(channels=256, width=14, height=14),
- num_classes="${..num_classes}",
- conv_dims=[256, 256, 256, 256, 256],
- ),
- ),
- pixel_mean=[103.530, 116.280, 123.675],
- pixel_std=[1.0, 1.0, 1.0],
- input_format="BGR",
-)
diff --git a/spaces/lordvader31/almithal/mindmap.py b/spaces/lordvader31/almithal/mindmap.py
deleted file mode 100644
index b978b4ab386b98b40abe1021bc696d9343efd1df..0000000000000000000000000000000000000000
--- a/spaces/lordvader31/almithal/mindmap.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import openai
-import json
-import graphviz
-import streamlit as st
-
-class MindMap:
-
- def __init__(self):
- openai.api_key = os.getenv("OPENAI_API_KEY")
-
- def get_connections(self, text_chunks_libs:dict) -> list:
-
- state_prompt = open("./prompts/mindmap.prompt")
- PROMPT = state_prompt.read()
- state_prompt.close()
-
- final_connections = []
- for key in text_chunks_libs:
- for text_chunk in text_chunks_libs[key]:
- chunk_prompt = PROMPT.replace("$prompt", text_chunk)  # keep the template intact for later chunks
-
- response = openai.Completion.create(
- engine="text-davinci-003",
- prompt=chunk_prompt,
- temperature=0.5,
- max_tokens=2048,
- top_p=1,
- frequency_penalty=0.0,
- presence_penalty=0.0,
- )
-
- relationships = response.choices[0].text
- final_string = '{"relations":' + relationships + '}'
- data = json.loads(final_string)
- relations = data["relations"]
- final_connections.extend(relations)
- return final_connections
-
-
- def generate_graph(self, text_chunks_libs:dict):
- graph = graphviz.Digraph()
- all_connections = self.get_connections(text_chunks_libs)
- for connection in all_connections:
- from_node = connection[0]
- to_node = connection[2]
- graph.edge(from_node, to_node)
- st.graphviz_chart(graph)
\ No newline at end of file
diff --git a/spaces/loveu-tgve/loveu-tgve-leaderboard/README.md b/spaces/loveu-tgve/loveu-tgve-leaderboard/README.md
deleted file mode 100644
index 637af71c6c778ea0071e483e9352a63b8969586f..0000000000000000000000000000000000000000
--- a/spaces/loveu-tgve/loveu-tgve-leaderboard/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: LOVEU-TGVE Leaderboard
-emoji: 📊
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/iostream.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/iostream.h
deleted file mode 100644
index eaf92dfa49add54c298844b31898a82de3fb429d..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/iostream.h
+++ /dev/null
@@ -1,209 +0,0 @@
-/*
- pybind11/iostream.h -- Tools to assist with redirecting cout and cerr to Python
-
- Copyright (c) 2017 Henry F. Schreiner
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#include "pybind11.h"
-
-#include <streambuf>
-#include <ostream>
-#include <string>
-#include <memory>
-#include <iostream>
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-PYBIND11_NAMESPACE_BEGIN(detail)
-
-// Buffer that writes to Python instead of C++
-class pythonbuf : public std::streambuf {
-private:
- using traits_type = std::streambuf::traits_type;
-
- const size_t buf_size;
- std::unique_ptr<char[]> d_buffer;
- object pywrite;
- object pyflush;
-
- int overflow(int c) {
- if (!traits_type::eq_int_type(c, traits_type::eof())) {
- *pptr() = traits_type::to_char_type(c);
- pbump(1);
- }
- return sync() == 0 ? traits_type::not_eof(c) : traits_type::eof();
- }
-
- int sync() {
- if (pbase() != pptr()) {
- // This subtraction cannot be negative, so dropping the sign
- str line(pbase(), static_cast<size_t>(pptr() - pbase()));
-
- {
- gil_scoped_acquire tmp;
- pywrite(line);
- pyflush();
- }
-
- setp(pbase(), epptr());
- }
- return 0;
- }
-
-public:
-
- pythonbuf(object pyostream, size_t buffer_size = 1024)
- : buf_size(buffer_size),
- d_buffer(new char[buf_size]),
- pywrite(pyostream.attr("write")),
- pyflush(pyostream.attr("flush")) {
- setp(d_buffer.get(), d_buffer.get() + buf_size - 1);
- }
-
- pythonbuf(pythonbuf&&) = default;
-
- /// Sync before destroy
- ~pythonbuf() {
- sync();
- }
-};
-
-PYBIND11_NAMESPACE_END(detail)
-
-
-/** \rst
- This is a move-only guard that redirects output.
-
- .. code-block:: cpp
-
- #include <pybind11/iostream.h>
-
- ...
-
- {
- py::scoped_ostream_redirect output;
- std::cout << "Hello, World!"; // Python stdout
- } // <-- return std::cout to normal
-
- You can explicitly pass the c++ stream and the python object,
- for example to guard stderr instead.
-
- .. code-block:: cpp
-
- {
- py::scoped_ostream_redirect output{std::cerr, py::module::import("sys").attr("stderr")};
- std::cerr << "Hello, World!";
- }
- \endrst */
-class scoped_ostream_redirect {
-protected:
- std::streambuf *old;
- std::ostream &costream;
- detail::pythonbuf buffer;
-
-public:
- scoped_ostream_redirect(
- std::ostream &costream = std::cout,
- object pyostream = module::import("sys").attr("stdout"))
- : costream(costream), buffer(pyostream) {
- old = costream.rdbuf(&buffer);
- }
-
- ~scoped_ostream_redirect() {
- costream.rdbuf(old);
- }
-
- scoped_ostream_redirect(const scoped_ostream_redirect &) = delete;
- scoped_ostream_redirect(scoped_ostream_redirect &&other) = default;
- scoped_ostream_redirect &operator=(const scoped_ostream_redirect &) = delete;
- scoped_ostream_redirect &operator=(scoped_ostream_redirect &&) = delete;
-};
-
-
-/** \rst
- Like `scoped_ostream_redirect`, but redirects cerr by default. This class
- is provided primarily to make ``py::call_guard`` easier to use.
-
- .. code-block:: cpp
-
- m.def("noisy_func", &noisy_func,
- py::call_guard<scoped_ostream_redirect, scoped_estream_redirect>());
-
-\endrst */
-class scoped_estream_redirect : public scoped_ostream_redirect {
-public:
- scoped_estream_redirect(
- std::ostream &costream = std::cerr,
- object pyostream = module::import("sys").attr("stderr"))
- : scoped_ostream_redirect(costream,pyostream) {}
-};
-
-
-PYBIND11_NAMESPACE_BEGIN(detail)
-
-// Class to redirect output as a context manager. C++ backend.
-class OstreamRedirect {
- bool do_stdout_;
- bool do_stderr_;
- std::unique_ptr<scoped_ostream_redirect> redirect_stdout;
- std::unique_ptr<scoped_estream_redirect> redirect_stderr;
-
-public:
- OstreamRedirect(bool do_stdout = true, bool do_stderr = true)
- : do_stdout_(do_stdout), do_stderr_(do_stderr) {}
-
- void enter() {
- if (do_stdout_)
- redirect_stdout.reset(new scoped_ostream_redirect());
- if (do_stderr_)
- redirect_stderr.reset(new scoped_estream_redirect());
- }
-
- void exit() {
- redirect_stdout.reset();
- redirect_stderr.reset();
- }
-};
-
-PYBIND11_NAMESPACE_END(detail)
-
-/** \rst
- This is a helper function to add a C++ redirect context manager to Python
- instead of using a C++ guard. To use it, add the following to your binding code:
-
- .. code-block:: cpp
-
- #include <pybind11/iostream.h>
-
- ...
-
- py::add_ostream_redirect(m, "ostream_redirect");
-
- You now have a Python context manager that redirects your output:
-
- .. code-block:: python
-
- with m.ostream_redirect():
- m.print_to_cout_function()
-
- This manager can optionally be told which streams to operate on:
-
- .. code-block:: python
-
- with m.ostream_redirect(stdout=True, stderr=True):
- m.noisy_function_with_error_printing()
-
- \endrst */
-inline class_<detail::OstreamRedirect> add_ostream_redirect(module m, std::string name = "ostream_redirect") {
- return class_<detail::OstreamRedirect>(m, name.c_str(), module_local())
- .def(init<bool, bool>(), arg("stdout")=true, arg("stderr")=true)
- .def("__enter__", &detail::OstreamRedirect::enter)
- .def("__exit__", [](detail::OstreamRedirect &self_, args) { self_.exit(); });
-}
-
-PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_iostream.py b/spaces/ma-xu/LIVE/pybind11/tests/test_iostream.py
deleted file mode 100644
index 7ac4fcece0b089c03e240a0ae89e54c0c33feedf..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/pybind11/tests/test_iostream.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# -*- coding: utf-8 -*-
-from pybind11_tests import iostream as m
-import sys
-
-from contextlib import contextmanager
-
-try:
- # Python 3
- from io import StringIO
-except ImportError:
- # Python 2
- try:
- from cStringIO import StringIO
- except ImportError:
- from StringIO import StringIO
-
-try:
- # Python 3.4
- from contextlib import redirect_stdout
-except ImportError:
- @contextmanager
- def redirect_stdout(target):
- original = sys.stdout
- sys.stdout = target
- yield
- sys.stdout = original
-
-try:
- # Python 3.5
- from contextlib import redirect_stderr
-except ImportError:
- @contextmanager
- def redirect_stderr(target):
- original = sys.stderr
- sys.stderr = target
- yield
- sys.stderr = original
-
-
-def test_captured(capsys):
- msg = "I've been redirected to Python, I hope!"
- m.captured_output(msg)
- stdout, stderr = capsys.readouterr()
- assert stdout == msg
- assert stderr == ''
-
- m.captured_output_default(msg)
- stdout, stderr = capsys.readouterr()
- assert stdout == msg
- assert stderr == ''
-
- m.captured_err(msg)
- stdout, stderr = capsys.readouterr()
- assert stdout == ''
- assert stderr == msg
-
-
-def test_captured_large_string(capsys):
- # Make this bigger than the buffer used on the C++ side: 1024 chars
- msg = "I've been redirected to Python, I hope!"
- msg = msg * (1024 // len(msg) + 1)
-
- m.captured_output_default(msg)
- stdout, stderr = capsys.readouterr()
- assert stdout == msg
- assert stderr == ''
-
-
-def test_guard_capture(capsys):
- msg = "I've been redirected to Python, I hope!"
- m.guard_output(msg)
- stdout, stderr = capsys.readouterr()
- assert stdout == msg
- assert stderr == ''
-
-
-def test_series_captured(capture):
- with capture:
- m.captured_output("a")
- m.captured_output("b")
- assert capture == "ab"
-
-
-def test_flush(capfd):
- msg = "(not flushed)"
- msg2 = "(flushed)"
-
- with m.ostream_redirect():
- m.noisy_function(msg, flush=False)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
-
- m.noisy_function(msg2, flush=True)
- stdout, stderr = capfd.readouterr()
- assert stdout == msg + msg2
-
- m.noisy_function(msg, flush=False)
-
- stdout, stderr = capfd.readouterr()
- assert stdout == msg
-
-
-def test_not_captured(capfd):
- msg = "Something that should not show up in log"
- stream = StringIO()
- with redirect_stdout(stream):
- m.raw_output(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == msg
- assert stderr == ''
- assert stream.getvalue() == ''
-
- stream = StringIO()
- with redirect_stdout(stream):
- m.captured_output(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
- assert stderr == ''
- assert stream.getvalue() == msg
-
-
-def test_err(capfd):
- msg = "Something that should not show up in log"
- stream = StringIO()
- with redirect_stderr(stream):
- m.raw_err(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
- assert stderr == msg
- assert stream.getvalue() == ''
-
- stream = StringIO()
- with redirect_stderr(stream):
- m.captured_err(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
- assert stderr == ''
- assert stream.getvalue() == msg
-
-
-def test_multi_captured(capfd):
- stream = StringIO()
- with redirect_stdout(stream):
- m.captured_output("a")
- m.raw_output("b")
- m.captured_output("c")
- m.raw_output("d")
- stdout, stderr = capfd.readouterr()
- assert stdout == 'bd'
- assert stream.getvalue() == 'ac'
-
-
-def test_dual(capsys):
- m.captured_dual("a", "b")
- stdout, stderr = capsys.readouterr()
- assert stdout == "a"
- assert stderr == "b"
-
-
-def test_redirect(capfd):
- msg = "Should not be in log!"
- stream = StringIO()
- with redirect_stdout(stream):
- m.raw_output(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == msg
- assert stream.getvalue() == ''
-
- stream = StringIO()
- with redirect_stdout(stream):
- with m.ostream_redirect():
- m.raw_output(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
- assert stream.getvalue() == msg
-
- stream = StringIO()
- with redirect_stdout(stream):
- m.raw_output(msg)
- stdout, stderr = capfd.readouterr()
- assert stdout == msg
- assert stream.getvalue() == ''
-
-
-def test_redirect_err(capfd):
- msg = "StdOut"
- msg2 = "StdErr"
-
- stream = StringIO()
- with redirect_stderr(stream):
- with m.ostream_redirect(stdout=False):
- m.raw_output(msg)
- m.raw_err(msg2)
- stdout, stderr = capfd.readouterr()
- assert stdout == msg
- assert stderr == ''
- assert stream.getvalue() == msg2
-
-
-def test_redirect_both(capfd):
- msg = "StdOut"
- msg2 = "StdErr"
-
- stream = StringIO()
- stream2 = StringIO()
- with redirect_stdout(stream):
- with redirect_stderr(stream2):
- with m.ostream_redirect():
- m.raw_output(msg)
- m.raw_err(msg2)
- stdout, stderr = capfd.readouterr()
- assert stdout == ''
- assert stderr == ''
- assert stream.getvalue() == msg
- assert stream2.getvalue() == msg2
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/inner_product.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/inner_product.h
deleted file mode 100644
index bd6aec606c16e5eb4c5aa3276b7d374647b021cd..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/inner_product.h
+++ /dev/null
@@ -1,94 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-
-namespace cuda_cub {
-
-template <class Derived, class InputIt1, class InputIt2, class T, class ReduceOp, class ProductOp>
-T __host__ __device__
-inner_product(execution_policy<Derived> &policy,
- InputIt1 first1,
- InputIt1 last1,
- InputIt2 first2,
- T init,
- ReduceOp reduce_op,
- ProductOp product_op)
-{
- typedef typename iterator_traits<InputIt1>::difference_type size_type;
- size_type num_items = static_cast<size_type>(thrust::distance(first1, last1));
- typedef transform_pair_of_input_iterators_t<T, InputIt1, InputIt2, ProductOp>
- binop_iterator_t;
-
- return cuda_cub::reduce_n(policy,
- binop_iterator_t(first1, first2, product_op),
- num_items,
- init,
- reduce_op);
-}
-
-template <class Derived, class InputIt1, class InputIt2, class T>
-T __host__ __device__
-inner_product(execution_policy<Derived> &policy,
- InputIt1 first1,
- InputIt1 last1,
- InputIt2 first2,
- T init)
-{
- return cuda_cub::inner_product(policy,
- first1,
- last1,
- first2,
- init,
- plus<T>(),
- multiplies<T>());
-}
-
-} // namespace cuda_cub
-
-} // end namespace thrust
-#endif
diff --git a/spaces/melihunsal/demogpt/__init__.py b/spaces/melihunsal/demogpt/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/metrics.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/metrics.py
deleted file mode 100644
index 63026a71989441603df6abd447555524f2fd1e85..0000000000000000000000000000000000000000
--- a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/metrics.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# Permission is hereby granted, free of charge, to any person obtaining a
-# copy of this software and associated documentation files (the "Software"),
-# to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense,
-# and/or sell copies of the Software, and to permit persons to whom the
-# Software is furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
-# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-# DEALINGS IN THE SOFTWARE.
-#
-# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES
-# SPDX-License-Identifier: MIT
-
-from abc import ABC, abstractmethod
-
-import torch
-import torch.distributed as dist
-from torch import Tensor
-
-
-class Metric(ABC):
- """ Metric class with synchronization capabilities similar to TorchMetrics """
-
- def __init__(self):
- self.states = {}
-
- def add_state(self, name: str, default: Tensor):
- assert name not in self.states
- self.states[name] = default.clone()
- setattr(self, name, default)
-
- def synchronize(self):
- if dist.is_initialized():
- for state in self.states:
- dist.all_reduce(getattr(self, state), op=dist.ReduceOp.SUM, group=dist.group.WORLD)
-
- def __call__(self, *args, **kwargs):
- self.update(*args, **kwargs)
-
- def reset(self):
- for name, default in self.states.items():
- setattr(self, name, default.clone())
-
- def compute(self):
- self.synchronize()
- value = self._compute().item()
- self.reset()
- return value
-
- @abstractmethod
- def _compute(self):
- pass
-
- @abstractmethod
- def update(self, preds: Tensor, targets: Tensor):
- pass
-
-
-class MeanAbsoluteError(Metric):
- def __init__(self):
- super().__init__()
- self.add_state('error', torch.tensor(0, dtype=torch.float32, device='cuda'))
- self.add_state('total', torch.tensor(0, dtype=torch.int32, device='cuda'))
-
- def update(self, preds: Tensor, targets: Tensor):
- preds = preds.detach()
- n = preds.shape[0]
- error = torch.abs(preds.view(n, -1) - targets.view(n, -1)).sum()
- self.total += n
- self.error += error
-
- def _compute(self):
- return self.error / self.total
diff --git a/spaces/merve/dataset-worldviews/public/fill-in-the-blank/init-diff.js b/spaces/merve/dataset-worldviews/public/fill-in-the-blank/init-diff.js
deleted file mode 100644
index e0bb76f70a4d3ff6689b493236b5da93150746da..0000000000000000000000000000000000000000
--- a/spaces/merve/dataset-worldviews/public/fill-in-the-blank/init-diff.js
+++ /dev/null
@@ -1,525 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-window.initDiff = function(pair){
- var sel = d3.select('.' + pair.class).html('')
- .at({role: 'graphics-document', 'aria-label': pair.ariaLabel})
- .on('keydown', function(){
- sel.classed('changed', 1)
- if (d3.event.keyCode != 13) return
- d3.event.preventDefault()
-
- pair.str0 = ''
-
- updateChart()
- })
-
- if (!sel.node()) return
-
- var isMobile = innerWidth <= 1100
-
- var optionSel = sel.append('div.options')
- .classed('wide', !isMobile)
- .st({marginBottom: isMobile ? 20 : ''})
-
- var input0Sel = optionSel.append('div.flex-row').append('textarea.input-0')
- .st({marginBottom: 10})
- if (isMobile){
- input0Sel.on('change', updateChart)
- }
-
- input0Sel.node().value = pair.s0.replace('[MASK]', '_')
-
- var countSel = optionSel.append('div.option-tokens')
- .append('b').text('Number of Tokens')
- .parent()
- .append('div.flex-row')
- .appendMany('div.button', [30, 200, 1000, 5000, 99999])
- .text(d => d > 5000 ? 'All' : d)
- .st({width: 34, textAlign: 'center'})
- .on('click', d => {
- pair.count = d
- updateChart()
- })
-
- var typeSel = optionSel.append('div.option-type')
- .append('b').text('Chart Type')
- .parent()
- .append('div.flex-row')
- .appendMany('div.button', ['Likelihoods', 'Differences'])
- .text(d => d)
- .st({width: 116, textAlign: 'center'})
- .on('click', d => {
- pair.type = d
- updateChart()
- })
-
- var modelSel = optionSel.append('div.option-model')
- .st({display: 'none'})
- .append('b').text('Model')
- .parent()
- .append('div.flex-row')
- .appendMany('div.button', ['BERT', 'Zari'])
- .text(d => d)
- .st({width: 116, textAlign: 'center'})
- .on('click', d => {
- pair.model = d
- updateChart()
- })
-
- var updateSel = optionSel.append('div.button.update').on('click', updateChart)
- .text('Update')
- .st({display: isMobile ? 'none' : ''})
-
- var resetSel = optionSel.append('div.reset')
- .html('↻ Reset')
- .on('click', () => {
- pair = JSON.parse(pair.pairStr)
- pair.pairStr = JSON.stringify(pair)
- input0Sel.node().value = pair.s0
- updateChart(true)
- })
- .st({display: 'none'})
-
- if (pair.alts){
- d3.select('.' + pair.class + '-alts').html('')
- .classed('alt-block', 1).st({display: 'block'})
- .appendMany('span.p-button-link', pair.alts)
- .html(d => d.str)
- .on('click', d => {
- input0Sel.node().value = d.rawStr
-
- updateChart()
- })
- }
-
- var scatters = []
- var scatterSel = sel.append('div.pair-container-overflow').append('div.pair-container')
- .st({width: 940})
- .appendMany('div', 'p0 p1 c0 p2 p3 c1'.split(' '))
- .each(function(id){
- var c = d3.conventions({
- sel: d3.select(this).append('div.graph.diff').st({marginTop: -5}),
- height: 250,
- width: 250,
- margin: {bottom: 40, right: 60, top: 5, left: 0},
- layers: 'sdds',
- })
-
- var [type, i] = id.split('')
-
- if (type == 'p'){
- c.sel
- .st({pointer: 'cursor'})
- .on('click', () => {
- pair.colorByIndex = +i
- updateChart()
- })
- }
-
- var nTicks = 4
- var tickScale = d3.scaleLinear().range([0, c.width])
- c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1))
- .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`})
- c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1))
- .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`})
-
-
- c.type = type
- c.scatters = scatters
- c.scatter = window.initScatter(c)
- c.scatters.push(c.scatter)
-
-
- d3.select(this).datum({c, type, i})
- })
-
-
- updateChart(true)
-
-
- async function updateChart(isFirst){
- // warningSel.st({opacity: isFirst ? 0 : 1})
- // resetSel.st({opacity: isFirst ? 0 : 1})
- sel.classed('changed', 0)
-
- countSel.classed('active', d => d == pair.count)
- typeSel.classed('active', d => d == pair.type)
- modelSel.classed('active', d => d == pair.model)
-
- function getStr(sel){
- return sel.node().value.replace('_', '[MASK]')
- }
-
-
- pair.s0 = input0Sel.node().value.replace('_', '[MASK]')
- var str = pair.s0.replace('[MASK]', '{MASK}')
- var sentences = str.split('|').length == 2 ? getZariSenteces() : getTwoPairSentences()
-
- function getTwoPairSentences(){
- var start = str.split('[')[0]
- var mid = str.split(']')[1].split('[')[0]
- var last = str.split(']')[2]
-
- var pairA = str.split('[')[1].split(']')[0].split('|')
- var pairB = str.split('[')[2].split(']')[0].split('|')
-
- return [
- {i: 0, j: 0},
- {i: 0, j: 1},
- {i: 1, j: 0},
- {i: 1, j: 1},
- ].map(word => {
- var strA = pairA[word.i]
- var strB = pairB[word.j]
-
- var sentence = [start, strA, mid, strB, last]
- .join('')
- .replace('{MASK}', '[MASK]')
-
- var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed'
-
- return {word, strA, strB, sentence, modelPath}
- })
- }
-
- function getZariSenteces(){
- var start = str.split('[')[0]
- var last = str.split(']')[1]
- var pairB = str.split('[')[1].split(']')[0].split('|')
-
- return [
- {i: 0, j: 0},
- {i: 0, j: 1},
- {i: 1, j: 0},
- {i: 1, j: 1},
- ].map(word => {
- var strA = word.i ? 'Zari' : 'BERT'
- var strB = pairB[word.j]
-
- var sentence = [start, strB, last]
- .join('')
- .replace('{MASK}', '[MASK]')
-
- var modelPath = strA == 'Zari' ? 'embed_zari_cda' : 'embed'
-
- return {word, strA, strB, sentence, modelPath}
- })
- }
-
-
- updateSel.classed('loading', 1)
- // TODO parallel?
- for (var d of sentences){
- d.maskVals = await post(d.modelPath, {sentence: d.sentence})
- }
- updateSel.classed('loading', 0)
-
-
- var allTokens = sentences[0].maskVals.map((v0, i) => {
- var word = tokenizer.vocab[i]
- var v = sentences.map(d => d.maskVals[i])
-
- return {word, i, v, isVisible: false}
- })
-
- _.sortBy(allTokens, d => -d.v[0]).forEach((d, i) => d.v0i = i)
- _.sortBy(allTokens, d => -d.v[1]).forEach((d, i) => d.v1i = i)
- _.sortBy(allTokens, d => -d.v[2]).forEach((d, i) => d.v2i = i)
- _.sortBy(allTokens, d => -d.v[3]).forEach((d, i) => d.v3i = i)
-
- allTokens
- .filter(d =>
- d.v0i <= pair.count ||
- d.v1i <= pair.count ||
- d.v2i <= pair.count ||
- d.v3i <= pair.count
- )
- .forEach(d => {
- d.isTop = true
- d.isVisible = true
- })
-
- var pairs = [
- [0, 1],
- [2, 3],
-
- // [1, 2],
- // [3, 0],
-
- [0, 2],
- [1, 3],
-
- ].map((d, i) => {
- var sentA = sentences[d[0]]
- var sentB = sentences[d[1]]
-
- var allPairTokens = allTokens.map((t, i) => {
- return {word: t.word, v0: t.v[d[0]], i, v1: t.v[d[1]], t}
- })
-
- allPairTokens.forEach(d => {
- d.dif = d.v0 - d.v1
- d.meanV = (d.v0 + d.v1) / 2
- })
- var i0key = 'v' + d[0] + 'i'
- var i1key = 'v' + d[1] + 'i'
-
- // TODO should this be done per chart or globally?
- var topTokens = allPairTokens.filter(d => d.t.isTop)
- // var topTokens = allPairTokens.filter(d => d.t[i0key] <= pair.count || d.t[i1key] <= pair.count)
- var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1)))
-
- var tokens = allPairTokens
- .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1)
-
- var mag = logitExtent[1] - logitExtent[0]
- logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002]
-
- if (pair.type == 'Differences') tokens = _.sortBy(allPairTokens, d => -d.meanV).slice(0, pair.count)
-
- tokens.forEach(d => {
- d.isVisible = true
- })
-
- var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs))
- var color = palette(-maxDif*.5, maxDif*.5)
-
- label0 = sentA.strA + ' / ' + sentA.strB
- label1 = sentB.strA + ' / ' + sentB.strB
-
-
- return {i, sentA, sentB, allPairTokens, logitExtent, tokens, maxDif, color, label0, label1}
- })
-
- var compares = [[0, 1], [2, 3]].map((d, i) => {
- var pairA = pairs[d[0]]
- var pairB = pairs[d[1]]
-
- var allTokensA = pairA.allPairTokens
- var allTokensB = pairB.allPairTokens
-
- var allPairTokens = allTokens.map((t, i) => {
- return {word: t.word, t, difA: allTokensA[i].dif, meanA: allTokensA[i].meanV, difB: allTokensB[i].dif, meanB: allTokensB[i].meanV}
- })
-
- _.sortBy(allPairTokens, d => -d.meanA)
- .slice(0, pair.count)
- .forEach(d => d.isVisible = true)
-
- _.sortBy(allPairTokens, d => -d.meanB)
- .slice(0, pair.count)
- .forEach(d => d.isVisible = true)
-
- var tokens = allPairTokens.filter(d => d.isVisible)
-
- return {pairA, pairB, tokens, allPairTokens}
- })
-
- if (!pair.colorByIndex) pair.colorByIndex = 1
- var color = pairs[pair.colorByIndex].color
- pairs[pair.colorByIndex].allPairTokens.forEach(d => {
- d.t.color = color(d.dif)
- })
-
- scatterSel.each(function({c, i, type}){
- updatePairChart(c, type == 'p' ? pairs[i] : compares[i])
- })
- }
-
- function updatePairChart(c, p){
- var {logitExtent, tokens, maxDif, color} = p
- var allTokens = p.allPairTokens
-
- if (c.type == 'c'){
- drawDifDif()
- } else {
- if (pair.type == 'Likelihoods'){
- drawXY()
- } else{
- drawRotated()
- }
-
- sel.classed('is-xy', pair.type == 'Likelihoods')
- sel.classed('is-rotate', pair.type != 'Likelihoods')
- c.sel.classed('is-color-by', p.i == pair.colorByIndex)
- c.sel.classed('not-is-color-by', p.i != pair.colorByIndex)
- }
-
- function drawXY(){
- c.x.domain(logitExtent)
- c.y.domain(logitExtent)
-
- d3.drawAxis(c)
-
- var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2
- var scatterData = allTokens.map(d => {
- var x = c.x(d.v0)
- var y = c.y(d.v1)
- var fill = d.t.color
- var dif = d.dif
- var word = d.word
- var show = ''
- var isVisible = d.isVisible
-
- return {x, y, s, dif, fill, word, show, isVisible}
- })
-
-
- var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif)
- d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'uf')
- d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'lr')
-
- logitExtent.pair = pair
- c.scatter.draw(c, scatterData, true)
- c.svg.selectAppend('text.x-axis-label.xy-only')
- .translate([c.width/2, c.height + 24])
- .text(p.label0 + ' →')
- .at({fill: util.colors[0], textAnchor: 'middle'})
-
- c.svg.selectAppend('g.y-axis-label.xy-only')
- .translate([c.width + 20, c.height/2])
- .selectAppend('text')
- .text(p.label1 + ' →')
- .at({fill: util.colors[1], textAnchor: 'middle', transform: 'rotate(-90)'})
- }
-
- function drawRotated(){
- c.x.domain(d3.extent(tokens, d => d.meanV))
- c.y.domain([maxDif, -maxDif])
-
- d3.drawAxis(c)
-
- var scatterData = allTokens.map(d => {
- var x = c.x(d.meanV)
- var y = c.y(d.dif)
- var fill = d.t.color
- var word = d.word
- var show = ''
- var isVisible = d.isVisible
-
- return {x, y, s: 2, fill, word, show, isVisible}
- })
-
- scatterData.forEach(d => {
- d.dx = d.x - c.width/2
- d.dy = d.y - c.height/2
- })
-
- var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy)
- .filter(d => d.isVisible)
- .slice(0, 5000)
- d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy)))
- .map(d => d[0])
- .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r'))
-
- c.scatter.draw(c, scatterData, false)
- c.svg.selectAppend('text.rotate-only.x-axis-label')
- .translate([c.width/2, c.height + 24])
- .text(p.label0 + ' + ' + p.label1 + ' →')
- .at({textAnchor: 'middle'})
- .st({fill: '#000', fontWeight: 300})
-
- c.svg.select('g.rotate-only.sent-1').html('')
-
- c.svg.selectAppend('g.rotate-only.sent-1')
- .translate([c.width + 20, c.height/2])
- .append('text')
- .text(p.label1 + ' →')
- .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10})
- .st({fill: util.colors[1]})
-
- c.svg.selectAppend('g.rotate-only.sent-1')
- .translate([c.width + 20, c.height/2 + 0])
- .append('text')
- .text('← ' + p.label0)
- .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10})
- .st({fill: util.colors[0]})
- }
-
- function drawDifDif(){
- var maxDifA = d3.max(d3.extent(tokens, d => d.difA).map(Math.abs))
- var maxDifB = d3.max(d3.extent(tokens, d => d.difB).map(Math.abs))
- var maxDif = d3.max([maxDifA, maxDifB])
-
- c.x.domain([maxDif, -maxDif])
- c.y.domain([maxDif, -maxDif])
-
- d3.drawAxis(c)
-
- var scatterData = allTokens.map(d => {
- var x = c.x(d.difA)
- var y = c.y(d.difB)
- var fill = d.t.color
- var word = d.word
- var show = ''
- var isVisible = d.isVisible
- return {x, y, s: 2, fill, word, show, isVisible}
- })
-
- scatterData.forEach(d => {
- d.dx = d.x - c.width/2
- d.dy = d.y - c.height/2
- })
-
- var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.x - d.y)
- d3.nestBy(textCandidates, d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'uf')
- d3.nestBy(textCandidates.reverse(), d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'lr')
-
- c.scatter.draw(c, scatterData, true)
-
- var isColor = pair.colorByIndex == p.pairA.i
-
- var labelSel = c.svg.selectAppend('g.sent-0')
- .html('')
- .translate([c.width/2, c.height + 24])
-
- labelSel.append('text')
- .text(p.pairA.label1 + ' →')
- .at({textAnchor: 'start', x: 10})
- .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''})
-
- labelSel.append('text')
- .text('← ' + p.pairA.label0)
- .at({textAnchor: 'end', x: -10})
- .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''})
-
-
- var isColor = pair.colorByIndex == p.pairB.i
-
- var labelSel = c.svg.selectAppend('g.sent-1')
- .html('')
- .translate([c.width + 20, c.height/2])
-
- labelSel.append('text')
- .text(p.pairB.label1 + ' →')
- .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10})
- .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''})
-
- labelSel.append('text')
- .text('← ' + p.pairB.label0)
- .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10})
- .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''})
- }
-
- }
-}
-
-if (window.init) init()
diff --git a/spaces/merve/fill-in-the-blank/public/third_party/misc.js b/spaces/merve/fill-in-the-blank/public/third_party/misc.js
deleted file mode 100644
index a51b6b5292feaa6ee497806752a0d3d0cb4ef547..0000000000000000000000000000000000000000
--- a/spaces/merve/fill-in-the-blank/public/third_party/misc.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/* Copyright 2019 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-function lerp(a, b, t){ return a + t*(b - a) }
-
-function addVec([a0, a1], [b0, b1]){
- return [a0 + b0, a1 + b1]
-}
-
-function phyllotaxis(i, initialRadius=10, initialAngle=Math.PI*(3 - Math.sqrt(5))){
- i = i + Math.random()/20
-
- var r = initialRadius*Math.sqrt(Math.random() + i)
- var angle = i*initialAngle
-
- return [r*Math.cos(angle), r*Math.sin(angle)]
-}
-
-var names = {
- old_m: 'James John Robert Michael William David Richard Joseph Thomas Charles Christopher Daniel Matthew Anthony Donald Mark Paul Steven Andrew Kenneth Joshua George Kevin Brian Edward Ronald Timothy Jason Jeffrey Ryan Jacob Gary Nicholas Eric Stephen Jonathan Larry Justin Scott Brandon Frank Benjamin Gregory Samuel Raymond Patrick Alexander Jack Dennis Jerry Tyler Aaron Jose Henry Douglas Adam Peter Nathan Zachary Walter Kyle Harold Carl Jeremy Keith Roger Gerald Ethan Arthur Terry Christian Sean Lawrence Austin Joe Noah Jesse Albert Bryan Billy Bruce Willie Jordan Dylan Alan Ralph Gabriel Roy Juan Wayne Eugene Logan Randy Louis Russell Vincent Philip Bobby Johnny Bradley'.split(' '),
- old_f: 'Mary Patricia Jennifer Linda Elizabeth Barbara Susan Jessica Sarah Karen Nancy Margaret Lisa Betty Dorothy Sandra Ashley Kimberly Donna Emily Michelle Carol Amanda Melissa Deborah Stephanie Rebecca Laura Sharon Cynthia Kathleen Helen Amy Shirley Angela Anna Brenda Pamela Nicole Ruth Katherine Samantha Christine Emma Catherine Debra Virginia Rachel Carolyn Janet Maria Heather Diane Julie Joyce Victoria Kelly Christina Joan Evelyn Lauren Judith Olivia Frances Martha Cheryl Megan Andrea Hannah Jacqueline Ann Jean Alice Kathryn Gloria Teresa Doris Sara Janice Julia Marie Madison Grace Judy Theresa Beverly Denise Marilyn Amber Danielle Abigail Brittany Rose Diana Natalie Sophia Alexis Lori Kayla Jane'.split(' '),
- m: 'Noah Liam Jacob Mason William Ethan Michael Alexander James Elijah Daniel Benjamin Aiden Jayden Logan Matthew David Joseph Lucas Jackson Anthony Joshua Samuel Andrew Gabriel Christopher John Dylan Carter Isaac Ryan Luke Oliver Nathan Henry Owen Caleb Wyatt Christian Sebastian Jack Jonathan Landon Julian Isaiah Hunter Levi Aaron Eli Charles Thomas Connor Brayden Nicholas Jaxon Jeremiah Cameron Evan Adrian Jordan Gavin Grayson Angel Robert Tyler Josiah Austin Colton Brandon Jose Dominic Kevin Zachary Ian Chase Jason Adam Ayden Parker Hudson Cooper Nolan Lincoln Xavier Carson Jace Justin Easton Mateo Asher Bentley Blake Nathaniel Jaxson Leo Kayden Tristan Luis Elias Brody Bryson Juan Vincent Cole Micah Ryder Theodore Carlos Ezra Damian Miles Santiago Max Jesus Leonardo Sawyer Diego Alex Roman Maxwell Eric Greyson Hayden Giovanni Wesley Axel Camden Braxton Ivan Ashton Declan Bryce Timothy Antonio Silas Kaiden Ezekiel Jonah Weston George Harrison Steven Miguel Richard Bryan Kaleb Victor Aidan Jameson Joel Patrick Jaden Colin Everett Preston Maddox Edward Alejandro Kaden Jesse Emmanuel Kyle Brian Emmett Jude Marcus Kingston Kai Alan Malachi Grant Jeremy Riley Jayce Bennett Abel Ryker Caden Brantley Luca Brady Calvin Sean Oscar Jake Maverick Abraham Mark Tucker Nicolas Bradley Kenneth Avery Cayden King Paul Amir Gael Graham Maximus'.split(' '),
- f: 'Emma Sophia Olivia Isabella Ava Mia Abigail Emily Madison Charlotte Elizabeth Amelia Chloe Ella Evelyn Avery Sofia Harper Grace Addison Victoria Natalie Lily Aubrey Lillian Zoey Hannah Layla Brooklyn Samantha Zoe Leah Scarlett Riley Camila Savannah Anna Audrey Allison Aria Gabriella Hailey Claire Sarah Aaliyah Kaylee Nevaeh Penelope Alexa Arianna Stella Alexis Bella Nora Ellie Ariana Lucy Mila Peyton Genesis Alyssa Taylor Violet Maya Caroline Madelyn Skylar Serenity Ashley Brianna Kennedy Autumn Eleanor Kylie Sadie Paisley Julia Mackenzie Sophie Naomi Eva Khloe Katherine Gianna Melanie Aubree Piper Ruby Lydia Faith Madeline Alexandra Kayla Hazel Lauren Annabelle Jasmine Aurora Alice Makayla Sydney Bailey Luna Maria Reagan Morgan Isabelle Rylee Kimberly Andrea London Elena Jocelyn Natalia Trinity Eliana Vivian Cora Quinn Liliana Molly Jade Clara Valentina Mary Brielle Hadley Kinsley Willow Brooke Lilly Delilah Payton Mariah Paige Jordyn Nicole Mya Josephine Isabel Lyla Adeline Destiny Ivy Emilia Rachel Angelina Valeria Kendall Sara Ximena Isla Aliyah Reese Vanessa Juliana Mckenzie Amy Laila Adalynn Emery Margaret Eden Gabrielle Kaitlyn Ariel Gracie Brooklynn Melody Jessica Valerie Adalyn Adriana Elise Michelle Rebecca Daisy Everly Katelyn Ryleigh Catherine Norah Alaina Athena Leilani Londyn Eliza Jayla Summer Lila Makenzie Izabella Daniela Stephanie Julianna Rose Alana Harmony Jennifer Hayden'.split(' '),
- last: 'SMITH JOHNSON WILLIAMS BROWN JONES GARCIA MILLER DAVIS RODRIGUEZ MARTINEZ HERNANDEZ LOPEZ GONZALEZ WILSON ANDERSON THOMAS TAYLOR MOORE JACKSON MARTIN LEE PEREZ THOMPSON WHITE HARRIS SANCHEZ CLARK RAMIREZ LEWIS ROBINSON WALKER YOUNG ALLEN KING WRIGHT SCOTT TORRES NGUYEN HILL FLORES GREEN ADAMS NELSON BAKER HALL RIVERA CAMPBELL MITCHELL CARTER ROBERTS GOMEZ PHILLIPS EVANS TURNER DIAZ PARKER CRUZ EDWARDS COLLINS REYES STEWART MORRIS MORALES MURPHY COOK ROGERS GUTIERREZ ORTIZ MORGAN COOPER PETERSON BAILEY REED KELLY HOWARD RAMOS KIM COX WARD RICHARDSON WATSON BROOKS CHAVEZ WOOD JAMES BENNETT GRAY MENDOZA RUIZ HUGHES PRICE ALVAREZ CASTILLO SANDERS PATEL MYERS LONG ROSS FOSTER JIMENEZ POWELL JENKINS PERRY RUSSELL SULLIVAN BELL COLEMAN BUTLER HENDERSON BARNES GONZALES FISHER VASQUEZ SIMMONS ROMERO JORDAN PATTERSON ALEXANDER HAMILTON GRAHAM REYNOLDS GRIFFIN WALLACE MORENO WEST COLE HAYES BRYANT HERRERA GIBSON ELLIS TRAN MEDINA AGUILAR STEVENS MURRAY FORD CASTRO MARSHALL OWENS HARRISON FERNANDEZ MCDONALD WOODS WASHINGTON KENNEDY WELLS VARGAS HENRY CHEN FREEMAN WEBB TUCKER GUZMAN BURNS CRAWFORD OLSON SIMPSON PORTER HUNTER GORDON MENDEZ SILVA SHAW SNYDER MASON DIXON MUNOZ HUNT HICKS HOLMES PALMER WAGNER BLACK ROBERTSON BOYD ROSE STONE SALAZAR FOX WARREN MILLS MEYER RICE SCHMIDT GARZA DANIELS FERGUSON NICHOLS STEPHENS SOTO WEAVER RYAN'.split(' ').map(d => d[0] + d.slice(1).toLowerCase())
-}
diff --git a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/style.css b/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/style.css
deleted file mode 100644
index 8073cf0a59eac0be0e293b35af5255c40c063e21..0000000000000000000000000000000000000000
--- a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/style.css
+++ /dev/null
@@ -1,89 +0,0 @@
-svg{
- overflow: visible;
-}
-
-text{
- fill: #202124;
- user-select: none;
-}
-
-.domain{
- display: none;
-}
-
-.thresholds, .threshold > g{
- cursor: pointer;
-}
-
-svg{
- user-select: none;
-}
-
-text.axis-label .legend-text{
- font-family: 'Roboto';
- font-style: normal;
- font-size: 16px;
- line-height: 20px;
- /* identical to box height, or 125% */
-
- fill: #000;
-}
-
-.axis text{
- font-size: 10px;
-}
-
-text{
- text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;
-}
-
-
-
-
-.bucket text{
- /*text-shadow: 0 1px 0 #000, 1px 0 0 #000, 0 -1px 0 #000, -1px 0 0 #000;*/
- /*fill: #fff;*/
- font-size: 11px;
-}
-
-
-.big-text{
- font-variant-numeric: tabular-nums;
- font-size: 16px;
-}
-
-#card{
- display: flex;
- flex-direction: column;
- align-items: flex-start;
- padding: 24px 24px;
- gap: 6px;
-
- background: #EDF4EC;
- border: 1px solid #34A853;
- box-sizing: border-box;
- border-radius: 4px;
-}
-
-text.val-text{
- background: #DFE9E1;
- border: 1px solid #476C63;
- box-sizing: border-box;
- border-radius: 4px;
- fill: #2A4C4A;
- text-shadow: none;
-}
-
-.val-box{
- fill: #DFE9E1;
- stroke: #476C63;
- opacity: 1;
-}
-
-.legend-title{
- fill: #002622;
-}
-
-h3 {
- color: #00695C;
-}
\ No newline at end of file
diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/misc.py b/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/misc.py
deleted file mode 100644
index 50ae51c722cb1e553c56051cbd4556110fe4a1f9..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/misc.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
-#
-# This work is licensed under the Creative Commons Attribution-NonCommercial
-# 4.0 International License. To view a copy of this license, visit
-# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
-# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
-
-"""Miscellaneous utility functions."""
-
-import os
-import glob
-import pickle
-import re
-import numpy as np
-from collections import defaultdict
-import PIL.Image
-import dnnlib
-
-import config
-from training import dataset
-
-#----------------------------------------------------------------------------
-# Convenience wrappers for pickle that are able to load data produced by
-# older versions of the code, and from external URLs.
-
-def open_file_or_url(file_or_url):
- if dnnlib.util.is_url(file_or_url):
- return dnnlib.util.open_url(file_or_url, cache_dir=config.cache_dir)
- return open(file_or_url, 'rb')
-
-def load_pkl(file_or_url):
- with open_file_or_url(file_or_url) as file:
- return pickle.load(file, encoding='latin1')
-
-def save_pkl(obj, filename):
- with open(filename, 'wb') as file:
- pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL)
-
-#----------------------------------------------------------------------------
-# Image utils.
-
-def adjust_dynamic_range(data, drange_in, drange_out):
- if drange_in != drange_out:
- scale = (np.float32(drange_out[1]) - np.float32(drange_out[0])) / (np.float32(drange_in[1]) - np.float32(drange_in[0]))
- bias = (np.float32(drange_out[0]) - np.float32(drange_in[0]) * scale)
- data = data * scale + bias
- return data
-
-def create_image_grid(images, grid_size=None):
- assert images.ndim == 3 or images.ndim == 4
- num, img_w, img_h = images.shape[0], images.shape[-1], images.shape[-2]
-
- if grid_size is not None:
- grid_w, grid_h = tuple(grid_size)
- else:
- grid_w = max(int(np.ceil(np.sqrt(num))), 1)
- grid_h = max((num - 1) // grid_w + 1, 1)
-
- grid = np.zeros(list(images.shape[1:-2]) + [grid_h * img_h, grid_w * img_w], dtype=images.dtype)
- for idx in range(num):
- x = (idx % grid_w) * img_w
- y = (idx // grid_w) * img_h
- grid[..., y : y + img_h, x : x + img_w] = images[idx]
- return grid
-
-def convert_to_pil_image(image, drange=[0,1]):
- assert image.ndim == 2 or image.ndim == 3
- if image.ndim == 3:
- if image.shape[0] == 1:
- image = image[0] # grayscale CHW => HW
- else:
- image = image.transpose(1, 2, 0) # CHW -> HWC
-
- image = adjust_dynamic_range(image, drange, [0,255])
- image = np.rint(image).clip(0, 255).astype(np.uint8)
- fmt = 'RGB' if image.ndim == 3 else 'L'
- return PIL.Image.fromarray(image, fmt)
-
-def save_image(image, filename, drange=[0,1], quality=95):
- img = convert_to_pil_image(image, drange)
- if '.jpg' in filename:
- img.save(filename,"JPEG", quality=quality, optimize=True)
- else:
- img.save(filename)
-
-def save_image_grid(images, filename, drange=[0,1], grid_size=None):
- convert_to_pil_image(create_image_grid(images, grid_size), drange).save(filename)
-
-#----------------------------------------------------------------------------
-# Locating results.
-
-def locate_run_dir(run_id_or_run_dir):
- if isinstance(run_id_or_run_dir, str):
- if os.path.isdir(run_id_or_run_dir):
- return run_id_or_run_dir
- converted = dnnlib.submission.submit.convert_path(run_id_or_run_dir)
- if os.path.isdir(converted):
- return converted
-
- run_dir_pattern = re.compile('^0*%s-' % str(run_id_or_run_dir))
- for search_dir in ['']:
- full_search_dir = config.result_dir if search_dir == '' else os.path.normpath(os.path.join(config.result_dir, search_dir))
- run_dir = os.path.join(full_search_dir, str(run_id_or_run_dir))
- if os.path.isdir(run_dir):
- return run_dir
- run_dirs = sorted(glob.glob(os.path.join(full_search_dir, '*')))
- run_dirs = [run_dir for run_dir in run_dirs if run_dir_pattern.match(os.path.basename(run_dir))]
- run_dirs = [run_dir for run_dir in run_dirs if os.path.isdir(run_dir)]
- if len(run_dirs) == 1:
- return run_dirs[0]
- raise IOError('Cannot locate result subdir for run', run_id_or_run_dir)
-
-def list_network_pkls(run_id_or_run_dir, include_final=True):
- run_dir = locate_run_dir(run_id_or_run_dir)
- pkls = sorted(glob.glob(os.path.join(run_dir, 'network-*.pkl')))
- if len(pkls) >= 1 and os.path.basename(pkls[0]) == 'network-final.pkl':
- if include_final:
- pkls.append(pkls[0])
- del pkls[0]
- return pkls
-
-def locate_network_pkl(run_id_or_run_dir_or_network_pkl, snapshot_or_network_pkl=None):
- for candidate in [snapshot_or_network_pkl, run_id_or_run_dir_or_network_pkl]:
- if isinstance(candidate, str):
- if os.path.isfile(candidate):
- return candidate
- converted = dnnlib.submission.submit.convert_path(candidate)
- if os.path.isfile(converted):
- return converted
-
- pkls = list_network_pkls(run_id_or_run_dir_or_network_pkl)
- if len(pkls) >= 1 and snapshot_or_network_pkl is None:
- return pkls[-1]
-
- for pkl in pkls:
- try:
- name = os.path.splitext(os.path.basename(pkl))[0]
- number = int(name.split('-')[-1])
- if number == snapshot_or_network_pkl:
- return pkl
- except ValueError: pass
- except IndexError: pass
- raise IOError('Cannot locate network pkl for snapshot', snapshot_or_network_pkl)
-
-def get_id_string_for_network_pkl(network_pkl):
- p = network_pkl.replace('.pkl', '').replace('\\', '/').split('/')
- return '-'.join(p[max(len(p) - 2, 0):])
-
-#----------------------------------------------------------------------------
-# Loading data from previous training runs.
-
-def load_network_pkl(run_id_or_run_dir_or_network_pkl, snapshot_or_network_pkl=None):
- return load_pkl(locate_network_pkl(run_id_or_run_dir_or_network_pkl, snapshot_or_network_pkl))
-
-def parse_config_for_previous_run(run_id):
- run_dir = locate_run_dir(run_id)
-
- # Parse config.txt.
- cfg = defaultdict(dict)
- with open(os.path.join(run_dir, 'config.txt'), 'rt') as f:
- for line in f:
- line = re.sub(r"^{?\s*'(\w+)':\s*{(.*)(},|}})$", r"\1 = {\2}", line.strip())
- if line.startswith('dataset =') or line.startswith('train ='):
- exec(line, cfg, cfg) # pylint: disable=exec-used
-
- # Handle legacy options.
- if 'file_pattern' in cfg['dataset']:
- cfg['dataset']['tfrecord_dir'] = cfg['dataset'].pop('file_pattern').replace('-r??.tfrecords', '')
- if 'mirror_augment' in cfg['dataset']:
- cfg['train']['mirror_augment'] = cfg['dataset'].pop('mirror_augment')
- if 'max_labels' in cfg['dataset']:
- v = cfg['dataset'].pop('max_labels')
- if v is None: v = 0
- if v == 'all': v = 'full'
- cfg['dataset']['max_label_size'] = v
- if 'max_images' in cfg['dataset']:
- cfg['dataset'].pop('max_images')
- return cfg
-
-def load_dataset_for_previous_run(run_id, **kwargs): # => dataset_obj, mirror_augment
- cfg = parse_config_for_previous_run(run_id)
- cfg['dataset'].update(kwargs)
- dataset_obj = dataset.load_dataset(data_dir=config.data_dir, **cfg['dataset'])
- mirror_augment = cfg['train'].get('mirror_augment', False)
- return dataset_obj, mirror_augment
-
-def apply_mirror_augment(minibatch):
- mask = np.random.rand(minibatch.shape[0]) < 0.5
- minibatch = np.array(minibatch)
- minibatch[mask] = minibatch[mask, :, :, ::-1]
- return minibatch
-
-#----------------------------------------------------------------------------
-# Size and contents of the image snapshot grids that are exported
-# periodically during training.
-
-def setup_snapshot_image_grid(G, training_set,
- size = '1080p', # '1080p' = to be viewed on 1080p display, '4k' = to be viewed on 4k display.
- layout = 'random'): # 'random' = grid contents are selected randomly, 'row_per_class' = each row corresponds to one class label.
-
- # Select size.
- gw = 1; gh = 1
- if size == '1080p':
- gw = np.clip(1920 // G.output_shape[3], 3, 32)
- gh = np.clip(1080 // G.output_shape[2], 2, 32)
- if size == '4k':
- gw = np.clip(3840 // G.output_shape[3], 7, 32)
- gh = np.clip(2160 // G.output_shape[2], 4, 32)
-
- # Initialize data arrays.
- reals = np.zeros([gw * gh] + training_set.shape, dtype=training_set.dtype)
- labels = np.zeros([gw * gh, training_set.label_size], dtype=training_set.label_dtype)
- latents = np.random.randn(gw * gh, *G.input_shape[1:])
-
- # Random layout.
- if layout == 'random':
- reals[:], labels[:] = training_set.get_minibatch_np(gw * gh)
-
- # Class-conditional layouts.
- class_layouts = dict(row_per_class=[gw,1], col_per_class=[1,gh], class4x4=[4,4])
- if layout in class_layouts:
- bw, bh = class_layouts[layout]
- nw = (gw - 1) // bw + 1
- nh = (gh - 1) // bh + 1
- blocks = [[] for _i in range(nw * nh)]
- for _iter in range(1000000):
- real, label = training_set.get_minibatch_np(1)
- idx = np.argmax(label[0])
- while idx < len(blocks) and len(blocks[idx]) >= bw * bh:
- idx += training_set.label_size
- if idx < len(blocks):
- blocks[idx].append((real, label))
- if all(len(block) >= bw * bh for block in blocks):
- break
- for i, block in enumerate(blocks):
- for j, (real, label) in enumerate(block):
- x = (i % nw) * bw + j % bw
- y = (i // nw) * bh + j // bw
- if x < gw and y < gh:
- reals[x + y * gw] = real[0]
- labels[x + y * gw] = label[0]
-
- return (gw, gh), reals, labels, latents
-
-#----------------------------------------------------------------------------
diff --git a/spaces/mfrashad/CharacterGAN/netdissect/nethook.py b/spaces/mfrashad/CharacterGAN/netdissect/nethook.py
deleted file mode 100644
index f36e84ee0cae2de2c3be247498408cf66db3ee8f..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/CharacterGAN/netdissect/nethook.py
+++ /dev/null
@@ -1,266 +0,0 @@
-'''
-Utilities for instrumenting a torch model.
-
-InstrumentedModel will wrap a pytorch model and allow hooking
-arbitrary layers to monitor or modify their output directly.
-
-Modified by Erik Härkönen:
-- 29.11.2019: Unhooking bugfix
-- 25.01.2020: Offset edits, removed old API
-'''
-
-import torch, numpy, types
-from collections import OrderedDict
-
-class InstrumentedModel(torch.nn.Module):
- '''
- A wrapper for hooking, probing and intervening in pytorch Modules.
- Example usage:
-
- ```
- model = load_my_model()
- with InstrumentedModel(model) as inst:
- inst.retain_layer(layername)
- inst.edit_layer(layername, 0.5, target_features)
- inst.edit_layer(layername, offset=offset_tensor)
- inst(inputs)
- original_features = inst.retained_layer(layername)
- ```
- '''
-
- def __init__(self, model):
- super(InstrumentedModel, self).__init__()
- self.model = model
- self._retained = OrderedDict()
- self._ablation = {}
- self._replacement = {}
- self._offset = {}
- self._hooked_layer = {}
- self._old_forward = {}
-
- def __enter__(self):
- return self
-
- def __exit__(self, type, value, traceback):
- self.close()
-
- def forward(self, *inputs, **kwargs):
- return self.model(*inputs, **kwargs)
-
- def retain_layer(self, layername):
- '''
- Pass a fully-qualified layer name (E.g., module.submodule.conv3)
- to hook that layer and retain its output each time the model is run.
- A pair (layername, aka) can be provided, and the aka will be used
- as the key for the retained value instead of the layername.
- '''
- self.retain_layers([layername])
-
- def retain_layers(self, layernames):
- '''
- Retains a list of layers at once.
- '''
- self.add_hooks(layernames)
- for layername in layernames:
- aka = layername
- if not isinstance(aka, str):
- layername, aka = layername
- if aka not in self._retained:
- self._retained[aka] = None
-
- def retained_features(self):
- '''
- Returns a dict of all currently retained features.
- '''
- return OrderedDict(self._retained)
-
- def retained_layer(self, aka=None, clear=False):
- '''
- Retrieve retained data that was previously hooked by retain_layer.
- Call this after the model is run. If clear is set, then the
- retained value will return and also cleared.
- '''
- if aka is None:
- # Default to the first retained layer.
- aka = next(self._retained.keys().__iter__())
- result = self._retained[aka]
- if clear:
- self._retained[aka] = None
- return result
-
- def edit_layer(self, layername, ablation=None, replacement=None, offset=None):
- '''
- Pass a fully-qualified layer name (e.g., module.submodule.conv3)
- to hook that layer and modify its output each time the model is run.
- The output of the layer will be modified to be a convex combination
- of the replacement and x interpolated according to the ablation, i.e.:
- `output = x * (1 - a) + (r * a)`.
- Additionally or independently, an offset can be added to the output.
- '''
- if not isinstance(layername, str):
- layername, aka = layername
- else:
- aka = layername
-
- # The default ablation if a replacement is specified is 1.0.
- if ablation is None and replacement is not None:
- ablation = 1.0
- self.add_hooks([(layername, aka)])
- if ablation is not None:
- self._ablation[aka] = ablation
- if replacement is not None:
- self._replacement[aka] = replacement
- if offset is not None:
- self._offset[aka] = offset
- # If needed, could add an arbitrary postprocessing lambda here.
-
- def remove_edits(self, layername=None, remove_offset=True, remove_replacement=True):
- '''
- Removes edits at the specified layer, or removes edits at all layers
- if no layer name is specified.
- '''
- if layername is None:
- if remove_replacement:
- self._ablation.clear()
- self._replacement.clear()
- if remove_offset:
- self._offset.clear()
- return
-
- if not isinstance(layername, str):
- layername, aka = layername
- else:
- aka = layername
- if remove_replacement and aka in self._ablation:
- del self._ablation[aka]
- if remove_replacement and aka in self._replacement:
- del self._replacement[aka]
- if remove_offset and aka in self._offset:
- del self._offset[aka]
-
- def add_hooks(self, layernames):
- '''
- Sets up a set of layers to be hooked.
-
- Usually not called directly: use edit_layer or retain_layer instead.
- '''
- needed = set()
- aka_map = {}
- for name in layernames:
- aka = name
- if not isinstance(aka, str):
- name, aka = name
- if self._hooked_layer.get(aka, None) != name:
- aka_map[name] = aka
- needed.add(name)
- if not needed:
- return
- for name, layer in self.model.named_modules():
- if name in aka_map:
- needed.remove(name)
- aka = aka_map[name]
- self._hook_layer(layer, name, aka)
- for name in needed:
- raise ValueError('Layer %s not found in model' % name)
-
- def _hook_layer(self, layer, layername, aka):
- '''
- Internal method to replace a forward method with a closure that
- intercepts the call, and tracks the hook so that it can be reverted.
- '''
- if aka in self._hooked_layer:
- raise ValueError('Layer %s already hooked' % aka)
- if layername in self._old_forward:
- raise ValueError('Layer %s already hooked' % layername)
- self._hooked_layer[aka] = layername
- self._old_forward[layername] = (layer, aka,
- layer.__dict__.get('forward', None))
- editor = self
- original_forward = layer.forward
- def new_forward(self, *inputs, **kwargs):
- original_x = original_forward(*inputs, **kwargs)
- x = editor._postprocess_forward(original_x, aka)
- return x
- layer.forward = types.MethodType(new_forward, layer)
-
- def _unhook_layer(self, aka):
- '''
- Internal method to remove a hook, restoring the original forward method.
- '''
- if aka not in self._hooked_layer:
- return
- layername = self._hooked_layer[aka]
- layer, check, old_forward = self._old_forward[layername]
- assert check == aka
- if old_forward is None:
- if 'forward' in layer.__dict__:
- del layer.__dict__['forward']
- else:
- layer.forward = old_forward
- del self._old_forward[layername]
- del self._hooked_layer[aka]
- if aka in self._ablation:
- del self._ablation[aka]
- if aka in self._replacement:
- del self._replacement[aka]
- if aka in self._offset:
- del self._offset[aka]
- if aka in self._retained:
- del self._retained[aka]
-
- def _postprocess_forward(self, x, aka):
- '''
- The internal method called by the hooked layers after they are run.
- '''
- # Retain output before edits, if desired.
- if aka in self._retained:
- self._retained[aka] = x.detach()
-
- # Apply replacement edit
- a = make_matching_tensor(self._ablation, aka, x)
- if a is not None:
- x = x * (1 - a)
- v = make_matching_tensor(self._replacement, aka, x)
- if v is not None:
- x += (v * a)
-
- # Apply offset edit
- b = make_matching_tensor(self._offset, aka, x)
- if b is not None:
- x = x + b
-
- return x
-
- def close(self):
- '''
- Unhooks all hooked layers in the model.
- '''
- for aka in list(self._old_forward.keys()):
- self._unhook_layer(aka)
- assert len(self._old_forward) == 0
-
-
-def make_matching_tensor(valuedict, name, data):
- '''
- Converts `valuedict[name]` to be a tensor with the same dtype, device,
- and dimension count as `data`, and caches the converted tensor.
- '''
- v = valuedict.get(name, None)
- if v is None:
- return None
- if not isinstance(v, torch.Tensor):
- # Accept non-torch data.
- v = torch.from_numpy(numpy.array(v))
- valuedict[name] = v
- if not v.device == data.device or not v.dtype == data.dtype:
- # Ensure device and type matches.
- assert not v.requires_grad, '%s wrong device or type' % (name)
- v = v.to(device=data.device, dtype=data.dtype)
- valuedict[name] = v
- if len(v.shape) < len(data.shape):
- # Ensure dimensions are unsqueezed as needed.
- assert not v.requires_grad, '%s wrong dimensions' % (name)
- v = v.view((1,) + tuple(v.shape) +
- (1,) * (len(data.shape) - len(v.shape) - 1))
- valuedict[name] = v
- return v
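For reference, here is a minimal usage sketch of the `InstrumentedModel` wrapper defined in the file removed above. It assumes `nethook.py` is importable from the working directory and uses a toy `torch.nn.Sequential` model, so the layer name `'0'` is purely illustrative:

```python
import torch
from nethook import InstrumentedModel  # assumes nethook.py is on the import path

# Toy model: the named submodules of a Sequential are '0', '1', '2'.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

with InstrumentedModel(model) as inst:
    inst.retain_layer('0')                               # keep the first layer's output
    inst.edit_layer('0', offset=torch.full((8,), 0.5))   # add a constant offset to it
    out = inst(torch.randn(1, 4))                        # run the wrapped model
    feats = inst.retained_layer('0', clear=True)         # activations captured before the edit
print(out.shape, feats.shape)  # torch.Size([1, 2]) torch.Size([1, 8])
```

Note that the retained activation is stored before any edits are applied, matching the order in `_postprocess_forward` above.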
diff --git a/spaces/michellaneous/Baymax/README.md b/spaces/michellaneous/Baymax/README.md
deleted file mode 100644
index c3e46d253cc6cfa00132fc8bfab5fe2c9fa8a6ad..0000000000000000000000000000000000000000
--- a/spaces/michellaneous/Baymax/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Baymax
-emoji: 🐠
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mingyuan/ReMoDiffuse/tools/slurm_train.sh b/spaces/mingyuan/ReMoDiffuse/tools/slurm_train.sh
deleted file mode 100644
index 1ba4d37c15828d01f1e4090b79ba976c2647ac55..0000000000000000000000000000000000000000
--- a/spaces/mingyuan/ReMoDiffuse/tools/slurm_train.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) OpenMMLab. All rights reserved.
-
-set -x
-
-PARTITION=$1
-JOB_NAME=$2
-CONFIG=$3
-WORK_DIR=$4
-GPUS=$5
-GPUS_PER_NODE=$((${GPUS}<8?${GPUS}:8))
-CPUS_PER_TASK=${CPUS_PER_TASK:-2}
-SRUN_ARGS=${SRUN_ARGS:-""}
-PY_ARGS=${@:6}
-
-PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
-srun -p ${PARTITION} \
- --job-name=${JOB_NAME} \
- --gres=gpu:${GPUS_PER_NODE} \
- --ntasks=${GPUS} \
- --ntasks-per-node=${GPUS_PER_NODE} \
- --cpus-per-task=${CPUS_PER_TASK} \
- --kill-on-bad-exit=1 \
- -w SG-IDC2-10-51-5-49 \
- ${SRUN_ARGS} \
- python -u tools/train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS}
\ No newline at end of file
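For context, a hypothetical invocation of the removed SLURM launcher might look as follows; the partition name, config path, and work directory are placeholders, and the hard-coded `-w SG-IDC2-10-51-5-49` node constraint inside the script is specific to the original cluster and would likely need removing elsewhere:

```bash
# All names and paths below are placeholders; the last argument is the GPU count.
CPUS_PER_TASK=4 bash tools/slurm_train.sh my_partition remodiffuse \
    configs/remodiffuse_t2m.py work_dirs/remodiffuse 8
```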
diff --git a/spaces/monra/freegpt-webui/README.md b/spaces/monra/freegpt-webui/README.md
deleted file mode 100644
index 6d7f22622bf76f16bb2c88c137fc51fd608fa8db..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui/README.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-title: FreeGPT WebUI
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: docker
-sdk_version: 1.24.0
-app_file: run.py
-pinned: true
-app_port: 1338
----
-
-# FreeGPT WebUI
-## GPT 3.5/4
-
-NO API KEY REQUIRED ❌🔑
-
-This project features a WebUI utilizing the [G4F API](https://github.com/xtekky/gpt4free).
-Experience the power of ChatGPT with a user-friendly interface, enhanced jailbreaks, and completely free access.
-
-## Known bugs 🚧
-- Stream mode not working properly.
-
-## News 📢
-I have created a new version of FreeGPT WebUI using the [ChimeraGPT API](https://chimeragpt.adventblocks.cc/).
-
-
-This free API allows you to use various AI chat models, including GPT-4, GPT-4-32k, Claude-2, Claude-2-100k, and more.
-Check out the project here: [FreeGPT WebUI - Chimera Version](https://github.com/ramonvc/freegpt-webui/tree/chimeragpt-version).
-
-## Project Hosting and Demonstration 🌐🚀
-The project is hosted on multiple platforms to be tested and modified.
-|Platform|Status|API Key|Free|Repo|Demo|
-|--|--|--|--|--|--|
-|[replit](https://replit.com/)||◼️|☑️|[FreeGPT WebUI](https://replit.com/@ramonvc/freegpt-webui)|[Chat](https://freegpt-webui.ramonvc.repl.co/chat/)|
-|[hugging face](https://huggingface.co)||◼️|☑️|[FreeGPT WebUI](https://huggingface.co/spaces/monra/freegpt-webui/tree/main)|[Chat](https://huggingface.co/spaces/monra/freegpt-webui)|
-|[replit](https://replit.com/)||☑️|☑️|[FreeGPT WebUI - Chimera Version](https://replit.com/@ramonvc/freegpt-webui-chimera)|[Chat](https://freegpt-webui-chimera.ramonvc.repl.co/chat/)|
-
-## Note ℹ️
-
- FreeGPT is a project that utilizes various free AI conversation API Providers. Each Provider is an API that provides responses generated by different AI models. The source code related to these services is available in the G4F folder.
-
-It is important to note that, due to the extensive reach of this project, the free services registered here may receive a significant number of requests, which can result in temporary unavailability or access limitations. Therefore, it is common to encounter these services being offline or unstable.
-
-We recommend that you search for your own Providers and add them to your personal projects to avoid service instability and unavailability. Within the project, in the Providers folder, you will find several examples of Providers that have worked in the past or are still functioning. It is easy to follow the logic of these examples to find free GPT services and incorporate the requests into your specific FreeGPT project.
-
-Please note that the choice and integration of additional Providers are the user's responsibility and are not directly related to the FreeGPT project, as the project serves as an example of how to combine the G4F API with a web interface.
-
-
-## Table of Contents
-- [To-Do List](#to-do-list-%EF%B8%8F)
-- [Getting Started](#getting-started-white_check_mark)
- - [Cloning the Repository](#cloning-the-repository-inbox_tray)
- - [Install Dependencies](#install-dependencies-wrench)
-- [Running the Application](#running-the-application-rocket)
-- [Docker](#docker-)
- - [Prerequisites](#prerequisites)
- - [Running the Docker](#running-the-docker)
-- [Incorporated Projects](#incorporated-projects-busts_in_silhouette)
- - [WebUI](#webui)
- - [API FreeGPT](#api-g4f)
-- [Star History](#star-history)
-- [Legal Notice](#legal-notice)
-
-##
-
-## To-Do List ✔️
-
-- [x] Integrate the free GPT API into the WebUI
-- [x] Create Docker support
-- [x] Improve the Jailbreak functionality
-- [x] Add the GPT-4 model
-- [x] Enhance the user interface
-- [ ] Check status of API Providers (online/offline)
-- [ ] Enable editing and creating Jailbreaks/Roles in the WebUI
-- [ ] Refactor web client
-
-## Getting Started :white_check_mark:
-To get started with this project, you'll need to clone the repository and have [Python](https://www.python.org/downloads/) installed on your system.
-
-### Cloning the Repository :inbox_tray:
-Run the following command to clone the repository:
-
-```
-git clone https://github.com/ramonvc/freegpt-webui.git
-```
-
-### Install Dependencies :wrench:
-Navigate to the project directory:
-```
-cd freegpt-webui
-```
-
-Install the dependencies:
-```
-pip install -r requirements.txt
-```
-## Running the Application :rocket:
-To run the application, run the following command:
-```
-python run.py
-```
-
-Access the application in your browser using the URL:
-```
-http://127.0.0.1:1338
-```
-or
-```
-http://localhost:1338
-```
-
-
-## Docker 🐳
-### Prerequisites
-Before you start, make sure you have installed [Docker](https://www.docker.com/get-started) on your machine.
-
-### Running the Docker
-Pull the Docker image from Docker Hub:
-```
-docker pull ramonvc/freegpt-webui
-```
-
-Run the application using Docker:
-```
-docker run -p 1338:1338 ramonvc/freegpt-webui
-```
-
-Access the application in your browser using the URL:
-```
-http://127.0.0.1:1338
-```
-or
-```
-http://localhost:1338
-```
-
-When you're done using the application, stop the Docker container using the following command (you can find the container ID with `docker ps`):
-```
-docker stop <container_id>
-```
-
-## Incorporated Projects :busts_in_silhouette:
-I highly recommend visiting and supporting both projects.
-
-### WebUI
-The application interface was incorporated from the [chatgpt-clone](https://github.com/xtekky/chatgpt-clone) repository.
-
-### API G4F
-The free GPT-4 API was incorporated from the [GPT4Free](https://github.com/xtekky/gpt4free) repository.
-
-
-
-## Star History
-[](https://star-history.com/#ramonvc/freegpt-webui&Timeline)
-
-
-
-## Legal Notice
-This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This
-project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to
-improve their security or request the removal of their site from this repository.
-
-Please note the following:
-
-1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners.
- This project is _not_ claiming any right over them nor is it affiliated with or endorsed by any of the providers
- mentioned.
-
-2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses
- arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely
- responsible for their actions and any repercussions that may follow. We strongly recommend that users follow the
- Terms of Service (TOS) of each website.
-
-3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By
- using the information and code provided, users acknowledge that they are using the APIs and models at their own risk
- and agree to comply with any applicable laws and regulations.
-
-4. **Copyright**: All content in this repository, including but not limited to code, images, and documentation, is the
- intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use
- of any content in this repository is strictly prohibited without the express written consent of the repository
- author.
-
-5. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and
- against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of
- or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.
-
-6. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or
- features in this repository at any time without prior notice. Users are responsible for regularly reviewing the
- content and any changes made to this repository.
-
-By using this repository or any code related to it, you agree to these terms. The author is not responsible for any
-copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent
-impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/stories/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/stories/README.md
deleted file mode 100644
index 588941eddc5f0280f5254affd40ef49de874c885..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/stories/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Hierarchical Neural Story Generation (Fan et al., 2018)
-
-The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset.
-
-## Pre-trained models
-
-Description | Dataset | Model | Test set(s)
----|---|---|---
-Stories with Convolutional Model ([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2)
-
-We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are `unk` tokens in the files, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these `unk` prompts for human evaluation.
-
-## Dataset
-
-The dataset can be downloaded like this:
-
-```bash
-cd examples/stories
-curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf -
-```
-
-and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token.
-
-## Example usage
-
-First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. Here is example code that trims the dataset to the first 1000 words of each story:
-```python
-data = ["train", "test", "valid"]
-for name in data:
- with open(name + ".wp_target") as f:
- stories = f.readlines()
- stories = [" ".join(i.split()[0:1000]) for i in stories]
- with open(name + ".wp_target", "w") as o:
- for line in stories:
- o.write(line.strip() + "\n")
-```
-
-Once we've trimmed the data we can binarize it and train our model:
-```bash
-# Binarize the dataset:
-export TEXT=examples/stories/writingPrompts
-fairseq-preprocess --source-lang wp_source --target-lang wp_target \
- --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
- --destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10
-
-# Train the model:
-fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False
-
-# Train a fusion model:
-# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint
-
-# Generate:
-# Note: to load the pretrained model at generation time, you need to pass in a model-override argument to communicate to the fusion model at generation time where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary.
-
-fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}"
-```
-
-## Citation
-```bibtex
-@inproceedings{fan2018hierarchical,
- title = {Hierarchical Neural Story Generation},
- author = {Fan, Angela and Lewis, Mike and Dauphin, Yann},
- booktitle = {Conference of the Association for Computational Linguistics (ACL)},
- year = 2018,
-}
-```
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py
deleted file mode 100644
index d6a40e4d359bdcae6d64f53ba06d8a533aec01ac..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import torch
-import numpy as np
-import warnings
-
-
-def get_target_sequences(manifest, ground_truth, to_take=1000):
- import json
- import pathlib
-
- with open(ground_truth, 'r') as fin:
- original_continuations = json.loads(fin.read())
-
- sequence2length = [(k, v[0]) for k, v in original_continuations.items()]
- assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds
-
- sequence2length.sort(key=lambda x: x[1])
- to_take_sequences = set(v[0] for v in sequence2length[:to_take])
- to_take_ids = []
-
- with open(manifest, 'r') as f:
- f.readline()
-
- for i, line in enumerate(f.readlines()):
- seq_id = line.split()[0]
- seq_id = pathlib.Path(seq_id).name.split('__')[0]
-
- if seq_id in to_take_sequences:
- to_take_ids.append(i)
-
- print(f'Took {len(to_take_ids)} ids')
- return set(to_take_ids)
-
-
-def get_args():
- import argparse
-
- parser = argparse.ArgumentParser("Evaluate PPX metric of a transcript.")
- parser.add_argument('--asr-transcript', type=str,
- help='Path to the transcript file.')
- parser.add_argument('--cut-id', action='store_true',
- help='Whether to cut the first token (typically a seq id)')
- parser.add_argument('--cut-tail', action='store_true',
- help='Whether to cut the last token (typically a speaker id)')
-
- parser.add_argument('--manifest', type=str, default=None)
- parser.add_argument('--prompts-description', type=str, default=None)
-
- args = parser.parse_args()
-
- return args
-
-
-def main():
- args = get_args()
-
- lm = torch.hub.load(
- 'pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-
- lm.eval().cuda() # disable dropout
-
- if args.manifest is None and args.prompts_description is None:
- target_ids = None
- else:
- target_ids = get_target_sequences(
- args.manifest, args.prompts_description)
-
- with open(args.asr_transcript, 'r') as fin:
- lines = fin.readlines()
-
- if target_ids is not None:
- filtered = []
- for line in lines:
- line_id = line.split()[-1]
- line_id = int(line_id.split('-')[1][:-1])
- if line_id in target_ids:
- filtered.append(line)
- lines = filtered
- else:
- pass
-
- if args.cut_id:
- lines = [' '.join(x.split()[1:]) for x in lines]
- if args.cut_tail:
- lines = [' '.join(x.split()[:-1]) for x in lines]
- lines = [x.strip().lower() for x in lines]
-
- def get_logprob(sent): return \
- lm.score(sent)['positional_scores'].mean().neg().item()
-
- logprobs = [get_logprob(l) for l in lines]
-
- filtered = [x for x in logprobs if not np.isnan(x)]
- if len(filtered) != len(logprobs):
- warnings.warn("NaNs detected!")
- logprobs = filtered
-
- perplexities = [np.exp(l) for l in logprobs]
-
- for name, stats in [('logprob', logprobs), ('perplexity', perplexities)]:
- mean = np.mean(stats)
- sem = np.std(stats) / np.sqrt(len(stats))
-
- median = np.median(stats)
- interval = list(np.percentile(stats, [10, 90]))
-
- mean, sem, median, percentile10, percentile90 = [
- round(x, 2) for x in [mean, sem, median] + interval]
-
- print(name)
- print(f"\tMean {mean} +- {sem}")
- print(
- f"\tMedian {median}, 90% confidence interval {percentile10}...{percentile90}")
-
-
-if __name__ == '__main__':
- main()
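A hedged usage sketch for the removed script, based only on the argparse flags it defines; the transcript path is a placeholder, and a CUDA-capable GPU is required because the language model is moved to CUDA:

```bash
# Placeholder path; --manifest and --prompts-description are optional filters.
python ppx.py --asr-transcript decoded_transcripts.txt --cut-id --cut-tail
```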
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fp16_optimizer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fp16_optimizer.py
deleted file mode 100644
index b9ca9270bea067ad41881febf84a7da6e6b9049e..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/fp16_optimizer.py
+++ /dev/null
@@ -1,548 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import defaultdict
-from itertools import chain
-
-import torch
-from fairseq import optim
-from omegaconf import DictConfig
-
-from .dynamic_loss_scaler import DynamicLossScaler
-
-
-class _FP16OptimizerMixin(object):
- def __init__(self, *args, **kwargs):
- # forward __init__ call to the next class in MRO (method resolution order)
- super().__init__(*args, **kwargs)
- self._multiply_factor = 1.0
-
- @property
- def has_flat_params(self):
- return torch.is_tensor(self.fp32_params) or (
- isinstance(self.fp32_params, dict)
- and all(torch.is_tensor(t) for t in self.fp32_params.values())
- )
-
- @classmethod
- def build_fp32_params(cls, args, params, flatten=True):
- # create FP32 copy of parameters and grads
- if flatten:
- is_pipeline_parallel = getattr(
- args, "pipeline_model_parallel", False
- ) and getattr(args, "distributed_no_spawn", False)
- total_param_size = sum(p.data.numel() for p in params)
- devices = [torch.cuda.current_device()]
- if is_pipeline_parallel:
- devices = list(set(args.pipeline_devices))
- fp32_params = {}
- for device in devices:
- if is_pipeline_parallel:
- device_param_size = sum(
- p.data.numel() for p in params if p.device.index == device
- )
- device_params = [p for p in params if p.device.index == device]
- else:
- device_param_size = total_param_size
- device_params = params
- fp32_params[device] = (
- device_params[0].new(0).float().new(device_param_size)
- )
- offset = 0
- for p in device_params:
- numel = p.data.numel()
- fp32_params[device][offset : offset + numel].copy_(p.data.reshape(-1))
- offset += numel
- fp32_params[device] = torch.nn.Parameter(fp32_params[device])
- fp32_params[device].grad = fp32_params[device].data.new(
- device_param_size
- )
- return fp32_params
- else:
- fp32_params = []
- for p in params:
- p32 = torch.nn.Parameter(p.data.float())
- if hasattr(p, 'expert'):
- p32.expert = True
- elif hasattr(p, 'base_expert'):
- p32.base_expert = True
- p32.grad = torch.zeros_like(p32.data)
- if hasattr(p, "param_group"):
- p32.param_group = p.param_group
- fp32_params.append(p32)
- return fp32_params
-
- def state_dict(self):
- """Return the optimizer's state dict."""
- state_dict = self.fp32_optimizer.state_dict()
- if self.scaler is not None:
- state_dict["loss_scale"] = self.scaler.loss_scale
- return state_dict
-
- def load_state_dict(self, state_dict, optimizer_overrides=None):
- """Load an optimizer state dict.
-
- In general we should prefer the configuration of the existing optimizer
- instance (e.g., learning rate) over that found in the state_dict. This
- allows us to resume training from a checkpoint using a new set of
- optimizer args.
- """
- if "loss_scale" in state_dict and self.scaler is not None:
- self.scaler.loss_scale = state_dict["loss_scale"]
- self.fp32_optimizer.load_state_dict(state_dict, optimizer_overrides)
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- if self.scaler is not None:
- loss = self.scaler.scale(loss)
- loss.backward()
- self._needs_sync = True
-
- def _sync_fp16_grads_to_fp32(self):
- if self._needs_sync:
- # copy FP16 grads to FP32
- if self.has_flat_params:
- devices = list(self.fp32_params.keys())
- device_params_dict = defaultdict(list)
- for p in self.fp16_params:
- if p.requires_grad:
- device_params_dict[p.device.index].append(p)
- for device in devices:
- device_params = device_params_dict[device]
- offset = 0
- for p in device_params:
- grad_data = (
- p.grad.data
- if p.grad is not None
- else p.data.new_zeros(p.data.shape)
- )
- numel = grad_data.numel()
- self.fp32_params[device].grad.data[
- offset : offset + numel
- ].copy_(grad_data.reshape(-1))
- offset += numel
- else:
- for p, p32 in zip(self.fp16_params, self.fp32_params):
- if not p.requires_grad:
- continue
- if p.grad is not None:
- if p32.grad is None:
- p32.grad = p.grad.data.float()
- else:
- p32.grad.data.copy_(p.grad.data)
- else:
- p32.grad = torch.zeros_like(p.data, dtype=torch.float)
-
- self._needs_sync = False
-
- def _sync_fp32_params_to_fp16(self):
- # copy FP32 params back into FP16 model
- if self.has_flat_params:
- devices = list(self.fp32_params.keys())
- device_params_dict = defaultdict(list)
- for p in self.fp16_params:
- device_params_dict[p.device.index].append(p)
- for device in devices:
- device_params = device_params_dict[device]
- offset = 0
- for p in device_params:
- numel = p.data.numel()
- p.data.copy_(
- self.fp32_params[device]
- .data[offset : offset + numel]
- .view_as(p.data)
- )
- offset += numel
- else:
- for p, p32 in zip(self.fp16_params, self.fp32_params):
- if not p.requires_grad:
- continue
- p.data.copy_(p32.data)
-
- def _unscale_grads(self):
- self._sync_fp16_grads_to_fp32()
- if (
- # Skip the multiplication if it's a no-op (i.e., if _multiply_factor
- # is 1.0). At the same time, we want to avoid the device-to-host
- # transfer by comparing it to 1.0. Since _multiply_factor starts as
- # a Python float, we roughly assume that if it's a tensor then it's
- # probably not =1.0 anymore and we do the multiplication. Otherwise
- # we can safely check the value without a D2H transfer.
- torch.is_tensor(self._multiply_factor)
- or self._multiply_factor != 1.0
- ):
- self.fp32_optimizer.multiply_grads(self._multiply_factor)
- self._multiply_factor = 1.0
-
- def multiply_grads(self, c):
- """Multiplies grads by a constant ``c``."""
- self._multiply_factor *= c
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm and updates dynamic loss scaler."""
- self._sync_fp16_grads_to_fp32()
-
- grad_norm = self._multiply_factor * self.fp32_optimizer.clip_grad_norm(
- 0, aggregate_norm_fn
- )
-
- if self.scaler is not None:
- if grad_norm > max_norm > 0.0:
- self._multiply_factor *= max_norm / grad_norm
-
- self.scaler.check_overflow(grad_norm)
- elif max_norm > 0.0:
- clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1)
- self._multiply_factor *= clip_coef
-
- return grad_norm
-
- def step(self, closure=None, groups=None):
- """Performs a single optimization step."""
- self._sync_fp16_grads_to_fp32()
-
- if getattr(self, "supports_step_with_scale", False):
- self.fp32_optimizer.step(closure, scale=(1.0 / self._multiply_factor), groups=groups)
- else:
- self._unscale_grads()
- self.fp32_optimizer.step(closure, groups=groups)
-
- if self.scaler is not None:
- self.scaler.update()
-
- self._sync_fp32_params_to_fp16()
-
- def zero_grad(self):
- """Clears the gradients of all optimized parameters."""
- for p in self.fp16_params:
- p.grad = None
- if self.has_flat_params:
- if torch.is_tensor(self.fp32_params):
- self.fp32_params.grad.zero_()
- elif isinstance(self.fp32_params, dict):
- for fp32_params in self.fp32_params.values():
- fp32_params.grad.zero_()
- else:
- raise RuntimeError("self.fp32_params must be a tensor or dict")
- else:
- for p32 in self.fp32_params:
- if p32.grad is not None:
- p32.grad.zero_()
- self._needs_sync = False
-
- if self.scaler is not None:
- self._multiply_factor = 1.0 / float(self.scaler.loss_scale)
-
-
-class FP16Optimizer(_FP16OptimizerMixin, optim.FairseqOptimizer):
- """
- Wrap an *optimizer* to support FP16 (mixed precision) training.
- """
-
- def __init__(self, cfg: DictConfig, params, fp32_optimizer, fp32_params, **kwargs):
- super().__init__(cfg.optimizer)
- self.fp16_params = params
- self.fp32_optimizer = fp32_optimizer
- self.fp32_params = fp32_params
-
- if getattr(cfg.common, "fp16_scale_window", None) is None:
- if len(cfg.optimization.update_freq) > 1:
- raise ValueError(
- "--fp16-scale-window must be given explicitly when using a "
- "custom --update-freq schedule"
- )
- data_parallel_size = int(
- cfg.distributed_training.distributed_world_size
- / cfg.common.model_parallel_size
- )
- scale_window = int(
- 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0]
- )
- else:
- scale_window = cfg.common.fp16_scale_window
-
- if not getattr(cfg.common, "bf16", False):
- self.scaler = DynamicLossScaler(
- init_scale=cfg.common.fp16_init_scale,
- scale_window=scale_window,
- tolerance=cfg.common.fp16_scale_tolerance,
- threshold=cfg.common.threshold_loss_scale,
- min_loss_scale=cfg.common.min_loss_scale,
- )
- else:
- # disable loss scaling for bfloat16
- self.scaler = None
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
- cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- flatten = not getattr(cfg.common, "fp16_no_flatten_grads", False)
- if getattr(cfg.common, "bf16", False):
- flatten = False # mixed precision is faster on TPUs without flat grads
- fp32_params = cls.build_fp32_params(cfg.optimizer, params, flatten=flatten)
- if flatten:
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, [fp32_params])
- else:
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, fp32_params)
- if flatten and not fp32_optimizer.supports_flat_params:
- raise RuntimeError(
- f"chosen optimizer {fp32_optimizer.__class__.__name__} does not support flat params, please set --fp16-no-flatten-grads"
- )
- return cls(cfg, params, fp32_optimizer, fp32_params, **kwargs)
-
- @property
- def optimizer(self):
- return self.fp32_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.fp32_optimizer.optimizer = optimizer
-
- @property
- def lr_scheduler(self):
- return getattr(self.fp32_optimizer, "lr_scheduler", None)
-
- @property
- def optimizer_config(self):
- return self.fp32_optimizer.optimizer_config
-
- def get_lr(self):
- return self.fp32_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.fp32_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.fp32_optimizer.all_reduce_grads(module)
-
- @property
- def supports_flat_params(self):
- return self.fp32_optimizer.supports_flat_params
-
-
-class _MemoryEfficientFP16OptimizerMixin(object):
- def __init__(self, *args, **kwargs):
- # forward __init__ call to the next class in MRO (method resolution order)
- super().__init__(*args, **kwargs)
- self._multiply_factor = 1.0
-
- @property
- def has_flat_params(self):
- return False
-
- def state_dict(self):
- """Return the optimizer's state dict."""
- state_dict = self.wrapped_optimizer.state_dict()
- if self.scaler is not None:
- state_dict["loss_scale"] = self.scaler.loss_scale
- return state_dict
-
- def load_state_dict(self, state_dict, optimizer_overrides=None):
- """Load an optimizer state dict.
-
- In general we should prefer the configuration of the existing optimizer
- instance (e.g., learning rate) over that found in the state_dict. This
- allows us to resume training from a checkpoint using a new set of
- optimizer args.
- """
- if "loss_scale" in state_dict and self.scaler is not None:
- self.scaler.loss_scale = state_dict["loss_scale"]
-
- self.wrapped_optimizer.load_state_dict(state_dict, optimizer_overrides)
-
- # Hack: PyTorch automatically casts the optimizer state to match the
- # type of the current parameters. But with --memory-efficient-fp16 the
- # params are FP16 while the optimizer state is FP32 and we don't want
- # to cast. A workaround is to manually copy back the original state
- # after the optimizer has been loaded.
- if not getattr(self.optimizer, "disable_mem_eff_fp16_loading_hack", False):
- groups = self.optimizer.param_groups
- saved_groups = state_dict["param_groups"]
- id_map = {
- old_id: p
- for old_id, p in zip(
- chain(*(g["params"] for g in saved_groups)),
- chain(*(g["params"] for g in groups)),
- )
- }
- for k, v in state_dict["state"].items():
- if k in id_map:
- param = id_map[k]
- self.optimizer.state[param] = v
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- if self.scaler is not None:
- loss = self.scaler.scale(loss)
- loss.backward()
-
- def _unscale_grads(self):
- if (
- # Skip the multiplication if it's a no-op (i.e., if _multiply_factor
- # is 1.0). At the same time, we want to avoid the device-to-host
- # transfer by comparing it to 1.0. Since _multiply_factor starts as
- # a Python float, we roughly assume that if it's a tensor then it's
- # probably not =1.0 anymore and we do the multiplication. Otherwise
- # we can safely check the value without a D2H transfer.
- torch.is_tensor(self._multiply_factor)
- or self._multiply_factor != 1.0
- ):
- self.wrapped_optimizer.multiply_grads(self._multiply_factor)
- self._multiply_factor = 1.0
-
- def multiply_grads(self, c):
- """Multiplies grads by a constant *c*."""
- self._multiply_factor *= c
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm and updates dynamic loss scaler."""
- max_norm = float(max_norm)
- grad_norm = self._multiply_factor * self.wrapped_optimizer.clip_grad_norm(
- 0, aggregate_norm_fn
- )
-
- if self.scaler is not None:
- grad_norm_cpu = float(grad_norm)
- if grad_norm_cpu > max_norm > 0.0:
- self._multiply_factor *= max_norm / grad_norm_cpu
-
- # detect overflow and adjust loss scale
- self.scaler.check_overflow(grad_norm_cpu)
- elif max_norm > 0.0:
- clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1)
- self._multiply_factor *= clip_coef
-
- return grad_norm
-
- def step(self, closure=None, groups=None):
- """Performs a single optimization step."""
- if getattr(self, "supports_step_with_scale", False):
- # NOTE(msb) optimizer divides by scale factor
- self.wrapped_optimizer.step(closure, scale=(1.0 / self._multiply_factor), groups=groups)
- else:
- self._unscale_grads()
- self.wrapped_optimizer.step(closure, groups=groups)
-
- if self.scaler is not None:
- self.scaler.update()
-
- def zero_grad(self):
- """Clears the gradients of all optimized parameters."""
- self.wrapped_optimizer.zero_grad()
- if self.scaler is not None:
- self._multiply_factor = 1.0 / float(self.scaler.loss_scale)
- else:
- self._multiply_factor = 1.0
-
- @property
- def supports_flat_params(self):
- return self.wrapped_optimizer.supports_flat_params
-
-
-class MemoryEfficientFP16Optimizer(
- _MemoryEfficientFP16OptimizerMixin, optim.FairseqOptimizer
-):
- """
- Wrap an *optimizer* to support FP16 (mixed precision) training.
-
- Compared to :class:`fairseq.optim.FP16Optimizer`, this version does not
- maintain an FP32 copy of the model. We instead expect the optimizer to
- convert the gradients to FP32 internally and sync the results back to the
- FP16 model params. This significantly reduces memory usage but slightly
- increases the time spent in the optimizer.
-
- Since this wrapper depends on specific functionality in the wrapped
- optimizer (i.e., on-the-fly conversion of grads to FP32), only certain
- optimizers can be wrapped. This is determined by the
- *supports_memory_efficient_fp16* property.
- """
-
- def __init__(
- self, cfg: DictConfig, params, optimizer, allow_unsupported=False, **kwargs
- ):
- if not allow_unsupported and not optimizer.supports_memory_efficient_fp16:
- raise ValueError(
- "Unsupported optimizer: {}".format(optimizer.__class__.__name__)
- )
-
- super().__init__(getattr(cfg, "optimizer", None))
- self.wrapped_optimizer = optimizer
-
- if getattr(cfg.common, "fp16_scale_window", None) is None:
- if len(cfg.optimization.update_freq) > 1:
- raise ValueError(
- "--fp16-scale-window must be given explicitly when using a "
- "custom --update-freq schedule"
- )
- data_parallel_size = int(
- cfg.distributed_training.distributed_world_size
- / cfg.common.model_parallel_size
- )
- scale_window = int(
- 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0]
- )
- else:
- scale_window = cfg.common.fp16_scale_window
-
- if not getattr(cfg.common, "bf16", False):
- self.scaler = DynamicLossScaler(
- init_scale=cfg.common.fp16_init_scale,
- scale_window=scale_window,
- tolerance=cfg.common.fp16_scale_tolerance,
- threshold=cfg.common.threshold_loss_scale,
- min_loss_scale=cfg.common.min_loss_scale,
- )
- else:
- # disable loss scaling for bfloat16
- self.scaler = None
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
- cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- fp16_optimizer = optim.build_optimizer(cfg.optimizer, params)
- return cls(cfg, params, fp16_optimizer, **kwargs)
-
- @property
- def optimizer(self):
- return self.wrapped_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.wrapped_optimizer.optimizer = optimizer
-
- @property
- def optimizer_config(self):
- return self.wrapped_optimizer.optimizer_config
-
- @property
- def lr_scheduler(self):
- return getattr(self.wrapped_optimizer, "lr_scheduler", None)
-
- def get_lr(self):
- return self.wrapped_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.wrapped_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.wrapped_optimizer.all_reduce_grads(module)
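The removed optimizer wrappers revolve around dynamic loss scaling: scale the loss before backward, fold the inverse scale into unscaling/clipping, and shrink the scale when gradients overflow. A minimal, self-contained PyTorch sketch of that pattern (not fairseq's actual API, just the idea the file implements):

```python
import torch

model = torch.nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_scale = 2.0 ** 7  # real trainers usually start much higher, e.g. 2**15

for step in range(10):
    opt.zero_grad()
    x, y = torch.randn(8, 16), torch.randn(8, 4)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss * loss_scale).backward()               # scale the loss so small grads survive FP16

    # Unscale and check for overflow before the optimizer step.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    for g in grads:
        g.div_(loss_scale)
    total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    if not torch.isfinite(total_norm):
        loss_scale /= 2.0                        # overflow: shrink the scale and skip the step
        continue
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
```

In the actual wrappers above, the inverse scale is accumulated in `_multiply_factor` and applied once via `multiply_grads`, so gradients are only touched a single time per step.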
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless3d.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless3d.py
deleted file mode 100644
index 810a71e4b1fa344dd2d731186516dbfa96c9cd03..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/countless3d.py
+++ /dev/null
@@ -1,356 +0,0 @@
-from six.moves import range
-from PIL import Image
-import numpy as np
-import io
-import time
-import math
-import random
-import sys
-from collections import defaultdict
-from copy import deepcopy
-from itertools import combinations
-from functools import reduce
-from tqdm import tqdm
-
-from memory_profiler import profile
-
-def countless5(a,b,c,d,e):
- """First stage of generalizing from countless2d.
-
- You have five slots: A, B, C, D, E
-
- You can decide if something is the winner by first checking for
- matches of three, then matches of two, then picking just one if
- the other two tries fail. In countless2d, you just check for matches
- of two and then pick one of them otherwise.
-
- Unfortunately, you need to check ABC, ABD, ABE, BCD, BDE, & CDE.
- Then you need to check AB, AC, AD, BC, BD
- We skip checking E because if none of these match, we pick E. We can
- skip checking AE, BE, CE, DE since if any of those match, E is our boy
- so it's redundant.
-
- So countless grows combinatorially in complexity.
- """
- sections = [ a,b,c,d,e ]
-
- p2 = lambda q,r: q * (q == r) # q if p == q else 0
- p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) # q if q == r == s else 0
-
- lor = lambda x,y: x + (x == 0) * y
-
- results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) )
- results3 = reduce(lor, results3)
-
- results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) )
- results2 = reduce(lor, results2)
-
- return reduce(lor, (results3, results2, e))
-
-def countless8(a,b,c,d,e,f,g,h):
- """Extend countless5 to countless8. Same deal, except we also
- need to check for matches of length 4."""
- sections = [ a, b, c, d, e, f, g, h ]
-
- p2 = lambda q,r: q * (q == r)
- p3 = lambda q,r,s: q * ( (q == r) & (r == s) )
- p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) )
-
- lor = lambda x,y: x + (x == 0) * y
-
- results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) )
- results4 = reduce(lor, results4)
-
- results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) )
- results3 = reduce(lor, results3)
-
- # We can always use our shortcut of omitting the last element
- # for N choose 2
- results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) )
- results2 = reduce(lor, results2)
-
- return reduce(lor, [ results4, results3, results2, h ])
-
-def dynamic_countless3d(data):
- """countless8 + dynamic programming. ~2x faster"""
- sections = []
-
- # shift zeros up one so they don't interfere with bitwise operators
- # we'll shift down at the end
- data += 1
-
- # This loop splits the 3D array apart into eight sub-arrays that are
- # all the result of striding by 2 with one of the eight (0/1, 0/1, 0/1)
- # offsets, generalizing the A, B, C, D positions from Figure 1.
- factor = (2,2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- pick = lambda a,b: a * (a == b)
- lor = lambda x,y: x + (x == 0) * y
-
- subproblems2 = {}
-
- results2 = None
- for x,y in combinations(range(7), 2):
- res = pick(sections[x], sections[y])
- subproblems2[(x,y)] = res
- if results2 is not None:
- results2 += (results2 == 0) * res
- else:
- results2 = res
-
- subproblems3 = {}
-
- results3 = None
- for x,y,z in combinations(range(8), 3):
- res = pick(subproblems2[(x,y)], sections[z])
-
- if z != 7:
- subproblems3[(x,y,z)] = res
-
- if results3 is not None:
- results3 += (results3 == 0) * res
- else:
- results3 = res
-
- results3 = reduce(lor, (results3, results2, sections[-1]))
-
- # free memory
- results2 = None
- subproblems2 = None
- res = None
-
- results4 = ( pick(subproblems3[(x,y,z)], sections[w]) for x,y,z,w in combinations(range(8), 4) )
- results4 = reduce(lor, results4)
- subproblems3 = None # free memory
-
- final_result = lor(results4, results3) - 1
- data -= 1
- return final_result
-
-def countless3d(data):
- """Now write countless8 in such a way that it could be used
- to process an image."""
- sections = []
-
- # shift zeros up one so they don't interfere with bitwise operators
- # we'll shift down at the end
- data += 1
-
- # This loop splits the 3D array apart into eight sub-arrays that are
- # all the result of striding by 2 with one of the eight (0/1, 0/1, 0/1)
- # offsets, generalizing the A, B, C, D positions from Figure 1.
- factor = (2,2,2)
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- p2 = lambda q,r: q * (q == r)
- p3 = lambda q,r,s: q * ( (q == r) & (r == s) )
- p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) )
-
- lor = lambda x,y: x + (x == 0) * y
-
- results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) )
- results4 = reduce(lor, results4)
-
- results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) )
- results3 = reduce(lor, results3)
-
- results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) )
- results2 = reduce(lor, results2)
-
- final_result = reduce(lor, (results4, results3, results2, sections[-1])) - 1
- data -= 1
- return final_result
-
-def countless_generalized(data, factor):
- assert len(data.shape) == len(factor)
-
- sections = []
-
- mode_of = reduce(lambda x,y: x * y, factor)
- majority = int(math.ceil(float(mode_of) / 2))
-
- data += 1
-
- # This loop splits the array into one strided sub-array per offset in
- # `factor`, generalizing the A, B, C, D positions from Figure 1 of the
- # 2D case.
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- def pick(elements):
- eq = ( elements[i] == elements[i+1] for i in range(len(elements) - 1) )
- anded = reduce(lambda p,q: p & q, eq)
- return elements[0] * anded
-
- def logical_or(x,y):
- return x + (x == 0) * y
-
- result = ( pick(combo) for combo in combinations(sections, majority) )
- result = reduce(logical_or, result)
- for i in range(majority - 1, 3-1, -1): # 3-1 b/c of exclusive bounds
- partial_result = ( pick(combo) for combo in combinations(sections, i) )
- partial_result = reduce(logical_or, partial_result)
- result = logical_or(result, partial_result)
-
- partial_result = ( pick(combo) for combo in combinations(sections[:-1], 2) )
- partial_result = reduce(logical_or, partial_result)
- result = logical_or(result, partial_result)
-
- result = logical_or(result, sections[-1]) - 1
- data -= 1
- return result
-
-def dynamic_countless_generalized(data, factor):
- assert len(data.shape) == len(factor)
-
- sections = []
-
- mode_of = reduce(lambda x,y: x * y, factor)
- majority = int(math.ceil(float(mode_of) / 2))
-
- data += 1 # offset from zero
-
- # This loop splits the array into one strided sub-array per offset in
- # `factor`, generalizing the A, B, C, D positions from Figure 1 of the
- # 2D case.
- for offset in np.ndindex(factor):
- part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- pick = lambda a,b: a * (a == b)
- lor = lambda x,y: x + (x == 0) * y # logical or
-
- subproblems = [ {}, {} ]
- results2 = None
- for x,y in combinations(range(len(sections) - 1), 2):
- res = pick(sections[x], sections[y])
- subproblems[0][(x,y)] = res
- if results2 is not None:
- results2 = lor(results2, res)
- else:
- results2 = res
-
- results = [ results2 ]
- for r in range(3, majority+1):
- r_results = None
- for combo in combinations(range(len(sections)), r):
- res = pick(subproblems[0][combo[:-1]], sections[combo[-1]])
-
- if combo[-1] != len(sections) - 1:
- subproblems[1][combo] = res
-
- if r_results is not None:
- r_results = lor(r_results, res)
- else:
- r_results = res
- results.append(r_results)
- subproblems[0] = subproblems[1]
- subproblems[1] = {}
-
- results.reverse()
- final_result = lor(reduce(lor, results), sections[-1]) - 1
- data -= 1
- return final_result
-
-def downsample_with_averaging(array):
- """
- Downsample x by factor using averaging.
-
- @return: The downsampled array, of the same type as x.
- """
- factor = (2,2,2)
-
- if np.array_equal(factor[:3], np.array([1,1,1])):
- return array
-
- output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor))
- temp = np.zeros(output_shape, float)
- counts = np.zeros(output_shape, dtype=int)  # the np.int alias was removed from recent NumPy
- for offset in np.ndindex(factor):
- part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- indexing_expr = tuple(np.s_[:s] for s in part.shape)
- temp[indexing_expr] += part
- counts[indexing_expr] += 1
- return (temp / counts).astype(array.dtype)  # np.cast was removed in NumPy 2.0
-
-def downsample_with_max_pooling(array):
-
- factor = (2,2,2)
-
- sections = []
-
- for offset in np.ndindex(factor):
- part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))]
- sections.append(part)
-
- output = sections[0].copy()
-
- for section in sections[1:]:
- np.maximum(output, section, output)
-
- return output
-
-def striding(array):
- """Downsample x by factor using striding.
-
- @return: The downsampled array, of the same type as x.
- """
- factor = (2,2,2)
- if np.all(np.array(factor, int) == 1):
- return array
- return array[tuple(np.s_[::f] for f in factor)]
-
-def benchmark():
- def countless3d_generalized(img):
- return countless_generalized(img, (2,8,1))
- def countless3d_dynamic_generalized(img):
- return dynamic_countless_generalized(img, (8,8,1))
-
- methods = [
- # countless3d,
- # dynamic_countless3d,
- countless3d_generalized,
- # countless3d_dynamic_generalized,
- # striding,
- # downsample_with_averaging,
- # downsample_with_max_pooling
- ]
-
- data = np.zeros(shape=(16**2, 16**2, 16**2), dtype=np.uint8) + 1
-
- N = 5
-
- print('Algorithm\tMPx\tMB/sec\tSec\tN=%d' % N)
-
- for fn in methods:
- start = time.time()
- for _ in range(N):
- result = fn(data)
- end = time.time()
-
- total_time = (end - start)
- mpx = N * float(data.shape[0] * data.shape[1] * data.shape[2]) / total_time / 1024.0 / 1024.0
- mbytes = mpx * np.dtype(data.dtype).itemsize
- # Output in tab separated format to enable copy-paste into excel/numbers
- print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time))
-
-if __name__ == '__main__':
- benchmark()
-
-# Algorithm MPx MB/sec Sec N=5
-# countless3d 10.564 10.564 60.58
-# dynamic_countless3d 22.717 22.717 28.17
-# countless3d_generalized 9.702 9.702 65.96
-# countless3d_dynamic_generalized 22.720 22.720 28.17
-# striding 253360.506 253360.506 0.00
-# downsample_with_averaging 224.098 224.098 2.86
-# downsample_with_max_pooling 690.474 690.474 0.93
-
-
-
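A small usage sketch for the removed COUNTLESS implementation, assuming the file is importable as a module named `countless3d`; inputs should be unsigned-integer label volumes with dimensions divisible by the (2,2,2) factor:

```python
import numpy as np
from countless3d import countless3d, downsample_with_max_pooling  # assumed module name

# 4x4x4 label volume; countless3d keeps a frequently occurring value per 2x2x2 block.
labels = np.random.randint(0, 4, size=(4, 4, 4), dtype=np.uint8)
small = countless3d(labels)                    # values are shifted in place and shifted back
print(small.shape)                             # (2, 2, 2)
print(downsample_with_max_pooling(labels).shape)
```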
diff --git a/spaces/niizam/sovits-models/hubert/__init__.py b/spaces/niizam/sovits-models/hubert/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/GETTING_STARTED.md b/spaces/nikitaPDL2023/assignment4/detectron2/GETTING_STARTED.md
deleted file mode 100644
index 404b0c8f467264d1adf61e8274e5f864e24018e8..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/GETTING_STARTED.md
+++ /dev/null
@@ -1,79 +0,0 @@
-## Getting Started with Detectron2
-
-This document provides a brief intro of the usage of builtin command-line tools in detectron2.
-
-For a tutorial that involves actual coding with the API,
-see our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-which covers how to run inference with an
-existing model, and how to train a builtin model on a custom dataset.
-
-
-### Inference Demo with Pre-trained Models
-
-1. Pick a model and its config file from
- [model zoo](MODEL_ZOO.md),
- for example, `mask_rcnn_R_50_FPN_3x.yaml`.
-2. We provide `demo.py` that is able to demo builtin configs. Run it with:
-```
-cd demo/
-python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
- --input input1.jpg input2.jpg \
- [--other-options]
- --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
-```
-The configs are made for training; therefore we need to point `MODEL.WEIGHTS` to a model from the model zoo for evaluation.
-This command will run the inference and show visualizations in an OpenCV window.
-
-For details of the command line arguments, see `demo.py -h` or look at its source code
-to understand its behavior. Some common arguments are:
-* To run __on your webcam__, replace `--input files` with `--webcam`.
-* To run __on a video__, replace `--input files` with `--video-input video.mp4`.
-* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`.
-* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`.
-
-
-### Training & Evaluation in Command Line
-
-We provide two scripts, "tools/plain_train_net.py" and "tools/train_net.py",
-that are made to train all the configs provided in detectron2. You may want to
-use them as a reference to write your own training script.
-
-Compared to "train_net.py", "plain_train_net.py" supports fewer default
-features. It also includes fewer abstractions, and is therefore easier to add
-custom logic to.
-
-To train a model with "train_net.py", first
-setup the corresponding datasets following
-[datasets/README.md](./datasets/README.md),
-then run:
-```
-cd tools/
-./train_net.py --num-gpus 8 \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
-```
-
-The configs are made for 8-GPU training.
-To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.:
-```
-./train_net.py \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
- --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
-```
-
-To evaluate a model's performance, use
-```
-./train_net.py \
- --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
- --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
-```
-For more options, see `./train_net.py -h`.
-
-### Use Detectron2 APIs in Your Code
-
-See our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-to learn how to use detectron2 APIs to:
-1. run inference with an existing model
-2. train a builtin model on a custom dataset
-
-See [detectron2/projects](https://github.com/facebookresearch/detectron2/tree/main/projects)
-for more ways to build your project on detectron2.
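To complement the command-line demo in the removed guide, here is a short sketch of the Python API path that the linked Colab notebook covers; the config name mirrors the one used above and the image path is a placeholder:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for predictions
# cfg.MODEL.DEVICE = "cpu"  # uncomment to run without a GPU

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input1.jpg"))  # placeholder image path
print(outputs["instances"].pred_classes)
```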
diff --git a/spaces/niro-private/chatCSV/loaders/powerpoint.py b/spaces/niro-private/chatCSV/loaders/powerpoint.py
deleted file mode 100644
index 526a8c92620e0b26ae17a5f2d4258ff774015e01..0000000000000000000000000000000000000000
--- a/spaces/niro-private/chatCSV/loaders/powerpoint.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .common import process_file
-from langchain.document_loaders import UnstructuredPowerPointLoader
-
-def process_powerpoint(vector_store, file, stats_db):
- return process_file(vector_store, file, UnstructuredPowerPointLoader, ".pptx", stats_db=stats_db)
\ No newline at end of file
diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/zlib_wrapper/zlibwrapper.cc b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/zlib_wrapper/zlibwrapper.cc
deleted file mode 100644
index a3a2fa5e584354f8216428df199b027ebfac8d21..0000000000000000000000000000000000000000
--- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/zlib_wrapper/zlibwrapper.cc
+++ /dev/null
@@ -1,841 +0,0 @@
-/*
- * Copyright 2021 Google LLC
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#include "sparse_matmul/zlib_wrapper/zlibwrapper.h"
-
-#include <assert.h>
-#include <string.h>
-
-#include <algorithm>
-#include <memory>
-#include <string>
-
-#include "glog/logging.h"
-#include "sparse_matmul/zlib_wrapper/gzipheader.h"
-#include "zconf.h"
-#include "zlib.h"
-
-// The GZIP header (see RFC 1952):
-// +---+---+---+---+---+---+---+---+---+---+
-// |ID1|ID2|CM |FLG| MTIME |XFL|OS |
-// +---+---+---+---+---+---+---+---+---+---+
-// ID1 \037
-// ID2 \213
-// CM \010 (compression method == DEFLATE)
-// FLG \000 (special flags that we do not support)
-// MTIME Unix format modification time (0 means not available)
-// XFL 2-4? DEFLATE flags
-// OS ???? Operating system indicator (255 means unknown)
-
-// Header value we generate:
-// We use a #define so sizeof() works correctly
-#define GZIP_HEADER "\037\213\010\000\000\000\000\000\002\377"
-
-namespace csrblocksparse {
-
-// We allow all kinds of bad footers when this flag is true.
-// Some web servers send bad pages corresponding to these cases
-// and IE is tolerant with it.
-// - Extra bytes after gzip footer (see bug 69126)
-// - No gzip footer (see bug 72896)
-// - Incomplete gzip footer (see bug 71871706)
-bool ZLib::should_be_flexible_with_gzip_footer_ = false;
-
-// Initialize the ZLib class
-ZLib::ZLib()
- : comp_init_(false), uncomp_init_(false), gzip_header_(new GZipHeader) {
- Reinit();
- init_settings_ = settings_;
-}
-
-ZLib::~ZLib() {
- if (comp_init_) {
- deflateEnd(&comp_stream_);
- }
- if (uncomp_init_) {
- inflateEnd(&uncomp_stream_);
- }
- delete gzip_header_;
-}
-
-void ZLib::Reinit() {
- settings_.dictionary_ = nullptr;
- settings_.dict_len_ = 0;
- settings_.compression_level_ = Z_DEFAULT_COMPRESSION;
- settings_.window_bits_ = MAX_WBITS;
- settings_.mem_level_ = 8; // DEF_MEM_LEVEL
- settings_.no_header_mode_ = false;
- settings_.gzip_header_mode_ = false;
- settings_.dont_hide_zstream_end_ = false;
-
- if (comp_init_) {
- int err = deflateReset(&comp_stream_);
- if (err != Z_OK) {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
- }
- if (uncomp_init_) {
- // Use negative window bits size to indicate bare stream with no header.
- int wbits = (settings_.no_header_mode_ ? -MAX_WBITS : MAX_WBITS);
- int err = inflateReset2(&uncomp_stream_, wbits);
- if (err == Z_OK) {
- init_settings_.no_header_mode_ = settings_.no_header_mode_;
- } else {
- inflateEnd(&uncomp_stream_);
- uncomp_init_ = false;
- }
- }
- crc_ = 0;
- uncompressed_size_ = 0;
- gzip_header_->Reset();
- gzip_footer_bytes_ = -1;
- first_chunk_ = true;
-}
-
-void ZLib::Reset() {
- first_chunk_ = true;
- gzip_header_->Reset();
-}
-
-void ZLib::CheckValidParams() {
- if (settings_.dictionary_ != nullptr &&
- (settings_.no_header_mode_ || settings_.gzip_header_mode_)) {
- LOG(FATAL)
- << "Incompatible params: require zlib headers with preset dictionary";
- }
-}
-
-void ZLib::SetNoHeaderMode(bool no_header_mode) {
- settings_.no_header_mode_ = no_header_mode;
- if (init_settings_.no_header_mode_ != settings_.no_header_mode_) {
- // Once the header mode changes, we have to reinitialize all our streams
- if (comp_init_) {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
- if (uncomp_init_) {
- inflateEnd(&uncomp_stream_);
- uncomp_init_ = false;
- }
- } else {
- // Mode hasn't changed, but treat this as a reset request nevertheless
- Reset();
- }
- CheckValidParams();
-}
-
-void ZLib::SetGzipHeaderMode() {
- settings_.gzip_header_mode_ = true;
- SetNoHeaderMode(true); // we use gzip headers, not zlib headers
- CheckValidParams();
-}
-
-void ZLib::SetDictionary(const char* initial_dict, unsigned int dict_len) {
- settings_.dictionary_ = (Bytef*)initial_dict; // NOLINT
- settings_.dict_len_ = dict_len;
- CheckValidParams();
-}
-
-void ZLib::SetDontHideStreamEnd() { settings_.dont_hide_zstream_end_ = true; }
-
-int ZLib::MinFooterSize() const {
- int min_footer_size = 2; // Room for empty chunk.
- if (settings_.gzip_header_mode_) {
- min_footer_size += 8; // Room for actual footer.
- }
- return min_footer_size;
-}
-
-// --------- COMPRESS MODE
-
-// Initialization method to be called if we hit an error while
-// compressing. On hitting an error, call this method before returning
-// the error.
-void ZLib::CompressErrorInit() {
- if (comp_init_) {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
- Reset();
-}
-
-// These probably return Z_OK, but may return Z_BUF_ERROR if outbuf is full
-int ZLib::WriteGzipHeader() {
- if (comp_stream_.avail_out < sizeof(GZIP_HEADER)) return Z_BUF_ERROR;
- memcpy(comp_stream_.next_out, GZIP_HEADER, sizeof(GZIP_HEADER) - 1);
- comp_stream_.next_out += sizeof(GZIP_HEADER) - 1;
- comp_stream_.avail_out -= sizeof(GZIP_HEADER) - 1;
- return Z_OK;
-}
-
-int ZLib::WriteGzipFooter(Bytef* dest, uLongf destLen) {
- if (destLen < 8) // not enough space for footer
- return Z_BUF_ERROR;
- *dest++ = (crc_ >> 0) & 255;
- *dest++ = (crc_ >> 8) & 255;
- *dest++ = (crc_ >> 16) & 255;
- *dest++ = (crc_ >> 24) & 255;
- *dest++ = (uncompressed_size_ >> 0) & 255;
- *dest++ = (uncompressed_size_ >> 8) & 255;
- *dest++ = (uncompressed_size_ >> 16) & 255;
- *dest++ = (uncompressed_size_ >> 24) & 255;
- return Z_OK;
-}
-
-int ZLib::DeflateInit() {
- int err =
- deflateInit2(&comp_stream_, settings_.compression_level_, Z_DEFLATED,
- (settings_.no_header_mode_ ? -settings_.window_bits_
- : settings_.window_bits_),
- settings_.mem_level_, Z_DEFAULT_STRATEGY);
- if (err == Z_OK) {
- // Save parameters for later reusability checks
- init_settings_.compression_level_ = settings_.compression_level_;
- init_settings_.window_bits_ = settings_.window_bits_;
- init_settings_.mem_level_ = settings_.mem_level_;
- init_settings_.no_header_mode_ = settings_.no_header_mode_;
- }
- return err;
-}
-
-int ZLib::CompressInit(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong* sourceLen) {
- int err;
-
- comp_stream_.next_in = (Bytef*)source; // NOLINT
- comp_stream_.avail_in = (uInt)*sourceLen;
- // Check for sourceLen (unsigned long) to fit into avail_in (unsigned int).
- if ((uLong)comp_stream_.avail_in != *sourceLen) return Z_BUF_ERROR;
- comp_stream_.next_out = dest;
- comp_stream_.avail_out = (uInt)*destLen;
- // Check for destLen (unsigned long) to fit into avail_out (unsigned int).
- if ((uLong)comp_stream_.avail_out != *destLen) return Z_BUF_ERROR;
-
- if (!first_chunk_) // only need to set up stream the first time through
- return Z_OK;
-
- // Force full reinit if properties have changed in a way we can't adjust.
- if (comp_init_ &&
- (init_settings_.dictionary_ != settings_.dictionary_ ||
- init_settings_.dict_len_ != settings_.dict_len_ ||
- init_settings_.window_bits_ != settings_.window_bits_ ||
- init_settings_.mem_level_ != settings_.mem_level_ ||
- init_settings_.no_header_mode_ != settings_.no_header_mode_)) {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
-
- // Reuse if we've already initted the object.
- if (comp_init_) { // we've already initted it
- err = deflateReset(&comp_stream_);
- if (err != Z_OK) {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
- }
-
- // If compression level has changed, try to reconfigure instead of reinit
- if (comp_init_ &&
- init_settings_.compression_level_ != settings_.compression_level_) {
- err = deflateParams(&comp_stream_, settings_.compression_level_,
- Z_DEFAULT_STRATEGY);
- if (err == Z_OK) {
- init_settings_.compression_level_ = settings_.compression_level_;
- } else {
- deflateEnd(&comp_stream_);
- comp_init_ = false;
- }
- }
-
- // First use or previous state was not reusable with current settings.
- if (!comp_init_) {
- comp_stream_.zalloc = (alloc_func)0;
- comp_stream_.zfree = (free_func)0;
- comp_stream_.opaque = (voidpf)0;
- err = DeflateInit();
- if (err != Z_OK) return err;
- comp_init_ = true;
- }
- return Z_OK;
-}
-
-// In a perfect world we'd always have the full buffer to compress
-// when the time came, and we could just call Compress(). Alas, we
-// want to do chunked compression on our webserver. In this
-// application, we compress the header, send it off, then compress the
-// results, send them off, then compress the footer. Thus we need to
-// use the chunked compression features of zlib.
-int ZLib::CompressAtMostOrAll(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong* sourceLen,
- int flush_mode) { // Z_FULL_FLUSH or Z_FINISH
- int err;
-
- if ((err = CompressInit(dest, destLen, source, sourceLen)) != Z_OK)
- return err;
-
- // This is used to figure out how many bytes we wrote *this chunk*
- int compressed_size = comp_stream_.total_out;
-
- // Some setup happens only for the first chunk we compress in a run
- if (first_chunk_) {
- // Append the gzip header before we start compressing
- if (settings_.gzip_header_mode_) {
- if ((err = WriteGzipHeader()) != Z_OK) return err;
- compressed_size -= sizeof(GZIP_HEADER) - 1; // -= is right: adds to size
- crc_ = crc32(0, nullptr, 0); // initialize
- }
-
- // Initialize the dictionary just before we start compressing
- if (settings_.dictionary_) {
- err = deflateSetDictionary(&comp_stream_, settings_.dictionary_,
- settings_.dict_len_);
- if (err != Z_OK) return err;
- init_settings_.dictionary_ = settings_.dictionary_;
- init_settings_.dict_len_ = settings_.dict_len_;
- }
-
- uncompressed_size_ = 0;
- first_chunk_ = false; // so we don't do this again
- }
-
- // flush_mode is Z_FINISH for all mode, Z_SYNC_FLUSH for incremental
- // compression.
- err = deflate(&comp_stream_, flush_mode);
-
- const uLong source_bytes_consumed = *sourceLen - comp_stream_.avail_in;
- *sourceLen = comp_stream_.avail_in;
-
- if ((err == Z_STREAM_END || err == Z_OK) && comp_stream_.avail_in == 0 &&
- comp_stream_.avail_out != 0) {
- // we processed everything ok and the output buffer was large enough.
- {}
- } else if (err == Z_STREAM_END && comp_stream_.avail_in > 0) {
- return Z_BUF_ERROR; // should never happen
- } else if (err != Z_OK && err != Z_STREAM_END && err != Z_BUF_ERROR) {
- // an error happened
- CompressErrorInit();
- return err;
- } else if (comp_stream_.avail_out == 0) { // not enough space
- err = Z_BUF_ERROR;
- }
-
- assert(err == Z_OK || err == Z_STREAM_END || err == Z_BUF_ERROR);
- if (err == Z_STREAM_END) err = Z_OK;
-
- // update the crc and other metadata
- uncompressed_size_ += source_bytes_consumed;
- compressed_size = comp_stream_.total_out - compressed_size; // delta
- *destLen = compressed_size;
- if (settings_.gzip_header_mode_) // don't bother with crc else
- crc_ = crc32(crc_, source, source_bytes_consumed);
-
- return err;
-}
-
-int ZLib::CompressChunkOrAll(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong sourceLen,
- int flush_mode) { // Z_FULL_FLUSH or Z_FINISH
- const int ret =
- CompressAtMostOrAll(dest, destLen, source, &sourceLen, flush_mode);
- if (ret == Z_BUF_ERROR) CompressErrorInit();
- return ret;
-}
-
-int ZLib::CompressChunk(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong sourceLen) {
- return CompressChunkOrAll(dest, destLen, source, sourceLen, Z_SYNC_FLUSH);
-}
-
-int ZLib::CompressAtMost(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong* sourceLen) {
- return CompressAtMostOrAll(dest, destLen, source, sourceLen, Z_SYNC_FLUSH);
-}
-
-// This writes the gzip footer info, if necessary.
-// No matter what, we call Reset() so we can compress Chunks again.
-int ZLib::CompressChunkDone(Bytef* dest, uLongf* destLen) {
- // Make sure our buffer is of reasonable size.
- if (*destLen < MinFooterSize()) {
- *destLen = 0;
- return Z_BUF_ERROR;
- }
-
- // The underlying zlib library requires a non-nullptr source pointer, even if
- // the source length is zero, otherwise it will generate an (incorrect) zero-
- // valued CRC checksum.
- char dummy = '\0';
- int err;
-
- assert(!first_chunk_ && comp_init_);
-
- const uLongf orig_destLen = *destLen;
- // NOLINTNEXTLINE
- if ((err = CompressChunkOrAll(dest, destLen, (const Bytef*)&dummy, 0,
- Z_FINISH)) != Z_OK) {
- Reset(); // we assume they won't retry on error
- return err;
- }
-
- // Make sure that when we exit, we can start a new round of chunks later
- // (This must be set after the call to CompressChunkOrAll() above.)
- Reset();
-
- // Write gzip footer if necessary. They're explicitly in little-endian order
- if (settings_.gzip_header_mode_) {
- if ((err = WriteGzipFooter(dest + *destLen, orig_destLen - *destLen)) !=
- Z_OK)
- return err;
- *destLen += 8; // zlib footer took up another 8 bytes
- }
- return Z_OK; // stream_end is ok
-}
-
-// This routine only initializes the compression stream once. Thereafter, it
-// just does a deflateReset on the stream, which should be faster.
-int ZLib::Compress(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong sourceLen) {
- int err;
- const uLongf orig_destLen = *destLen;
- if ((err = CompressChunkOrAll(dest, destLen, source, sourceLen, Z_FINISH)) !=
- Z_OK)
- return err;
- Reset(); // reset for next call to Compress
-
- if (settings_.gzip_header_mode_) {
- if ((err = WriteGzipFooter(dest + *destLen, orig_destLen - *destLen)) !=
- Z_OK)
- return err;
- *destLen += 8; // zlib footer took up another 8 bytes
- }
-
- return Z_OK;
-}
-
-// --------- UNCOMPRESS MODE
-
-int ZLib::InflateInit() {
- // Use negative window bits size to indicate bare stream with no header.
- int wbits = (settings_.no_header_mode_ ? -MAX_WBITS : MAX_WBITS);
- int err = inflateInit2(&uncomp_stream_, wbits);
- if (err == Z_OK) {
- init_settings_.no_header_mode_ = settings_.no_header_mode_;
- }
- return err;
-}
-
-// Initialization method to be called if we hit an error while
-// uncompressing. On hitting an error, call this method before
-// returning the error.
-void ZLib::UncompressErrorInit() {
- if (uncomp_init_) {
- inflateEnd(&uncomp_stream_);
- uncomp_init_ = false;
- }
- Reset();
-}
-
-int ZLib::UncompressInit(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong* sourceLen) {
- int err;
-
- uncomp_stream_.next_in = (Bytef*)source; // NOLINT
- uncomp_stream_.avail_in = (uInt)*sourceLen;
- // Check for sourceLen (unsigned long) to fit into avail_in (unsigned int).
- if ((uLong)uncomp_stream_.avail_in != *sourceLen) return Z_BUF_ERROR;
-
- uncomp_stream_.next_out = dest;
- uncomp_stream_.avail_out = (uInt)*destLen;
- // Check for destLen (unsigned long) to fit into avail_out (unsigned int).
- if ((uLong)uncomp_stream_.avail_out != *destLen) return Z_BUF_ERROR;
-
- if (!first_chunk_) // only need to set up stream the first time through
- return Z_OK;
-
- // Force full reinit if properties have changed in a way we can't adjust.
- if (uncomp_init_ && (init_settings_.dictionary_ != settings_.dictionary_ ||
- init_settings_.dict_len_ != settings_.dict_len_)) {
- inflateEnd(&uncomp_stream_);
- uncomp_init_ = false;
- }
-
- // Reuse if we've already initted the object.
- if (uncomp_init_) {
- // Use negative window bits size to indicate bare stream with no header.
- int wbits = (settings_.no_header_mode_ ? -MAX_WBITS : MAX_WBITS);
- err = inflateReset2(&uncomp_stream_, wbits);
- if (err == Z_OK) {
- init_settings_.no_header_mode_ = settings_.no_header_mode_;
- } else {
- UncompressErrorInit();
- }
- }
-
- // First use or previous state was not reusable with current settings.
- if (!uncomp_init_) {
- uncomp_stream_.zalloc = (alloc_func)0;
- uncomp_stream_.zfree = (free_func)0;
- uncomp_stream_.opaque = (voidpf)0;
- err = InflateInit();
- if (err != Z_OK) return err;
- uncomp_init_ = true;
- }
- return Z_OK;
-}
-
-// If you compressed your data a chunk at a time, with CompressChunk,
-// you can uncompress it a chunk at a time with UncompressChunk.
-// Only difference between chunked and unchunked uncompression
-// is the flush mode we use: Z_SYNC_FLUSH (chunked) or Z_FINISH (unchunked).
-int ZLib::UncompressAtMostOrAll(Bytef* dest, uLongf* destLen,
- const Bytef* source, uLong* sourceLen,
- int flush_mode) { // Z_SYNC_FLUSH or Z_FINISH
- int err = Z_OK;
-
- if (first_chunk_) {
- gzip_footer_bytes_ = -1;
- if (settings_.gzip_header_mode_) {
- // If we haven't read our first chunk of actual compressed data,
- // and we're expecting gzip headers, then parse some more bytes
- // from the gzip headers.
- const Bytef* bodyBegin = nullptr;
- GZipHeader::Status status = gzip_header_->ReadMore(
- reinterpret_cast<const char*>(source), *sourceLen,
- reinterpret_cast<const char**>(&bodyBegin));
- switch (status) {
- case GZipHeader::INCOMPLETE_HEADER: // don't have the complete header
- *destLen = 0;
- *sourceLen = 0; // GZipHeader used all the input
- return Z_OK;
- case GZipHeader::INVALID_HEADER: // bogus header
- Reset();
- return Z_DATA_ERROR;
- case GZipHeader::COMPLETE_HEADER: // we have the full header
- *sourceLen -= (bodyBegin - source); // skip past header bytes
- source = bodyBegin;
- crc_ = crc32(0, nullptr, 0); // initialize CRC
- break;
- default:
- LOG(FATAL) << "Unexpected gzip header parsing result: " << status;
- }
- }
- } else if (gzip_footer_bytes_ >= 0) {
- // We're now just reading the gzip footer. We already read all the data.
- if (gzip_footer_bytes_ + *sourceLen > sizeof(gzip_footer_) &&
- // When this flag is true, we allow some extra bytes after the
- // gzip footer.
- !should_be_flexible_with_gzip_footer_) {
- VLOG(1) << "UncompressChunkOrAll: Received "
- << (gzip_footer_bytes_ + *sourceLen - sizeof(gzip_footer_))
- << " extra bytes after gzip footer: "
- << std::string(reinterpret_cast<const char*>(source),
- std::min(*sourceLen, 20UL));
- Reset();
- return Z_DATA_ERROR;
- }
- uLong len = sizeof(gzip_footer_) - gzip_footer_bytes_;
- if (len > *sourceLen) len = *sourceLen;
- if (len > 0) {
- memcpy(gzip_footer_ + gzip_footer_bytes_, source, len);
- gzip_footer_bytes_ += len;
- }
- *sourceLen -= len;
- *destLen = 0;
- return Z_OK;
- }
-
- if ((err = UncompressInit(dest, destLen, source, sourceLen)) != Z_OK) {
- LOG(WARNING) << "ZLib: UncompressInit: Error: " << err
- << "SourceLen: " << *sourceLen;
- return err;
- }
-
- // This is used to figure out how many output bytes we wrote *this chunk*:
- const uLong old_total_out = uncomp_stream_.total_out;
-
- // This is used to figure out how many input bytes we read *this chunk*:
- const uLong old_total_in = uncomp_stream_.total_in;
-
- // Some setup happens only for the first chunk we compress in a run
- if (first_chunk_) {
- // Initialize the dictionary just before we start compressing
- if (settings_.gzip_header_mode_ || settings_.no_header_mode_) {
- // In no_header_mode, we can just set the dictionary, since no
- // checking is done to advance past header bits to get us in the
- // dictionary setting mode. In settings_.gzip_header_mode_ we've already
- // removed headers, so this code works too.
- if (settings_.dictionary_) {
- err = inflateSetDictionary(&uncomp_stream_, settings_.dictionary_,
- settings_.dict_len_);
- if (err != Z_OK) {
- LOG(WARNING) << "inflateSetDictionary: Error: " << err
- << " dict_len: " << settings_.dict_len_;
- UncompressErrorInit();
- return err;
- }
- init_settings_.dictionary_ = settings_.dictionary_;
- init_settings_.dict_len_ = settings_.dict_len_;
- }
- }
-
- first_chunk_ = false; // so we don't do this again
-
- // For the first chunk *only* (to avoid infinite troubles), we let
- // there be no actual data to uncompress. This sometimes triggers
- // when the input is only the gzip header, say.
- if (*sourceLen == 0) {
- *destLen = 0;
- return Z_OK;
- }
- }
-
- // We'll uncompress as much as we can. If we end OK great, otherwise
- // if we get an error that seems to be the gzip footer, we store the
- // gzip footer and return OK, otherwise we return the error.
-
- // flush_mode is Z_SYNC_FLUSH for chunked mode, Z_FINISH for all mode.
- err = inflate(&uncomp_stream_, flush_mode);
- if (settings_.dictionary_ && err == Z_NEED_DICT) {
- err = inflateSetDictionary(&uncomp_stream_, settings_.dictionary_,
- settings_.dict_len_);
- if (err != Z_OK) {
- LOG(WARNING) << "UncompressChunkOrAll: failed in inflateSetDictionary : "
- << err;
- UncompressErrorInit();
- return err;
- }
- init_settings_.dictionary_ = settings_.dictionary_;
- init_settings_.dict_len_ = settings_.dict_len_;
- err = inflate(&uncomp_stream_, flush_mode);
- }
-
- // Figure out how many bytes of the input zlib slurped up:
- const uLong bytes_read = uncomp_stream_.total_in - old_total_in;
- CHECK_LE(source + bytes_read, source + *sourceLen);
- *sourceLen = uncomp_stream_.avail_in;
-
- // Next we look at the footer, if any. Note that we might currently
- // have just part of the footer (eg, if this data is arriving over a
- // socket). After looking for a footer, log a warning if there is
- // extra cruft.
- if ((err == Z_STREAM_END) &&
- ((gzip_footer_bytes_ == -1) ||
- (gzip_footer_bytes_ < sizeof(gzip_footer_))) &&
- (uncomp_stream_.avail_in <= sizeof(gzip_footer_) ||
- // When this flag is true, we allow some extra bytes after the
- // zlib footer.
- should_be_flexible_with_gzip_footer_)) {
- // Due to a bug in old versions of zlibwrapper, we appended the gzip
- // footer even in non-gzip mode. Thus we always allow a gzip footer
- // even if we're not in gzip mode, so we can continue to uncompress
- // the old data. :-(
-
- // Store gzip footer bytes so we can check for footer consistency
- // in UncompressChunkDone(). (If we have the whole footer, we
- // could do the checking here, but we don't to keep consistency
- // with CompressChunkDone().)
- gzip_footer_bytes_ = std::min(static_cast<size_t>(uncomp_stream_.avail_in),
- sizeof(gzip_footer_));
- memcpy(gzip_footer_, source + bytes_read, gzip_footer_bytes_);
- *sourceLen -= gzip_footer_bytes_;
- } else if ((err == Z_STREAM_END || err == Z_OK) // everything went ok
- && uncomp_stream_.avail_in == 0) { // and we read it all
- {}
- } else if (err == Z_STREAM_END && uncomp_stream_.avail_in > 0) {
- VLOG(1) << "UncompressChunkOrAll: Received some extra data, bytes total: "
- << uncomp_stream_.avail_in << " bytes: "
- << std::string(
- reinterpret_cast<const char*>(uncomp_stream_.next_in),
- std::min(static_cast<int>(uncomp_stream_.avail_in), 20));
- UncompressErrorInit();
- return Z_DATA_ERROR; // what's the extra data for?
- } else if (err != Z_OK && err != Z_STREAM_END && err != Z_BUF_ERROR) {
- // an error happened
- VLOG(1) << "UncompressChunkOrAll: Error: " << err
- << " avail_out: " << uncomp_stream_.avail_out;
- UncompressErrorInit();
- return err;
- } else if (uncomp_stream_.avail_out == 0) {
- err = Z_BUF_ERROR;
- }
-
- assert(err == Z_OK || err == Z_BUF_ERROR || err == Z_STREAM_END);
- if (err == Z_STREAM_END && !settings_.dont_hide_zstream_end_) err = Z_OK;
-
- // update the crc and other metadata
- uncompressed_size_ = uncomp_stream_.total_out;
- *destLen = uncomp_stream_.total_out - old_total_out; // size for this call
- if (settings_.gzip_header_mode_) crc_ = crc32(crc_, dest, *destLen);
-
- return err;
-}
-
-int ZLib::UncompressChunkOrAll(Bytef* dest, uLongf* destLen,
- const Bytef* source, uLong sourceLen,
- int flush_mode) { // Z_SYNC_FLUSH or Z_FINISH
- const int ret =
- UncompressAtMostOrAll(dest, destLen, source, &sourceLen, flush_mode);
- if (ret == Z_BUF_ERROR) UncompressErrorInit();
- return ret;
-}
-
-int ZLib::UncompressAtMost(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong* sourceLen) {
- return UncompressAtMostOrAll(dest, destLen, source, sourceLen, Z_SYNC_FLUSH);
-}
-
-int ZLib::UncompressChunk(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong sourceLen) {
- return UncompressChunkOrAll(dest, destLen, source, sourceLen, Z_SYNC_FLUSH);
-}
-
-// We make sure we've uncompressed everything, that is, the current
-// uncompress stream is at a compressed-buffer-EOF boundary. In gzip
-// mode, we also check the gzip footer to make sure we pass the gzip
-// consistency checks. We RETURN true iff both types of checks pass.
-bool ZLib::UncompressChunkDone() {
- if (first_chunk_ || !uncomp_init_) {
- return false;
- }
- // Make sure we're at the end-of-compressed-data point. This means
- // if we call inflate with Z_FINISH we won't consume any input or
- // write any output
- Bytef dummyin, dummyout;
- uLongf dummylen = 0;
- if (UncompressChunkOrAll(&dummyout, &dummylen, &dummyin, 0, Z_FINISH) !=
- Z_OK) {
- return false;
- }
-
- // Make sure that when we exit, we can start a new round of chunks later
- Reset();
-
- // We don't need to check footer when this flag is true.
- if (should_be_flexible_with_gzip_footer_) {
- return true;
- }
-
- // Whether we were hoping for a gzip footer or not, we allow a gzip
- // footer. (See the note above about bugs in old zlibwrappers.) But
- // by the time we've seen all the input, it has to be either a
- // complete gzip footer, or no footer at all.
- if ((gzip_footer_bytes_ != -1) && (gzip_footer_bytes_ != 0) &&
- (gzip_footer_bytes_ != sizeof(gzip_footer_)))
- return false;
-
- if (!settings_.gzip_header_mode_) return true;
-
- return IsGzipFooterValid();
-}
-
-bool ZLib::IsGzipFooterValid() const {
- // If we were expecting a gzip footer, and didn't get a full one,
- // that's an error.
- if (gzip_footer_bytes_ == -1 || gzip_footer_bytes_ < sizeof(gzip_footer_))
- return false;
-
- // The footer holds the lower four bytes of the length.
- uLong uncompressed_size = 0;
- uncompressed_size += static_cast<uLong>(gzip_footer_[7]) << 24;
- uncompressed_size += gzip_footer_[6] << 16;
- uncompressed_size += gzip_footer_[5] << 8;
- uncompressed_size += gzip_footer_[4] << 0;
- if (uncompressed_size != (uncompressed_size_ & 0xffffffff)) {
- return false;
- }
-
- uLong checksum = 0;
- checksum += static_cast<uLong>(gzip_footer_[3]) << 24;
- checksum += gzip_footer_[2] << 16;
- checksum += gzip_footer_[1] << 8;
- checksum += gzip_footer_[0] << 0;
- if (crc_ != checksum) return false;
-
- return true;
-}
-
-// Uncompresses the source buffer into the destination buffer.
-// The destination buffer must be long enough to hold the entire
-// decompressed contents.
-//
-// We only initialize the uncomp_stream once. Thereafter, we use
-// inflateReset2, which should be faster.
-//
-// Returns Z_OK on success, otherwise, it returns a zlib error code.
-int ZLib::Uncompress(Bytef* dest, uLongf* destLen, const Bytef* source,
- uLong sourceLen) {
- int err;
- if ((err = UncompressChunkOrAll(dest, destLen, source, sourceLen,
- Z_FINISH)) != Z_OK) {
- Reset(); // let us try to compress again
- return err;
- }
- if (!UncompressChunkDone()) // calls Reset()
- return Z_DATA_ERROR;
- return Z_OK; // stream_end is ok
-}
-
-// Read the uncompressed length from the gzip footer.
-uLongf ZLib::GzipUncompressedLength(const Bytef* source, uLong len) {
- if (len <= 4) return 0; // malformed data.
-
- return (static_cast<uLongf>(source[len - 1]) << 24) +
- (static_cast<uLongf>(source[len - 2]) << 16) +
- (static_cast<uLongf>(source[len - 3]) << 8) +
- (static_cast<uLongf>(source[len - 4]) << 0);
-}
-
-int ZLib::UncompressGzipAndAllocate(Bytef** dest, uLongf* destLen,
- const Bytef* source, uLong sourceLen) {
- *dest = nullptr; // until we successfully allocate
- if (!settings_.gzip_header_mode_) return Z_VERSION_ERROR; // *shrug*
-
- uLongf uncompress_length = GzipUncompressedLength(source, sourceLen);
-
- // Do not trust the uncompress size reported by the compressed buffer.
- if (uncompress_length > *destLen) {
- if (!HasGzipHeader(reinterpret_cast(source), sourceLen)) {
- VLOG(1) << "Attempted to un-gzip data that is not gzipped.";
- return Z_DATA_ERROR;
- }
- VLOG(1) << "Uncompressed size " << uncompress_length
- << " exceeds maximum expected size " << *destLen;
- return Z_MEM_ERROR; // probably a corrupted gzip buffer
- }
-
- *destLen = uncompress_length;
-
- *dest = (Bytef*)malloc(*destLen); // NOLINT
- if (*dest == nullptr) // probably a corrupted gzip buffer
- return Z_MEM_ERROR;
-
- const int retval = Uncompress(*dest, destLen, source, sourceLen);
- if (retval != Z_OK) { // just to make life easier for them
- free(*dest);
- *dest = nullptr;
- }
- return retval;
-}
-
-// Convenience method to check if a bytestream has a header. This
-// is intended as a quick test: "Is this likely a GZip file?"
-bool ZLib::HasGzipHeader(const char* source, int sourceLen) {
- GZipHeader gzh;
- const char* ptr = nullptr;
- return gzh.ReadMore(source, sourceLen, &ptr) == GZipHeader::COMPLETE_HEADER;
-}
-
-} // namespace csrblocksparse
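The wrapper above hand-rolls the RFC 1952 gzip framing: the fixed 10-byte header in `GZIP_HEADER`, a raw DEFLATE body, and an 8-byte footer of CRC32 plus uncompressed length (both little-endian) written by `WriteGzipFooter` and checked by `IsGzipFooterValid`. The small Python sketch below reproduces that byte layout with only the standard library; the payload is an arbitrary example.
```
import gzip
import struct
import zlib

payload = b"hello, gzip framing"  # arbitrary example data

# Same ten bytes as the GZIP_HEADER macro: ID1 ID2 CM FLG MTIME(4) XFL OS.
header = b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x02\xff"

# Raw DEFLATE stream with no zlib wrapper, matching the negative-window-bits mode.
compressor = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
body = compressor.compress(payload) + compressor.flush()

# Footer: CRC32 then length mod 2**32, little-endian, as WriteGzipFooter emits.
footer = struct.pack("<II", zlib.crc32(payload) & 0xFFFFFFFF, len(payload) & 0xFFFFFFFF)

framed = header + body + footer
assert gzip.decompress(framed) == payload  # round-trips through Python's gzip decoder
```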
diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/tool/utils/Poisson_blend_img.py b/spaces/oguzakif/video-object-remover/FGT_codes/tool/utils/Poisson_blend_img.py
deleted file mode 100644
index 5aa182e19d7c17c5136284ae390e478952217f38..0000000000000000000000000000000000000000
--- a/spaces/oguzakif/video-object-remover/FGT_codes/tool/utils/Poisson_blend_img.py
+++ /dev/null
@@ -1,314 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import scipy.ndimage
-from scipy.sparse.linalg import spsolve
-from scipy import sparse
-import scipy.io as sio
-import numpy as np
-from PIL import Image
-import copy
-import cv2
-import os
-import argparse
-
-
-def sub2ind(pi, pj, imgH, imgW):
- return pj + pi * imgW
-
-
-def Poisson_blend_img(imgTrg, imgSrc_gx, imgSrc_gy, holeMask, gradientMask=None, edge=None):
-
- imgH, imgW, nCh = imgTrg.shape
-
- if not isinstance(gradientMask, np.ndarray):
- gradientMask = np.zeros((imgH, imgW), dtype=np.float32)
-
- if not isinstance(edge, np.ndarray):
- edge = np.zeros((imgH, imgW), dtype=np.float32)
-
- # Initialize the reconstructed image
- imgRecon = np.zeros((imgH, imgW, nCh), dtype=np.float32)
-
- # prepare discrete Poisson equation
- A, b, UnfilledMask = solvePoisson(holeMask, imgSrc_gx, imgSrc_gy, imgTrg,
- gradientMask, edge)
-
- # Independently process each channel
- for ch in range(nCh):
-
- # solve Poisson equation
- x = scipy.sparse.linalg.lsqr(A, b[:, ch])[0]
-
- imgRecon[:, :, ch] = x.reshape(imgH, imgW)
-
- # Combine with the known region in the target
- holeMaskC = np.tile(np.expand_dims(holeMask, axis=2), (1, 1, nCh))
- imgBlend = holeMaskC * imgRecon + (1 - holeMaskC) * imgTrg
-
-
- # while((UnfilledMask * edge).sum() != 0):
- # # Fill in edge pixel
- # pi = np.expand_dims(np.where((UnfilledMask * edge) == 1)[0], axis=1) # y, i
- # pj = np.expand_dims(np.where((UnfilledMask * edge) == 1)[1], axis=1) # x, j
- #
- # for k in range(len(pi)):
- # if pi[k, 0] - 1 >= 0:
- # if (UnfilledMask * edge)[pi[k, 0] - 1, pj[k, 0]] == 0:
- # imgBlend[pi[k, 0], pj[k, 0], :] = imgBlend[pi[k, 0] - 1, pj[k, 0], :]
- # UnfilledMask[pi[k, 0], pj[k, 0]] = 0
- # continue
- # if pi[k, 0] + 1 <= imgH - 1:
- # if (UnfilledMask * edge)[pi[k, 0] + 1, pj[k, 0]] == 0:
- # imgBlend[pi[k, 0], pj[k, 0], :] = imgBlend[pi[k, 0] + 1, pj[k, 0], :]
- # UnfilledMask[pi[k, 0], pj[k, 0]] = 0
- # continue
- # if pj[k, 0] - 1 >= 0:
- # if (UnfilledMask * edge)[pi[k, 0], pj[k, 0] - 1] == 0:
- # imgBlend[pi[k, 0], pj[k, 0], :] = imgBlend[pi[k, 0], pj[k, 0] - 1, :]
- # UnfilledMask[pi[k, 0], pj[k, 0]] = 0
- # continue
- # if pj[k, 0] + 1 <= imgW - 1:
- # if (UnfilledMask * edge)[pi[k, 0], pj[k, 0] + 1] == 0:
- # imgBlend[pi[k, 0], pj[k, 0], :] = imgBlend[pi[k, 0], pj[k, 0] + 1, :]
- # UnfilledMask[pi[k, 0], pj[k, 0]] = 0
-
- return imgBlend, UnfilledMask
-
-def solvePoisson(holeMask, imgSrc_gx, imgSrc_gy, imgTrg,
- gradientMask, edge):
-
- # UnfilledMask indicates the region that is not completed
- UnfilledMask_topleft = copy.deepcopy(holeMask)
- UnfilledMask_bottomright = copy.deepcopy(holeMask)
-
- # Prepare the linear system of equations for Poisson blending
- imgH, imgW = holeMask.shape
- N = imgH * imgW
-
- # Number of unknown variables
- numUnknownPix = holeMask.sum()
-
- # 4-neighbors: dx and dy
- dx = [1, 0, -1, 0]
- dy = [0, 1, 0, -1]
-
- # 3
- # |
- # 2 -- * -- 0
- # |
- # 1
- #
-
- # Initialize (I, J, S), for sparse matrix A where A(I(k), J(k)) = S(k)
- I = np.empty((0, 1), dtype=np.float32)
- J = np.empty((0, 1), dtype=np.float32)
- S = np.empty((0, 1), dtype=np.float32)
-
- # Initialize b
- b = np.empty((0, 3), dtype=np.float32)
-
- # Precompute unknown pixel positions
- pi = np.expand_dims(np.where(holeMask == 1)[0], axis=1) # y, i
- pj = np.expand_dims(np.where(holeMask == 1)[1], axis=1) # x, j
- pind = sub2ind(pi, pj, imgH, imgW)
-
- # |--------------------|
- # | y (i) |
- # | x (j) * |
- # | |
- # |--------------------|
- # p[y, x]
-
- qi = np.concatenate((pi + dy[0],
- pi + dy[1],
- pi + dy[2],
- pi + dy[3]), axis=1)
-
- qj = np.concatenate((pj + dx[0],
- pj + dx[1],
- pj + dx[2],
- pj + dx[3]), axis=1)
-
- # Handling cases at image borders
- validN = (qi >= 0) & (qi <= imgH - 1) & (qj >= 0) & (qj <= imgW - 1)
- qind = np.zeros((validN.shape), dtype=np.float32)
- qind[validN] = sub2ind(qi[validN], qj[validN], imgH, imgW)
-
- e_start = 0 # equation counter start
- e_stop = 0 # equation stop
-
- # 4 neighbors
- I, J, S, b, e_start, e_stop = constructEquation(0, validN, holeMask, gradientMask, edge, imgSrc_gx, imgSrc_gy, imgTrg, pi, pj, pind, qi, qj, qind, I, J, S, b, e_start, e_stop)
- I, J, S, b, e_start, e_stop = constructEquation(1, validN, holeMask, gradientMask, edge, imgSrc_gx, imgSrc_gy, imgTrg, pi, pj, pind, qi, qj, qind, I, J, S, b, e_start, e_stop)
- I, J, S, b, e_start, e_stop = constructEquation(2, validN, holeMask, gradientMask, edge, imgSrc_gx, imgSrc_gy, imgTrg, pi, pj, pind, qi, qj, qind, I, J, S, b, e_start, e_stop)
- I, J, S, b, e_start, e_stop = constructEquation(3, validN, holeMask, gradientMask, edge, imgSrc_gx, imgSrc_gy, imgTrg, pi, pj, pind, qi, qj, qind, I, J, S, b, e_start, e_stop)
-
- nEqn = len(b)
- # Construct the sparse matrix A
- A = sparse.csr_matrix((S[:, 0], (I[:, 0], J[:, 0])), shape=(nEqn, N))
-
- # Check connected pixels
- for ind in range(0, len(pi), 1):
- ii = pi[ind, 0]
- jj = pj[ind, 0]
-
- # check up (3)
- if ii - 1 >= 0:
- if UnfilledMask_topleft[ii - 1, jj] == 0 and gradientMask[ii - 1, jj] == 0:
- UnfilledMask_topleft[ii, jj] = 0
- # check left (2)
- if jj - 1 >= 0:
- if UnfilledMask_topleft[ii, jj - 1] == 0 and gradientMask[ii, jj - 1] == 0:
- UnfilledMask_topleft[ii, jj] = 0
-
-
- for ind in range(len(pi) - 1, -1, -1):
- ii = pi[ind, 0]
- jj = pj[ind, 0]
-
- # check bottom (1)
- if ii + 1 <= imgH - 1:
- if UnfilledMask_bottomright[ii + 1, jj] == 0 and gradientMask[ii, jj] == 0:
- UnfilledMask_bottomright[ii, jj] = 0
- # check right (0)
- if jj + 1 <= imgW - 1:
- if UnfilledMask_bottomright[ii, jj + 1] == 0 and gradientMask[ii, jj] == 0:
- UnfilledMask_bottomright[ii, jj] = 0
-
- UnfilledMask = UnfilledMask_topleft * UnfilledMask_bottomright
-
- return A, b, UnfilledMask
-
-
-def constructEquation(n, validN, holeMask, gradientMask, edge, imgSrc_gx, imgSrc_gy, imgTrg, pi, pj, pind, qi, qj, qind, I, J, S, b, e_start, e_stop):
-
- # Pixel that has valid neighbors
- validNeighbor = validN[:, n]
-
- # Change the out-of-boundary value to 0, in order to run edge[y,x]
- # in the next line. It won't affect anything as validNeighbor is saved already
-
- qi_tmp = copy.deepcopy(qi)
- qj_tmp = copy.deepcopy(qj)
- qi_tmp[np.invert(validNeighbor), n] = 0
- qj_tmp[np.invert(validNeighbor), n] = 0
-
- NotEdge = (edge[pi[:, 0], pj[:, 0]] == 0) * (edge[qi_tmp[:, n], qj_tmp[:, n]] == 0)
-
- # Have gradient
- if n == 0:
- HaveGrad = gradientMask[pi[:, 0], pj[:, 0]] == 0
- elif n == 2:
- HaveGrad = gradientMask[pi[:, 0], pj[:, 0] - 1] == 0
- elif n == 1:
- HaveGrad = gradientMask[pi[:, 0], pj[:, 0]] == 0
- elif n == 3:
- HaveGrad = gradientMask[pi[:, 0] - 1, pj[:, 0]] == 0
-
- # Boundary constraint
- Boundary = holeMask[qi_tmp[:, n], qj_tmp[:, n]] == 0
-
- valid = validNeighbor * NotEdge * HaveGrad * Boundary
-
- J_tmp = pind[valid, :]
-
- # num of equations: len(J_tmp)
- e_stop = e_start + len(J_tmp)
- I_tmp = np.arange(e_start, e_stop, dtype=np.float32).reshape(-1, 1)
- e_start = e_stop
-
- S_tmp = np.ones(J_tmp.shape, dtype=np.float32)
-
- if n == 0:
- b_tmp = - imgSrc_gx[pi[valid, 0], pj[valid, 0], :] + imgTrg[qi[valid, n], qj[valid, n], :]
- elif n == 2:
- b_tmp = imgSrc_gx[pi[valid, 0], pj[valid, 0] - 1, :] + imgTrg[qi[valid, n], qj[valid, n], :]
- elif n == 1:
- b_tmp = - imgSrc_gy[pi[valid, 0], pj[valid, 0], :] + imgTrg[qi[valid, n], qj[valid, n], :]
- elif n == 3:
- b_tmp = imgSrc_gy[pi[valid, 0] - 1, pj[valid, 0], :] + imgTrg[qi[valid, n], qj[valid, n], :]
-
- I = np.concatenate((I, I_tmp))
- J = np.concatenate((J, J_tmp))
- S = np.concatenate((S, S_tmp))
- b = np.concatenate((b, b_tmp))
-
- # Non-boundary constraint
- NonBoundary = holeMask[qi_tmp[:, n], qj_tmp[:, n]] == 1
- valid = validNeighbor * NotEdge * HaveGrad * NonBoundary
-
- J_tmp = pind[valid, :]
-
- # num of equations: len(J_tmp)
- e_stop = e_start + len(J_tmp)
- I_tmp = np.arange(e_start, e_stop, dtype=np.float32).reshape(-1, 1)
- e_start = e_stop
-
- S_tmp = np.ones(J_tmp.shape, dtype=np.float32)
-
- if n == 0:
- b_tmp = - imgSrc_gx[pi[valid, 0], pj[valid, 0], :]
- elif n == 2:
- b_tmp = imgSrc_gx[pi[valid, 0], pj[valid, 0] - 1, :]
- elif n == 1:
- b_tmp = - imgSrc_gy[pi[valid, 0], pj[valid, 0], :]
- elif n == 3:
- b_tmp = imgSrc_gy[pi[valid, 0] - 1, pj[valid, 0], :]
-
- I = np.concatenate((I, I_tmp))
- J = np.concatenate((J, J_tmp))
- S = np.concatenate((S, S_tmp))
- b = np.concatenate((b, b_tmp))
-
- S_tmp = - np.ones(J_tmp.shape, dtype=np.float32)
- J_tmp = qind[valid, n, None]
-
- I = np.concatenate((I, I_tmp))
- J = np.concatenate((J, J_tmp))
- S = np.concatenate((S, S_tmp))
-
- return I, J, S, b, e_start, e_stop
-
-
-def getUnfilledMask(holeMask, gradientMask):
- # UnfilledMask indicates the region that is not completed
- UnfilledMask_topleft = copy.deepcopy(holeMask)
- UnfilledMask_bottomright = copy.deepcopy(holeMask)
-
- # Get the shape information of the mask
- imgH, imgW = holeMask.shape
-
- # Precompute the unknown pixel position
- pi = np.expand_dims(np.where(holeMask == 1)[0], axis=1)
- pj = np.expand_dims(np.where(holeMask == 1)[1], axis=1)
-
- # Check connected pixels
- for ind in range(0, len(pi), 1):
- ii = pi[ind, 0]
- jj = pj[ind, 0]
-
- # check up (3)
- if ii - 1 >= 0:
- if UnfilledMask_topleft[ii - 1, jj] == 0 and gradientMask[ii - 1, jj] == 0:
- UnfilledMask_topleft[ii, jj] = 0
- # check left (2)
- if jj - 1 >= 0:
- if UnfilledMask_topleft[ii, jj - 1] == 0 and gradientMask[ii, jj - 1] == 0:
- UnfilledMask_topleft[ii, jj] = 0
-
- for ind in range(len(pi) - 1, -1, -1):
- ii = pi[ind, 0]
- jj = pj[ind, 0]
-
- # check bottom (1)
- if ii + 1 <= imgH - 1:
- if UnfilledMask_bottomright[ii + 1, jj] == 0 and gradientMask[ii, jj] == 0:
- UnfilledMask_bottomright[ii, jj] = 0
- # check right (0)
- if jj + 1 <= imgW - 1:
- if UnfilledMask_bottomright[ii, jj + 1] == 0 and gradientMask[ii, jj] == 0:
- UnfilledMask_bottomright[ii, jj] = 0
-
- UnfilledMask = UnfilledMask_topleft * UnfilledMask_bottomright
-
- return UnfilledMask
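The heart of `solvePoisson` above is a simple pattern: collect (I, J, S) triplets, assemble a SciPy CSR matrix, and solve each channel with a sparse least-squares call. Below is a toy sketch of that pattern with made-up values (not the actual blending equations):
```
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Hypothetical triplets for a tiny 3-unknown system (stand-ins for I, J, S above).
rows = np.array([0, 0, 1, 2])
cols = np.array([0, 1, 1, 2])
vals = np.array([1.0, -1.0, 1.0, 1.0])
b = np.array([2.0, 3.0, 5.0])

# A[rows[k], cols[k]] = vals[k]; duplicate entries would be summed, as in the blending code.
A = sparse.csr_matrix((vals, (rows, cols)), shape=(3, 3))

x = lsqr(A, b)[0]  # lsqr returns a tuple; element 0 is the least-squares solution
print(x)           # approximately [5., 3., 5.]
```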
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/midas_net.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/midas_net.py
deleted file mode 100644
index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/midas/midas/midas_net.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""MidashNet: Network for monocular depth estimation trained by mixing several datasets.
-This file contains code that is adapted from
-https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
-"""
-import torch
-import torch.nn as nn
-
-from .base_model import BaseModel
-from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
-
-
-class MidasNet(BaseModel):
- """Network for monocular depth estimation.
- """
-
- def __init__(self, path=None, features=256, non_negative=True):
- """Init.
-
- Args:
- path (str, optional): Path to saved model. Defaults to None.
- features (int, optional): Number of features. Defaults to 256.
- backbone (str, optional): Backbone network for encoder. Defaults to resnet50
- """
- print("Loading weights: ", path)
-
- super(MidasNet, self).__init__()
-
- use_pretrained = False if path is None else True
-
- self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
-
- self.scratch.refinenet4 = FeatureFusionBlock(features)
- self.scratch.refinenet3 = FeatureFusionBlock(features)
- self.scratch.refinenet2 = FeatureFusionBlock(features)
- self.scratch.refinenet1 = FeatureFusionBlock(features)
-
- self.scratch.output_conv = nn.Sequential(
- nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear"),
- nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- )
-
- if path:
- self.load(path)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input data (image)
-
- Returns:
- tensor: depth
- """
-
- layer_1 = self.pretrained.layer1(x)
- layer_2 = self.pretrained.layer2(layer_1)
- layer_3 = self.pretrained.layer3(layer_2)
- layer_4 = self.pretrained.layer4(layer_3)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return torch.squeeze(out, dim=1)
diff --git a/spaces/padmanabhbosamia/Pascal/grad_cam_func.py b/spaces/padmanabhbosamia/Pascal/grad_cam_func.py
deleted file mode 100644
index 4645d374f81c1e59d35462544d4cdd906f130d4e..0000000000000000000000000000000000000000
--- a/spaces/padmanabhbosamia/Pascal/grad_cam_func.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import numpy as np
-import torch
-import ttach as tta
-from typing import Callable, List, Tuple
-from pytorch_grad_cam.activations_and_gradients import ActivationsAndGradients
-from pytorch_grad_cam.utils.svd_on_activations import get_2d_projection
-from pytorch_grad_cam.utils.image import scale_cam_image
-from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
-import pandas as pd
-
-import config as config
-import utils
-
-class BaseCAM:
- def __init__(self,
- model: torch.nn.Module,
- target_layers: List[torch.nn.Module],
- use_cuda: bool = False,
- reshape_transform: Callable = None,
- compute_input_gradient: bool = False,
- uses_gradients: bool = True) -> None:
-
- self.model = model.eval()
- self.target_layers = target_layers
- self.cuda = use_cuda
- if self.cuda:
- self.model = model.cuda()
- self.reshape_transform = reshape_transform
- self.compute_input_gradient = compute_input_gradient
- self.uses_gradients = uses_gradients
- self.activations_and_grads = ActivationsAndGradients(
- self.model, target_layers, reshape_transform)
-
- """ Get a vector of weights for every channel in the target layer.
- Methods that return per-channel weights
- will typically only need to implement this function. """
-
- def get_cam_image(self,
- input_tensor: torch.Tensor,
- target_layer: torch.nn.Module,
- targets: List[torch.nn.Module],
- activations: torch.Tensor,
- grads: torch.Tensor,
- eigen_smooth: bool = False) -> np.ndarray:
-
- return get_2d_projection(activations)
-
- def forward(self,
- input_tensor: torch.Tensor,
- targets: List[torch.nn.Module],
- eigen_smooth: bool = False) -> np.ndarray:
-
- if self.cuda:
- input_tensor = input_tensor.cuda()
-
- if self.compute_input_gradient:
- input_tensor = torch.autograd.Variable(input_tensor,
- requires_grad=True)
-
- outputs = self.activations_and_grads(input_tensor)
-
- if targets is None:
- bboxes = [[] for _ in range(1)]
- for i in range(3):
- batch_size, A, S, _, _ = outputs[i].shape
- anchor = config.SCALED_ANCHORS[i]
- boxes_scale_i = utils.cells_to_bboxes(
- outputs[i], anchor, S=S, is_preds=True
- )
- for idx, (box) in enumerate(boxes_scale_i):
- bboxes[idx] += box
-
- nms_boxes = utils.non_max_suppression(
- bboxes[0], iou_threshold=0.5, threshold=0.4, box_format="midpoint",
- )
- # target_categories = np.argmax(outputs.cpu().data.numpy(), axis=-1)
- target_categories = [box[0] for box in nms_boxes]
- targets = [ClassifierOutputTarget(
- category) for category in target_categories]
-
-
- if self.uses_gradients:
- self.model.zero_grad()
- loss = sum([target(output)
- for target, output in zip(targets, outputs)])
- loss.backward(retain_graph=True)
-
- # In most of the saliency attribution papers, the saliency is
- # computed with a single target layer.
- # Commonly it is the last convolutional layer.
- # Here we support passing a list with multiple target layers.
- # It will compute the saliency image for every image,
- # and then aggregate them (with a default mean aggregation).
- # This gives you more flexibility in case you just want to
- # use all conv layers for example, all Batchnorm layers,
- # or something else.
-
- cam_per_layer = self.compute_cam_per_layer(input_tensor,
- targets,
- eigen_smooth)
- return self.aggregate_multi_layers(cam_per_layer)
-
- def get_target_width_height(self,
- input_tensor: torch.Tensor) -> Tuple[int, int]:
- width, height = input_tensor.size(-1), input_tensor.size(-2)
- return width, height
-
- def compute_cam_per_layer(
- self,
- input_tensor: torch.Tensor,
- targets: List[torch.nn.Module],
- eigen_smooth: bool) -> np.ndarray:
-
- activations_list = [a.cpu().data.numpy()
- for a in self.activations_and_grads.activations]
- grads_list = [g.cpu().data.numpy()
- for g in self.activations_and_grads.gradients]
- target_size = self.get_target_width_height(input_tensor)
-
- cam_per_target_layer = []
- # Loop over the saliency image from every layer
- for i in range(len(self.target_layers)):
- target_layer = self.target_layers[i]
- layer_activations = None
- layer_grads = None
- if i < len(activations_list):
- layer_activations = activations_list[i]
- if i < len(grads_list):
- layer_grads = grads_list[i]
-
- cam = self.get_cam_image(input_tensor,
- target_layer,
- targets,
- layer_activations,
- layer_grads,
- eigen_smooth)
- cam = np.maximum(cam, 0)
- scaled = scale_cam_image(cam, target_size)
- cam_per_target_layer.append(scaled[:, None, :])
-
- return cam_per_target_layer
-
- def aggregate_multi_layers(
- self,
- cam_per_target_layer: np.ndarray) -> np.ndarray:
- cam_per_target_layer = np.concatenate(cam_per_target_layer, axis=1)
- cam_per_target_layer = np.maximum(cam_per_target_layer, 0)
- result = np.mean(cam_per_target_layer, axis=1)
-
- return scale_cam_image(result)
\ No newline at end of file
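The last two methods above clip, average, and rescale the per-layer CAMs. Below is a plain-NumPy paraphrase of that aggregation (it omits the resizing that `scale_cam_image` also performs); the array shapes follow how `compute_cam_per_layer` stacks its results, and the random inputs are just stand-ins.
```
import numpy as np

def aggregate_cams(cam_per_layer):
    """Combine per-layer CAMs roughly the way aggregate_multi_layers does."""
    stacked = np.concatenate(cam_per_layer, axis=1)  # (batch, n_layers, H, W)
    stacked = np.maximum(stacked, 0)                 # drop negative attributions
    result = stacked.mean(axis=1)                    # average across target layers
    # Min-max normalize each map to [0, 1]; the non-resizing part of scale_cam_image.
    flat = result.reshape(len(result), -1)
    mins = flat.min(axis=1)[:, None, None]
    maxs = flat.max(axis=1)[:, None, None]
    return (result - mins) / (maxs - mins + 1e-7)

# Two fake 4x4 CAMs for a batch of one image.
cams = [np.random.rand(1, 1, 4, 4).astype(np.float32) for _ in range(2)]
heatmap = aggregate_cams(cams)
print(heatmap.shape)  # (1, 4, 4), values in [0, 1]
```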
diff --git a/spaces/parkyzh/bingo/src/components/chat-suggestions.tsx b/spaces/parkyzh/bingo/src/components/chat-suggestions.tsx
deleted file mode 100644
index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000
--- a/spaces/parkyzh/bingo/src/components/chat-suggestions.tsx
+++ /dev/null
@@ -1,45 +0,0 @@
-import React, { useMemo } from 'react'
-import Image from 'next/image'
-import HelpIcon from '@/assets/images/help.svg'
-import { SuggestedResponse } from '@/lib/bots/bing/types'
-import { useBing } from '@/lib/hooks/use-bing'
-import { atom, useAtom } from 'jotai'
-
-type Suggestions = SuggestedResponse[]
-const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text }))
-const suggestionsAtom = atom<Suggestions>([])
-
-type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick<ReturnType<typeof useBing>, 'setInput'> & { suggestions?: Suggestions }
-
-export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) {
- const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom)
- const toggleSuggestions = (() => {
- if (currentSuggestions === helpSuggestions) {
- setSuggestions(suggestions)
- } else {
- setSuggestions(helpSuggestions)
- }
- })
-
- useMemo(() => {
- setSuggestions(suggestions)
- window.scrollBy(0, 2000)
- }, [suggestions.length])
-
- return currentSuggestions?.length ? (
-
- ) : (
- <>>
- );
-};
-
-export const TotalTokenCostToggle = () => {
- const { t } = useTranslation('main');
-
- const setCountTotalTokens = useStore((state) => state.setCountTotalTokens);
-
- const [isChecked, setIsChecked] = useState(
- useStore.getState().countTotalTokens
- );
-
- useEffect(() => {
- setCountTotalTokens(isChecked);
- }, [isChecked]);
-
- return (
-
- );
-};
-
-export const TotalTokenCostDisplay = () => {
- const { t } = useTranslation();
- const totalTokenUsed = useStore((state) => state.totalTokenUsed);
-
- const [totalCost, setTotalCost] = useState(0);
-
- useEffect(() => {
- let updatedTotalCost = 0;
-
- Object.entries(totalTokenUsed).forEach(([model, tokenCost]) => {
- updatedTotalCost += tokenCostToCost(tokenCost, model as ModelOptions);
- });
-
- setTotalCost(updatedTotalCost);
- }, [totalTokenUsed]);
-
- return (
-
-
- {`USD ${totalCost.toPrecision(3)}`}
-
- );
-};
-
-export default TotalTokenCost;
diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_stream.c b/spaces/prerna9811/Chord/portaudio/src/common/pa_stream.c
deleted file mode 100644
index ffbf5303237322a2ab45f23ba502c659af787c42..0000000000000000000000000000000000000000
--- a/spaces/prerna9811/Chord/portaudio/src/common/pa_stream.c
+++ /dev/null
@@ -1,150 +0,0 @@
-/*
- * $Id$
- * Portable Audio I/O Library
- * stream interface
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 2008 Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup common_src
-
- @brief Stream interfaces, representation structures and helper functions
- used to interface between pa_front.c host API implementations.
-*/
-
-
-#include "pa_stream.h"
-
-
-void PaUtil_InitializeStreamInterface( PaUtilStreamInterface *streamInterface,
- PaError (*Close)( PaStream* ),
- PaError (*Start)( PaStream* ),
- PaError (*Stop)( PaStream* ),
- PaError (*Abort)( PaStream* ),
- PaError (*IsStopped)( PaStream* ),
- PaError (*IsActive)( PaStream* ),
- PaTime (*GetTime)( PaStream* ),
- double (*GetCpuLoad)( PaStream* ),
- PaError (*Read)( PaStream*, void *, unsigned long ),
- PaError (*Write)( PaStream*, const void *, unsigned long ),
- signed long (*GetReadAvailable)( PaStream* ),
- signed long (*GetWriteAvailable)( PaStream* ) )
-{
- streamInterface->Close = Close;
- streamInterface->Start = Start;
- streamInterface->Stop = Stop;
- streamInterface->Abort = Abort;
- streamInterface->IsStopped = IsStopped;
- streamInterface->IsActive = IsActive;
- streamInterface->GetTime = GetTime;
- streamInterface->GetCpuLoad = GetCpuLoad;
- streamInterface->Read = Read;
- streamInterface->Write = Write;
- streamInterface->GetReadAvailable = GetReadAvailable;
- streamInterface->GetWriteAvailable = GetWriteAvailable;
-}
-
-
-void PaUtil_InitializeStreamRepresentation( PaUtilStreamRepresentation *streamRepresentation,
- PaUtilStreamInterface *streamInterface,
- PaStreamCallback *streamCallback,
- void *userData )
-{
- streamRepresentation->magic = PA_STREAM_MAGIC;
- streamRepresentation->nextOpenStream = 0;
- streamRepresentation->streamInterface = streamInterface;
- streamRepresentation->streamCallback = streamCallback;
- streamRepresentation->streamFinishedCallback = 0;
-
- streamRepresentation->userData = userData;
-
- streamRepresentation->streamInfo.inputLatency = 0.;
- streamRepresentation->streamInfo.outputLatency = 0.;
- streamRepresentation->streamInfo.sampleRate = 0.;
-}
-
-
-void PaUtil_TerminateStreamRepresentation( PaUtilStreamRepresentation *streamRepresentation )
-{
- streamRepresentation->magic = 0;
-}
-
-
-PaError PaUtil_DummyRead( PaStream* stream,
- void *buffer,
- unsigned long frames )
-{
- (void)stream; /* unused parameter */
- (void)buffer; /* unused parameter */
- (void)frames; /* unused parameter */
-
- return paCanNotReadFromACallbackStream;
-}
-
-
-PaError PaUtil_DummyWrite( PaStream* stream,
- const void *buffer,
- unsigned long frames )
-{
- (void)stream; /* unused parameter */
- (void)buffer; /* unused parameter */
- (void)frames; /* unused parameter */
-
- return paCanNotWriteToACallbackStream;
-}
-
-
-signed long PaUtil_DummyGetReadAvailable( PaStream* stream )
-{
- (void)stream; /* unused parameter */
-
- return paCanNotReadFromACallbackStream;
-}
-
-
-signed long PaUtil_DummyGetWriteAvailable( PaStream* stream )
-{
- (void)stream; /* unused parameter */
-
- return paCanNotWriteToACallbackStream;
-}
-
-
-double PaUtil_DummyGetCpuLoad( PaStream* stream )
-{
- (void)stream; /* unused parameter */
-
- return 0.0;
-}
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py
deleted file mode 100644
index 67d7fe1c020b1924c3083ea13925b317e73a8488..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py
+++ /dev/null
@@ -1,17634 +0,0 @@
-# The contents of this file are automatically written by
-# tools/generate_schema_wrapper.py. Do not modify directly.
-
-import sys
-from . import core
-import pandas as pd
-from altair.utils.schemapi import Undefined, with_property_setters
-from altair.utils import parse_shorthand
-from typing import overload, List
-
-from typing import Literal
-
-
-class FieldChannelMixin:
-    def to_dict(self, validate=True, ignore=(), context=None):
-        context = context or {}
-        shorthand = self._get('shorthand')
-        field = self._get('field')
-
-        if shorthand is not Undefined and field is not Undefined:
-            raise ValueError("{} specifies both shorthand={} and field={}. "
-                             "".format(self.__class__.__name__, shorthand, field))
-
-        if isinstance(shorthand, (tuple, list)):
-            # If given a list of shorthands, then transform it to a list of classes
-            kwds = self._kwds.copy()
-            kwds.pop('shorthand')
-            return [self.__class__(sh, **kwds).to_dict(validate=validate, ignore=ignore, context=context)
-                    for sh in shorthand]
-
-        if shorthand is Undefined:
-            parsed = {}
-        elif isinstance(shorthand, str):
-            parsed = parse_shorthand(shorthand, data=context.get('data', None))
-            type_required = 'type' in self._kwds
-            type_in_shorthand = 'type' in parsed
-            type_defined_explicitly = self._get('type') is not Undefined
-            if not type_required:
-                # Secondary field names don't require a type argument in VegaLite 3+.
-                # We still parse it out of the shorthand, but drop it here.
-                parsed.pop('type', None)
-            elif not (type_in_shorthand or type_defined_explicitly):
-                if isinstance(context.get('data', None), pd.DataFrame):
-                    raise ValueError(
-                        'Unable to determine data type for the field "{}";'
-                        " verify that the field name is not misspelled."
-                        " If you are referencing a field from a transform,"
-                        " also confirm that the data type is specified correctly.".format(shorthand)
-                    )
-                else:
-                    raise ValueError("{} encoding field is specified without a type; "
-                                     "the type cannot be automatically inferred because "
-                                     "the data is not specified as a pandas.DataFrame."
-                                     "".format(shorthand))
-        else:
-            # Shorthand is not a string; we pass the definition to field,
-            # and do not do any parsing.
-            parsed = {'field': shorthand}
-        context["parsed_shorthand"] = parsed
-
-        return super(FieldChannelMixin, self).to_dict(
-            validate=validate,
-            ignore=ignore,
-            context=context
-        )
-
-
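The shorthand handling in FieldChannelMixin.to_dict above is what resolves encodings written as "column:Q" strings. A minimal illustrative sketch of how that path is reached through the public Altair API; the DataFrame and column names are invented for this example and are not part of the repository:

import altair as alt
import pandas as pd

# Invented example data; "direction" and "speed" are illustrative column names.
df = pd.DataFrame({"direction": [0, 90, 180], "speed": [1.2, 2.5, 0.7]})

chart = alt.Chart(df).mark_point().encode(
    x="speed:Q",                     # shorthand string: field name plus a type code
    angle=alt.Angle("direction:Q"),  # resolved by parse_shorthand into field + type
)

# Serializing the chart runs FieldChannelMixin.to_dict, which expands the shorthand.
print(chart.to_dict()["encoding"]["angle"])

The printed encoding should show the resolved field name and the inferred "quantitative" type, mirroring the parse_shorthand call in the mixin.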
-class ValueChannelMixin:
-    def to_dict(self, validate=True, ignore=(), context=None):
-        context = context or {}
-        condition = self._get('condition', Undefined)
-        copy = self  # don't copy unless we need to
-        if condition is not Undefined:
-            if isinstance(condition, core.SchemaBase):
-                pass
-            elif 'field' in condition and 'type' not in condition:
-                kwds = parse_shorthand(condition['field'], context.get('data', None))
-                copy = self.copy(deep=['condition'])
-                copy['condition'].update(kwds)
-        return super(ValueChannelMixin, copy).to_dict(validate=validate,
-                                                      ignore=ignore,
-                                                      context=context)
-
-
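ValueChannelMixin backs the *Value channel classes, where a channel is set to a constant value, optionally guarded by a condition. A short illustrative sketch; the data and the predicate are invented for the example:

import altair as alt
import pandas as pd

# Invented example data; the "flag" column exists only for illustration.
df = pd.DataFrame({"x": [1, 2, 3], "flag": [True, False, True]})

chart = alt.Chart(df).mark_bar().encode(
    x="x:O",
    # alt.condition builds the condition/value structure that a *Value channel
    # (using ValueChannelMixin) serializes in to_dict.
    color=alt.condition("datum.flag", alt.value("steelblue"), alt.value("lightgray")),
)
print(chart.to_dict()["encoding"]["color"])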
-class DatumChannelMixin:
-    def to_dict(self, validate=True, ignore=(), context=None):
-        context = context or {}
-        datum = self._get('datum', Undefined)
-        copy = self  # don't copy unless we need to
-        if datum is not Undefined:
-            if isinstance(datum, core.SchemaBase):
-                pass
-        return super(DatumChannelMixin, copy).to_dict(validate=validate,
-                                                      ignore=ignore,
-                                                      context=context)
-
-
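DatumChannelMixin backs the *Datum channel classes, which encode a literal value through the channel's scale rather than a data field. An illustrative sketch; the constant 2.5 and the data are invented for the example:

import altair as alt
import pandas as pd

# Invented example data.
df = pd.DataFrame({"day": [1, 2, 3], "value": [4.0, 1.5, 3.2]})

# YDatum pins the rule at y=2.5 in data coordinates without referencing a column;
# its serialization goes through DatumChannelMixin.
threshold = alt.Chart(df).mark_rule(color="red").encode(y=alt.YDatum(2.5))
print(threshold.to_dict()["encoding"]["y"])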
-@with_property_setters
-class Angle(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber):
- """Angle schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field. or `a temporal field that gets casted as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit